

TCS+ | Angus Hay on Africa Data Centres' big Samrand expansion

In this episode of TechCentral’s business technology show TCS+, Africa Data Centres regional executive for South Africa Angus Hay discusses the significant upgrades taking place at the company’s Samrand facility in Gauteng. Africa Data Centres is in the midst of a major upgrade cycle at the facility, known as JHB2. The colocation facility, which is one of the few tier-4 data centres in Africa, was originally designed to handle 10MW of IT load. When the upgrades are completed in 2025, the facility will house an additional 20MW. In this informative discussion, Hay delves into:

  • The drivers behind the upgrades at Samrand and how “hyperscalers” will benefit from them;
  • The security standards at the facility and the implications for highly sensitive clients such as the financial sector;
  • The energy redundancies that ensure continuous operations at the facility, even in the event of grid collapse;
  • Innovations Africa Data Centres uses to manage the efficient use of energy at the facility; and
  • Initiatives to minimise Africa Data Centres’ carbon footprint by investing in renewable energy sources.

Don’t miss this lively discussion in which Hay provides a behind-the-scenes peek into the inner workings of state-of-the-art data centres, the powerhouses of the modern internet.

Duration:
32m
Broadcast on:
06 Aug 2024
Audio Format:
aac

[MUSIC PLAYING] Data centres are at the core of the internet infrastructure driving the world's modern digital economy. On this exciting episode of TCS+, TechCentral is on site at the Africa Data Centres JHB2 facility in Samrand. This facility, when it was built, was one of the largest on the continent. But today it is undergoing further expansion. To tell us more about that, I have Dr Angus Hay, regional executive for South Africa, speaking to us today. Dr Hay, welcome to TCS+.

Thank you. Thank you.

Dr Hay, as mentioned, this facility was one of the largest on the continent when it was built. But it is being expanded. Why is that?

So we acquired this facility some years back. It's been known in the industry for some time as one of the flagship facilities on the continent, as you mentioned, and also one of the largest. We've moved into an era where we've moved away from private data centres, which the Samrand facility used to be, into colocation data centres, and we're one of the largest providers of colocation data centres across the African continent. So we are converting the Samrand facility from a highly secure environment that provided a secure, highly available environment for a single customer into a colocation facility, which provides for multiple customers. More importantly, we're scaling it up into what we call a hyperscale facility, and hyperscale is what houses what we recognise today as the global cloud.

Yes. Now, building this facility larger means you are going to have to use more resources to run it, and we do know that we have power challenges in the country. So how do you ensure that this facility can run despite power outages?

One of the key elements of running data centres, and it should be obvious, is that the customers moving into a data centre are major customers: your hyperscale cloud providers, major financial institutions (in Samrand's case, that's literally the case), as well as multinational enterprises looking for colocation. If you think about why you would really move into colocation, firstly you're looking for high availability. So we offer 100% availability of power: not 99.999%, but 100%. We will literally sign an agreement at 100% availability. The second thing they're looking for is security, and we can talk a little more about the security on that site. The third thing they're looking for is connectivity, being part of the internet. As we've moved into expanding this facility, we're addressing all of those.

The really important thing about Samrand is that when we acquired the site, it already had a power allocation. It's within the City of Tshwane, and the City of Tshwane has a substation that has already allocated power to the site. So as we stand today, that site has up to 40 to 50 megawatts of power available from the grid, and that's the power we'll be using for the expansion. If you're looking for criteria for choosing a data centre, the first thing you would look for is power; obviously there are a number of other criteria too. So our ability to grow is based on the availability of power. But let me address your specific question, because what often sits behind it is people's perceptions of power availability in South Africa.
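For context on those availability figures, here is a quick, illustrative Python sketch (an editorial addition, not from the interview) of how much annual downtime each level of "nines" permits:

```python
# Downtime allowed per year at each availability level, versus the
# 100% figure quoted above. Purely illustrative arithmetic.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999, 1.0):
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} availability -> "
          f"{downtime_min:7.1f} min of downtime/year")
```

At "five nines" (99.999%), a customer can still lose around five minutes a year; the 100% contract Hay describes allows none, which is why the on-site redundancy discussed later matters.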
So first things first: we obviously have a much larger plan for ensuring power availability, and that has to do with the changes that have happened in the electricity industry in South Africa. Part of the problem in South Africa is not just that we have issues with the availability of power; the power we do have is generally non-renewable. About 80% to 85% of the power on the national grid is currently generated from non-renewable sources. But we've had a massive opening up of that industry, and with it the ability to start generating new electricity from renewable sources. The model for that is what we call wheeling. We've signed an independent power producer agreement, a power purchase agreement with somebody that is generating, and we're working with that independent power producer to generate our own electricity that we will consume on our own site. Because the site is physically not in a location where you can generate electricity, we're building that power plant, a solar PV plant, in the Free State, and we've entered into wheeling agreements that will wheel that energy across the Eskom grid into the City of Tshwane and into our facility. Towards the end of this year, that power will come on stream. We're going to move from having 10% to 15% renewable energy to adding about another 30% of renewable energy into the mix, and over time we have a plan to grow that even further. So the Samrand facility, as much as it has high availability of power, is also going to have a significant and growing percentage of renewable energy. All of that is in place. And then the last thing, which is perhaps less well known: that particular facility is in an area that does not see load shedding, because it's on a part of the grid that is not load shed. The combination of high availability of power and renewable energy means we have a facility that is really optimal for anybody looking for 100% availability of power. We can talk more about how we achieve 100% availability even if there's a power outage, but that's the principle.

That's actually what I'd like to lead into. Even though you have grid power available, you also have means to ensure you have power when there is no grid power. What have you done to ensure that?

All data centres that are going to meet the standards required by any kind of global customer need to meet a minimum requirement for availability of power. To achieve 100% availability, you need to build resiliency and redundancy of available power on the site. You assume there is a mains failure and ask: what available power do you have that can guarantee you continue to operate 100% of the time? What you do, effectively, is provide on-site generation, and our on-site generation is typically diesel generators; that's the standard approach taken globally for backup power. Those diesel generators, however, are in a configuration that guarantees you can operate continuously. We talk about tiering in data centres, and the current construction at the Samrand facility is what we call a tier-4 environment. Tier 4 means that, effectively, if we need three generators, we've got six. We have what we call a 2N environment: there is literally double the amount of equipment on the site that you need to operate. What that means is that you're completely resilient to any single failure. You can lose any one part of that equipment and still operate continuously.
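To make the tiering arithmetic concrete, here is a small hypothetical sketch (an illustration with assumed generator sizes, not ADC's tooling) of the 2N scheme just described and the N+1 scheme Hay turns to next:

```python
import math

def generators_needed(it_load_mw: float, unit_capacity_mw: float,
                      scheme: str) -> int:
    """Generators to install for a given IT load and redundancy scheme.

    N is the minimum number of units that can carry the load; 2N installs
    a full duplicate set, while N+1 adds a single spare unit.
    """
    n = math.ceil(it_load_mw / unit_capacity_mw)
    if scheme == "2N":
        return 2 * n          # tier-4 style: a complete second set
    if scheme == "N+1":
        return n + 1          # tier-3 style: concurrent maintainability
    raise ValueError(f"unknown scheme: {scheme}")

# Hay's example: a load that needs three generators
# (here assumed as a 6MW load on 2MW units)
print(generators_needed(6, 2, "2N"))   # -> 6 generators
print(generators_needed(6, 2, "N+1"))  # -> 4 generators
```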
In general, if you look at our data centres across the continent, our minimum standard is what we call tier 3, and many of our future builds will be built to tier 3. Tier 3 is what we call concurrent maintainability. That's where, for example, if you need three generators, you have a fourth spare one rather than six. At tier 3 you can still achieve 100% availability, because you effectively have one spare and can always cycle through maintenance and so on. And that's exactly how we operate. So as much as we have 100% availability of power from the grid, we also effectively have 100% availability of power on the site. We can operate the site autonomously, even if there were a complete shutdown of the grid.

OK. But having access to power is one thing; how you use that power also speaks to how sustainable your data centres are. What do you do to use power more efficiently?

A lot of the focus within data centres over the last few years has been on one particular parameter, called power usage effectiveness, or PUE. Think about what goes into a data centre: you're going to house a lot of racks of equipment. We house customers' equipment; we're not an IT company, we're the guys who do the building, and as I've said, we provide 100% availability of that building. Within the building itself, there are really two things consuming power. One is the racks; the second is all the cooling and humidity systems that maintain the environment. The way we calculate PUE is to take the amount of power going into the racks and compare it with the total amount of power going into the facility. For example, if the IT equipment is using, say, one megawatt, and the cooling is using, say, 300 kilowatts on top of that, that would be a PUE of 1.3. Within our facilities we look to optimise that parameter, and we do that by focusing on the cooling systems.

There are a number of things we do within the environments to optimise cooling. We do what we call containment. If you cool down the whole room, like a fridge, you have to cool all the air in the room. If you put a container, a box, around the aisle in which you're operating the equipment, you can contain that air and improve the efficiency. We do exactly that: cold-aisle containment in the current Samrand facility, where we pipe the air into the racks. That's step one, the basics of running data centres, and that's been the experience over the last few years. The existing Samrand facility is built in such a way that we can do cold-aisle containment, and we optimise by doing that within the data halls. As we expand the facility, we're getting much smarter; the technology has moved on significantly in the years since the original facility was built.
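The PUE arithmetic above, in executable form (a minimal sketch using the interview's own 1MW/300kW example):

```python
def pue(it_load_kw: float, overhead_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return (it_load_kw + overhead_kw) / it_load_kw

# 1MW of IT load plus 300kW of cooling and other overhead, as in the interview
print(pue(1_000, 300))  # -> 1.3; an ideal facility approaches 1.0
```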
So as we build out the new shell on the Samrand facility (and I can explain what we're doing there), we're adding a whole lot of new technologies. The first thing is that we're starting to use what is called free cooling. Think about the chiller systems: we're sitting here on a winter's day, and if you go outside right now, it's not that warm. As soon as the temperature is below about 17 degrees, we don't have to run the refrigeration; we can use the cold air outside to cool the chiller systems, with a closed-loop system to pump the cooling into the environment. Effectively, we don't have to be running the refrigerator, and for about 180 days a year we can get some kind of free cooling. At night in winter, it's as if you could turn the fridge off and open the door: to keep stuff cold in the fridge at night, you could just put it outside. It's not quite so simple, but that's effectively what we're doing, just a little more technically. So for about half the year we get some percentage of free cooling. And free cooling costs you nothing; it's not taking any kilowatt-hours of energy, so the result is a lower PUE.

The second thing we do is a lot of smarter things in the actual cooling environment. Obviously we do containment, and we add a lot more control. There are two really smart things we're doing right now: one has to do with mechanical engineering, and the other with AI. On the mechanical engineering side, when we start to talk hyperscale (I mentioned that the expansion we're building is really a hyperscale facility): hyperscale is a general term people use in the data centre industry. We think of it in terms of your global providers, the guys building those massive data centres and putting the AI stuff in them, but hyperscale for us is really anything from about two megawatts up. And just for the benefit of people who are not used to sizing data centres: we used to measure data centres in square metres and in racks; nowadays we tend to count megawatts. One kettle is about one kilowatt, so a thousand kettles is a megawatt. The expansion we're doing at Samrand is 20 megawatts.

So, 20,000 kettles. It's a suburb.

Yes, it's effectively a suburb. But the important thing is that when you build a hyperscale hall, you put lots and lots of high-powered racks into it; that's typically what your cloud providers do, and it's increasingly what financial services are doing. How are you going to make sure that's optimal? One of the effective ways is to build the whole system around a higher density of cooling, and we do that with a fairly radical change in the design.
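A rough back-of-the-envelope sketch of what those roughly 180 free-cooling days can be worth. The chiller power and the saving fraction below are invented for illustration; only the 180 days comes from the interview:

```python
CHILLER_POWER_MW = 2.0     # assumed chiller plant draw on refrigeration
FREE_COOLING_DAYS = 180    # from the interview: about half the year
SAVING_FRACTION = 0.8      # assumed cut in chiller power while free cooling

saved_mwh = CHILLER_POWER_MW * FREE_COOLING_DAYS * 24 * SAVING_FRACTION
print(f"~{saved_mwh:,.0f} MWh/year of chiller energy avoided")  # ~6,912 MWh
```

Because that cooling energy is never drawn at all, it feeds directly into a lower PUE.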
In a typical data centre you'd have raised flooring, and you pipe the air through the floor and up through the racks. People in the industry are used to what this looks like: that classic computer-room look, with a raised floor and air coming in the side. We get rid of that. We take out the raised floor, drop everything down onto the concrete, and build the whole hall as a fridge, with what we call fan walls on the side. We pump cold air into the fridge, and inside we do what we call hot-aisle containment: the air goes through the racks and flows out through the hot aisles. There we get another step improvement. That's called a fan-wall design. Some of the new halls (not the existing ones, but some of the halls in this massive new 20MW build at Samrand) are going to be fan-wall halls, which get that next level of efficiency. People have in their minds an idea of what the big global data centres look like, with their thousands of racks. That's exactly it; that's exactly how they're built, and that's what we're moving to. So that's the mechanical change.

On the electrical side, the control side if you like, we are deploying what we call a data centre infrastructure management system, a DCIM. It's a platform that lets us monitor the entire environment, right down to the exact air flows through every aircon unit, temperatures in, temperatures out, humidity. We literally have a digital twin of the environment. With that digital twin, we look at the whole environment and run it through an AI model, and the AI model tells us how to optimise: move that tile, change this air flow, adjust this temperature. So we can incrementally improve performance. Even in a fully built-out facility, where somebody has worked out that putting the tiles here and this there works about right, you can now do live monitoring of exactly what's happening in that facility. That's what we're doing now: we've deployed a DCIM within Samrand and we're busy using it to optimise the existing environment. You literally run the AI model and it says: close that, switch that, change that.
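To give a flavour of the kind of telemetry a DCIM platform reasons over, here is a toy sketch with invented unit names, readings and thresholds; it is not ADC's system or AI model:

```python
from dataclasses import dataclass

@dataclass
class CoolingReading:
    unit: str
    temp_in_c: float     # return-air temperature
    temp_out_c: float    # supply-air temperature
    airflow_m3h: float

# Made-up sample telemetry from three cooling units
readings = [
    CoolingReading("CRAH-01", 34.0, 18.0, 12_000),
    CoolingReading("CRAH-02", 26.0, 18.5, 12_500),
    CoolingReading("CRAH-03", 35.5, 19.0, 9_000),
]

for r in readings:
    delta_t = r.temp_in_c - r.temp_out_c
    # Crude heuristics: low delta-T suggests cold air bypassing the racks
    # (poor containment); low airflow suggests a blocked path or filter.
    if delta_t < 10:
        print(f"{r.unit}: low delta-T ({delta_t:.1f}C), check containment")
    elif r.airflow_m3h < 10_000:
        print(f"{r.unit}: low airflow, check for blocked tiles or filters")
    else:
        print(f"{r.unit}: OK")
```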
Very, very smart. I think you touched on this, but you mentioned that you have shells that you're building now, so you're expanding the data centre in different ways. What is shell-ready?

OK, so just to give you a sense of the kind of market we're in. I spoke about the older model, the kind of data centre Samrand used to be when it was a private data centre run for a single entity. In that environment, a company would typically invest in a purpose-built facility that matched its requirements exactly. In the larger market globally, what we're seeing now is that data centre demand is dominated by a couple of industries. The obvious one is the cloud providers. We've talked about AI, and if you look at what's actually being deployed around the world, it's the deployments from the global cloud providers. We're very careful not to mention our customers by name, but you can think of the largest companies in the world, the ones on the west coast of the United States and a few similar ones.

Yes.

Yeah, those are the guys driving the demand. And their demand is to roll out data centres that reach further and further into the world, closer to customers; we talk about edge data centres getting closer to all the customers. The consequence is enormous growth. So instead of building data centres one megawatt at a time, we need to be able to roll out data centres that grow in phases of five to 10 megawatts at a time. We have a well-developed model, used globally, called core and shell, where you build the shell first. The initial investment from us as the data centre provider is really creating the concrete shell, an environment that gives you a highly secure physical building. We can then tailor the fit-out to the exact requirements of, for example, one of the global cloud providers. We have a product name for that; we call it Superscale. In that environment we would ask a global cloud provider: what exactly do you want? What electrical configuration? What mechanical configuration? How many racks in the halls? What layout? Which kind of containment? Et cetera. And we effectively build to suit within the shell.

OK.

For a number of customers, and this is perhaps more common in something like the financial services industry (Samrand, by the way, is the biggest financial services outsource hub on the continent, second to none, simply because it houses a number of major banks and financial services players who are also moving towards outsourcing), a typical bank, for example, wouldn't necessarily want to specify all of this technical detail. So we take the shell and say: OK, we'll do our standard build, for example the tier-3 designs we spoke about, whatever it might be; we build it out to their specifications and match it, and we can be flexible on that. That's what we call our Flexscale product. Those are the environments where a customer, maybe a financial services customer, comes to us and says: I don't want to run my own data centre any more. By the way, that's not an uncommon position in the market today. A lot of companies have realised what specialist skills and capability it takes to run a data centre, particularly with all these standards in place, in an environment where you want flexibility: you want to move some of your workloads to cloud, you want scalability in your operations. So that's our Flexscale product; it is essentially building out an environment for a single customer. We can build a cage around it; we can put it in a single hall.

And then the tail end of customers is your regular colocation customers. That's what people are more familiar with when they think of a colocation data centre: individual customers putting in racks in ones and twos and tens. That's what we call our Retailscale product, and Retailscale is basically what you would imagine a data centre to look like: it's already fitted out, we can provide racks, and you can literally move in and switch on. So the idea of core and shell is really to let us target the high end of the market, to be more like a wholesale provider to the very largest customers, all the way down to being a very retail-focused, one-rack provider. Core and shell gives us that flexibility.
And importantly, I think, for ADC as a business, it's also a question of how we phase the build. We meet the customer's requirement in terms of when they want what and how they want it, and at the same time the model is smart from our perspective because it enables us to scale appropriately as market demand requires.

Yes. So you did mention that a lot of customers are moving away from managing their own infrastructure to you; you have economies of scale and economies of expertise, right? And that's to meet standards. Can you tell us about the various standards that you meet for your customers?

Yes, absolutely. Just a couple of comments there. We engage with a lot of CIOs in the wider enterprise and government space at the moment, and in South Africa the question has always been the total-cost-of-ownership decision between outsourcing and insourcing. But there are some real nuances to that nowadays. My favourite question to CIOs these days is: are you sure you can get a diesel technician when you have a problem with your generator during load shedding? Most of them realise that this highlights the point that this has become a specialised environment. We're not talking about maintaining an environment that might sometimes have a power failure, for which you keep something on standby; we're talking about being able to operate through everything from low levels of load shedding right the way through to extreme levels. And at the same time there's pressure on the whole economy; everybody has this problem. So that's what drives the logic of using a specialist provider.

To give you a sense of the standards: first, we need to meet the availability requirement, and we spoke about having tier-3 or tier-4 data centres. That's like a ticket to the game; no one would offer an outsourced data centre that doesn't provide at least those kinds of availability. But layered on top of that are a number of ISO standards. Talk to any of the major enterprises we deal with, and our financial services customers in particular, and they are very, very strict about their standards. You start with a basic ISO 9001 quality certification, which we have for all our data centres. You add to that ISO 27001 for information security. Now, we don't do the cyber security for a customer; as I said, we're not an IT provider. But even the access-control data from the ID card you present coming in has to be secured to the ISO 27001 level. And we're busy being certified on the ISO 27001:2022 edition; the guys out there who do this will recognise that as the latest version of the standard. There are also environmental certifications. Every major customer nowadays has a broader ESG requirement, so they require us to be both environmentally and health-and-safety certified; those are also ISO standards, ISO 14001 and ISO 45001. And then you get to the specialist standards. One of the specialist standards we adopt, which we have for Samrand and in fact across all our facilities, is the PCI DSS standard.
Now, I mentioned the three things a customer is looking for: one, high availability; two, connectivity; and three, security. When people talk about data centre security, they're often thinking of cyber security, but that's not where we focus our security as the provider of the facility; our security is focused on physical security. If you come into a facility like Samrand, the first thing you'll notice is that you can't get in without being identified, with a reference number and a suitable photographic government ID to prove who you are. Secondly, you're going to be on camera continuously; we have a camera environment that covers every corner of the data centre. All of that, the access control and the camera systems, goes into archived information, so we can correlate all of it. Take a simple example: we know who has come into the facility, and we can watch them on camera and see exactly what they do. All of that is archived and stored for a required period, and it is audited under the PCI DSS standard. PCI stands for payment card industry, and the payment card industry sets the standard. Anywhere a card transaction happens in the world, including in Samrand, the institutions running those transactions have to meet that standard, or they won't get a banking licence and won't be allowed to operate with the global card providers. So that standard is key.
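As a toy illustration of the correlation Hay describes, matching a badge event to the archived camera footage that covers it, here is a hypothetical sketch with invented identifiers and an assumed retention period; it is not ADC's actual system:

```python
from datetime import datetime, timedelta

NOW = datetime(2024, 8, 7)        # pretend "current" time for the example
RETENTION = timedelta(days=365)   # assumed archive retention, illustrative

# Invented sample data: one badge event and one archived camera clip
access_events = [
    {"person": "visitor-4821", "door": "hall-A",
     "time": datetime(2024, 8, 6, 9, 15)},
]
camera_clips = [
    {"camera": "cam-hall-A-03",
     "start": datetime(2024, 8, 6, 9, 0),
     "end": datetime(2024, 8, 6, 9, 30)},
]

def clips_for_event(event, clips):
    """Archived clips still within retention whose window covers the event."""
    return [c for c in clips
            if NOW - c["end"] <= RETENTION
            and c["start"] <= event["time"] <= c["end"]]

for e in access_events:
    for c in clips_for_event(e, camera_clips):
        print(f"{e['person']} at {e['door']} is covered by {c['camera']}")
```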
Then, most recently, we have the SOC 2 standard, which is more of an audit standard. We have SOC 2 Type 2, which essentially demonstrates to customers that everything we say we do in terms of our processes, our procedures and our security has been independently audited by an audit firm, and over a period of time. It's not a technical standard in that sense; it's a full audit of the processes and procedures of the organisation. There are a number of other standards we're working on too: ISO 50001, the energy management standard, and ISO 22301, the standard for business continuity, which is your disaster recovery and business continuity standard. So for these very largest customers, those are basically the things they look for. You'd buy from us because we're a data centre provider; we get a ticket to the game by being a tier-3 or tier-4 data centre with high availability, security and connectivity. But the customers we're talking about would not buy from a provider that can't produce all of these certifications. That's really what differentiates us.

And from a South African perspective, if you look at South Africa on the global stage, we've seen multiple countries on the African continent starting to position themselves as providers of data centres on the global stage. One of the ironies is that Africa has 15% of the world's population and 1% of the world's data centres. So we have huge potential for growth, and as this AI growth happens around the world, we're going to see the edge of that AI world starting to expand into the African continent. To be part of that world, we need to meet these standards. That's what ADC is all about: being a competitor with global players. We don't benchmark ourselves against a regular enterprise data centre; we benchmark ourselves against the best of the best in global hyperscale data centres, which is heavy lifting. It means we have to do a lot more, literally a lot more, than a regular data centre operated within an enterprise. But it does mean that customers who come to us get all of that. Whether you're a one-rack customer or a thousand-rack customer, you're going to get access to a data centre with all of that certification.

Talking about the heavy lifting you're doing: I've heard rumours that the Samrand facility is also bomb-proof and can even survive an aircraft crash.

Yes, absolutely. The Samrand facility is a hugely impressive facility. As I say, we acquired the facility, but it was built to very, very high standards, very much focused on the financial services industry. As much as we talk about all these ISO standards, it's also built like Fort Knox; it really is. Getting in and out of there... we have metre-thick walls on the existing facility, it's got bomb-proof doors, et cetera. It's certainly a facility that gives that additional level of comfort. We've got to produce certificates to say we meet all these standards, but in the financial services industry, and with people's understanding of the importance of privacy and safety of information, it gives that extra level of comfort that this is a super-secure facility.

OK. Now, building such a large facility obviously takes a lot of investment, but a lot of expertise as well. Could you tell us who you've collaborated with on this facility?

So we've got a very wide range of suppliers. I'm not going to go through the individual suppliers, but just to give you a sense of how we work: because we're not an IT company, our primary suppliers are actually in the building industry and in the equipment industry, electrical and mechanical. I don't want to single out individual suppliers, because we effectively work with the whole range, but we've worked with all the major construction companies in the country. Samrand, certainly, has been built by one of the leading construction companies. One thing I can say is that Samrand has also achieved certification as one of the safest construction sites in the country, and that's because we're operating within all these very tight frameworks and standards. So yes, we've worked with multiple construction companies. And if you look at the equipment we buy, we're buying world-class, leading-edge generators, transformers and cooling systems; again, I won't single out individual suppliers, because we work with all of them. That's been the basis of how we develop these facilities. In parallel, we work with providers on the maintenance and operation of these facilities, a number of outsourced providers, for example specialists in facilities management and maintenance operations. I mentioned earlier: how do you get a diesel technician? The simple answer is that you work with a provider whose service level agreement guarantees an available diesel technician on their payroll who can be on your site when you need him. That's the kind of provider we work with. So there are a number of providers.
And then also on the security side, obviously. The facility is very good on physical security, and we work with a number of specialist security companies as well. One thing about dealing with international customers: I spoke about connectivity and availability, but when you're dealing with global customers coming onto the African continent, security becomes a bigger question for them. What we're able to do (and this is not just Samrand; it applies to our facilities in Nigeria, our facilities in Kenya and our facilities across South Africa, where we have three major data centres) is provide that level of comfort that we are not just specialists in the standards but specialists in operating securely in a South African environment. It's really about being integrated into the local security environment: knowing what's happening in the wider society, ensuring we can lock down facilities if we need to, ensuring we've got the right intelligence. Because we work with local security providers, we have everything from basic monitoring to armed response to air support, anything you want, on call. It's to give a global customer comfort that, even though they're not familiar with, say, the South African market or any of our other African markets, we're the specialists. And if you talk financial services, if you step back and look at it, that specialisation gives, say, a financial services customer comfort that we can actually do this better than insourcing.

OK.

And that's the key, yeah.

OK. Excellent. Dr Angus Hay, thank you very much.

Great. Thank you.

This was another episode of TCS+, brought to you by Africa Data Centres.