
Power consumption in the world’s data centers: What can we do?

Technology is so thoroughly integrated into our lives that it has improved the quality of life around the globe. Information and communication technologies (ICT) are predicted to grow exponentially, but there is a price for this advance: ICT is expected to consume a significant and growing percentage of the world’s power.

Consequently, there is a great deal of media coverage examining the state of the world’s data centers and how much power they consume.

A good start

What’s encouraging is that significant steps have been taken to reduce the rate of growth in data center power usage. A data center’s power consumption goes chiefly into two things: the computing equipment itself and the cooling required to keep it running. Enter the Green Grid, now absorbed into ITI, whose aim was to drive more energy-efficient practices and which developed the concept of PUE (Power Usage Effectiveness): the ratio of the total power a facility draws to the power delivered to its IT equipment.

The Open Compute Project (OCP), a group of the world’s hyperscalers, is working on bringing facilities close to a PUE of one, the best possible value, where virtually all power goes to the IT equipment rather than to cooling and other overhead. However, edge computing is growing in adoption, and its ubiquity will no doubt affect data center power usage, as it does not suit hyperscale data centers.
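
As a rough illustration of how the metric works (the facility and IT loads below are hypothetical, not drawn from any particular site), PUE is simply the metered facility power divided by the metered IT power:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a site drawing 1,500 kW overall to run 1,000 kW of IT load.
print(pue(1500, 1000))   # 1.5 -> 0.5 W of overhead (cooling, power delivery) per W of compute
print(pue(1050, 1000))   # 1.05 -> close to the ideal PUE of one
```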

It’s also expensive to achieve perfection. What will it take to reach a PUE of one? Some solutions include locating data centers in cold countries, where the heat they generate contributes to warming nearby buildings, and designing servers that can run at high ambient temperatures in an effort to eliminate air-conditioning units. Still, these measures do nothing about hot, very fast, single-threaded processors that consume vast amounts of power and give off a great deal of heat.

We’ve had some help along the way. Moore’s Law originally said that every year we would see a doubling of transistors in the same space; the law was later revised to a doubling every two years. We have followed it as a guide to produce ever faster CPUs without increasing power consumption. Combine that with a focus over the last few years on cores, memory controllers and accelerators, and the result has been increasingly fast single threads for the majority of compute tasks.

However, Moore’s Law is predicted to come to an end around 2023. We are already seeing the impact as Intel struggles with its 10nm fabrication process, pushing it beyond two years since the last major increase in transistor density.

Until a few years ago, these ever faster CPUs were typically dedicated to a single service, and because the CPU was faster than the service required, the CPU and memory were often left underutilized. The realization that this waste could be reclaimed, and costs cut, helped usher in the significant rise of virtualization and containers. However, running the hypervisor itself consumes processing power; it is an overhead. Virtualization is nothing new; it was available on IBM mainframes many years ago. Only its adoption on x86 servers is relatively recent.

As always, one of the keys to a successful data center is managing its resources: making sure there is enough capacity to deliver, but not so much that money is wasted. We need to keep asking ourselves where the bottleneck is. Is it the pure compute, or is the system being slowed down by storage, the memory controller or even the network? There is growing recognition that offloading some of the I/O work from the CPU is a good thing, as it lets the CPU concentrate on the computing.

Latency is certainly an issue. It is one of the things driving the adoption of edge computing, along with AI and machine learning workloads that are affected by even the smallest delays. But can you tell whether your web page was delivered in 2ms or 3ms? Are we expending a great deal of corporate and personal energy, and power, to achieve effectively perfect latency when it is not actually required?

Why do servers in the data center typically include VGA and USB ports? They are there for people, and because the BIOS expects them, but when was the last time a person actually used them on your server?

We’ve had plenty of evolution; maybe it is time for a revolution?

Time for a rethink

All of those unnecessary components need to be stripped away, and high throughput must become the priority. A number of systems running at lower power, with higher bandwidth for network, storage and memory, will deliver much higher throughput in parallel than traditional server architectures. The server needs to be smarter, rather than relying on the blunt force of a faster processor stuck in the age of 1980s personal computing.

Ramping up the clock speed will certainly give you more performance, but at what cost in watts consumed? The relationship is not linear: higher clock speeds also demand higher voltages, and dynamic power rises roughly with voltage squared times frequency, so each extra GHz costs disproportionately more power. A CPU running at a lower clock speed draws fewer watts and produces less heat, which translates into more servers in a smaller space requiring less power for air-conditioning. Each individual thread may not be as fast, but the overall throughput for a given space and power budget is higher.
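
To make that trade-off concrete, here is a minimal sketch using the standard CMOS dynamic-power approximation P ≈ C·V²·f. The capacitance, voltage and frequency figures are illustrative assumptions, not measurements of any particular CPU:

```python
def dynamic_power_w(capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    """Approximate CMOS dynamic power: P = C * V^2 * f."""
    return capacitance_f * voltage_v**2 * freq_hz

C = 1e-9  # assumed effective switched capacitance, in farads

# Assumed operating points: higher frequency typically requires higher voltage.
low = dynamic_power_w(C, voltage_v=0.9, freq_hz=2.0e9)   # ~1.6 W
high = dynamic_power_w(C, voltage_v=1.2, freq_hz=3.5e9)  # ~5.0 W

# 1.75x the clock speed costs roughly 3x the power in this sketch, which is why
# many slower cores can beat one hot, fast core on throughput per watt.
print(low, high, high / low)
```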

It’s possible that PUE is measuring the wrong thing. Instead, we should be looking at how much work can be delivered per kilowatt-hour; that is a truer measure of efficiency. This would support the changes needed for a data center to begin addressing what it takes to reduce its power consumption.
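
As a hypothetical illustration of that metric (the request counts and power draws below are invented purely for the comparison), work per kilowatt-hour can be compared directly across very different machine designs:

```python
def work_per_kwh(requests_served: int, avg_power_w: float, hours: float) -> float:
    """Useful work (here, requests served) delivered per kilowatt-hour consumed."""
    kwh = avg_power_w * hours / 1000
    return requests_served / kwh

# Assumed figures: one fast, hot server vs. a cluster of slower, low-power nodes.
fast_hot = work_per_kwh(requests_served=90_000_000, avg_power_w=450, hours=24)
many_slow = work_per_kwh(requests_served=120_000_000, avg_power_w=300, hours=24)

print(f"{fast_hot:,.0f} requests/kWh vs {many_slow:,.0f} requests/kWh")
```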

A change in architecture, combined with a switch to low power processors will enable massive and revolutionary change in power consumption levels in the modern data center.

―DCD

