Tech
Chip redesign to optimise server ops, water to keep cool
Data centres and the Cloud – an integral part of the digital world, where nearly all user data, photographs, music and movies end up being stored – are also massive guzzlers of energy. Ironically, most of the energy consumed in running them is used not to process data, but to keep the servers cool.
This problem is aggravated by the complex design of modern servers, which results in high operating temperatures, according to David Atienza Alonso, who heads the Embedded Systems Laboratory at the Swiss Federal Institute of Technology Lausanne (EPFL). “As a result, servers cannot be operated at their full potential without the risk of overheating and system failures,” he told journalists visiting the EPFL campus in the hilly city of Lausanne on the shores of Lake Geneva, midway between the Jura Mountains and the Swiss Alps.
To address this, a new server architecture being developed at EPFL experiments with what is called a “multi-core architecture template with an integrated on-chip microfluidic fuel cell network” – meaning it deploys tiny microfluidic channels at the chip level so that the fluid flowing through them both cools the servers and converts the waste heat into electricity. Etching small channels between the layers of silicon and then pumping fluid through them makes it theoretically possible to draw heat out of a stacked chip fast enough to keep it running without overheating.
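The physics at work here can be sketched with simple arithmetic: the heat a coolant carries away equals its mass flow times its specific heat times its temperature rise. The illustrative Python snippet below applies that relation; the flow rate and temperature figures are assumptions chosen for the sake of the example, not EPFL design data.

```python
# Rough estimate of the heat a small water flow can carry out of a chip,
# using Q = mass_flow * specific_heat * temperature_rise.
# All figures below are illustrative assumptions, not EPFL design data.

WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)

def heat_removed_watts(flow_ml_per_min: float, temp_rise_c: float) -> float:
    """Heat (in watts) carried away by water at the given flow and warming."""
    mass_flow_kg_per_s = flow_ml_per_min / 1000.0 / 60.0  # water: ~1 g per mL
    return mass_flow_kg_per_s * WATER_SPECIFIC_HEAT * temp_rise_c

# 100 mL/min of water warming by 15 degrees C as it crosses the chip:
print(f"{heat_removed_watts(100, 15):.0f} W")  # ~105 W, about one server CPU
```

Even this modest trickle of water, on the rough numbers assumed here, absorbs roughly the heat output of a typical server processor.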
This on-chip microfluidic fuel cell network is one of several solutions being tried out globally to tackle the heat generated by modern servers in operation. Other tech interventions include an experiment from a US-based company called Subsea Cloud, which proposes to put commercial data centres in deep ocean waters and has claimed it is close to the physical launch of an underwater pod near Port Angeles, Washington state.
Microsoft too has proposed something similar: building a big tube with closed ends, placing servers inside it, and dropping it down to the ocean floor. As part of this plan, Microsoft’s Project Natick team sank its Northern Isles data centre 117 feet to the seafloor off Scotland’s Orkney Islands in the spring of 2018, and for the next two years team members tested and monitored the performance and reliability of the data centre’s servers. The team hypothesised that a sealed container on the ocean floor could provide ways to improve the overall reliability of data centres. Lessons learned from Project Natick inform Microsoft’s data centre sustainability strategy around energy, waste and water, Ben Cutler, a project manager in Microsoft’s Special Projects research group who led Project Natick, said in an official blog after the data centre was reeled up in 2020.
The reason for all these experiments lies in the way computer chips are designed today. They draw electric power through thin copper wires running through them and dissipate the heat they generate into the surrounding air, which means large numbers of air conditioners must work overtime to keep the ambient air in server rooms cool. The need for continuous airflow to carry away this heat has forced chip designers to rely on a more or less flat layout for packing chips. This is extremely inefficient in terms of space, especially since integrated circuit technology keeps scaling down to smaller transistor sizes in a bid to keep up with the growing computational demands of the applications used in homes and offices today.
By using fluidic channels with water running through them, designers can exploit water’s much higher heat-absorbing capacity compared to air, making it possible to cool chip components that are packed closer together, Atienza Alonso said. These components can then be stacked on top of each other in a three-dimensional arrangement, improving server efficiency and making servers far denser in terms of storage capacity.
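The gap between the two coolants is easy to quantify. The short sketch below uses standard room-temperature textbook values for the density and specific heat of water and air, purely for illustration, to compare how much heat each can absorb per unit volume.

```python
# Compare water and air as coolants by volumetric heat capacity
# (density * specific heat). Standard room-temperature textbook values,
# used purely for illustration.

WATER_J_PER_M3_K = 1000.0 * 4186.0  # 1000 kg/m^3 * 4186 J/(kg*K)
AIR_J_PER_M3_K = 1.2 * 1005.0       # 1.2 kg/m^3 * 1005 J/(kg*K)

ratio = WATER_J_PER_M3_K / AIR_J_PER_M3_K
print(f"Per unit volume, water absorbs roughly {ratio:.0f}x more heat than air")
# ~3470x: matching a small water flow takes thousands of times more air.
```

It is this three-orders-of-magnitude advantage that makes densely stacked, water-cooled chips plausible where air cooling falls short.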
According to Atienza Alonso, the EPFL project aims to completely revise the current computing server architecture to drastically improve its energy efficiency and that of the data centres it serves. The 3D architecture his team is designing, he said, can overcome “the worst-case power and cooling issues” at the same time by deploying what he terms a “heterogeneous computing architecture template”, which recycles the energy spent on cooling through the integrated microfluidic cell array channels and recovers up to 40 per cent of the energy typically consumed by data centres. With more gains expected as the microfluidic cell array technology improves, the energy consumption of a data centre would fall sharply, with more computing done using the same amount of energy.
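Taking the quoted 40 per cent figure at face value, its impact is straightforward to work out. The sketch below applies it to a hypothetical facility; the 100 GWh annual consumption figure is an assumption chosen purely to make the arithmetic concrete.

```python
# What recovering 40 per cent of a data centre's energy would mean,
# per the figure quoted for the microfluidic cell array. The facility
# size is a hypothetical assumption to make the arithmetic concrete.

annual_consumption_gwh = 100.0  # hypothetical facility: 100 GWh per year
recovery_fraction = 0.40        # up to 40 per cent recovered (quoted figure)

recovered_gwh = annual_consumption_gwh * recovery_fraction
net_gwh = annual_consumption_gwh - recovered_gwh
print(f"Recovered: {recovered_gwh:.0f} GWh/year; net draw: {net_gwh:.0f} GWh/year")
# Same grid energy could then support ~1.67x as much computing.
```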
“Thanks to integration of new optimised computing architectures and accelerators, the next generation of workloads on the cloud can be executed much more efficiently,” Atienza Alonso said. “As a result, servers in data centres can serve many more applications using much less energy, thus dramatically reducing the carbon footprint of the IT and cloud computing sector.”
If any, or all, of these experiments work out and can be deployed at scale, it could mark a quantum leap in the way typical data centres and the Cloud operate. The use of a liquid coolant inside the chip has been debated for a while: engineers at IBM originally proposed it to tackle the problem of cooling 3D chips nearly a decade ago. But with these cooling solutions now close to being market-ready, 3D server stacking is being seen as a potentially path-breaking way to boost server performance.
Any breakthrough technology would be welcome news in countries seeing ever-growing data consumption, and with it a growing need to store and process data and a rising demand for data centres. In most countries, including India, local storage of data has become increasingly critical as data protection and security become top priorities.
Globally, the US dominates with over 2,500 data centres, while Germany has some 490. India ranks thirteenth among countries with the highest number of data centres, even as its data centre capacity has been growing rapidly – pegged at 637 MW in the first half of 2022 and expected to double to 1,318 MW by 2024. Mumbai leads within the country, accounting for close to half of all data centres, followed by Bengaluru and Chennai.
(The writer was in Switzerland on a trip arranged by the Swiss Government)
