In Search of Power: New Energy Sources for LLMs


Large language models (LLMs) require significant computational power, and compute is only part of the cost. The rapid growth of generative AI products is driving enormous energy consumption: the power draw of data centers is gradually reaching gigantic volumes.

Data centers will require modernization – for example, combining several renewable energy sources. Other solutions are emerging at the intersection of technologies: AI systems can manage switching between different energy sources in search of the optimal power scenario.

Dell'Oro Group predicts that by 2027, investments in IT infrastructure for AI will increase capital spending on data centers to $500 billion.

Today we will look at optimization trends that will not only meet growing demand, but also make it possible to build data centers in greater numbers and at a much larger scale than today.

Energy consumption in the age of AI

Research firm Epoch AI estimated in 2022 that the computing power used to train a new state-of-the-art AI model doubles every 6-10 months. And as compute grows, so does energy consumption.

Large language models require much more energy than traditional search engines. The International Energy Agency (IEA) estimates that a single request to ChatGPT consumes almost 10 times more energy than a single Google search.

The power consumption of an LLM depends heavily on model size. According to OpenAI, GPT-2, with 1.5 billion parameters, consumed 28,000 kWh of energy during training. For comparison, GPT-3, with 175 billion parameters, consumed 284,000 kWh.
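A back-of-the-envelope calculation with the figures above shows an interesting nuance: total training energy grew roughly tenfold, but energy per parameter actually fell. This sketch only normalizes the two cited numbers; it is not a general scaling law.

```python
# Normalize the training-energy figures cited above
# (28,000 kWh for GPT-2, 284,000 kWh for GPT-3) per billion parameters.

def energy_per_billion_params(total_kwh: float, params_billion: float) -> float:
    """Training energy (kWh) per billion model parameters."""
    return total_kwh / params_billion

gpt2 = energy_per_billion_params(28_000, 1.5)    # ≈ 18,667 kWh per B params
gpt3 = energy_per_billion_params(284_000, 175)   # ≈ 1,623 kWh per B params
```

So while absolute consumption keeps climbing with model size, per-parameter efficiency has improved, which is part of why the hardware and software optimizations discussed below matter.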

The IEA report shows that data centers consumed 460 terawatt-hours (TWh) in 2022, or 2% of total global electricity consumption. The IEA predicts that the amount of energy consumed by data centers could more than double within three years as AI systems grow: the world's data centers would then consume 1,000 TWh, roughly equivalent to Japan's annual electricity consumption. The Uptime Institute, in turn, believes that by 2025 AI will account for 10% of global energy consumption in the data center industry.

However, this growth need not cause acute shortages, because modern data centers can be optimized. There are many options available, from immersion cooling to nuclear power.

New energy sources

NiZn batteries

Since even a momentary failure in the utility grid can disrupt the operation of servers and other equipment, data centers use uninterruptible power supplies (UPS). UPSs smooth out short-term voltage surges, filter the supply voltage, and, in the event of a loss of utility power, automatically switch their load to the batteries.

In the latest data centers, lithium-ion batteries can be used instead of diesel generators. Batteries not only provide backup power that protects equipment from damage during an outage, but can also keep the entire data center running.

Unfortunately, power failures are still the most common cause of disasters in data centers. According to the Uptime Institute, UPS failure is the leading cause of power failures in data centers.

UPSs, like any other equipment, wear out over time. Frequent use and intense loads speed up this process. Using low-quality or incompatible components reduces system reliability and increases the likelihood of failures.

Nickel-zinc batteries can partially solve this problem. Unlike lithium and lead-acid batteries, NiZn batteries remain conductive even when weakened or discharged. They store more energy per unit volume and weight than some other battery types, and they usually offer a longer service life (more charge/discharge cycles).
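Energy density translates directly into backup runtime for a UPS string of a given physical size. A minimal sketch, with entirely hypothetical figures for pack volume, energy density, and usable depth of discharge:

```python
def backup_runtime_hours(pack_volume_l: float, energy_density_wh_per_l: float,
                         load_kw: float, usable_fraction: float = 0.8) -> float:
    """Estimated UPS runtime for a given IT load.

    All inputs here are illustrative, not vendor data: a battery cabinet
    volume in liters, a volumetric energy density in Wh/L, and the
    fraction of capacity that is safely usable per discharge.
    """
    usable_kwh = pack_volume_l * energy_density_wh_per_l / 1000 * usable_fraction
    return usable_kwh / load_kw

# A 500 L cabinet at a hypothetical 280 Wh/L feeding a 20 kW rack row:
runtime = backup_runtime_hours(500, 280, 20)  # 5.6 hours
```

The point of the comparison: at the same cabinet volume, a chemistry with higher Wh/L either runs longer or frees up floor space, both of which matter in a dense machine room.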

Hydrogen and derivatives


The use of hydrogen in data centers is increasingly being discussed for a number of compelling reasons, including environmental, economic and technological aspects. Hydrogen systems offer high reliability and durability; they are scalable and easily adapt to changing needs without major infrastructure changes. Autonomy from the traditional electrical grid increases the resilience of data centers to power outages.

Hydrogen fuel cells have their downsides. For example, according to Microsoft, 48 hours of backup power for a data center would require up to 100 tons of hydrogen. And during transportation and storage, the hydrogen must be kept at -253°C (GOST R ISO 13985-2013).
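A quick sanity check of that 100-ton figure, assuming hydrogen's lower heating value of about 33.3 kWh/kg and a 50% fuel-cell conversion efficiency (both are textbook assumptions, not Microsoft's numbers):

```python
# How much average electrical load can 100 tons of hydrogen carry for 48 h?
H2_LHV_KWH_PER_KG = 33.3       # lower heating value of hydrogen (assumed)
FUEL_CELL_EFFICIENCY = 0.5     # electrical conversion efficiency (assumed)

def backup_power_mw(hydrogen_tons: float, hours: float) -> float:
    """Average electrical power (MW) deliverable from a hydrogen reserve."""
    energy_kwh = hydrogen_tons * 1000 * H2_LHV_KWH_PER_KG * FUEL_CELL_EFFICIENCY
    return energy_kwh / hours / 1000

power = backup_power_mw(100, 48)  # ≈ 34.7 MW of sustained load
```

Under these assumptions, 100 tons covers a facility drawing roughly 35 MW, which is consistent with the scale of a large data center and explains why the storage logistics are the real obstacle.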

As an alternative to hydrogen, consider ammonia. It is much easier to handle, requiring milder conditions: about 10 bar at -25°C. Its denser structure also makes it more efficient to transport. Ammonia is split into hydrogen and nitrogen, and the resulting hydrogen can then be used to produce electricity.

Compared to hydrogen, ammonia is less explosive, and its leaks can be easily detected by its characteristic odor. Ammonia is also environmentally beneficial: it does not release carbon when decomposed, and possible emissions of nitrogen oxides can be neutralized. In addition, technologies for using ammonia have already been well researched, which speeds up their implementation.

Modular nuclear power plants


Small modular reactors (SMRs) can offer a sustainable, reliable and efficient solution to the energy needs of data centers. SMRs are designed to meet high standards of safety and accident resistance. A modular plant can produce a large amount of energy from far less fuel than hydrocarbon sources, and the operating costs of nuclear reactors are predictable and stable.

SMR projects for data centers are gradually gaining popularity. For example, Standard Power plans to commission SMRs based on NuScale reactor technology for several data centers in 2029. According to Standard Power's plans, NuScale will provide 24 modules with a capacity of 77 MW each.

By the way, Russia became the first country to deploy two SMRs with a capacity of 35 MW each. The project was implemented at the floating nuclear power plant “Akademik Lomonosov”.

Cooling system optimization


Cooling systems in data centers typically consume a significant portion of energy – 30-50% of total energy consumption.
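The cooling share maps directly onto the Power Usage Effectiveness (PUE) metric, the ratio of total facility energy to the energy delivered to IT equipment. A minimal sketch with illustrative numbers placing cooling near the upper end of the 30-50% range above:

```python
def pue(it_kwh: float, cooling_kwh: float, other_overhead_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.

    An ideal facility has PUE = 1.0 (every kWh goes to computing).
    """
    return (it_kwh + cooling_kwh + other_overhead_kwh) / it_kwh

# Illustrative split: 1000 kWh of IT load, 900 kWh of cooling (45% of the
# 2000 kWh total), 100 kWh of other overhead (UPS losses, lighting, ...):
example = pue(1000, 900, 100)  # PUE = 2.0
```

Cutting cooling energy is therefore the single biggest lever on PUE, which is why the techniques below target it first.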

Liquid cooling is an established trend. Newer approaches include full immersion and direct-to-chip/cold-plate cooling. In the first case, the server is completely immersed in a non-conductive, non-flammable dielectric liquid. The second, more targeted approach attaches a metal cold plate or heatsink to the components that generate the most heat (such as chips); a liquid coolant carries the heat away and is then cooled in turn.

Back in 2018, Microsoft sank a data center off the coast of Scotland, submerging 864 servers and 27.6 petabytes of storage to a depth of 35.7 meters. The cooling effect of seawater significantly improved energy efficiency. The company reported that the experiment was a success: the failure rate of the underwater data center was 8 times lower than that of traditional sites. Low failure rates are especially important given the difficulty of servicing sealed containers on the ocean floor.

Thermoelectric generators (TEG)

Data centers generate significant amounts of heat, which can be converted into electricity. For example, thermoelectric generators (TEGs) can be used. The operating principle of such devices is based on the Seebeck effect. In a circuit of two different conductors, when maintaining a temperature difference, a thermoelectric voltage arises at their contact points. When one end of a conductor heats up while the other remains cold, a temperature gradient occurs. The electrons at the hot end of the conductor gain more energy and begin to move towards the cold end. As a result, an electromotive force arises, creating an electrical voltage.

TEGs consist of multiple thermoelectric modules, each containing pairs of different conductors connected in series and parallel to create the desired level of voltage and current.
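The series-connection principle above can be put into numbers. The open-circuit voltage of a module is V = N · (S_p − S_n) · ΔT, where N is the number of thermocouple pairs and S_p, S_n are the Seebeck coefficients of the two materials. The coefficients below are typical textbook values for bismuth telluride, not data for any specific product:

```python
def teg_open_circuit_voltage(n_pairs: int, seebeck_p: float,
                             seebeck_n: float, delta_t: float) -> float:
    """Open-circuit voltage (V) of a series-connected TEG module.

    Seebeck coefficients are in V/K; delta_t is the hot/cold gap in K.
    """
    return n_pairs * (seebeck_p - seebeck_n) * delta_t

# 127 pairs of Bi2Te3 legs (roughly ±200 µV/K) across a 50 K gradient,
# about what hot server exhaust against chilled water could sustain:
v = teg_open_circuit_voltage(127, 200e-6, -200e-6, 50)  # ≈ 2.54 V
```

A single module thus produces only a few volts, which is why practical installations stack many modules electrically in series and parallel, as the text describes.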

By integrating TEG, data centers can improve their overall Power Usage Effectiveness (PUE). However, the cost of thermoelectric materials and the need for engineering modifications remain significant barriers to the rapid adoption of new technologies. As TEG develops and becomes cheaper, this approach may become more common in the future.

Edge Computing

Processing big data typically requires transferring data from the source to the data center and back, which congests networks and increases costs. Today, many regional companies have to obtain cloud services from large metropolitan data centers, since local cloud providers are often absent and the available services are limited to VDS/VPS offerings.

Modular data centers come to the rescue: they can be deployed quickly in almost any location so that data is processed as close as possible to its source. Their main advantage is scalability: the number of modules and racks grows gradually. They provide the same fault tolerance as large data centers: every critical infrastructure element is redundant under at least an N+1 scheme, with multiple power modules, diesel generator sets, tanks of diesel fuel and coolant, and so on.
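N+1 sizing is easy to express concretely. A minimal sketch (the 750 kW load and 200 kW module rating below are made-up example figures):

```python
import math

def modules_required(load_kw: float, module_kw: float,
                     redundancy: int = 1) -> int:
    """Power modules needed under an N+redundancy scheme.

    N modules are enough to carry the load; `redundancy` extra module(s)
    cover the failure or maintenance of any single unit.
    """
    n = math.ceil(load_kw / module_kw)
    return n + redundancy

# A hypothetical 750 kW modular site built from 200 kW power modules:
count = modules_required(750, 200)  # 4 to carry the load + 1 spare = 5
```

The same arithmetic applies to generator sets and cooling units; modularity means `load_kw` can grow in steps, with modules added only as each threshold is crossed.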

Modularity implies not only the physical structure of the object, but also stage-by-stage development according to needs. Within the production site, modular data centers can be combined into high availability clusters.

IT systems optimization

One way to improve data center efficiency is to use dedicated infrastructure management software. These can be data center infrastructure management systems (Data Center Infrastructure Management, DCIM) or solutions based on supervisory control and data acquisition (SCADA) systems. They monitor the energy consumption of servers, storage, routers and air conditioning systems.

Such systems automatically distribute load between servers, turn off unused devices when necessary, and give data center operators recommendations for adjusting cooling fan speeds.

DCIM solutions can detect that air-conditioning fans are spinning faster than the server room's cooling demand requires. With such analytics and optimization, data centers can save tens or even hundreds of kilowatts.
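Why does trimming fan speed save so much? By the fan affinity laws, airflow scales linearly with rotational speed, but fan power scales with its cube, so a small slowdown yields a disproportionate saving:

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Fraction of rated fan power drawn at a given speed fraction.

    Fan affinity law: power scales with the cube of rotational speed
    (an idealized model; real fans deviate somewhat at low speeds).
    """
    return speed_fraction ** 3

# Running CRAC fans at 80% speed draws only ~51% of rated power:
saving = 1 - fan_power_fraction(0.8)  # ≈ 49% of fan energy saved
```

This cubic relationship is exactly the lever a DCIM system pulls when it recommends slowing fans that deliver more airflow than the room needs.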

Finally, AI itself can offer solutions to rising energy costs. Using machine learning, Google managed to reduce the energy consumed by its cooling systems by 40%. A neural network was trained to predict PUE from 19 factors, using a dataset of 184,435 points at 5-minute resolution; after training, PUE prediction accuracy reached 99.6%.
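The setup can be illustrated on synthetic data. Google's model was a deep neural network; the sketch below substitutes a plain least-squares fit and randomly generated telemetry, so it only demonstrates the shape of the problem (19 operational features per 5-minute sample, PUE as the target), not the actual method or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for real telemetry: 19 features per sample
# (pump speeds, setpoints, outside temperature, ...), PUE as target.
n_samples, n_features = 2000, 19
X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(scale=0.01, size=n_features)
pue_target = 1.1 + X @ true_w + rng.normal(scale=0.001, size=n_samples)

# Fit a linear model with an intercept column via least squares.
A = np.hstack([X, np.ones((n_samples, 1))])
w, *_ = np.linalg.lstsq(A, pue_target, rcond=None)

pred = A @ w
mae = np.abs(pred - pue_target).mean()  # mean absolute PUE error
```

Once such a model is accurate, the operator can simulate "what if" scenarios, such as the server-shutdown case described below, by evaluating candidate control settings against the predicted PUE.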

Now, for example, if some servers in a data center need to be shut down for a few days, the model will tell you what small changes to make to the cooling system to minimize the impact on PUE.


Data centers by their nature strive for high energy efficiency. Although a single data center can consume 10 to 50 times more energy per unit of floor area than a standard commercial office building, overall data center energy consumption has grown little since 2010. Only with the rapid rise of LLMs have engineers faced the task of finding unconventional software and hardware solutions to curb energy consumption.

In addition, virtualization itself reduces hardware maintenance, cooling, and energy costs. A report from the Environmental Protection Agency (EPA) states that server virtualization can lead to energy savings of up to 80%. Reducing equipment footprint also results in lower cooling costs, further contributing to overall savings.
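Where virtualization savings come from is easy to see in a consolidation estimate. A minimal sketch with made-up figures (the 10:1 consolidation ratio and per-server draw are illustrative; real savings depend on host utilization and hardware):

```python
import math

def consolidation(physical_servers: int, vms_per_host: int) -> tuple[int, float]:
    """Hosts needed after virtualization and the reduction in machine count.

    Assumes each workload previously ran on its own underutilized server.
    Note: virtualization hosts draw more power each, so the energy saving
    is smaller than the machine-count reduction suggests.
    """
    hosts = math.ceil(physical_servers / vms_per_host)
    reduction = 1 - hosts / physical_servers
    return hosts, reduction

# Consolidating 100 one-application servers at 10 VMs per host:
hosts, reduction = consolidation(100, 10)  # 10 hosts, 90% fewer machines
```

Fewer machines also means less heat to remove, which is the second-order cooling saving the EPA report points to.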

Virtualization allows you to redistribute and expand virtual resources (processors, memory, storage) in real time. Using the services of IaaS providers, the client company pays only for the capacity actually used, which also helps reduce energy consumption.
