How supercomputers found their industrial mojo – the evolution of high performance computing

As part of the course “Mathematics for Data Science”, we have prepared our traditional translation of an interesting article.

We also invite you to watch an open webinar on “Derivative of a function and Taylor’s formula”. We will recall the key concepts of mathematical analysis: functions and their derivatives. With this apparatus in hand, we will then discuss Taylor’s formula, one of the most fundamental results in analysis. We will talk about why the formula is needed, cover the necessary theory and, of course, work through examples.


Supercomputers used to be the preserve of scientists and the military, but industrial use cases are now gaining traction. And in a sense, all computers are supercomputers. Today we will look at how this field has developed and what the future holds.

Are supercomputers computers? Not exactly. The essence of supercomputers is that they are not really computers at all. They are instruments of scientific discovery or strategic business assets that happen to be built out of computer technology. An impressive amount of computer technology.

Formerly the traditional prerogative of scientific research and defense, supercomputers are now finding use in the commercial sector.

Supercomputers – what are their industrial applications?

Here are a few examples. Seismic processing in the oil industry is dominated by High Performance Computing (HPC), because most of the computation is explicit and runs on structured matrices. Likewise, both computational electromagnetics using method-of-moments algorithms and signal processing rely on high performance computing in the aerospace community – for example, to make the Boeing 787 lighter.
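“Explicit computation on structured matrices” usually means stencil updates sweeping over a regular grid. A minimal sketch in Python — a toy 1-D heat-diffusion stencil invented for illustration, not any real seismic code:

```python
# Toy explicit stencil update on a structured 1-D grid: the kind of
# regular, data-parallel computation that dominates HPC workloads like
# seismic processing. Illustrative only, not production code.

def heat_step(u, alpha=0.1):
    """One explicit finite-difference step of the 1-D heat equation."""
    n = len(u)
    new = u[:]  # copy; boundary cells stay fixed at their old values
    for i in range(1, n - 1):
        new[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
    return new

# A spike of heat in the middle of a cold rod diffuses outward.
u = [0.0] * 11
u[5] = 1.0
for _ in range(50):
    u = heat_step(u)

print(f"peak after 50 steps: {u[5]:.3f}")
```

Each grid point is updated from its immediate neighbors only, which is why such codes partition cleanly across thousands of nodes.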

But wider applications can be found in engineering, product design, complex supply-chain optimization (really, almost any kind of optimization), and bitcoin mining (which you could still do on a PC if it had not become so computationally demanding). AGC, the world’s largest glass company, relies on a steady stream of simulations for virtual product development. For the City of Chicago there is the Array of Things project: 5,000 sensors on lampposts cannot feed data to a data center for real-time simulation, so the sensors were built to do the calculations themselves and act like a distributed supercomputer. Expect to see many more such implementations of smart “things” in the future. With the help of a Lawrence Livermore National Laboratory supercomputer, researchers found that trucks should use side skirts. Trek Bicycle streamlined its bikes from every direction using time-shared HPC modeling.

As supercomputers gradually move beyond their obsession with pure number crunching, the newest ones have the flexibility to handle AI, analytics, and other increasingly common HPC workloads: big data, data science, visualization, simulation, and modeling.

HPE recently announced a program called GreenLake, which I’ll cover in the next article. I was intrigued by their use of the term HPC. Since acquiring Cray, Inc. in 2019, HPE has become the leader in the class of supercomputers known as exascale. Exascale means the ability to perform one or more quintillion double precision floating point calculations per second (an exaFLOP). These machines – Aurora (1 exaFLOP, 2021), Frontier (1.5 exaFLOP, 2021) and El Capitan (2 exaFLOP, 2023) – are currently being built.

Keep in mind that the price of these monsters is over $500 million, and that does not include the cost of housing them, their massive cooling systems, or a 30–40 MW power supply (and the electricity bills). This is one reason you cannot install them just anywhere: they need a power feed with enough capacity to run a small town. Next-generation supercomputers can be expected to reduce energy and cooling requirements dramatically over the next ten years.

Here is a quote that has been repeated so often it can no longer be attributed:

“An exaFLOP is one quintillion (10^18) double precision floating point operations per second, or 1,000 petaFLOPs. To match what a one-exaFLOP computer system does in just one second, you would have to perform one calculation every second for 31,688,765,000 years.”
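The arithmetic behind the quote is easy to check. A small sketch, assuming a tropical year of 365.2422 days (which is what reproduces the quoted figure); it also covers the 2-petaFLOP comparison made later in the article:

```python
# Verifying the "one calculation every second" comparison.
# Assumes a tropical year of 365.2422 days (~31,556,926 seconds).
SECONDS_PER_YEAR = 365.2422 * 24 * 3600

def flops_second_in_years(flops):
    """Years of one calculation per second to match 1 second of `flops`."""
    return flops / SECONDS_PER_YEAR

exa = flops_second_in_years(10**18)          # a 1-exaFLOP machine
peta2 = flops_second_in_years(2 * 10**15)    # a 2-petaFLOP machine

print(f"{exa:,.0f} years")    # ≈ 31,688,765,000 years
print(f"{peta2:,.0f} years")  # ≈ 63,377,530 years
```

The ratio between the two is exactly 500, since an exaFLOP is 500 times 2 petaFLOPs.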

All of them were meant to replace the two current speedsters: Summit (200 petaFLOPs, 2018) and Sierra (125 petaFLOPs, 2018) from IBM. So until the HPE/Cray machines launch, Summit and Sierra would continue to lead. Or so we thought.

Fujitsu surprised everyone in 2020 with its Fugaku machine, running at an amazing ~475 petaFLOPs and taking first place. Rumor has it that this is just the beginning for Fugaku, because everything was developed from scratch: chips, interconnects and even software. Then HPE/Cray announced that they would deliver a 500-petaFLOP computer to Finland in 2021. In theory, that would put it in first place, unless Aurora or Frontier come online first.

TOP500 lists the 500 fastest supercomputers in the world. Five hundred. That is not a typo. My supercomputing alma mater, Sandia National Labs, actually holds spot number 486 (they have other machines too), and just to get on the list, a machine has to do >2 petaFLOPs. By comparison, even the 486th fastest supercomputer in the world can do in one second what would take you, at one calculation every second, 63,377,530 years! When I worked on the design of ASCI Red in 1997, we created the first teraFLOP computer. It is a million times slower than the coming exascale machines.

All computers are now supercomputers

It’s amazing that the supercomputer was essentially invented by one person. Seymour Cray founded Cray Research in 1972 and produced the Cray-1 supercomputer. From 1976 to 1982 it was the fastest supercomputer in the world. It measured 8½ feet wide and 6½ feet high and contained 60 miles of wire. It was pretty good. By comparison, today’s exascale monsters take up the space of two soccer fields and run several billion times faster.

Its first customer was Los Alamos National Laboratory. In 1993, Cray released its first massively parallel supercomputer, the T3D. Supercomputers really took off when they switched to large arrays of identical servers built on multi-core chips.

So, in a sense, all computers are now supercomputers.

Sadly, Cray died in a car accident in 1996, and the company was sold to Silicon Graphics, which later merged it with the Tera Computer Company in 2000. That same year, Tera renamed itself Cray, Inc. According to analysts, HPE acquired Cray, Inc. to release the HPE Cray EX Shasta supercomputer (HPE used Cray technology), built for exascale-era workloads as a seamless fusion of the two companies’ technologies. It supports converged workloads and erases the distinction between supercomputers and clusters, combining HPC and artificial intelligence workloads. Here’s a picture of Seymour Cray with his Cray-1 supercomputer:

Seymour Cray with Cray-1 Supercomputer

The centerpiece of its design is the Slingshot™ interconnect. All three of the first US exascale supercomputers are Shasta systems. For now, HPE/Cray is poised to provide three (or more) of the four fastest supercomputers in the world (in fact, Frontier was a Cray project before HPE acquired Cray, but they blended their technologies smoothly). Next time, I’ll expand on GreenLake and how HPE brings HPC capabilities to organizations of all sizes.

One question: do they process data differently from a commercial MPP setup?

What doesn’t change is that using an HPC machine requires the programmer to think completely differently about how to solve problems. Today’s supercomputers are somewhat analogous to the MPP clusters behind commercial databases such as Oracle, Teradata, Vertica, IBM and others. Both use a “shared-nothing” MPP setup in which each server is independent except for the network: each server has its own processors, memory, sometimes storage, and its own copy of the operating system. The difference is that today’s supercomputers are much larger. IBM’s Sierra combines commercial CPUs and Nvidia GPUs in 4,320 nodes with 190,080 cores in total, 256 GB of memory per CPU and 64 GB per GPU. Deployed commercial databases reach only a fraction of that scale.

A commercial MPP in a Dell chassis with no more than 3,000 cores cannot do 2.56 quadrillion double precision floating point calculations per second. But the real difference is in what they do. An MPP database can handle hundreds or thousands of queries per minute and perform optimization, load balancing and workload management; the individual queries are simple compared to modeling climate change. A supercomputer cannot handle that kind of concurrency; instead, each of its programs can involve billions of calculations.
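The shared-nothing scatter/gather pattern described above can be illustrated with a toy, single-process model. The `Node` class, the round-robin partitioning and the query function are all invented for illustration; real MPP databases add query optimizers, load balancing and workload management on top:

```python
# Toy model of a shared-nothing MPP query: each "node" owns a private data
# partition and answers independently; only the small partial results cross
# the "network" (here, an ordinary function call) to be combined.

class Node:
    def __init__(self, rows):
        self.rows = rows                      # this node's private partition

    def local_sum(self, column):
        """Each node aggregates only its own rows."""
        return sum(r[column] for r in self.rows)

def scatter(rows, n_nodes):
    """Partition rows across nodes (round-robin for simplicity)."""
    parts = [[] for _ in range(n_nodes)]
    for i, row in enumerate(rows):
        parts[i % n_nodes].append(row)
    return [Node(p) for p in parts]

def query_sum(nodes, column):
    """Gather step: combine each node's partial aggregate."""
    return sum(node.local_sum(column) for node in nodes)

data = [{"amount": i} for i in range(1, 101)]   # rows with amounts 1..100
cluster = scatter(data, n_nodes=4)
print(query_sum(cluster, "amount"))  # → 5050
```

Because no node ever touches another node's partition, adding nodes scales the work linearly, which is exactly the property both MPP databases and supercomputers exploit.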

Currently, supercomputer programming is mostly done in Fortran, C or C++. Unlike operational, transactional and analytical programs, these “codes,” as they are called, are relatively simple; the tricky part is the configuration.

Supercomputers in the cloud

With the exception of air-gapped installations, where the entire machine is disconnected from the outside world, most supercomputers are shared, in much the same way as the cloud. Hundreds of users around the world can use them, but not interactively: programs run as batch jobs and are queued under grants funded by research organizations. Grants amount to thousands or millions of processor hours and are closed to the public. You pay per CPU-hour multiplied by the cost of the queue (determined by the queue’s specifications and priority). If you run a program on 11 processors for 1 hour, you consume eleven service units multiplied by the cost of the queue.
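The service-unit accounting described above is just multiplication. A small sketch; the function names and the queue rate are made up for illustration:

```python
# Sketch of HPC grant accounting: service units (SUs) consumed are
# processor count x wall-clock hours, billed at a per-queue rate.

def service_units(processors, hours):
    """SUs consumed by a job: one SU per processor per hour."""
    return processors * hours

def job_cost(processors, hours, queue_rate):
    """SUs charged: raw SUs scaled by the queue's rate (priority-dependent)."""
    return service_units(processors, hours) * queue_rate

# The article's example: 11 processors for 1 hour = 11 service units.
print(service_units(11, 1))           # → 11
print(job_cost(11, 1, queue_rate=2))  # → 22 on a hypothetical 2x-priority queue
```

A high-priority queue simply carries a larger `queue_rate`, so the same job burns through a grant faster.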

Cloud providers support a variety of architectures, so it is possible they could offer HPC with actual supercomputer time. The benefits of the cloud: pay-as-you-go rates, distribution and multi-tenancy. For the next article, I’ll try to sort this out with HPE. As far as I understand, petascale and exascale supercomputing is not currently part of GreenLake, and the program is broader than just HPC.


