From a legendary past to the present and beyond: the history of System X


In the power race at the heart of the supercomputing chronicle, project histories are largely forgotten. Leaders quickly push one another aside, technologies become obsolete, and only the transitions to a new order of magnitude stick in memory. Today we would like to correct this injustice a little and tell the story of one unique supercomputer whose history began eighteen years ago. It was called System X, and it was assembled, one might say, from off-the-shelf parts, with unexpectedly successful results.

The story of this unusual supercomputer began in the spring of 2003, and by the end of autumn it had already reached its climax. At the time, Virginia Polytechnic Institute and State University (better known as Virginia Tech) had set itself the goal of becoming one of the top thirty research universities in the United States. The local team could be helped toward that goal, first, by additional computing power and, second, by a bright, memorable project that could attract widespread attention. Looking ahead, System X lived up to expectations and gave its creators both.

The name of the computing complex was not chosen merely for its ring – the authors put a double meaning into it. First, the Roman numeral X alluded to the ten-teraflops milestone, which the team expected to be the first among university-based research centers to reach. Second, the same X, read as a letter, echoed the name of the Mac OS X operating system, unobtrusively underlining the project's main distinguishing feature – the use of standard Apple processors to assemble the supercomputer's nodes.

According to the researchers, they were confirmed in the idea that ordinary consumer computers could serve as the "stuffing" of an advanced machine by their high assessment of the Apple Power Mac G5's technical characteristics: "The G5 was ideal for our system in terms of architecture: two math coprocessors for double-precision computing, excellent memory bandwidth, and an I/O architecture that allows the machines to be connected into a single supercomputer."

The supercomputer consisted of 1,100 nodes; each node had two single-core 64-bit processors with a clock speed of 2.0 GHz, giving the system 2,200 processors (and as many cores). A little later the team upgraded the system, switching to the Apple Xserve G5 platform, and the final per-node specifications became:

  • architecture: 64-bit
  • clock frequency: 2.3 GHz
  • cores per node: 2
  • processors per node: 2
  • RAM: 4 GB
  • hard disk: 80 GB

In total, the supercomputer had 4.4 terabytes of RAM and 88 terabytes of local disk storage (HDD); in addition, an external storage array of 53 terabytes was connected to the system. InfiniBand, a novelty at the time, was used for communication between the nodes – the authors attributed part of the project's success to its high bandwidth (20 Gbps per node) combined with low latency (under 8 microseconds). Gigabit Ethernet served as an auxiliary network for managing the system and launching jobs.
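
A quick back-of-the-envelope check (a minimal Python sketch, using only the per-node figures from the list above) shows how these totals follow directly from the node count:

```python
# Back-of-the-envelope check of the aggregate System X figures quoted above.
NODES = 1100

ram_per_node_gb = 4        # GB of RAM per Xserve G5 node
disk_per_node_gb = 80      # GB of local hard disk per node

total_ram_tb = NODES * ram_per_node_gb / 1000    # ~4.4 TB
total_disk_tb = NODES * disk_per_node_gb / 1000  # ~88 TB

print(f"Total RAM:     {total_ram_tb:.1f} TB")
print(f"Total storage: {total_disk_tb:.1f} TB")
```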

By the standards of the supercomputing world, the creation of System X unfolded very, very rapidly. The overall concept was defined by March 2003, and the design was completed in mid-summer. In July, university researchers and students, along with recruited volunteers, began the installation and assembly. A couple of months went into preparatory work: installing racks, laying all the necessary lines for the cooling system (a hybrid design, also an innovative solution at the time), power supply, air conditioning, and so on. The lab staff left a fairly detailed photo chronicle of the events, from which you can get an idea of what it takes, and what resources are required, to house a supercomputer inside a normal, functioning institution. The processors and enclosures, which arrived at the university only in September, were assembled and connected into a single system in less than three weeks of work (extremely stressful weeks, according to the participants).

The rush paid off: recognition came to the project's authors by the end of the year, when they made it into the November edition of top500.org, the world ranking of supercomputers. System X's performance was evaluated with Jack Dongarra's HPL (High-Performance Linpack) benchmark and looked impressive against competitors with far more solid reputations and funding. The researchers at Virginia Tech succeeded in pushing the supercomputer above 10 trillion operations per second: System X delivered 10.28 teraflops against a theoretical peak of 20.24 teraflops.
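
The peak figure is easy to reconstruct from the specifications above. A minimal sketch, assuming each of the 2,200 processors retires four double-precision floating-point operations per cycle (consistent with the "two math coprocessors" mentioned in the quote earlier):

```python
# Rough reconstruction of the peak-performance figure quoted for System X.
# Assumption: 4 double-precision FLOPs per cycle per processor
# (two FPUs, each capable of a fused multiply-add).
processors = 2200
clock_hz = 2.3e9
flops_per_cycle = 4

peak_tflops = processors * clock_hz * flops_per_cycle / 1e12
print(f"Theoretical peak: {peak_tflops:.2f} TFLOPS")        # ~20.24

# Sustained HPL result vs. theoretical peak
hpl_tflops = 10.28
print(f"HPL efficiency:   {hpl_tflops / peak_tflops:.1%}")   # ~50.8%
```

In other words, the sustained Linpack result corresponds to roughly half of the machine's theoretical peak, a typical ratio for clusters of that era.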

System X debuted in the world ranking in third place. Even the system's creators did not expect such a result: they believed that in the best case they would land somewhere in the middle of the top ten. They were delighted not only by the sensation they had caused but also by another honorary title they were awarded: the most powerful, and cheapest, supercomputer built under ordinary conditions.

The authors hoped the project would spur the development of a new branch of "budget" supercomputers. Because ready-made machines were used as building blocks, System X cost the university relatively little: total expenses came to $5.2 million. For comparison, the Los Alamos National Laboratory complex, which sat one line higher in the ranking, was about 30% more powerful but roughly $210 million (41 times) more expensive.
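
To put the gap into perspective, here is a cost-per-teraflop comparison using only the figures quoted in this article (the Los Alamos numbers below are derived from the "30% more powerful" and "$210 million more expensive" statements, not from that system's official specifications):

```python
# Cost-per-teraflop comparison based solely on the figures quoted above.
systemx_cost_musd, systemx_tflops = 5.2, 10.28

# Los Alamos system, as described here: ~30% more powerful, ~$210M more expensive.
lanl_cost_musd = systemx_cost_musd + 210
lanl_tflops = systemx_tflops * 1.3

print(f"System X:   ${systemx_cost_musd / systemx_tflops:.2f}M per TFLOPS")  # ~$0.51M
print(f"Los Alamos: ${lanl_cost_musd / lanl_tflops:.2f}M per TFLOPS")        # ~$16M
```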

"This system represents a big step forward in terms of performance, cost, and ease of management of supercomputers. It showed that virtually anyone with $5.2 million can build a computing machine on the scale required for high-performance computing research."

What happened next? As for System X itself, it continued and ended its career with dignity. In 2004 the team released a revised version, replacing the servers and fixing a problem with crashes caused by cosmic rays, to which the cluster was especially vulnerable because of its large number of memory chips. The updated supercomputer managed to stay in the top ten of top500.org, taking seventh place. By the end of the same year another update (costing about $600,000) was released, which allowed the team to raise performance to 12.25 teraflops and reach fourteenth place in the 2005 ranking.

Over the next few years the supercomputer's position in the overall ranking gradually slipped (to forty-seventh in 2006 and two hundred eightieth in 2008), but it remained one of the most powerful machines at the university. In 2012, System X was decommissioned.

As for a revolution in the financial and logistical accessibility of supercomputers, it can hardly be said to have taken place. System X had several successors: by 2005 the Xseed system from Bowie State University in Maryland appeared, reaching one hundred eighty-eighth place in the world ranking, and Virginia Tech itself assembled the System G computing complex from more than three hundred Mac Pro computers using the proven scheme. On the whole, though, supercomputers built from off-the-shelf machines never became a mass phenomenon – perhaps because of the accelerating pace of technology, or because such undertakings are generally unprofitable under ordinary conditions. Still, one must not forget that the precedent exists – perhaps someday another enterprising group of enthusiasts will come along.

Today the history of System X is no longer a role model, but it still attracts the community's interest. This year, for example, saw the release of the Performance Index 64 application from EcoComputers, JSC, which not only measures a machine's power but also lets you evaluate it in historical perspective. The main purpose of Performance Index 64 is to calculate the performance of various 64-bit Mac configurations based on a number of parameters. Among those parameters is performance itself, measured with the same HPL test used to compile the top500.org supercomputer ranking. The user receives the result not only in gigaflops but also in special conventional "G5" units – this figure shows how much more efficient the machine is than the base model of the G5 computer, which was released in 2003 and became the foundation of the world's third most powerful supercomputer of the time.
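
To illustrate the idea of a "G5 unit", here is a purely hypothetical sketch of such a normalization; the baseline value and the function name are placeholders for illustration, not the figures or API actually used by Performance Index 64:

```python
# Hypothetical sketch of the "G5 unit" idea described above: a machine's
# measured HPL result is normalized against the score of the baseline
# 2003 Power Mac G5.  BASELINE_GFLOPS is a placeholder value, not the
# number actually used by Performance Index 64.
BASELINE_GFLOPS = 9.0  # assumed HPL score for the base G5, for illustration only

def g5_units(measured_gflops: float) -> float:
    """How many times faster the measured machine is than the baseline G5."""
    return measured_gflops / BASELINE_GFLOPS

print(g5_units(900.0))  # a machine scoring 900 GFLOPS would rate ~100 "G5 units"
```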

In addition, the application allows a similar comparison between the user's system and the base configuration of an Intel-based Mac Pro 7,1. Finally, as a logical conclusion, it can measure the user's machine against the new Mac computers based on the M1 chip (this test is available with a subscription). In this simple way the developer lets users see how far computing capabilities have advanced in just a couple of decades – and, at the same time, ask themselves what our modern "workhorses" are capable of.
