A brief history of servers: from mainframes to modern times
Servers are fascinating computing systems that have changed a great deal over time. Today the term “server” usually means a system that provides connected clients with access to local or network resources: files, data, databases, applications, and so on.
The history of servers is hard to present as a lineage of devices that gradually change in appearance while keeping the same function, because the purpose and functions of servers themselves have changed over time. The multimeters we wrote about earlier changed externally, but their functions stayed the same; since the 1970s and 1980s they have remained almost unchanged, with few exceptions. Servers are a different matter entirely. Under the cut is their brief history; this article can be considered the groundwork for a whole series of longreads about servers, mainframes, their history, and other interesting topics.
How it all began
If we talk about the familiar client-server paradigm, its foundations were laid in the mainframe era. There were no personal computers as we know them yet; data processing was carried out on powerful computers called mainframes. Operators had only terminals that provided access to the data. A typical terminal was a simple alphanumeric display with a keyboard, connected to the mainframe.
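The division of labor described above is still the essence of the client-server model: the server owns the data and does the processing, while the client (like a mainframe terminal) only sends requests and displays replies. A minimal sketch in Python, with purely illustrative names, might look like this:

```python
# A toy client-server exchange over a local TCP socket.
# The "server" answers each request; the "client" is thin,
# much like a terminal attached to a mainframe.
import socket
import threading

def run_server(host="127.0.0.1", port=0):
    # Bind to an OS-chosen free port and serve exactly one request.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            # The server alone decides how the data is processed.
            conn.sendall(f"server reply to: {request}".encode())
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()  # the actual (host, port) pair

def client_request(addr, text):
    # The client only sends a request and shows the answer,
    # knowing nothing about how storage or processing is organized.
    with socket.create_connection(addr) as cli:
        cli.sendall(text.encode())
        return cli.recv(1024).decode()
```

For example, `client_request(run_server(), "list files")` returns the server's reply string; the client never touches the data itself.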
Mainframes made it possible for many users to work with data at the same time: not 2-3 users, but thousands of concurrently supported sessions. Because of these capabilities, such systems were always very expensive, and only large, wealthy companies, academic organizations, and the governments of some countries could afford them.
Working with a mainframe did not require programming knowledge from the end user, so the managers of the organizations running these systems were happy: the required tasks got done without much trouble. Through a terminal, the user gained access to the necessary data and worked with it without having to think about how its storage and processing were actually organized.
When a mainframe became obsolete (its estimated lifespan was more than 15 years), it was replaced with a new one whenever possible, and the old machine was kept as a spare in case something went wrong with the new system. The problem with mainframes was that they were produced by several competing companies and were incompatible with each other.
Migrating to a new mainframe from another manufacturer was possible, but it was a complex and lengthy process. Not only was the hardware incompatible, but so was the software. The communication protocols were incompatible as well, since standardization did not yet exist, and where it did, it was not as mature as it is now.
In 1964, IBM introduced the System/360, the first model in the series that defined the classic mainframe. About $5 billion was spent on developing and implementing the project, a huge sum at the time, comparable to the funding of major NASA space programs. The bet paid off: IBM's mainframes became very popular, and the corporation's revenue more than doubled, from $3.6 billion in 1965 to $8.3 billion in 1971.
Other companies soon began producing mainframes of their own.
Some models became popular; others ceased production shortly after launch. The great thing about mainframes was that they were unified systems. Previously, computers had been built and adapted for each individual customer; with the advent of mainframes came families of computers with a single compatible architecture, although each manufacturer's architecture was usually proprietary.
Pictured: HP ProLiant 380 G5
IBM also adopted the principle of backward compatibility: almost all software for older models runs on newer ones, and System/360 programs still work, with certain reservations, on modern systems.
Pictured: HP NetServer LH
Mainframes let businesses operate faster and academics compute faster, which helped drive progress. Over time, mainframes became differentiated by the tasks they performed. Where a corporation's mainframe had once been a universal soldier, different departments later began using machines for different tasks, at least in companies that could afford such a luxury. Since a department has fewer people than the whole corporation, its mainframe could be less powerful. This is how “small” mainframes appeared, which some companies began to produce; such systems were several times cheaper than full-size mainframes, allowing companies to save money.
The Intel 4004 microprocessor and the emergence of the IBM PC
Mainframes would probably be far more widespread today were it not for the Intel 4004 processor and the later appearance of the personal computer. PCs evolved quickly, becoming ever more powerful. Business, government, and scientific organizations gradually moved from centralized data processing to distributed processing. PCs began to displace terminals, and mainframes grew fewer. Ordinary PCs began to play the roles of both servers and clients.
For example, where FTP and Telnet had once sufficed in most cases, the growth of the global network called for telecommunication servers: web servers, FTP servers, domain name servers, and mail servers. File servers gradually lost importance and gave way to database servers.
It is fair to say that the server industry received its real impetus in the 1970s. Alongside microprocessors, several technologies matured almost simultaneously:
- High-capacity memory (DRAM). Hewlett-Packard introduced the HP 9800 series of personal computers shortly after this memory hit the market. Two years later, Intel's PMOS DRAM IC became the best-selling chip on the market.
- SCSI – this technology appeared in 1979, although standardization was delayed until 1986. For a long time, SCSI was the default technology for storage I/O interfaces on all types of network servers. We discussed the emergence of disk interfaces in more detail in an earlier article.
And two more factors that appeared later but also accelerated the development of the server industry:
- RAID – the significance of this technology is hard to overstate; it was certainly a breakthrough. Various RAID configurations remain widely used in today's server world, and in conventional consumer PCs as well. The advantages of RAID are improved data integrity, fault tolerance, and throughput.
- Development of standards – case sizes, racks, server form factors, and so on. Once they appeared, the 1U, 2U, and 4U standards gained widespread recognition.
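The fault tolerance that RAID brings rests on a simple idea, which a toy sketch can illustrate. In a RAID 5-style layout, a parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. This is only a demonstration of the principle on byte strings, not an implementation of a real RAID driver:

```python
# Toy illustration of RAID 5-style parity: the parity block is the
# byte-wise XOR of all data blocks, so losing any one block is
# recoverable by XOR-ing the parity with the remaining blocks.
from functools import reduce

def parity(blocks):
    # XOR equal-length blocks byte-by-byte to produce the parity block.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    # XOR is its own inverse, so the missing block is simply the XOR
    # of the parity block with every surviving data block.
    return parity(surviving_blocks + [parity_block])
```

For example, with blocks `b"AAAA"`, `b"BBBB"`, `b"CCCC"`, losing the second block and calling `rebuild([b"AAAA", b"CCCC"], p)` with the stored parity `p` recovers `b"BBBB"`. This is why a RAID 5 array survives the failure of any single disk.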
Local and global networks developed very actively, personal computers became more and more powerful, and the tasks of servers were also differentiated.
Around the mid-1980s, companies began to move away from using ordinary personal computers as servers. Specialized equipment appeared again, though not mainframes (those are still in use today, in roughly 25,000 organizations around the world) but dedicated servers. On August 6, 1991, the first web server went public. A year later, there were 26 such servers on the global network, working autonomously without requiring the constant presence of a person.
Unlike in the mainframe era, centralized data processing no longer entails extra costs, simply because hardware and software from many vendors are interchangeable and the market offers a huge variety of solutions. The server is now an extremely important, critical element of infrastructure.