What Old Hard Drive Ads Can Teach Us

Advertisements for old computer hardware, especially hard drives, are a staple of amusing posts on computer forums and the nerdier corners of the Internet.1 For example, a couple of days ago, Glenn Lockwood tweeted this old ad:


At least it isn't an ad for a HAMR drive. That's about $10,000 in today's dollars.

In the early 80s, these drives offered 70 ms seek times, transfer rates of about 900 KB/s, and capacities up to 10 MB. Quaint, isn't it? But ads like this one hide hints of very important trends, trends that explain the design of systems like nothing else can. To see what's going on, let's compare this decrepit 10 MB drive with a modern one. Most consumers no longer buy magnetic disks, so let's throw in an SSD for comparison too.

                XCOMP 10MB   Modern HDD   Change       Modern SSD   Change
Capacity        10 MB        18 TB        1,800,000x   2 TB         200,000x
Latency         70 ms        5 ms         14x          50 μs        1,400x
Bandwidth       900 KB/s     220 MB/s     250x         3,000 MB/s   3,300x
IOPS/GiB (QD1)  1,400        0.01         0.00007x     10           0.007x
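
The last row isn't in the ad; it's derived. At queue depth 1 a device completes roughly one I/O per latency period, so IOPS ≈ 1 / latency, and dividing by capacity in GiB gives IOPS per GiB. A quick Python sketch of that arithmetic, using the table's round numbers (it lands in the same ballpark as the table, which rounds more aggressively):

    # IOPS per GiB at queue depth 1, from the table's latency and capacity.
    GIB = 2 ** 30

    devices = {
        # name: (latency in seconds, capacity in bytes)
        "XCOMP 10MB": (70e-3, 10e6),
        "Modern HDD": (5e-3, 18e12),
        "Modern SSD": (50e-6, 2e12),
    }

    for name, (latency, capacity) in devices.items():
        qd1_iops = 1 / latency              # one I/O per latency period
        per_gib = qd1_iops / (capacity / GIB)
        print(f"{name}: {qd1_iops:,.0f} IOPS, {per_gib:,.3f} IOPS/GiB")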

Give or take.2 Let's start with the magnetic disk: a HUGE increase in capacity, a large increase in throughput, a modest decrease in latency, and a dramatic decrease in random I/O per unit of storage. It may surprise you, but the SSD, despite being much faster, fits the same overall trend on every axis.

This observation is by no means new. Fifteen years ago, the great Jim Gray said that "disk is tape." David Patterson (you know: Turing Award winner, co-inventor of RISC, and so on) wrote an excellent article in 2004, Latency Lags Bandwidth, making the same observation. He wrote:

I am struck by a consistent theme across many technologies: bandwidth improves much more quickly than latency.

and

In the time that bandwidth doubles, latency improves by no more than a factor of 1.2 to 1.4.

That might not sound like a big difference, but remember that we're talking about exponential growth here, and exponential growth does unintuitive things. Compound Patterson's trend, and by the time bandwidth has grown 1000x, latency has improved only 6x to 30x. That's roughly what the table shows: a 250x increase in bandwidth against a 14x drop in latency. Latency improvements lag bandwidth improvements. Bandwidth improvements, in turn, lag capacity.
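
Here's that compounding written out, with ten doublings as a round example:

    # Patterson's rule of thumb, compounded over ten bandwidth doublings.
    doublings = 10
    print(f"bandwidth: {2 ** doublings}x")                                    # 1024x
    print(f"latency:   {1.2 ** doublings:.0f}x to {1.4 ** doublings:.0f}x")   # 6x to 29x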

One way to feel this is to estimate how long it would take to read an entire device with a serial stream of random 4 KB reads. For the 1980s disk, about 3 minutes. For the SSD, about 8 hours. For a modern hard drive, about 10 months. It surprises no one that small random I/O is slow, but not everyone appreciates just how slow it is, and the problem is getting exponentially worse.
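
The same back-of-the-envelope arithmetic, again with the table's round numbers (the exact outputs depend on rounding, but they land in the same ballpark as the figures above):

    # Time to read an entire device with a serial stream of random 4 KiB
    # reads: each read costs one latency period (transfer time ignored).
    devices = {
        # name: (latency in seconds, capacity in bytes)
        "XCOMP 10MB": (70e-3, 10e6),
        "Modern SSD": (50e-6, 2e12),
        "Modern HDD": (5e-3, 18e12),
    }

    def humane(seconds):
        # Pick a readable unit for the result.
        for unit, size in (("months", 30 * 86400), ("hours", 3600), ("minutes", 60)):
            if seconds >= size:
                return f"{seconds / size:,.0f} {unit}"
        return f"{seconds:,.0f} seconds"

    for name, (latency, capacity) in devices.items():
        reads = capacity / 4096
        print(f"{name}: {humane(reads * latency)}")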

Well, so what?

Every serious storage system humans build trades off between latency, bandwidth, and storage cost. For example, RAID 5-style 4+1 erasure coding lets a system survive the loss of one disk. 2x replication survives the same, but at 1.6x the storage cost and 2/5 of the IOPS. Journaling databases, file systems, and file formats all bake in assumptions about storage cost, bandwidth, and random access. When the ratios between these hardware parameters change, such systems have to be redesigned to match the new hardware: yesterday's software and techniques are simply not as effective on today's systems.
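
Here's a sketch of the storage half of that trade-off (a simplified model; the IOPS half depends on workload details such as full-stripe writes versus read-modify-write small writes, so the 2/5 figure isn't derived here):

    # Simplified cost model: 2x replication vs RAID5-style 4+1 erasure coding.
    # Both survive the loss of any one disk; they differ in cost.
    copies = 2            # replication factor
    data, parity = 4, 1   # erasure coding stripe layout

    repl_storage = copies                   # physical bytes per logical byte
    ec_storage = (data + parity) / data     # 1.25x for 4+1

    print(f"replication:  {repl_storage:.2f}x storage, "
          f"{copies} physical writes per logical block")
    print(f"erasure code: {ec_storage:.2f}x storage, "
          f"{(data + parity) / data:.2f} physical writes per logical block "
          f"(full stripe; small writes need read-modify-write)")
    print(f"replication uses {repl_storage / ec_storage:.1f}x more storage")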

Another important aspect is parallelism. I cheated a bit by choosing QD1, a queue depth of one: send an I/O request, wait for it to complete, send the next. Real storage devices can do better when you hand them multiple I/O requests at once. Hard drives get faster through scheduling tricks, serving nearby I/Os first. Operating systems have done this kind of I/O scheduling for a long time, and over the past two decades drives have become smart enough to do it themselves. SSDs, on the other hand, have real internal parallelism, because they aren't constrained by physical read/write heads. With enough concurrent I/O, an SSD can deliver as much as 50 times more performance than at QD1. In the 1980s, I/O parallelism hardly mattered; today it is extremely important.
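
A crude model makes that concrete (illustrative only; the 50-way internal parallelism below is picked to match the "as much as 50 times" figure, not measured from any real device, and real devices saturate more gradually):

    # Crude queue-depth model: a device with n independent internal units
    # completes min(qd, n) I/Os per latency period.
    def iops(latency_s, qd, n):
        return min(qd, n) / latency_s

    SSD_LATENCY = 50e-6   # seconds, from the table
    for qd in (1, 4, 16, 64):
        print(f"QD{qd:>2}: {iops(SSD_LATENCY, qd, n=50):>11,.0f} IOPS")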

There are two things a practicing system designer can take from this. First, pay attention to hardware trends: stay curious, and refresh your internal constants from time to time. With exponential change, a mental model of hardware performance that is only a couple of years out of date can be completely wrong. Second, system designs go stale. Real-world trade-offs shift, for the reasons described here and many others. The data structures and storage strategies described in your favorite textbook probably haven't stood the test of time. The POSIX I/O API most certainly hasn't.

Notes

  1. See, for example, this thread on Reddit, the Unraid forums, this site, and so on. They're everywhere.
  2. These numbers are off the top of my head, but I think they're more or less representative of today's common NVMe SSDs and enterprise-grade magnetic disks.
