History of the Open Compute Project: Storage Systems

After servers, storage systems are the second most important building block of a modern data center. Within the Open Compute Project, data storage follows the general principle of minimizing the functionality implemented in hardware and relies on the Software Defined Everything approach. As a result, JBOD (Just a Bunch Of Disks) and JBOF (Just a Bunch Of Flash) arrays have become widespread. These arrays attach directly to a server and expose their disks to its operating system over the SAS or PCIe bus; they have no controller of their own for managing the storage.

The growing use of such JBOD/JBOF disk arrays has been driven by the widespread adoption of software-defined storage and hyperconverged systems. By expanding the disk capacity attached to a single server, it is possible to strike an optimal balance between computing power and storage volume, reach performance comparable to traditional storage systems, and achieve the best cost per terabyte, all while maintaining the required level of reliability.

The first OCP disk arrays appeared in 2013 and have undergone several generations of changes since then.

2013

Knox JBOD

Knox (also known as Open Vault) was released in 2013. It is a JBOD disk enclosure consisting of two trays of fifteen 3.5" drives each. Like all OCP equipment, it is serviced from the front, with the exception of the fans, which can only be replaced from the hot aisle behind the rack. To access the disks inside a tray, you release the latches and pull the tray toward you, exposing the individual disk slots. For ease of maintenance, each tray can also be tilted downward, a feature that is especially useful when the enclosure is mounted near the top of a rack. Each tray is equipped with a card carrying two SAS expanders and a control card for the six installed fans. The expanders provide a fault-tolerant path to each disk.
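Because every disk is reachable through both expanders, the host sees each drive twice, once per SAS path. The sketch below is a minimal illustration (assuming a Linux head server and the standard sysfs attribute /sys/block/sd*/device/wwid, nothing Knox-specific) of grouping the visible block devices by WWID to reveal those dual paths; in production the duplicate paths would normally be merged by dm-multipath.

```python
# Minimal sketch: group SCSI disks by WWID to show the two SAS paths a
# dual-expander JBOD such as Knox presents for each physical drive.
# Assumes a Linux host with the standard sysfs layout; in practice the
# duplicate paths are merged by dm-multipath rather than inspected by hand.
from collections import defaultdict
from pathlib import Path

paths_by_wwid = defaultdict(list)

for dev in sorted(Path("/sys/block").glob("sd*")):
    wwid_file = dev / "device" / "wwid"
    if wwid_file.exists():
        paths_by_wwid[wwid_file.read_text().strip()].append(dev.name)

for wwid, devices in paths_by_wwid.items():
    # With both expander paths cabled, each drive should appear twice.
    print(f"{wwid}: {', '.join(devices)} ({len(devices)} path(s))")
```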

Knox (Open Vault)

2015

Honey Badger

Honey Badger was effectively a full-fledged storage system. It is controlled by a Panther+ microserver based on the Intel Avoton SoC (C2350 or C2750) with four DDR3 SODIMM slots and mSATA / M.2 SATA3 interfaces. The microserver plugs into the Honey Badger main board, which in turn carries a SAS controller, a SAS expander, an AST1250 BMC, two miniSAS connectors, and a slot for an OCP 10GbE mezzanine card.

Panther+ microserver and Honey Badger

2016

HatTrick

HatTrick is a JBOD disk array developed by Jabil (one of the world's largest ODMs) as part of its OCP product line under the StackVelocity brand. The chassis is a three-device shelf for Open Rack v1 and v2. The array has a 12G SAS interface, accommodates up to fifteen 3.5" drives, and supports disk zoning. It also carries an OCP debug connector. This was the first product to use the successful construction shared with the OCP Leopard and Tioga Pass servers.

HatTrick
Zoning scheme

2018

Lightning JBOF

Lightning was the first OCP-format JBOF (Just a Bunch Of Flash) device working with NVMe PCIe SSDs. Knox was taken as the basis, since that form factor was already well understood and reusing it shortened time to market. Two new boards appeared in the design:

PCIe retimer card – an x16 PCIe Gen3 card installed in the head server. The retimer card regenerates PCIe bus signals carried over mini-SAS HD cables, allowing devices to be located up to 1.5 meters outside the server.

PCIe Expansion Board (PEB) – this board houses the PCIe switch and the BMC. One PEB is installed in each SSD tray and replaces the SAS Expander Board (SEB) used in Knox. This allows a common breakout board to be used for both trays and makes it easy to design new or different versions (for example, with next-generation switches) without changing the rest of the infrastructure. Each PEB has up to 32 PCIe uplink lanes.
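On the head-server side, the SSDs in such a JBOF simply show up as additional NVMe controllers behind the retimer and the PCIe switch. The sketch below (assuming a Linux head server and the standard /sys/class/nvme sysfs layout; nothing here is specific to Lightning) lists the visible NVMe controllers with their PCI addresses.

```python
# Minimal sketch: list the NVMe controllers visible to a Linux head server,
# e.g. after cabling a JBOF such as Lightning through the retimer card.
# Assumes the standard /sys/class/nvme sysfs layout; not Lightning-specific.
from pathlib import Path

def read_attr(ctrl: Path, name: str) -> str:
    f = ctrl / name
    return f.read_text().strip() if f.exists() else "n/a"

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = read_attr(ctrl, "model")
    serial = read_attr(ctrl, "serial")
    pci_addr = read_attr(ctrl, "address")  # PCI address behind the switch
    print(f"{ctrl.name}: {model} (SN {serial}) at {pci_addr}")
```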

Lightning JBOF

Bryce Canyon Storage System

Bryce Canyon is designed for high-density cold archive storage and packs 72 3.5" drives into 4 OU. The array is available in three different models.

The first model operates as two independent NAS servers thanks to the two Mono Lake SoC microserver controllers installed in it. The I/O module used in this configuration provides two PCIe M.2 slots with four PCIe Gen3 lanes in addition to the OCP mezzanine NIC. Each server gets 36 of the disks.

For storage systems with lower performance requirements but greater capacity, there is a version of the array with a single Mono Lake microserver. In this configuration, all 72 drives are available to one system, accessed over a 25GbE interface.

The third version of the array is a simple JBOD expansion shelf for 72 3.5" disks, presented externally through two 4x12G mini-SAS HD ports. On the head-server side it is paired with either a RAID controller (LSI MegaRAID 9480-8e) or a non-RAID HBA (LSI SAS 9300-8e).

All versions support hot-swappable drives, and every element in the system is secured with latches or thumbscrews. In the central part of the array, under a cover, the power cables run in a movable sleeve, which allows the array to be pulled out of the rack for maintenance without disconnecting power. As with the earlier arrays, the fans can only be replaced from the hot aisle behind the rack.

Bryce Canyon

2019

Crystal Lake

In 2019, MiTAC, known for its Mio and Tyan brands, introduced the Crystal Lake all-flash NVMe disk array in a form factor similar to the modern OCP Tioga Pass servers. The chassis occupies 2 OU and holds three arrays of 16 NVMe U.2 drives each, so one Crystal Lake chassis carries a total of 48 drives. At the time of writing, this is the highest U.2 drive density in two rack units.

The connection to the head server is made through the same kind of PCIe retimer card. The form factor of the array matches the modern OCP Tioga Pass servers, which is very convenient for assembling high-density solutions in a rack, since the array can be partitioned into two or four parts. The current version supports PCIe Gen3, and plans to implement PCIe Gen4 have been announced.

A power rail with sliding contacts runs inside the chassis, which allows a drive enclosure to be pulled out without powering down for drive replacement, maintenance, or reading the LED indicators. When the array is opened, an increased cooling mode starts automatically to compensate for the changed airflow: the fans are set to maximum speed. Unlike all previous disk array models, the fans themselves are replaced from the front of the rack, from the cold aisle. Management and monitoring are provided by OpenBMC with a convenient web interface, and Redfish support is also implemented.
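Since the BMC speaks Redfish, basic monitoring can be scripted against the standard Redfish resources. The sketch below is a hedged example (the BMC address and credentials are placeholders, and the exact set of chassis and thermal resources depends on the particular OpenBMC build) that walks the chassis collection and prints temperature readings.

```python
# Minimal sketch: read temperature sensors from a Redfish-capable BMC such
# as the OpenBMC on Crystal Lake. The address and credentials are
# placeholders; the available resources depend on the firmware build.
import requests

BMC = "https://192.0.2.10"      # hypothetical BMC address
AUTH = ("admin", "password")    # hypothetical credentials

session = requests.Session()
session.auth = AUTH
session.verify = False          # BMCs commonly use self-signed certificates

def get(path: str) -> dict:
    return session.get(f"{BMC}{path}", timeout=10).json()

# Walk the standard Redfish chassis collection and its Thermal resources.
for member in get("/redfish/v1/Chassis").get("Members", []):
    chassis_path = member["@odata.id"]
    thermal = get(f"{chassis_path}/Thermal")
    for sensor in thermal.get("Temperatures", []):
        print(f"{chassis_path}: {sensor.get('Name')} = "
              f"{sensor.get('ReadingCelsius')} °C")
```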

Crystal Lake

Stay tuned…

Back to the history of OCP servers
