How compression works in an object-oriented memory architecture

A team of engineers from MIT has developed an object-oriented memory hierarchy for working with data more efficiently. This article explains how it works.


Image: Pxhere / PD

The performance of modern CPUs has grown far faster than memory access latency has fallen; year over year, the gap between the two can reach a factor of ten (PDF, p. 3). The result is a bottleneck that leaves available resources underutilized and slows down data processing.

Part of the performance loss comes from so-called decompression latency: in some cases, the preparatory decompression of data can take up to 64 processor cycles.

For comparison, floating-point addition and multiplication take no more than ten cycles. The underlying problem is that memory works with fixed-size data blocks, while applications operate on objects that contain different types of data and vary in size. To address this mismatch, MIT engineers developed an object-oriented memory hierarchy that optimizes data processing.

How the technology works

The solution rests on three technologies: Hotpads, Zippads, and the COCO compression algorithm.

Hotpads is a software-managed hierarchy of scratchpad memories. These scratchpads are called pads, and there are three of them, L1 through L3. They hold objects of different sizes, metadata, and arrays of pointers.

In essence, the architecture is a system of caches, but one tailored to objects. The pad level on which an object resides depends on how often it is used. When one of the levels fills up, the system triggers a mechanism similar to the garbage collectors in Java or Go: it determines which objects are accessed less often than others and automatically moves them between levels.
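The eviction idea above can be sketched in a few lines of Python. This is a toy model, not MIT's implementation: the class names, capacities, and the access-count heuristic are all assumptions made for illustration.

```python
from collections import OrderedDict

class Pad:
    """One level of the hierarchy: a fixed-capacity store of variable-size objects."""
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.objects = OrderedDict()  # obj_id -> [size, access_count]

class PadHierarchy:
    """Hypothetical sketch of Hotpads-style eviction: when a level fills up,
    the least-used object is demoted one level down (L1 -> L2 -> L3)."""
    def __init__(self, capacities=(256, 1024, 4096)):
        self.levels = [Pad(c) for c in capacities]

    def put(self, level, obj_id, size):
        pad = self.levels[level]
        while pad.used + size > pad.capacity:
            self._evict(level)
        pad.objects[obj_id] = [size, 0]
        pad.used += size

    def access(self, obj_id):
        """Return the level holding the object and count the access."""
        for level, pad in enumerate(self.levels):
            if obj_id in pad.objects:
                pad.objects[obj_id][1] += 1
                return level
        return None

    def _evict(self, level):
        pad = self.levels[level]
        # Pick the least-accessed object, like a small garbage-collection pass
        victim = min(pad.objects, key=lambda k: pad.objects[k][1])
        size, _ = pad.objects.pop(victim)
        pad.used -= size
        if level + 1 < len(self.levels):
            self.put(level + 1, victim, size)  # demote to the next level
```

A real implementation runs in hardware and tracks recency rather than a simple counter, but the movement of cold objects toward lower levels follows the same pattern.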

Zippads builds on Hotpads: it compresses and decompresses data as it enters or leaves the last two levels of the hierarchy, the L3 pad and main memory. In the first and second pads, data is stored unmodified.

Zippads compresses whole objects of up to 128 bytes. Larger objects are split into parts, which are then placed in different regions of memory. According to the developers, this approach increases the share of effectively used memory.
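The size-based policy can be illustrated with a short sketch. The constant names and chunk size are assumptions, and `zlib` merely stands in for the hardware compressor:

```python
import zlib

SMALL_OBJECT_LIMIT = 128  # bytes; per the article, only objects up to this
                          # size are compressed as a single unit
CHUNK_SIZE = 128          # hypothetical chunk size for splitting large objects

def store(obj: bytes):
    """Sketch of the Zippads policy: small objects are compressed whole,
    larger ones are split into chunks that can be placed in different
    regions of memory."""
    if len(obj) <= SMALL_OBJECT_LIMIT:
        return [zlib.compress(obj)]
    # Split the large object into fixed-size chunks, compressed independently
    return [zlib.compress(obj[i:i + CHUNK_SIZE])
            for i in range(0, len(obj), CHUNK_SIZE)]

def load(chunks):
    """Reassemble an object from its stored chunks."""
    return b"".join(zlib.decompress(c) for c in chunks)
```

Compressing chunks independently means a large object does not need one contiguous region of memory, which is what improves utilization.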

Objects are compressed with the COCO (Cross-Object COmpression) algorithm, described below, although the system can also work with Base-Delta-Immediate or FPC. COCO is a form of differential compression: it compares an object against a "base" object and removes the duplicate bits — see the diagram below:

According to MIT engineers, their object-oriented memory hierarchy performs 17% better than classical approaches. Its structure is much closer to the architecture of modern applications, which gives the new method real potential.

The technology is expected to be adopted first by companies working with big data and machine learning algorithms. Another promising area is cloud platforms: IaaS providers would be able to work more efficiently with virtualization, storage systems, and computing resources.

Our additional resources and sources:

"How we build IaaS": materials on the work of 1cloud

Evolution of cloud architecture 1cloud
1cloud object storage service

Potential HTTPS attacks and how to protect against them.
How Continuous Delivery and Continuous Integration are similar, and how they differ
How to protect the server on the Internet: 1cloud experience
