“Sembit” computers

Let me clarify right away: the term “sembit” is the author's own coinage. It is short for “semi-bit”, i.e. a “half bit”, a partial bit.

We are used to the bit being the minimum unit of information. However, information can occupy a non-integer number of bits. Take, for example, one finger out of ten. Encoding that with an integer number of bits takes 4 bits, but this is redundant: 4 bits hold 2^4 = 16 options, so 6 options go to waste. This is common practice, using far more capacity than the data requires, because whole bits can only encode a number of options that is a power of two.

In fact, log2(10) ≈ 3.32192809489 bits is enough for one finger out of ten. But in modern bit systems this is simply impossible: the transistor-based bit design permits only an integer number of bits for any operation.
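That arithmetic is easy to check; here is a minimal Python sketch (the numbers come from the example above, nothing else is assumed):

```python
import math

options = 10                          # one finger out of ten
exact_bits = math.log2(options)       # information content: ~3.3219 bits
whole_bits = math.ceil(exact_bits)    # what a bit system must allocate: 4

print(f"exact: {exact_bits:.11f} bits")
print(f"whole: {whole_bits} bits, wasted options: {2 ** whole_bits - options}")
```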

How, then, can a non-integer number of bits be represented? Analog computers. To most people these are the bulky, outdated vacuum-tube machines with punched cards of the 1950s-1960s. Yet analog computers make it possible to work with “sembits”. A simple example would be a conventional analog adder circuit (https://ru.m.wikipedia.org/wiki/Adder) in place of a bit one. That is, a sembit is a “truth” value ranging from 0 to 1.
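As a software stand-in for that adder (a sketch under my own assumptions, not a circuit model): a sembit is just a value in [0, 1], and a summing circuit combines such signals directly:

```python
# Sembits modelled as floats in [0, 1]; "addition" is the plain sum of
# signals, the way an op-amp summing circuit combines input voltages.
def sembit_add(*signals: float) -> float:
    return sum(signals)

print(sembit_add(0.33, 0.5))  # 0.83 -- a fractional result carried as one analog signal
```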

While studying the subject, I came across a 2012 article whose author complained about how few developments there were in this area (https://habr.com/ru/articles/146680/). Twelve years have passed since then, and there is some progress in this direction; a special impetus came from the hype around neural networks. I would like to draw attention to these developments and assess the future prospects.

The logic of most modern computers is built on Boolean algebra and binary code. Any operation therefore requires discretely converting the input signal into a sequence of bits, and that collection of bits then represents the value. Boolean algebra is limited, dichotomous: true/false. Yet such boundary states practically never occur in the physical world, natural languages included. In physics, 100% certainty is physically unattainable (Heisenberg's uncertainty principle).
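To make the contrast concrete, here is one common continuous extension of the Boolean connectives (Zadeh's fuzzy operators; a sketch of the general idea, not the author's specification):

```python
# Boolean logic admits only {0, 1}; sembit logic admits the whole [0, 1].
def f_and(a: float, b: float) -> float: return min(a, b)
def f_or(a: float, b: float) -> float:  return max(a, b)
def f_not(a: float) -> float:           return 1.0 - a

warm, bright = 0.7, 0.4           # partially true statements
print(f_and(warm, bright))        # 0.4 -- "warm AND bright" is partly true
print(f_or(warm, f_not(bright)))  # 0.7
# Restricted to inputs in {0, 1}, these reduce exactly to Boolean AND/OR/NOT.
```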

Probabilistic memristor processors

One of the most serious limitations in the development of analog computing machines was the storage of sembits. One solution is the memristor (https://habr.com/ru/articles/667082/): proposed theoretically back in 1971 and first implemented as a physical device in 2008. A memristor stores not just 1 or 0 but intermediate values as well. One class of processors built on memristors is called “probabilistic processors”; the first was created in Boston in 2010 (https://habr.com/ru/articles/102152/). By 2019, MIPT had already developed a second-generation hafnium-based memristor (https://www.cnews.ru/news/line/2019-08-28_v_mfti_sozdali_ustrojstvo).

Development does not stand still. A high-density memristor array was developed at MIT in 2020; MIT engineers called their development a “brain on a chip” (https://www.techinsider.ru/technologies/1551463-ves-mozg-na-odnom-kristalle-budushchee-memristorov-priblizhaetsya/). In 2022, the book “Memristor Computer Systems” was published (https://link.springer.com/book/10.1007/978-3-030-90582-8). In 2023, a “neural network” processor was created in France (https://www.atomic-energy.ru/news/2023/01/31/132341; I could not find the original source).

At the moment, such probabilistic processors are viewed mainly as coprocessors or tools for specific problems. But they are already demonstrating their effectiveness against “classical” ones. For example, according to the wiki, “GP5 is 10 times faster and 1000 times more energy efficient than an i7.”

One of those specific tasks is artificial intelligence models, because memristor systems are the closest thing we have to a real model of the brain. In neural networks running on classical bit processors, a synapse is represented by some fixed set of bits (64, say). That is, a synapse can be in one of 2^64 states (≈1.8×10^19). A memristor, by contrast, has no such artificial restriction, only the physical properties of the element itself. Operations on memristors are likewise closer to physical reality.
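As a rough illustration (a Python sketch with an invented weight, not any real memristor API): a bit-based synapse is a weight snapped to a grid of 2^n levels, and the grid spacing is an artificial cap on precision:

```python
# Quantizing a synaptic weight in [0, 1] to n bits: the weight is forced
# onto one of 2**n representable levels; a memristor's conductance would
# instead vary continuously, limited only by device noise.
weight = 0.7371928465  # a made-up "true" synaptic strength

for bits in (8, 16, 32):
    levels = 2 ** bits
    snapped = round(weight * (levels - 1)) / (levels - 1)
    print(f"{bits:2d} bits -> {levels} states, error = {abs(weight - snapped):.1e}")
```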

For example, the GP5 chip developed by Lyric Semiconductor is exactly this kind of coprocessor (https://en.m.wikipedia.org/wiki/GP5_chip).

Sembit/analog/probabilistic logic

The main disadvantage of such systems is low accuracy. However, bit systems are ultra-precise only in an ideal world. In the real world, bit systems are subject to distortion too: dead pixels on a screen, bad sectors on a hard drive, hardware failures in a CPU. Even processors make mistakes, albeit extremely rarely, even when protected with additional checks.

In essence, though, an analog system can account for its error out of the box, and that error may well be configurable in both hardware and software, at the cost of reduced performance. After all, bit systems are just as analog, only with a hidden error (https://habr.com/ru/articles/146608/). That is, formally a real error always exists, but in a sembit system it is always accounted for.

Thus, a sembit system can store a value to arbitrarily high precision, with an error that shrinks multiplicatively with each additional sembit used, bounded only by the physical characteristics of the system.

Say a sembit value can be converted to int64 with a 1% error. Then for two sembits the error of conversion to int64 will be 0.01 × 0.01 = 0.0001, i.e. 0.01%. And so on.
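In code, the compounding looks like this (a toy model of the claim above, assuming each extra sembit refines the residual left by the previous one and is read with the same 1% error):

```python
EPS = 0.01  # relative read-out error of a single sembit (1%)

def combined_error(n_sembits: int, eps: float = EPS) -> float:
    """Relative error after n refinement stages: eps multiplies at each stage."""
    return eps ** n_sembits

for n in range(1, 5):
    print(f"{n} sembit(s): relative error = {combined_error(n):.0e}")
# 1 -> 1e-02, 2 -> 1e-04, 3 -> 1e-06, 4 -> 1e-08
```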

Sembit logic is effective first and foremost for fuzzy-logic problems and approximate calculations. But the accuracy of bit systems is theoretically attainable as well: when a sembit is converted to a bit, the accuracy should be similar, since de facto this becomes a bit system. And the accuracy can be higher still if several sembits are converted into one bit value at once.

Systems with relaxed accuracy are ideal for multimedia, for video games, say. Data transfer speed and performance play the key role there, and the difference between the colors #ff0000 and #fe0000 in a single pixel is almost invisible to the eye, especially across different monitors. Somewhere on Habr I came across a mention of low-precision video cards that delivered good speed with virtually no visual change. Back in the distant 90s, bit consoles, thanks to the artifacts of CRT TVs, actually improved the picture: instead of hard squares we got blurrier but more pleasing images. Even now such filters are widely used in pixel-art games.

Then there are analog devices. For example, a keyboard that senses keystroke force: one could even programmatically make the size of letters depend on the pressing force (a sketch below). And analog logic components could perform complex mathematical operations directly on the processor.
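A hypothetical handler for such a keyboard (all names here are illustrative; no real driver API is implied):

```python
MIN_PT, MAX_PT = 10, 48  # font size range in points

def font_size_for_pressure(pressure: float) -> int:
    """Linearly map a normalized press force in [0, 1] to a font size."""
    pressure = min(max(pressure, 0.0), 1.0)  # clamp to the valid range
    return round(MIN_PT + pressure * (MAX_PT - MIN_PT))

print(font_size_for_pressure(0.1))  # light tap  -> 14 pt letters
print(font_size_for_pressure(0.9))  # hard press -> 44 pt letters
```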

The makers of the GP5 chip have also announced the possibility of programming it in a probabilistic language; here is an introduction to probabilistic programming (https://habr.com/ru/articles/244625/). So the prospects for analog information systems are quite real, and the range of applications may extend beyond AI alone.
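To give a flavor of the probabilistic style (plain Python with rejection sampling; real probabilistic languages provide this machinery natively, and the coin example is mine, not the GP5 documentation's):

```python
import random

# Observed: 8 heads in 10 flips. Question: what is the coin's bias, probably?
def flip_experiment() -> tuple[float, int]:
    bias = random.random()                          # prior: bias ~ Uniform(0, 1)
    heads = sum(random.random() < bias for _ in range(10))
    return bias, heads

samples = (flip_experiment() for _ in range(100_000))
accepted = [bias for bias, heads in samples if heads == 8]  # condition on data
print(f"posterior mean bias ≈ {sum(accepted) / len(accepted):.2f}")  # ~0.75
```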
