Why do Solidity developers need the basics of computer science?

Before moving on to bitwise operations, you need to understand (or remember) what a bit and a byte actually are.

A little computer science

A bit is the smallest unit of information: a symbol or signal that can take only 2 values – on or off, yes or no, high or low, charged or uncharged. In that sense, any light bulb can act as a bit – it is either on or off. Accordingly, in the binary number system, a bit is 0 or 1.

In general, if a bit is a brick, then a wall of bricks is a byte.

Byte – this is a group of bits that a computer can process at once, and historically one byte in 99.9% of cases equals 8 bits, because that size turned out to be convenient and efficient. The leftmost bit in a byte is the most significant bit (MSB – Most Significant Bit); the rightmost is the least significant bit (LSB – Least Significant Bit). The bits in a byte are numbered from right to left: bit 0 is the rightmost and carries the smallest weight, bit 7 is the leftmost and carries the largest.

Important! Numbering from right to left is called direct bit order; there is also reverse order with numbering from left to right, but it is much less common, so all further examples use direct order.

There is an excellent article, “How to send two bytes”, which shows what happens at the byte level when the word “Hello!” is sent from one computer to another.

A bit can only hold 0 or 1 – there is no way to write a minus sign into it – yet we still need to work with negative numbers somehow. That is why, at the bit level, there are signed and unsigned numbers. More about both below.

Unsigned numbers (positive)

One byte (if we are talking about an unsigned number) holds values from 0 to 255, or in binary form from 00000000 to 11111111.

So there are 8 bits in a byte, and each bit has its own ordinal place (position) in the byte. Depending on its position, a bit's weight changes. The weight is the number you get by raising 2 to the power of the position: position zero – weight = 2⁰ = 1, position one – weight = 2¹ = 2, position two – weight = 2² = 4, and so on.

To practice, you can start by taking only the first 4 positions of the byte and converting the numbers from 0 to 15 into binary. To get the number 5 we add 2² + 2⁰, because the number 4 sits in position two and the number 1 sits in position zero, which gives 0101.

For example, 10 in the binary system would be 1010, 13 = 1101, 14 = 1110, etc.

Thus, with 4 positions we can represent numbers from 0 to 15, because 2³ + 2² + 2¹ + 2⁰ = 15.

This video explains exactly why this is so and how to convert numbers from binary to decimal.

Then you can practice converting from binary to decimal using an online calculator. For example, this is what the conversion of the number 93 from binary to decimal looks like:

01011101₂ = 0 + 2⁶ + 0 + 2⁴ + 2³ + 2² + 0 + 2⁰ = 0 + 64 + 0 + 16 + 8 + 4 + 0 + 1 = 93₁₀
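The same positional arithmetic is easy to express in code. Here is a minimal Solidity sketch (the contract and function names are mine) that walks a binary string and accumulates the bit weights, exactly as in the calculation above:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract BinaryConverter {
    // Converts a binary string such as "01011101" into its decimal value.
    // Each step doubles the accumulated result (shifting the weights up)
    // and adds the current bit.
    function fromBinary(string memory s) public pure returns (uint256 result) {
        bytes memory b = bytes(s);
        for (uint256 i = 0; i < b.length; i++) {
            result *= 2;
            if (b[i] == "1") result += 1;
        }
    }
}
```

Calling fromBinary("01011101") in Remix should return 93, matching the manual conversion.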

There is also a video on the reverse conversion from decimal to binary, and again you can try it on a calculator. For example, this is how the number 35 is converted to binary:

    Division    Integer quotient    Remainder
    35 / 2      17                  1
    17 / 2      8                   1
    8 / 2       4                   0
    4 / 2       2                   0
    2 / 2       1                   0
    1 / 2       0                   1

Reading the remainders from bottom to top gives:

35₁₀ = 100011₂
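The division table above translates directly into a loop: divide by 2, collect the remainders, then read them in reverse. A minimal Solidity sketch (names are mine):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract DecimalConverter {
    // Builds the binary representation of n by repeated division by 2,
    // collecting remainders and then reversing them (the last remainder
    // is the most significant bit).
    function toBinary(uint256 n) public pure returns (string memory) {
        if (n == 0) return "0";
        bytes1 zero = "0";
        bytes1 one = "1";
        bytes memory buf = new bytes(256); // enough room for any uint256
        uint256 len = 0;
        while (n > 0) {
            buf[len] = (n % 2 == 1) ? one : zero;
            len++;
            n /= 2;
        }
        bytes memory out = new bytes(len);
        for (uint256 i = 0; i < len; i++) {
            out[i] = buf[len - 1 - i];
        }
        return string(out);
    }
}
```

toBinary(35) should return "100011", as in the table.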

Note: the standard calculator application in some operating systems (Windows, macOS) has a programmer mode where you can perform all these operations and more.

In this simple way, bits allow you to encode numbers in the binary number system.

Two bytes, or 16 bits, can be decomposed into decimal numbers in exactly the same way. 2 bytes form a machine word, but more on that another time.

With this knowledge you can move on.

Signed numbers (positive and negative)

Numbers can be not only positive, but also negative, and that forces us to think differently about storing them. Say the number 5 in binary looks like 00000101; the number -5 is exactly the same sequence of zeros and ones, only with a minus. We have no special symbol for the minus sign that could be stored in memory, so we need a notation format by which the computer can distinguish numbers and understand which is positive and which is negative. For this there are direct (sign-magnitude), reverse (ones' complement) and additional (two's complement) code – three different representations for storing negative numbers in binary.

Direct code

Direct code (sign-magnitude) simply uses the most significant bit to indicate the sign of the number. However, this creates two representations of zero (+0 and -0) and requires additional processing of the sign bit.

Reverse code

Reverse code (ones' complement) solves some of these problems by inverting all the bits of a number to create its negative equivalent. But it still suffers from the negative-zero problem.

Additional code (two's complement)

Two's complement, which is used in almost all modern computers, solves these problems by adding 1 to the ones' complement of the number. This gets rid of negative zero, keeps the numbers ordered, and extends the representable range by one value (for a byte: -128 to 127).

Subtraction in two's complement is performed as the addition of two numbers, one of which is negative. If the result of adding two large positive or two large negative numbers does not fit in the representable range, an overflow occurs: the result wraps around to the other side of the range by the number of overflowed steps.
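In Solidity 0.8+ checked arithmetic reverts on overflow by default, but an unchecked block restores the classic wraparound behavior described above. A small sketch (the function name is mine):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract OverflowDemo {
    // With unchecked arithmetic the result wraps around the type's range:
    // for uint8, wrapAdd(255, 1) gives 0 and wrapAdd(255, 2) gives 1.
    function wrapAdd(uint8 a, uint8 b) public pure returns (uint8) {
        unchecked {
            return a + b;
        }
    }
}
```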

Two's complement turned out to be the most practical format both mathematically and in terms of simplifying computer architecture. I highly recommend watching this video about negative numbers.
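The "invert and add one" rule can be demonstrated directly. This Solidity sketch (the function name is mine) returns the bit pattern of -x for an 8-bit value:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract TwosComplement {
    // Returns the two's complement bit pattern of -x:
    // negate(5) gives 251, i.e. 11111011, which is how -5 is stored.
    function negate(uint8 x) public pure returns (uint8) {
        unchecked {
            return ~x + 1;
        }
    }
}
```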

Now you can start understanding bit operations.

Bit operations

Bit (bitwise) operations are a special type of operation in computing that manipulates data directly at the level of individual bits in the binary representation of numbers.

In many programming languages, you often have to work with logical operations such as:

  • && – Logical AND (AND)

  • || – Logical OR (OR)

  • ! – Logical NOT (NOT)

  • < > = – less than, greater than, equal, and their combinations

Here is an example of truth tables for the operators AND, OR and NOT:

AND

    left operand    right operand    result
    0               0                0
    0               1                0
    1               0                0
    1               1                1

OR

    left operand    right operand    result
    0               0                0
    0               1                1
    1               0                1
    1               1                1

NOT

    operand    result
    0          1
    1          0

But these operators work with numbers or boolean values (in effect, with whole bytes), while bit operations work at a lower level, and a few more operations are added, so the complete list of bitwise operations looks like this:

  • & – Bitwise AND (AND)

  • | – Bitwise OR

  • ^ – Exclusive OR (XOR)

  • ~ – Bitwise negation (NOT)

  • << – Bitwise shift left

  • >> – Bitwise shift right

Another truth table is also added for exclusive OR – XOR:

XOR

    left operand    right operand    result
    0               0                0
    0               1                1
    1               0                1
    1               1                0

The tables for AND, OR and NOT work the same as for the logical operators, so they are easy to remember. Exclusive OR gives 1 only when the operands differ; when they are the same, the result is 0.

Logic gates

In fact, bit operations combined with storage elements are what all computing hardware is built on at the lowest level. These basic elementary logical operations (also called logic gates) are what the processor works with directly, using voltage and various circuits. For example, this is what a circuit for the NOT operator looks like:

Scheme of an elementary logical operation NOT at the electronics level


That is, if you imagine that the light bulb is the presence of voltage (a 1 at the output) and X is the switch, then when the switch is closed the current bypasses the bulb and we get 0; when it is open, the bulb lights up and we get 1.

And this is what OR looks like: when at least one switch (x or y) is closed, we get voltage at the output:

Scheme of an elementary logical operation OR at the electronics level


In the case of AND, both switches need to be closed:

Scheme of an elementary logical AND operation at the electronics level


There is also a wiring diagram for XOR, but it involves additional electrical components such as DIP switches and inverters. If you are interested, you can read about it here.

Simple operations with bitwise operators

Before studying the material below, I highly recommend this video.

So, let's move on to examples at the binary level. Starting simple: if we take the two numbers 5 and 6 and apply the bitwise operator & to them, we get 4. To understand why, convert the decimal numbers to binary and apply bitwise AND to each pair of bits using the AND truth table:

    // x     = 101 = 4 + 0 + 1 = 5
    // y     = 110 = 4 + 2 + 0 = 6
    // x & y = 100 = 4 + 0 + 0 = 4
    function and(uint x, uint y) external pure returns (uint) {
        return x & y;
    }

Similar examples for the other operators can be viewed here, and there is also a video. This is the simple principle behind all basic bitwise operations. The only nuance worth remembering is that a binary number can be signed or unsigned: for example, if we apply the operator ~ (NOT) to the number 8 (00001000), i.e. invert it, we get two different numbers depending on whether the type is signed or unsigned.

    x  = 00001000 =   0 +  0 +  0 +  0 + 8 + 0 + 0 + 0 = 8
    ~x = 11110111 = 128 + 64 + 32 + 16 + 0 + 4 + 2 + 1 = 247 (unsigned)
    ~x = 11110111 -> invert: 00001000 = 8, add 1: 9, sign bit is 1 -> -9 (signed)

Remember that in two's complement code we need to add one to get the desired negative number.
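You can see both interpretations in Solidity by applying ~ to an unsigned and a signed type (the function names are mine):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract BitwiseNot {
    // ~8 on a uint8: 00001000 -> 11110111 = 247.
    function notUnsigned(uint8 x) public pure returns (uint8) {
        return ~x;
    }

    // The same bit pattern read as int8 is -9, because in
    // two's complement ~x == -x - 1.
    function notSigned(int8 x) public pure returns (int8) {
        return ~x;
    }
}
```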

Bit check

And now for a little magic you can do with bit operations. Suppose we store three states in one byte using the lowest 3 bits, and we need to find out what a particular bit contains.

bytes1 state1 = 0x01; // 00000001
bytes1 state2 = 0x02; // 00000010
bytes1 state3 = 0x04; // 00000100

To do this, we create a mask for the desired bit and apply the & (AND) operator. For example, to read the value of the second bit:

    mask = 00000010

    00110010
  & 00000010
    --------
    00000010 // result: the bit is set

    00110001
  & 00000010
    --------
    00000000 // result: the bit is not set

Thus, if the second bit is set we get a nonzero result, and if not, we get 0.
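In Solidity the check above could look like this sketch (the function name is mine); a uint8 is used so the mask math reads exactly like the binary columns:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract BitCheck {
    // Returns true when the second bit (mask 00000010) of state is set:
    // checkSecondBit(0x32) -> true, checkSecondBit(0x31) -> false.
    function checkSecondBit(uint8 state) public pure returns (bool) {
        uint8 mask = 0x02; // 00000010
        return (state & mask) != 0;
    }
}
```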

Changing the value of one or more bits using OR and AND:

  1. Setting a bit to one

    00110000
  |
    00000010
    00110010

Here we changed the second bit to 1 using bitwise OR | while leaving the remaining bits unchanged.

  2. Setting a bit to zero (use bitwise AND to set the second bit to zero)

    00110010
  &
    11111101
    00110000

And here the second bit is zeroed using bitwise AND & while leaving the other bits untouched.
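Both diagrams map to one-line Solidity functions (a sketch; the names are mine):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract BitWrite {
    // Sets the second bit to 1: OR with the mask 00000010.
    // setSecondBit(0x30) gives 0x32.
    function setSecondBit(uint8 x) public pure returns (uint8) {
        return x | 0x02;
    }

    // Clears the second bit: AND with the inverted mask 11111101.
    // clearSecondBit(0x32) gives 0x30.
    function clearSecondBit(uint8 x) public pure returns (uint8) {
        return x & 0xFD; // 0xFD == ~0x02
    }
}
```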

Inverting individual bits

The exclusive OR operator ^ (XOR) lets you invert individual bits. As before, take the second bit and apply a mask with 1 in that position. If the second bit held a 0, it becomes a 1:

    00110000
  ^
    00000010
    00110010

And vice versa, if there was 1 there it will become 0:

   00110010
 ^
   00000010
   00110000

There is nothing complicated here, but in order to write the appropriate functions for working with individual bits you need to understand one more topic – bit shifts.

Bit shifts

In general, there are three types of bit shifts:

  • Arithmetic shift << >>

  • Logical <<< >>>

  • Cyclical <<<< >>>>

It all depends on the programming language used. Shifts go either to the left (<<) or to the right (>>).

Arithmetic shift

In an arithmetic right shift >>, all bits are shifted n positions to the right, and the vacated high bit is filled with the sign bit: 0 if the number is positive, 1 if negative.

Arithmetic shift right


Shifting the number 8 by one position is written as 8 >> 1 (on the left is the number, on the right is how many positions to shift):

    00001000 >> 1 = 00000100 // 8 >> 1 = 4

In this case we get 4.

In an arithmetic left shift <<, all bits are shifted n positions to the left, and the vacated least significant bit is filled with 0.

Arithmetic shift left


Move 8 one position to the left 8 << 1 and we get the number 16.

    00001000 << 1 = 00010000 // 8 << 1 = 16

This gives us a quick way to multiply or divide a number by 2ⁿ:

    n << 1 == n * 2   |   n >> 1 == n / 2
    n << 2 == n * 4   |   n >> 2 == n / 4
    n << 3 == n * 8   |   n >> 3 == n / 8

Important! Divisions are rounded down, for example 9 >> 1 = 4.
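In Solidity this shift-for-arithmetic trick looks like the following sketch (the names are mine):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract ShiftMath {
    // n << 3 multiplies by 2^3 = 8.
    function mulBy8(uint256 n) public pure returns (uint256) {
        return n << 3;
    }

    // n >> 1 divides by 2, rounding down: halve(9) gives 4.
    function halve(uint256 n) public pure returns (uint256) {
        return n >> 1;
    }
}
```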

An arithmetic right shift of (-1) will always give (-1); it looks like this:

    11111111 >> 1 = 11111111 // -1 >> 1 = -1

Because, as we remember, when shifting a number to the right, the most significant bit is filled with the sign bit; for a negative number it is 1.

Logical shift

In a logical shift, the vacated bit is always filled with 0, both when shifting right >>> and when shifting left <<<. This is also called an unsigned shift.

    01000001 <<< 1 = 10000010 //  65 <<< 1 = 130
    10000001 >>> 1 = 01000000 // 129 >>> 1 = 64

Logical shift right and left

Important! Logical shift only applies to unsigned numbers.

Cyclic shift

In a cyclic shift (rotate) left <<<< or right >>>>, the high or low bit that goes beyond the register reappears on the opposite side.

Important! Solidity has only the arithmetic shift operators << and >>; there are no logical or cyclic shift operators.

Solidity code examples

Now that you know about shifts, you can take this code and experiment with it in Remix. These are 4 examples of the bitwise operations we discussed above:

  • Checking a single bit

  • Setting a single bit to 1

  • Setting a single bit to 0

  • Inversion of a single bit

You can also now look at the remaining examples with shifts here. More complex operations and algorithms can be viewed here.
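Combining masks with shifts generalizes the four operations to any bit position. A possible sketch (the contract and function names are mine):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract SingleBit {
    // Reads bit `pos` (0 = least significant).
    function getBit(uint8 x, uint8 pos) public pure returns (bool) {
        return ((x >> pos) & 1) == 1;
    }

    // Sets bit `pos` to 1.
    function setBit(uint8 x, uint8 pos) public pure returns (uint8) {
        return x | (uint8(1) << pos);
    }

    // Sets bit `pos` to 0.
    function clearBit(uint8 x, uint8 pos) public pure returns (uint8) {
        return x & ~(uint8(1) << pos);
    }

    // Inverts bit `pos`.
    function flipBit(uint8 x, uint8 pos) public pure returns (uint8) {
        return x ^ (uint8(1) << pos);
    }
}
```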

Use Cases

1. Working with flags and masks.

One of the main uses of bit operations is working with flags and masks. In the bit search section we have already used both approaches. Flags are bits that are used to represent states or properties of objects or systems. Bit operations allow you to set or clear specific bits, which is useful for manipulating flags. Masks (sequences of bits) can also be used to select or hide certain properties of an object or data.

An elegant example of working with flags in Solidity is the Universal Router contract from Uniswap, where a large set of commands is packed into a single byte.

2. Compact data presentation.

Bit operations allow compact storage and transmission of data. We can combine multiple Boolean flags into a single number and use bit operations to read or write each flag separately. This saves memory and simplifies working with data.

For example, in Solidity the type bool occupies a full uint8; if you instead take a uint8 and store each flag in a separate bit, you can fit 8 boolean values into it.
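A minimal sketch of such packing (the contract and function names are mine): one uint8 in storage holding up to 8 flags.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract PackedFlags {
    uint8 private flags; // up to 8 boolean flags in a single byte

    // Sets flag i (0..7) to 1.
    function setFlag(uint8 i) external {
        flags |= uint8(1) << i;
    }

    // Sets flag i back to 0.
    function clearFlag(uint8 i) external {
        flags &= ~(uint8(1) << i);
    }

    // Reads flag i.
    function getFlag(uint8 i) external view returns (bool) {
        return (flags & (uint8(1) << i)) != 0;
    }
}
```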

3. Performance optimization.

Bit operations are typically cheaper than equivalent arithmetic operations or conditional expressions. This is especially useful in smart contracts on the blockchain, where every operation costs gas and code optimization is critical to reducing execution costs.

4. Encryption and hashing.

Bit operations play an important role in data encryption and hash generation. Many encryption and hashing algorithms use bit operations to process data at the bit level. This helps ensure data security and hash uniqueness.

5. Working with hardware.

Bit operations are widely used in embedded system programming and hardware manipulation. They allow you to manipulate device registers, send and receive bit data, and perform other low-level programming tasks.

Conclusion

Bit operations are a powerful tool that gives developers the ability to work more efficiently with data, optimize code, and ensure security. They have wide application in various areas of programming, including blockchain. Understanding bit operations will help you take advantage of them in your projects and create more efficient and performant code.

P.S. Thank you for reading the article to the end – I hope it was useful and interesting. In our Blockchain Wiki we publish even more useful content for those interested in the world of web3 and the Solidity language.
