A little more about LDPC codes

I decided to write Python code for encoding and decoding information in order to use it as a block in the GnuRadio environment. For those who don't know: GnuRadio is a powerful software package for processing signals received from SDR receivers. It allows you to transform signals (amplify, shift spectra, demodulate) using ready-made blocks, as well as to write your own in Python and C++. The GnuRadio package can also run on a Raspberry Pi single-board computer, which makes it possible to embed the developed solutions in all kinds of systems that need wireless communication.

Encoding information

Let's first consider the classical encoding method. Suppose we have some parity-check matrix H:

In order to encode information, we need to obtain a generator matrix G from the matrix H:

Encoding then reduces to multiplying the generator matrix G by the vector of information bits x:

To check that the received bits are correct, we multiply the matrix H by the encoded message vector:

If this multiplication yields the zero vector, the information was received correctly!
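
To make this concrete, here is a minimal sketch with a toy parity-check matrix in systematic form H = [P | I] (this little matrix is made up purely for illustration; a real H would first have to be brought to systematic form by Gaussian elimination over GF(2)):

import numpy as np

# Toy parity-check matrix in systematic form H = [P | I] over GF(2)
H = np.array([[1, 0, 1, 1, 0, 0],
              [1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0, 1]])

k = H.shape[1] - H.shape[0]   # number of information bits (3 here)
P = H[:, :k]                  # the parity part of H

# For H = [P | I] the generator matrix is G = [I | P^T]
G = np.hstack([np.eye(k, dtype=int), P.T])

s = np.array([1, 0, 1])       # information bits
x = (s @ G) % 2               # encoding: x = s*G (mod 2)

print("Codeword:", x)
print("Syndrome:", (H @ x) % 2)   # all zeros -> the codeword is valid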

However, there are various algorithms that speed up the encoding process. Below we will consider the Richardson-Urbanke (RU) method (you can read more about it in the original scientific article). This method replaces the multiplication of the generator matrix G by the vector s with a few simple operations, which we will look at below.

Another advantage of this method is that it is convenient to use with standards such as 802.11 (our favorite WiFi), where the standard itself gives us a way to construct the matrix H in a form that is convenient for the RU method. For example, this is how the 802.11 standard suggests constructing the matrix H (taken from this article):

The basic matrix from the IEEE 802.11n standard

To obtain the desired matrix H from the base matrix, we first decide on the code rate. I decided to take the lowest rate of 1/2 (324 information bits out of 648) for the experiment. Each cell in the base table tells us to take the 27×27 identity matrix (the dimensions for this code are 324 × 648) and cyclically shift it by n bits to the right, where n is the value of the cell. That is, if the cell contains 0, we shift by 0 and get just the identity matrix. The shifted identity matrix then replaces that cell of the base matrix. A dash denotes the zero matrix. As a result, I got the following matrix H:

The resulting matrix H

P.S. I typed this matrix into a text editor by hand, to be as sure of it as possible 🙂
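
By the way, the matrix does not have to be typed in by hand: it can also be generated programmatically from the base matrix. Here is a sketch (the file name and the convention of encoding the dashes as -1 are my own assumptions):

import numpy as np

Z = 27  # circulant block size for the 648-bit code

def expand_base_matrix(base, z=Z):
    # Each cell n >= 0 becomes the z x z identity matrix cyclically
    # shifted n positions to the right; -1 (the dash) becomes zeros.
    rows = []
    for base_row in base:
        blocks = [np.roll(np.eye(z, dtype=int), int(n), axis=1) if n >= 0
                  else np.zeros((z, z), dtype=int)
                  for n in base_row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# base = np.loadtxt('base_80211n_r12.txt', dtype=int)  # 12 x 24 table of shifts
# H = expand_base_matrix(base)
# np.savetxt('H.txt', H)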

And now, let's take linear algebra and the RU method into our naughty hands and get to work!

First, we are asked to divide the matrix H into sectors A, B, C, D, E and T according to the following principle (taken from this article):

Representation of the matrix H

Knowing that M = 324, N = 648, and M − g = 297 (so the gap g = 27), I divided the table as follows:

My divided matrix H

Next, to optimize the further calculations, we apply Gaussian elimination to H.
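
In block form, this step is a left multiplication by a lower-triangular block matrix (the standard RU transformation, reconstructed here from the sector definitions above):

$$\begin{pmatrix} I & 0 \\ -ET^{-1} & I \end{pmatrix} \begin{pmatrix} A & B & T \\ C & D & E \end{pmatrix} = \begin{pmatrix} A & B & T \\ -ET^{-1}A + C & -ET^{-1}B + D & 0 \end{pmatrix}$$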

There is nothing supernatural here: we essentially just added the first block row, multiplied by −ET⁻¹, to the second block row.

Next, to simplify things, we will make a small substitution:
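
Namely (these are the quantities called C_1, D_1 and E_1 in the code below; E' vanishes by construction):

$$C' = -ET^{-1}A + C, \qquad D' = -ET^{-1}B + D, \qquad E' = -ET^{-1}T + E = 0$$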

We are already close to the end! Let's assume that our message is a vector x, where x = [s, p1, p2], where s is our original (non-encoded) information and p1, p2 are the parity-check bits we need to calculate. We know that H·xᵀ = 0 (mod 2). Then, to find p1 and p2, we just need to solve a small system of equations, which I write out below.

I decided to skip some of the intermediate algebra and give the solution right away. By the way, there is one more requirement: D' must not be singular or, in plain terms, the determinant of D' must be non-zero. If D' is singular, one more transformation of the rows of the matrix H' must be performed so that det(D') ≠ 0 holds.
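
The resulting solution (all arithmetic is mod 2, so the minus signs disappear; this matches p1 and p2 in the code below):

$$p_1^T = (D')^{-1} C'\, s^T, \qquad p_2^T = T^{-1}\big(A s^T + B\, p_1^T\big)$$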

Now let's try to write code in Python:

Python code
import numpy as np
from numpy.linalg import inv

H = np.loadtxt("./H.txt")

# Split H into the RU sectors (M - g = 297 rows on top, g = 27 on the bottom)
T = H[0:297, 351:648]
A = H[0:297, 0:324]
B = H[0:297, 324:351]
C = H[297:324, 0:324]
D = H[297:324, 324:351]
E = H[297:324, 351:648]


# -E T^-1 and its products with the other sectors (everything mod 2)
ET = ((-1*E)@inv(T))%2
ETA = (ET@A)%2

ETB = (ET@B)%2

ETT = (ET@T)%2

# The substitution: C' = -E T^-1 A + C, D' = -E T^-1 B + D, E' = 0
C_1 = (ETA + C)%2
D_1 = (ETB + D) % 2
E_1 = (ETT + E) % 2

# Testing
s = np.random.randint(0, 2, size = 324)  # generate a random message

s_T = np.transpose(s)

Cs_T = (C_1@s_T)%2

As_T = (A@s_T)%2

p1 = (inv(D_1)@Cs_T)%2  # p1^T = D'^-1 C' s^T (mod 2)

Bp_T = (B@np.transpose(p1))%2

p2 = ((-inv(T)%2)@((As_T + Bp_T)%2))%2  # p2^T = T^-1 (A s^T + B p1^T) (mod 2)
x = np.concatenate([s, p1, p2], axis=0)

# Check
print("Checking the product of H and x:\n", (H@np.transpose(x)) % 2)
print("Resulting message:\n", x)
np.savetxt('text.txt', H)
np.savetxt('msg.txt', x)

In response, we get the following:

Checking the product of H and x:
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Resulting message:
 [1. 1. 0. 1. 0. 0. 0. 1. 0. 0. 1. 0. 0. 0. 1. 1. 0. 1. 1. 1. 1. 0. 1. 1.
 0. 0. 0. 0. 1. 1. 0. 0. 0. 1. 1. 0. 1. 1. 1. 1. 0. 1. 0. 1. 1. 1. 0. 0.
 0. 1. 1. 0. 1. 0. 0. 1. 0. 1. 1. 1. 1. 1. 0. 1. 0. 1. 0. 1. 1. 1. 1. 0.
 1. 1. 1. 1. 0. 1. 0. 1. 1. 0. 1. 1. 0. 1. 1. 1. 0. 1. 0. 0. 1. 0. 1. 1.
 0. 0. 1. 0. 0. 1. 1. 1. 0. 1. 0. 0. 0. 0. 0. 1. 1. 0. 1. 0. 0. 1. 0. 0.
 0. 1. 1. 1. 0. 0. 1. 0. 0. 0. 1. 1. 1. 0. 0. 1. 0. 1. 0. 1. 0. 1. 1. 0.
 1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 0. 0. 1. 1. 1. 0. 1. 0. 1. 0. 0. 0.
 1. 1. 0. 1. 0. 1. 0. 1. 1. 1. 0. 1. 1. 1. 0. 1. 1. 1. 0. 0. 1. 1. 1. 0.
 0. 0. 1. 1. 1. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 1. 0. 1. 1. 0. 1. 1. 0. 0.
 1. 1. 0. 1. 0. 1. 1. 1. 0. 1. 1. 0. 1. 0. 0. 1. 1. 1. 0. 1. 1. 1. 1. 1.
 1. 0. 1. 1. 1. 0. 0. 0. 0. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 1. 1. 0.
 1. 1. 1. 1. 0. 1. 1. 1. 1. 1. 0. 1. 0. 0. 1. 1. 1. 0. 1. 1. 1. 0. 0. 0.
 1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 1. 0. 1. 1. 1. 0. 1. 0.
 1. 1. 0. 1. 0. 0. 1. 1. 0. 0. 1. 1. 1. 1. 1. 1. 1. 0. 0. 1. 0. 0. 1. 0.
 1. 1. 0. 1. 0. 1. 0. 1. 0. 0. 0. 1. 0. 1. 0. 0. 1. 1. 0. 0. 1. 0. 0. 0.
 1. 1. 0. 0. 0. 1. 0. 1. 1. 1. 1. 1. 0. 1. 0. 0. 0. 1. 0. 1. 1. 1. 1. 1.
 1. 1. 0. 0. 1. 1. 1. 1. 1. 0. 1. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 0. 0. 1.
 1. 1. 0. 1. 0. 0. 0. 1. 1. 1. 0. 1. 0. 1. 0. 1. 1. 1. 0. 0. 1. 0. 1. 1.
 1. 0. 0. 1. 1. 1. 0. 0. 1. 0. 0. 1. 1. 0. 1. 0. 1. 0. 0. 1. 0. 1. 0. 1.
 1. 0. 0. 0. 1. 1. 0. 0. 1. 0. 1. 1. 0. 1. 0. 1. 0. 0. 1. 0. 1. 1. 1. 1.
 1. 0. 1. 1. 1. 0. 1. 1. 1. 0. 1. 1. 0. 1. 1. 0. 0. 1. 0. 1. 1. 1. 0. 0.
 1. 1. 0. 1. 1. 1. 1. 0. 1. 0. 0. 0. 0. 1. 1. 0. 1. 1. 1. 0. 1. 1. 1. 1.
 0. 0. 1. 0. 0. 1. 1. 0. 0. 1. 0. 1. 1. 0. 1. 1. 1. 0. 0. 1. 1. 0. 0. 1.
 0. 1. 0. 1. 1. 0. 1. 0. 1. 1. 0. 1. 0. 0. 0. 0. 1. 1. 0. 1. 1. 1. 0. 0.
 0. 1. 0. 0. 0. 0. 0. 1. 1. 1. 1. 0. 1. 1. 0. 0. 1. 0. 1. 0. 1. 1. 1. 0.
 0. 1. 1. 0. 1. 0. 1. 1. 0. 0. 0. 1. 1. 0. 1. 0. 0. 0. 0. 1. 1. 0. 0. 0.
 1. 1. 0. 1. 1. 1. 0. 1. 0. 1. 1. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]

Voila! Everything works as it should!

Decoding information

Since the author of the article reviewed the SPA method, I decided to try to implement the min-sum method based on the author's results. If you are not familiar with the SPA method, please read that article first to understand what's what. The min-sum method is essentially designed to speed up SPA. It differs in that the hyperbolic-tangent operation used when calculating the matrix E:

is replaced by a simpler one: a product of signs and a search for the minimum.
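
For reference, here are the two check-node update rules side by side, in standard textbook notation ($B_j$ is the set of bit positions participating in check $j$):

$$\text{SPA:}\qquad E_{ji} = 2\operatorname{atanh}\Big(\prod_{i' \in B_j \setminus i} \tanh\big(M_{ji'}/2\big)\Big)$$

$$\text{min-sum:}\qquad E_{ji} \approx \Big(\prod_{i' \in B_j \setminus i} \operatorname{sign}(M_{ji'})\Big)\cdot \min_{i' \in B_j \setminus i} \big|M_{ji'}\big|$$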

I got the following algorithm:

Min-sum algorithm in Python
import numpy as np

class CustomMinSum:
  def __init__(self, H):
    self.H = H

  def nrz(self, arr):
    # Hard decision on the LLR sign: negative -> bit 1, positive -> bit 0
    return (arr < 0).astype(float)

  def decode(self, r, max_iter=50):
    self.r = r
    M = np.zeros(np.shape(self.H))
    E = np.zeros(np.shape(M))

    # Initialization: every check node starts from the channel LLRs of its bits
    for j in range(np.shape(self.H)[0]):
      M[j, :] = r * self.H[j, :]

    for _ in range(max_iter):
      E = self.evaluate_E(M, E)
      l = self.evaluate_l(E)
      l = self.nrz(l)
      s = (self.H @ np.transpose(l)) % 2
      if not np.any(s):  # zero syndrome -> the codeword is valid
        return l
      M = self.evaluate_M(E, M)
    return l  # give up after max_iter iterations

  def evaluate_M(self, E, M):
    # Variable-node update: channel LLR plus all incoming check messages
    # except the one from the check being updated
    for j in range(np.shape(self.H)[0]):
      for i in range(np.shape(self.H)[1]):
        if self.H[j, i] != 0:
          M[j, i] = np.sum(E[:, i]) - E[j, i] + self.r[i]
    M = M * self.H

    return M
  
  def evaluate_E(self, M, E):
    # Check-node update: product of signs times the minimum magnitude,
    # both taken over all other bits participating in the check
    for i in range(np.shape(M)[0]):
      non_zero_mask = self.H[i, :] != 0
      for j in range(np.shape(M)[1]):
        if self.H[i, j] != 0:
          E[i, j] = np.prod(np.sign(M[i, :])[non_zero_mask]) / np.sign(M[i, j])

          exclude = np.arange(len(M[i, :])) != j

          min_val = np.min(np.abs(M[i, :])[exclude & non_zero_mask])
          E[i, j] = E[i, j] * min_val
    return E
  
  def evaluate_l(self, E):
    # Total LLR of every bit: channel value plus all check messages
    return self.r + np.sum(E, axis=0)


H = np.loadtxt('H.txt')

r = np.loadtxt('msg.txt')

r = r * (-1)

res = CustomMinSum(H).decode(r)
print("Result:", res)

# Check that the decoded symbols match the originally encoded ones:
true_message = np.loadtxt('true_message.txt')
true_message[true_message == -1.34] = 0
true_message[true_message == 1.34] = 1

print("Match:", np.prod(res == true_message) == 1)

Testing the decoding

In the file msg.txt I have the result of encoding with the RU algorithm from the previous section. In it, I replaced all ones with 1.34 and all zeros with -1.34 to emulate values in the form of log-likelihood ratios (LLR): we assume that our demodulator returns soft values instead of ones and zeros. For the experiment, I randomly flipped 10 bits to emulate errors, and saved the error-free message in the file true_message.txt to compare against and make sure everything is OK.
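
I did this replacement by hand, but the preparation step could just as well be scripted, for example like this (a sketch; the exact file handling here is my own choice):

import numpy as np

x = np.loadtxt('msg.txt')              # 0/1 codeword produced by the encoder

llr = np.where(x == 1, 1.34, -1.34)    # emulate soft demodulator output

np.savetxt('true_message.txt', llr)    # error-free reference copy

flip = np.random.choice(len(llr), size=10, replace=False)
llr[flip] = -llr[flip]                 # flip 10 random bits to emulate errors
np.savetxt('msg.txt', llr)             # overwrite msg.txt with the noisy LLRs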

The result was as follows:

Result: [1. 0. 0. 0. 1. 0. 1. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 0. 1. 0.
 0. 1. 0. 0. 0. 1. 1. 0. 1. 0. 1. 1. 1. 0. 1. 0. 1. 1. 1. 0. 1. 0. 1. 1.
 0. 0. 0. 1. 1. 1. 0. 1. 1. 1. 1. 1. 0. 0. 1. 0. 0. 1. 0. 0. 0. 1. 0. 1.
 0. 1. 1. 0. 1. 1. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 1. 1. 0. 1.
 0. 0. 1. 1. 0. 1. 1. 1. 0. 1. 0. 1. 1. 1. 1. 0. 1. 0. 0. 1. 0. 0. 0. 0.
 1. 0. 0. 0. 0. 0. 1. 0. 1. 0. 0. 1. 1. 1. 0. 0. 0. 1. 1. 0. 1. 1. 1. 0.
 0. 1. 0. 1. 1. 1. 0. 0. 1. 1. 1. 0. 1. 0. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 0. 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 1. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 1.
 1. 1. 1. 0. 1. 1. 1. 1. 0. 0. 0. 0. 1. 0. 1. 1. 0. 0. 1. 1. 1. 0. 0. 1.
 0. 0. 1. 1. 0. 1. 0. 1. 1. 1. 1. 0. 1. 1. 0. 0. 1. 0. 1. 0. 1. 0. 1. 0.
 0. 0. 1. 1. 0. 1. 0. 1. 1. 1. 1. 0. 1. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1.
 1. 0. 1. 1. 0. 1. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 0. 0. 0. 1. 0. 1. 1. 1.
 1. 1. 1. 1. 1. 0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 1. 1. 0. 0. 0.
 1. 1. 1. 0. 1. 1. 0. 1. 0. 1. 0. 0. 1. 0. 0. 0. 1. 0. 1. 0. 0. 0. 1. 1.
 0. 0. 1. 0. 1. 1. 1. 1. 1. 0. 0. 1. 0. 0. 1. 1. 0. 0. 0. 1. 0. 0. 0. 0.
 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 0. 0. 1. 0. 1. 1. 0. 1. 0. 1. 1. 1. 0. 0.
 0. 0. 1. 1. 1. 1. 1. 0. 1. 0. 1. 1. 0. 1. 0. 1. 0. 0. 1. 0. 1. 0. 0. 1.
 0. 1. 0. 0. 0. 1. 1. 1. 1. 1. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 1.
 1. 0. 0. 0. 0. 0. 0. 1. 0. 1. 1. 0. 1. 0. 0. 1. 0. 1. 0. 0. 0. 1. 0. 0.
 1. 0. 1. 0. 1. 0. 1. 0. 0. 0. 1. 0. 1. 0. 0. 1. 1. 1. 0. 0. 1. 1. 1. 0.
 1. 0. 0. 1. 0. 0. 0. 1. 1. 1. 0. 1. 0. 0. 1. 1. 0. 0. 1. 0. 1. 0. 0. 1.
 1. 0. 0. 1. 0. 1. 0. 0. 0. 0. 1. 0. 1. 0. 1. 0. 0. 1. 1. 0. 1. 1. 0. 1.
 1. 1. 1. 1. 1. 1. 1. 0. 0. 1. 0. 1. 1. 1. 0. 1. 1. 0. 1. 0. 1. 1. 1. 0.
 1. 0. 0. 1. 0. 1. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 1.
 1. 1. 0. 0. 1. 0. 1. 0. 0. 0. 0. 1. 0. 0. 1. 1. 1. 0. 1. 1. 0. 0. 1. 1.
 0. 0. 1. 0. 0. 0. 1. 0. 1. 0. 1. 1. 0. 1. 0. 0. 0. 0. 1. 0. 1. 0. 0. 1.
 1. 1. 1. 1. 0. 0. 1. 0. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 0.]
Match: True

As you can see, all the bits matched!

If this article is to be believed, the min-sum algorithm loses to SPA by 0.2–0.5 dB of coding gain. But this can be fixed, because we can "tweak" our calculations so that they are closer to what we would get if we calculated E through tanh.

If you look at the BER graph, you can see that the min-sum algorithm's loss relative to SPA is indeed approximately 0.2–0.3 dB.

BER vs SNR graph

P.S. The graph is taken from this article.

The article mentioned provides two ways to "tweak" the results (plus their combination):

  1. Normalized min-sum. We introduce a coefficient a, which can vary in the range (0, 1]. By changing this coefficient, one can measure the BER (Bit Error Rate) at a fixed SNR (signal-to-noise ratio) and find out at which value of a the BER is best.

  2. Offset min-sum. We introduce a bias coefficient b, which can also vary in the range (0, 1]. This coefficient is selected in the same way as for the normalized min-sum.

  3. Combined min-sum. Nothing prevents us from using the coefficients a and b simultaneously; the update rules for all three variants are written out below.
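
For reference, here are the corresponding check-node update rules as they are usually written in the literature (same notation as before; a and b are the coefficients from the list above):

$$\text{normalized:}\qquad E_{ji} = a \cdot \Big(\prod_{i' \in B_j \setminus i} \operatorname{sign}(M_{ji'})\Big) \cdot \min_{i' \in B_j \setminus i} \big|M_{ji'}\big|$$

$$\text{offset:}\qquad E_{ji} = \Big(\prod_{i' \in B_j \setminus i} \operatorname{sign}(M_{ji'})\Big) \cdot \max\Big(\min_{i' \in B_j \setminus i} \big|M_{ji'}\big| - b,\; 0\Big)$$

$$\text{combined:}\qquad E_{ji} = a \cdot \Big(\prod_{i' \in B_j \setminus i} \operatorname{sign}(M_{ji'})\Big) \cdot \max\Big(\min_{i' \in B_j \setminus i} \big|M_{ji'}\big| - b,\; 0\Big)$$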

Well, that's all, thank you for your attention! As I write my dissertation, I will try to write more new articles. All the best!
