Calculate the Fibonacci sequence in logarithmic time

On the eve of the start of the basic course “Mathematics for Data Science”, we invite you to sign up for a free demo lesson, which will be led by our experts.

And right now we are sharing a translation of a useful article.


The naive computation of the famous Fibonacci sequence runs in exponential time. Most people can figure out how to do it in linear time. But can we do even better? You bet we can.

What is the Fibonacci sequence?

This is a classic infinite number series in which the nth term is the sum of the previous two terms. The first two terms are 1 and 1. The third is 2, the fourth is 3, followed by 5, 8, and so on.
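
In formulas, with 1-based indexing:

\[
F_1 = F_2 = 1, \qquad F_n = F_{n-1} + F_{n-2} \quad \text{for } n \ge 3.
\]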

How is it usually calculated?

More often than not, people use a naive recursive solution in which the function calls itself on the previous two terms. Obviously, this solution is incredibly inefficient because it runs in exponential time (each function call branches into two separate calls to compute n-1 and n-2).

// calculates nth term of fibonacci, 1 indexed
Procedure Fib_Naive(int n):
    if(n < 3):
        return 1;
    end_if
    return Fib_Naive(n-1) + Fib_Naive(n-2)
end_Fib_Naive

Improved approach

A better solution would be to implement a recursive function that either memoizes its results or carries the previous two terms along in temporary variables.

Alternatively, we could just use a temporary variable (temp) and perform calculations iteratively, which is preferable, since recursion usually takes longer than iterative solutions (because calls pushed onto the stack require additional assembly instructions).

// calculates nth term of fibonacci, 1 indexed
Procedure Fib(int n):
        if(n < 3): return 1;
        int prev = 1;
        int cur = 1;
        for i = 3...n:
            int temp = cur;
            cur += prev;
            prev = temp;
        end_for
        return cur;
end_Fib
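
For reference, a minimal Java sketch of the same iterative idea (the method name here is chosen just for illustration):

// Iterative Fibonacci, 1-indexed: fib(1) = fib(2) = 1.
// Runs in O(n) time and O(1) extra space.
public static int fibIterative(int n) {
    if (n < 3) return 1;
    int prev = 1;
    int cur = 1;
    for (int i = 3; i <= n; i++) {
        int temp = cur; // remember the current term
        cur += prev;    // new current term = sum of the previous two
        prev = temp;    // the old current term becomes the previous one
    }
    return cur;
}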

New approach

Before we get started, let’s brush up on some basic knowledge of linear algebra.

Remember the identity matrix, which has ones along the main diagonal and zeros everywhere else:
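
For instance, in the 2×2 case:

\[
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
=
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
=
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\]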

The product of any matrix and the identity matrix is equal to the matrix itself, and vice versa, as shown above.

If we swap two rows of the identity matrix and multiply the result by another matrix, this is equivalent to swapping the same two rows of that matrix:
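
For example, with 2×2 matrices:

\[
\begin{pmatrix} c & d \\ a & b \end{pmatrix}
=
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\]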

Such a matrix (in the center) is called a permutation matrix.

Now suppose we set the element in the lower right corner of such a permutation matrix to one and multiply it by a 2×1 vector (a, b). What will happen? By the rules of matrix multiplication, this swaps the two rows and makes the bottom element of the resulting vector equal to (a + b).
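
In the 2×2 case this looks as follows:

\[
\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}
\begin{pmatrix} a \\ b \end{pmatrix}
=
\begin{pmatrix} b \\ a + b \end{pmatrix}
\]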

We can now return to our task. Remember our iterative Fibonacci computation? In the equation above, a represents the previous ((n-1)-th) term and b represents the current one.

The above matrix multiplication equation does the same.

It turns out that if we raise the matrix on the left to the n-th power and multiply it by the vector (0, 1), we get the n-th element of the Fibonacci sequence.
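
With the usual convention F_0 = 0 and F_1 = 1, this can be written as:

\[
\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}^{n}
\begin{pmatrix} 0 \\ 1 \end{pmatrix}
=
\begin{pmatrix} F_n \\ F_{n+1} \end{pmatrix}
\]

The top entry of the resulting vector is the n-th Fibonacci number.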

Where does the logarithm come from?

Two words: repeated squaring… Instead of multiplying by the matrix on the left n times, we can reuse the products we have already computed to get the answer faster.

A matrix raised to the 8th power can be computed by squaring the original matrix, squaring the result to get the 4th power, and squaring once more: three multiplications instead of the seven the naive approach would need.
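
As an illustration, here is a minimal Java sketch of raising the 2×2 Fibonacci matrix to a power by repeated squaring (the helper names are chosen here just for illustration; long is used to postpone overflow):

// Multiplies two 2x2 matrices.
static long[][] multiply(long[][] a, long[][] b) {
    return new long[][] {
        { a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1] },
        { a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1] }
    };
}

// Raises a 2x2 matrix to the n-th power using repeated squaring:
// O(log n) matrix multiplications instead of n - 1.
static long[][] matrixPower(long[][] m, int n) {
    long[][] result = { { 1, 0 }, { 0, 1 } }; // identity matrix
    while (n > 0) {
        if ((n & 1) == 1) result = multiply(result, m); // use this bit of the exponent
        m = multiply(m, m);                             // square for the next bit
        n >>= 1;
    }
    return result;
}

// F(n) is the top entry of [[0,1],[1,1]]^n * (0, 1)^T, i.e. element [0][1] of the power.
static long fibMatrix(int n) {
    long[][] fibonacciMatrix = { { 0, 1 }, { 1, 1 } };
    return matrixPower(fibonacciMatrix, n)[0][1];
}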

Thus, we get the following performance forecast: roughly log2(n) matrix multiplications instead of n, that is, logarithmic rather than linear running time.

Since implementing this in a particular programming language can get tricky, and to avoid confusion, here is an actual implementation in Java (not pseudocode, as in the previous examples).

There are no matrices here, but the implementation still takes advantage of repeated squaring, as discussed earlier, to achieve the desired logarithmic performance.
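
In effect, it relies on the well-known “fast doubling” identities, which can be derived from the matrix form above:

\[
F_{2k-1} = F_{k-1}^2 + F_k^2, \qquad
F_{2k} = F_k \,(2 F_{k-1} + F_k), \qquad
F_{2k+1} = F_{2k-1} + F_{2k}.
\]

The helper fibHelp(n) computes the pair (F(n-1), F(n)) recursively, halving n at each step: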

public int fib(int N) {
    if (N == 0) return 0;
    // fibHelp(N) returns the pair {F(N-1), F(N)}.
    int[] sol = fibHelp(N);

    return sol[1];
}

public int[] fibHelp(int n) {
    // Base case: the pair {F(0), F(1)}.
    if (n == 1) {
        return new int[] {0, 1};
    }
    int m = n / 2;

    int[] temp = fibHelp(m);

    int fPrev = temp[0]; // F(m-1)
    int fCur = temp[1];  // F(m)

    // Fast-doubling step.
    int prev = (fPrev * fPrev) + (fCur * fCur); // F(2m-1)
    int cur = fCur * (2 * fPrev + fCur);        // F(2m)
    int next = prev + cur;                      // F(2m+1)

    if (n % 2 == 0) {
        return new int[] {prev, cur}; // n = 2m
    }
    else {
        return new int[] {cur, next}; // n = 2m+1
    }
}
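
For example, fib(10) returns 55. Note that with int arithmetic the result overflows for N > 46 (F(46) = 1,836,311,903 is the largest Fibonacci number that fits in a 32-bit int); switching to long or BigInteger would be needed for larger indices.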

*Note: While it might be tempting to say that we have significantly reduced the computation time and that the complexity of this implementation can be classified as logarithmic, there is still one caveat that we must consider: the time complexity of the arithmetic itself.

The fastest known multiplication algorithms run in O(n log n) in the number of digits, so once the numbers get large, the total running time is, strictly speaking, larger than the stated bound. However, the depth of our recursion, and hence the number of arithmetic operations, is logarithmic, as you can see from the example code.


Sign up for an open webinar.
