Big O notation in Swift

What is Big O notation?

The Big O notation (or simply Big O) is a way of evaluating the relative performance of a data structure or algorithm, usually along two axes: time and space.

Dominant operations

We determine an algorithm's Big O by looking at the worst-case cost of its dominant operation: the part of the algorithm that grows fastest as the input grows. Constant factors and lower-order terms are dropped.
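
For example, here is a minimal sketch (the function name dominantOperations is mine, for illustration) in which a single pass and a nested pass run back to back. The nested loops dominate, so the whole function is O(n^2) even though it also contains an O(n) loop:

// O(n) + O(n^2) simplifies to O(n^2): as n grows,
// the nested loops dominate the total running time.
func dominantOperations(_ n: Int) -> Int {
    var result = 0
    for _ in 0..<n {       // O(n)
        result += 1
    }
    for i in 0..<n {       // O(n^2), the dominant operation
        for _ in i..<n {
            result += 1
        }
    }
    return result
}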

Constant time – O(1)

func constantTime(_ n: Int) -> Int {
    let result = n * n
    return result
}

Algorithms that don’t loop (e.g., with for-in) and simply return the result of a fixed calculation run in “constant time”, or “O(1)”. The time they take doesn’t depend on the size of the input, which makes these operations very fast.

Linear time – O(n)

func linearTime(_ A: [Int]) -> Int {
    for i in 0..<A.count {
        if A[i] == 0 {
            return 0
        }
    }
    return 1
}

Once the performance of an algorithm depends on the size of its input, we can no longer call it “constant time”. If the running time grows as a straight line with the input size, we call it “linear time”: the time taken is directly proportional to the size of the input.

Even though the loop may return immediately if the first element of the array is zero, when evaluating Big O we always look at worst-case performance. The algorithm is still O(n), even though its best case is O(1).

Logarithmic time – O(log n)

func logarithmicTime(_ N: Int) -> Int {
    var n = N
    var result = 0
    while n > 1 {
        n /= 2
        result += 1
    }
    return result
}

Algorithms based on binary search (for example, lookups in a balanced binary search tree) are very fast because they halve the search space every time they look for a result. This repeated bisection is logarithmic, which we denote as “O(log n)”.
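
To make this concrete, here is a minimal binary search over a sorted array (a sketch; the function name binarySearch is mine):

// Binary search a sorted array: each iteration halves the
// remaining search range, so the running time is O(log n).
func binarySearch(_ A: [Int], _ target: Int) -> Int? {
    var low = 0
    var high = A.count - 1
    while low <= high {
        let mid = (low + high) / 2
        if A[mid] == target {
            return mid          // found, return the index
        } else if A[mid] < target {
            low = mid + 1       // discard the lower half
        } else {
            high = mid - 1      // discard the upper half
        }
    }
    return nil
}

binarySearch([1, 3, 5, 7, 9], 7) // 3
binarySearch([1, 3, 5, 7, 9], 4) // nil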

Quadratic time – O(n^2)

func quadratic(_ n: Int) -> Int {
    var result = 0
    for i in 0..<n {
        for j in i..<n {
            result += 1
            print("\(i) \(j)")
        }
    }
    return result
}

When you nest one for-in loop inside another, your algorithm's running time grows quadratically, which can slow things down a lot. Quadratic algorithms still get the right answer; they are just rarely the most efficient way to get it.

The graph below will help you compare how the running time of these algorithms grows with the size of the input.

Algorithms whose curves stay low toward the bottom right of the graph (O(1), O(log n)) are considered very good. Linear time, O(n), is not bad. But anything above that, e.g. O(n^2), is not considered very performant (fast).

Memory cost

So far, everything we have considered has been about time, and when we talk about Big O we usually do mean time complexity. But there is another side of the coin: memory. We evaluate memory cost the same way we evaluate time: when estimating the memory cost of an algorithm, we look at how much storage it allocates and how that storage grows with the input. A simple variable declaration costs O(1) space, whereas arrays and other data structures grow with the size of the input, or O(n).
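
To make the distinction concrete, here is a minimal sketch (both function names are mine) contrasting O(1) and O(n) space:

// O(1) space: a fixed number of variables, no matter
// how large the input array is.
func sumConstantSpace(_ A: [Int]) -> Int {
    var total = 0
    for a in A {
        total += a
    }
    return total
}

// O(n) space: allocates a new array whose size grows
// with the size of the input.
func doubledLinearSpace(_ A: [Int]) -> [Int] {
    var doubled = [Int]()
    for a in A {
        doubled.append(a * 2)
    }
    return doubled
}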

Trading memory for time

One of the biggest improvements we can make to an algorithm is to trade memory for time. Take, for example, a typical interview question:

Given two arrays, create a function that will let the user know if the two arrays contain any elements in common.

A brute-force way to answer this question is to compare every element of one array against every element of the other until a match is found. Very efficient in terms of memory, but slow in terms of time: O(n^2).

// Brute-force solution: O(n^2)
func commonItemsBrute(_ A: [Int], _ B: [Int]) -> Bool {
    for i in 0..<A.count {
        for j in 0..<B.count {
            if A[i] == B[j] {
                return true
            }
        }
    }
    return false
}
// Check
commonItemsBrute([1, 2, 3], [4, 5, 6])
commonItemsBrute([1, 2, 3], [3, 5, 6])

On the other hand, by sacrificing some memory, we can improve the time: build a hash map from one array and then use it to quickly look up elements of the other.

// Convert one array to a hash and look up by key
func commonItemsHash(_ A: [Int], _ B: [Int]) -> Bool {
    
    // Use a loop again, but this time it is not nested inside
    // another loop as before: O(2n) vs O(n^2)
    var hashA = [Int: Bool]()
    for a in A { // O(n)
        hashA[a] = true
    }
    // Now check the hash to see whether any element of B is in it.
    for b in B {
        if hashA[b] == true {
            return true
        }
    }
    return false
}
commonItemsHash([1, 2, 3], [4, 5, 6])
commonItemsHash([1, 2, 3], [3, 5, 6])

Here is an example of that trade of memory for time. The first solution required no additional space, but it was very slow. By taking up a bit of memory (an extra hash map), we bought a lot of time and ended up with a much faster algorithm: O(n).
