Big-O Notation#

Big-O notation is a way to describe the time or memory requirements of an algorithm or program. It simplifies comparisons between algorithms by looking only at the relationship between the amount of input data and those requirements.

Some notes on terminology:

  • the number of input data points or items is denoted \(n\), e.g. \(n\) elements in a list

  • we use the terms “requirements”, “cost” and “complexity” interchangeably

Note

Big-O notation goes beyond the scope of this course; however, it is a valuable framework to learn.

Time Complexity#

Time complexity measures how the running time of an algorithm grows with input size. It is usually measured in terms of the number of operations required, rather than “wall-clock” time, which can vary with hardware and programming language.

The counted operations might be additions, comparisons, or floating-point operations. We usually pick a single unit (such as one comparison or one arithmetic operation) and count how many of those occur as the input size increases.
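For example, the sketch below (illustrative only; linear_search_comparisons is a name introduced here, not part of the course material) counts how many comparisons a linear search performs in the worst case.

def linear_search_comparisons(lst, target):
    """Count the comparisons a linear search performs."""
    comparisons = 0
    for item in lst:
        comparisons += 1  # one comparison of item against target
        if item == target:
            break
    return comparisons


# Worst case: the target is absent, so every element is compared once.
for n in [10, 100, 1000]:
    print(n, linear_search_comparisons(list(range(n)), -1))

Each run prints a comparison count equal to \(n\): it is this operation count, not the elapsed time, that we track.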

Space Complexity#

Space complexity (also called memory complexity) describes how much memory (RAM) an algorithm needs, beyond the size of its inputs.

Memory complexity is measured in abstract units rather than a specific number of bits or bytes. We usually assume that each piece of data, such as a bool, integer or floating-point value, takes up one unit of memory.
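As a rough illustration, Python’s built-in sys.getsizeof shows a container’s memory growing with \(n\). Note that it reports only the list object’s own (shallow) size, not the elements it references, and exact byte counts vary between Python versions.

import sys

# The list object's own size grows roughly linearly with the number
# of items it holds (the referenced elements are not included).
for n in [0, 1_000, 10_000, 100_000]:
    print(n, sys.getsizeof(list(range(n))), "bytes")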

Big-O Details#

Big-O values describe how an algorithm behaves as the amount of input data, \(n\), tends to infinity. For that reason, only the fastest-growing term is retained, as the other terms contribute a relatively immaterial amount.

To work out an algorithm’s Big-O:

  1. Identify the basic operations whose count grows with input size (e.g. comparisons, assignments).

  2. Express the total count as a function of \(n\).

  3. Keep only the fastest-growing term (the “dominating term”) and drop constant factors.

For example, an algorithm that performs \(5n^2 + 20n + 100\) operations would be expressed as \(O(n^2)\).
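A quick numerical check of this simplification compares the full expression against its dominating term for growing \(n\):

# Ratio of the full operation count to the dominating term alone.
for n in [10, 100, 1_000, 10_000]:
    full = 5 * n**2 + 20 * n + 100
    dominant = 5 * n**2
    print(n, round(full / dominant, 4))

The ratio tends to 1 as \(n\) grows, confirming that \(20n + 100\) contributes a relatively immaterial amount.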

Worked Example 1 - Time Complexity

Consider the Python code below

def multiply_pairs(lst):
    total = 0
    for i in lst:  # n iterations
        for j in lst:  # n iterations each
            total += i * j  # one constant-time operation
    return total


data = list(range(2000))
result = multiply_pairs(data)
print(result)

The function multiply_pairs consists of a nested loop. Each level of the loop iterates over \(n\) items, and inside the inner loop is a single constant-time operation.

Therefore, it performs a total of \(n \times n \times 1\) operations, which we simplify to \(O(n^2)\).
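We can also check this against wall-clock time; the numbers are machine-dependent, so treat them as indicative only. The helper time_multiply_pairs is introduced here for illustration and reuses multiply_pairs from the listing above. For an \(O(n^2)\) algorithm, doubling \(n\) should roughly quadruple the running time.

import time

def time_multiply_pairs(n):
    """Time multiply_pairs on a list of n integers."""
    data = list(range(n))
    start = time.perf_counter()
    multiply_pairs(data)
    return time.perf_counter() - start


# Doubling n should roughly quadruple the elapsed time.
for n in [500, 1000, 2000]:
    print(n, f"{time_multiply_pairs(n):.4f} s")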

Worked Example 2 - Memory Complexity

Consider the Python code below

def fibonacci_n(n):
    f = [0, 1]  # list to store every element in the sequence
    for i in range(2, n):
        f.append(f[-1] + f[-2])  # add element to the sequence
    return f[-1]


def fibonacci_constant(n):
    f_low = 0  # Store the most recent low value
    f_high = 1  # Store the most recent high value
    for i in range(2, n):
        # Three constant-time operations using one temporary variable
        new_f_high = f_low + f_high
        f_low = f_high
        f_high = new_f_high
    return f_high

Both functions compute the value of the Fibonacci sequence at position n, but they differ as follows:

  • fibonacci_n stores every element of the sequence, so it requires memory for \(n\) values and has \(O(n)\) memory complexity

  • fibonacci_constant only stores three values, f_low, f_high and new_f_high, no matter how large n is, and therefore has \(O(1)\) memory complexity (the constant 3 simplifies to 1 when constant factors are discarded)
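One way to see this difference in practice is the standard-library tracemalloc module, which reports peak allocated memory. This is a sketch rather than part of the course material; peak_memory is a helper introduced here, and exact byte counts depend on the Python version.

import tracemalloc

def peak_memory(func, n):
    """Return the peak traced memory (in bytes) while running func(n)."""
    tracemalloc.start()
    func(n)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak


n = 10_000
print("fibonacci_n:       ", peak_memory(fibonacci_n, n), "bytes")
print("fibonacci_constant:", peak_memory(fibonacci_constant, n), "bytes")

The peak for fibonacci_n grows with n, while fibonacci_constant stays roughly flat (Python’s arbitrary-precision integers add a small n-dependent overhead for the three stored values).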