Order of growth

From CS 61A Wiki

Order of growth is a method for bounding the resources used by a function in terms of the size of the problem it solves.

Asymptotic notation

[Figure: illustration of $\Theta$, $O$, and $\Omega$]

Asymptotic notation (Big O, Big Theta $\Theta$, or Big Omega $\Omega$) describes the asymptotic behavior of functions.

In the following, $c$, $c_1$, $c_2$, and $n_0$ are constants:

  • $O(g)$ is the set of functions that eventually grow no faster than $g$.
$f(n) \in O(g(n))$ means that $f(n) \leq cg(n)$ for all $n > n_0$. In other words, $g(n)$ is an asymptotic upper bound on $f(n)$.
  • $\Omega(g)$ is the set of functions that eventually grow at least as fast as $g$.
$f(n) \in \Omega(g(n))$ means that $f(n) \geq cg(n)$ for all $n > n_0$. In other words, $g(n)$ is an asymptotic lower bound on $f(n)$.
  • $\Theta(g)$ is the set of functions that eventually grow like $g$.
$f(n) \in \Theta(g(n))$ means that $c_1 g(n) \leq f(n) \leq c_2 g(n)$ for all $n > n_0$. In other words, $g(n)$ is an asymptotic tight bound (upper and lower bound) on $f(n)$.

In 61A, we use the notation to describe the runtime of programs, but it can be used for other things. After all, the statement $f(x) \in O(g(x))$ or $f(n) \in \Theta(g(n))$ simply expresses a relationship between two mathematical functions.

Note: In 61A, you need only worry about $\Theta$.

Relationships between $O$, $\Omega$, $\Theta$

  • If $f(n) \in O(g(n))$, then $g(n) \in \Omega(f(n))$ and vice versa.
  • If $f(n) \in \Theta(g(n))$, then $f(n) \in \Omega(g(n))$ and $f(n) \in O(g(n))$.


Properties

In the following, $c$ is a constant:

  1. $\Theta(cf(n)) = \Theta(f(n))$
  2. $\Theta(f(n) + c) = \Theta(f(n))$
  3. $\Theta(f(n) + g(n)) = \Theta(\max(f(n), g(n)))$, for $f, g > 0$
  4. $\Theta(\log_c n) = \Theta(\log n)$


Examples:

  • $n \in O(n^2)$ and $n^2 \in \Omega(n)$
  • $n^2 \in \Omega(n \log n)$
  • $n^2 + 2n + 1 \in \Theta(n^2)$
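These memberships can be checked numerically. A quick sketch exhibiting witness constants for the last example ($c_1 = 1$, $c_2 = 2$, $n_0 = 2$ are one valid choice):

```python
# Check that n^2 + 2n + 1 is in Theta(n^2) by exhibiting witnesses:
# 1 * n^2 <= n^2 + 2n + 1 <= 2 * n^2 for all n > 2.
def f(n):
    return n**2 + 2*n + 1

for n in range(3, 1000):
    assert 1 * n**2 <= f(n) <= 2 * n**2
```

A finite check is not a proof, but it makes the role of $c_1$, $c_2$, and $n_0$ concrete: the bound is only required to hold past $n_0$.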

Algorithm analysis


Oftentimes, many different algorithms achieve the same purpose, but we want the most efficient one. Small inputs can usually be handled instantaneously, so we are mainly interested in how an algorithm performs on large inputs. Therefore, we look at the asymptotic behavior of an algorithm: how it behaves as its input size approaches infinity.

Asymptotic notation also abstracts away details unrelated to the algorithm like hardware, implementation, and language that may affect the algorithm's performance in practice.


Complexity is a function describing the efficiency of an algorithm in terms of the input size. There are two types of complexities:

  • Time complexity $T(n)$ measures the runtime of an algorithm.
  • Space complexity $S(n)$ measures how much memory an algorithm uses. Space complexity is an important consideration if we have limited memory resources.

Sometimes one can be increased at the expense of another (space-time tradeoff), but in 61A, we are mostly concerned with time complexity.

It is not clear how to directly calculate $T(n)$, so we approximate it using asymptotic notation (as the problem size approaches infinity). The problem size depends on the type of input:

  • If the input is a sequence, the problem size is usually the length of the sequence.
  • If the input is a number, the problem size is usually the magnitude of the number.

To find $f(n)$ in the relation $T(n) \in \Theta(f(n))$:

  1. Find the worst-case number of primitive operations executed as a function of the input size. Primitive operations (arithmetic operations, assignment statements, etc.) each count as one unit of work and take constant time.
  2. Express this function in $\Theta(\cdot)$ notation.

Common complexities

Here are the most common complexities:

  • $\Theta(1)$ – constant growth. Any algorithm whose runtime is independent of its input size.
  • $\Theta(\log n)$ – logarithmic growth. At each step, the input size is reduced by some factor. Doesn't use the entire input.
  • $\Theta(n)$ – linear growth. At each step, the input size is decremented by a constant.
  • $\Theta(n \log n)$ – loglinear growth. Executing an $\Theta(\log n)$ algorithm $n$ times. Typical of divide-and-conquer algorithms that split input in half, recurse on both halves, and then combine the results in linear time.
  • $\Theta(n^2)$ – quadratic growth. Examining the input multiple times.
  • $\Theta(2^n)$ – exponential growth. Searching all possibilities. An exponential time algorithm is unreasonable and intractable.

Here is the hierarchy of the above complexities (from smallest to largest or best to worst): $\Theta(1) < \Theta(\log n) < \Theta(n) < \Theta(n \log n) < \Theta(n^2) < \Theta(2^n)$.
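A quick sketch evaluating each of these functions at a moderate input size makes the hierarchy concrete:

```python
import math

# Evaluate each common complexity at n = 100; the values come out
# strictly increasing, matching the hierarchy above.
n = 100
growth = [1, math.log2(n), n, n * math.log2(n), n**2, 2**n]
assert growth == sorted(growth)
```

Note the jump at the end: for $n = 100$, $n^2$ is $10{,}000$ but $2^n$ is an astronomically large 31-digit number, which is why exponential algorithms are called intractable.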

Python functions

In Python, primitive $\Theta(1)$ operations include arithmetic (+, -, *, /), logical (and, or, not), and comparison (==, <, >, etc.) operations, as well as assignment and indexing into a sequence. A function that contains only primitive operations runs in $\Theta(1)$ time. More generally, if the number of computations does not grow with the input size, the runtime is $\Theta(1)$. Example: foo3.

The two ways that a function can get a higher order runtime are iteration or recursion:

  • For iteration, we are concerned with how many times the loop is executed.
  • For recursion, we are concerned with how many recursive calls are needed to reach the base case(s).
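As a minimal illustration (a hypothetical countdown task, not from the course), an iterative and a recursive version of the same $\Theta(n)$ computation do the same amount of work: the loop runs $n$ times, and the recursion makes $n$ calls to reach the base case.

```python
def countdown_iter(n):
    """Count how many times the loop body runs."""
    steps = 0
    while n > 0:
        n -= 1
        steps += 1
    return steps

def countdown_rec(n):
    """Count how many recursive calls are needed to reach the base case."""
    if n == 0:
        return 0
    return 1 + countdown_rec(n - 1)

# Both counts grow linearly with n, so both functions are Theta(n).
assert countdown_iter(50) == countdown_rec(50) == 50
```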

For a conditional statement like this:

if test:
    suite 1
else:
    suite 2

Suppose the runtime of test is $T_{true}$ when it evaluates to a truthy value and $T_{false}$ when it evaluates to a falsy value. If the runtime of suite 1 is $S_1$ and the runtime of suite 2 is $S_2$, then the runtime of the entire statement is $\max(T_{true} + S_1, T_{false} + S_2)$, a worst-case approximation. Example: foo2.

If the function contains calls to helper functions, you must take their runtimes into consideration as well.

Complexity of Python data structures

An asterisk (*) indicates a notable difference between lists and sets.


In the following, seq is a Python sequence, $n = {\tt len(seq)}$, $k, c$ are constants:

  • ${\tt len(seq)} \in \Theta(1)$
  • ${\tt seq[k]} \in \Theta(1)$
  • ${\tt seq[k] = c} \in \Theta(1)$
  • ${\tt seq[a:b:c]} \in \Theta(m)$, where $m$ is the number of elements in the slice
  • ${\tt x\ in\ seq} \in \Theta(n)$
  • ${\tt constructor(seq)} \in \Theta(n)$, where constructor is tuple or list

The following apply only to lists:

  • ${\tt lst.append(...)} \in \Theta(1)$
  • ${\tt lst.pop(k)} \text{ and } {\tt lst.remove(k)} \in \Theta(n)$


In the following, s is a Python set, $n = {\tt len(s)}$, $k$ is a constant:

  • ${\tt len(s)} \in \Theta(1)$
  • ${\tt s.add(...)} \in \Theta(1)$
  • ${\tt x\ in\ s} \in \Theta(1)$ *
  • ${\tt s.pop()} \text{ and } {\tt s.remove(k)} \in \Theta(1)$ *
  • ${\tt set(seq)} \in \Theta({\tt len(seq)})$
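The asterisked difference can be observed directly. A rough timing sketch (absolute numbers vary by machine; only the relative gap matters):

```python
import timeit

n = 100_000
lst = list(range(n))
s = set(lst)

# Worst case for the list: the element is absent, so the scan
# must examine all n elements. The set lookup is a hash probe.
list_time = timeit.timeit(lambda: -1 in lst, number=100)  # Theta(n) per test
set_time = timeit.timeit(lambda: -1 in s, number=100)     # Theta(1) per test
assert set_time < list_time
```

For $n = 100{,}000$ the set lookup is typically thousands of times faster, which is why membership-heavy code should use sets or dictionaries rather than lists.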


In the following, d is a Python dictionary, $n = {\tt len(d)}$, $k$ is a constant:

  • ${\tt len(d)} \in \Theta(1)$
  • ${\tt d[k]} \in \Theta(1)$
  • ${\tt d[k] = c} \in \Theta(1)$
  • ${\tt d.pop(k)} \text{ and } {\tt del\ d[k]} \in \Theta(1)$
  • ${\tt dict(seq)} \in \Theta({\tt len(seq)})$

Recursive functions

For simple recursion, determine how many recursive calls are needed to reach a base case and how much time it takes to process the input per recursive call. The runtime of the function is usually the product of these two. Examples: is_palindrome, to_binary_string.

For tree recursion, if there are $k$ recursive calls at each step, generally the runtime is $\Theta(k^n)$. Examples: fib, foo2.

To improve the time complexity of a recursive algorithm, memoization is useful. However, it trades better time complexity for worse space complexity because it takes space to store the previous computations.
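As a sketch of this tradeoff, the tree-recursive fib analyzed below becomes linear-time when memoized with functools.lru_cache; the cache of stored results is the extra space:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n == 0 or n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

# Each value 0..n is computed only once, so the runtime drops from
# Theta(2^n) to Theta(n), at the cost of Theta(n) space for the cache.
assert fib(30) == 1346269
```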

Iterative functions

In general, the runtime of an iterative function is the product of the runtime of the loop body and the number of iterations. Examples: max, is_palindrome2.

For nested loops, analyze how many times the inner loop runs. Example: has_dups.

Iterative functions are better than recursive functions in space complexity. Iterative functions usually have constant space ($\Theta(1)$) because they do not need to keep any information on the stack. Recursive functions have higher space complexity because they accumulate stack frames with each recursive call.
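A minimal sketch of this difference (hypothetical count_iter and count_rec helpers): the recursive version accumulates one stack frame per call and exceeds CPython's recursion limit (commonly 1000 frames), while the iterative version runs in constant space at any input size.

```python
def count_rec(n):
    # Theta(n) space: one stack frame per recursive call.
    return 0 if n == 0 else 1 + count_rec(n - 1)

def count_iter(n):
    # Theta(1) space: only two local variables, no matter how large n is.
    total = 0
    while n > 0:
        total += 1
        n -= 1
    return total

assert count_iter(100_000) == 100_000   # fine
try:
    count_rec(100_000)                  # blows the stack
except RecursionError:
    pass
```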



def max(lst):
    """Returns the maximum element of LST.
    >>> max([3, 7, 2, 5])
    7
    """
    x = lst[0]
    for i in range(1, len(lst)):
        if lst[i] > x:
            x = lst[i]
    return x

Let $n = {\tt len(lst)}$. The loop runs from ${\tt i} = 1$ to ${\tt i} = n - 1$, in total $(n - 1) - 1 + 1 = n - 1$ times. The runtime of the body is $\Theta(1)$ because it is composed of primitive operations. Therefore, the runtime of the function is $\Theta(1) \cdot \Theta(n - 1) = \Theta(n)$ by property 2.


def foo(x):
    while x > 0:
        x //= 3

Let $n = {\tt x}$. The loop runs until ${\tt x} \leq 0$. Since the input size is scaled down by a constant (3) each iteration, the runtime of foo is $\Theta(\log_3 n) = \Theta(\log n)$.
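A hypothetical instrumented variant of foo confirms the iteration count:

```python
def foo_counted(x):
    """Like foo, but returns how many times the loop body ran."""
    steps = 0
    while x > 0:
        x //= 3
        steps += 1
    return steps

# For x = 3**k the loop runs exactly k + 1 times: k divisions to
# reach 1, plus one more to reach 0 -- logarithmic in x.
assert foo_counted(3**4) == 5
assert foo_counted(3**10) == 11
```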


def foo2(x, y, n):
    if x > y:
        for i in range(n):
            print(i)
    else:
        for i in range(n):
            for j in range(n):
                print(i, j)

The first suite runs in $\Theta(n)$ time; the second suite runs in $\Theta(n^2)$ time. So the worst case runtime of foo2 is $\Theta(\max(n, n^2)) = \Theta(n^2)$.


def has_dups(lst):
    """Returns True if LST has duplicate element(s).
    >>> has_dups([1, 2, 2, 3])
    True
    >>> has_dups([1, 2, 3])
    False
    """
    for i in range(len(lst)):
        for j in range(i + 1, len(lst)):
            if lst[i] == lst[j]:
                return True
    return False

Let $n = {\tt len(lst)}$.

Solution 1 (elegant): in the worst case, each pair of elements in lst must be compared, so ${\tt has\_dups} \in \Theta({n \choose 2}) = \Theta(\frac{n(n-1)}2) = \Theta(n^2)$ by properties 1, 2, and 3.

Solution 2 (careful bookkeeping): Since everything happens in the inner loop, we can count how many times it runs. For the outer loop, ${\tt i}$ ranges from $0$ to $n - 1$. For each iteration of the outer loop, ${\tt j}$ ranges from ${\tt i} + 1$ to $n - 1$, which means it runs at most $(n - 1) - ({\tt i} + 1) + 1$ times. So the total number of times the inner loop runs is $\sum_{i=0}^{n - 1} ((n - 1) - (i + 1) + 1) = \sum_{i=0}^{n - 1} (n - i - 1)$. Ignoring constant factors:
${\tt has\_dups} \in \Theta\left(\sum_{i=0}^{n - 1} (n - i - 1)\right)$
$= \Theta\left(\sum_{i=0}^{n - 1} (n - 1) - \sum_{i=0}^{n - 1} i\right)$
$= \Theta\left(n(n-1) - \frac{n(n-1)}{2}\right)$
$= \Theta\left(\frac{n(n-1)}{2}\right)$
$= \Theta(n^2)$

Solution 3 (lax bookkeeping): The outer loop runs $n$ times, and the inner loop runs at most $n - 1$ times for each iteration of the outer loop. Therefore, the runtime is $\Theta(n) \cdot \Theta(n - 1) = \Theta(n^2)$ by property 2.
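For contrast, the $\Theta(1)$ average-case set membership test from the table above yields a single-pass variant (a sketch, not part of the original example) that runs in $\Theta(n)$ on average:

```python
def has_dups_fast(lst):
    """Returns True if LST has duplicate element(s), in Theta(n) average time."""
    seen = set()
    for x in lst:
        if x in seen:    # Theta(1) average-case membership test
            return True
        seen.add(x)      # Theta(1)
    return False

assert has_dups_fast([1, 2, 2, 3]) is True
assert has_dups_fast([1, 2, 3]) is False
```

This is a space-time tradeoff: the quadratic version uses $\Theta(1)$ extra space, while this one uses up to $\Theta(n)$ space for the set.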


def is_palindrome(s):
    """Returns whether string S is a palindrome (reads the same forwards and backwards).
    >>> is_palindrome("abba")
    True
    >>> is_palindrome("11011")
    True
    >>> is_palindrome("110111")
    False
    """
    if len(s) <= 1:
        return True
    return s[0] == s[-1] and is_palindrome(s[1:-1])

Let $n = {\tt len(s)}$. Each recursive call takes $\Theta(1)$ time to check s[0] == s[-1] and at worst $\Theta(n - 2) = \Theta(n)$ time to slice s[1:-1]. The initial call to is_palindrome ends after $\frac n2$ recursive calls because string s is truncated by two letters each call. Therefore, the worst case runtime is $\Theta((1 + n) \cdot \frac n2) = \Theta(n^2)$.


def is_palindrome2(s):
    """Returns whether string S is a palindrome (reads the same forwards and backwards).
    >>> is_palindrome2("abba")
    True
    >>> is_palindrome2("11011")
    True
    >>> is_palindrome2("110111")
    False
    """
    for i in range(len(s) // 2):
        if s[i] != s[-i-1]:
            return False
    return True

Let $n = {\tt len(s)}$. The loop runs $\frac n2$ times, and the body of the loop runs in constant $\Theta(1)$ time. Therefore, the runtime is $\Theta(n / 2) = \Theta(n)$.


def to_binary_string(num):
    """Returns the binary representation of positive NUM.
    >>> to_binary_string(5)
    '101'
    >>> to_binary_string(15)
    '1111'
    """
    if num <= 1:
        return str(num)
    return to_binary_string(num // 2) + str(num % 2)

Let $n = {\tt num}$. The input size is scaled down by a constant (2) each recursive call, and each call does constant work, so the runtime is $\Theta(\log_2 n) = \Theta(\log n)$ by property 4.


def fib(n):
    if n == 0 or n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

Solution 1: Each call makes two recursive calls, so the number of calls roughly doubles with each level of recursion, giving a runtime of about $\Theta(2^n)$.

Solution 2: The recursion tree looks like the following:

[Figure: recursion tree for fib]

The tree has about $2^n$ nodes, so the runtime is $\Theta(2^n)$.
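The node count can be checked with an instrumented variant (a hypothetical fib_calls helper, used only for illustration):

```python
def fib_calls(n):
    """Returns (fib(n), number of calls made to compute it)."""
    if n == 0 or n == 1:
        return 1, 1
    a, ca = fib_calls(n - 1)
    b, cb = fib_calls(n - 2)
    return a + b, 1 + ca + cb

for n in range(2, 20):
    _, calls = fib_calls(n)
    assert calls <= 2**n           # at most 2^n nodes in the tree
    assert calls > 1.6**(n - 1)    # yet still exponential in n
```

The tree is not perfectly balanced (the fib(n - 2) branch is shorter), so $2^n$ is an overestimate, but the count still grows exponentially.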


def foo2(n):
    if n < -5:
        return 2
    return n * foo2(n - 1) * foo2(n - 2) * foo2(n - 3)

The function makes 3 recursive calls for each call, so the runtime is $\Theta(3^n)$.


def foo3(n):
    return n + max([1, 2, 3]) + fib(20)

Because fib(20) has the same runtime no matter what n is, ${\tt fib(20)} \in \Theta(1)$. The same applies to max([1, 2, 3]). Therefore, the runtime is $\Theta(1)$.