Big O Notation: A Full Guide to Complexity Analysis in 10 Minutes

Analyzing an algorithm is essential when building a large-scale software system, and engineers aim to write the most efficient code they can. Big O notation is a standardised way to describe an algorithm's efficiency in the worst-case scenario, i.e. its upper bound, as the input size grows.

Big O is usually written as O(f(n)), read as "order of f(n)", where f(n) denotes the number of operations the algorithm performs for an input of size n. This describes the asymptotic behaviour of the algorithm, not the exact operation count, which is usually sufficient to compare algorithms and find the more efficient one.

The mathematical definition of Big O notation says: given two functions f(n) and g(n), we say that f(n) is O(g(n)) if there exist constants c > 0 and n0 >= 0 such that f(n) <= c*g(n) for all n >= n0.
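As a small worked example of this definition (our own, not from the text above), take f(n) = 3n + 2 and g(n) = n:

```latex
f(n) = 3n + 2 \le 3n + 2n = 5n \quad \text{for all } n \ge 1,
```

so choosing c = 5 and n0 = 1 satisfies the definition, and f(n) = O(n).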

Asymptotic Analysis:

Let’s look at the most common complexities:

  1. O(1): This denotes constant time complexity. The running time stays the same irrespective of the size of the input.
  2. O(n): This denotes linear time complexity. The running time grows linearly with the size of the input.
  3. O(log n): Here the running time grows logarithmically with the input size. In simple terms, it grows very slowly compared to the growth of the input.
  4. O(n log n): Here the running time grows as the product of linear and logarithmic growth. This is usually more efficient than quadratic time complexities but less efficient than linear or logarithmic ones.
  5. O(n^c): Here the runtime grows as the c-th power of the input size. If c is 2 it grows quadratically; if c is 3, cubically. For very large inputs, such algorithms perform poorly.
  6. O(c^n): This denotes exponential growth in runtime as the input size increases, which is highly inefficient for large inputs.
  7. O(n!): This is among the most inefficient complexities. The running time grows factorially with the input size.
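To make the first few classes concrete, here is a minimal sketch (the function names are our own, purely illustrative) showing O(1), O(n), and O(log n) behaviour:

```python
def first_element(items):
    """O(1): a single operation, regardless of how long the list is."""
    return items[0]

def total(items):
    """O(n): one pass over the input, so work grows linearly with len(items)."""
    s = 0
    for x in items:
        s += x
    return s

def halvings(n):
    """O(log n): the problem size is cut in half on every iteration."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count
```

For example, `halvings(1024)` returns 10, because 1024 can be halved ten times before reaching 1 — that slow growth is exactly what logarithmic complexity looks like.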

The same notation can be used to denote space complexity as well. Instead of the runtime, we consider the amount of memory the algorithm consumes while running.
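As a sketch of the space-complexity side (again with illustrative names of our own), compare summing with a single accumulator against storing every partial sum:

```python
def total_constant_space(items):
    """O(1) auxiliary space: only one accumulator, however long the input."""
    s = 0
    for x in items:
        s += x
    return s

def prefix_sums(items):
    """O(n) auxiliary space: stores one running total per input element."""
    sums = []
    running = 0
    for x in items:
        running += x
        sums.append(running)
    return sums
```

Both run in O(n) time, but the extra memory they use differs: constant for the first, linear for the second.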

When analysing an algorithm, we usually consider three scenarios:

  • Best Case Scenario: It denotes the minimum space or time required by the algorithm for an input.
  • Average Case Scenario: It denotes the average space or time required by the algorithm over all kinds of expected inputs.
  • Worst Case Scenario: It denotes the maximum space or time required by the algorithm for any input. This is usually what programmers look at while designing a large-scale system.
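Linear search is a standard illustration of how the same algorithm exhibits all three scenarios; a minimal sketch:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent.

    Best case: target is the first element  -> O(1).
    Worst case: target is last or absent    -> O(n).
    Average case: roughly n/2 comparisons   -> O(n).
    """
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1
```

Since Big O describes the worst case, linear search is said to be O(n), even though it can finish after a single comparison when it gets lucky.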

Now, let’s look at some mathematical properties of Big O notation:

  1. Reflexivity: For any function f(n), f(n) = O(f(n)).
  2. Transitivity: If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
  3. Sum Rule: If f(n) = O(g(n)) and h(n) = O(g(n)), then f(n) + h(n) = O(g(n)).
  4. Constant Factor: For any constant c > 0 and functions f(n) and g(n), if f(n) = O(g(n)), then cf(n) = O(g(n)).
  5. Product Rule: If f(n) = O(g(n)) and h(n) = O(k(n)), then f(n) * h(n) = O(g(n) * k(n)).
  6. Composition Rule: If f(n) = O(g(n)) and h(n) → ∞ as n → ∞, then f(h(n)) = O(g(h(n))).

How to find the Big O notation for an algorithm?

  1. Look at the algorithm closely to find the dominant term, i.e. the term whose growth is fastest as the input size increases.
  2. Identify the order of growth as the input size increases. Ignore any terms of lower order than the dominant term, and also ignore constant terms.
  3. Write the Big O notation as O(f(n)), where f(n) is the dominant term. If different sections of the algorithm have different complexities, they can be written additively, e.g. O(n) + O(n^2); we keep the higher-order term and call the whole algorithm O(n^2).
  4. Remove any constant factors from the term if present. For example, O(3n) is written as O(n). It is important to understand that Big O notation captures asymptotic behaviour, not exact values.

Big O Examples:

Some algorithm examples along with their time complexities:

O(1): Addition of two numbers.
O(n): Linear Search.
O(log n): Binary Search.
O(n log n): Merge Sort.
O(n^2): Bubble Sort, Insertion Sort, etc.
O(2^n): Tower of Hanoi problem.
O(n!): Generating all permutations of a string.

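Of the examples above, binary search is a good one to see in code, since its O(log n) bound comes directly from halving the search interval on every iteration; a minimal sketch:

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    O(log n): each iteration halves the interval [lo, hi].
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

On a list of a million elements, this needs at most about 20 comparisons, versus up to a million for linear search.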
Finally, it is important to understand that for asymptotic analysis of an algorithm, apart from Big O notation, there is also Big Ω notation, which represents a lower bound, and Big θ notation, which represents a tight bound (both upper and lower). Note that these notations describe bounds on a function and are distinct from the best/average/worst case scenarios, although in practice the best case is often stated with Big Ω and a tight average case with Big θ.

Learn more about Big O notation from these resources:
1. https://en.wikipedia.org/wiki/Big_O_notation

2. https://www.freecodecamp.org/news/big-o-notation-why-it-matters-and-why-it-doesnt-1674cfa8a23c/
