Analyzing an algorithm is an important part of building a large-scale software system. Engineers aim to write code that is as efficient as possible. Big O notation is a standardised way to describe the efficiency of an algorithm in terms of its worst case, or upper bound, as the input size grows.
Big O is usually written in the form O(f(n)), read as "order of f(n)", where f(n) denotes the number of operations the algorithm performs for an input of size n. It describes the asymptotic behaviour of the algorithm rather than an exact operation count, which is usually sufficient to compare algorithms and find the more efficient one.
The mathematical definition of Big O notation says: given two functions f(n) and g(n), we say that f(n) is O(g(n)) if there exist constants c > 0 and n0 >= 0 such that f(n) <= c*g(n) for all n >= n0.
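As a quick worked example of this definition (the constants chosen below are just one valid choice), consider f(n) = 3n + 5:

```latex
% Worked example: f(n) = 3n + 5 is O(n).
% Choose c = 4 and n0 = 5. For all n >= 5 we have 5 <= n, so:
\[
  3n + 5 \le 3n + n = 4n \quad \text{for all } n \ge 5,
  \qquad\text{hence}\qquad 3n + 5 = O(n).
\]
```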
Asymptotic Analysis:
Let’s look at some of the most common complexities (a short code sketch follows this list):
- O(1): This denotes constant time complexity. The running time stays the same irrespective of the size of the input.
- O(n): This denotes linear time complexity. The running time grows in direct proportion to the size of the input.
- O(log n): Here the running time grows logarithmically with the input size. In simple terms, it grows much more slowly than the input itself.
- O(n log n): The running time grows as the product of a linear and a logarithmic factor. This is usually more efficient than quadratic time but less efficient than linear or logarithmic time.
- O(n^c): The running time grows as a polynomial of degree c in the input size. If c is 2 it grows quadratically, if c is 3 it grows cubically. For very large inputs, such algorithms become noticeably slow.
- O(c^n): The running time grows exponentially with the input size, which makes such algorithms impractical for large inputs.
- O(n!): The running time grows factorially with the input size; this is one of the most inefficient complexities.
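To make a few of these growth rates concrete, here is a minimal Python sketch with one illustrative function per complexity class; the function names and inputs are made up for this example.

```python
def first_element(items):      # O(1): one operation regardless of input size
    return items[0]

def total(items):              # O(n): touches every element exactly once
    s = 0
    for x in items:
        s += x
    return s

def halvings(n):               # O(log n): n is halved on every iteration
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

def all_pairs(items):          # O(n^2): nested loops over the same input
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs
```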
The same notation can be used to describe space complexity as well. Instead of the running time, we consider the amount of memory the algorithm consumes while running.
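As an illustrative sketch (the function names are my own), the first function below uses O(1) extra space to compute a sum, while the second uses O(n) extra space because it stores one prefix sum per element:

```python
def running_total(items):
    # O(1) extra space: only a single accumulator is stored
    total = 0
    for x in items:
        total += x
    return total

def prefix_sums(items):
    # O(n) extra space: one partial sum is kept for every element
    sums = []
    total = 0
    for x in items:
        total += x
        sums.append(total)
    return sums
```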
When analysing an algorithm, we usually consider three scenarios:
- Best Case Scenario: The minimum space or time the algorithm requires for any input.
- Average Case Scenario: The average space or time the algorithm requires over all expected inputs.
- Worst Case Scenario: The maximum space or time the algorithm requires for any input. This is usually what programmers look at when designing a large-scale system.
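For example, here is a minimal linear search sketch in Python annotated with its three cases (the assumed element positions are only for illustration):

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

# Best case:    target is the first element      -> 1 comparison,    O(1)
# Average case: target is near the middle        -> ~n/2 comparisons, O(n)
# Worst case:   target is last or not present    -> n comparisons,    O(n)
```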
Now, let’s look at some mathematical properties of Big O notation (a short worked example follows the list):
- Reflexivity: For any function f(n), f(n) = O(f(n)).
- Transitivity: If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
- Sum Rule: If f(n) = O(g(n)) and h(n) = O(g(n)), then f(n) + h(n) = O(g(n)).
- Constant Factor: For any constant c > 0 and functions f(n) and g(n), if f(n) = O(g(n)), then cf(n) = O(g(n)).
- Product Rule: If f(n) = O(g(n)) and h(n) = O(k(n)), then f(n) * h(n) = O(g(n) * k(n)).
- Composition Rule: If f(n) = O(g(n)) and g(n) = O(h(n)), then f(g(n)) = O(h(n)).
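As a small worked example of the constant-factor and sum rules, consider f(n) = 3n^2 + 5n + 7:

```latex
% Each term is O(n^2): 3n^2 = O(n^2), 5n = O(n^2), 7 = O(n^2),
% so by the sum rule their sum is also O(n^2).
\[
  3n^2 + 5n + 7 \le 3n^2 + 5n^2 + 7n^2 = 15n^2
  \quad \text{for all } n \ge 1,
  \qquad\text{hence}\qquad 3n^2 + 5n + 7 = O(n^2).
\]
```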
How to find the Big O notation for an algorithm?
- Look at the algorithm closely to identify the dominant term, i.e. the term with the highest growth rate as the input size increases.
- Next, determine the order of growth as the input size increases. Ignore any terms of lower order than the dominant term, and ignore constant terms.
- Write the Big O notation as O(f(n)), where f(n) is the dominant term. If different sections of the algorithm have different complexities, they can be written additively, for example O(n) + O(n^2); we then keep the higher-order term and call the whole thing O(n^2).
- Remove any constant factors from the term if present. For example, O(3n) is written as O(n). Remember that Big O notation describes asymptotic behaviour, not exact values. The short sketch below walks through these steps for a simple function.
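A minimal sketch applying these steps (the function below is a made-up example for illustration, not from any library):

```python
def print_and_count_pairs(items):
    # Part 1: a single loop -> roughly n operations -> O(n)
    for x in items:
        print(x)

    # Part 2: nested loops -> roughly n * n operations -> O(n^2)
    count = 0
    for a in items:
        for b in items:
            if a == b:
                count += 1
    return count

# Step 1-2: the dominant term is n^2, since it grows faster than n.
# Step 3:   O(n) + O(n^2) -> keep the higher-order term -> O(n^2).
# Step 4:   constant factors are dropped, so the overall result is O(n^2).
```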
Big O Examples:
Some algorithm examples along with their time complexities:
| Complexity | Example algorithm |
| --- | --- |
| O(1) | Addition of two numbers |
| O(n) | Linear Search |
| O(log n) | Binary Search |
| O(n log n) | Merge Sort |
| O(n^c) | Bubble Sort, Insertion Sort, etc. |
| O(c^n) | Tower of Hanoi problem |
| O(n!) | All permutations of a string |
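Taking one entry from the table, here is a minimal iterative binary search sketch in Python; it halves the search range on every step, which is what gives it O(log n) time.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2           # middle of the current range
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1               # discard the lower half
        else:
            hi = mid - 1               # discard the upper half
    return -1                          # range halves each iteration -> O(log n)

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # prints 3
```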
Learn more about Big O notation from these resources:
1. https://en.wikipedia.org/wiki/Big_O_notation
2. https://www.freecodecamp.org/news/big-o-notation-why-it-matters-and-why-it-doesnt-1674cfa8a23c/