Imagine you are in a shop buying a TV. You have found several that fit your needs – they have all the fancy stuff you are looking for – 4K, OLED, Dolby – but they don’t have a price. How would you choose a TV now? The price of a TV is exactly what the Big O is for an algorithm. In this post I am going to give an intro to the Big O and its benefits.
Before I start, I want to remind you that the Big O in this context is neither the Japanese anime series nor the bar in Soho!
What is the Big O?
The Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm. What does that mean? It means the Big O helps you find out how complex an algorithm is, based on its input. Or in other words – how expensive it is.
It is useful because once an algorithm is evaluated, you can:
- compare and improve the complexity of your algorithms
- choose the design of an algorithm wisely, once you know how complex the candidates are (like buying a TV, isn’t it?)
- find bottlenecks in your code
- optimize the algorithm
Examples of how to calculate the Big O complexity
Let’s have a look at an example which sums the numbers up to n.
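The original snippet is not reproduced here, so this is a minimal sketch of what such a function might look like (the name `sum_up_to` is my own):

```python
def sum_up_to(n):
    """Sum all numbers from 1 to n."""
    total = 0
    for i in range(1, n + 1):  # this loop runs n times
        total += i
    return total

print(sum_up_to(5))  # 1 + 2 + 3 + 4 + 5 = 15
```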
What does the code do? It iterates from 1 to n and sums all the numbers. The thing that matters for complexity here is the loop, which will be executed n times. This means the algorithm has O(N) complexity. It is linear. Easy!
Adding a bit more complexity to the code:
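Again, the original snippet isn’t shown, so here is a hypothetical nested-loop version (the name `count_pairs` is my own):

```python
def count_pairs(a, b):
    """Count every pairing of an item from a with an item from b."""
    count = 0
    for x in a:          # runs n times (n = len(a))
        for y in b:      # runs m times for each x (m = len(b))
            count += 1
    return count

print(count_pairs([1, 2, 3], [4, 5, 6, 7]))  # 3 * 4 = 12
```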
This code runs the inner loop for each value of the outer loop, which means the body is executed n * m times. So the complexity is O(N * M).
Again, tweaking the code a bit:
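The original snippet isn’t reproduced here either; a sketch with two sequential loops could look like this (the name `sum_both` is my own):

```python
def sum_both(a, b):
    """Sum the items of a, then the items of b."""
    total = 0
    for x in a:   # n iterations
        total += x
    for y in b:   # then m iterations
        total += y
    return total

print(sum_both([1, 2, 3], [4, 5]))  # 15
```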
This code will iterate n times in the first loop and then m times in the second one. This means we execute the loops sequentially n and then m times O(N + M). Simple, isn’t it?
Let’s complicate things a bit with recursion. This code returns the n-th member of the Fibonacci sequence:
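The original code isn’t shown here, so this is a sketch of the naive recursive version (assuming the common zero-based convention where fib(0) = 0 and fib(1) = 1):

```python
def fib(n):
    """Return the n-th Fibonacci number, computed naively via recursion."""
    if n <= 1:
        return n
    # two recursive calls per invocation -> 2 branches in the call tree
    return fib(n - 1) + fib(n - 2)

print(fib(4))  # 3
```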
As we know, recursion produces a tree in its call hierarchy. If we ask for the 4th member, it looks like this:
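Since the original picture is not reproduced here, the call tree for fib(4) can be sketched in text:

```
fib(4)
├─ fib(3)
│  ├─ fib(2)
│  │  ├─ fib(1)
│  │  └─ fib(0)
│  └─ fib(1)
└─ fib(2)
   ├─ fib(1)
   └─ fib(0)
```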
I won’t go into the details and a proof of the equation; just know that if you see multiple recursive calls in a function, there is a formula to calculate the complexity of the algorithm: O(branches ^ depth). In our example, each node has 2 branches and the depth is 4, which is actually the input n. So the complexity is O(2 ^ n).
Here are a few things to remember:
- drop non-dominant terms and constants
- when you “do this and when you are done, do that” -> O(A + B)
- when you “do this each time you do that” -> O(A * B)
- when you halve N until you get to 1 -> O(log N)
- when a recursive function has multiple calls -> O(branches ^ depth)
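The “halve N until you get to 1” case is the classic binary search. A minimal sketch (my own illustration, not from the original post):

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Each step halves the remaining search range -> O(log N).
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```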
To compare complexities:
The picture above shows how the different complexities compare to one another, and how some complexities dominate others. Once you know the price of an algorithm, or of its different parts, you can find a bottleneck. For example, if your algorithm has two parts with complexities O(log x) + O(x!), you can see in the picture that x! is much more complex than log x. x! is the dominant term and takes the majority of the time to compute the result, which means this is the place to optimise.
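To see the dominance numerically, you can compare log x against x! for even a small x, using functions from Python’s standard math module:

```python
import math

x = 10
print(math.log(x))        # roughly 2.3
print(math.factorial(x))  # 3628800 -- this term dwarfs the logarithm
```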
To sum up
This post is just a very, very basic intro to the Big O. I wanted to show that it is not rocket science to understand it, and that it is useful when looking for bottlenecks and optimisation. Think about the Big O as the price of a TV.
I used the book “Cracking the Coding Interview” as a starting point, along with exercises on the internet to solve. Now I pay attention to how expensive my code is to execute!
- think about the Big O as the price of a TV
- the Big O is useful when looking for bottlenecks and optimisation