Now that we are familiar with the concept of time complexity of an algorithm, it's time to study Big O itself, as well as ways to analyze and compute the time complexity of algorithms ourselves. This is quite important, be it to better understand the complexity of already existing algorithms or to estimate the effectiveness of our own programs and algorithms. Indeed, writing an algorithm is not enough: proving that it is efficient is the key.
Big O notation
As we already mentioned, we will use the Big O notation to measure the efficiency of algorithms. As a matter of fact, we have borrowed this symbol from mathematics; however, we shall not worry about the mathematical meaning or definition of the Big O. Less formally, we can say that an algorithm has the time complexity O(f(n)) if its number of operations grows similarly to (or slower than) the function f(n) when the input size n is a large number. In order to avoid unnecessary abstractness, let's consider the following task: given an n×n table with integers in its cells, find the number x in the given table.
Alice and Bob have come up with their own algorithms to solve the problem. Bob's algorithm consists in scanning every cell of the table and checking if the corresponding value is equal to x. This implies a maximum of n² comparisons, which means that the time complexity of Bob's algorithm is O(n²). On the other hand, Alice somehow knows in advance in which column the number x is located; hence, she only needs to scan the elements of that column. A column consists of n cells, meaning that Alice's algorithm will take O(n) time.
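The two approaches can be sketched in Python as follows (the function names and the list-of-lists table representation are our own illustration, not part of the original task):

```python
def bob_find(table, x):
    """Bob's algorithm: scan every cell of the n-by-n table.
    Up to n * n comparisons in the worst case: O(n^2)."""
    for row in table:
        for value in row:
            if value == x:
                return True
    return False


def alice_find(table, x, column):
    """Alice's algorithm: she already knows the column, so she scans
    only its n cells. Up to n comparisons: O(n)."""
    for row in table:
        if row[column] == x:
            return True
    return False
```

Both functions stop as soon as x is found, which is why Big O describes an upper bound rather than the exact number of steps.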
What bound?
Basically, on a small table, say 10×10, Bob will have to perform a maximum of 100 operations; meanwhile, Alice will perform no more than 10. Not a big difference really, is it? But what if we have an n×n table for a large n? In this case, n² will be considerably bigger than n, as shown below. This is exactly what determines the efficiency of an algorithm – the way it behaves with large input sizes. Hence, we conclude that Alice's O(n) algorithm is faster than Bob's O(n²), as the Big O notation suggests.
However, a simple question arises: why can't we write simply n² or n for the complexities? Why do we need to add this beautiful round letter in front of these functions? Well, imagine that the element x is placed in the first cell of the table. Bob will find it immediately and terminate his algorithm. How many steps does he perform: n²? No, just one.
That is why we use the Big O: roughly speaking, it describes the upper bound for the function's growth rate. This is one of the Big O notation's essential advantages. It means that you can calculate how much time to allocate for processing a certain amount of input data and be sure that the algorithm will process it all in due time. In practice, an algorithm might sometimes work even better than what the Big O notation shows for its complexity, but not worse.
Calculating complexity: constants
Let's look at a simple example. You want to find the maximum of n numbers. You will probably decide to go through them and compare every new element with the maximum so far. You will make n − 1 comparisons, so the time complexity is O(n).
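As a minimal sketch of this single-pass approach (the function name is our own):

```python
def find_max(numbers):
    """Scan once, comparing each new element with the maximum so far.
    For n numbers this makes n - 1 comparisons: O(n) time."""
    maximum = numbers[0]
    for value in numbers[1:]:   # n - 1 comparisons in total
        if value > maximum:
            maximum = value
    return maximum
```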
However, algorithms are usually quite complex and consist of several steps, whose time complexities may belong to different complexity classes from the previous topic. Therefore, to be able to calculate complexities by yourself, it is essential for you to get familiar with the basic properties of the Big O:
- Ignore the constants. As we discussed above, while calculating complexities, we focus solely on the behavior of our algorithm with large input sizes. Therefore, repeating some steps a constant number of times does not affect the complexity. For example, if you traverse n elements 5 times, we say that the algorithm's time complexity is O(n), and not O(5n). Indeed, there is no significant difference between 1 000 000 000 and 5 000 000 000 operations performed by the algorithm: in either case, we conclude that it is relatively slow. Formally, we write O(5n) = O(n). It is similar for the rest of the complexity classes.
More properties
But that is not all: there are other rules that arise naturally, and you should be aware of them:
- Applying a procedure n times. What if you need to go over n elements n times? That multiplier is not a constant anymore, as it depends on the input size. In this case, the time complexity becomes O(n²). It's simple: you perform n times an action proportional to n, which means the result is proportional to n². In Big O notation, we write it as O(n · n) = O(n²).
- Smaller terms do not matter. Recall from the loops in the previous topic that another common case is when, after doing some actions, you need to do something else. For instance, you traverse n elements n times and then traverse the n elements once more. In this case, the complexity is still O(n²). The additional actions do not affect the complexity, which remains proportional to n². In Big O notation, it looks like this: O(n² + n) = O(n²). All in all, always keep the largest term in Big O and drop all the others. It is rather easy to tell which terms are larger based on the ordering of complexity classes we already know. Naturally, a question arises: why is it correct to ignore the smaller terms? Let's illustrate with the example above:
The images show that when the input size n is large, the graphs of n² and n² + n almost coincide (their growth rates are similar). As for n² itself, for large n this value is considerably greater than n, so adding n to it barely changes the value of the function. This is why we can rightfully write O(n²) instead of O(n² + n).
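To make the n² + n count concrete, here is a sketch (again with an illustrative counting function of our own):

```python
def quadratic_plus_linear(items):
    """n * n operations followed by n more: n^2 + n in total,
    which is still O(n^2) -- the smaller term n is dropped."""
    n = len(items)
    count = 0
    for _ in items:
        for _ in items:      # n^2 operations
            count += 1
    for _ in items:          # n extra operations: the smaller term
        count += 1
    return count             # n^2 + n operations in total
```

For n = 1000, the function performs 1 001 000 operations, only 0.1% more than n² alone, which is why the linear term is ignored.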
Conclusion
Let's recap what we've learned in this topic. Big O notation provides a theoretical upper bound on the running time as a function of the input size. While calculating the complexity of your own algorithms, it is important to:
- Ignore the constants;
- Multiply complexities of "nested" actions;
- Keep only the "big" terms.
Mastering Big O notation and understanding time complexity is crucial for building efficient algorithms, optimizing code performance, and ultimately delivering a better user experience. As you delve deeper into the world of algorithms, you'll gain a clearer picture of how to calculate and apply these concepts, making you a more proficient and effective programmer. So, let this topic serve as a motivator to explore further and become even more skilled in this fascinating field.