When we write an algorithm to get something done, executing the code consumes some of our computer's resources. These resources are not limitless, so we should keep their use in check on every execution. In this short article, we will walk through the beautiful concepts behind how our algorithms behave when the input size is large enough that execution takes a significant amount of time.
There’s a caveat here: we are not dealing with improving our code’s raw runtime; rather, we are concerned with how our code’s runtime grows when the input is large. To illustrate, suppose we want to develop an algorithm that sorts all the netizens of a densely populated country by their annual income. This is a huge task, since the input size might be in the order of millions or billions. So we need to think about the Time Complexity of our code so that we don’t waste resources (time and space; here we will focus on time). The algorithm might be only a 10-line piece of code, but that doesn’t mean we can ignore the total time needed to run it over all the netizens just because the code is short. Thus, we want to know the runtime of our algorithm when the input is very large.
Big O is denoted by O( ), and it refers to the worst-case scenario for our code’s runtime. If we know the O( ) of our code, we can guarantee that the runtime will grow no faster than that bound when the input is large.
For the math-savvy: from the point of view of mathematics, consider a function T(n), where T is the time or effort needed and n is the input size. Our goal is to find a function X(n) that asymptotically upper-bounds T(n). Here, "asymptotically" accounts for large input sizes, and "upper-bound" means blanketing T(n) from above (since we want to know the longest time). In other words, we want to know the worst-case scenario, i.e., how long the code can take to run, for T(n).
T(n) is O(X(n)) if and only if T(n) ≤ C · X(n) for all n ≥ n₀, for some constants C > 0 and n₀ ≥ 0
To simplify: X(n) upper-bounds T(n) iff C · X(n) stays above or on T(n) after a threshold input value n₀. Then O(X(n)) is the Big O of our code.
Steps to find O( ):
- Determine T(n) of your code
- Find a suitable X(n) that behaves similar to T(n)
- Tune C to a higher value (if C · X(n) resides below T(n)) or a lower value (if C · X(n) sits far above T(n)) so that C · X(n) lies just above T(n). The bound should not be far above T(n)
- If C · X(n) upper-bounds T(n) asymptotically (it may upper-bound T(n) at smaller n values as well), then O(X(n)) is our desired upper bound. It implies that as n grows large, the runtime will follow X(n)’s behavior.
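The steps above can be sketched numerically. The helper below is a hypothetical illustration (not from the original article): it spot-checks whether T(n) ≤ C · X(n) holds at every sampled n past the threshold n₀. A finite check like this is evidence, not a proof, of an asymptotic bound.

```python
def upper_bounds(T, X, C, n0, n_max=10_000):
    """Check T(n) <= C * X(n) for all sampled n in [n0, n_max].

    A finite-range spot check: it can only provide evidence that
    X(n) asymptotically upper-bounds T(n), not a proof.
    """
    return all(T(n) <= C * X(n) for n in range(n0, n_max + 1))

# Example: T(n) = 4n needs C = 3 before X(n) = 2n can bound it.
T = lambda n: 4 * n
X = lambda n: 2 * n

print(upper_bounds(T, X, C=1, n0=1))  # False: 2n sits below 4n
print(upper_bounds(T, X, C=3, n0=1))  # True: 6n stays above 4n
```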
Breakdown of the overall process with a simple example:
The code here prints x values from 0 to n-1 (n numbers in total). So the total time for this single for loop is proportional to n.
For the full code snippet above, the total runtime would be n + n + n + n = 4n
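Since the snippets appear as images in the original post, here is a hedged reconstruction of what they describe, under the assumption that the full snippet contains four such sequential loops, each doing n operations:

```python
n = 10  # assumed sample input size

# A single loop printing 0 .. n-1: n operations.
for x in range(n):
    print(x)

# Four sequential loops of n iterations each give
# n + n + n + n = 4n operations in total.
operations = 0
for _ in range(4):
    for x in range(n):
        operations += 1

print(operations)  # equals 4 * n
```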
So, T(n) = 4n. We need to find a function X(n) that behaves similarly to T(n). Let’s take X(n) = 2n and plot both in Python. Although they behave similarly, we can see T(n) > X(n), which violates the condition T(n) ≤ C · X(n) when C = 1. So we need to take a larger constant C and multiply it with X(n) so that T(n) ≤ C · X(n) is satisfied. Taking C = 3 gives C · X(n) = 6n, and now 6n upper-bounds T(n). We could also take C = 10, for which C · X(n) becomes 20n; this also satisfies the condition, but it is not a tight upper bound. 6n can be said to be a tight upper bound for T(n) = 4n. We should not settle for loose bounds when reasoning about Big O.
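The comparison above can be checked directly in Python (a sketch; the original article shows a plot instead of assertions):

```python
T = lambda n: 4 * n  # total cost of the four loops
X = lambda n: 2 * n  # candidate bounding function

ns = range(1, 1001)  # finite sample of input sizes

# With C = 1, X(n) = 2n sits below T(n) = 4n, so the bound fails.
assert not all(T(n) <= 1 * X(n) for n in ns)

# With C = 3, C * X(n) = 6n stays above 4n: a tight upper bound.
assert all(T(n) <= 3 * X(n) for n in ns)

# C = 10 (20n) also satisfies the condition, but is looser.
assert all(T(n) <= 10 * X(n) for n in ns)
```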
Next, the Big O of T(n) will be O(n) instead of O(6n). Why? The answer lies in the definition of Big O. When we bound T(n) using X(n), we are already tuning the constant C to get a proper bounding function, so including a constant inside O( ) would be redundant. Moreover, for Big O we only want to know how T(n) varies with n. In the case above, 6n is a tight bounding function for T(n) = 4n, so O(6n) becomes O(n), which tells us that the relationship between T(n) and n is linear: if n becomes large, T(n), the computational time, will increase linearly. That is all we wanted to know — how T(n) changes as n becomes large. Still, the constant in 6n has its own practical significance.
To determine the time complexity of an algorithm, we can follow two simple steps:
- Detect the fastest-growing term among all the terms in T(n)
- Remove the coefficient from this fastest-growing term
In the code snippet above, the nested for loop considers, for every value of x from 0 to n-1, all y values from 0 to n-1; thus n * n, or n², operations are performed. Next, z is printed from 1 to n (n operations in total). The last line prints a statement in constant time (say 0.5 ms); it doesn’t depend on n.
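The snippet described (again an image in the source) might look like the following reconstruction; the variable names and the counting of operations are assumptions based on the description:

```python
n = 10  # assumed sample input size
operations = 0

# Nested loop: for each of the n values of x, all n values of y
# are visited, giving n * n = n^2 operations.
for x in range(n):
    for y in range(n):
        operations += 1

# A single loop over z from 1 to n: n more operations.
for z in range(1, n + 1):
    operations += 1

# One constant-time statement, independent of n.
print("done")

print(operations)  # n**2 + n
```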
T(n) = n² + n + 0.5
Here, n² is the fastest-growing term.
T(n) = n²
The coefficient of n² is 1 (1 · n²).
T(n) = n²
This means that with increasing input, T(n) will vary quadratically.
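We can sanity-check the quadratic behavior by counting operations for two input sizes (a sketch mirroring the snippet described above; the function name is an assumption):

```python
def ops(n):
    """Count the n-dependent operations of the described snippet."""
    count = 0
    for x in range(n):
        for y in range(n):
            count += 1   # nested loop: n^2 operations
    for z in range(1, n + 1):
        count += 1       # single loop: n operations
    return count

# Doubling n roughly quadruples the work: the signature of O(n^2).
ratio = ops(200) / ops(100)
print(round(ratio, 2))  # → 3.98, close to 4
```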
If we have 2 nested for loops in the above case instead of only one, then T(n) = 2n² + n + 0.5
The fastest-growing term is 2n², so T(n) = 2n²; removing the coefficient still gives O(n²).
To sum up, this is just a simple introduction to Big O notation. You can learn more from book chapters, internet blogs, and YouTube. I have tried to grasp the basics from various sources and to converge them all into this article. Please let me know if anything needs correcting; I am new to this topic. Stay safe and healthy.