Constants in Big O Notation
Constants in big O notation are usually ignored because they contribute little to the complexity when the input size is very large. But they do carry significant meaning when the input is small or when we want to compare different algorithms. Let’s discuss these scenarios in this short-but-somewhat-useful treatise.
In my previous write-up, Big O Notation, I tried to present big O notation in a fathomable way. When we determine O( ) for an algorithm, we tend to ignore constants because, when the input is very large, they don’t matter much.
Constants matter for the following cases:
Small Input Size:
Let’s take T(n) = 5n² + 6n + 7. For n = 20, T(n) is 2127; for n = 2 × 10⁴, T(n) is 2000120007. As n grows larger and larger, T(n) grows too, but the constant term (7 in this case) makes a smaller and smaller contribution to it. If the constant were 10 instead of 7, T(n) would be 2130 instead of 2127 (for n = 20), and 2000120010 instead of 2000120007 (for n = 2 × 10⁴). In the first case, where n is small, the constant term makes a somewhat significant contribution; in the latter case, the two values (2000120007 and 2000120010) are practically identical. This is why constants are neglected while computing Big O for an algorithm: Big O is concerned with large inputs.
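The numbers above can be checked with a quick sketch. This evaluates T(n) = 5n² + 6n + 7 at the two input sizes from the example and reports what share of the total the constant term accounts for (the function name T is just for illustration):

```python
# Sketch: how much the constant term of T(n) = 5n^2 + 6n + 7 contributes
# at a small vs. a large input size (values from the discussion above).

def T(n, c=7):
    """Evaluate T(n) = 5n^2 + 6n + c."""
    return 5 * n**2 + 6 * n + c

for n in (20, 2 * 10**4):
    total = T(n)
    share = 7 / total * 100  # percentage contributed by the constant term
    print(f"n = {n}: T(n) = {total}, constant contributes {share:.7f}%")
```

For n = 20 the constant accounts for roughly 0.33% of T(n); for n = 2 × 10⁴ it is on the order of 10⁻⁷ percent, which is why it vanishes in the big O view.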
Comparing Two Algorithms Having Similar Time Complexity:
Since we are mostly concerned with the relationship between runtime and input size, constants don’t change that relationship. Say T(n) = n + 7 and T(n) = n + 70000; both grow linearly with n, so we can say both are O(n). However, the difference between them is far from insignificant when we want to know which algorithm is faster.
n + 7 will always be smaller than n + 70000 for the same value of n; there’s no doubt about it. But both have the same O(n) time complexity: if we increase n, T will also increase linearly in both cases.
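A small sketch makes this concrete: both functions below grow linearly, but one is always ahead of the other by a fixed amount (the function names T1 and T2 are my own labels for the two hypothetical algorithms):

```python
# Sketch: two algorithms with the same O(n) but different constants.
# T1 is always smaller than T2 for the same n, yet their growth is identical.

def T1(n):
    return n + 7        # "fast" algorithm: small constant

def T2(n):
    return n + 70000    # "slow" algorithm: large constant

for n in (10, 1000, 10**6):
    # The gap never shrinks: it is the constant difference 70000 - 7 = 69993.
    print(n, T1(n), T2(n), T2(n) - T1(n))
```

No matter how large n gets, the gap stays exactly 69993, so big O (which only tracks growth) cannot distinguish the two, even though one is clearly faster in practice.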
Comparing Two Algorithms Having Different Time Complexities:
Now, suppose two algorithms have different O( ): one is O(n²) and the other is O(n), with T(n) = n²/10 and T(n) = 100n, respectively. Then T(n) = n²/10 is faster than T(n) = 100n up to a certain value of n. Beyond that threshold, T(n) = n²/10 is always slower than T(n) = 100n. This is the reason O( ) has the condition n ≥ n₀ in its definition.
Up to n = 1000, T(n) = n²/10 is faster than T(n) = 100n (we can find this threshold by equating the two and solving for n: n²/10 = 100n gives n = 1000). Beyond that value, T(n) = 100n will always be faster than T(n) = n²/10, as it should be, since an O(n) algorithm eventually beats an O(n²) one for all n ≥ n₀ (here, n₀ = 1000).
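The crossover can also be found numerically. This sketch scans n upward until n²/10 first catches up with 100n, recovering the same n₀ we get by solving the equation (T1 and T2 are again my own labels):

```python
# Sketch: locating the crossover point where T1(n) = n^2/10 overtakes
# T2(n) = 100n. Solving n^2/10 = 100n by hand gives n = 1000.

def T1(n):
    return n * n / 10   # the O(n^2) algorithm

def T2(n):
    return 100 * n      # the O(n) algorithm

# First n at which the quadratic algorithm is no longer faster.
crossover = next(n for n in range(1, 10**5) if T1(n) >= T2(n))
print(crossover)
```

For every n below the crossover the quadratic algorithm wins; at n = 1000 they tie, and from then on the linear algorithm wins forever, which is exactly what the n ≥ n₀ clause in the definition of O( ) captures.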
To sum up, constants matter in the three cases discussed above (there might be more, but these are the ones I am aware of as of writing this). There’s more to say, but let’s keep that for a different write-up. I have read multiple articles on the internet about Big O and I am grateful for every one of them. Now, passing it on to you geniuses. Stay safe and nerdiest.