Big O complexity
The computational complexity of an algorithm is usually expressed using the popular Big O notation, which describes the worst-case order of growth of an algorithm, that is, how its running time changes as the size of the data it processes grows.
O(1) means constant time complexity, which does not depend on the amount of data at hand. O(n) means that the execution time is proportional to n (linear time); you cannot process data without accessing it, so O(n) is considered good. O(n²) (quadratic time) means that the execution time is proportional to n². O(n!) (factorial time) means that the execution time of the algorithm is directly proportional to the factorial of n. Simply put, if you have to process 100 values of some kind, then an O(n) algorithm will perform about 100 operations, an O(n²) algorithm about 10,000 operations, and an O(n!) algorithm about 10¹⁵⁸ operations!
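To make these growth rates concrete, here is a minimal sketch (written in Python purely for illustration; the operation counts, not the language, are the point) that counts the operations performed for n = 100:

import math

def first_item(data):
    # O(1): a single lookup, no matter how large data is
    return data[0]

def linear_scan(data):
    # O(n): every item is visited exactly once
    ops = 0
    for _ in data:
        ops += 1
    return ops

def all_pairs(data):
    # O(n^2): every item is paired with every item, including itself
    ops = 0
    for _ in data:
        for _ in data:
            ops += 1
    return ops

n = 100
data = list(range(n))
print(linear_scan(data))              # 100
print(all_pairs(data))                # 10000
print(f"{math.factorial(n):.2e}")     # 9.33e+157, i.e., about 10^158

Running it prints 100, 10000, and roughly 9.33e+157, matching the figures above.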
Now that...