The performance of an algorithm
We can now estimate the time a single call takes by running the program a billion times and checking how long the whole run took, as shown in the following code:
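The listing below is a minimal sketch of such a timing harness, not the original listing: the loop-based computePower method (and the choice of power = 10 for the first measurement) is an assumption made here purely for illustration.

```java
public class PowerTiming {
    // Assumed loop-based power computation: multiplies the base together 'power' times.
    // Only its loop structure matters for the timing discussion that follows.
    static long computePower(long base, long power) {
        long result = 1;
        for (long i = 0; i < power; i++) {
            result *= base;
        }
        return result;
    }

    public static void main(String[] args) {
        long checksum = 0; // accumulate results so the JIT cannot discard the work
        long start = System.currentTimeMillis();
        for (int i = 0; i < 1_000_000_000; i++) {
            checksum += computePower(2, 10);
        }
        long end = System.currentTimeMillis();
        System.out.println("Time: " + (end - start) + " ms (checksum " + checksum + ")");
    }
}
```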
On my computer, it takes 4,393 milliseconds. So the time taken per call is 4,393 milliseconds divided by a billion, that is, about 4.4 nanoseconds. Looks like a very reasonable time to do any computation. But what happens if the input is different? What if I pass power = 1000? Let's check that out. Now it takes about 420,000 milliseconds to run a billion times, or about 420 nanoseconds per run. Clearly, the time taken to do this computation depends on the input, and that means any reasonable way to talk about the performance of a program needs to take into account the input to the program.
Okay, so we can say that the number of nanoseconds our program takes to run is 0.42 × power, approximately.
In the case of time, apart from the fact that the time depends on the input, it also depends on which computer you are running it on. The program that takes 4 seconds to run on my computer may take 40 seconds on a very old computer from the nineties and may run in 2 seconds on yours. However, the actual computer you run it on only changes the time by a constant multiplier. To avoid getting into the details of the hardware the program is running on, instead of saying the program takes approximately 0.42 × power nanoseconds, we can say the time taken is a constant times the power, or simply that it is proportional to the power.
Saying the computation time is proportional to the power makes it so independent of the hardware, and even of the language the program is written in, that we can estimate this relationship just by looking at the program and analyzing it. Of course, the running time is only roughly proportional to the power: there is a loop that executes power number of times, but when the power is so small that the one-time operations outside the loop start to matter, the proportionality breaks down.
Analysis of asymptotic complexity
We seem to have hit upon an idea, an abstract sense of the running time. Let's spell it out. In an abstract way, we analyze the running time of, and the space required by, a program using what is known as asymptotic complexity.
We are only interested in what happens when the input is very large, because it really does not matter how long a small input takes to process; it's going to be small anyway. So, if we have x^3 + x^2, and x is very large, it's almost the same as x^3. We also don't want to consider constant factors of a function because, as we have pointed out earlier, they depend on the particular hardware we are running the program on and the particular language we have implemented it in. An algorithm implemented in Java will perform a constant times slower than the same algorithm written in C. The formal way of capturing these abstractions when defining the complexity of an algorithm is called an asymptotic bound. Strictly speaking, an asymptotic bound is for a function and not for an algorithm. The idea is to first express the time or space required for a given algorithm to process an input as a function of the size of the input in bits, and then look for an asymptotic bound of that function.
We will consider three types of asymptotic bounds—an upper bound, a lower bound and a tight bound. We will discuss these in the following sections.
Asymptotic upper bound of a function
An upper bound, as the name suggests, puts an upper limit on a function's growth. The upper bound is another function that grows at least as fast as the original function. What is the point of talking about one function in place of another? The function we use is, in general, much simpler than the actual function for the running time or the space required to process an input of a certain size, and it is a lot easier to compare simplified functions than complicated ones.
For a function f, we define the notation O, called big O, in the following ways:
- f(x) = O(f(x)).
- If f(x) = O(g(x)), then k f(x) = O(g(x)) for any non-zero constant k.
- For example, 5x^3 = O(x^3), 2 log x = O(log x), and -x^3 = O(x^3) (taking k = -1).
- If f(x) = O(g(x)) and |h(x)| < |f(x)| for all sufficiently large x, then f(x) + h(x) = O(g(x)).
- For example, 5x^3 - 25x^2 + 1 = O(x^3) because, for sufficiently large x, |-25x^2 + 1| = 25x^2 - 1 is much less than |5x^3| = 5x^3. So, f(x) + h(x) = 5x^3 - 25x^2 + 1 = O(x^3), as f(x) = 5x^3 = O(x^3).
- We can prove by similar logic that x^3 = O(5x^3 - 25x^2 + 1).
- If f(x) = O(g(x)) and |h(x)| > |g(x)| for all sufficiently large x, then f(x) = O(h(x)).
- For example, x^3 = O(x^4), because if x is sufficiently large, x^4 > x^3.
Note that whenever there is an inequality on functions, we are only interested in what happens when x is large; we don't bother about what happens for small x.
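To see numerically why constant multipliers and lower order terms can be ignored, here is a small, purely illustrative sketch (not from the text): it prints the ratio of f(x) = 5x^3 - 25x^2 + 1 to x^3 for growing x, and the ratio settles near the constant 5, which is exactly the kind of constant multiplier big O lets us drop.

```java
public class DropLowerOrderTerms {
    public static void main(String[] args) {
        // f(x) = 5x^3 - 25x^2 + 1, which we claim is O(x^3).
        for (double x = 10; x <= 1_000_000; x *= 10) {
            double f = 5 * x * x * x - 25 * x * x + 1;
            double g = x * x * x;
            // The ratio approaches 5, so any constant M slightly above 5
            // works as the multiplier in the formal definition given later.
            System.out.printf("x = %10.0f, f(x)/x^3 = %.6f%n", x, f / g);
        }
    }
}
```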
Note
To summarize the preceding definition: you can drop constant multipliers (rule 2), ignore lower order terms (rule 3), and overestimate (rule 4). You can also combine these in any way, because the rules can be applied any number of times.
We had to consider the absolute values of the functions to cater to the case where the values are negative, which never happens for a running time, but we still keep it for completeness.
Note
There is something unusual about the = sign here. Just because f(x) = O(g(x)), it does not mean that O(g(x)) = f(x). In fact, the latter does not even mean anything.
It is enough for all purposes to just know the preceding definition of the big O notation. You can read the following formal definition if you are interested. Otherwise you can skip the rest of this subsection.
The preceding idea can be summarized in a formal way. We say the expression f(x) = O(g(x)) means that positive constants M and x_0 exist, such that |f(x)| < M|g(x)| whenever x > x_0. Remember that you just have to find one example of M and x_0 that satisfy the condition to make the assertion f(x) = O(g(x)).
For example, Figure 1 shows an example of a function T(x) = 100x^2 + 2000x + 200. This function is O(x^2), with x_0 = 11 and M = 300. The graph of 300x^2 overtakes the graph of T(x) at x = 11 and then stays above T(x) all the way up to infinity. Notice that the function 300x^2 is lower than T(x) for smaller values of x, but that does not affect our conclusion.
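The crossover claimed here can be checked mechanically. The following throwaway sketch (not part of the original text) evaluates T(x) and 300x^2 around the crossover and asserts that the bound holds for every x > 11 in the range it checks:

```java
public class UpperBoundCheck {
    public static void main(String[] args) {
        // T(x) = 100x^2 + 2000x + 200, claimed to be O(x^2) with M = 300 and x_0 = 11.
        for (long x = 5; x <= 1_000; x++) {
            long t = 100 * x * x + 2000 * x + 200;
            long bound = 300 * x * x;
            if (x > 11 && t >= bound) {
                throw new AssertionError("Bound violated at x = " + x);
            }
            if (x <= 12) {
                // Print values around the crossover to see where 300x^2 overtakes T(x).
                System.out.printf("x = %2d, T(x) = %7d, 300x^2 = %7d%n", x, t, bound);
            }
        }
        System.out.println("300x^2 > T(x) for all checked x > 11");
    }
}
```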
To see that it's the same thing as the previous four points, first think of x_0 as the way to ensure that x is sufficiently large. I leave it up to you to prove the above four conditions from the formal definition.
I will, however, show some examples of using the formal definition:
- 5x^2 = O(x^2) because we can say, for example, x_0 = 10 and M = 10, and thus f(x) < Mg(x) whenever x > x_0, that is, 5x^2 < 10x^2 whenever x > 10.
- It is also true that 5x^2 = O(x^3) because we can say, for example, x_0 = 10 and M = 10, and thus f(x) < Mg(x) whenever x > x_0, that is, 5x^2 < 10x^3 whenever x > 10. This highlights the point that if f(x) = O(g(x)), it is also true that f(x) = O(h(x)) for any h(x) that grows at least as fast as g(x).
- How about the function f(x) = 5x^2 - 10x + 3? We can easily see that when x is sufficiently large, 5x^2 will far surpass the term 10x. To prove my point, I can simply say that for x > 5, 5x^2 > 10x. Every time we increment x by one, the increment in 5x^2 is 10x + 5, while the increment in 10x is just the constant 10. Since 10x + 5 > 10 for all positive x, it is easy to see why 5x^2 is always going to stay above 10x as x goes higher and higher. In fact, for x > 2 we have 5x^2 - 10x + 3 < 5x^2 + 3 < 6x^2, so we can take x_0 = 2 and M = 6 to conclude that f(x) = O(x^2).
In general, any polynomial of the form a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + … + a_0 is O(x^n). To show this, we will first see that a_0 = O(1). This is true because we can have x_0 = 1 and M = 2|a_0|, and we will have |a_0| < 2|a_0| whenever x > 1.

Now, let us assume it is true for some n. Thus, a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + … + a_0 = O(x^n). What this means, of course, is that some M_n and x_0 exist such that |a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + … + a_0| < M_n x^n whenever x > x_0. We can safely assume that x_0 > 2, because if it is not so, we can simply add 2 to it to get a new x_0, which is at least 2.

Now, |a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + … + a_0| < M_n x^n implies

|a_{n+1} x^{n+1} + a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + … + a_0| ≤ |a_{n+1} x^{n+1}| + |a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + … + a_0| < |a_{n+1} x^{n+1}| + M_n x^n.

This means |a_{n+1} x^{n+1}| + M_n x^n > |a_{n+1} x^{n+1} + a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + … + a_0|.

If we take M_{n+1} = |a_{n+1}| + M_n, we can see that M_{n+1} x^{n+1} = |a_{n+1}| x^{n+1} + M_n x^{n+1} = |a_{n+1} x^{n+1}| + M_n x^{n+1} > |a_{n+1} x^{n+1}| + M_n x^n > |a_{n+1} x^{n+1} + a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + … + a_0|.

That is to say, |a_{n+1} x^{n+1} + a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + … + a_0| < M_{n+1} x^{n+1} for all x > x_0, that is, a_{n+1} x^{n+1} + a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + … + a_0 = O(x^{n+1}).

Now, we have it true for n = 0, that is, a_0 = O(1). This means, by our last conclusion, a_1 x + a_0 = O(x). This means, by the same logic, a_2 x^2 + a_1 x + a_0 = O(x^2), and so on. We can easily see that this means it is true for all polynomials of positive integral degrees.
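For readers who prefer to see the induction at a glance, the same argument can be compressed into a few lines (this is just a restatement of the proof above, under the same assumption that a suitable M_n and x_0 > 2 exist for degree n):

```latex
\[
\begin{aligned}
\textbf{Base case: } & |a_0| < 2|a_0| \text{ for } x > 1, \text{ so } a_0 = O(1).\\[4pt]
\textbf{Inductive step: } & \text{assume } |a_n x^n + \dots + a_0| < M_n x^n \text{ for } x > x_0 > 2. \text{ Then}\\
|a_{n+1} x^{n+1} + a_n x^n + \dots + a_0|
  &\le |a_{n+1} x^{n+1}| + |a_n x^n + \dots + a_0|
   < |a_{n+1}|\,x^{n+1} + M_n x^n\\
  &\le \bigl(|a_{n+1}| + M_n\bigr)\,x^{n+1} = M_{n+1}\,x^{n+1},\\[4pt]
\text{so } a_{n+1} x^{n+1} + \dots + a_0 &= O\!\left(x^{n+1}\right)
  \text{ with } M_{n+1} = |a_{n+1}| + M_n \text{ and the same } x_0.
\end{aligned}
\]
```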
Asymptotic upper bound of an algorithm
Okay, so we have figured out a way to abstractly specify an upper bound on a function of one argument. When we talk about the running time of a program, this argument has to contain information about the input. For example, in our algorithm, we can say that the execution time equals O(power). This scheme of specifying the input directly will work perfectly fine for all programs or algorithms solving the same problem, because the input will be the same for all of them. However, we might want to use the same technique to measure the complexity of the problem itself: that is, the complexity of the most efficient program or algorithm that can solve the problem. If we try to compare the complexity of different problems, though, we will hit a wall, because different problems have different inputs. We must specify the running time in terms of something that is common to all problems, and that something is the size of the input in bits or bytes. How many bits do we need to express the argument power when it is sufficiently large? Approximately log_2(power). So, in specifying the running time, our function needs to take as its argument the size of the input, which is log_2(power), or lg(power) for short. We have seen that the running time of our algorithm is proportional to the power, that is, a constant times power, which is a constant times 2^{lg(power)} = O(2^x), where x = lg(power) is the size of the input.
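As a quick, hedged illustration (the bitLength helper below is an assumption of this sketch, not something defined in the text), the following snippet prints the input size in bits alongside the number of loop iterations the power computation needs; the iteration count roughly doubles every time the input grows by a single bit, which is the exponential relationship described above.

```java
public class InputSizeDemo {
    // Number of bits needed to represent a positive long value.
    static int bitLength(long value) {
        return Long.SIZE - Long.numberOfLeadingZeros(value);
    }

    public static void main(String[] args) {
        // Iterations are proportional to the running time of the loop-based algorithm.
        for (long power = 1; power <= 1_000_000; power *= 2) {
            System.out.printf("power = %8d, input size = %2d bits, iterations = %d%n",
                    power, bitLength(power), power);
        }
    }
}
```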
Asymptotic lower bound of a function
Sometimes, we don't want to praise an algorithm, we want to shun it; for example, when the algorithm is written by someone we don't like, or when it performs really poorly. When we want to shun it for its horrible performance, we may want to talk about how badly it performs even for the best input. An asymptotic lower bound can be defined just like greater-than-or-equal-to can be defined in terms of less-than-or-equal-to.
A function f(x) = Ω(g(x)) if and only if g(x) = O(f(x)). The following list shows a few examples:
- Since x^3 = O(x^3), x^3 = Ω(x^3)
- Since x^3 = O(5x^3), 5x^3 = Ω(x^3)
- Since x^3 = O(5x^3 - 25x^2 + 1), 5x^3 - 25x^2 + 1 = Ω(x^3)
- Since x^3 = O(x^4), x^4 = Ω(x^3)
Again, for those of you who are interested, we say the expression f(x) = Ω(g(x)) means there exist positive constants M and x_0, such that |f(x)| > M|g(x)| whenever x > x_0, which is the same as saying |g(x)| < (1/M)|f(x)| whenever x > x_0, that is, g(x) = O(f(x)).
The preceding definition was introduced by Donald Knuth; it is stronger and more practical for use in computer science. Earlier, there was a different definition of the lower bound Ω that is more complicated to understand and covers a few more edge cases. We will not talk about those edge cases here.
While talking about how horrible an algorithm is, we can use an asymptotic lower bound of the best case to really make our point. However, even a criticism of the worst case of an algorithm is quite a valid argument. We can use an asymptotic lower bound of the worst case too for this purpose, when we don't want to find out an asymptotic tight bound. In general, the asymptotic lower bound can be used to show a minimum rate of growth of a function when the input is large enough in size.
Asymptotic tight bound of a function
There is another kind of bound that sort of means equality in terms of asymptotic complexity. A theta bound is specified as f(x) = Θ(g(x)) if and only if f(x) = O(g(x)) and f(x) = Ω(g(x)). Let's see some examples to understand this even better:
- Since 5x^3 = O(x^3) and also 5x^3 = Ω(x^3), we have 5x^3 = Θ(x^3)
- Since 5x^3 + 4x^2 = O(x^3) and 5x^3 + 4x^2 = Ω(x^3), we have 5x^3 + 4x^2 = Θ(x^3)
- However, even though 5x^3 + 4x^2 = O(x^4), since it is not Ω(x^4), it is also not Θ(x^4)
- Similarly, 5x^3 + 4x^2 is not Θ(x^2) because it is not O(x^2)
In short, you can ignore constant multipliers and lower order terms while determining the tight bound, but you cannot choose a function that grows either faster or slower than the given function. The best way to check whether the bound is right is to check the O and the Ω conditions separately, and to claim a theta bound only if the same function works for both.
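As a compact worked check (added here for illustration, not part of the original text), here are the two one-sided bounds behind the second example in the list above:

```latex
% A worked tight-bound check: 5x^3 + 4x^2 = \Theta(x^3).
\[
\text{Upper bound: for } x > 1,\ 4x^2 < 4x^3
  \ \Rightarrow\ 5x^3 + 4x^2 < 9x^3
  \ \Rightarrow\ 5x^3 + 4x^2 = O(x^3)\ (M = 9,\ x_0 = 1).
\]
\[
\text{Lower bound: for } x > 0,\ 4x^2 > 0
  \ \Rightarrow\ x^3 < 5x^3 + 4x^2
  \ \Rightarrow\ x^3 = O(5x^3 + 4x^2)
  \ \Rightarrow\ 5x^3 + 4x^2 = \Omega(x^3).
\]
\[
\text{The same } g(x) = x^3 \text{ works for both, hence } 5x^3 + 4x^2 = \Theta(x^3).
\]
```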
Note that since the complexity of an algorithm generally depends on the particular input, the tight bound is used when the complexity remains the same regardless of the nature of the input.
In some cases, we try to find the average case complexity, especially when the upper bound really happens only for an extremely pathological input. But since the average must be taken in accordance with the probability distribution of the input, it is not dependent on the algorithm alone. The bounds themselves are just bounds for particular functions and not for algorithms. However, the total running time of an algorithm can be expressed as a grand function that changes its formula as per the input, and that function may have different upper and lower bounds. There is no sense in talking about an asymptotic average bound because, as we discussed, the average case depends not just on the algorithm itself, but also on the probability distribution of the input. The average case is thus stated as a function that is the probabilistic average running time over all inputs, and, in general, the asymptotic upper bound of that average function is reported.