Summary
Big O notation is a way to describe the time and space requirements of an algorithm (or data structure). It is not an exact science, however; it is about finding the dominant growth factor of those requirements in order to answer one question: what happens when the problem space grows bigger?
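As a minimal sketch of that question (the helper name linear_steps is made up for illustration and not taken from this chapter), the following Rust snippet counts one unit of work per element; doubling the input size doubles the count, which is exactly the kind of growth pattern Big O is meant to capture:

```rust
/// Hypothetical helper: counts one unit of work per element of an input of size n.
fn linear_steps(n: usize) -> usize {
    let data = vec![0u8; n];
    let mut steps = 0;
    for _item in &data {
        steps += 1; // one step per element -> linear, O(n), growth
    }
    steps
}

fn main() {
    // Doubling the problem size doubles the step count.
    for n in [1_000usize, 2_000, 4_000] {
        println!("n = {:>5} -> {} steps", n, linear_steps(n));
    }
}
```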
Any algorithm falls into one of a few relevant classes that describe this behavior. When the algorithm is applied to one more element, how many more steps have to be taken? One easy way to decide is to visualize the growth curve and ask whether it will be linear (O(n)), quasilinear (O(n log(n))), quadratic (O(n²)), or even exponential (O(2ⁿ)). Whatever the case may be, it is always best to do less work than there are elements to look at, as with constant (O(1)) or logarithmic (O(log(n))) behavior, as sketched in the example below.
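The following sketch (the helper names contains_linear and contains_binary are assumptions made for illustration) contrasts a linear scan with a binary search over sorted data using the standard library's slice::binary_search; the binary search only inspects roughly log2(n) elements, which is the sublinear behavior mentioned above:

```rust
/// Linear scan: may look at every element, O(n) in the worst case.
fn contains_linear(haystack: &[i32], needle: i32) -> bool {
    haystack.iter().any(|&x| x == needle)
}

/// Binary search on sorted data: halves the search range each step, O(log(n)).
fn contains_binary(sorted: &[i32], needle: i32) -> bool {
    sorted.binary_search(&needle).is_ok()
}

fn main() {
    let sorted: Vec<i32> = (0..1_000_000).collect();
    // Both calls find the element, but the binary search inspects only
    // about 20 elements (log2 of a million) instead of up to a million.
    assert!(contains_linear(&sorted, 999_999));
    assert!(contains_binary(&sorted, 999_999));
}
```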
Selecting the class for an operation is typically done based on the worst-case behavior, that is, the upper limit of what can happen. In the next chapter, we will take a closer look at these behaviors...