Understanding Big O notation
Big O notation describes algorithmic efficiency: how an algorithm's running time scales with the size of its input. An operation that takes the same amount of time regardless of input size runs in constant time, written O(1). Operations whose running time grows linearly with the amount of data being processed are written O(N), where N is the number of elements processed.
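To make the distinction concrete, here is a minimal Python sketch (the function and variable names are illustrative, not taken from the text): looking up an element by index is O(1), while summing every element is O(N).

def first_element(items):
    # Constant time, O(1): a single index lookup, regardless of how
    # many elements the list holds.
    return items[0]

def total(items):
    # Linear time, O(N): the loop body runs once per element, so the
    # running time grows in direct proportion to len(items).
    result = 0
    for item in items:
        result += item
    return result

Doubling the length of items roughly doubles the work done by total, while first_element takes the same time either way.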
For example, iterating over every element in an array or collection is O(N), linear time, where N is the size of the array or collection. If you nest one iteration inside another, such as iterating over every x and, for each x, iterating over every y, the running time becomes O(N²). Another scenario is estimating the amount of time it takes to harvest a square plot of land. You could write this as O(a), where a is the area of the plot, or equivalently as O(s²), where s is the length of one side.
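A sketch of the nested case just described, again in Python with illustrative names: pairing every x with every y performs N iterations for each of N elements, or N × N = N² steps in total.

def all_pairs(items):
    # Quadratic time, O(N^2): for each of the N elements (x), the
    # inner loop visits all N elements (y), so the body runs N * N
    # times.
    pairs = []
    for x in items:
        for y in items:
            pairs.append((x, y))
    return pairs

Here, doubling the size of items roughly quadruples the work, which is the hallmark of quadratic growth.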
There are some rules to consider...