How data locality affects code efficiency
Data locality is a simple concept that takes very little effort to exploit. We all actively think about how variables take up memory, but we tend to forget that instructions take up memory too. They must be loaded from storage into RAM, and from RAM into the CPU to be executed. CPUs speed this up with very fast, very small memory built into the chip: the cache. The cache lets the CPU keep recently used data close at hand and pre-load the instructions it expects to execute next, so it rarely has to stall waiting on main memory. This pre-loading behaviour is essential for the CPU to run at its most efficient because, while there has been a technology race in CPU speeds, that race hasn't been mirrored in the world of RAM. You might be able to hold a massive program entirely in RAM, but the memory bus limits how many instructions and bytes of data can reach the CPU per second. Once we hit that bottleneck, it doesn't matter how fast the CPU is.
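A minimal sketch of the idea, assuming a square matrix flattened into one list so that consecutive elements of a row are adjacent in memory. Walking it row by row touches memory sequentially (cache-friendly), while walking it column by column strides `N` elements per step (cache-unfriendly). The names and sizes here are illustrative, not from any particular codebase:

```python
import timeit

N = 512
matrix = list(range(N * N))  # flat row-major storage: row i, col j -> i * N + j

def sum_row_major():
    # Inner loop walks adjacent memory locations: good locality.
    total = 0
    for i in range(N):
        for j in range(N):
            total += matrix[i * N + j]
    return total

def sum_col_major():
    # Inner loop jumps N entries per step: poor locality.
    total = 0
    for j in range(N):
        for i in range(N):
            total += matrix[i * N + j]
    return total

# Both orders visit every element exactly once, so the results match.
assert sum_row_major() == sum_col_major()

row_t = timeit.timeit(sum_row_major, number=3)
col_t = timeit.timeit(sum_col_major, number=3)
print(f"row-major: {row_t:.3f}s  column-major: {col_t:.3f}s")
```

In CPython the gap is muted by interpreter overhead and the indirection of Python objects; in a compiled language like C, where the array really is one contiguous block of machine values, the same two loops can differ by several times in runtime purely because of cache behaviour.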