Introducing serverless architectures
For decades, running code required an Operating System (OS) installed on top of dedicated hardware. Because each application typically claimed its own physical server, machines sat idle or underutilized much of the time, leading to a tremendous waste of computing resources.
While virtualization started in the late 1960s on mainframes, it wasn’t until the early 2000s that it became generally available and users could finally share resources, easing the waste of the dedicated-hardware model. Virtualization creates multiple logical servers on top of a shared pool of computing power, allowing allocated resources to be adjusted to match actual demand and serving more users with the same hardware, or less.
The use of containers, whose predecessors date back to the 1970s, exploded in popularity when Docker emerged in the early 2010s. Containers reduce the contents of a deployment package to just the OS libraries and the dependencies that our code requires, making packaged applications much smaller and more portable...
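To make that concrete, here is a minimal sketch of such a deployment package expressed as a Dockerfile; the application file, base image, and requirements.txt are hypothetical stand-ins, but the structure shows how the image carries only a slim OS layer plus the dependencies our code needs:

```dockerfile
# Slim base image: a minimal OS layer with just enough
# libraries to run the Python interpreter.
FROM python:3.12-slim

WORKDIR /app

# Install only the dependencies our code requires
# (requirements.txt is a hypothetical file for this example).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add our application code; nothing else ships in the image.
COPY app.py .

CMD ["python", "app.py"]
```

Built once with docker build, an image like this runs unchanged on any host with a container runtime and is a fraction of the size of a full virtual machine image, which is precisely what makes packaged applications smaller and portable.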