Serverless Design Patterns and Best Practices


What is serverless computing?

Let's start with the simpler of the two questions: what is serverless computing? While there may be several ways to define serverless computing, or perhaps more accurately serverless architectures, most people can agree on a few key attributes. Serverless computing platforms share the following features:

  • No operating systems to configure or manage
  • Pay-per-invocation billing model
  • Ability to automatically scale with usage
  • Built-in availability and fault tolerance

While there are other attributes that come with serverless platforms, they all share these common traits. Additionally, there are serverless systems that provide functionality beyond general compute. Examples include DynamoDB, Kinesis, and Simple Queue Service, all of which fall under the Amazon Web Services (AWS) umbrella. Even though these systems are not billed per invocation, they fall into the serverless category since management of the underlying systems is delegated to AWS, scaling is a matter of changing a few settings, fault tolerance is built in, and high availability is handled automatically.

No servers to manage

Arguably, this is where the term serverless came from, and it is at the heart of this entire movement. If we look back not too long ago, we can see a time when operations teams had to purchase physical hardware, mount it in a data center, and configure it. All of this was required before engineers even had a chance to deploy their software.

Cloud computing, of course, revolutionized this process and turned it upside down, putting individual engineers in the driver's seat. With a few clicks or API calls, we could now get our very own virtual private server (VPS) in minutes rather than weeks or months. While this was and is incredibly enabling, most of the work of setting up systems remained. A short list of things to worry about includes the following:

  • Updating the operating system
  • Securing the operating system
  • Installing system packages
  • Dealing with dependency management

This list goes on and on. A point worth noting is that there may be hours and hours of configuration and management before we're in a position to deploy and test our software.

To ease the burden of system setup, configuration software such as Puppet, Chef, SaltStack, and Ansible arrived on the scene. Again, these were and are incredibly enabling. Once you have your recipes in place, configuring a new virtual host is, most importantly, repeatable and hopefully less error-prone than doing a manual setup. In systems that comprise hundreds or even thousands of virtual servers, some automation is a requirement rather than a mere convenience.

As lovely as these provisioning tools are, they do come with a significant cost of ownership and can be incredibly time-consuming to develop and maintain. Often, iterating on this infrastructure-as-code tooling requires making changes and then executing them. Starting up a new virtual host is orders of magnitude faster than setting up a physical server; however, we measure VPS boot time and provisioning time in minutes. Additionally, these are software systems in and of themselves that a dedicated team needs to learn, test, debug, and maintain. On top of this, you need to continually maintain and update provisioning tools and scripts in parallel with any changes to your operating systems. If you wanted to change the base operating system, it would be possible but not without significant investment and updates to your existing code.

When AWS launched Lambda in 2014, a new paradigm for computing and software management was born. Rather than managing virtual hosts themselves, developers could now deploy application code into a managed environment. Of course, there are servers running somewhere, operated by someone; however, the details of those servers are opaque to us as application developers. We no longer need to worry about the operating system and its configuration directly. With AWS Lambda and other Functions as a Service (FaaS) platforms, we can delegate the work of VPS management to the teams behind those platforms.

The most significant shift in thinking with FaaS platforms is that the unit of measure has shrunk from a virtual machine to a single function.
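
To make that shift concrete, here is a minimal sketch of what this new unit of deployment can look like: a single Python handler of the kind the Lambda runtime invokes. The function name, event shape, and response format are illustrative assumptions, not examples taken from any particular application in this book.

# A minimal AWS Lambda handler in Python. The event shape and response format
# are illustrative; real events depend on the trigger (API Gateway, S3,
# Kinesis, and so on).
import json


def handler(event, context):
    """Entry point the Lambda runtime calls for each invocation."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, {}!".format(name)}),
    }

Everything outside this function, including the host it runs on and the runtime that calls it, is managed by the platform.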

Pay-per-invocation billing model

Another significant change with the invention of serverless platforms is the pay-per-invocation model. Before this, billing models were typically per minute or hour. While this was the backbone of elastic computing, servers needed to stay up and running if they were used in any production environment.

Paying for a VPS only while it's running is a great model when developing since you can just start it at the beginning of the day and terminate it at the end of the day. However, when a system needs to be available all the time, the price you pay is nearly the same whether its CPU is at 100% usage or 0.0001% usage.

Serverless platforms, on the other hand, bill only while your code is executing. They are designed for, and shine with, workloads that are stateless and have a finite, relatively short duration. As such, billing is typically calculated from total invocation time. This model works exceptionally well for smaller systems that may only get a few calls or invocations per day; on many platforms, it's possible to run an always-available production system completely for free. There is no such thing as paying for idle time in the world of serverless.
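
To put a rough number on this, here is a back-of-the-envelope estimate using Lambda-style pricing. The rates below (roughly $0.20 per million requests and $0.00001667 per GB-second, with a free tier covering the first million requests and 400,000 GB-seconds each month) are assumptions based on AWS's published pricing around the time of writing; they may have changed and are for illustration only.

# Back-of-the-envelope cost estimate for a pay-per-invocation platform.
# The rates are illustrative and may not match current AWS Lambda pricing.
PRICE_PER_REQUEST = 0.20 / 1_000_000      # USD per invocation
PRICE_PER_GB_SECOND = 0.00001667          # USD per GB-second of compute


def monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate a month's bill, ignoring the free tier."""
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND


# A small API serving 100,000 requests a month at 200 ms and 128 MB of memory
# comes to roughly six cents, and effectively $0 once the free tier applies.
print(monthly_cost(100_000, 200, 128))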

Ability to automatically scale with usage

Gone are the days of needing to overprovision a system with more virtual hosts than you typically need. As invocations ramp up, the underlying platform automatically scales, supporting a known limit of concurrent invocations. In the case of AWS Lambda, moving this limit higher is merely a matter of making a support request to Amazon. Before this, managing horizontal scalability was an exercise for the team designing the system. Never has horizontal scalability for computing resources been so easy.
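
As a concrete illustration, the account-wide concurrency limit can be inspected programmatically, and concurrency can be reserved for a single function. The sketch below uses boto3; the function name and the reserved value of 50 are placeholders, and raising the account-wide ceiling itself still goes through an AWS support request.

# Inspect the account-level concurrent execution limit and reserve concurrency
# for a single function using boto3. The function name is a placeholder.
import boto3

lambda_client = boto3.client("lambda")

# The account-wide limit; raising it requires a support request to AWS.
settings = lambda_client.get_account_settings()
print(settings["AccountLimit"]["ConcurrentExecutions"])

# Optionally reserve (and cap) concurrency for one function so a spike in its
# traffic cannot starve every other function in the account.
lambda_client.put_function_concurrency(
    FunctionName="my-example-function",
    ReservedConcurrentExecutions=50,
)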

Different cloud providers offer the ability to scale up or down (that is, to be elastic) based on various parameters and metrics. Talk to DevOps folks or engineers who run systems with autoscaling, and they will tell you it's not a trivial matter and is difficult to get right.

Built-in availability and fault tolerance

Servers, real or virtual, can and do fail. Since the hosts that run your code are no longer your concern, a failing server is a worry you no longer need to carry.

Just as the management of the operating system is handled for you, so too is the management of failing servers. You can be guaranteed that when your application code should be invoked, it will be.
