Hands-On Serverless Computing

What is FaaS?

I've mentioned FaaS a few times already, so let's dig into what it really means. Serverless computing involves code that runs as a service on infrastructure that is fully managed by the cloud provider. The infrastructure is provisioned automatically in response to an event and scaled automatically to ensure high availability. You can think of this as Functions as a Service (FaaS): functions that run on stateless, ephemeral containers created and maintained by the cloud provider.

You might have already come across terms such as Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). Let's look at what they mean:

  • SaaS is a form of cloud computing in which software is licensed on a subscription basis, hosted centrally, and delivered remotely by the provider over the internet. Examples of SaaS are Google Apps, Citrix GoToMeeting, and Concur.
  • IaaS is a form of cloud computing in which compute infrastructure resources are provisioned and managed over the internet, scale up quickly, and are billed only for what you use. Examples of IaaS are Azure Virtual Machines and AWS EC2.
  • PaaS is a form of cloud computing in which the software and infrastructure needed for application development are provided over the internet by the provider. Examples of PaaS are AWS Elastic Beanstalk and Azure App Service.

Let's also look at AWS Lambda to learn more about FaaS.

AWS Lambda lets you run code without provisioning or administering servers. You pay only for the compute time you consume; there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. You upload your code, and Lambda takes care of everything required to run and scale it with high availability. You can set up your code to be triggered automatically from other AWS services or call it directly from any web or mobile app.
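To make this concrete, here is a minimal sketch of what a Python Lambda handler looks like; the file name, handler name, and greeting logic are illustrative rather than taken from the book.

```python
# handler.py - a minimal AWS Lambda handler sketch (illustrative example)

def lambda_handler(event, context):
    """Lambda invokes this entry point; 'event' carries the trigger payload and
    'context' exposes runtime details such as the remaining execution time."""
    name = event.get("name", "world")       # read input from the event payload
    return {"message": f"Hello, {name}!"}   # return value is serialized to JSON
```

Assuming the file is named handler.py, you would package and upload it and configure the function's handler as handler.lambda_handler; Lambda then invokes it whenever a configured trigger fires.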

Let's look at the features that AWS Lambda offers that fit into the FaaS paradigm:

  • Essentially, FaaS runs code without you having to provision and manage servers yourself, and it executes in response to an event rather than running all the time. With traditional architectures that involve containers or physical/virtual servers, you would need the servers to be running all the time. Because the infrastructure is used only when it is needed, you effectively pay for no idle capacity, and the cost savings are substantial because you are charged only for the compute time consumed while the function/Lambda runs.
  • With FaaS solutions, you can run almost any type of application; you are not tied to a particular framework, and you can choose from a range of languages. For example, AWS Lambda functions can be written in JavaScript, Python, Go, C#, and any JVM language (Java, Scala, and so on).
  • The deployment model for your code is also different from traditional systems, as there is no server for you to update yourself. Instead, you just upload your latest code to the cloud provider (AWS, in this case), and it makes sure the new version of the code is used for subsequent executions.
  • AWS scales your function automatically based on the number of requests to process, without any further configuration from you. If your function needs to execute 10,000 times in parallel, AWS scales up the infrastructure required to run it 10,000 times in parallel. The containers that execute your code are stateless and ephemeral: AWS provisions them and destroys them purely as the runtime needs dictate.
  • In AWS, functions are triggered by different event types, such as S3 (file) updates, scheduled tasks based on a timer, messages sent to a Kinesis stream, messages published to an SNS topic, and many more (a sketch of an S3-triggered handler follows this list).
  • AWS also allows functions to be triggered in response to HTTP requests through Amazon API Gateway.
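As a rough illustration of this event-driven model, the following sketch handles an S3 "object created" notification; the processing step is a placeholder, and the event shape shown is the standard S3 notification payload.

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")  # created once per container, reused on warm invocations

def lambda_handler(event, context):
    """Triggered by an S3 ObjectCreated event; inspects each new object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # object keys arrive URL-encoded in the notification payload
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        head = s3.head_object(Bucket=bucket, Key=key)
        print(f"New object {key} in {bucket}: {head['ContentLength']} bytes")
```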

State

Because functions run in ephemeral containers, they face significant restrictions when it comes to managing state. You need to design your functions so that a subsequent run does not depend on state left behind by a previous run. In short, you should develop your functions as if they are stateless.

This affects how you design the architecture of your application. Since functions are stateless, you need external resources to manage the state of your application so that it can be shared between runs of your functions. Some popular external resources widely used in FaaS architectures are Amazon S3, which provides a simple web services interface you can use to store and retrieve any amount of data, at any time, from anywhere on the web; caching solutions, such as Memcached or Redis; and database solutions, such as Amazon DynamoDB, a fast and flexible NoSQL database service for any scale.
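As one hedged illustration of externalizing state, the sketch below keeps a simple counter in DynamoDB between invocations; the table name app-state and its key schema are assumptions made for the example.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("app-state")  # assumed table with partition key 'id'

def lambda_handler(event, context):
    """Reads and updates shared state in DynamoDB, so nothing needs to
    survive inside the ephemeral container between invocations."""
    response = table.update_item(
        Key={"id": "visit-counter"},
        UpdateExpression="ADD visits :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {"visits": int(response["Attributes"]["visits"])}
```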

Execution duration

Functions are limited in how long each invocation is allowed to run. Each cloud provider sets a different time limit for its FaaS offering, after which the function's execution is terminated. Let's look at the timeouts each cloud provider allows for functions:

  • AWS Lambda—5 minutes
  • Azure Functions—10 minutes
  • Google Cloud Functions—9 minutes

Because the execution time of a function is capped by the provider's limit, certain architectures with long-running processes are not well suited to FaaS. If you still want to fit such long-running processes into a FaaS architecture, you need to design it so that several functions are coordinated to accomplish the long-running task, as opposed to a traditional architecture where everything would be handled within the same application.
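One common way to fit long-running work under these limits, sketched here under assumed names, is to process a batch of items and then hand the remainder to a fresh asynchronous invocation of the same function; a managed workflow service such as AWS Step Functions is another way to coordinate the steps.

```python
import json

import boto3

lambda_client = boto3.client("lambda")

def lambda_handler(event, context):
    """Processes a slice of the work, then re-invokes this function with the
    remaining items instead of running past the provider's time limit."""
    items = event.get("items", [])
    batch, remaining = items[:100], items[100:]  # batch size chosen arbitrarily

    for item in batch:
        process(item)  # placeholder for the real unit of work

    if remaining:
        lambda_client.invoke(
            FunctionName=context.function_name,  # re-invoke this same function
            InvocationType="Event",              # asynchronous, fire-and-forget
            Payload=json.dumps({"items": remaining}),
        )

def process(item):
    print(f"processed {item}")
```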

Understanding cold start

What is cold start?

In general terms, a cold start is the time it takes to boot a computer system that is not already running.

What is cold start in FaaS?

A cold start occurs when you execute a cold (inactive) function, for example when it is invoked for the first time. The cold start time is the time the cloud provider spends provisioning the required runtime container, downloading your function's code, and then starting your function. This can increase the execution time of the function considerably, as certain runtimes take longer to provision before your function is executed. The converse is when your function is hot (active): the container with the code required to execute the function stays alive, ready and awaiting execution. While your function is running it is considered active; after a certain period of inactivity, the cloud provider drops the runtime container with your code in it to keep operating costs low, and at that point your function is considered cold again.

The time a cold start takes varies between runtimes. If your function uses a runtime such as Node.js or Python, the cold start penalty isn't significant; it may add less than 100 ms of overhead to your function's execution.

If your function uses a runtime such as the JVM, you will see cold start times of more than a few seconds while the JVM runtime container is spun up. Cold start latency has a significant impact in the following scenarios:

  • Your functions are invoked infrequently, say once every 15 minutes or so. Each such invocation will carry noticeable cold start overhead.
  • Your functions see sudden spikes in execution. For example, a function that typically runs once per second suddenly ramps up to 50 executions per second. In this case, too, you will see noticeable overhead on your function executions.

Understanding this performance bottleneck is essential when you architect your FaaS application, so that you can take it into account and know how your functions will behave.
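One mitigation worth sketching (an illustration under assumed names, not the book's prescription) is to do expensive initialization once at module load time, so the cost is paid only on a cold start and reused by every warm invocation of the same container.

```python
import boto3

# Module-level code runs once per container, that is, once per cold start.
# Warm invocations reuse these objects instead of rebuilding them.
dynamodb = boto3.resource("dynamodb")
TABLE_NAME = "app-state"  # assumed table name, loaded here rather than per request

def lambda_handler(event, context):
    """The handler itself stays cheap; the heavy setup happened at import time."""
    table = dynamodb.Table(TABLE_NAME)
    item = table.get_item(Key={"id": event.get("id", "default")})
    return item.get("Item", {})
```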

Some analysis has been done to understand the container initialization times for AWS Lambda:

  • Containers are terminated after 15 minutes of inactivity
  • Lambda within a private VPC increases container initialization time

A common workaround is to ping your Lambda once every 5 or 10 minutes, which keeps the runtime container for the function alive and prevents it from going cold.
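A hedged sketch of that keep-warm pattern follows: a scheduled CloudWatch Events (EventBridge) rule invokes the function every few minutes, and the handler short-circuits when it recognizes the ping; the "warmup" marker field is an assumed convention, not a standard.

```python
def lambda_handler(event, context):
    """Short-circuits scheduled warm-up pings so they keep the container
    alive without running the real business logic."""
    # Scheduled CloudWatch Events / EventBridge payloads set source to "aws.events";
    # a custom {"warmup": true} test payload is an equally common convention.
    if event.get("source") == "aws.events" or event.get("warmup"):
        return {"warmed": True}

    return handle_request(event)  # the real work, defined elsewhere in the app

def handle_request(event):
    return {"message": "real invocation"}
```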

Is cold start a real concern? Whether your functions will have a problem with it is something you need to test under production-like conditions, such as realistic load, so that you understand the overhead that cold starts add to your FaaS application.

API gateway

One of the things I mentioned about FaaS earlier is the API gateway. An API gateway is a layer that sits in front of backend HTTP services or other resources, such as FaaS functions, and decides where to route each HTTP request based on the route configuration defined in the API gateway solution. In the context of FaaS, the API gateway maps the parameters of the incoming HTTP request to the inputs of the FaaS function. It then transforms the response it receives from the function into an HTTP response and returns that HTTP response to the caller of the API gateway.
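To illustrate that mapping, here is a minimal sketch of a handler written for API Gateway's Lambda proxy integration, where the HTTP request arrives as the event and the returned dictionary is translated back into an HTTP response; the route and response body are invented for the example.

```python
import json

def lambda_handler(event, context):
    """API Gateway passes the HTTP request in 'event' (method, path, query string,
    headers, body) and converts this return value back into an HTTP response."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```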

Each cloud provider has an offering in this space:

  • AWS has an offering called Amazon API Gateway
  • Microsoft Azure has an offering called Azure API Management
  • GCP has an offering called Cloud Endpoints

The following figure shows how the Amazon API Gateway works:

Figure: How the Amazon API Gateway works

API gateways provide additional capabilities beyond routing requests, including:

  • Authentication
  • Throttling
  • Caching
  • Input validation
  • Response code mapping
  • Metrics and logging

The best use case for FaaS + API gateway is building a feature-rich, HTTP-based microservice, with scaling, monitoring, provisioning, and management all taken care of by the provider in a true serverless computing environment.
