Limits to serverless computing

So far, I have talked about many aspects of serverless computing. Let's look at its drawbacks as well.

Serverless architecture has its limits as well, and it is important to understand these drawbacks, along with when to use serverless computing and how to implement it, so that you can address these concerns ahead of time. Some of these limits include:

  • Infrastructure control
  • Long-running applications
  • Vendor lock-in
  • Cold start
  • Shared infrastructure
  • Server optimization is a thing of the past
  • Limited number of testing tools

We will look at the options for addressing some of these issues in the following sections.

Infrastructure control

As I mentioned previously, with a serverless architecture you do not have control over the underlying infrastructure, as that is managed by the cloud provider. However, developers can still choose the runtime they want their functions to run on, such as Node.js, Java, Python, C#, or Go. They also retain control over the memory allocated to a function and the timeout duration for its execution.
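
To make this concrete, the following is a minimal sketch of adjusting the settings you do control, using the AWS SDK for Node.js; the function name, region, and values are placeholders rather than anything prescribed by the book:

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda({ region: 'us-east-1' });

// Adjust the knobs Lambda does expose: memory (CPU scales with it) and timeout.
// 'my-function' is a placeholder function name.
lambda.updateFunctionConfiguration({
  FunctionName: 'my-function',
  MemorySize: 512, // in MB
  Timeout: 60,     // in seconds
}, (err, config) => {
  if (err) {
    console.error('Failed to update configuration:', err);
  } else {
    console.log('New memory size:', config.MemorySize, 'MB');
  }
});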

Long-running applications

One of the benefits of serverless architectures is that they are built around fast, scalable, event-driven functions. Long-running batch operations are therefore not well suited to this architecture. Most cloud providers have a timeout period of five minutes, so any process that takes longer than the allocated time is terminated. The idea is to move away from batch processing and towards real-time, quick, responsive functionality.
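
As an illustration, here is a minimal sketch of one way to work within that limit on AWS Lambda: slice the work and stop before the timeout hits, leaving a resume point for a follow-up invocation. The helpers loadPendingItems and processItem are hypothetical, and the 10-second safety margin is an arbitrary choice:

// Sketch: process a large batch in slices so a single invocation
// never hits the provider's timeout.
module.exports.processBatch = (event, context, callback) => {
  const items = loadPendingItems(event); // hypothetical helper
  let index = 0;

  const processNext = () => {
    // Stop with a safety margin before the hard timeout.
    if (index >= items.length || context.getRemainingTimeInMillis() < 10000) {
      // Report progress so a follow-up invocation can resume from here.
      return callback(null, { processed: index, remaining: items.length - index });
    }
    processItem(items[index], (err) => { // hypothetical helper
      if (err) return callback(err);
      index += 1;
      processNext();
    });
  };

  processNext();
};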

Vendor lock-in

One of the biggest fears with serverless applications is vendor lock-in, a common fear with any move to cloud technology. For example, if you start committing to Lambda, then you are committing to AWS, and you may find that you either cannot move to another cloud provider or cannot afford the transition.

While this is understandable, there are ways to structure serverless applications so that switching vendors is easier. A popular strategy is to pull the cloud provider logic out of the handler files so that it can be swapped out for another provider. The following two examples contrast a poor and a better way of abstracting this logic.

First, the poor example: this handler file binds all of the database and email logic directly to the FaaS provider (AWS, in this case):

const database = require('database').connect();
const mail = require('mail');

module.exports.saveCustomer = (event, context, callback) => {
  const customer = {
    emailAddress: event.email,
    createdAt: Date.now(),
  };
  database.saveCustomer(customer, (err) => {
    if (err) {
      callback(err);
    } else {
      mail.sendEmail(event.email);
      callback();
    }
  });
};

Now the better example: the following code abstracts the handler away from the FaaS provider logic by moving the business logic into a separate Customers class:

// customers.js: provider-agnostic business logic
class Customers {
  constructor(database, mail) {
    this.database = database;
    this.mail = mail;
  }

  save(emailAddress, callback) {
    const customer = {
      emailAddress: emailAddress,
      createdAt: Date.now(),
    };
    this.database.saveCustomer(customer, (err) => {
      if (err) {
        callback(err);
      } else {
        this.mail.sendEmail(emailAddress);
        callback();
      }
    });
  }
}

module.exports = Customers;

// handler.js: thin wrapper around the provider-specific event
const database = require('database').connect();
const mail = require('mail');
const Customers = require('customers');

const customers = new Customers(database, mail);

module.exports.saveCustomer = (event, context, callback) => {
  customers.save(event.email, callback);
};

The second approach is preferable, both for avoiding vendor lock-in and for testing. Removing the cloud provider logic from the event handler makes the application more flexible and portable across providers. It also makes testing easier, because you can write traditional unit tests against the business logic to ensure it works properly, as well as integration tests to verify that integrations with external services work as expected.
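
For instance, here is a minimal sketch of a unit test for the Customers class above, using only Node's built-in assert module; the stub objects stand in for the real database and mail dependencies, and the file path is an assumption:

// Sketch: unit-testing the Customers class with stubbed dependencies,
// with no cloud provider involved.
const assert = require('assert');
const Customers = require('./customers');

const saved = [];
const sent = [];

const databaseStub = {
  saveCustomer: (customer, callback) => { saved.push(customer); callback(null); },
};
const mailStub = {
  sendEmail: (address) => { sent.push(address); },
};

const customers = new Customers(databaseStub, mailStub);

customers.save('jane@example.com', (err) => {
  assert.ifError(err);
  assert.strictEqual(saved[0].emailAddress, 'jane@example.com');
  assert.deepStrictEqual(sent, ['jane@example.com']);
  console.log('Customers.save unit test passed');
});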

Most of the serverless offerings by cloud providers are implemented in a similar way. However, if you had to switch vendors, you would still need to update the operational toolsets you use for monitoring, deployments, and so on, and you might have to change your code's interface to be compatible with the new cloud provider.

If you are also using other solutions from your cloud vendor that are specific to that provider, then moving between vendors becomes extremely difficult, as you would have to re-architect your application around the equivalent solutions that the new cloud provider offers.

Cold start

I have already discussed cold starts earlier in the chapter. To recap, the concern is that a function takes slightly longer to respond to an event after a period of inactivity. How long the delay lasts varies with the runtime your application uses.

This does tend to happen, but there are ways around the cold start if you need an immediately responsive function. If you know your function will only be triggered periodically, one approach to overcoming the cold start is to set up a scheduler that calls your function to wake it up every so often. Within AWS, you can use a CloudWatch Events scheduled rule to invoke your Lambda function every 5 to 10 minutes so that AWS Lambda does not mark your function as inactive or cold. Azure Functions and Google Cloud Functions have similar capabilities for invoking functions on a schedule.
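
As a rough sketch, a Lambda handler can also recognize those scheduled warm-up invocations and return early, so the pings keep the container warm without running the real logic; the check below relies on scheduled CloudWatch Events carrying the source field aws.events:

module.exports.handler = (event, context, callback) => {
  // Scheduled CloudWatch Events arrive with source set to 'aws.events'.
  if (event.source === 'aws.events') {
    return callback(null, 'warm-up ping: nothing to do');
  }

  // ...normal event handling would go here...
  callback(null, 'handled a real event');
};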

Shared infrastructure

A multi-tenant, or shared, infrastructure is one where applications belonging to different customers (or tenants) run on the same machines. It is a well-known strategy for achieving the economies of scale I mentioned earlier. This can be a concern from a business perspective, since serverless applications run alongside one another regardless of business ownership. Although this doesn't affect your code, it does mean the same availability and scalability are provided to your competitors. Your code can also be affected by noisy neighbors (functions that generate high load), and a multi-tenant infrastructure raises security and robustness concerns, where one customer's function can take down another customer's function.

These problems are not unique to serverless offerings; they exist in many other service offerings that use multi-tenancy, such as Amazon EC2 and container platforms.

Server optimization is a thing of the past

As I mentioned earlier, you have no control over any aspect of the underlying infrastructure on which the cloud provider executes your functions. Without that access, you lose the ability to optimize the servers for your application to improve performance for your clients. If you need to perform server optimizations so that your application runs optimally, then use an IaaS offering instead, such as Amazon EC2 or Microsoft Azure Virtual Machines.

Security concerns

Serverless computing also has security issues. However, the security concerns you have to deal with are significantly more manageable than those you face when running applications on traditional infrastructure; the security issues related to your application code itself remain the same. Different serverless offerings use different security implementations, so as you start to use more serverless offerings for your applications, you increase the attack surface exposed to malicious intent and, with it, the chances of a security attack.

Deployment of multiple functions

The tooling for deploying a single FaaS function is very robust at the moment. However, the tooling required to deploy multiple functions at the same time, or to coordinate the deployment of multiple functions, is lacking.

Consider a case where multiple functions make up a serverless application and you need to deploy all of them at once. There are not many tools that can do that for you, and the tooling to ensure zero downtime for serverless applications is not robust enough yet.

There are open source solutions, such as the Serverless Framework, that are helping to solve some of these issues, but they can only do so with support from the cloud provider. AWS built the AWS Serverless Application Model to address some of these concerns, which I will talk about in later chapters.

Limited number of testing tools

One of the limitations to the growth of serverless architectures is the limited number of testing and deployment tools. This is anticipated to change as the serverless field grows, and there are already some up-and-coming tools that help with deployment. I anticipate that cloud providers will start offering ways to test serverless applications locally as a service; Azure has already made some moves in this direction, and AWS has been expanding on this as well. A couple of testing tools available on npm, such as node-lambda and aws-lambda-local, let you test locally without deploying to your provider. One of my current favorite deployment tools is the Serverless Framework (https://serverless.com/framework/). It is compatible with AWS, Azure, Google, and IBM, and I like it because it makes configuring and deploying your function to your chosen provider incredibly easy, which also contributes to a more rapid development time.
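
Even without any of these tools, you can exercise a handler locally by requiring it and passing in a hand-built event, as in the sketch below; the handler path, event shape, and fake context are assumptions based on the earlier saveCustomer example:

const handler = require('./handler');

// A hand-built event and a minimal fake context for local testing.
const fakeEvent = { email: 'jane@example.com' };
const fakeContext = { getRemainingTimeInMillis: () => 30000 };

handler.saveCustomer(fakeEvent, fakeContext, (err, result) => {
  if (err) {
    console.error('Local invocation failed:', err);
  } else {
    console.log('Local invocation succeeded:', result);
  }
});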

Integration testing of serverless applications is hard. In a FaaS environment, you depend on external resources to maintain state, and your integration tests need to cover those scenarios as well. Because there are not many solutions that let you run these external resources locally, people typically stub them for the purpose of integration testing. The challenge is making sure that the stubs you create stay in sync with the actual implementation of the external resources, and some vendors may not even provide a stubbed implementation for their resources.

To ensure that your functions work, integration tests are usually run in production-like environments with all the necessary external resources in place. As our functions are very small units compared to applications on traditional infrastructure, we need to rely more on integration testing to ensure that they run optimally.
