Building Serverless Architectures

Getting Started with Serverless

If you are reading this book, you have probably already heard the term serverless on more than one occasion and, as with every buzzword, you have probably read more than one definition of it. I define serverless computing as a new and efficient software development approach that abstracts the infrastructure away from the functionality itself, letting developers focus on their business instead of on infrastructure constraints.

I remember myself and my team struggling with these infrastructure constraints in one of the web shops in the late 2000s. After being born as a pet project in our college years, Instela had suddenly grown from hundreds of visits per day to thousands, and we were hosting it on a shared hosting provider. Our website was eating all the CPU available on those poor Xeon servers, and our hosting provider unilaterally decided to shut it down to keep the neighboring websites on the same server up. The local plumber and the coffee shop were online, and we were homeless in the cyber world. We had no remedy other than running out to buy a cheap desktop computer, making it our first server, and bringing Instela up again. Our visitor count was increasing day by day, our ATX server was resetting itself a couple of times a day because of overheating, and we ended up buying our first DELL PowerEdge box, which was like a space station for us back in 2005.

Everything was cool in the beginning, but as more visitors started to come, our site started to respond more and more slowly. Sometimes it was rather fast, and sometimes it was as slow as molasses. Sometimes there was viral content that attracted thousands of people, and sometimes we had 100 people online. For our data center, it was exactly the same: they were charging us a fixed price and enjoying it. When we needed a new server, we had to spend at least a week asking whether the local dealer had it in stock, waiting for the delivery, and installing the network and the operating system. And what if one of the machines had a hardware issue? We had to wait for the technician and handle the traffic with one machine less. It was a real pain, and there was no other way to run a web platform.

Virtual servers had already existed since the early 2000s, but one can say that real cloud computing started in 2006 with the launch of AWS EC2. It is worth noting that when this service was launched, it offered very limited options, and for many companies it was not a production-ready solution.

Nowadays, this horror story is just a memory for many companies. Public clouds provide us with dedicated compute power drawn from their large machine pools. Cloud computing has introduced many new concepts and drastically changed how we build and deploy software. We no longer have to worry about maintaining an on-premise SAN that we mount via NFS: S3, Azure Blob Storage, or Google Cloud Storage give us the space we really need, and we do not have to monitor free space or repair the storage when it breaks. Within the promised SLA levels (99.999999999% for AWS S3 [1]), you always know that your storage engine is just there, working. Do you need a queue service such as RabbitMQ? There are AWS Simple Queue Service and Windows Azure Queue Service. Do you need to implement search functionality and are planning to deploy an Elasticsearch cluster? There is a managed one: CloudSearch. AWS offers a managed service even if you are developing a platform that needs to transcode video: you upload your jobs and get the results.

So far, we have spoken about the supporting services that applications of any size might need. Leveraging the managed service offerings of public cloud providers, we have become able to shut down some of the servers we previously needed in an on-premise infrastructure. We might say that this is the first part of serverless architecture; some authors call this type of service Backend as a Service, or BaaS. However, so far, our software is still running on virtual machines, called instances on AWS and Google Cloud Platform or VMs on Windows Azure. We have to prepare virtual machine images with our application code, spin up instances from them, and configure the auto-scaling rules for cost optimization and scalability. More importantly, we have to pay for these servers for as long as they run, even when we do not really use the reserved compute capacity.

As an alternative to this paradigm, cloud providers came up with the idea of Functions as a Service (FaaS). With FaaS, the vast majority of the business logic is still written by the application developer, but it is deployed to fully managed, ephemeral containers that are live only during the invocation of the functions. These functions respond to specific events. For example, the application developer can author a function that receives binary image data as input and returns its compressed version. This function can be deployed as an independent unit of work and invoked with image data to get the compressed version back. It would run in an isolated container managed by the cloud provider itself, and the application developer would only deal with the parameters the function receives and the data it returns. Obviously, this function alone does not make much sense, but cloud providers also offer a mechanism to make these small functions respond to specific cloud events. For instance, you can configure the function to be invoked automatically whenever a new file is added to an S3 bucket. This way, the function is notified whenever a user uploads a new image and can save a compressed version of it to another bucket. You can deploy another function that returns plain JSON objects and configure it to respond to HTTP requests via API Gateway. You would then have a fully scalable web service that you pay for as you go.
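
To make the idea concrete, here is a minimal sketch of such a function in plain Java, the language we will use throughout this book. The class and method names are made up for illustration, and "compression" here simply means downscaling and re-encoding the image; the real AWS Lambda handler interfaces are introduced later in this chapter.

    import javax.imageio.ImageIO;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;

    // A plain Java function that takes binary image data and returns a smaller,
    // re-encoded version. In a FaaS setup, this logic would live inside a Lambda
    // handler, and the cloud provider would decide where and when it runs.
    public class ImageCompressor {

        public byte[] compress(byte[] originalImage) throws IOException {
            BufferedImage source = ImageIO.read(new ByteArrayInputStream(originalImage));

            // Scale the image down to half of its original dimensions.
            int width = Math.max(1, source.getWidth() / 2);
            int height = Math.max(1, source.getHeight() / 2);
            BufferedImage scaled = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            Graphics2D graphics = scaled.createGraphics();
            graphics.drawImage(source, 0, 0, width, height, null);
            graphics.dispose();

            // Re-encode as JPEG and return the raw bytes.
            ByteArrayOutputStream output = new ByteArrayOutputStream();
            ImageIO.write(scaled, "jpg", output);
            return output.toByteArray();
        }
    }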

Sounds good? Then we warmly welcome you to the serverless computing world!

For a good theoretical study on serverless computing, I recommend that you read Mike Roberts's article Serverless Architectures. He paints a big picture of the topic and carefully analyzes the advantages and drawbacks of a serverless approach. You can find information about this article in the bibliography section.

In this book, we will learn how to build a midsize serverless application with AWS Lambda and the Java language. Although Google Cloud Platform and Windows Azure offer similar functionality, I picked AWS Lambda because, at the time of writing, AWS is the provider that offers the most mature solutions. I picked Java because, despite its power and popularity, I believe Java has always been underestimated in the serverless computing community. In my opinion, this is because AWS started by offering JavaScript, so the trend began with that language and carried on with it. However, AWS Lambda has native support for Java, offering a fully functional JVM 8 to developers. In this book, we will look at how to apply the most common techniques from the Java world, such as Dependency Injection, and we will try to apply OOP design patterns to our functions. Unlike their JavaScript equivalents, our functions will be more sophisticated, and we will create great build systems thanks to Gradle, a Maven-like build tool with a Groovy-based DSL that lets you write sophisticated build configurations.

In this journey, we will begin with the following:

  • We will create a fully serverless forum application on the AWS platform.
  • We will use Java 8 as the language. Google's Guice will be our dependency injection framework.
  • We will use AWS CloudFormation to deploy our application. We will write small Gradle tasks that will help us achieve a painless deployment process. Gradle will also manage our dependencies.
CloudFormation is an AWS tool for the automated provisioning of cloud resources. With CloudFormation, you can define your whole cloud platform in a single JSON file, without having to deal with the CLI or the AWS Console, and deploy your application with one command in any AWS account. It is a very powerful tool, and I advise against using any other method to build AWS-based applications. With CloudFormation, you have a solid definition of your application that works everywhere in the same way. Besides the benefits of such solidity in the production environment, CloudFormation also lets us define our infrastructure as code, so we can keep it under source control and observe the evolution of our infrastructure along with our code. Therefore, in this book you will not find any CLI commands or AWS Console screenshots, but you will find CloudFormation template files (a minimal template sketch follows this list).
  • We will create only REST endpoints and test them using the REST-assured testing tool. We will not create any frontend, as that is out of the scope of this book. For the REST endpoints, we will use API Gateway. For some backend services, we will also develop standalone Lambda functions that respond to cloud events, such as S3 events.
  • We will use AWS S3 to store static files.
  • We will use DynamoDB as the data layer. For the search feature, we will learn how to use AWS CloudSearch. We will use SQS (Simple Queue Service) and SNS (Simple Notification Service) for some backend services.
  • You can use any IDE you want: we will operate on the CLI, mostly with Gradle commands, which makes the project totally IDE-agnostic.
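
As a taste of what such a template looks like, here is a minimal, purely illustrative CloudFormation sketch in JSON that declares just an S3 bucket and an SNS topic. The resource names are hypothetical, and the real templates we build later in the book will contain far more (Lambda functions, IAM roles, API Gateway resources, and so on).

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Description": "Illustrative sketch: two resources defined as code",
      "Resources": {
        "StaticFilesBucket": {
          "Type": "AWS::S3::Bucket"
        },
        "NotificationsTopic": {
          "Type": "AWS::SNS::Topic"
        }
      },
      "Outputs": {
        "StaticFilesBucketName": {
          "Value": { "Ref": "StaticFilesBucket" }
        }
      }
    }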

You may think that there are many unknown terms in this list, especially if you are not familiar with the AWS ecosystem. No worries! We expect you to be familiar only with the Java language and common patterns such as Dependency Injection. Knowledge of Gradle is a plus but not mandatory. We do not expect you to know the services that AWS offers: we will cover most of the details and refer to the relevant documentation whenever needed, and after completing this book, you will know what these abbreviations mean. However, you are free to go to the AWS documentation and learn what those services offer.

The forum application we will be implementing is a very basic but over-engineered application. It will include a REST API through which users can register, create topics and posts under existing topics, update their profiles, and perform some other operations. The application will also have some supporting services, such as sending mobile notifications to users when someone replies to their posts, an image resizer, and so on. As it is a very typical web application and we assume that the audience of this book is already familiar with the business requirements of such an application, we omit the definition of all the subsystems at this stage. Instead, we will adopt an iterative, agile methodology and define the specifications of these subsystems when we need them in the upcoming chapters.

In this chapter, we will cover the following topics:

  • A brief theoretical introduction to AWS Lambda
  • Setting up an AWS account
  • Creating the Gradle project for our application and configuring dependencies
  • Developing the base Lambda handler class that will be shared with all Lambda functions in the future
  • Testing this implementation locally using JUnit
  • Creating and deploying a basic Lambda function

Introducing AWS Lambda

As stated earlier, AWS Lambda is the core AWS offering we will be busy with throughout this book. While other services offer us important functionalities such as data storage, message queues, search, and so on, AWS Lambda is the glue that combines all this with our business logic.

In the simplest terms, AWS Lambda is a compute service where we can upload our code, create independent functions, and tie them to specific events in the cloud infrastructure. AWS manages all the infrastructure on which our functions run and performs all the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring, and logging. When our function is in high demand, AWS automatically increases the underlying machine count to ensure that it performs consistently. AWS Lambda natively supports the JavaScript (Node.js), Java, and Python languages.

You can write AWS Lambda functions in one of the languages supported natively. Regardless of the chosen language, there is a common pattern that includes the following core concepts:

  • Handler: The handler is the method that the Lambda runtime calls whenever your function is invoked. You configure the name of this method when you create your Lambda function. When the function is invoked, the Lambda runtime injects the event data into this method, and from this entry point your method can call other methods in your deployment package. In Java, the class that includes the handler method should implement a specific interface provided by the AWS Lambda runtime dependency. We will look at the details later in this chapter, and a minimal sketch follows this list.
  • Context: A special context object is also passed to the handler method. Using this object, you can access AWS Lambda runtime values such as the request ID, the execution time remaining before AWS Lambda terminates your Lambda function, and so on.
  • Event: Events are JSON payloads that the Lambda runtime injects into your Lambda function upon execution. You can invoke Lambda functions from many sources, such as HTTP requests, messaging systems, and so on, and the structure of the JSON differs for each invocation type. In the Node.js environment, events are passed to handler functions as strings. In the Java runtime, you have two options: receive the event as an InputStream and parse it yourself, or create a POJO that the expected JSON can be deserialized into. In the latter case, the Lambda runtime uses the Jackson library to convert the event into that POJO. In this book, we will create our own deserializer because the default Jackson configuration does not meet our requirements.
  • Logging: Within your Lambda function, you can log to CloudWatch, which is the built-in logging feature offered by AWS. In this book, we will use log4j to generate log entries, and we will then leverage the custom log4j appender offered by AWS to write our logs to CloudWatch.
  • Exceptions: After a successful execution, Lambda functions return a result in JSON format. It is also possible to signal an execution error using Java exceptions. We will make heavy use of exceptions to tell the AWS runtime about failed executions, which will be especially useful for returning different HTTP codes in our REST API.
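
To make these concepts concrete before we build our own base handler later in this chapter, here is a minimal sketch of a Java handler using the RequestHandler interface from the aws-lambda-java-core library. The GreetingRequest and GreetingResponse POJOs are hypothetical stand-ins for the event and result payloads, and the default Jackson-based (de)serialization shown here is exactly what we will later replace with our own:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    // Hypothetical event POJO: Lambda's default Jackson-based deserialization
    // maps the incoming JSON payload onto this class.
    class GreetingRequest {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // Hypothetical result POJO: the returned object is serialized back to JSON.
    class GreetingResponse {
        private final String message;
        public GreetingResponse(String message) { this.message = message; }
        public String getMessage() { return message; }
    }

    // The handler method configured in the Lambda function definition.
    public class GreetingHandler implements RequestHandler<GreetingRequest, GreetingResponse> {

        @Override
        public GreetingResponse handleRequest(GreetingRequest event, Context context) {
            // The Context object exposes runtime values such as the request ID
            // and the remaining execution time before Lambda terminates us.
            context.getLogger().log("Request " + context.getAwsRequestId()
                    + ", " + context.getRemainingTimeInMillis() + " ms remaining");

            // Throwing an exception here would mark the invocation as failed,
            // which we will later use to map errors to HTTP status codes.
            return new GreetingResponse("Hello, " + event.getName());
        }
    }

If we needed full control over the raw payload instead, we could implement the RequestStreamHandler interface from the same library, which hands us the event as an InputStream.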

AWS Lambda functions can be invoked manually or in response to different events. They are normal functions: you give them an event object and you get back the results. During execution, Lambda functions are totally agnostic about who is calling them. Invoking them manually is useful when we test our functions with different types of input, and we will actually do that; however, the real power of Lambda functions appears when their invocation is out of our control. In this book, we will configure Lambda functions to respond to different cloud events. Here are some examples:

  • REST Endpoints: We will develop Lambda functions that are invoked by HTTP requests through API Gateway. This service accepts HTTP requests, converts the HTTP request parameters into the Lambda event our function understands, and finally converts the output of the Lambda function into the desired JSON output. We will create three or four endpoints using this technology and have a fully scalable API for our application.
  • Resizing Images: For most use cases, we do not even need to develop a REST API. In this scenario, our users will upload their profile photos to AWS S3. We will not write a special endpoint for that; instead, the client application will use AWS Cognito to temporarily obtain IAM credentials that only allow uploading files to the S3 bucket. Once the image is uploaded, S3 will invoke our Lambda function, and our function will resize the image and save it to the resized images bucket. From this point on, users will be able to access the resized images via the CloudFront CDN. In other words, we will have built an image service without using or developing any REST API endpoints.
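
The following is a minimal sketch of how such an S3-triggered function could look in Java, using the S3Event type from the aws-lambda-java-events library. The exact package layout of the event classes depends on the library version, and the bucket wiring and resize step are left as hypothetical comments:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import com.amazonaws.services.lambda.runtime.events.S3Event;

    // Sketch of a handler wired to S3 object-created events. The resizing itself
    // (for example, with the ImageCompressor sketched earlier) is omitted here.
    public class ProfilePhotoHandler implements RequestHandler<S3Event, Void> {

        @Override
        public Void handleRequest(S3Event event, Context context) {
            // An S3 event may carry several records, one per affected object.
            event.getRecords().forEach(record -> {
                String bucket = record.getS3().getBucket().getName();
                String key = record.getS3().getObject().getKey();
                context.getLogger().log("New upload: s3://" + bucket + "/" + key);
                // Hypothetical next steps: download the object, resize it, and
                // write the result to the resized-images bucket.
            });
            return null;
        }
    }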

In the following chapters, you will gain a much better understanding of how Lambda functions work through practical examples.

After this introduction, it is time to get our hands dirty and write some code.
