Building Serverless Applications with Python: Develop fast, scalable, and cost-effective web applications that are always available

eBook: €32.99
Paperback: €41.99
Subscription: free trial, renews at €18.99 p/m

Building Serverless Applications with Python

The Serverless Paradigm

If you are reading this book, you have most probably already heard of the serverless paradigm and the terms serverless engineering and serverless architecture. Nowadays, the way developers deploy applications has changed drastically, especially in the domains of data engineering and web development, thanks to event-based architectural designs, also called serverless architectures.

It is not uncommon for resources and servers in production to sit idle after a workload has finished, or while waiting for the next workload to arrive. This introduces a degree of redundancy into the infrastructure. What if there were no need for idle resources lying around when there is no workload? What if resources could be created when necessary and destroyed once the work is done?

By the end of this chapter, you will understand how serverless architectures and functions as a service work, and how you can build them into your existing software infrastructure. You will also learn what microservices are, and be able to decide whether microservices or serverless operations are well suited to your architecture. You will also learn how to build serverless applications with Python on major cloud service providers, such as Amazon Web Services (AWS) and Microsoft Azure.

This chapter will cover the following points:

  • Understanding serverless architectures
  • Understanding microservices
  • Serverless architectures don't have to be real-time only
  • Pros and cons of serverless architectures

Understanding serverless architectures

The concept of serverless architectures or serverless engineering revolves entirely around the concept of functions as a service. The most technical and accurate definition of serverless computing on the internet is as follows:

"Serverless computing, also known as function as a service (FaaS), is a cloud computing and code execution model in which the cloud provider fully manages starting and stopping of a function's container platform as a service (PaaS)."

Now, let's go into the details of each part of that definition to understand the paradigm of serverless computing better. We shall start with the term function as a service. It means that every serverless model has a function that is executed in the cloud. These functions are nothing but blocks of code that are executed depending on the trigger associated with the function. In the AWS Lambda environment, for example, functions can be triggered by a range of services, including S3 events, DynamoDB streams, SNS notifications, API Gateway requests, and CloudWatch Events schedules.
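
To make the idea concrete, here is a minimal sketch (not taken from the book) of what such a function looks like in Python; the handler name follows the AWS Lambda convention, and the S3-style event fields are shown purely for illustration:

```python
import json


def lambda_handler(event, context):
    """Entry point that AWS Lambda invokes whenever the configured trigger fires."""
    # For an S3 trigger, the event payload lists the records that caused the invocation.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")

    # The return value is handed back to the caller for synchronous invocations.
    return {"statusCode": 200, "body": json.dumps("trigger handled")}
```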

Now let's understand what manages the starting and stopping of a function. Whenever a function is triggered via one of these available triggers, the cloud provider launches a container in which the function executes. Once the function has executed successfully and returned something, or has run out of time, the container is either frozen or destroyed. Freezing the container allows it to be reused when demand is high and there is very little time between two triggers. Now, we come to the next part of the sentence, the function's container. This means that the functions are launched and executed in containers. This is the standard definition of a container from Docker, the company that made the concept of containers very popular:

"A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings."

This helps in packaging the function's code, runtime dependencies, and so on into a single deployment package for seamless execution. The deployment package contains the main code file for the function and all the non-standard libraries that the function needs in order to execute. The creation process of a deployment package looks very similar to that of a virtual environment in Python.
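
As a rough sketch (not the book's own tooling), the following Python script builds such a package for a hypothetical project containing a handler.py and a requirements.txt; the paths and file names are illustrative assumptions:

```python
import os
import shutil
import subprocess
import sys

BUILD_DIR = "build"          # staging directory for the package (illustrative)
HANDLER_FILE = "handler.py"  # the function's main code file (illustrative)

os.makedirs(BUILD_DIR, exist_ok=True)

# Install the non-standard (third-party) libraries into the staging directory,
# much as one would populate a virtual environment.
subprocess.run(
    [sys.executable, "-m", "pip", "install", "-r", "requirements.txt",
     "--target", BUILD_DIR],
    check=True,
)

# Place the function's own code alongside the installed libraries.
shutil.copy(HANDLER_FILE, BUILD_DIR)

# Zip everything into a single deployment package ready for upload.
shutil.make_archive("deployment_package", "zip", BUILD_DIR)
```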

So, we can clearly see that there are no servers running around the clock in the case of serverless infrastructures. A clear benefit of this is not needing a dedicated Ops team member to monitor the server boxes, so that person, if any, can focus on better things, such as software research. Not having servers running through the entire day saves a lot of money and resources for the company, or for you personally. This benefit is especially visible in machine learning and data engineering teams that make use of GPU instances for their regular workloads: running GPU instances on demand, serverlessly, saves a lot of money without the developers or the Ops team needing to maintain them around the clock.

Understanding microservices

Similar to the concept of serverless, the microservice-oriented design strategy has also become very popular recently, although this architectural style existed long before the idea of serverless came into existence. Just as we tried to understand serverless architectures from a technical definition found on the internet, we shall try to do the same for microservices. The technical definition of microservices is:

"Microservices, also known as the microservice architecture, is an architectural style that structures an application as a collection of loosely coupled services, which implement business capabilities."

Planning and designing an architecture in the form of microservices has its fair share of positives and negatives, just like serverless architectures. It's important to know about both in order to understand when and when not to leverage microservices in your existing architecture. Let's look at the positives of microservice architectures before moving on to the negatives.

Microservices help software teams stay agile and improve incrementally. In simpler terms, as the services are decoupled from each other, it is very easy to upgrade and improve a service without causing the others to go down. For example, in social network software, if the chat and the feed are both microservices, then the feed doesn't have to go down when the software team is upgrading or doing minor fixes on the chat service. In large monolithic systems, however, it is difficult to break things up as easily as one can with microservices, so any fix or upgrade to even a small component of the architecture comes with downtime, and the fix often takes more time than intended.
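
To make the chat/feed example slightly more concrete, here is a minimal sketch of the feed as its own independently deployable service; Flask and the port number are assumptions made purely for illustration, and the chat service would run as a separate process in the same style:

```python
# feed_service.py -- one independently deployable microservice (illustrative; Flask assumed)
from flask import Flask, jsonify

app = Flask(__name__)

# An in-memory stand-in for the feed's own datastore; each microservice owns its data.
FEED_ITEMS = [{"id": 1, "text": "Hello from the feed service"}]


@app.route("/feed")
def get_feed():
    return jsonify(FEED_ITEMS)


if __name__ == "__main__":
    # The chat service would run as a separate process (for example, on another port
    # or host), so either service can be upgraded without taking the other down.
    app.run(port=5001)
```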

The sheer size of the code base of a monolithic architecture itself acts as a hindrance to progress in the case of any small failure. Microservices, on the other hand, greatly help in boosting developer productivity by keeping code bases lean, so that teams can fix and improve a service with very little or no overhead and downtime. Microservices can be leveraged even better via containers, which provide effective and complete virtual operating system environments, process isolation, and dedicated access to underlying hardware resources.

However, microservices come with their own set of disadvantages and downsides, the major one being having to deal with distributed systems. Now that each service survives on its own, the architect needs to figure out how each of them interacts with the others in order to make a fully functional product. So, proper coordination between the services, and the decisions regarding how services move data between them, are very difficult choices that need to be made by the architect. Major distributed-systems problems, such as consensus, the CAP theorem, and maintaining the stability of consensus and connections, are some of the issues the engineer needs to handle while architecting microservices. Ensuring and maintaining security is also a major problem in distributed systems and microservices: you need to decide on separate security patterns and layers for each microservice, along with the security decisions necessary for the data interaction between the services.

Serverless architectures don't have to be real-time only

Serverless architectures are generally leveraged as real-time systems, as they work as functions as a service triggered by a set of available triggers. However, the idea that they are real-time only is a very common misconception: serverless systems can be leveraged equally well as both real-time and batch architectures. Knowing how to leverage serverless systems as batch architectures opens up many engineering possibilities, as not all engineering teams need, or have, real-time systems to operate.

Serverless systems can be batched by leveraging the following:

  • The cron facility in triggers
  • The concept of queues

Firstly, let's understand the concept of the cron facility in triggers. Serverless systems in the cloud can be scheduled so that the trigger fires every few minutes or hours, just like a normal cron job, which lets you leverage serverless functions as regular cron batch jobs. In the AWS environment, Lambda can be triggered as a cron via AWS CloudWatch Events, by setting the frequency either as a fixed time interval or as an expression in the cron format.
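
As a brief sketch of how such a schedule might be wired up with boto3 (the rule name, function name, ARN, and cron expression below are illustrative assumptions, not values from the book):

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

FUNCTION_NAME = "nightly-batch"  # illustrative function name
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:nightly-batch"  # illustrative

# Create a CloudWatch Events rule that fires on a schedule; both rate(...) and
# cron(...) expressions are accepted.
events.put_rule(
    Name="nightly-batch-schedule",
    ScheduleExpression="cron(0 2 * * ? *)",  # every day at 02:00 UTC
    State="ENABLED",
)

# Point the rule at the Lambda function so every tick triggers an invocation.
events.put_targets(
    Rule="nightly-batch-schedule",
    Targets=[{"Id": "nightly-batch-target", "Arn": FUNCTION_ARN}],
)

# Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="allow-cloudwatch-schedule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
)
```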

One can also leverage the concept of queues when building serverless batch architectures. Let's understand this by designing an example data pipeline. Let's say the system we intend to build does the following tasks:

  1. A user or a service sends some data into a database or a much simpler data store, such as AWS's S3.
  2. Once there are more than 100 files in the data store, we want to perform some task on them; let's say some analytics, such as counting the pages.

This can be achieved via queues, and it is one of the simpler serverless systems we can consider as an example. It can be built as follows (a minimal code sketch follows the list):

  1. The user or the service uploads or sends the data to the data store which we have selected.
  2. A queue is configured for the purpose of this task.
  3. An event notification can be configured on the S3 bucket or data store so that, as soon as data enters the store, a message is sent to the queue we configured earlier.
  4. A monitoring system can be set up to watch the number of messages in the queue. It is advisable to use the monitoring system of the cloud provider you are using so that the system stays completely serverless.
  5. Alarms can be set up in the monitoring system, with a threshold configured for each alarm. For example, the alarm needs to be triggered whenever the number of messages in our queue reaches or exceeds 100.
  6. This alarm can act as a trigger for the Lambda function, which does the analytics by first receiving messages from the queue and then querying the data store using the filename received in each message.
  7. Once the analytics are completed on the files, the processed files can be pushed to another data store for storage.
  8. After the entire task is completed, the container or the server where the Lambda function has run will be terminated, thus making this pipeline completely serverless.
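
A minimal sketch of steps 6 and 7 in Python with boto3 might look like the following; the queue URL, bucket name, and the page-counting logic are illustrative placeholders rather than the book's own code:

```python
import json

import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue"  # illustrative
RESULTS_BUCKET = "processed-results-bucket"                                  # illustrative


def parse_s3_notification(body):
    """Extract the bucket and key from an S3 event notification delivered via SQS."""
    record = json.loads(body)["Records"][0]
    return record["s3"]["bucket"]["name"], record["s3"]["object"]["key"]


def count_pages(data):
    """Stand-in for the real analytics step (here, a crude form-feed page count)."""
    return data.count(b"\f") + 1


def lambda_handler(event, context):
    """Invoked via the alarm; drains the queue and processes the files it references."""
    while True:
        response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10)
        messages = response.get("Messages", [])
        if not messages:
            break

        for message in messages:
            bucket, key = parse_s3_notification(message["Body"])
            obj = s3.get_object(Bucket=bucket, Key=key)
            result = count_pages(obj["Body"].read())

            # Push the processed output to a separate data store (step 7 above).
            s3.put_object(
                Bucket=RESULTS_BUCKET,
                Key=f"{key}.pages",
                Body=str(result).encode("utf-8"),
            )

            # Delete the message so the same file is not processed twice.
            sqs.delete_message(
                QueueUrl=QUEUE_URL,
                ReceiptHandle=message["ReceiptHandle"],
            )
```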

Pros and cons of serverless

Now that we understand what serverless architectures and pipelines look like, how they may be leveraged in existing architectures, and how microservices help keep architectures leaner and boost developer productivity, we shall look at the pros and cons of serverless systems in detail, so that software developers and architects can decide when to leverage the serverless paradigm in their existing systems and when not to.

The positives of serverless systems are:

  • Lower infrastructure costs: By deploying serverless systems, infrastructure costs can be greatly optimized, as there is no need for servers to be running around the clock every day. As the servers start whenever the function is triggered and stop once the function has executed successfully, billing only covers the brief period when the function was running.
  • Less maintenance needed: By virtue of the preceding reason, there is also no need for continuous monitoring and maintenance of servers. As the functions and triggers are automated, there is almost zero maintenance required for serverless systems.
  • Higher developer productivity: As the developers don't need to worry about downtime and server maintenance, they can focus and work on better software challenges, such as scaling and designing functionalities.

The remaining part of the book will show you how serverless systems are changing the way software is built. As this chapter is intended to help architects decide whether serverless systems are a good choice for their architecture, we shall now look at the disadvantages of serverless systems.

The disadvantages of serverless systems are:

  • Time limit on the function: The function being executed, be it AWS's Lambda or GCP's Cloud Functions, has an upper time limit (5 minutes at the time of writing). This makes the execution of heavy computations impossible. However, this can be worked around by executing a provisioning tool's playbook in nohup mode, which will be covered in detail later in the chapter. Note that preparing the playbook, setting up the container, and everything else still needs to be completed within the 5-minute time limit, as the container is automatically killed when the limit is exceeded.
  • No control over the container environment: The developer has no control over the environment of the container that is being created for executing the function. The operating system, the filesystem, and so on, are all decided by the cloud provider. For example, AWS's Lambda functions are executed inside containers that run the Amazon Linux operating system.
  • Monitoring containers: Apart from the basic monitoring capabilities that are provided by the cloud provider via their in-house monitoring tools, there is no mechanism to do detailed monitoring of the container that is executing the serverless function. This becomes even more difficult when scaling up serverless systems to accommodate distributed systems.
  • No control over security: There is little control over how the security of the data flow is ensured, as there is very little control over the container's environment. The container can, however, be run in the VPC and subnets of the developer's choice, which helps work around this disadvantage (as sketched below).
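
A brief sketch of that VPC workaround using boto3 is shown below; the function name, subnet ID, and security group ID are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach the function to a VPC and subnets of your choosing; the IDs are placeholders.
lambda_client.update_function_configuration(
    FunctionName="my-serverless-function",
    VpcConfig={
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```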

However, serverless systems can be scaled up to distributed systems for large-scale computations where the developer need not worry about the time limit. As already mentioned, this will be discussed in detail in the upcoming chapters. To build an intuition for how one can choose serverless systems over monolithic systems for large-scale computations, let us go through some important pointers that need to be kept in mind when taking that architectural decision.

The pointers to be kept in mind when scaling serverless systems to distributed systems are:

  • To scale up serverless systems into serverless distributed systems, one must understand how the concept of nohup works. It is a POSIX command that allows programs and processes to run in the background (a minimal sketch follows this list).
  • Nohup processes should be properly logged, including both the output and the error logs. This is the only source of information for your processes.
  • A provisioning tool, such as Ansible, Chef, or a similar one, needs to be leveraged to create a master-workers architecture, spawned via the playbook running in nohup mode in the container where the serverless function is being executed.
  • It is a good practice to ensure that all tasks that are being executed by the provisioning tool via the master server are properly monitored and logged, as there is no way one can retrieve the logs once the entire setup finishes executing.
  • Proper security needs to be ensured by using a temporary credential facility available from the cloud providers.
  • Proper closure should be ensured for the system. The workers and the master should self-terminate immediately after the pipeline of tasks finishes executing. This is very important and this is what makes the system serverless.
  • Generally, temporary credentials come with an expiry time, which is 3,600 seconds in most environments. So, if the developer is using temporary credentials to execute a task that is expected to take longer than the expiry time, there is a danger of the credentials expiring mid-task.
  • Debugging distributed serverless systems is an extremely difficult task for the following reasons:
    • Monitoring and debugging a nohup process is extremely difficult. Whenever you want to debug one, you have to either refer to the log file created by the process or kill the nohup process by using the process ID, and then manually run the scripts for debugging.
    • As the complete list of tasks executes sequentially in the provisioning tool, there is a danger that the instances may get terminated because the developer has forgotten to kill the nohup process before starting the debugging process.
    • As this is a distributed system, it goes without saying that the architecture should be able to self-heal in the case of any failure or a disaster. An example scenario can be when one of the workers goes down while performing some operation on a bunch of files. The entire bunch of files is now lost, and there is no means of recovery.
    • Another advanced disaster scenario can be when two worker servers go down while performing some operations on a bunch of files. In this case, the developer does not know which files have been executed successfully and which haven't.
  • It is a good practice to ensure that all the worker instances receive an equal amount of the load to execute so that the load across the distributed system stays even and time and resources are well optimized.
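
To make the nohup idea from the first two pointers concrete, here is a minimal sketch of launching a long-running playbook in the background from inside the function's container and capturing its logs; the playbook name and log path are illustrative assumptions, not the book's code:

```python
import subprocess

LOG_PATH = "/tmp/playbook.log"                      # illustrative log location
PLAYBOOK_CMD = ["ansible-playbook", "cluster.yml"]  # illustrative playbook

# Launch the playbook in the background so it keeps running independently of the
# invoking code, redirecting stdout and stderr to a log file -- the only record
# of what the run did.
with open(LOG_PATH, "ab") as log_file:
    process = subprocess.Popen(
        ["nohup", *PLAYBOOK_CMD],
        stdout=log_file,
        stderr=subprocess.STDOUT,
        start_new_session=True,  # detach from the controlling session, like nohup &
    )

print(f"Started playbook with PID {process.pid}; tail {LOG_PATH} to monitor or debug it.")
```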

Summary

In this chapter, we learned what serverless architecture is. Most importantly, the chapter helps architects decide whether serverless is the way forward for their team and their engineering, and how to transition or migrate from their existing infrastructure to a serverless paradigm. We also looked at the paradigm of microservices and how they help build lightweight and highly agile architectures. This chapter also went into detail about when a team should start thinking about microservices and when they can migrate or break their existing monolith(s) into microservices.

We then learned the art of building batch architectures in the serverless domain. One of the most common myths is that serverless systems are only for real-time computation purposes. However, we have seen how to leverage these systems for batch computations too, thus enabling a whole range of applications under the serverless paradigm. We also looked at the pros and cons of going serverless so that better engineering decisions can be made accordingly.

In the next chapter, we will develop a detailed understanding of how AWS Lambda works, which is the core component of serverless engineering in the AWS cloud environment. We will understand how triggers work and how AWS Lambda functions work. You will learn about leveraging containers for executing serverless functions and the associated computational workloads. Following that, we will learn about configuring and testing Lambda functions, along with the best practices for doing so. We will also cover versioning Lambda functions, in the same way versioning is done in code, and then create deployment packages for AWS Lambda, so that the developer can comfortably accommodate third-party libraries along with those from the standard library.

Key benefits

  • Design and set up a data flow between cloud services and custom business logic
  • Make your applications efficient and reliable using serverless architecture
  • Build and deploy scalable serverless Python APIs

Description

Serverless architectures allow you to build and run applications and services without having to manage the infrastructure. Many companies have adopted this architecture to save cost and improve scalability. This book will help you design serverless architectures for your applications with AWS and Python. The book is divided into three modules. The first module explains the fundamentals of serverless architecture and how AWS Lambda functions work. In the next module, you will learn to build, release, and deploy your application to production. You will also learn to log and test your application. In the third module, we will take you through advanced topics such as building a serverless API for your application. You will also learn to troubleshoot and monitor your app and master AWS Lambda programming concepts with API references. Moving on, you will also learn how to scale up serverless applications and handle distributed serverless systems in production. By the end of the book, you will be equipped with the knowledge required to build scalable and cost-efficient Python applications with a serverless framework.

Who is this book for?

This book is for Python developers who would like to learn about serverless architecture. Python programming knowledge is assumed.

What you will learn

  • Understand how AWS Lambda and Microsoft Azure Functions work and use them to create an application
  • Explore various triggers and how to select them, based on the problem statement
  • Build deployment packages for Lambda functions
  • Master the finer details about building Lambda functions and versioning
  • Log and monitor serverless applications
  • Learn about security in AWS and Lambda functions
  • Scale up serverless applications to handle huge workloads and serverless distributed systems in production
  • Understand SAM model deployment in AWS Lambda

Product Details

Publication date: Apr 20, 2018
Length: 272 pages
Edition: 1st
Language: English
ISBN-13: 9781787288676

Table of Contents

10 Chapters
The Serverless Paradigm
Building a Serverless Application in AWS
Setting Up Serverless Architectures
Deploying Serverless APIs
Logging and Monitoring
Scaling Up Serverless Architectures
Security in AWS Lambda
Deploying a Lambda Function with SAM
Introduction to Microsoft Azure Functions
Other Books You May Enjoy

Customer reviews

Rating distribution: 1.5 out of 5 (2 ratings)
5 star: 0%
4 star: 0%
3 star: 0%
2 star: 50%
1 star: 50%

philipbergen, Sep 16, 2019 (2 out of 5 stars)
There is no real substance; the author walks you through the different flows to set things up. It's a nice kickstart, but as soon as the AWS user interface changes this book will just be misleading.
Amazon Verified review

Mr Mark V Day, May 21, 2018 (1 out of 5 stars)
To say this is a book about Building Serverless Applications with Python is misleading; there is very little Python code, and the code is outweighed by screenshots of the Lambda console. If you compare it to serverless architecture on AWS it is a poor comparison, and the book was returned after 10 minutes of receiving the package.
Amazon Verified review