
DevOps for Serverless Applications: Design, deploy, and monitor your serverless applications using DevOps practices



Introducing Serverless

This book will introduce us to the world of the serverless approach to information technology, looking at several cloud service providers, such as AWS, Azure, and Google, as well as platforms such as OpenWhisk and a few others. We will look at each of these services in detail, along with the different methods that are used to apply DevOps to them. We will look at the different use cases and learn the best practices for each of them.

The following topics will be covered in this introductory chapter:

  • Introduction to serverless
  • Core concepts
    • Backend as a Service (BaaS)
    • Function as a Service (FaaS)
    • AWS Lambda
    • Azure Functions
    • Google Functions
    • OpenWhisk
  • Pros and cons of serverless
  • DevOps with serverless

Introduction to serverless

When we hear the word serverless, the first thing that comes to mind is, oh, my code will magically run without any server! In a way, this is right: the serverless approach is one where we deploy our code to a cloud and it is executed automatically, without us worrying about the underlying infrastructure, renting or buying servers, scaling, monitoring, or capacity planning. The service provider takes care of all of these things. It is also surprisingly cheap and much easier to manage. Now, you are probably thinking, how is that possible? To look at how it works in more detail, let's compare the serverless approach with something we do in our daily lives.

The serverless approach is a bit like dealing with our laundry. We all need to wash our clothes, and for this, we need to buy a washing machine. But we will only use this washing machine for about 10 to 15 hours per week, and the rest of the time it will sit idle. Interestingly, we buy servers to host our applications in much the same way, and most of the time those servers sit idle, waiting for requests. We end up with piles of servers that are hardly managed or decommissioned, and because they are not properly used or managed, resources such as power, capacity, storage, and memory are wasted.

Also, a washing machine will only take a certain load and volume of laundry. The same applies to servers: they too allow only a certain volume and load. The more the load or traffic, the slower the processing will be, or it may stop completely. To take care of our extra load, we might decide to buy a bigger washing machine, which will hold a bigger volume of laundry and support a larger load. But this high-end machine will consume the same resources whether we wash a huge pile of clothes or just one item, which is wasteful. The same applies to servers: to cater for higher traffic, we could buy a high-end server, but we will end up using the same resources for 10 requests a day as we would for 10,000.

Also, to use the washing machine, we have to separate our clothes before washing, select the program, and add the detergent and softener, and if these elements are not handled properly, we might ruin our clothes. Similarly, when using a server, we have to make sure we install the right software—as well as the right version of the software—make sure that it is secure enough, and always monitor whether the services are running.

Also, if you are renting an apartment, you might not have a washing machine at all, or perhaps you might find launderettes cheaper and less worrisome when you wash your laundry in bulk. Launderettes, or coin-operated laundry machines, can be used whenever you need to wash your clothes. Likewise, companies such as AWS, Azure, and Google started out by renting out their servers. So we too can rent a server, and the provider will take care of the storage, memory, power, and basic setup.

Say that we've decided that a coin-operated washing machine at the local launderette is our best option. Now we just put the coin in and wash our clothes, but we still need to add detergent and fabric softener and set the right program, otherwise we will end up ruining our clothes. Likewise, when we rent a server in the cloud, we no longer need to deal with the power, storage, and memory, but we still need to install the required software, monitor the application service, upgrade the software version from time to time, and monitor the performance of the application.

Say that I found a new launderette, one that has a delivery service and charges me per item of clothing, so I can send clothes in bulk or one piece at a time. They will wash and iron my clothes too. Now, I don't need to worry about which detergent or softener to use, or which cleaning program to pick, and I don't need to own an iron either. In the world of information technology, however, companies are still using the coin-operated laundry model: they still lease servers and manage them through Platform as a Service (PaaS), still deal with application downtime, upgrade software versions, and monitor services.

But this can all be changed by adopting a serverless approach. Serverless computing will automatically provision and configure the servers and then execute the code. As traffic rises, it will scale automatically, applying the required resources, and it will scale back down once the traffic eases off.

Core concepts

In earlier days, the term serverless referred to an application that depended on third-party applications or services to manage server-side logic. These were services such as cloud-based databases, for example Google Firebase, or authentication services, such as Auth0 or AWS Cognito, and they were referred to as Backend as a Service (BaaS). But serverless also means code that is developed to be event-triggered, and which is executed in stateless compute containers. This architecture is popularly known as Function as a Service (FaaS). Let's look at each type of service in a bit more detail.

Backend as a Service

BaaS was conceptualized by services such as Auth0 and Google Firebase. Auth0 started as authentication as a service, but later moved towards FaaS. So basically, BaaS is a third-party service through which we can implement the functionality we need, and which provides the server-side logic for the implementation of the application.

The common approach used to be that web and mobile application developers coded their own authentication functionality, such as login, registration, and password management, and integrated each third-party service through its own API, which had to be incorporated into the application individually. This was complicated and time consuming for developers. BaaS providers made it easy by offering a unified API and SDK and bridging them with the frontend of the application, so that developers did not have to develop their own backend for each of these services. In this way, time and money were saved.

Say, for example, that we want to build a portal that would require authentication to consume our services. We would need login, signup, and authentication systems in place, and we would also need to make it easy for the consumer to sign in with just a click of a button using their existing Google or Facebook or Twitter account. Developing these functionalities individually requires lots of time and effort.

But by using BaaS, we can easily integrate our portal to sign up and authenticate using a Google, Facebook, or Twitter account. Another BaaS offering is Firebase, provided by Google. Firebase is a database service used by mobile apps, which removes the database administration overhead and provides authorization for different types of users. In a nutshell, this is how BaaS works. Let's look at the FaaS side of the serverless approach.
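To make the idea concrete, here is a minimal sketch of the frontend-to-BaaS hand-off in Python. The endpoint, API key, and response field are hypothetical placeholders rather than any real provider's API; the point is that our code only calls a unified API, while the provider owns the server-side logic.

```python
import requests

# Hypothetical BaaS endpoint and key; a real provider (Auth0, Firebase, Cognito)
# exposes its own URL, SDK, and response format.
BAAS_AUTH_URL = "https://auth.example-baas.com/v1/signin"
BAAS_API_KEY = "public-api-key"


def sign_in(email: str, password: str) -> str:
    """Delegate login to the BaaS provider and return a session token."""
    response = requests.post(
        BAAS_AUTH_URL,
        json={"email": email, "password": password},
        headers={"X-Api-Key": BAAS_API_KEY},
        timeout=10,
    )
    response.raise_for_status()
    # Password storage, hashing, and social logins all live on the provider's side.
    return response.json()["sessionToken"]
```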

Function as a Service

As mentioned at the start of the chapter, a FaaS function is essentially a small program that performs one small task and is triggered by an event, unlike a monolithic app, which does lots of things. So, in a FaaS architecture, we break our app into small, self-contained programs or functions, instead of a monolithic app that runs on PaaS and performs multiple functions. For instance, each endpoint in an API could be a separate function, and we can run these functions on demand rather than running the app full time.
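As an illustration of this difference, here is a minimal Python sketch of a single-purpose function. The event shape, handler signature, and data-layer helper are generic placeholders, not any specific provider's API; the point is that one endpoint becomes one small, independently deployable unit.

```python
def fetch_orders_for(user_id):
    """Stand-in for the data layer; a real function would query a database."""
    return [{"order_id": 1, "user_id": user_id, "status": "shipped"}]


def list_orders(event, context):
    """Handles only the 'list orders' endpoint, deployed and scaled on its own.

    In a monolith, this would be one route among many inside a single
    long-running application; here it is the entire unit of deployment.
    """
    user_id = event.get("user_id")
    return {"statusCode": 200, "body": fetch_orders_for(user_id)}
```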

The common approach would be to have an API coded in a multi-layer architecture, something like a three-tier architecture where code is broken down into a presentation, business, and data layer. All the routes would trigger the same handler functions in the business layer, and data would be processed and sent to the data layer, which would be a database or file. The following diagram shows this three-tier architecture:


That might work fine for a small number of simultaneous users, but how would we manage this when traffic grows exponentially? The application will suddenly become a computing nightmare. To resolve this problem, ideally, we would move the data layer, which contains the database, onto a separate server. But the problem is still not solved, because the API routes and business logic remain within one application, so scaling would still be a problem.

A serverless approach to the same problem is painless. Instead of having one server for application API endpoints and business logic, each part of the application is broken down into independent, auto-scalable functions. The developer writes a function, and the serverless provider wraps the function into a container that can be monitored, cloned, and distributed on any number of servers, as shown in the following diagram:

The benefit of breaking down an application into functions is that we can scale and deploy each function separately. For instance, if one endpoint in our API receives 90 percent of our traffic, or our image-processing code is eating up most of the computing time, that one function or bit of code can be distributed and scaled much more easily than the entire application.

In a FaaS system, the functions are expected to start within milliseconds in order to allow the handling of individual requests. In PaaS systems, by contrast, there is typically an application thread that keeps running for long periods of time, and handles multiple requests. FaaS services are charged per execution time of the function, whilst PaaS services charge per running time of the thread in which the server application is running.
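The billing difference is easiest to see with some back-of-the-envelope arithmetic. All the prices and workload numbers below are made up purely for illustration; real per-GB-second and per-hour rates vary by provider and region.

```python
# A hypothetical workload: 100,000 requests a month, each running for 200 ms
REQUESTS_PER_MONTH = 100_000
AVG_EXECUTION_SECONDS = 0.2
MEMORY_GB = 0.128

# FaaS model: pay only while the function runs (illustrative rate per GB-second)
FAAS_PRICE_PER_GB_SECOND = 0.0000166
faas_cost = (REQUESTS_PER_MONTH * AVG_EXECUTION_SECONDS
             * MEMORY_GB * FAAS_PRICE_PER_GB_SECOND)

# PaaS model: pay for the instance all month, busy or idle (illustrative rate)
PAAS_PRICE_PER_HOUR = 0.05
HOURS_PER_MONTH = 730
paas_cost = PAAS_PRICE_PER_HOUR * HOURS_PER_MONTH

print(f"FaaS: ${faas_cost:.2f}/month vs PaaS: ${paas_cost:.2f}/month")
```

With these example numbers, the idle PaaS instance costs orders of magnitude more than the pay-per-execution functions, which is exactly the point made above.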

In a microservices architecture, applications are loosely coupled, fine grained, and lightweight. Microservices were born out of the need to break down the monolithic application into small services so that each can be developed, managed, and scaled independently. FaaS takes that a step further by breaking things down into even smaller units called functions.

The trend is pretty clear: the unit of work is getting smaller and smaller. We're moving from monoliths to microservices, and now to functions, as shown in the following diagram:

With the rise of containers, many cloud vendors saw that a serverless functions architecture would give developers better flexibility to build their applications without worrying about the ops (operations). AWS was the first to launch this kind of service, under the name Lambda, and other cloud providers followed the trend, such as Microsoft Azure with Azure Functions and Google Cloud with Google Functions. This popularity also gave some vendors the opportunity to build open source versions. Some popular ones are IBM's OpenWhisk, which is Apache licensed, Kubeless, which is built on top of Kubernetes, and OpenFaaS, which is built on top of Docker containers. Even Oracle jumped into the fray with Oracle Fn. Let's briefly look at each vendor in this chapter and learn how they work. We will then travel with them over the rest of the book, looking at their approach to DevOps.

AWS Lambda

Amazon Web Services (AWS) was the first to launch a FaaS, or serverless, service, called Lambda, in 2014, and it is currently the leader in this kind of serverless provision. AWS Lambda follows an event-driven approach: when an event is triggered, Lambda executes the code and performs the required functionality, and it can scale automatically as traffic rises, as well as scale back down. Lambda functions run in response to events, such as changes to the data in an Amazon S3 bucket, an Amazon DynamoDB table change, or an HTTP request through the AWS API Gateway. In this way, Lambda helps to build triggers for multiple services, such as S3, DynamoDB, and the Kinesis stream data store.

So, Lambda lets developers worry only about the coding; the computing part, such as the memory, CPU, network, and storage, is taken care of by Lambda automatically. It also automatically manages the patching, logging, and monitoring of functions. Architecturally, a Lambda function is invoked in a container, which is launched based on the configuration provided. These containers might be reused for subsequent invocations of functions. As demand dies down, containers are decommissioned, but this is all managed internally by Lambda, so users do not have to worry about it, and they do not have any control over these containers. The languages supported by AWS Lambda functions are Node.js, Java, C#, and Python.
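A Python Lambda function is just a module with a handler; Lambda calls it with the triggering event and a context object, and everything around it (provisioning, scaling, patching) is handled by the service. A minimal sketch, with a made-up event field:

```python
import json


def lambda_handler(event, context):
    """Entry point, configured in Lambda as <module_name>.lambda_handler."""
    # 'name' is just an example field; the real event shape depends on the trigger
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```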

When building serverless applications, the core components are functions and event sources. An event source is the AWS service or custom application that publishes events, and the Lambda function processes those events. The maximum execution time for each Lambda function is 300 seconds.

Let's look at an example of how AWS Lambda actually works. In a photo-sharing application, people upload their photos, and these photos need thumbnails so that they can be displayed on each user's profile page. In this scenario, we can use a Lambda function to create the thumbnails: the moment a photo is uploaded to an AWS S3 bucket, S3, which is a supported event source, publishes an object-created event and invokes the Lambda function. The Lambda function code reads the new photo object from the S3 bucket, creates a thumbnail version, and saves it in another S3 bucket.
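A sketch of that thumbnail function is shown below. It assumes the Pillow imaging library is packaged with the function and that a destination bucket (here called photos-thumbnails) already exists; the event fields used are those carried by S3 object-created notifications.

```python
import io
import urllib.parse

import boto3
from PIL import Image  # Pillow must be bundled with the deployment package

s3 = boto3.client("s3")
THUMBNAIL_BUCKET = "photos-thumbnails"  # assumed destination bucket
THUMBNAIL_SIZE = (128, 128)


def lambda_handler(event, context):
    # The S3 object-created event carries the source bucket and object key
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    # Download the original photo, shrink it in memory, and upload the result
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    image = Image.open(io.BytesIO(body))
    image.thumbnail(THUMBNAIL_SIZE)

    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG")  # normalize output to JPEG
    buffer.seek(0)
    s3.put_object(Bucket=THUMBNAIL_BUCKET, Key=f"thumb-{key}", Body=buffer)
```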

In Chapter 3, Applying DevOps to AWS Lambda Applications, we will look at how we can create, run, and deploy Lambda functions in an automated way, and we will also monitor them and perform root-cause analysis through logging.

Azure Functions

Azure Functions is Microsoft's venture into serverless architecture, and it came onto the market in March 2016. Azure Functions allows functions to be coded in C#, F#, PHP, Node.js, Python, and Java, and it also supports Bash, batch, and PowerShell scripts. Azure Functions has seamless integration with Visual Studio Team System (VSTS), Bitbucket, and GitHub, which makes continuous integration and continuous deployment easier. It supports various types of event trigger: timer-based events for scheduled tasks, and events from services such as OneDrive and SharePoint, which can be configured to trigger operations in functions. Real-time processing of data and files adds the ability to operate a serverless bot that uses Cortana as the information provider. Microsoft has also introduced Logic Apps, a tool with a workflow-orchestration engine, which allows less technical users to build serverless applications. Azure Functions also allows triggers to be created from other Azure cloud services and from HTTP requests. The maximum execution time is five minutes per function.

Azure Functions provides two types of App Service plan: Dynamic and Classic. An App Service is a container or environment for a set of Azure functions to run in. The Dynamic option is similar to Lambda, where we pay for the time and memory our function uses to run. The Classic option is about allocating your own existing or provisioned App Service resources to the functions at no extra cost. Memory is allotted per App Service, whereas in AWS Lambda, memory is allocated per function. Azure Functions allows just 10 concurrent executions per function.
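As a point of comparison with the Lambda handler shown earlier, here is a minimal sketch of an HTTP-triggered Azure Function using the Python worker. It assumes a function.json file alongside it that declares the HTTP trigger binding; the greeting logic and the name parameter are just examples.

```python
import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    """Entry point wired up by the function's HTTP trigger binding."""
    # Read an optional query-string parameter; 'name' is just an example
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```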

Azure Functions follows a similar pricing model to AWS Lambda. The total cost is based on the number of triggers executed and the execution time. The first one million requests are free, and beyond that, it costs $0.02 for every 100,000 executions.

We will be looking at automating the deployment of Azure Functions in Chapter 4, DevOps with Azure Functions, and also at how different DevOps processes fit in to give Azure Functions a faster time to market.

Google Functions

Google joined the party a little late compared to AWS Lambda and Azure Functions, coming onto the market with a beta version of Cloud Functions in March 2017. Currently, Google Functions can only be written in Node.js, with support for other languages promised soon. Google Functions supports internal event bus triggers as well as HTTP triggers, which respond to events such as GitHub webhooks, Slack, or any HTTPS request, and it also supports mobile backend events from Firebase (analytics and the real-time database).

In terms of scalability, there is built-in provision for autoscaling. Google Functions supports 1,000 functions per project and allows 400 executions per function, which is claimed to be a soft limit. Google Functions allows a maximum execution time of 540 seconds (9 minutes). Deployment is supported through ZIP upload, Cloud Storage, and Cloud Source Repositories. Event sources include Cloud Pub/Sub and Cloud Storage objects. The logging of function executions is managed through Stackdriver Logging, Google Cloud's logging tool.

We will sail through Google Functions' DevOps approach in Chapter 6, DevOps with Google Functions, and will also look at the best practices around DevOps using Google Functions.

OpenWhisk

OpenWhisk is an open source FaaS platform that can be deployed to the cloud or to an on-premise data center. It is driven by IBM, who have open sourced it under the Apache license. We can sign up for OpenWhisk through Bluemix (IBM's cloud platform) or set it up locally through Vagrant. It works in a similar way to any FaaS technology, such as AWS Lambda, Azure Functions, or Google Functions, but OpenWhisk also supports open event providers. So if we have a custom event provider, we can incorporate it with OpenWhisk, because, unlike the other cloud platforms, OpenWhisk allows custom event providers to be plugged into its service. One example would be a function that is triggered through OpenWhisk upon the arrival of a new item in an RSS feed. OpenWhisk also allows an organization to set up its own FaaS platform on its own premises if it is not happy for its data to leave the organization. The languages supported by OpenWhisk are Swift, along with JavaScript (Node.js), and OpenWhisk has integrated Docker support for executing binary code within functions.
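OpenWhisk actions follow a simple contract: a dictionary of parameters in, a dictionary result out, with main as the default entry point. The chapter focuses on Swift and Node.js, but the same contract applies to OpenWhisk's other runtimes; the Python sketch below, with a made-up parameter name, is only to illustrate the model, for example for the RSS-feed trigger mentioned above.

```python
def main(params):
    """Invoked by OpenWhisk with the trigger's parameters as a dictionary."""
    # 'item_title' is a made-up parameter, e.g. the title of a new RSS entry
    item_title = params.get("item_title", "unknown item")
    return {"message": f"Processing new feed item: {item_title}"}
```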

Other serverless architectures

There are many other serverless options, such as OpenFaaS, Fission, and Iron.io. I won't cover them in this book, but let's skim through their features. OpenFaaS is an open source alternative for serverless architecture. It is built on Docker containers, with support for Swarm and Kubernetes. It has its own UI portal and also has CLI support for deploying functions. OpenFaaS supports Node.js, Python, Go, and C# on Windows and Linux. We can set it up on a cloud, a local laptop, or an on-premise server. OpenFaaS claims that we can write functions for almost anything. OpenFaaS is written in Golang and allows events through HTTP/HTTPS requests.

Fission is yet another open source take on serverless architecture. The underlying technology is Kubernetes and Docker containers, and it can be deployed on both cloud and on-premise infrastructure. It is designed as a set of microservices, and its components are the controller, the router, and the pool manager. The router manages HTTP requests, the controller manages functions, event triggers, and environment images, and the pool manager manages the pool of containers and loads the functions into those containers. The functions are written in Python.

Serverless benefits

There are a number of pros and cons to using serverless architecture. Let's look at the bright side first. Why would anyone build their application using a serverless architecture such as AWS Lambda or OpenWhisk? The main reasons are how efficiently the application performs, how fast it scales, and, most importantly, its cost. Let's look at a few important pros and then move on to the cons.

Faster time to market

We can push the application to market much faster, as operations become much simpler, which helps developers concentrate purely on their development. The operations team does not have to write code to handle scaling or worry about the underlying infrastructure.

Also, teams can build the application much faster with the help of third-party integrations, such as OAuth, Twitter, and Maps APIs.

Highly scalable

Every company wants their application to perform better, have zero downtime, and scale quickly and easily with rising traffic, but with monolithic application development, this can become very difficult. The Ops team has to be vigilant in scaling the underlying infrastructure as the load on the application rises, and a huge amount of time and money is wasted on downtime caused by spikes in traffic. Serverless computing, however, is highly scalable, and applications can be scaled and descaled within seconds.

Low cost

In serverless computing, developers are billed only for the time that the function is running, unlike IaaS and PaaS, which are billed 24/7 for each server. This is good for companies with a huge setup of apps, APIs, or microservices that are currently running 24/7 and using resources 100 percent of the time, whether they are required or not. But with serverless, instead of running the application 24/7, we can execute functions on demand and share the resources, so we can reduce the idle time substantially and still make the application run faster.

Latency and geolocation improvement

The scalability and responsiveness of an application depend on three factors: the number of users, the location of those users, and the latency of the network. In today's world, applications have a global audience, which can add to latency. But this latency can be largely mitigated with a serverless platform. With serverless, a container is instantiated to run a function at every event call, and this container can be created close to the user's geographical region, which automatically improves the performance of the app.

Serverless drawbacks

Although there are upsides to using serverless, there are also downsides. Let's look at the other side of the serverless coin.

Increased complexity

The more granular we go with the application, the more complex it becomes. The code for each function might get simpler, but the application as a whole will get more complex. Say, for example, that we break the application into 10 different microservices. We would have to manage 10 different apps, whereas in a monolithic application, it is just one app that has to be managed.

Lack of tooling

Let's say that we break our monolithic application into 50 different functions. For a monolithic application, there is a mature set of processes and tools to manage, log, monitor, and deploy it. But as serverless is pretty new to the market, monitoring or logging an application that runs for only a few seconds is still limited and challenging, although, over time, more efficient ways to do this will emerge.

Complexity with architecture

It is hard to decide how granular a function should be, and it is time consuming to assess, implement, and test the options. It would be cumbersome to manage too many functions, while, at the same time, ignoring granularity would result in us setting up mini-monoliths.

Drawback in implementation

The biggest challenge with serverless is integration testing. We will write many functions for an application, but how do we integrate them so that they work together as an application? And, before that, how do we test how well they work together? As serverless is new and still maturing, the testing options available are still limited. But we will be covering a few aspects of deployment and testing in future chapters.

DevOps with serverless

DevOps is another buzzword that has been around for quite a long time. Like serverless, DevOps is also a confusing term, and lots of people have lots of different perspectives on it. Some say that DevOps is just tools, some feel that DevOps consists of a few processes, and even IaaS and PaaS fall under the umbrella of DevOps. As I understand it, DevOps is a collaboration of tools, processes, and feedback, and they all go hand in hand for a successful DevOps implementation. But why are we talking about DevOps here? In short, because we need DevOps for a smooth transition to production, to log and monitor the serverless functions, and to test them before they reach users.

From a DevOps functional perspective, I will be covering version control, continuous integration, continuous deployment, monitoring, and logging for AWS Lambda functions, Azure Functions, Google Functions, and OpenWhisk. Version control is a process where we version the code so that we can branch it, package it, deploy it, and also roll back to a previous version. Continuous integration is the practice whereby developers integrate their code frequently, with automated builds to detect and mitigate problems early on. Continuous deployment is essentially a pipeline in which code is continuously refined using automated testing and then deployed to an environment; this pipeline moves smoothly towards production, with minimal manual intervention.

Summary

We will evaluate a few serverless frameworks in the next chapter, and then use one of them throughout the book in various DevOps implementation tutorials. In most of these DevOps implementations, we will be using the more popular DevOps tools, such as Jenkins for orchestration and GitHub for versioning. We will also cover automated unit, integration, and system testing, and we will look at adopting monitoring and logging best practices, as well as many more DevOps processes and features.


Key benefits

  • Understand various services for designing serverless architecture
  • Build CD pipelines using various cloud providers for your serverless applications
  • Implement DevOps best practices when building serverless applications

Description

Serverless applications are becoming very popular among developers and are generating a buzz in the tech market. Many organizations struggle with the effective implementation of DevOps with serverless applications. DevOps for Serverless Applications takes you through different DevOps-related scenarios to give you a solid foundation in serverless deployment. You will start by understanding the concepts of serverless architecture and development, and why they are important. Then, you will get to grips with the DevOps ideology and gain an understanding of how it fits into the Serverless Framework. You'll cover deployment framework building and deployment with CI and CD pipelines for serverless applications. You will also explore log management and issue reporting in the serverless environment. In the concluding chapters, you will learn important security tips and best practices for secure pipeline management. By the end of this book, you will be in a position to effectively build a complete CI and CD delivery pipeline with log management for serverless applications.

Who is this book for?

DevOps for Serverless Applications is for DevOps engineers, architects, or anyone interested in understanding the DevOps ideology in the serverless world. You will learn to use DevOps with serverless and apply continuous integration, continuous delivery, testing, logging, and monitoring with serverless.

What you will learn

  • Explore serverless fundamentals and effectively combine them with DevOps
  • Set up CI and CD with AWS Lambda and other popular Serverless service providers with the help of the Serverless Framework
  • Perform monitoring and logging with serverless applications
  • Set up a dynamic dashboard for different service providers
  • Discover best practices for applying DevOps to serverless architecture
  • Understand use cases for different serverless architectures

Product Details

Publication date : Sep 29, 2018
Length: 264 pages
Edition : 1st
Language : English
ISBN-13 : 9781788625661



Table of Contents

11 Chapters
1. Introducing Serverless
2. Understanding Serverless Frameworks
3. Applying DevOps to AWS Lambda Applications
4. DevOps with Azure Functions
5. Integrating DevOps with IBM OpenWhisk
6. DevOps with Google Functions
7. Adding DevOps Flavor to Kubeless
8. Best Practices and the Future of DevOps with Serverless
9. Use Cases and Add-Ons
10. DevOps Trends with Serverless Functions
11. Other Books You May Enjoy

Customer reviews

Rating: 4.5 out of 5 (8 ratings): 5 star 87.5%, 4 star 0%, 3 star 0%, 2 star 0%, 1 star 12.5%
Shailendra S., Dec 16, 2018 (5 stars, Amazon verified review)
As a whole, this is an excellent resource, particularly for someone that is new to Serverless functions (Function-as-a-Service (FaaS)). Easier to read and better organized, and examples showcasing the covered concepts. If you're just starting with Serverless applications, this is what you need. It also includes best practices for Serverless functions architecture services by AWS Lambda, Azure Functions, IBM OpenWhisk, Google Functions, Kubeless with their approach to DevOps. The book is well written, easy to follow, and a perfect mix of tutorials and Serverless concepts. Highly recommend!

Sudhir, Jan 12, 2019 (5 stars, Amazon verified review)
This is a must-read book for anyone who wants to understand and implement DevOps culture and process for Serverless applications. It covers the concept of Serverless with real world example, then it talks about different cloud service provider who provides serverless service, it also provides understanding of serverless framework and has many different hands-on examples for the framework. It also goes into details about how to setup CI/CD for lambda functions, azure functions, google functions, openwhisk and kubeless with multiple different tutorials. It also walks through different best practises around Serverless and DevOps. So overall it is good mix of Serverless concept and also DevOps. I definitely will recommend this book to anyone who wants to understand the concept of Serverless, DevOps for Serverless and Serverless framework.

Swapnil, Dec 16, 2018 (5 stars, Amazon verified review)
If you are interested in learning new horizons in growing technology especially in the field of serverless applications using devops, this book is a right fit. Very well explained with examples and for all kind of audiences.

Sachin Deshpande, Jan 04, 2019 (5 stars, Amazon verified review)
Great book on devOps and serverless applications. Covers all aspects like design, deploy and managing your serverless application. Each core functions are covered in details like AWS, Azure, IBM OpenWhisk. Great read for anyone learning devOps and serverless applications.

Denis, Nov 04, 2018 (5 stars, Amazon verified review)
If you are interested in Serverless ecosystem and wanted to get some hands-on trying simple serverless use cases in different platforms with a step by step approach, this book is the right choice.