Serverless Architectures with Kubernetes
Create production-ready Kubernetes clusters and run serverless applications on them
By Onur Yılmaz and Sathsara Sarathchandra

1. Introduction to Serverless

Learning Objectives

By the end of this chapter, you will be able to:

  • Identify the benefits of serverless architectures
  • Create and invoke simple functions on a serverless platform
  • Create a cloud-native serverless function and package it as a container using Kubernetes
  • Create a Twitter Bot Backend application and package it in a Docker container

In this chapter, we will explain the serverless architecture, then create our first serverless function and package it as a container.

Introduction to Serverless

Cloud technology is in a state of constant transformation aimed at creating scalable, reliable, and robust environments. Every improvement in cloud technology seeks to improve both the end user experience and the developer experience. End users demand fast and robust applications that are reachable from anywhere in the world. At the same time, developers demand a better environment in which to design, deploy, and maintain their applications. In the last decade, the journey of cloud technology started with cloud computing, where servers are provisioned in cloud data centers and applications are deployed on those servers. The transition to cloud data centers decreased costs and removed the burden of operating data centers. However, as billions of people access the internet and demand ever more services, scalability has become a necessity. In order to scale applications, developers have created smaller microservices that can scale independently of each other. Microservices are packaged into containers as the building blocks of software architectures to improve both the developer and end user experience. Microservices enhance the developer experience by providing better maintainability while offering high scalability to end users. However, the flexibility and scalability of microservices cannot keep up with enormous user demand. Today, for instance, millions of banking transactions take place daily, and millions of business-to-business requests are made to backend systems.

Finally, serverless started gaining attention as a way to create future-proof, ad hoc-scalable applications. Serverless designs focus on creating services that are even smaller than microservices, and they are designed to last further into the future. These nanoservices, or functions, help developers create more flexible and easier-to-maintain applications. Serverless designs are also ad hoc-scalable, which means that if you adopt a serverless design, your services naturally scale up or down with user requests. These characteristics have made serverless the latest big trend in the industry, and it is now shaping the cloud technology landscape. In this section, an introduction to serverless technology is presented, looking at its evolution, origin, and use cases.

Before diving deeper into serverless design, let's understand the evolution of cloud technology. Traditionally, deploying an application started with the procurement and installation of hardware, namely servers. Following that, operating systems were installed on the servers, and then application packages were deployed. Finally, the actual code in the application packages was executed to implement business requirements. These four steps are shown in Figure 1.1:

Figure 1.1: Traditional software development

Organizations started to outsource their data center operations to cloud providers to improve the scalability and utilization of servers. For instance, if you were developing an online shopping application, you first needed to buy some servers, wait for their installation, and operate them daily, dealing with potential problems caused by power, networking, and misconfiguration. It was difficult to predict server usage levels, and it was not feasible to make huge investments in servers just to run applications. Therefore, both start-ups and large enterprises started to outsource data center operations to cloud providers. This cleared away the problems related to the first step, hardware deployment, as shown in Figure 1.2:

Figure 1.2: Software development with cloud computing

With the introduction of virtualization in cloud computing, operating systems became virtualized so that multiple virtual machines (VMs) could run on the same bare-metal machine. This transition removed the second step, as service providers now provision VMs, as shown in Figure 1.3. With multiple VMs running on the same hardware, the cost of running servers decreases and the flexibility of operations increases. In other words, the low-level concerns of software developers are cleared away, since both the hardware and the operating system are now someone else's problem:

Figure 1.3: Software development with virtualization

VMs enable multiple instances to run on the same hardware. However, using VMs requires installing a complete operating system for every application. Even for a basic frontend application, you need to install an operating system, which results in operating system management overhead and limited scalability. Application developers, and the usage patterns of modern applications, require faster and simpler solutions with better isolation than creating and managing VMs can provide. Containerization technology solves this by running multiple instances of "containerized" applications on the same operating system. With this level of abstraction, problems related to operating systems are also removed, and containers are delivered as application packages, as illustrated in Figure 1.4. Containerization technology enables a microservices architecture, where software is designed as small, scalable services that interact with each other.

This architectural approach makes it possible to run modern applications such as collaborative spreadsheets in Google Drive, live streams of sports events on YouTube, video conferences on Skype, and many more:

Figure 1.4: Software development with containerization

The next architectural phenomenon, serverless, removes the burden of managing containers and focuses on running the actual code itself. The essential characteristic of serverless architecture is ad hoc scalability. Applications in a serverless architecture are ad hoc-scalable, which means they are scaled up or down automatically as needed. They can also be scaled down to zero, which means no hardware, network, or operation costs. With serverless applications, all low-level concerns are outsourced and managed, and the focus is on the last step of traditional software development – running the code – as shown in Figure 1.5. In the following section, we will focus on the origin and manifesto of serverless for a more in-depth introduction:

Figure 1.5: Software development with serverless

Serverless Origin and Manifesto

Serverless is a confusing term, since there are various definitions used in conferences, books, and blogs. Although it theoretically means not having any servers, in practice it means leaving the responsibility for servers to third-party organizations. In other words, it means getting rid not of servers, but of server operations. When you go serverless, someone else handles the procurement, installation, and operation of the servers. This decreases your costs because you do not need to operate servers or even data centers; furthermore, it lets you focus on the application logic that implements your core business functions.

The first uses of the term serverless appeared in articles related to continuous integration around 2010. When it was first discussed, serverless referred to building and packaging applications on the servers of cloud providers. The dramatic increase in popularity came with the launch of Amazon Web Services (AWS) Lambda in 2014. Furthermore, in 2015, AWS presented API Gateway as a single entry point for managing and triggering multiple Lambda functions. Serverless functions therefore gained traction in 2014, and it became possible to create complete serverless architecture applications using AWS API Gateway in 2015.

However, the most definitive and complete explanation of serverless was presented in 2016, at the AWS developer conference, as the Serverless Compute Manifesto. It consists of eight strict rules that define the core ideas behind serverless architecture:

Note

Although it was discussed in various talks at the AWS Summit 2016 conference, the Serverless Compute Manifesto has no official website or documentation. A complete list of what the manifesto details can be seen in a presentation by Dr. Tim Wagner: https://www.slideshare.net/AmazonWebServices/getting-started-with-aws-lambda-and-the-serverless-cloud.

  • Functions as the building blocks: In serverless architecture, the building blocks of development, deployment, and scaling should be the functions. Each function should be deployed and scaled in isolation, independently of other functions.
  • No servers, VMs, or containers: The service provider should operate all computation abstractions for serverless functions, including servers, VMs, and containers. Users of serverless architecture should not need any further information about the underlying infrastructure.
  • No storage: Serverless applications should be designed as ephemeral workloads that have a fresh environment for every request. If they need to persist some data, they should use a remote service such as a Database as a Service (DbaaS).
  • Implicitly fault-tolerant functions: Both the serverless infrastructure and the deployed applications should be fault-tolerant in order to create a robust, scalable, and reliable application environment.
  • Scalability with the request: The underlying infrastructure, including the computation and network resources, should enable a high level of scalability. In other words, it is not an option for a serverless environment to fail to scale up when requests are rising.
  • No cost for idle time: Serverless providers should only incur costs when serverless workloads are running. If your function has not received an HTTP request for a long period, you should not pay any money for the idleness.
  • Bring Your Own Code (BYOC): Serverless architectures should enable the running of any code developed and packaged by end users. If you are a Node.js or Go developer, for example, it should be possible for you to deploy your function, written in your preferred language, to the serverless infrastructure.
  • Instrumentation: Logs of the functions and the metrics collected over the function calls should be available to the developers. This makes it possible to debug and solve problems related to functions. Since they are already running on remote servers, instrumentation should not create any further burden in terms of analyzing potential problems.
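
To make the statelessness rule above concrete, the following is a minimal sketch of a stateless Go handler that keeps no local files or caches and persists data through a remote service instead. The STORAGE_URL variable, the /orders path, and the SaveOrder name are illustrative assumptions rather than part of any particular platform:

    package main

    import (
        "bytes"
        "fmt"
        "io/ioutil"
        "net/http"
        "os"
    )

    // storageURL points at a hypothetical remote storage service (for example,
    // a DBaaS REST API); the function itself writes nothing to the local disk.
    var storageURL = os.Getenv("STORAGE_URL")

    // SaveOrder is stateless: every invocation reads its input from the request
    // and persists it remotely, so any instance can serve any call.
    func SaveOrder(w http.ResponseWriter, r *http.Request) {
        body, err := ioutil.ReadAll(r.Body)
        if err != nil {
            http.Error(w, "could not read request", http.StatusBadRequest)
            return
        }
        resp, err := http.Post(storageURL+"/orders", "application/json", bytes.NewReader(body))
        if err != nil {
            http.Error(w, "remote storage unavailable", http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()
        fmt.Fprintf(w, "order stored, remote service answered %d", resp.StatusCode)
    }

    func main() {
        http.HandleFunc("/order", SaveOrder)
        http.ListenAndServe(":8080", nil)
    }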

The original manifesto introduced some best practices and limitations; however, as cloud technology evolves, the world of serverless applications evolves. This evolution will make some rules from the manifesto obsolete and will add new rules. In the following section, use cases of serverless applications are discussed to explain how serverless is adopted in the industry.

Serverless Use Cases

Serverless applications and designs may seem avant-garde; however, they are widely adopted in the industry for reliable, robust, and scalable applications. Any traditional application that runs on VMs, Docker containers, or Kubernetes can be designed to run serverless if you want the benefits of serverless designs. Some well-known use cases of serverless architectures are listed here:

  • Data processing: Interpreting, analyzing, cleansing, and formatting data are essential steps in big data applications. With the scalability of serverless architectures, you can quickly filter millions of photos and count the number of people in them, for instance, without buying any pricey servers. According to a case report (https://azure.microsoft.com/en-in/blog/a-fast-serverless-big-data-pipeline-powered-by-a-single-azure-function/), it is possible to create a serverless application to detect fraudulent transactions from multiple sources with Azure Functions. To handle 8 million data processing requests, serverless platforms would be the appropriate choice, with their ad hoc scalability.
  • Webhooks: Webhooks are HTTP API calls to third-party services to deliver real-time data. Instead of having servers up and running for webhook backends, serverless infrastructures can be utilized with lower costs and less maintenance.
  • Check-out and payment: It is possible to create shopping systems as serverless applications where each core functionality is designed as an isolated component. For instance, you can integrate the Stripe API as a remote payment service and use the Shopify service for cart management in your serverless backend.
  • Real-time chat applications: Real-time chat applications integrated into Facebook Messenger, Telegram, or Slack, for instance, are very popular for handling customer operations, distributing news, tracking sports results, or just for entertainment. It is possible to create ephemeral serverless functions to respond to messages or take actions based on message content. The main advantage of serverless for real-time chat is that it can scale when many people are using it. It could also scale to zero and cost no money when there is no one using the chat application.
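
To make the webhook and chat use cases above more concrete, the following is a minimal, hypothetical sketch of such a function in Go. The payload fields, the /webhook path, and the canned replies are illustrative assumptions and are not tied to any particular chat platform:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // chatMessage is a hypothetical webhook payload; real platforms such as
    // Slack or Telegram each define their own message format.
    type chatMessage struct {
        User string `json:"user"`
        Text string `json:"text"`
    }

    // ReplyToMessage is an ephemeral webhook handler: it reads one incoming
    // message, decides on an answer, and keeps no state between calls.
    func ReplyToMessage(w http.ResponseWriter, r *http.Request) {
        var msg chatMessage
        if err := json.NewDecoder(r.Body).Decode(&msg); err != nil {
            http.Error(w, "invalid message payload", http.StatusBadRequest)
            return
        }
        if msg.Text == "help" {
            fmt.Fprintf(w, "Hi %s, send me a keyword and I will answer.", msg.User)
            return
        }
        fmt.Fprintf(w, "Hi %s, you said %q.", msg.User, msg.Text)
    }

    func main() {
        http.HandleFunc("/webhook", ReplyToMessage)
        http.ListenAndServe(":8080", nil)
    }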

These use cases illustrate that serverless architectures can be used to design any modern application. It is also possible to move some parts of monolithic applications and convert them into serverless functions. If your current online shop is a single Java web application packaged as a JAR file, you can separate its business functions and convert them into serverless components. The dissolution of giant monoliths into small serverless functions helps to solve multiple problems at once. First of all, scalability will never be an issue for the serverless components of your application. For instance, if you cannot handle a high amount of payments during holidays, a serverless platform will automatically scale up the payment functions with the usage levels. Secondly, you do not need to limit yourself to the programming language of the monolith; you can develop your functions in any programming language. For instance, if your database clients are better implemented with Node.js, you can code the database operations of your online shop in Node.js.

Finally, you can reuse the logic implemented in your monolith since now it is a shared serverless service. For instance, if you separate the payment operations of your online shop and create serverless payment functions, you can reuse these payment functions in your next project. All these benefits make it appealing for start-ups as well as large enterprises to adopt serverless architectures. In the following section, serverless architectures will be discussed in more depth, looking specifically at some implementations.

Before moving on, note that serverless is not the right choice for every workload; cases where it may not be recommended include:

  • Applications with high latency
  • When observability and metrics are critical for business
  • When vendor lock-in and ecosystem dependencies are an issue

Serverless Architecture and Function as a Service (FaaS)

Serverless is a cloud computing design where cloud providers handle the provisioning of servers. In the previous section, we discussed how operational concerns are layered and handed over. In this section, we will focus on serverless architectures and application design using serverless architecture.

In traditional software architecture, all of the components of an application are installed on servers. For instance, let's assume that you are developing an e-commerce website in Java and your product information is stored in MySQL. In this case, the frontend, backend, and database are installed on the same server. End users are expected to reach the shopping website with the IP address of the server, and thus an application server such as Apache Tomcat should be running on the server. In addition, user information and security components are also included in the package, which is installed on the server. A monolithic e-commerce application is shown in Figure 1.6, with all four parts, namely the frontend, backend, security, and database:

Figure 1.6: Traditional software architecture

Microservices architecture focuses on creating a loosely coupled and independently deployable collection of services. For the same e-commerce system, you would still have frontend, backend, database, and security components, but they would be isolated units. Furthermore, these components would be packaged as containers and would be managed by a container orchestrator such as Kubernetes. This enables the installing and scaling of components independently since they are distributed over multiple servers. In Figure 1.7, the same four components are installed on the servers and communicating with each other via Kubernetes networking:

Figure 1.7: Microservices software architecture

Microservices are deployed to the servers, which are still managed by the operations teams. With the serverless architecture, the components are converted into third-party services or functions. For instance, the security of the e-commerce website could be handled by an Authentication-as-a-Service offering such as Auth0. AWS Relational Database Service (RDS) can be used as the database of the system. The best option for the backend logic is to convert it into functions and deploy them into a serverless platform such as AWS Lambda or Google Cloud Functions. Finally, the frontend could be served by storage services such as AWS Simple Storage Service (S3) or Google Cloud Storage.

With a serverless design, you only need to define these services to have scalable, robust, and managed applications running in harmony, as shown in Figure 1.8:

Note

Auth0 is a platform for providing authentication and authorization for web, mobile, and legacy applications. In short, it provides authentication and authorization as a service, where you can connect any application written in any language. Further details can be found on its official website: https://auth0.com.

Figure 1.8: Serverless software architecture

Starting from a monolith architecture and dissolving it first into microservices and then into serverless components is beneficial for multiple reasons:

  • Cost: Serverless architecture helps to decrease costs in two critical ways. The first is that the management of the servers is outsourced, and the second is that it only costs money when the serverless applications are in use.
  • Scalability: If an application is expected to grow, the current best choice is to design it as a serverless application since that removes the scalability constraints related to the infrastructure.
  • Flexibility: When the scope of deployable units is decreased, serverless provides more flexibility to innovate, choose better programming languages, and manage with smaller teams.

These dimensions, and how they vary between software architectures, are visualized in Figure 1.9:

Figure 1.9: Benefits of the transition to serverless

When you start with a traditional software development architecture, the transition to microservices increases scalability and flexibility. However, it does not directly decrease the cost of running the applications since you are still dealing with the servers. Further transition to serverless improves both scalability and flexibility while decreasing the cost. Therefore, it is essential to learn about and implement serverless architectures for future-proof applications. In the following section, the implementation of serverless architecture, namely Function as a Service (FaaS), will be presented.

Function as a Service (FaaS)

FaaS is the most popular and widely adopted implementation of serverless architecture. All major cloud providers have FaaS products, such as AWS Lambda, Google Cloud Functions, and Azure Functions. As its name implies, the unit of deployment and management in FaaS is the function. Functions in this context are no different from any other function in any other programming language. They are expected to take some arguments and return values to implement business needs. FaaS platforms handle the management of servers and make it possible to run event-driven, scalable functions. The essential properties of a FaaS offering are these:

  • Stateless: Functions are designed to be stateless and ephemeral operations where no file is saved to disk and no caches are managed. At every invocation of a function, it starts quickly with a new environment, and it is removed when it is done.
  • Event-triggered: Functions are designed to be triggered directly and based on events such as cron time expressions, HTTP requests, message queues, and database operations. For instance, it is possible to call the startConversation function via an HTTP request when a new chat is started. Likewise, it is possible to launch the syncUsers function when a new user is added to a database.
  • Scalable: Functions are designed to run as much as needed in parallel so that every incoming request is answered and every event is covered.
  • Managed: Functions are governed by their platform so that the servers and the underlying infrastructure are not a concern for FaaS users.

These properties of functions are covered by cloud providers' offerings, such as AWS Lambda, Google Cloud Functions, and Azure Functions, and by on-premises offerings, such as Kubeless, Apache OpenWhisk, and OpenFaaS. Owing to its high popularity, the term FaaS is often used interchangeably with the term serverless. In the following exercise, we will create a function to handle HTTP requests and illustrate how a serverless function should be developed.
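
For comparison with the self-hosted HTTP function in the following exercise, a managed FaaS platform wires the trigger and the server for you, and you deploy only the function body. The following is a hedged sketch of what an equivalent function could look like for AWS Lambda in Go; it assumes the github.com/aws/aws-lambda-go module, and the event types shown are illustrative, since in a real deployment they would match the configured trigger. Packaging and deployment steps are not covered here:

    package main

    import (
        "context"
        "fmt"

        "github.com/aws/aws-lambda-go/lambda"
    )

    // request and response are illustrative event types; in a real deployment
    // they would match the configured trigger (API Gateway, a queue, and so on).
    type request struct {
        Name string `json:"name"`
    }

    type response struct {
        Message string `json:"message"`
    }

    // handler contains only the business logic; the Lambda platform manages the
    // servers, scaling, and invocation on our behalf.
    func handler(ctx context.Context, req request) (response, error) {
        return response{Message: fmt.Sprintf("%s, Hello Serverless World!", req.Name)}, nil
    }

    func main() {
        // lambda.Start registers the handler with the Lambda Go runtime.
        lambda.Start(handler)
    }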

Exercise 1: Creating an HTTP Function

In this exercise, we will create an HTTP function to be a part of a serverless platform and then invoke it via an HTTP request. In order to execute the steps of the exercise, you will use Docker, text editors, and a terminal.

Note

The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise1.

To successfully complete the exercise, we need to ensure the following steps are executed:

  1. Create a file named function.go with the following content in your favorite text editor:
    package main
    import (
        "fmt"
        "net/http"
    )
    func WelcomeServerless(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hello Serverless World!")
    }

    In this file, we have created an actual function handler to respond when this function is invoked.

  2. Create a file named main.go with the following content:
    package main
    import (
        "fmt"
        "net/http"
    )
    func main() {
        fmt.Println("Starting the serverless environment..")
        http.HandleFunc("/", WelcomeServerless)
        fmt.Println("Function handlers are registered.")
        http.ListenAndServe(":8080", nil)
    }

    In this file, we have created the environment to serve this function. In general, this part is expected to be handled by the serverless platform.

  3. Start a Go development environment with the following command in your terminal:
    docker run -it --rm -p 8080:8080 -v "$(pwd)":/go/src --workdir=/go/src golang:1.12.5

    With that command, a shell prompt will start inside a Docker container for Go version 1.12.5. In addition, port 8080 of the host system is mapped to the container, and the current working directory is mapped to /go/src. You will be able to run commands inside the started Docker container:

    Figure 1.10: The Go development environment inside the container
  4. Start the function handlers with the following command in the shell prompt opened in step 3: go run *.go.

    With the start of the applications, you will see the following lines:

    Figure 1.11: The start of the function server

    These lines indicate that the main function inside the main.go file is running as expected.

  5. Open http://localhost:8080 in your browser:
    Figure 1.12: The WelcomeServerless output

    The message displayed on the web page reveals that the WelcomeServerless function is successfully invoked via the HTTP request and the response is retrieved.

  6. Press Ctrl + C to exit the function handler and then write exit to stop the container:
    Figure 1.13: Exiting the function handler and container

With this exercise, we demonstrated how we can create a simple function. In addition, the serverless environment was presented to show how functions are served and invoked. In the following section, an introduction to Kubernetes and the serverless environment is given to connect the two cloud computing phenomena.

Kubernetes and Serverless

Serverless and Kubernetes arrived on the cloud computing scene at about the same time, in 2014. AWS offered serverless through AWS Lambda, whereas Kubernetes was open sourced with the support of Google and its long and successful history in container management. Organizations started to create AWS Lambda functions for their short-lived, temporary tasks, and many start-ups focused on products running on serverless infrastructure. On the other hand, Kubernetes gained dramatic adoption in the industry and became the de facto container management system. It enables running both stateless applications, such as web frontends and data analysis tools, and stateful applications, such as databases, inside containers. The containerization of applications and microservices architectures have proven to be effective for both large enterprises and start-ups.

Therefore, running microservices and containerized applications is a crucial factor for successful, scalable, and reliable cloud-native applications. Also, the following two essential elements strengthen the connection between Kubernetes and serverless architectures:

  • Vendor lock-in: Kubernetes abstracts away the cloud provider and creates a managed environment for running serverless workloads. In other words, it is not straightforward to move your AWS Lambda functions to Google Cloud Functions if you want to switch providers next year. However, if you use a Kubernetes-backed serverless platform, you will be able to move quickly between cloud providers or even on-premises systems.
  • Reuse of services: As the mainstream container management system, Kubernetes is likely already running most of the workloads in your cloud environment. It offers the opportunity to deploy serverless functions side by side with existing services, which makes it easier to operate, install, connect, and manage both serverless and containerized applications.

Cloud computing and deployment strategies are always evolving to create more developer-friendly environments with lower costs. Kubernetes and containerization have already won the market and the hearts of developers, so much so that cloud computing without Kubernetes is unlikely to be seen for a very long time. Serverless architectures provide similar benefits and are gaining popularity; however, this does not pose a threat to Kubernetes. On the contrary, serverless applications will make containerization more accessible, and consequently, Kubernetes will profit. Therefore, it is essential to learn how to run serverless architectures on Kubernetes to create future-proof, cloud-native, scalable applications. In the following exercise, we will combine functions and containers and package our functions as containers.

Before moving on, as a rough guideline, typical workloads split between the two approaches as follows:

  • Serverless – data preparation
  • Serverless – ephemeral API operations
  • Kubernetes – databases
  • Kubernetes – server-related operations

Exercise 2: Packaging an HTTP Function as a Container

In this exercise, we will package the HTTP function from Exercise 1 as a container to be a part of a Kubernetes workload. Also, we will run the container and trigger the function via its container.

Note

The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise2.

To successfully complete the exercise, we need to ensure the following steps are executed:

  1. Create a file named Dockerfile in the same folder as the files from Exercise 1:
    FROM golang:1.12.5-alpine3.9 AS builder
    ADD . .
    RUN go build *.go
    FROM alpine:3.9
    COPY --from=builder /go/function ./function
    RUN chmod +x ./function
    ENTRYPOINT ["./function"]

    In this multi-stage Dockerfile, the function is built inside the golang:1.12.5-alpine3.9 container. Then, the binary is copied into the alpine:3.9 container as the final application package.

  2. Build the Docker image with the following command in the terminal: docker build . -t hello-serverless.

    Each line of the Dockerfile is executed sequentially and, with the last step, the Docker image is built and tagged: Successfully tagged hello-serverless:latest:

    Figure 1.14: The build of the Docker container
  3. Start a Docker container from the hello-serverless image with the following command in your Terminal: docker run -it --rm -p 8080:8080 hello-serverless.

    With that command, an instance of the Docker image is instantiated with port 8080 mapping the host system to the container. Furthermore, the --rm flag will remove the container when it is exited. The log lines indicate that the container of the function is running as expected:

    Figure 1.15: The start of the function container
  4. Open http://localhost:8080 in your browser:
    Figure 1.16: The WelcomeServerless output

    It reveals that the WelcomeServerless function running in the container was successfully invoked via the HTTP request and that the response was retrieved.

  5. Press Ctrl + C to exit the container:
    Figure 1.17: Exiting the container

In this exercise, we saw how we can package a simple function as a container. In addition, the container was started and the function was triggered with the help of Docker's networking capabilities. In the following exercise, we will implement a parameterized function to show how to pass values to functions and return different responses.

Exercise 3: Parameterized HTTP Functions

In this exercise, we will convert the WelcomeServerless function from Exercise 2 into a parameterized HTTP function. Also, we will run the container and trigger the function via its container.

Note

The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise3.

To successfully complete the exercise, we need to ensure that the following steps are executed:

  1. Change the contents of function.go from Exercise 2 to the following:
    package main
    import (
        "fmt"
        "net/http"
    )
    func WelcomeServerless(w http.ResponseWriter, r *http.Request) {
        names, ok := r.URL.Query()["name"]
        if ok && len(names[0]) > 0 {
            fmt.Fprintf(w, "%s, Hello Serverless World!", names[0])
        } else {
            fmt.Fprintf(w, "Hello Serverless World!")
        }
    }

    In the new version of the WelcomeServerless function, we now take URL parameters and return responses accordingly.

  2. Build the Docker image with the following command in your terminal: docker build . -t hello-serverless.

    Each line of the Dockerfile is executed sequentially and, with the last step, the Docker image is built and tagged: Successfully tagged hello-serverless:latest:

    Figure 1.18: The build of the Docker container
  3. Start a Docker container from the hello-serverless image with the following command in the terminal: docker run -it --rm -p 8080:8080 hello-serverless.

    With that command, the function handlers will start on port 8080 of the host system:

    Figure 1.19: The start of the function container
  4. Open http://localhost:8080 in your browser:
    Figure 1.20: The WelcomeServerless output

    It reveals the same response as in the previous exercise. If we provide URL parameters, we should get personalized Hello Serverless World messages.

  5. Change the address to http://localhost:8080?name=Ece in your browser and reload the page. We are now expecting to see a personalized Hello Serverless World message with the name provided in URL parameters:
    Figure 1.21: Personalized WelcomeServerless output
  6. Press Ctrl + C to exit the container:
    Figure 1.22: Exiting the container
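
The checks in steps 4 and 5 can also be scripted rather than performed in a browser. The following is a minimal sketch of a Go client that calls the parameterized function with the name parameter; it assumes the container from step 3 is still listening on port 8080:

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
    )

    func main() {
        // Call the parameterized function with a name query parameter.
        resp, err := http.Get("http://localhost:8080/?name=Ece")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        // Expected output: Ece, Hello Serverless World!
        fmt.Println(string(body))
    }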

In this exercise, we showed how a generic function can be used with different parameters. A single deployed function returned personalized messages based on the input values. In the following activity, a more complex function will be created and managed as a container to show how functions are implemented in real life.

Activity 1: Twitter Bot Backend for Bike Points in London

The aim of this activity is to create a real-life function for a Twitter bot backend. The Twitter bot will be used to search for available bike points in London and the number of available bikes in the corresponding locations. The bot will answer in a natural language form; therefore, your function will take input for the street name or landmark and output a complete human-readable sentence.

Transportation data for London is publicly available and accessible via the Transport for London (TFL) Unified API (https://api.tfl.gov.uk). You are required to use the TFL API and run your functions inside containers.

Once completed, you will have a container running for the function:

Figure 1.23: The running function inside the container

When you query via an HTTP REST API, it should return sentences similar to the following when bike points are found with available bikes:

Figure 1.24: Function response when bikes are available

When there are no bike points found or no bikes are available at those locations, the function will return a response similar to the following:

Figure 1.25: Function response when a bike point is located but no bike is found

The function may also provide the following response:

Figure 1.26: Function response when no bike point or bike is found

Execute the following steps to complete this activity:

  1. Create a main.go file to register function handlers, as in Exercise 1.
  2. Create a function.go file for the FindBikes function.
  3. Create a Dockerfile for building and packaging the function, as in Exercise 2.
  4. Build the container image with Docker commands.
  5. Run the container image as a Docker container and make the ports available from the host system.
  6. Test the function's HTTP endpoint with different queries.
  7. Exit the container.

    Note

    The files main.go, function.go and Dockerfile can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Activity1.

    The solution for the activity can be found on page 372.

In this activity, we built the backend of a Twitter bot. We started by defining main and FindBikes functions. Then we built and packaged this serverless backend as a Docker container. Finally, we tested it with various inputs to find the closest bike station. With this real-life example, the background operations of a serverless platform and how to write serverless functions were illustrated.
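
For reference, one possible shape of the FindBikes handler is sketched below. The /BikePoint/Search endpoint, the commonName and additionalProperties fields, and the NbBikes key are assumptions based on the public TFL Unified API and may differ from the book's official solution; depending on the API version, the search results may require a follow-up /BikePoint/{id} call for availability details. Treat this as a starting point rather than a reference implementation:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "net/url"
    )

    // bikePoint mirrors only the fields this sketch needs from the TFL
    // BikePoint payload; the real response contains many more fields.
    type bikePoint struct {
        CommonName           string `json:"commonName"`
        AdditionalProperties []struct {
            Key   string `json:"key"`
            Value string `json:"value"`
        } `json:"additionalProperties"`
    }

    // FindBikes looks up bike points matching the "query" URL parameter and
    // answers with a human-readable sentence.
    func FindBikes(w http.ResponseWriter, r *http.Request) {
        query := r.URL.Query().Get("query")
        if query == "" {
            fmt.Fprint(w, "Please tell me a street name or landmark in London.")
            return
        }

        resp, err := http.Get("https://api.tfl.gov.uk/BikePoint/Search?query=" + url.QueryEscape(query))
        if err != nil {
            http.Error(w, "could not reach the TFL API", http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()

        var points []bikePoint
        if err := json.NewDecoder(resp.Body).Decode(&points); err != nil || len(points) == 0 {
            fmt.Fprintf(w, "I could not find any bike points around %s.", query)
            return
        }

        // Report availability for the first matching bike point, assuming the
        // search result carries the NbBikes property.
        for _, prop := range points[0].AdditionalProperties {
            if prop.Key == "NbBikes" {
                fmt.Fprintf(w, "There are %s bikes available at %s.", prop.Value, points[0].CommonName)
                return
            }
        }
        fmt.Fprintf(w, "I found %s, but could not read its bike availability.", points[0].CommonName)
    }

As in the earlier exercises, this handler would be registered from main.go and packaged with a multi-stage Dockerfile before being run as a container.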

Summary

In this chapter, we first described the journey from traditional to serverless software development. We discussed how software development has changed over the years to create a more developer-friendly environment. Following that, we presented the origin of serverless technology and its official manifesto. Since serverless is a popular term in the industry, defining some rules helps to design better serverless applications that integrate easily into various platforms. We then listed use cases for serverless technology to illustrate how serverless architectures can be used to create any modern application.

Following an introduction to serverless, FaaS was explored as an implementation of serverless architectures. We showed how applications are designed in traditional, microservices, and serverless designs. In addition, the benefits of the transition to serverless architectures were discussed in detail.

Finally, Kubernetes and serverless technologies were discussed to show how they support each other. As the mainstream container management system, Kubernetes was presented, which involved looking at the advantages of running serverless platforms with it. Containerization and microservices are highly adopted in the industry, and therefore running serverless workloads as containers was covered, with exercises. Finally, a real-life example of functions as a backend for a Twitter bot was explored. In this activity, functions were packaged as containers to show the relationship between microservices-based, containerized, and FaaS-backed designs.

In the next chapter, we will be introducing serverless architecture in the cloud and working with cloud services.


Key benefits

  • Get hands-on experience with frameworks, such as Kubeless, Apache OpenWhisk, and Funktion
  • Master the basics of Kubernetes and prepare yourself for challenging technical assessments
  • Learn how to launch Kubernetes both locally and in a public cloud

Description

Kubernetes has established itself as the standard platform for container management, orchestration, and deployment. By learning Kubernetes, you’ll be able to design your own serverless architecture by implementing the function-as-a-service (FaaS) model. After an accelerated, hands-on overview of the serverless architecture and various Kubernetes concepts, you’ll cover a wide range of development challenges faced by real-world developers and explore various techniques to overcome them. You’ll learn how to create production-ready Kubernetes clusters and run serverless applications on them. You'll see how Kubernetes platforms and serverless frameworks such as Kubeless, Apache OpenWhisk and OpenFaaS provide the tooling to help you develop serverless applications on Kubernetes. You'll also learn ways to select the appropriate framework for your upcoming project. By the end of this book, you’ll have the skills and confidence to design your own serverless applications using the power and flexibility of Kubernetes.

Who is this book for?

This book is for software developers and DevOps engineers who have basic or intermediate knowledge about Kubernetes and want to learn how to create serverless applications that run on Kubernetes. Those who want to design and create serverless applications running on the cloud, or on-premise Kubernetes clusters will also find this book useful.

What you will learn

  • Deploy a Kubernetes cluster locally with Minikube
  • Get familiar with AWS Lambda and Google Cloud Functions
  • Create, build, and deploy a webpage generated by the serverless functions in the cloud
  • Create a Kubernetes cluster running on the virtual kubelet hardware abstraction
  • Create, test, troubleshoot, and delete an OpenFaaS function
  • Create a sample Slackbot with Apache OpenWhisk actions

Product Details

Publication date: Nov 29, 2019
Length: 474 pages
Edition: 1st
Language: English
ISBN-13: 9781838983277


Table of Contents

9 Chapters

1. Introduction to Serverless
2. Introduction to Serverless in the Cloud
3. Introduction to Serverless Frameworks
4. Kubernetes Deep Dive
5. Production-Ready Kubernetes Clusters
6. Upcoming Serverless Features in Kubernetes
7. Kubernetes Serverless with Kubeless
8. Introduction to Apache OpenWhisk
9. Going Serverless with OpenFaaS
