Architecting Cloud-Native Serverless Solutions: Design, build, and operate serverless solutions on cloud and open source platforms

By Safeer CM

Serverless Computing and Function as a Service

Serverless computing has ushered in a new era in the already revolutionary world of cloud computing. What started as a nascent idea for running code more efficiently and modularly has grown into a powerful platform of serverless services that can replace traditional microservices and data pipelines in their entirety. This growth and adoption also bring new challenges around integration, security, and scaling. Vendors keep releasing new services and adding features to existing ones, opening up more and more choices for customers.

AWS has been a frontrunner in serverless offerings, but other vendors are catching up fast. Replacing in-house and self-hosted applications with serverless platforms is becoming a trend. Function as a Service (FaaS) is what drives serverless computing. While all cloud vendors offer their own version of FaaS, we are also seeing the rise of self-hosted FaaS platforms, making this a trend across cloud and data center infrastructures alike. Teams are also using these self-hosted platforms to build cloud-agnostic solutions.

In this chapter, we will cover the foundations of the serverless and FaaS computing models, along with the architecture patterns that are essential to serverless. Specifically, we will cover the following topics:

  • Evolution of computing in the cloud
  • Serverless and FaaS
  • Microservice architecture
  • Event-driven architecture
  • FaaS in detail
  • API gateways and the rise of serverless APIs
  • The case for serverless

Evolution of computing in the cloud

In this section, we will touch on the evolution of cloud computing and why the cloud matters. We will briefly cover the technologies that drive the cloud and various delivery models.

Benefits of cloud computing

Cloud computing has revolutionized IT and has spearheaded unprecedented growth in the past decade. By definition, cloud computing is the on-demand delivery of computing resources over the internet. The traditional computing model required software businesses to invest heavily in computing infrastructure. Typically, this meant renting infrastructure in a data center – usually called colocation – with recurring charges for every server and every other piece of hardware, software, and internet connectivity they used. Depending on the server count and configurations, this number could be quite high, and the billing model was inflexible, with upfront costs and commitments. If more customized infrastructure was required – with access to network gear and more dedicated internet bandwidth – the cost would go even higher, with even larger upfront costs and commitments. Internet-scale companies had to build or rent entire data centers across the globe to scale their applications – most of them still do.

This traditional IT model always led to a higher total cost of ownership, as well as higher maintenance costs. But these were not the only disadvantages – lack of control, limited choices of hardware and software combinations, inflexibility, and slow provisioning that couldn't match market growth and ever-increasing customer bases all hindered the speed of delivery and the growth of applications and services. Cloud computing changed all that. Resources that were previously available only by building or renting a data center became available over the internet, at the click of a button or with a single command. This wasn't just the case for servers; private networks, routers, firewalls, and even software services and distributed systems – which would take traditional IT a huge amount of manpower and money to maintain – were all available right around the virtual corner.

Cost has always been a crucial factor in deciding which computing model to use and what investment companies are willing to make in the short and long term. In the next section, we will talk about the differences between these cost models.

CAPEX versus OPEX

The impact of cloud computing is multifold. On one hand, it allows engineering and product teams to experiment with their products freely without having to plan for infrastructure quarters or even years in advance. It also has the added benefit of not having to actively manage the cloud resources, unlike data center infrastructure. Another reason for its wide adoption is cost. The difference between traditional IT and the cloud in terms of cost is sometimes framed as CAPEX versus OPEX.

CAPEX, or capital expenditure, is the initial and ongoing investment made in assets – IT infrastructure, in this case – to reap benefits for the foreseeable future. This includes the ongoing maintenance cost, as it improves and extends the lifespan of the assets. The cloud, on the other hand, doesn't require you to invest upfront in assets; the infrastructure is elastic and virtually unlimited as far as the customer is concerned. There is no need to plan infrastructure capacity months in advance, or to worry about the underutilization of already acquired IT assets. Infrastructure can be built, scaled up or down, and ultimately torn down without any cost implications. The expenditure, in this case, is operating expenditure – OPEX. This is the cost incurred in running the day-to-day business, spent on utilities and consumables rather than long-term assets. The flexible nature of cloud resources makes them consumables rather than assets.

Let's look at a few technologies that accelerated the adoption of the cloud.

Virtualization, software-defined networking, and containers

While we understand and appreciate cloud computing and the benefits it brings, the technologies that made it possible to move from traditional data centers to the cloud need to be acknowledged.

The core technology that succeeded in capitalizing on the potential of hardware and building abstractions on top of it was virtualization. It allowed virtual machines to be created on top of the hardware and the host operating system. Network virtualization soon followed, in the form of Software-Defined Networking (SDN). This allowed vendors to provide completely virtualized private networks and servers on top of their IT infrastructure. Virtualization was prevalent long before cloud computing started, but it was limited to data centers and development environments, where the customers or vendors directly managed the entire stack, from hardware to applications.

The next phase of the technological revolution came in the form of containers, spearheaded by Docker's container runtime. Containers provide process, network, and filesystem isolation from the underlying operating system, and they make it possible to enforce resource utilization limits on the processes running inside them. This feat is powered by Linux namespaces, cgroups, and union filesystems. Packaging runtimes and application code into containers brought the dual benefits of portability and a lean operating system. It was a win for both application developers and infrastructure operators.
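To make these isolation and resource limit primitives concrete, here is a minimal sketch using the Docker SDK for Python (the docker package); the image, command, and limit values are illustrative assumptions:

```python
# pip install docker  (assumes a running Docker daemon)
import docker

client = docker.from_env()

# Run a throwaway container with cgroup-enforced limits:
# at most 128 MiB of RAM and half a CPU core.
output = client.containers.run(
    "python:3.12-slim",                # illustrative image
    ["python", "-c", "print('hello from an isolated process')"],
    mem_limit="128m",                  # memory limit enforced via cgroups
    nano_cpus=500_000_000,             # 0.5 CPU, in units of 1e-9 CPUs
    network_mode="none",               # isolated network namespace
    remove=True,                       # clean up the container afterward
)
print(output.decode())
```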

Now that you are aware of how virtualization, SDN, and containers came around, let's start exploring the different types of cloud computing.

Types of cloud computing

In this section, we are going to look at different cloud computing models and how they differ from each other.

Public cloud

The public cloud is cloud infrastructure that's available over the public internet and is built and operated by cloud providers such as Amazon, Microsoft, Google, and IBM. This is the most common cloud computing model; the vendor manages all the infrastructure and ensures there's enough capacity for all use cases.

A public cloud customer can be anyone who signs up for an account and has a valid payment method, which provides an easy path to start building on cloud services. The underlying infrastructure is shared by all the customers of the public cloud across the globe. The cloud vendor abstracts away this shared nature and gives each customer the impression of having dedicated infrastructure. The capacity is virtually unlimited, and the reliability of the infrastructure is guaranteed by the vendor. While the public cloud provides all these benefits, it can also open security loopholes and increase the attack surface if it's not maintained well. Excessive billing can result from a lack of knowledge of the cloud cost model, unrealistic capacity planning, or abandoning rarely used resources without disposing of them properly.

Private cloud

Unlike with the public cloud, a private cloud customer is usually a single business or organization. A private cloud can be maintained in-house or in company-owned data centers – usually called an internal private cloud. Some third-party providers run dedicated private clouds for business customers; this model is called a hosted private cloud.

A private cloud provides more control and customization, and certain businesses prefer private clouds due to the nature of their business. For example, telecom companies prefer to run open source-based private clouds – OpenStack is the primary technology choice for a large number of telecom carriers. Hosting the cloud infrastructure also helps them integrate their telco hardware and network with the computing infrastructure, thereby improving their ability to provide better communication services. This added flexibility and control also comes at a cost – the cost of operating and scaling the cloud. Everything from budget planning and growth predictions to hardware and real estate acquisition for expansion becomes the responsibility of the business. The engineering cost – in both technology and manpower – becomes a core cost center for the business.

Hybrid cloud

The hybrid cloud combines a public cloud with physical infrastructure – operated either on-premises or as a private cloud. Data and applications can move securely between the public and private clouds to suit business needs. Organizations adopt a hybrid model for many reasons: they may be bound by regulations and compliance (such as financial institutions), need low latency by placing certain applications close to company infrastructure, or simply have already made huge investments in physical infrastructure. Most public clouds recognize this as a valid business use case and provide solutions that connect cloud infrastructure to data centers through a private wide area network (WAN). Examples include AWS Direct Connect, GCP Cloud Interconnect, and Azure ExpressRoute.

An alternate form of hybrid cloud is the multi-cloud infrastructure. In these scenarios, one public cloud infrastructure is connected to one or more cloud infrastructures hosted by different vendors:

Figure 1.1 – Types of cloud computing

The preceding diagram summarizes the cloud computing types and how they are interrelated. Now that we understand these types, let's look at various ways in which cloud services are delivered.

Cloud service delivery models – IaaS, PaaS, and SaaS

While cloud computing initially started with services such as computing and storage, it soon evolved to offer a lot more services that handle data, computing, and software. These services are broadly categorized into three types based on their delivery models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Let's take a quick look at each of these categories.

Infrastructure as a service

In IaaS, the cloud vendor delivers compute (virtual machines, containers, and so on), storage, and network as cloud services – just like a traditional data center would. It also covers a lot of supporting services, such as firewalls and security, monitoring, and load balancing. Of all the service categories listed, IaaS provides the most control to the customer, who gets to fine-tune and configure the core services as they would in a traditional IT infrastructure.

While the compute, storage, and network are made available to customers as infrastructure components, these are not actual physical hardware. Instead, these resources are virtualized – abstractions on top of the real hardware. There is a lesser-known variant of IaaS where real hardware is directly provisioned and exposed to the customer. This category of service is called Bare-Metal as a Service (BMaaS). BMaaS provides much more control than IaaS, but it is usually costlier and takes more engineering time to manage.

Platform as a service

PaaS allows customers to develop, test, build, deploy, and manage their applications without having to worry about the resources or the build environment and its associated tooling. It can be considered an additional layer of abstraction on top of IaaS. In addition to compute, storage, and network resources, PaaS also provides the operating system, container/middleware, and application runtime. Any updates required for the upkeep of the platform, such as operating system patching, are taken care of by the vendor. PaaS enables organizations to focus on development without worrying about the supporting infrastructure and software ecosystem.

Any data needed by PaaS applications remains the responsibility of the user, though the required data stores are provided by the vendor. Application owners retain direct control of the data and can move it elsewhere if necessary.

Software as a service

In the SaaS model, a cloud vendor provides access to software via the internet. This cloud-based software is usually provided through a pay-as-you-go model where different sets of features of the same software are offered for varying charges. The more features used, the costlier the SaaS is. The pricing models also depend on the number of users using the software.

The advantage of SaaS is that it completely frees the customer from having to develop or operate the software. All the hassle of running such an infrastructure, including security and scaling, is taken care of by the vendor. The only commitment from the customer is the subscription fee. This freedom also comes at the cost of complete vendor dependency. The data is managed by the vendor; most vendors enable their customers to take backups of their data, since finding a compatible vendor or reusing that data in-house can be challenging:

Figure 1.2 – Cloud service delivery models

Now that we have cemented our foundations of cloud computing, let's look at a new model of computing – FaaS.

Serverless and FaaS

In the previous sections, we discussed various types of clouds, cloud service delivery models, and the core technologies that drove this technology revolution. Now that we have established the baselines, it is time to define the core concept of this book – serverless.

When we say serverless, what we are usually referring to is an application built on top of a serverless platform. Serverless started as a new cloud service delivery model in which everything except the code is abstracted away from the application developer. This sounds like PaaS, as there are no servers to manage and the application developer's responsibility is limited to writing the code. There are some overlaps, but there are a few distinctive differences between PaaS and serverless, as follows:

PaaS | Serverless
Always-on application | Runs on demand
Scaling requires configuration | Automatic scaling
More control over the development and deployment infrastructure | Very limited control over the development and deployment infrastructure
High chance of idle capacity | Full utilization with no idle time, plus visibility to fine-tune and benchmark business logic
Billed for the entirety of the application's running time | Billed every time the business logic is executed

Table 1.1 – PaaS versus serverless

In the spectrum of cloud service delivery models, serverless can be placed between PaaS and SaaS.

FaaS and BaaS

The serverless model became popular in 2014, after AWS introduced a service called Lambda, which provides FaaS. Historically, other services could be considered ancestors of serverless, such as Google App Engine and iron.io. In its initial days, Lambda allowed users to write functions in a selected set of language runtimes. A function could be executed in response to a limited set of events, or scheduled to run at an interval, similar to a cron job. It was also possible to invoke the function manually.

As we mentioned previously, Lambda was one of the first services in the FaaS category and established itself as a standard. So, when we say serverless, people think of FaaS and, subsequently, Lambda. But FaaS is just one part of the puzzle – it is the computing component of serverless. As is often the case, compute is meaningless without data and a way to provide input and output. This is where a whole range of supporting services comes into the picture: API gateways, object stores, relational databases, NoSQL databases, communication buses, workflow management, authentication services, and more. In general, these services power the backend for serverless computing and can be categorized as Backend as a Service (BaaS). We will look at BaaS in the next chapter.

Before we get into the details of FaaS, let's review two architecture patterns that you should know about to understand serverless – the microservice architecture and the Event-Driven Architecture (EDA).

Microservice architecture

Before we look at the microservice architecture, let's look at how web applications were built before it. The traditional way of building software applications is called the monolithic architecture. Enterprises used to develop applications as one big, indivisible unit that provided all the intended functionality. In the initial phases of development and deployment, monoliths offered some fairly good advantages. Project planning and building a minimum viable product – the alpha or beta version – was easier. A single technology stack would be chosen, which made it easier to hire and train developers. Deployment-wise, it was easy to scale, since multiple copies of this single unit could be placed behind a load balancer to handle increased traffic:

Figure 1.3 – Monolithic architecture

The problems start when the monolithic application has to accommodate more features and the business requirements grow. It becomes increasingly complex to understand the business logic and how the various pieces that implement the features are interconnected. As the development team grows, parts of the application will be developed by dedicated teams. This leads to communication disconnects and introduces incompatible changes and more complex dependencies. The adoption of new technologies becomes virtually impossible, and the only way to bring in changes that align with changing business requirements is to rewrite the application in its entirety. On the scaling front, the problem is that the entire application must be scaled up, even if only a particular piece of code or business logic creates the bottleneck. This inflexibility causes unnecessary provisioning of resources and idle time when that particular business logic is not in the critical path.

The microservice architecture was introduced to fix the shortcomings of the monolithic architecture. In this architecture, an application is organized as a collection of smaller independent units called microservices. This is achieved by building separate services around independent functions or the business logic of the application. In a monolithic architecture, the different modules of the application would communicate with each other using library calls or inter-process communication channels. In the case of the microservice architecture, individual services communicate with each other via APIs using protocols such as HTTP or gRPC. Some of the key features of the microservice model are as follows:

  • Loosely coupled – each unit is independent.
  • Single responsibility – one service is responsible for one business function.
  • Services are developed and deployed independently.
  • Each service can be built with a separate technology stack.
  • It's easier to divide and separate the backends that support the services, such as databases.
  • Smaller, separate teams are responsible for one or more microservices.
  • Developer responsibilities are better and more clearly defined.
  • Each service is easy to scale independently.
  • A bug in one service won't bring down the entire application; instead, only a single piece of business logic or a feature is impacted:
Figure 1.4 – E-commerce application with the microservice architecture
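To make the pattern concrete, the following is a minimal sketch of one such single-responsibility service – a hypothetical product catalog microservice exposing an HTTP API, using only the Python standard library; the endpoint and data are illustrative assumptions:

```python
# A toy "catalog" microservice: one business function, one HTTP API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In a real deployment, this state would live in the service's own database.
PRODUCTS = {"sku-1": {"name": "Keyboard", "price": 29.99}}

class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Single responsibility: this service only answers catalog queries.
        if self.path == "/products":
            body = json.dumps(PRODUCTS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Other microservices (cart, orders, ...) would call this API over HTTP.
    HTTPServer(("0.0.0.0", 8080), CatalogHandler).serve_forever()
```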

While microservices solve a lot of the problems that the monolithic architecture posed, they are no silver bullet. Some of the disadvantages of microservices are as follows:

  • Since all inter-microservice communication happens over the network, network issues such as latency directly increase the time it takes for two parts of a business function to communicate.
  • Since most business logic requires talking to other microservices, managing each service becomes more complex.
  • Debugging becomes hard in a distributed microservice environment.
  • More external services are required to ensure visibility into the infrastructure through metrics, logs, and tracing. The absence of any of these makes troubleshooting hard.
  • It puts a premium on monitoring and increases the overall infrastructure cost.
  • Testing global business logic involves multiple service calls and dependencies, making it very challenging.
  • Deployments require more standardization, engineering investment, and continuous upkeep.
  • Request routing becomes more complex.

This sums up the microservice architecture and its benefits. In the next section, we will briefly discuss a few technologies that can help microservices be deployed more structurally.

Containers, orchestration, and microservices

Containers revolutionized the way we deploy applications and utilize system resources. While microservices existed long before containers became popular, they were not configured and deployed optimally. A container's ability to isolate running processes from one another and to limit the resources used by those processes was a great enabler for microservices. The introduction of container orchestration services such as Kubernetes took this to the next level: deployments became more streamlined, and developers could define every resource, network, and backend for an application using a declarative model. Today, containers and container orchestration are the de facto way to deploy microservices.

Now that we have a firm understanding of the microservice architecture, let's examine another architecture pattern – EDA.

Event-driven architecture

EDA is an architectural pattern in which capturing, processing, and storing events is the central theme. It allows a set of microservices to exchange and process information asynchronously. But before we dive into the details of the architecture, let's define what an event is.

Events

An event is a record of a significant occurrence or change in the state of a system. The source of an event can be a change in a hardware or software system, a change to the content of a data item, or a change in the state of a business transaction. Anything that happens in your business or IT infrastructure could be an event; which events we need to process and bring under EDA is an engineering and business choice. Events are immutable records: they can be read and processed without the event itself being modified. Events are usually ordered based on their creation time.
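As a minimal sketch, an event can be represented as a small, immutable record like the following; the field names and payload are illustrative assumptions, not a fixed standard:

```python
import json, time, uuid

def make_event(event_type: str, payload: dict) -> str:
    """Build an immutable, timestamped event record as JSON."""
    event = {
        "id": str(uuid.uuid4()),    # unique identity, useful for deduplication
        "type": event_type,         # e.g. "order.placed"
        "timestamp": time.time(),   # creation time, used for ordering
        "payload": payload,         # domain data describing what happened
    }
    return json.dumps(event)        # serialized; consumers read, never modify

print(make_event("order.placed", {"order_id": 42, "total": 19.99}))
```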

Some examples of events are as follows:

  • Customer requests
  • Change of balance in a bank account
  • A food delivery order being placed
  • A user being added to a server
  • Sensor reading from a hardware or IoT device
  • A security breach in a system

You can find examples of events all around your application and infrastructure. The trick is deciding which are relevant and need processing. In the next section, we'll look at the structure of EDA.

Structure and components of an EDA

The value proposition of EDA comes from the fact that an event loses its processing value as it gets older. Event-driven systems can respond to events as they are generated and take appropriate action, adding a lot of business value. In an event-driven system, messages from various sources are ingested, sent to interested parties (read: microservices) for processing, and then persisted to disk for a defined period.

EDA fundamentally differs from the synchronous model followed by APIs and web stacks, where a response must be returned for every request synchronously. Compare a customer support center handling phone calls versus emails: while phone calls take a lot of time and need an agent responding manually to each request, the same time can be spent asynchronously replying to a batch of emails, often with the help of automation. The same principle applies to request-response versus event-driven models. But just like in this example, EDA is not a silver bullet and can't be used on all occasions; the trick is finding the right use case and building on it. Most critical systems and customer-facing services still have to rely on the synchronous request-response model.

The components of an event-driven model can be broadly classified into three types – event producers, the event router (broker), and event consumers. Event producers are one or more microservices that produce interesting events and post them to the broker. The event broker is the central component of this architecture and enables loose coupling between producers and consumers. It is responsible for receiving events, serializing or deserializing them if necessary, notifying consumers of new events, and storing them. Certain brokers also filter events based on conditions or rules. Consumers can then consume the events they are interested in at their own pace:

Figure 1.5 – EDA
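The following sketch models the three components, with an in-memory queue standing in for the broker purely to show the loose coupling; a production system would use a durable broker such as the services listed later in this chapter:

```python
import queue

broker: "queue.Queue[dict]" = queue.Queue()   # stand-in event router

def producer():
    # Producers only know about the broker, not about any consumer.
    for order_id in (1, 2, 3):
        broker.put({"type": "order.placed", "order_id": order_id})

def consumer():
    # Consumers pull events at their own pace, decoupled from producers.
    while not broker.empty():
        event = broker.get()
        print(f"processing {event['type']} for order {event['order_id']}")

producer()
consumer()
```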

That sums up the EDA pattern. Now, let's look into the benefits of EDA.

Benefits of EDA

The following is not a comprehensive list of the benefits of EDA, but this should give you a fair idea of why this architecture pattern is important:

  • Improved scalability and fault tolerance – the failure of one producer or consumer doesn't impact the rest of the system.
  • Real-time data processing for better decisions and customer experience – businesses can respond in real time to changes in customer behavior and make decisions or share data that improve the quality of the service.
  • Operational stability and agility.
  • Cost efficiency compared to batch processing. Batch processing requires storing large volumes of data and allocating a lot of storage and compute for long periods, and once a batch completes, the compute sits idle. This doesn't happen in EDA, as events are processed as they arrive, distributing the compute and storage optimally.
  • Better interoperability between independent services.
  • High throughput and low latency.
  • Events are easy to filter and transform.
  • The rates of production and consumption don't have to match.
  • Works for small as well as complex applications.

Now that we have covered the benefits of EDA, let's look at some use cases where the EDA pattern can be implemented.

Use cases

EDA has a very varied set of use cases; some examples are as follows:

  • Real-time monitoring and alerting based on the events in a software system
  • Website activity tracking
  • Real-time trend analysis and decision making
  • Fraud detection
  • Data replication between similar and different applications
  • Integration with external vendors and services

While EDA becomes more and more important as business logic and infrastructure grow complex, there are certain downsides we need to be aware of. We'll explore them in the next section.

Disadvantages

As we mentioned earlier, EDA is no silver bullet and doesn't work with all business use cases. Some of its notable disadvantages are as follows:

  • The decoupled nature of events can make it difficult to debug or trace issues back through the system.
  • The reliability of the system depends on the reliability of the broker. Ideally, the broker should be either a cloud service or a self-hosted distributed system with a high degree of reliability.
  • Consumer patterns can make efficient capacity planning difficult. If many consumers are services that wake up only at defined intervals to process events, this can create capacity imbalances for those periods.
  • There is no single standard for implementing brokers, so knowing the guarantees a broker provides is important. Architectural choices – such as whether it strongly guarantees ordering or promises no duplicate events – should be figured out early in the design, and the producers and consumers should be designed accordingly (see the sketch after this list).
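For instance, if a broker guarantees only at-least-once delivery, consumers are typically made idempotent. Here is a minimal sketch of that defensive pattern, assuming each event carries a unique id as in the earlier example:

```python
processed_ids: set[str] = set()   # in production, this would be a durable store

def handle(event: dict) -> None:
    # At-least-once delivery means duplicates are possible;
    # skip any event we have already processed.
    if event["id"] in processed_ids:
        return
    processed_ids.add(event["id"])
    print(f"applying {event['type']} exactly once")

handle({"id": "a1", "type": "order.placed"})
handle({"id": "a1", "type": "order.placed"})  # duplicate delivery, ignored
```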

In the next section, we will discuss what our software choices are for EDA, both on-premises and in the cloud.

Brokers

Open source brokers such as Apache Kafka, Apache Pulsar, and Apache ActiveMQ can be self-hosted to implement the event broker. Since we are mostly talking in the context of the cloud in this book, the following are the most common cloud brokers:

  • Amazon Simple Queue Service (SQS)
  • Amazon Simple Notification Service (SNS)
  • Amazon EventBridge
  • Azure Service Bus queues
  • Azure Service Bus topics
  • Google Cloud Pub/Sub
  • Google Cloud Pub/Sub Lite

EDA, as we've discovered, is fundamental to a lot of modern applications' architectures. Now, let's look at FaaS platforms in detail.

FaaS in detail

We briefly discussed FaaS earlier. As the serverless computing service, it is the foundational service of any serverless stack. So, what exactly defines FaaS and its functionality?

As in the general definition of a function, a FaaS function is a discrete piece of code that executes one task. In the context of a larger web application microservice, a function would ideally serve a single URL endpoint for a specific HTTP method – say, GET, POST, PUT, or DELETE. In the context of EDA, a FaaS function would consume one type of event, or transform and fan out an event to multiple other functions. In scheduled execution mode, a function might clean up logs or change configuration. Irrespective of where it is used, FaaS has a simple objective – to run a function within a set of resource constraints and a time limit. The function can be triggered by an event or a schedule, or even launched manually.

Just as when writing functions in any language, you can write multiple functions and libraries that are invoked from the primary function code. As long as you provide one function to FaaS, it doesn't care what other functions you have defined or what libraries you have included within the code. FaaS treats this function as the handler function – the name differs between platforms, but essentially this function is the entry point to your code, and it can take arguments passed in by the platform, such as the event in an event-driven model.
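Here is a minimal sketch of such a handler, written in the AWS Lambda style; the event shape and the helper function are illustrative assumptions, and other platforms use different signatures for the same idea:

```python
import json

def _lookup_greeting(name: str) -> str:
    # Helper functions like this are invisible to the platform;
    # only the handler below is registered as the entry point.
    return f"Hello, {name}!"

def handler(event, context):
    """Entry point invoked by the FaaS platform with the triggering event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": _lookup_greeting(name)}),
    }

# Local test with a dummy event, as cloud consoles typically allow:
print(handler({"name": "serverless"}, None))
```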

FaaS runtimes are determined and locked down by the vendor, which decides whether a language is supported and which versions of its runtime are available. The list is usually limited, though platforms keep adding support for more languages. Almost all platforms support at least Java, JavaScript, and Python.

The process to create and maintain these functions is similar across platforms:

  • The customer creates a function, names it, and decides on the language runtime to use.
  • The customer sets the limits for the supported resource constraints, including the upper limits on the RAM and running time the function can use.
  • While different platforms provide different configuration features, most provide a host of configurations, including logging, security, and, most importantly, the mechanism to trigger the function.
  • All FaaS platforms support event, cron, and manual triggers.
  • The platform also provides options to upload and maintain the code and its associated dependencies. Most keep multiple versions of the function for rolling back or forward, and on most cloud platforms, functions can be tested with dummy inputs provided by the customer (a generic configuration sketch follows this list).
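Pulled together, a function definition on most platforms boils down to a handful of settings like the following; these field names are generic assumptions rather than any one vendor's schema:

```python
# Generic shape of a FaaS function definition (vendor schemas differ).
function_config = {
    "name": "image-resizer",
    "runtime": "python3.12",           # from the vendor's approved runtime list
    "handler": "app.handler",          # module.function entry point
    "memory_mb": 256,                  # resource constraint
    "timeout_seconds": 30,             # maximum running time
    "trigger": {                       # event, schedule, or manual
        "type": "event",
        "source": "object-store:uploads",
    },
    "environment": {"LOG_LEVEL": "INFO"},
}
```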

The implementation details differ across platforms but behind the scenes, how FaaS infrastructure logically works is roughly the same everywhere. When a function is triggered, the following happens:

  • Depending on the language runtime configured for the function, a container baked with that runtime is spun up in the cloud provider's infrastructure.
  • The code artifact – the function code and dependencies, packed together as an archive or a file – is downloaded from the artifact store and dropped into the container.
  • The command running inside the container varies by language, but it is ultimately the runtime invoking the entry point function from the artifact.
  • Depending on the platform and how the function is invoked, the application running in the container receives an event or custom environment variables, which are passed into the entry point function as arguments.
  • The container and the server have network and access restrictions based on the security policy configured for the function:
Figure 1.6 – FaaS infrastructure

One thing that characterizes FaaS is its stateless nature. Each invocation of the same function is an independent execution; no context or global variables can be passed between invocations. The FaaS platform has no visibility into the business logic the code executes or the data being processed. While this may look like a limitation, it's quite the opposite: it enables FaaS to independently scale multiple instances of the same function without worrying about communication between them, which makes it a very scalable platform. Any data that must persist for the business logic to work should be saved to an external data service, such as a queue or a database.
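The following sketch contrasts the two approaches: a module-level counter that is lost across cold starts and never shared between instances, versus state kept in an external store (stubbed here as a dict standing in for a hypothetical database or cache):

```python
# ANTI-PATTERN: module-level state. Each cold start resets this,
# and parallel instances never see each other's value.
local_counter = 0

# Stand-in for an external store (in reality: a database, queue, or cache).
external_store = {"page_views": 0}

def handler(event, context):
    global local_counter
    local_counter += 1                   # unreliable across invocations

    external_store["page_views"] += 1    # reliable: persisted outside FaaS
    return {"local": local_counter, "shared": external_store["page_views"]}
```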

Cloud FaaS versus self-hosted FaaS

FaaS started with the hosted model, but its stateless and lightweight nature was very appealing. As happens with most services like this, the open source community and various vendors created open source FaaS platforms that can run on any platform offering virtual machines or bare-metal computing. These are known as self-hosted FaaS platforms. With self-hosted FaaS, the infrastructure is no longer abstracted away – somebody in the organization ends up maintaining it. The advantage is that developers have more control over the infrastructure and can make it more secure and customizable.

The following is a list of FaaS offerings from the top cloud providers:

  • AWS Lambda
  • Google Cloud Functions
  • Azure Functions
  • IBM Cloud Functions

Other providers specialize in certain use cases. Cloudflare Workers, for example, is the FaaS offering of the edge and network service provider Cloudflare, and it mostly caters to edge computing use cases within serverless. The following is a list of self-hosted and open source FaaS offerings:

  • Apache OpenWhisk – also powers IBM Cloud Functions
  • Kubeless
  • Knative – powers Google's Cloud Functions and Cloud Run
  • OpenFaaS

All FaaS offerings share common basic features, such as the ability to run functions in response to events or on a schedule. But a lot of other features vary between platforms. In the next section, we will look at a very common serverless pattern that makes use of FaaS.

API gateways and the rise of serverless API services

The API gateway is an architectural pattern and often part of an API management platform. API life cycle management involves designing and publishing APIs, along with tools to document and analyze them. API management enables enterprises to manage their API usage, respond to market changes quickly, use external APIs effectively, and even monetize their APIs. While a detailed discussion of API management is outside the scope of this book, one component of the API management ecosystem is of particular interest to us: the API gateway.

An API gateway can be considered the gatekeeper for all the API endpoints of an enterprise. A bare-bones API gateway supports, as a minimum feature set, defining APIs, routing them to the correct backend destination, and enforcing authentication and authorization. Collecting metrics at the API endpoints is also commonly supported and helps in understanding the telemetry of each API. Cloud API gateways provide this as part of their cloud monitoring solutions, while self-hosted API gateways usually have plugins to export metrics to standard metric collection systems, or metric endpoints that external tools can scrape. API gateways either host the APIs themselves or send the traffic to internal microservices, thus acting as API proxies. The clients of API gateways can be mobile and web applications, third-party services, and partner services. Some of the most common features of API gateways are as follows:

  • Authentication and authorization: Most cloud-native API gateways support their own Identity and Access Management (IAM) systems as one of their leading authentication and authorization solutions. But as APIs, they also need to support common access methods using API keys, JWTs, mutual TLS, and so on.
  • Rate limiting, quotas, and security: Controlling the number of requests and preventing abuse is a common requirement. Cloud API gateways often achieve this by integrating with their CDN/global load balancers and DDoS protection systems.
  • Protocol translation: Converting requests and responses between various API protocols, such as REST, WebSocket, GraphQL, and gRPC.
  • Load balancing: With the cloud, this is a given as API Gateway is a managed service. For self-hosted or open source gateways, load balancing may need additional services or configuration.
  • Custom code execution: This enables developers to modify requests or responses before they are passed down to downstream APIs or upstream customers.

Since API gateways act as the single entry point for all the APIs in an enterprise, they support various endpoint types. While the most common APIs are written as REST services over HTTP, there are also WebSocket, gRPC, and GraphQL-based APIs. Not all platforms support all of these protocols and endpoint types.

While API gateways existed independently of the cloud and serverless, they got more traction once cloud providers started integrating their serverless platforms with API gateways. As with most cloud service releases, AWS was the first to do this. Lambda was initially released as a private preview in 2014. In June 2015, three months after Lambda became generally available, AWS released API Gateway and started supporting integration with Lambda. Other vendors soon followed suit, and serverless APIs became mainstream.

The idea of a serverless API is very simple. First, you define an API endpoint in a supported endpoint protocol – REST, gRPC, WebSocket, or GraphQL. For an HTTP-based REST API, this definition would include a URL path and an associated HTTP method, such as GET, POST, PUT, or DELETE. Once the endpoint has been defined, you associate a FaaS function with it. When a client request hits the endpoint, the request and its execution context are passed to the function, which processes the request and returns a response. The gateway passes the response back in the appropriate protocol:

Figure 1.7 – API Gateway with FaaS
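A gateway's core job can be sketched as a routing table that maps (method, path) pairs to handler functions; this toy dispatcher and its routes are illustrative assumptions, not any specific vendor's API:

```python
from typing import Callable

# Handlers follow the same signature as the FaaS entry points shown earlier.
def get_user(event, context):
    return {"statusCode": 200, "body": f"user {event['path'].split('/')[-1]}"}

def create_order(event, context):
    return {"statusCode": 201, "body": "order created"}

# The gateway's routing table: (HTTP method, path prefix) -> function.
ROUTES: dict[tuple[str, str], Callable] = {
    ("GET", "/users"): get_user,
    ("POST", "/orders"): create_order,
}

def gateway(method: str, path: str):
    """Dispatch a request to its FaaS function; 404 if no route matches."""
    for (m, prefix), func in ROUTES.items():
        if m == method and path.startswith(prefix):
            return func({"path": path, "httpMethod": method}, None)
    return {"statusCode": 404, "body": "no route"}

print(gateway("GET", "/users/42"))
print(gateway("POST", "/orders"))
```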

The advantage of serverless APIs is that they create on-demand APIs that scale very fast and without any practical limits. Cloud providers impose certain limits to avoid abuse and to plan for better scalability and resource utilization of their infrastructure, but in most cases, you can raise these limits or lift them altogether by contacting your cloud vendor. In Part 2 of this book, we will explore these vendor-specific gateways in detail.

The case for serverless

Serverless brought a paradigm shift in how infrastructure management can be simplified or reduced to near zero. This doesn't mean that there are no servers; rather, all management responsibility is abstracted away from the customer. When you're delivering software, infrastructure management and maintenance is an ongoing engineering and operational cost – not to mention the cost of having people manage that infrastructure. The ability to build lightweight microservices, on-demand APIs, and serverless event processing pipelines has a huge impact on the overall engineering cost and on feature rollouts.

One thing we haven't talked about much is the cost model of FaaS. While it is only part of the serverless landscape, its billing model is a testament to the true nature of serverless. All cloud vendors charge for FaaS based on the memory and execution time a function takes for a single run. When used with precision, this cost model can shave a lot of money off your cloud budget. Right-sizing functions and optimizing their code become necessary skills for developers and lead to a design-for-performance-first mindset.
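As a back-of-the-envelope sketch, the memory-times-duration billing model can be computed as follows; the unit prices are illustrative assumptions, not any vendor's actual rates:

```python
# Hypothetical FaaS pricing: pay per GB-second of memory-time plus per request.
PRICE_PER_GB_SECOND = 0.0000166667   # illustrative rate
PRICE_PER_REQUEST = 0.0000002        # illustrative rate

def monthly_cost(memory_mb: int, duration_ms: int, invocations: int) -> float:
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000) * invocations
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# A 128 MB function running for 200 ms, a million times a month:
# 0.125 GB * 0.2 s * 1e6 = 25,000 GB-seconds.
print(f"${monthly_cost(128, 200, 1_000_000):.2f}")   # ~ $0.62
```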

As we will see in Part 2 of this book, cloud vendors are heavily investing in building and providing serverless services as demand grows. The wide array of BaaS category services that are available to us is astounding and opens up a lot of possibilities. While not all business use cases can be converted to serverless, a large chunk of business use cases will find a perfect match in serverless.

Summary

In this chapter, we covered the foundations of serverless in general and FaaS in particular. A key lesson was how the microservice architecture has modernized software architecture and how the latest container and container orchestration technologies are spearheading microservice adoption. We also covered EDA and the API gateway architecture. These concepts should cement your foundational knowledge of serverless computing and will be useful when we start covering FaaS platforms in Part 2 of this book. Serverless has evolved into a vast technology platform that encompasses many backend and computing services. The features and offerings may vary slightly between platforms and vendors, but the idea has caught on.

In the next chapter, we will look at some of the backend architectural patterns and technologies that will come in handy in serverless architectures.


Key benefits

  • Learn with DIY projects and step-by-step instructions for different serverless technologies and vendors
  • Explore detailed sections on running serverless workloads across Kubernetes and virtual machines
  • Discover Cloudflare Serverless Solutions to modernize your web applications

Description

Serverless computing has emerged as a mainstream paradigm in both cloud and on-premises computing, with AWS Lambda playing a pivotal role in shaping the Function-as-a-Service (FaaS) landscape. However, with the explosion of serverless technologies and vendors, it has become increasingly challenging to comprehend the foundational services and their offerings. Architecting Cloud Native Serverless Solutions lays a strong foundation for understanding the serverless landscape and technologies in a vendor-agnostic manner. You'll learn how to select the appropriate cloud vendors and technologies based on your specific needs. In addition, you'll dive deep into the serverless services across AWS, GCP, Azure, and Cloudflare followed by open source serverless tools such as Knative, OpenFaaS, and OpenWhisk, along with examples. You'll explore serverless solutions on Kubernetes that can be deployed on both cloud-hosted clusters and on-premises environments, with real-world use cases. Furthermore, you'll explore development frameworks, DevOps approaches, best practices, security considerations, and design principles associated with serverless computing. By the end of this serverless book, you'll be well equipped to solve your business problems by using the appropriate serverless vendors and technologies to build efficient and cost-effective serverless systems independently.

Who is this book for?

This book is for DevOps, platform, cloud, and site reliability engineers, as well as application developers looking to build serverless solutions. It's a valuable reference for solution architects trying to modernize a legacy application or working on a greenfield project. It's also helpful for anyone trying to solve business or operational problems with serverless technologies, without managing complicated technology infrastructure. A basic understanding of cloud computing and some familiarity with at least one cloud vendor, the Python programming language, and working with a CLI will be helpful when reading this book.

What you will learn

  • Understand the serverless landscape and its potential
  • Build serverless solutions across AWS, Azure, and GCP
  • Develop and run serverless applications on Kubernetes
  • Implement open source FaaS with Knative, OpenFaaS, and OpenWhisk
  • Modernize web architecture with Cloudflare Serverless
  • Discover popular serverless frameworks and DevOps for serverless
  • Explore software design and serverless architecture patterns
  • Acquire an understanding of serverless development and security best practices

Product Details

Publication date: Jun 23, 2023
Length: 350 pages
Edition: 1st
Language: English
ISBN-13: 9781803235998



Table of Contents

16 Chapters
Part 1 – Serverless Essentials
Chapter 1: Serverless Computing and Function as a Service
Chapter 2: Backend as a Service and Powerful Serverless Platforms
Part 2 – Platforms and Solutions in Action
Chapter 3: Serverless Solutions in AWS
Chapter 4: Serverless Solutions in Azure
Chapter 5: Serverless Solutions in GCP
Chapter 6: Serverless Cloudflare
Chapter 7: Kubernetes, Knative and OpenFaaS
Chapter 8: Self-Hosted FaaS with Apache OpenWhisk
Part 3 – Design, Build, and Operate Serverless
Chapter 9: Implementing DevOps Practices for Serverless
Chapter 10: Serverless Security, Observability, and Best Practices
Chapter 11: Architectural and Design Patterns for Serverless
Index
Other Books You May Enjoy

