
Edge Computing Systems with Kubernetes: A use case guide for building edge systems using K3s, k3OS, and open source cloud native technologies

Chapter 1: Edge Computing with Kubernetes

Edge computing is an emerging paradigm of distributed systems in which the units that compute information are close to the origin of that information. The benefit of this paradigm is that it reduces the impact of network outages and the delays introduced when data has to travel to the cloud for processing, which means a more interactive experience for your machine learning or Internet of Things (IoT) applications. This chapter covers the basics and the importance of edge computing and how Kubernetes can be used for it. It also covers different scenarios and basic architectures using low-power devices, which can use private and public clouds to exchange data.

In this chapter, we’re going to cover the following main topics:

  • Edge data centers using K3s and basic edge computing concepts
  • Basic edge computing architectures with K3s
  • Adapting your software to run at the edge

Technical requirements

In this chapter, we are going to run our edge computing examples on an edge device (such as a Raspberry Pi), so we need to set up a cross-compiling toolchain for Advanced RISC Machines (ARM) processors.

For this, you need one of the following:

  • A Mac with terminal access
  • A PC with Ubuntu installed with terminal access
  • A virtual machine with Ubuntu installed with terminal access

For more detail and code snippets, check out this resource on GitHub: https://github.com/PacktPublishing/Edge-Computing-Systems-with-Kubernetes/tree/main/ch1.

Edge data centers using K3s and basic edge computing concepts

With the evolution of the cloud, companies and organizations are starting to migrate their processing tasks to edge computing devices, with the goal of reducing costs and getting more value from the infrastructure they are paying for. As part of the introductory content of this book, we must learn the basic concepts related to edge computing and understand why we use K3s for it. So, let's get started with the basic concepts.

The edge and edge computing

According to Qualcomm and Cisco, the edge can be defined as “anywhere where data is processed before it crosses the Wide Area Network (WAN)”; that is the edge, but what is edge computing? A post by Eric Hamilton from Cloudwards.net defines edge computing as “the processing and analyzing of data along a network edge, closest to the point of its collection, so that data becomes actionable.” In other words, edge computing refers to processing your data near its source and distributing the computation across different places, using devices that are close to where the data is generated.

To add more context, let’s explore the next figure:

Figure 1.1 – Components of edge layers

This figure shows how the data is processed in different contexts; these contexts are the following:

  • Cloud layer: In this layer, you can find the cloud providers, such as AWS, Azure, GCP, and more.
  • Near edge: In this layer, you can find telecommunications infrastructure and devices, such as 5G networks, radio virtual devices, and similar devices.
  • Far edge: In this layer, you will find edge clusters, such as K3s clusters, or devices that exchange data between the cloud and edge layers. This layer can be further subdivided into the tiny edge layer.
  • Tiny edge: In this layer, you will find sensors and end-user devices that exchange data with a processing device or an edge cluster on the far edge.

Important Note

Remember that edge computing refers to data that is processed on edge devices before the result goes to its destination, which could be on a public or private cloud.

Other important concepts to consider for building edge clusters are the following:

  • Fog computing: An architecture of cloud services that distribute the system across near edge and far edge devices; these devices can be geographically dispersed.
  • Multi-Access Edge Computing (MEC): This distributes the computing at the edge of larger networks, with low latency and high bandwidth, and is the successor of mobile edge computing; in other words, the processing uses telecom networks and mobile devices.
  • Cloudlets: This is a small-scale cloud data center, which could be used for resource-intensive use cases, such as data analytics, Machine Learning (ML) and so on.

Benefits of edge computing

With this short explanation, let’s move on to understand the main benefits of edge computing; some of these include the following:

  • Reducing latency: Edge computing can run heavy compute processes on edge devices, reducing the latency of delivering results to the applications that need them.
  • Reducing bandwidth: Edge computing can reduce the bandwidth used by keeping part of the data on the edge devices, reducing the traffic on the network.
  • Reducing costs: Reducing latency and bandwidth translates to the reduction of operational costs, which is one of the most important benefits of edge computing.
  • Improving security: Edge computing uses data aggregation and data encryption algorithms to improve the security of data access.

Let’s now discuss containers, Docker, and containerd.

Containers, Docker, and containerd for edge computing

In the last few years, container adoption has increased thanks to the success of Docker, which has been the most popular container engine during that time. Container technology gives businesses a way to design applications using a microservices architecture, helping companies speed up their development and their strategies for scaling applications. So, to begin with a basic concept: a container is a small runtime environment that packages your application with all the dependencies it needs to run. The concept is not new, but Docker, a container engine, popularized it. In simple words, Docker uses small operating system images with the necessary dependencies to run your software; this can be called operating system virtualization. Under the hood, it uses the cgroups kernel feature of Linux to limit CPU, memory, network, I/O, and so on for your processes. Other operating systems, such as Windows or FreeBSD, use similar features to isolate processes and provide this type of virtualization. The next figure represents these concepts:

Figure 1.2 – Containerized applications inside the OS

This figure shows that a container doesn't depend on special features such as a hypervisor, which is commonly seen in the hardware virtualization used by VMware, Hyper-V, and Xen; instead, the application runs as a binary inside the container and reuses the host kernel. Let's say that running a container is almost like running a binary program inside a directory, but with some resource limits added, using cgroups in the case of Linux containers.
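To make this concrete, here is a minimal sketch (assuming Docker is installed and the alpine image can be pulled from Docker Hub) of how a container is essentially a process started with cgroup resource limits:

$ # Run a throwaway Alpine container limited to 64 MB of RAM and half a CPU core
$ docker run --rm --memory=64m --cpus=0.5 alpine sh -c 'echo "hello from a constrained container"'

From the host's point of view, the echo command is an ordinary process; the limits come from the cgroups that Docker configures for it.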

Docker implements all these abstractions. It is a popular container toolchain that adds Git-like versioning features for images, which is the main reason it became so popular: it provides easy portability and versioning at the operating system level. Today, containerd is the container runtime used by both Docker and Kubernetes to create containers. containerd is a minimal, optimized runtime without Docker's extra tooling, and with the explosion of edge computing it has become an important piece of software for running containers in low-resource environments.
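As an illustrative sketch (assuming containerd is installed and its bundled ctr client is on your PATH), you can pull and run an image directly with containerd, without Docker:

$ # Pull a small image using containerd's CLI
$ sudo ctr image pull docker.io/library/alpine:latest
$ # Run it as a short-lived container named demo
$ sudo ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"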

In general, with all these technologies you can do the following:

  • Standardize how to package your software.
  • Bring portability to your software.
  • Maintain your software in an easier way.
  • Run applications in low-resource environments.

So, Docker and containerd must be taken into consideration as important pieces of software for building edge computing systems and low-resource environments.

Distributed systems, edge computing, and Kubernetes

In the last decade, distributed systems evolved from multi-node clusters with applications using monolithic architectures to multi-node clusters with microservices architectures. One of the first options to start building microservices is to use containers, but once the system needs to scale, it is necessary to use an orchestrator. This is where Kubernetes comes into the game.

As an example, let’s imagine an orchestra with lots of musicians. You can find musicians playing the piano, trumpets, and so on. But if the orchestra was disorganized, what would you need to have to organize all the musicians? The answer is an orchestra director or an orchestrator. Here is when Kubernetes appears; each musician is a container that needs to communicate or listen to other musicians and, of course, follow the instructions of the orchestra director or orchestrator. In this way, all the musicians can play their instruments at the right time and can sound beautiful.

This is what Kubernetes does; it is an orchestrator of containers, but at the same time it is a platform with all the necessary prebuilt pieces to build your own distributed system, ready to scale and designed with best practices that can help you to implement agile development and a DevOps culture. Depending on your use case, sometimes it’s better to use something small such as Docker or containerd, but for complex or demanding scenarios, it’s better to use Kubernetes.
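As a small sketch of what orchestration looks like in practice (assuming you already have a working cluster and a configured kubectl, and using nginx only as a placeholder image), Kubernetes keeps the desired number of container replicas running for you:

$ # Ask the orchestrator to run a containerized web server
$ kubectl create deployment hello --image=nginx
$ # Scale it without touching any container by hand
$ kubectl scale deployment hello --replicas=3
$ kubectl get pods -o wide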

Edge clusters using K3s – a lightweight Kubernetes

Now, the big question is how to start building edge computing systems. Let's get started with K3s. K3s is a Kubernetes-certified distribution created by Rancher Labs. By default, K3s leaves out extra features that are not vital to running Kubernetes, but they can be added later. K3s uses containerd as its container engine, which gives it the ability to run in low-resource environments on ARM devices. You can also run K3s on x86_64 devices in production environments. However, for the purpose of this book, we will use K3s as our main piece of software to build edge computing systems using ARM devices.

Talking about clusters at the edge, K3s offers the same power as Kubernetes but in a small package and in an optimized way, plus some features designed especially for edge computing systems. K3s is very easy to use, compared with other Kubernetes distributions. It’s a lightweight Kubernetes that can be used for edge computing, sandbox environments, or whatever you want, depending on the use case.
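For reference, the K3s quick-start flow from the official documentation looks like the following sketch; the server IP and token are placeholders you must replace with your own values, and installation is covered in depth in Chapter 2:

$ # On the server node: install K3s and check that the node is ready
$ curl -sfL https://get.k3s.io | sh -
$ sudo k3s kubectl get nodes
$ # The join token for agents is stored on the server at this path
$ sudo cat /var/lib/rancher/k3s/server/node-token
$ # On each agent node: join the cluster (replace <SERVER_IP> and <TOKEN>)
$ curl -sfL https://get.k3s.io | K3S_URL=https://<SERVER_IP>:6443 K3S_TOKEN=<TOKEN> sh -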

Edge devices using ARM processors and micro data centers

Now, it's time to talk about edge devices and ARM processors, so let's begin with edge devices. Edge devices are designed to process and analyze information near the data source location; this is where the edge computing mindset comes from. When it comes to low-energy consumption, x86_64 (Intel) processors consume more energy and run hotter than ARM processors, which means more power and more cooling; in other words, you will pay more money to run x86_64 processors. On the other hand, ARM processors have less computational power but consume less energy. That's the reason for the success of ARM processors in smartphones: they offer a better trade-off between processing power and energy consumption than Intel processors.

Because of that, companies are interested in designing micro data centers that use ARM processors in their servers, and they are starting to migrate their workloads to devices with ARM processors. One example is AWS Graviton2, an ARM-based processor that powers several types of AWS cloud instances.

Edge computing diagrams to build your system

Right now, with all the basic concepts of containers, orchestrators, and edge computing and its layers covered, we can focus on some basic diagrams of edge computing configurations that you can use to design this kind of system. So, let's use K3s as our main platform for edge computing in the following diagrams.

Edge cluster and public cloud

This configuration shares and processes data between the public or private cloud and the edge layers. Let's explain its different layers:

  • Cloud layer: This layer lives in a public cloud provider, such as AWS, Azure, or GCP. The provider can offer instances using Intel or ARM processors; for example, AWS offers Graviton2-based instances if you need ARM processors. As a complement, the public cloud can offer managed services to store data, such as databases, storage, and so on. The private cloud could be in this layer too; you can find software such as VMware ESXi or OpenStack to provide this kind of service or instance locally. You can even choose a hybrid approach using the public and the private cloud. In general, this layer supports your far and tiny edge layers for storage or data processing.
  • Near edge: In this layer, you can find network devices to move all the data between the cloud layer and the far layer. Typically, these include telco devices, 5G networks, and so on.
  • Far edge: In this layer, you can find K3s clusters, similar lightweight clusters such as KubeEdge, and software such as Docker or containerd. In general, this is your local processing layer.
  • Tiny edge: This is a layer inside the far edge, where you can find edge devices such as smartwatches, IoT devices, and so on, which send data to the far edge.
Figure 1.3 – Edge cluster and public cloud

Use cases include the following:

  • Scenarios where you must share data between different systems across the internet or a private cloud
  • Scenarios where you distribute data processing between your cloud and the edge, such as machine learning model generation or predictions
  • Scenarios where you must scale IoT applications, and the response time of the application is critical
  • Scenarios where you want to secure your data using the aggregation strategy of distributing data and encryption across the system

Regional edge clusters and public cloud

This configuration is focused on distributing the processing strategy across different regions and sharing data across a public cloud. Let’s explain the different layers:

  • Cloud layer: This layer contains managed services such as databases to distribute the data across different regions.
  • Near edge: In this layer, you can find network devices to move all the data between the cloud layer and the far layer. Typically, this includes telco devices, 5G networks, and so on.
  • Far edge: In this layer, you can find K3s clusters across different regions. These clusters or nodes can share or update the data stored in a public cloud.
  • Tiny edge: Here, you can find edge devices close to each region, so the far edge clusters in that region process their information, thanks to this distributed configuration.
Figure 1.4 – Regional edge cluster and public cloud

Use cases include the following:

  • Different cluster configurations across different regions
  • Reducing application response time by choosing the closest data or processing node location, which is critical in IoT applications
  • Sharing data across different regions
  • Distributing processing across different regions

Single node cluster and public/private cloud

This is a basic configuration where a single computer processes all the information captured on tiny edge devices. Let’s explain the different layers:

  • Cloud layer: In this layer, you can find the data storage for the system. It could be placed on the public or private cloud.
  • Near edge: In this layer, you can find network devices to move all the data between the cloud layer and the far layer. Typically, this includes telco devices, 5G networks, and so on.
  • Far edge: In this layer, you can find a single node K3s cluster that collects data from tiny edge devices.
  • Tiny edge: Devices that capture data, such as smartwatches, tablets, cameras, sensors, and so on. This kind of configuration is intended for local or small-scale processing.
Figure 1.5 – Single node cluster and public/private cloud

Use cases include the following:

  • Low-cost and low-energy consumption environments
  • Green edge applications that can be powered by solar panels or wind turbines
  • Small use cases, such as analyzing health records or autonomous house systems, that need something local and not too complicated (see the install sketch after this list)
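As a rough sketch for this single node scenario (the flags come from the K3s documentation; adjust them to your own needs), you can trim optional components at install time to save resources on a small device:

$ # Install a single node K3s server without the bundled Traefik ingress and service load balancer
$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable servicelb" sh -
$ sudo k3s kubectl get nodes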

Let’s now adapt the software to run at the edge.

Adapting your software to run at the edge

Something important when designing an edge computing system is choosing the processor architecture you will build your software for. ARM is a popular architecture because of its lower power consumption, but if ARM is the selected architecture, in most cases you need to rebuild your current code from x86_64 (Intel) for ARM (for example, ARMv7 on a Raspberry Pi, or 64-bit ARM on AWS Graviton2 instances). The following subsections include short guides on converting from one platform to another; this process is called cross-compiling. With this, you will be able to run your software on ARM devices using Go, Python, Rust, and Java. So, let's get started.
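Before choosing a cross-compilation target, it helps to check which ARM variant your device actually runs; a quick way to do this, on the device itself, is the following:

$ # armv7l indicates 32-bit ARMv7, aarch64 indicates 64-bit ARMv8
$ uname -m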

Adapting Go to run on ARM

First, it’s necessary to install Go on your system. Here are a couple of ways to install Go.

Installing Go on Linux

To install Go on Linux, execute the following steps:

  1. Download and untar the Go official binaries:
    $ wget https://golang.org/dl/go1.15.linux-amd64.tar.gz
    $ tar -C /usr/local -xzf go1.15.linux-amd64.tar.gz
  2. Create your Go workspace directory:
    $ mkdir $HOME/go
  3. Set your PATH and GOPATH in your shell configuration file by adding the following lines. ~/.profile is a common file for these environment variables, so let's modify the .profile file:
    $ export PATH=$PATH:/usr/local/go/bin
    $ export GOPATH=$HOME/go
  4. Load the new configuration and create the src directory for your code using the following commands:
    $ . ~/.profile
    $ mkdir $GOPATH/src
  5. (Optional) If you prefer, you can set these environment variables temporarily in your current terminal session using the following commands:
    $ export PATH=$PATH:/usr/local/go/bin
    $ export GOPATH=$HOME/go
  6. To check whether GOPATH is configured, run the following command:
    $ go env GOPATH
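As a quick, optional sanity check that the toolchain itself is on your PATH, you can also run the following:

$ go version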

Now, you are ready to use Go on Linux. Let’s move to this installation using a Mac.

Installing Go on a Mac

To install Go on a Mac, execute the following steps:

  1. Install Homebrew (called brew) with the following command:
    $ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  2. Once it is installed, install Go with brew:
    $ brew install go

Important Note

To find out how to install brew, you can check the official page at https://brew.sh.

Cross-compiling from x86_64 to ARM with Go

To cross-compile from x86_64 to ARM, execute the following steps:

  1. Create a folder to store your code:
    $ cd ~/
    $ mkdir goproject
    $ cd goproject
  2. Initialize a Go module so that you can manage external Go libraries outside GOPATH; for this, execute the next command:
    $ go mod init main
  3. Create the example.go file with Hello World as its contents:
    $ cat << EOF > example.go
    package main
    import "fmt"
    func main() {
       fmt.Println("Hello World") 
    }
    EOF
  4. Assuming that your environment is under x86_64 and you want to cross-compile for ARMv7 support, execute the following commands:
    $ env GOOS=linux GOARM=7 GOARCH=arm go build example.go

Use the next line for ARMv8 64-bit support:

$ env GOOS=linux GOARCH=arm64 go build example.go

Important Note

If you want to see other options for cross-compiling, see https://github.com/golang/go/wiki/GoArm.

  5. Set the execution permissions for the generated binary:
    $ chmod +x example
  6. Copy the generated binary to your ARM device and run it there to test whether it works:
    $ ./example
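Before copying the binary over, you can optionally confirm that it really targets ARM using the file utility (assuming it is installed on your build machine):

$ # Expect output mentioning ARM (32-bit build) or ARM aarch64 (64-bit build)
$ file example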

In the next section, we will learn how to adapt Rust to run on ARM.

Adapting Rust to run on ARM

First, it’s necessary to install Rust on your system. Here are a couple of ways to install Rust.

Installing Rust on Linux

To install Rust on Linux, execute the following steps:

  1. Install Rust by executing the following command in the terminal:
    $ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh 
  2. Set the path for Rust in the configuration file of your terminal. For example, if you are using Bash, add the following line to your .bashrc:
    $ export PATH=$PATH:$HOME/.cargo/bin
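To verify the installation (optional), open a new terminal or reload your shell configuration and check the toolchain versions:

$ rustc --version
$ cargo --version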

Installing Rust on a Mac

To install Rust on a Mac, execute the following steps:

  1. Install Homebrew with the following command:
    $ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  2. Once it is installed, install rustup with brew:
    $ brew install rustup-init
  3. Run the rustup command to install Rust and all the necessary tools for Rust with the following command:
    $ rustup-init
  4. Set your terminal environment variables by adding the following line to your terminal configuration file:
    $ export PATH=$PATH:$HOME/.cargo/bin

Important Note

Mac users often use the Zsh shell, so they have to use .zshrc. If you are using another shell, look for the proper configuration file or use the generic /etc/profile.

Cross-compiling from x86_64 to ARMv7 with Rust on a Mac

To cross-compile from x86_64 to ARM, execute the following steps:

  1. Add the Homebrew tap that provides macOS cross-compiler toolchains for the ARMv7 architecture; for this, execute the following command:
    $ brew tap messense/macos-cross-toolchains
  2. Install the ARMv7 cross-compiling toolchain by executing the following command:
    $ brew install armv7-unknown-linux-gnueabihf
  3. Now set the environment variables:
    $ export CC_armv7_unknown_linux_gnueabihf=armv7-unknown-linux-gnueabihf-gcc
    $ export CXX_armv7_unknown_linux_gnueabihf=armv7-unknown-linux-gnueabihf-g++
    $ export AR_armv7_unknown_linux_gnueabihf=armv7-unknown-linux-gnueabihf-ar
    $ export CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER=armv7-unknown-linux-gnueabihf-gcc
  4. Create a folder to store your code:
    $ cd ~/
    $ mkdir rustproject
    $ cd rustproject
  5. Create an initial Hello World project with Rust:
    $ cargo new hello-rust
    $ cd hello-rust

The generated Rust code will look like this:

fn main() {
  println!("Hello, world!");
}

The source code will be located at src/main.rs.

  6. Add the Rust target for ARMv7 (the same gnueabihf triple that the toolchain and environment variables above refer to):
    $ rustup target add armv7-unknown-linux-gnueabihf
  7. Build your software:
    $ cargo build --target=armv7-unknown-linux-gnueabihf
  8. The generated binary will be inside the target/armv7-unknown-linux-gnueabihf/debug folder.
  9. Now copy the hello-rust binary to your device and test whether it works.
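One simple way to copy and test the binary (assuming SSH access to the device; the user and IP below are placeholders) is scp:

$ # Copy the cross-compiled binary to the device and run it there
$ scp target/armv7-unknown-linux-gnueabihf/debug/hello-rust pi@<DEVICE_IP>:/home/pi/
$ ssh pi@<DEVICE_IP> /home/pi/hello-rust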

Important Note

For more options for cross-compiling with Rust, check out https://doc.rust-lang.org/nightly/rustc/platform-support.html and https://rust-lang.github.io/rustup/cross-compilation.html. For the toolchain for Mac and AArch64 (64-bit ARMv8), check out aarch64-unknown-linux-gnu inside the repository at https://github.com/messense/homebrew-macos-cross-toolchains.

Adapting Python to run on ARM

First, it is necessary to install Python on your system. There are a couple of ways of doing this.

Installing Python on Linux

To install Python, execute the following steps:

  1. Update your repositories:
    $ sudo apt-get update
  2. Install Python 3:
    $ sudo apt-get install -y python3

Installing Python on a Mac

To install Python on a Mac using Homebrew, execute the following steps:

  1. Check for your desired Python version on brew’s available version list:
    $ brew search python
  2. Let’s say that you choose Python 3.8; you have to install it by executing the following command:
    $ brew install python@3.8
  3. Test your installation:
    $ python3 --version

Cross-compiling from x86_64 to ARM with Python

Python is one of the most popular languages today, and it is commonly used for AI and ML applications. Python is an interpreted language; like Java, it needs a runtime environment to run your code, so you must install the Python interpreter on the target device. This brings challenges similar to Java's, plus some of its own: sometimes you need to compile third-party libraries from scratch for your platform. The standard Python library currently supports ARM; the issues appear when you need something outside the standard library.

As a basic example, let’s run Python code across different platforms by executing the following steps:

  1. Create a basic file called example.py:
    def main():
       print("hello world")
    if __name__ == "__main__":
       main()
  2. Copy example.py to your ARM device.
  3. Install Python 3 on your ARM device by running the following command:
    $ sudo apt-get install -y python3
  4. Run your code:
    $ python3 example.py
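If you want to confirm which architecture the interpreter on the device reports (useful when deciding whether a third-party package ships prebuilt ARM wheels), a quick check is the following:

$ # Prints armv7l on 32-bit Raspberry Pi OS or aarch64 on 64-bit systems
$ python3 -c "import platform; print(platform.machine())"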

Adapting Java to run on ARM

When it comes to running Java on ARM devices, things are a little different. Java uses a hybrid, two-phase compiler: it generates intermediate code called bytecode, which is interpreted by a Java Virtual Machine (JVM). Bytecode is cross-platform and, following the Java philosophy of write once, run anywhere, you can compile on whatever platform you want and the result will run on any other platform without modifications. So, let's see how to build a basic Java program that can run on an ARMv7 or an ARMv8 64-bit device.

Installing Java JDK on Linux

To install Java on Linux, execute the following commands:

  1. Update the current repositories of Ubuntu:
    $ sudo apt-get update
  2. Install the official OpenJDK 8 development kit (the JDK includes the javac compiler):
    $ sudo apt-get install openjdk-8-jdk
  3. Test whether javac runs:
    $ javac

Installing Java JDK on a Mac

If you don’t have Java installed on your Mac, follow the next steps:

  1. Download the Java JDK from the following link, choosing the package for your operating system (Linux, macOS, or Windows): https://www.oracle.com/java/technologies/javase-downloads.html.
  2. Download and run the installer.
  3. To test whether Java exists or whether it was installed correctly, run the following command:
    $ java -version
  4. Test whether the compiler is installed by executing the following command:
    $ javac -version

Cross-compiling from x86_64 to ARM with Java

Java is a language that generates an intermediate code called bytecode, which runs on the JVM. Let’s say that you have a basic code in a file called Example.java:

class Example {
   public static void main(String[] args) {
      System.out.println("Hello world!");
   }
}

To execute your code, follow these steps:

  1. To compile it, use the following command:
    $ javac Example.java

This will generate the intermediate code in a file called Example.class, which can be executed by the JVM. Let’s do this in the next step.

  2. To run the bytecode, execute the following command:
    $ java Example
  3. Now, copy Example.class to another device and run it with the proper JVM using the java command.
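Optionally, you can package the bytecode into a runnable JAR so that only one file needs to be copied to the device (Example is used here as the entry point class from the snippet above):

$ # Create Example.jar with Example as its main class
$ jar cfe Example.jar Example Example.class
$ # On the ARM device, run it with the local JVM
$ java -jar Example.jar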

Summary

This chapter explained all the basic concepts about edge computing and how it relates to other concepts, such as fog computing, MEC, and cloudlets. It also explained how containers and orchestrators such as Docker, containerd, and Kubernetes can help you to build your own edge computing system, using different configurations, depending on your own use case. At the end of the chapter, we covered how you can run and compile your software on edge devices using ARM processors, using the cross-compiling technique with Go, Python, Rust, and Java languages.

Questions

Here are a few questions to test your new knowledge:

  1. What is the difference between the edge and edge computing?
  2. What infrastructure configurations can you use to build an edge computing system?
  3. How can containers and orchestrators help you to build edge computing systems?
  4. What is cross-compiling and how can you use it to run your software on ARM devices?

Further reading

Here are some additional resources that you can check out to learn more about edge computing:


Key benefits

  • A guide to implementing an edge computing environment
  • Reduce latency and costs for real-time applications running at the edge
  • Find stable and relevant cloud native open source software to complement your edge environments

Description

Edge computing is a way of processing information near the source of the data instead of processing it in cloud data centers. In this way, edge computing can reduce latency when data is processed, improving the user experience for real-time data visualization in your applications. Using K3s, a lightweight Kubernetes, and k3OS, a K3s-based Linux distribution, along with other open source cloud native technologies, you can build reliable edge computing systems without spending a lot of money. In this book, you will learn how to design edge computing systems with containers and edge devices using sensors, GPS modules, WiFi, LoRa communication, and so on. You will also get to grips with the use cases and examples covered in this book, learning how to solve common edge computing problems such as updating your applications using GitOps, and reading data from sensors and storing it in SQL and NoSQL databases. Later chapters show you how to connect hardware to your edge clusters, predict using machine learning, and analyze images with computer vision. All the examples and use cases in this book are designed to run on devices using 64-bit ARM processors, with Raspberry Pi devices as the example hardware. By the end of this book, you will be able to use the content of these chapters as building blocks to create your own edge computing system.

Who is this book for?

This book is for engineers (developers and/or operators) seeking to bring the cloud native benefits of GitOps and Kubernetes to the edge. Anyone with basic knowledge of Linux and containers looking to learn Kubernetes using examples applied to edge computing and hardware systems will benefit from this book.

What you will learn

  • Configure k3OS and K3s for development and production scenarios
  • Package applications into K3s for shipped-node scenarios
  • Deploy in occasionally connected scenarios, from one node to one million nodes
  • Manage GitOps for applications across different locations
  • Use open source cloud native software to complement your edge computing systems
  • Implement observability for event-driven and serverless edge applications
  • Collect and process data from sensors at the edge and visualize it in the cloud
Product Details

Publication date: Oct 14, 2022
Length: 458 pages
Edition: 1st
Language: English
ISBN-13: 9781800568594



Table of Contents

Part 1: Edge Computing Basics
Chapter 1: Edge Computing with Kubernetes
Chapter 2: K3s Installation and Configuration
Chapter 3: K3s Advanced Configurations and Management
Chapter 4: k3OS Installation and Configurations
Chapter 5: K3s Homelab for Edge Computing Experiments
Part 2: Cloud Native Applications at the Edge
Chapter 6: Exposing Your Applications Using Ingress Controllers and Certificates
Chapter 7: GitOps with Flux for Edge Applications
Chapter 8: Observability and Traffic Splitting Using Linkerd
Chapter 9: Edge Serverless and Event-Driven Architectures with Knative and Cloud Events
Chapter 10: SQL and NoSQL Databases at the Edge
Part 3: Edge Computing Use Cases in Practice
Chapter 11: Monitoring the Edge with Prometheus and Grafana
Chapter 12: Communicating with Edge Devices across Long Distances Using LoRa
Chapter 13: Geolocalization Applications Using GPS, NoSQL, and K3s Clusters
Chapter 14: Computer Vision with Python and K3s Clusters
Chapter 15: Designing Your Own Edge Computing System
Index
Other Books You May Enjoy

Customer reviews

Rating: 3.7 out of 5 (7 ratings)
5 star: 57.1% | 4 star: 0% | 3 star: 14.3% | 2 star: 14.3% | 1 star: 14.3%
E Altinel, Jan 13, 2023 (5 stars, Amazon verified review)
I have mostly enjoyed the book through the K3s chapters (k3OS seemed a bit hard for me, but it was a good intro) and the chapters on practical cloud native applications at the edge on Kubernetes (Flux, ingress controllers, load balancers, service mesh, observability, and various stateful things like storage and databases). I recommend the book for the above cases; the chapters around LoRa devices and geolocalization apps (via GPS) intrigued me the most, as I have heard the terms a lot but never seen the practice or had a practical job about this. It is a great book about edge technologies and how they relate to bigger scales with the cloud. I didn't love the Knative part so much, but it is good as a reference point; the book is great, but Knative is just a super complicated beast, so I don't recommend buying the book just for the sake of Knative.
Ramiro, Jan 19, 2023 (5 stars, Amazon verified review)
A great read for anyone looking to learn more about edge computing. I've been using Kubernetes for years and still learned a great deal, enough that now I'm even running a small datacenter using some of my spare Raspberry Pis. When Rancher released K3s four years ago, it took the computing world by storm. Kubernetes was only meant for big data centers; K3s now allowed us to run Kubernetes everywhere. But back then that was challenging; the learning curve needed to be lowered. Not anymore! "Edge Computing Systems with Kubernetes" is one of those books you wish existed years ago. Whether you are getting started with edge computing and Kubernetes, or you are a seasoned Kubernetes professional, there is something in this book for you. The book is clear and entertaining, and strikes a great balance between theory and practice.
Carlos A., Aug 16, 2024 (5 stars, Amazon verified review)
The book is very comprehensive: it covers the full set of Kubernetes topics while touching on use cases at the edge.
Hung Tran, Oct 17, 2022 (5 stars, Amazon verified review)
I am a graduate student in cloud computing, and I am building a project that needs Docker and Kubernetes. This book has saved me a ton of time watching tutorials because it provides so many coding examples that I can learn from, and when I forget how to use something, I can just come back and read about it. The book also divides edge computing usage into several topics, so depending on what you do in your project, you can look up the topics easily. A really good and comprehensive guide.
Anthony J. Barbaro, Oct 22, 2024 (3 stars, Amazon verified review)
There are numerous errors and issues with the text due to updates in the Kubernetes API. This improves at chapter 5. I am able to work around the issues with web searches and by using Kindle annotations to make the solutions permanent in my copy.