Configuring a Kubernetes cluster on Google Cloud Platform

This section will take you through step-by-step instructions for configuring Kubernetes clusters on GCP. You will learn how to use GKE to run a hosted Kubernetes cluster without needing to provision or manage master and etcd instances.

Getting ready

All the operations mentioned here require a GCP account with billing enabled. If you don't have one already, go to https://console.cloud.google.com and create an account.

On Google Cloud Platform (GCP), you have two main options when it comes to running Kubernetes. You can consider using Google Compute Engine (GCE) if you'd like to manage your deployment completely and have specific powerful instance requirements. Otherwise, it's highly recommended to use the managed Google Kubernetes Engine (GKE).

How to do it…

This section is further divided into the following subsections to make this process easier to follow:

  • Installing the command-line tools to configure GCP services
  • Provisioning a managed Kubernetes cluster on GKE
  • Connecting to GKE clusters

Installing the command-line tools to configure GCP services

In this recipe, we will get the primary CLI for Google Cloud Platform, gcloud, installed so that we can configure GCP services:

  1. Run the following command to download the gcloud CLI:

$ curl https://sdk.cloud.google.com | bash

  2. Initialize the SDK and follow the instructions given:

$ gcloud init

  3. During the initialization, when asked, select either an existing project that you have permissions for or create a new project.
  4. Enable the Compute Engine APIs for the project:

$ gcloud services enable compute.googleapis.com
Operation "operations/acf.07e3e23a-77a0-4fb3-8d30-ef20adb2986a" finished successfully.

  5. Set a default zone:

$ gcloud config set compute/zone us-central1-a

  6. Make sure you can start up a GCE instance from the command line:

$ gcloud compute instances create "devops-cookbook" \
--zone "us-central1-a" --machine-type "f1-micro"

  7. Delete the test VM:

$ gcloud compute instances delete "devops-cookbook"

If all the commands are successful, you can provision your GKE cluster.

Provisioning a managed Kubernetes cluster on GKE

Let's perform the following steps:

  1. Create a cluster:
$ gcloud container clusters create k8s-devops-cookbook-1 \
--cluster-version latest --machine-type n1-standard-2 \
--image-type UBUNTU --disk-type pd-standard --disk-size 100 \
--no-enable-basic-auth --metadata disable-legacy-endpoints=true \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--num-nodes "3" --enable-stackdriver-kubernetes \
--no-enable-ip-alias --enable-autoscaling --min-nodes 1 \
--max-nodes 5 --enable-network-policy \
--addons HorizontalPodAutoscaling,HttpLoadBalancing \
--enable-autoupgrade --enable-autorepair --maintenance-window "10:00"

Cluster creation will take 5 minutes or more to complete.
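
While you wait, you can optionally check the provisioning status from another terminal. The following check is not part of the original recipe; it simply queries the cluster we just asked GKE to create:

# List the cluster and show its current status (PROVISIONING, RUNNING, and so on)
$ gcloud container clusters list --filter="name=k8s-devops-cookbook-1"

# Alternatively, print only the status field of the cluster
$ gcloud container clusters describe k8s-devops-cookbook-1 --format="value(status)"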

Connecting to Google Kubernetes Engine (GKE) clusters

To get access to your GKE cluster, you need to follow these steps:

  1. Configure kubectl to access your k8s-devops-cookbook-1 cluster:
$ gcloud container clusters get-credentials k8s-devops-cookbook-1
  2. Verify your Kubernetes cluster:
$ kubectl get nodes

Now, you have a three-node GKE cluster up and running.
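
As an extra sanity check that is not part of the original steps, you can confirm the node count, node status, and cluster endpoint as follows:

# Show node status, Kubernetes version, internal/external IPs, and OS image
$ kubectl get nodes -o wide

# Display the address of the control plane and core cluster services
$ kubectl cluster-info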

How it works…

This recipe showed you how to quickly provision a GKE cluster using some default parameters.

In Step 1, we created a cluster and set a number of its parameters explicitly. While all of these parameters are important, I want to explain some of them here.

--cluster-version sets the Kubernetes version to use for the master and nodes. Only use it if you want to use a version that's different from the default. To get the available version information, you can use the gcloud container get-server-config command.
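
For example, assuming the default zone we set earlier (us-central1-a), you could list the default and valid versions before picking one:

# Show the default cluster version and the valid master/node versions for the zone
$ gcloud container get-server-config --zone us-central1-a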

We set the instance type by using the --machine-type parameter. If it's not set, the default is n1-standard-1. To get the list of predefined types, you can use the gcloud compute machine-types list command.
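
As a quick illustration, the list can be narrowed down to the zone used in this recipe, and you can inspect the type we chose; the filter value here is only an example:

# List the predefined machine types that are available in us-central1-a
$ gcloud compute machine-types list --filter="zone:us-central1-a"

# Show the vCPU and memory details of the type used in this recipe
$ gcloud compute machine-types describe n1-standard-2 --zone us-central1-a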

The default image type is COS (Container-Optimized OS), but my personal preference is Ubuntu, so I used --image-type UBUNTU to set the OS image. To get the list of available image types, you can use the gcloud container get-server-config command.
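
The same command can be narrowed down to just the image types; the --format expression below is one possible way to do this and assumes the validImageTypes field of the server configuration response:

# Print only the valid node image types (COS, UBUNTU, and so on)
$ gcloud container get-server-config --zone us-central1-a \
--format="value(validImageTypes)"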

GKE offers advanced cluster management features and comes with the automatic scaling of node instances, auto-upgrade, and auto-repair to maintain node availability. --enable-autoupgrade enables the GKE auto-upgrade feature for cluster nodes and --enable-autorepair enables the automatic repair feature, which is started at the time defined by the --maintenance-window parameter. The time that's set here is in the UTC time zone and must be in HH:MM format.
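
If you decide to change the daily maintenance start time later, a sketch like the following should work on the existing cluster; the 06:00 value is only an example:

# Move the daily maintenance window of the cluster to 06:00 UTC
$ gcloud container clusters update k8s-devops-cookbook-1 \
--maintenance-window "06:00"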

There's more…

Besides the recipe described in the previous section, the following alternative methods can also be employed:

  • Using Google Cloud Shell
  • Deploying with a custom network configuration
  • Deleting your cluster
  • Viewing the Workloads dashboard

Using Google Cloud Shell

As an alternative to your Linux workstation, you can get a CLI interface in your browser to manage your cloud instances.

Go to https://cloud.google.com/shell/ to open Google Cloud Shell.
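
Cloud Shell comes with gcloud and kubectl preinstalled and already authenticated against your account, so you can, for example, fetch the credentials for the cluster we created earlier directly from the browser session:

# Run inside Cloud Shell; no local SDK installation is required
$ gcloud config set compute/zone us-central1-a
$ gcloud container clusters get-credentials k8s-devops-cookbook-1
$ kubectl get nodes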

Deploying with a custom network configuration

The following steps demonstrate how to provision your cluster with a custom network configuration:

  1. Create a VPC network:

$ gcloud compute networks create k8s-devops-cookbook \
--subnet-mode custom

  2. Create a subnet in your VPC network. In our example, this is 10.240.0.0/16:

$ gcloud compute networks subnets create kubernetes \
--network k8s-devops-cookbook --range 10.240.0.0/16

  3. Create a firewall rule to allow internal traffic:

$ gcloud compute firewall-rules create k8s-devops-cookbook-allow-int \
--allow tcp,udp,icmp --network k8s-devops-cookbook \
--source-ranges 10.240.0.0/16,10.200.0.0/16

  4. Create a firewall rule to allow external SSH, ICMP, and Kubernetes API (tcp:6443) traffic:

$ gcloud compute firewall-rules create k8s-devops-cookbook-allow-ext \
--allow tcp:22,tcp:6443,icmp --network k8s-devops-cookbook \
--source-ranges 0.0.0.0/0

  5. Verify the rules:

$ gcloud compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
...
k8s-devops-cookbook-allow-ext k8s-devops-cookbook INGRESS 1000 tcp:22,tcp:6443,icmp False
k8s-devops-cookbook-allow-int k8s-devops-cookbook INGRESS 1000 tcp,udp,icmp False

  6. Add the --network k8s-devops-cookbook and --subnetwork kubernetes parameters to your container clusters create command and run it, as in the sketch that follows this list.
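
For example, a complete command might look like the following sketch. It reuses some of the flags from the Provisioning a managed Kubernetes cluster on GKE recipe and only adds the network parameters; the cluster name k8s-devops-cookbook-2 is a hypothetical example and the remaining flags should be adjusted to your needs:

# Create a cluster inside the custom VPC network and subnet created above
$ gcloud container clusters create k8s-devops-cookbook-2 \
--machine-type n1-standard-2 --num-nodes 3 \
--network k8s-devops-cookbook --subnetwork kubernetes \
--enable-autoupgrade --enable-autorepair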

Deleting your cluster

To delete your k8s-devops-cookbook-1 cluster, use the following command:

$ gcloud container clusters delete k8s-devops-cookbook-1

This process may take a few minutes. When it has finished, you will get a confirmation message.
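
If you are scripting the teardown, you can skip the interactive confirmation and return without waiting for the operation to finish; both of the flags below are standard gcloud options:

# Delete the cluster without prompting and without blocking on completion
$ gcloud container clusters delete k8s-devops-cookbook-1 --quiet --async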

Viewing the Workloads dashboard

On GCP, instead of using the Kubernetes Dashboard application, you can use the built-in Workloads dashboard and deploy containerized applications through Google Marketplace. Follow these steps:

  1. To access the Workloads dashboard from your GCP console, choose your GKE cluster and click on Workloads.
  2. Click on Show system workloads to see the existing components and containers that have been deployed in the kube-system namespace.
