Google Cloud Platform Cookbook

Compute

In this chapter, we will cover the following topics:

  • Hosting a Node.js application on Google Compute Engine
  • Hosting the Node.js application on Google App Engine
  • Hosting a Node.js application on Kubernetes Engine
  • Hosting an application on Google Cloud Functions
  • Hosting a highly scalable application on Google Compute Engine

Introduction

Google provides four options for the computing needs of your application. Compute Engine gives us the option to run VMs on Google Cloud Platform's infrastructure. It also provides all the networking and security features needed to run infrastructure as a service (IaaS) workloads. Google App Engine is a platform as a service (PaaS) offering that supports most of the major programming languages. It comes in two flavors, a standard environment based on container instances and a flexible environment based on Compute Engine. Google Kubernetes Engine offers a Kubernetes-powered container platform for all containerized applications. Finally, for all serverless application needs, Google Cloud Functions provides the compute power and integration with other cloud services.

Hosting a Node.js application on Google Compute Engine

We'll implement a KeystoneJS Node.js application (http://keystonejs.com/) on Google Compute Engine (GCE). GCE is Google's offering for all IaaS needs. Our sample application is built on Express and MongoDB: Express is a minimal web application framework for Node.js, and MongoDB is a document-oriented NoSQL database. KeystoneJS also uses a templating engine along with Node.js and MongoDB.

The architecture of our recipe is depicted as follows:

Single-tiered Node.js application on GCE

We will follow a single-tiered approach to host the application and the database on the same VM. Later in this chapter, we'll host the same Node.js application on Google App Engine and Kubernetes Engine.

You'll be using the following services and others for this recipe: 
  • GCE
  • Google Cloud Logging
  • Google Cloud Source Repositories

Getting ready

The following are the initial setup verification steps to be taken before the recipe can be executed:

  1. Create or select a GCP project.
  2. Enable billing and enable the default APIs (some APIs such as BigQuery, storage, monitoring, and a few others are enabled automatically).
  3. Install Google Cloud SDK on your development machine. Please follow the steps from https://cloud.google.com/sdk/docs/.
  4. Install Node.js and MongoDB on your development machine.
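
Before moving on, it helps to confirm that the SDK is authenticated and pointed at the right project. A minimal sketch (the project ID gcp-cookbook is the example used later in this chapter; substitute your own):

$ gcloud auth login
$ gcloud config set project gcp-cookbook
$ gcloud config list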

How to do it...

We'll approach this recipe in two stages. In the first stage, we'll prepare our development machine to run our sample Node.js application. Then we'll push the working application to the Compute Engine.

Running the application on the development machine

Follow these steps to download the source code from GitHub and configure it to work on your development machine:

  1. Clone the repository in your development space:
$ git clone https://github.com/legorie/gcpcookbook.git 
  2. Navigate to the directory where the mysite application is stored:
$ cd gcpcookbook/Chapter01/mysite
  3. With your favorite editor, create a file named .env in the mysite folder:
COOKIE_SECRET=d44d5c45e7f8149aabc068244 
MONGO_URI=mongodb://localhost/mysite 
  4. Install all the packages required for the application to work:
$ npm install 
  5. Start the mongod service on your development machine (a sketch of this command follows this list).
  6. Run the application:
$ node keystone.js 
  7. You'll see the following message logged on the Terminal:
------------------------------------------------
Applying update 0.0.1-admins...

------------------------------------------------
mySite: Successfully applied update 0.0.1-admins.

Successfully created:

* 1 User


------------------------------------------------
Successfully applied 1 update.
------------------------------------------------

------------------------------------------------
KeystoneJS Started:
mySite is ready on port 3000
------------------------------------------------
  8. The application is now available at http://localhost:3000.
  9. You can stop the local server by pressing Ctrl + C.
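
For step 5, the exact command depends on how MongoDB was installed; on a typical Linux development machine using systemd, a sketch would be (adjust to your OS and installation method):

$ sudo systemctl start mongod
$ sudo systemctl status mongod    # verify the service is active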

Deploying the application on GCP

To deploy the application to GCP, we'll first upload the working code from our development machine to Google Source Repositories. Then, instead of setting up the VM manually, we'll modify and use a startup script provided by Google to bootstrap the VM with the necessary packages and a runnable application. Finally, we'll create the VM with the bootstrap script and configure the firewall rules so that the application is accessible from the internet.

Moving the code to Google Source Repositories

Each project on GCP can host Git repositories which can be accessed by the GCE instances. Though we can manually move the code to an instance, moving it to Source Repositories gives the compute instances the ability to pull the code automatically via a startup script:

  1. If you have made any changes to the code, you can commit the code to the local repository:
git commit -am "Ready to be committed to GCP"
  2. Create a new repository under the project:
  3. Follow the steps to upload the code from the local repository to Google Source Repositories (a sketch of these commands is shown after this list). In the following example, the project ID is gcp-cookbook and the repository name is gcpcookbook:
  4. After the git push command is successful, you'll see the repository updated in Source Repositories:
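
The exact commands are shown in the Console once the repository is created; a minimal sketch of them, assuming the repository is created from the command line and the remote is named google, looks like this:

$ gcloud source repos create gcpcookbook
$ git config credential.helper gcloud.sh
$ git remote add google https://source.developers.google.com/p/gcp-cookbook/r/gcpcookbook
$ git push --all google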

Creating the startup script

The startup script is used to initialize the VM during a boot or a restart with the necessary software (MongoDB, Node.js, supervisor, and others) and to load the application from the source code repository. The following script can be found in the /Chapter01/ folder of the Git repository. The startup script performs the following tasks:

  1. Installs the logging agent, which is an application based on Fluentd.
  2. Installs the MongoDB database to be used by the KeystoneJS application.
  3. Installs Node.js, Git, and supervisor. Supervisor is a process control system which is used to run our KeystoneJS application as a process.
  4. Clones the application code from the repository to a local folder. Update the command at #Line 60 to reflect your repository's URL:
git clone https://source.developers.google.com/p/<PROJECT ID>/r/<REPOSITORY NAME> /opt/app #Line 60
  5. Installs the dependencies and creates the .env file to hold the environment variables:
COOKIE_SECRET=<Long Random String>
  6. Configures the application to run under supervisor. The script begins as follows:
#! /bin/bash 
# Source url: https://github.com/GoogleCloudPlatform/nodejs-getting-started/blob/master/7-gce/gce/startup-script.sh 
# The startup-script is modified to suit the Chapter 01-Recipe 01 of our book
# Copyright 2017, Google, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# [START startup]
set -v
  7. Talks to the metadata server to get the project ID:
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
# [START logging]
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &
# [END logging]
  8. Installs MongoDB:
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list
apt-get update
apt-get install -y mongodb-org
cat > /etc/systemd/system/mongodb.service << EOF
[Unit]
Description=High-performance, schema-free document-oriented database
After=network.target
[Service]
User=mongodb
ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf
[Install]
WantedBy=multi-user.target
EOF
systemctl start mongodb
systemctl enable mongodb
  9. Installs dependencies from apt:
apt-get install -yq ca-certificates git nodejs build-essential supervisor
  10. Installs Node.js:
mkdir /opt/nodejs
curl https://nodejs.org/dist/v4.2.2/node-v4.2.2-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm
  11. Gets the application source code from the Google Cloud Source Repositories:
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/<Project ID>/r/gcpcookbook /opt/app
  12. Installs the app dependencies:
cd /opt/app/Chapter01/mysite
npm install
cat >./.env << EOF
COOKIE_SECRET=d44d5c45e7f8149aabc06a830dba5716b4bd952a639c82499954
MONGODB_URI=mongodb://localhost:27017
EOF
  13. Creates a nodeapp user. The application will run as this user:
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app
  14. Configures the supervisor to run the nodeapp:
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/opt/app/Chapter01/mysite
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
# Application should now be running under supervisor
# [END startup]

Creating and configuring a GCE instance

After creating the startup script, follow these steps:

  1. With the startup script ready, we can create an instance using the gcloud command:
$ gcloud compute instances create mysite-instance \
--image-family=debian-8 \
--image-project=debian-cloud \
--machine-type=g1-small \
--scopes userinfo-email,cloud-platform \
--metadata-from-file startup-script=./startup-script.sh \
--zone us-east1-c \
--tags mysite-server 
  2. You can check the progress of the instance creation using the following command:
$ gcloud compute instances get-serial-port-output \
mysite-instance --zone us-east1-c
  3. Create a firewall rule to allow access to port 3000 on the instance:
$ gcloud compute firewall-rules create default-allow-http-3000 \
--allow tcp:3000 \
--source-ranges 0.0.0.0/0 \
--target-tags mysite-server \
--description "Allow port 3000 access to mysite-server" 

   The following screenshot shows the details of the firewall rule:

The tags on the firewall rule and the create instance commands should match.
  4. Get the public IP of the instance from the Google Cloud Console or by using the following command:
$ gcloud compute instances list
  5. Navigate to http://<public IP of the instance>:3000 to see the application running. A short verification sketch follows this list.
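
The following sketch (our addition, not a step from the book) checks that the firewall rule exists, that the instance carries the matching tag, and that the application answers on port 3000:

$ gcloud compute firewall-rules describe default-allow-http-3000
$ gcloud compute instances describe mysite-instance \
--zone us-east1-c --format="value(tags.items)"
$ curl -I http://<public IP of the instance>:3000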

Hosting the Node.js application on Google App Engine

We'll implement the same Node.js application used in the first recipe on Google App Engine. App Engine is a PaaS solution where we just need to deploy the code in any of the supported languages (Node.js, Java, Ruby, C#, Go, Python, and PHP), and the platform takes care of automatic scaling, health checks, and updates to the underlying OS.

App Engine provides the compute power for the application only, so for the database we'll have to use a managed MongoDB service such as mLab or a MongoDB instance on GCE. As we already have a VM running MongoDB from our previous recipe, we'll use that to serve our application running on App Engine.

Getting ready

The following are the initial setup verification steps to be taken before the recipe can be executed:

  1. Create or select a GCP project.
  2. Enable billing and enable the default APIs (some APIs such as BigQuery, storage, monitoring, and a few others are enabled automatically).
  3. Verify that Google Cloud SDK is installed on your development machine.
  4. Verify that the default project is set properly:
$ gcloud config list 
  5. The VM running MongoDB from our first recipe allows connections only from localhost. We'll have to modify its configuration to allow connections from the external world.
  6. SSH into the VM from the Console:
  7. Navigate to MongoDB's configuration file, /etc/mongod.conf, and update the bindIp value to include 0.0.0.0:
   # network interfaces
    net:
      port: 27017
      bindIp: [127.0.0.1,0.0.0.0] 
In some versions of MongoDB, it is enough to comment out the bind_ip line in the MongoDB config to allow access from outside the instance.
  8. Reboot the machine and verify that the MongoDB service is up and running.
  9. We'll also create a new firewall rule to allow access to port 27017 from anywhere:
$ gcloud compute firewall-rules create default-allow-mongo-27017 \
--allow tcp:27017 \
--source-ranges 0.0.0.0/0 \
--target-tags mysite-server \
--description "Allow port 27017 access to mysite-server"

The following screenshot shows the details of the firewall rule:

The MongoDB instance is now open to the world without any login credentials. So for production systems, make sure you secure the MongoDB instance with an admin user and run the mongod process using the --auth option.

  10. Connect to the MongoDB instance running on the VM from your development machine:
$ mongo mongodb://<External IP>:27017
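
As a follow-up to the warning above, a minimal hardening sketch (our assumption; the syntax follows MongoDB 3.2 conventions) creates an admin user and then enables authorization on the VM:

$ mongo mongodb://<External IP>:27017/admin --eval \
'db.createUser({user: "admin", pwd: "<strong password>", roles: ["root"]})'
# On the VM, enable authorization in /etc/mongod.conf and restart the service:
#   security:
#     authorization: enabled
$ sudo systemctl restart mongodb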

How to do it...

With the MongoDB server up and running, we'll make a few configuration changes and deploy the application to App Engine:

  1. Log in to the Cloud Platform Console, create an App Engine application, select the region where the application will be hosted, and enable billing. You can follow along with the interactive tutorial provided by Google to host your first Node.js application on App Engine.
  2. On the development machine, copy the Chapter01/mysite folder to a new folder called Chapter01/mysite-ae, from where we'll push the code to App Engine:
$ cp -r mysite/ mysite-ae/
  3. Navigate to the mysite-ae folder. Open the .env file and update the path for MONGO_URI to point to our VM:
MONGO_URI=mongodb://<External IP>:27017/mysite 
  4. Verify that all the packages are installed and launch the application on the development machine, pointing to the database on the Cloud:
$ npm install
$ npm start  
  5. The application's configuration is governed by a file called app.yaml. Create a new file with the following content (a variation using env_variables is sketched at the end of this list):
# Basic configurations for the NodeJS application 
runtime: nodejs 
env: flex
  6. Now, we can deploy the application to App Engine:
$ gcloud app deploy
  7. Once the application is deployed, the URL to access it is provided. Fire up your favorite browser, navigate to the appspot URL, and verify that the KeystoneJS application is running properly:
...
...5cbd6acfb] to complete...done.
Updating service [default]...done.
Deployed service [default] to [https://<project-id>.appspot.com]
  8. You can stream logs from the command line by running:
$ gcloud app logs tail -s default
  9. To view your application in the web browser, run:
$ gcloud app browse  
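
As a variation on steps 3 and 5 (our assumption, not a step from this recipe), the same settings can be supplied through App Engine's env_variables section in app.yaml instead of a .env file:

$ cat >> app.yaml << EOF

env_variables:
  MONGO_URI: mongodb://<External IP>:27017/mysite
  COOKIE_SECRET: <a very long string>
EOF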

Hosting a Node.js application on Kubernetes Engine

We will containerize the KeystoneJS application and host it on Google Kubernetes Engine (GKE). GKE is powered by the container management system, Kubernetes. Containers are built to do one specific task, and so we'll separate the application and the database as we did for App Engine.

The MongoDB container will host the MongoDB database, with the data stored on external disks. The data within a container is transient, so we need an external disk to safely store the MongoDB data. The app container includes a Node.js runtime that will run our KeystoneJS application.

It will communicate with the Mongo container and also expose itself to the end user:

You'll be using the following services and others for this recipe:
  • Google Kubernetes Engine
  • GCE
  • Google Container Registry

Getting ready

The following are the initial setup verification steps to be taken before the recipe can be executed:

  1. Create or select a GCP project.
  2. Enable billing and enable the default APIs (some APIs such as BigQuery, storage, monitoring, and a few others are enabled automatically).
  3. Verify that Google Cloud SDK is installed on your development machine.
  4. Verify that the default project is set properly.
  5. Install Docker on your development machine.
  6. Install kubectl, the command-line tool for running commands against Kubernetes clusters:
$ gcloud components install kubectl

How to do it...

The steps involved are:

  1. Creating a cluster on GKE to host the containers
  2. Containerizing the KeystoneJS application
  3. Creating a replicated deployment for the application and MongoDB
  4. Creating a load-balanced service to route traffic to the deployed application

Creating a cluster on GKE to host the containers

The container engine cluster runs on top of GCE. For this recipe, we'll create a two-node cluster which will be internally managed by Kubernetes:

  1. We'll create the cluster using the following command:
$ gcloud container clusters create mysite-cluster \
--scopes "cloud-platform" --num-nodes 2 --zone us-east1-c

The gcloud command automatically generates a kubeconfig entry that enables us to use kubectl on the cluster:

  2. Using kubectl, verify that you have access to the created cluster:
$ kubectl get nodes

The gcloud command is used to manage resources on the Google Cloud project, and kubectl is used to manage resources on the Kubernetes Engine cluster.
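
If you work from another machine or the kubeconfig entry is missing, it can be regenerated with the following commands (a small sketch using standard gcloud and kubectl commands, not an explicit step of the recipe):

$ gcloud container clusters get-credentials mysite-cluster --zone us-east1-c
$ kubectl cluster-info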

Containerizing the KeystoneJS application

Follow these steps:

  1. Clone the repository in your development space: 
$ git clone https://github.com/legorie/gcpcookbook.git
  2. Navigate to the directory where the GKE version of the mysite application is stored:
$ cd gcpcookbook/Chapter01/mysite-gke  
  3. With your favorite editor, create a file named .env in the mysite-gke folder:
PORT=8080 
COOKIE_SECRET=<a very long string> 
MONGO_URI=mongodb://mongo/mysite 

A custom port of 8080 is used for the KeystoneJS application. This port will be mapped to port 80 later in the Kubernetes service configuration. Similarly, mongo will be the name of the load-balanced MongoDB service that will be created later.

  4. The Dockerfile in the folder is used to create the application's Docker image. First, it pulls a Node.js image from the registry, then it copies the application code into the container, installs the dependencies, and starts the application. Navigate to /Chapter01/mysite-gke/Dockerfile:
# https://github.com/GoogleCloudPlatform/nodejs-getting-started/blob/master/optional-container-engine/Dockerfile 
# Dockerfile extending the generic Node image with application files for a 
# single application. 
FROM gcr.io/google_appengine/nodejs 
# Check to see if the version included in the base runtime satisfies 
# '>=0.12.7', if not then do an npm install of the latest available 
# version that satisfies it. 
RUN /usr/local/bin/install_node '>=0.12.7' 
COPY . /app/ 
# You have to specify "--unsafe-perm" with npm install 
# when running as root.  Failing to do this can cause 
# install to appear to succeed even if a preinstall 
# script fails, and may have other adverse consequences 
# as well. 
# This command will also cat the npm-debug.log file after the 
# build, if it exists. 
RUN npm install --unsafe-perm || \ 
  ((if [ -f npm-debug.log ]; then \ 
      cat npm-debug.log; \ 
    fi) && false) 
CMD npm start
  5. The .dockerignore file contains the file paths that will not be included in the Docker image.
  6. Build the Docker image:
$ docker build -t gcr.io/<Project ID>/mysite .  
Troubleshooting:
  • Error: Cannot connect to the Docker daemon. Is the Docker daemon running on this host?
  • Solution: Add the current user to the Docker group and restart the shell. Create a new Docker group if needed.
  7. You can list the created Docker image:
$ docker images
  8. Push the created image to Google Container Registry so that our cluster can access this image:
$ gcloud docker -- push gcr.io/<Project ID>/mysite
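
To confirm that the image is available to the cluster, the registry can be queried with standard gcloud commands (a sketch; not an explicit step of the recipe):

$ gcloud container images list --repository=gcr.io/<Project ID>
$ gcloud container images list-tags gcr.io/<Project ID>/mysite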

Creating a replicated deployment for the application and MongoDB

Follow these steps:

  1. To create an external disk, we'll use the following command:
$ gcloud compute disks create --size 1GB mongo-disk \
--zone us-east1-c
  2. We'll first create the MongoDB deployment because the application expects the database to be present. A Deployment object creates the desired number of pods indicated by our replica count. Notice the label given to the pods that are created; the Kubernetes system manages the pods, the deployment, and their linking to their corresponding services via label selectors. Navigate to /Chapter01/mysite-gke/db-deployment.yml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        gcePersistentDisk:
          pdName: mongo-disk # The created disk name
          fsType: ext4
You can refer to the following link for more information on Kubernetes objects: https://kubernetes.io/docs/user-guide/walkthrough/k8s201/.
  3. Use kubectl to deploy the deployment to the cluster:
$ kubectl create -f db-deployment.yml
  4. You can view the deployments using the command:
$ kubectl get deployments 
  5. The pods created by the deployment can be viewed using the command:
$ kubectl get pods
  6. To present the MongoDB pods to the application layer, we'll need to create a service. A service exposes a single static IP address to the underlying set of pods. Navigate to /Chapter01/mysite-gke/db-service.yml:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: mongo # The key-value pair is matched with the label on the deployment
  7. The kubectl command to create a service is:
$ kubectl create -f db-service.yml
  8. You can view the status of the creation using the commands:
$ kubectl get services
$ kubectl describe service mongo
  9. We'll repeat the same process for the Node.js application. For the deployment, we'll choose to have two replicas of the application pod to serve the web requests. Navigate to /Chapter01/mysite-gke/web-deployment.yml and update the <Project ID> in the image item:
apiVersion: apps/v1beta1 
kind: Deployment 
metadata: 
  name: mysite-app 
  labels: 
    name: mysite 
spec: 
  replicas: 2 
  template: 
    metadata: 
      labels: 
        name: mysite 
    spec: 
      containers: 
      - image: gcr.io/<Project ID>/mysite 
        name: mysite 
        ports: 
        - name: http-server 
          containerPort: 8080 # KeystoneJS app is exposed on port 8080
  10. Use kubectl to create the deployment:
$ kubectl create -f web-deployment.yml 
  11. Finally, we'll create the service to manage the application pods. Navigate to /Chapter01/mysite-gke/web-service.yml:
apiVersion: v1
kind: Service
metadata:
  name: mysite
  labels:
    name: mysite
spec:
  type: LoadBalancer
  ports:
    - port: 80 # The application is exposed to the external world on port 80
      targetPort: http-server
      protocol: TCP
  selector:
    name: mysite

To create the service, execute the following command:

$ kubectl create -f web-service.yml
  12. Get the external IP of the mysite service and open it in a browser to view the application:
$ kubectl get services

NAME         CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
kubernetes   10.27.240.1     <none>           443/TCP        49m
mongo        10.27.246.117   <none>           27017/TCP      30m
mysite       10.27.240.33    1x4.1x3.38.164   80:30414/TCP   2m
After the service is created, the external IP will be unavailable for a short period; you can retry after a few seconds. The Google Cloud Console has a rich interface to view the cluster components, in addition to the Kubernetes dashboard. In case of any errors, you can view the logs and verify the configurations on the Console. The Workloads submenu of GKE provides details of the deployments, and the Discovery & load balancing submenu lists all the services created. A few handy kubectl equivalents are sketched below.
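
A sketch of the kubectl equivalents (standard commands; <pod-name> is taken from the get pods output):

$ kubectl get pods                          # list the application and mongo pods
$ kubectl logs <pod-name>                   # view the logs of a single pod
$ kubectl describe service mysite           # inspect the load-balanced service
$ kubectl scale deployment mysite-app --replicas=3    # optionally scale the web tier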

Hosting an application on Google Cloud Functions

Google Cloud Functions is the serverless compute service that runs our code in response to events. The resources needed to run the code are automatically managed and scaled. At the time of writing this recipe, Google Cloud Functions is in beta. Functions are written in JavaScript on a Node.js runtime and can be invoked via an HTTP trigger, file events on Cloud Storage buckets, or messages on a Cloud Pub/Sub topic.

We'll create a simple calculator using an HTTP trigger that will take the input parameters via the HTTP POST method and provide the result.

Getting ready

The following are the initial setup verification steps to be taken before the recipe can be executed:

  1. Create or select a GCP project
  2. Enable billing and enable the default APIs (some APIs such as BigQuery, storage, monitoring, and a few others are enabled automatically)
  3. Verify that Google Cloud SDK is installed on your development machine
  4. Verify that the default project is set properly

How to do it...

We'll use the simple calculator JavaScript code available on the book's GitHub repository and deploy it to Cloud Functions:

  1. Navigate to the /Chapter01/calculator folder. The application code is present in index.js and the dependencies in the package.json file. As there are no dependencies for this function, the package.json file is a basic skeleton needed for the deployment.
  2. The main function receives the input via the request object, validates the inputs, and performs the calculation. The calculated result is then sent back to the requester via the response object with an appropriate HTTP status code. In the following code, the switch statement does the core processing of the calculator; do spend some time on it to understand the gist of this function:
/**
 * Responds to any HTTP request that provides the below JSON message in the body.
 * # Example input JSON : {"number1": 1, "operand": "mul", "number2": 2 }
 * @param {!Object} req Cloud Function request context.
 * @param {!Object} res Cloud Function response context.
 */
exports.calculator = function calculator(req, res) {
  if (req.body.operand === undefined) {
    res.status(400).send('No operand defined!');
  }
  else { // Everything is okay
    console.log("Received number1", req.body.number1);
    console.log("Received operand", req.body.operand);
    console.log("Received number2", req.body.number2);
    var error, result;
    if (isNaN(req.body.number1) || isNaN(req.body.number2)) {
      console.error("Invalid Numbers"); // different logging
      error = "Invalid Numbers!";
      res.status(400).send(error);
    }
    switch (req.body.operand) {
      case "+":
      case "add":
        result = req.body.number1 + req.body.number2;
        break;
      case "-":
      case "sub":
        result = req.body.number1 - req.body.number2;
        break;
      case "*":
      case "mul":
        result = req.body.number1 * req.body.number2;
        break;
      case "/":
      case "div":
        if (req.body.number2 === 0) {
          console.error("The divisor cannot be 0");
          error = "The divisor cannot be 0";
          res.status(400).send(error);
        }
        else {
          result = req.body.number1 / req.body.number2;
        }
        break;
      default:
        res.status(400).send("Invalid operand");
        break;
    }
    console.log("The Result is: " + result);
    res.status(200).send('The result is: ' + result);
  }
};
  3. We'll deploy the calculator function using the following command:
$ gcloud beta functions deploy calculator --trigger-http   

The entry point for the function is automatically taken to be the exported calculator function. If you export the function under a different name in index.js, the deploy command should be updated accordingly.

  4. You can test the function via the Console, your favorite API testing app such as Postman, or the following curl command. The endpoint for the function can be found under the Triggering event tab in the Console, or it is provided after the deploy command:
Input JSON : {"number1": 1, "operand": "mul", "number2": 2 }

$ curl -X POST \
https://us-central1-<ProjectID>.cloudfunctions.net/calculator \
-d '{"number1": 1, "operand": "mul", "number2": 2 }' \
-H "Content-Type: application/json"

The result is: 2
  5. You can also click on the VIEW LOGS button in the Cloud Functions interface to view the logs of the function execution (a command-line alternative is sketched below):
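
The same information can be pulled from the command line with the SDK (a sketch; at the time of writing, the functions commands live under the beta component):

$ gcloud beta functions describe calculator             # shows the trigger URL
$ gcloud beta functions logs read calculator --limit 20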

Hosting a highly scalable application on Google Compute Engine

There are a number of ways to host a highly scalable application on GCP using Compute Engine, App Engine, and Container Engine. We'll look at a simple PHP and MySQL application hosted on GCE with Cloud SQL and see how the GCP ecosystem helps us build it in a scalable manner.

First, we'll create a Cloud SQL instance, which will be used by the application servers. The application servers should be designed so that they can be replicated at will in response to events such as high CPU usage or increased utilization.

So, we'll create an instance template, which is a definition of how GCP should create a new application server when one is needed. We feed in the startup script that prepares the instance to our requirements.

Then, we create an instance group, which is a group of identical instances defined by the instance template. The instance group also monitors the health of the instances to make sure they maintain the defined number of servers. It automatically identifies unhealthy instances and recreates them as defined by the template.

Later, we create an HTTP(S) load balancer to serve traffic to the instance group we have created. With the load balancer in place, we now have two instances serving traffic to the users under a single endpoint provided by the load balancer. Finally, to handle any unexpected load, we'll use the autoscaling feature of the instance group.

Getting ready

The following are the initial setup verification steps to be taken before the recipe can be executed:

  1. Create or select a GCP project
  2. Enable billing and enable the default APIs (some APIs such as BigQuery, storage, monitoring, and a few others are enabled automatically)
  3. Enable the Google Cloud SQL API
  4. Verify that Google Cloud SDK is installed on your development machine
  5. Verify that the default project is set properly

How to do it...

The implementation approach is to first create the backend service (the database), then the instance-related setup, and finally the load balancing setup:

  1. Let's first create a Cloud SQL instance. On the Google Console, navigate to the SQL menu item under Storage.
  2. Click on Create instance and select MySQL.
  3. Choose the recommended MySQL second generation and fill out the details:
The root password is set to a simple password for demonstration purposes.
  4. Note the IP address of the Cloud SQL instance; it will be fed to the configuration file in the next step:
  5. Navigate to the /Chapter01/php-app/pdo folder. Edit the config.php file as follows:
$host     = "35.190.175.176"; // IP address of the Cloud SQL instance
$username = "root";
$password = "";  // Password which was given during the creation
$dbname   = "test";
$dsn      = "mysql:host=$host;dbname=$dbname";
$options  = array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION);
  6. The PHP application code is now ready to be hosted and replicated onto multiple machines. Commit the changes to Source Repositories, from where the startup script will pick up the code.
  7. The startup-script.sh file can be found in the Chapter01/php-app/ directory. The script installs the necessary software to run the PHP application, downloads the application code from Source Repositories, moves it to the /var/www/html folder, and installs the components for logging. Do update the project ID and the repository name in the following script to point to your GCP repository:
#!/bin/bash
# Modified from https://github.com/GoogleCloudPlatform/getting-started-php/blob/master/optional-compute-engine/gce/startup-script.sh
# [START all]
set -e
export HOME=/root
# [START php]
apt-get update
apt-get install -y git apache2 php5 php5-mysql php5-dev php-pear pkg-config mysql-client
# Fetch the project ID from the Metadata server
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
# Get the application source code
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/<Project ID>/r/<Repository Name> /opt/src -b master
#ln -s /opt/src/optional-compute-engine /opt/app
cp /opt/src/Chapter01/php-app/pdo/* /var/www/html -r
# [END php]
systemctl restart apache2
iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT
# [START project_config]
# Fetch the application config file from the Metadata server and add it to the project
#curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/project-config" \
#  -H "Metadata-Flavor: Google" >> /opt/app/config/settings.yml
# [END project_config]
# [START logging]
# Install Fluentd
sudo curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
# Start Fluentd
service google-fluentd restart &
# [END logging]
# [END all]
  8. Do make sure the firewall rules are updated to allow traffic on ports 80 and 3306 (a sketch of the command follows this list). The instances are tagged http-server, so include this tag in the target-tags attribute.
  9. We'll create an instance group for a set of identical PHP application servers. Create an instance template as follows:
$ gcloud compute instance-templates create my-php-tmpl \
--machine-type=g1-small \
--scopes logging-write,storage-ro,https://www.googleapis.com/auth/projecthosting \
--metadata-from-file startup-script=./startup-script.sh \
--image-family=debian-8 \
--image-project=debian-cloud \
--tags http-server

The following screenshot shows the output for the preceding command:

Create the instance group as follows:

$ gcloud compute instance-groups managed create my-php-group \
--base-instance-name my-php-app \
--size 2 \
--template my-php-tmpl \
--zone us-east1-c

The following screenshot shows the output for the preceding command:

We'll create a health check that polls the instances at specified intervals to verify that they can continue to serve traffic:

$ gcloud compute http-health-checks create php-health-check \
--request-path /public/index.php

The following screenshot shows the output for the preceding command:

  10. Now, we have two instances running in our instance group, my-php-group. We'll bring them under a load balancer to serve traffic.
  11. Head over to the Load balancing submenu by navigating to Networking | Network Services | Load balancing and create a new HTTP(S) load balancer:
  12. For the Backend configuration, we'll have to create a backend service which points to the instance group and the health check that we have already created:
  13. For the Host and path rules and Frontend configuration, we'll leave the default settings.
  14. Once the settings are completed, an example review screen is shown as follows:
  15. Go ahead and create the HTTP(S) load balancer; an external IP address is created to address the load balancer. After some time, once the instances are identified as healthy, the load balancer will serve traffic to our instances in the group:
  16. In cases where traffic cannot be handled by a fixed number of instances under a load balancer, GCP provides the Compute Engine autoscaler. We can configure autoscaling at the instance group level, and instances can be scaled depending on CPU usage, HTTP load balancing usage, monitoring metrics, or a combination of these factors. A sketch of the command-line equivalents for steps 8 and 16 follows this list:
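
A sketch of the command-line equivalents for steps 8 and 16 (our assumption; the rule name and the autoscaling thresholds are examples, adjust them to your needs):

$ gcloud compute firewall-rules create default-allow-http-80 \
--allow tcp:80 \
--source-ranges 0.0.0.0/0 \
--target-tags http-server \
--description "Allow port 80 access to the PHP application servers"

$ gcloud compute instance-groups managed set-autoscaling my-php-group \
--zone us-east1-c \
--max-num-replicas 5 \
--target-cpu-utilization 0.75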

How it works...

When the user hits the endpoint URL of the load balancer, it forwards the request to one of the available instances under its control. The load balancer constantly checks the health of the instances under its supervision; the URL to test for health is set up using Compute Engine's health check.

The PHP applications running on both instances are configured to use the same Cloud SQL database. So, irrespective of whether the request hits Instance 1 or Instance 2, the data is served from the common Cloud SQL database.

Also, the Autoscaler is turned on in the Instance Group governing the two instances. If there is an increase in usage (CPU in our example), the Autoscaler will spawn a new instance to handle the increase in traffic:


Key benefits

  • Implement Google Cloud services in your organization
  • Leverage Google Cloud components to secure your organization’s data
  • A recipe-based guide that promises hands-on experience in deploying a highly scalable and available environment

Description

Google Cloud Platform is a cloud computing platform that offers products and services to host applications using state-of-the-art infrastructure and technology. You can build and host applications and websites, store data, and analyze data on Google's scalable infrastructure. This book follows a recipe-based approach, giving you hands-on experience to make the most of Google Cloud services. This book starts with practical recipes that explain how to utilize Google Cloud's common services. Then, you'll see how to make full use of Google Cloud components such as networking, security, management, and developer tools. Next, we'll deep dive into implementing core Google Cloud services into your organization, with practical recipes on App Engine, Compute Engine, Cloud Functions, virtual networks, and Cloud Storage. Later, we'll provide recipes on implementing authentication and security, Cloud APIs, command-line management, deployment management, and the Cloud SDK. Finally, we'll cover administration and troubleshooting tasks on applications with Compute services and we'll show how to monitor your organization's efficiency with best practices. By the end of this book, you'll have an overall understanding and hands-on implementation of Google Cloud services in your organization with ease.

Who is this book for?

This book is for IT professionals, engineers, and developers looking at implementing Google Cloud in their organizations. Administrators and architects planning to make their organization more efficient with Google Cloud will also find this book useful. Basic understanding of Cloud services and the Google Cloud platform is necessary.

What you will learn

  • Host a Python application on Google Compute Engine
  • Host an application using Google Cloud Functions
  • Migrate a MySQL DB to Cloud Spanner
  • Configure a network for a highly available application on GCP
  • Learn simple image processing using Storage and Cloud Functions
  • Automate security checks using Policy Scanner
  • Understand tools for monitoring a production environment in GCP
  • Learn to manage multiple projects using service accounts
Product Details

Publication date: Apr 16, 2018
Length: 280 pages
Edition: 1st
Language: English
ISBN-13: 9781788291996
Vendor: Google


Table of Contents

8 Chapters
  1. Compute
  2. Storage and Databases
  3. Networking
  4. Security
  5. Machine Learning and Big Data
  6. Management Tools
  7. Best Practices
  8. Other Books You May Enjoy

