Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]

  • 8 min read
  • 31 Jul 2018


Google Cloud Platform is one of the largest and most innovative cloud providers out there. It is used by industry leaders such as Coca-Cola, Spotify, and Philips. Amazon Web Services and Google Cloud are constantly engaged in a price war, which benefits consumers greatly. Google Cloud Platform covers 12 geographical regions across four continents, with new regions coming up every year. In this tutorial, we will learn about Google Compute Engine and network services, and how Ansible 2 can be leveraged to automate common networking tasks.

This is an excerpt from Ansible 2 Cloud Automation Cookbook, written by Aditya Patawari and Vikas Aggarwal.

Managing network and firewall rules


By default, inbound connections are not allowed to any of the instances. One way to allow traffic is to permit incoming connections to certain ports on instances carrying a particular tag. For example, we can tag all the web servers as http and allow incoming connections to ports 80 and 8080 for all the instances carrying the http tag.

How to do it…

  1. We will create a firewall rule with source tags using the gce_net module:

- name: Create Firewall Rule with Source Tags
  gce_net:
    name: my-network
    fwname: "allow-http"
    allowed: tcp:80,8080
    state: "present"
    target_tags: "http"
    subnet_region: us-west1
    service_account_email: "{{ service_account_email }}"
    project_id: "{{ project_id }}"
    credentials_file: "{{ credentials_file }}"
  tags:
  - recipe6

  2. Using tags for firewalls is not always possible. A lot of organizations whitelist internal IP ranges or allow office IPs to reach the instances over the network. A simple way to allow a range of IP addresses is to use a source range:

- name: Create Firewall Rule with Source Range
  gce_net:
    name: my-network
    fwname: "allow-internal"
    state: "present"
    src_range: ['10.0.0.0/16']
    subnet_name: public-subnet
    allowed: 'tcp'
    service_account_email: "{{ service_account_email }}"
    project_id: "{{ project_id }}"
    credentials_file: "{{ credentials_file }}"   
  tags:
  - recipe6

How it works...


In step 1, we have created a firewall rule called allow-http to allow incoming requests to TCP ports 80 and 8080. Since our instance app is tagged with http, it can accept incoming traffic on ports 80 and 8080.

In step 2, we have allowed traffic from all the instances in the 10.0.0.0/16 range, which is a private IP address range. Along with the connection parameters and the source IP address CIDR, we have defined the network name and the subnet name. We have allowed all TCP connections. If we want to restrict the rule to a single port or a range of ports, we can use tcp:80 or tcp:4000-5000 respectively, as shown in the sketch below.
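
For example, a minimal sketch of a source-range rule that opens only a contiguous port range; the rule name allow-app-ports and the 4000-5000 range are illustrative and not part of the original recipe:

- name: Create Firewall Rule for a Port Range
  gce_net:
    name: my-network
    # "allow-app-ports" and the 4000-5000 range are illustrative only
    fwname: "allow-app-ports"
    state: "present"
    src_range: ['10.0.0.0/16']
    subnet_name: public-subnet
    allowed: 'tcp:4000-5000'
    service_account_email: "{{ service_account_email }}"
    project_id: "{{ project_id }}"
    credentials_file: "{{ credentials_file }}"
  tags:
  - recipe6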

Managing load balancer


An important reason to use a cloud is to achieve scalability at a relatively low cost. Load balancers play a key role in scalability. We can attach multiple instances behind a load balancer to distribute traffic between the instances. The Google Cloud load balancer also supports health checks, which help ensure that traffic is sent only to healthy instances.

How to do it…


Let us create a load balancer and attach an instance to it:

- name: create load balancer and attach to instance
  gce_lb:
    name: loadbalancer1
    region: us-west1
    members: ["{{ zone }}/app"]
    httphealthcheck_name: hc
    httphealthcheck_port: 80
    httphealthcheck_path: "/"
    service_account_email: "{{ service_account_email }}"
    project_id: "{{ project_id }}"
    credentials_file: "{{ credentials_file }}"
  tags:
  - recipe7


To create a load balancer, we need to supply a comma-separated list of instances as members. We also need to provide health check parameters, including a name, a port, and the path on which a GET request can be sent.
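
As an illustration, a minimal sketch of the same task with a second backend added to members; the instance name app2 is hypothetical and would have to exist in the same zone:

- name: create load balancer and attach two instances
  gce_lb:
    name: loadbalancer1
    region: us-west1
    # "app2" is a hypothetical second instance used only for illustration
    members: ["{{ zone }}/app", "{{ zone }}/app2"]
    httphealthcheck_name: hc
    httphealthcheck_port: 80
    httphealthcheck_path: "/"
    service_account_email: "{{ service_account_email }}"
    project_id: "{{ project_id }}"
    credentials_file: "{{ credentials_file }}"
  tags:
  - recipe7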

Managing GCE images in Ansible 2


Images are a collection of a boot loader, an operating system, and a root filesystem. There are public images provided by Google and various open source communities, and we can use these images to create an instance. GCE also gives us the capability to create our own images, which we can then use to boot instances.

It is important to understand the difference between an image and a snapshot. A snapshot is incremental, but it captures only the disk. Due to its incremental nature, it is better suited for creating backups. An image contains more information, such as a boot loader, and is non-incremental in nature. However, it is possible to import images from a different cloud provider or datacenter into GCE.

Another reason we recommend snapshots for backup is that taking a snapshot does not require us to shut down the instance, whereas building an image does. Why build images at all? We will discover that in the subsequent sections.
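
For completeness, here is a minimal sketch of taking such a backup snapshot while the instance keeps running, assuming the gce_snapshot module is available in our Ansible installation; the snapshot name app-backup is illustrative and not part of the original recipe:

- name: take a snapshot of the app instance for backup
  gce_snapshot:
    instance_name: app
    # "app-backup" is an illustrative snapshot name
    snapshot_name: app-backup
    state: present
    service_account_email: "{{ service_account_email }}"
    credentials_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
  tags:
  - recipe8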

How to do it…

  1. Let us create an image for now:

- name: stop the instance
  gce:
    instance_names: app
    zone: "{{ zone }}"
    machine_type: f1-micro
    image: centos-7
    state: stopped
    service_account_email: "{{ service_account_email }}"
    credentials_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
    disk_size: 15
    metadata: "{{ instance_metadata }}"
  tags:
  - recipe8

- name: create image
  gce_img:
    name: app-image
    source: app
    zone: "{{ zone }}"
    state: present
    service_account_email: "{{ service_account_email }}"
    pem_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
  tags:
  - recipe8

- name: start the instance
  gce:
    instance_names: app
    zone: "{{ zone }}"
    machine_type: f1-micro
    image: centos-7
    state: started
    service_account_email: "{{ service_account_email }}"
    credentials_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
    disk_size: 15
    metadata: "{{ instance_metadata }}"
  tags:
  - recipe8

How it works...


In these tasks, we stop the instance first and then create the image. While creating the image, we just need to supply the instance name, along with the standard connection parameters. Finally, we start the instance again. The parameters of these tasks are self-explanatory.
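
To illustrate what building an image buys us, here is a minimal sketch (not part of the original recipe) that boots a new instance from app-image using the gce module; the instance name app2 is hypothetical:

- name: create an instance from the custom image
  gce:
    # "app2" is a hypothetical instance name used only for illustration
    instance_names: app2
    zone: "{{ zone }}"
    machine_type: f1-micro
    image: app-image
    state: present
    service_account_email: "{{ service_account_email }}"
    credentials_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
  tags:
  - recipe8

The same image is also what the instance template in the next section refers to.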

Creating instance templates


Instance templates define various characteristics of an instance and related attributes. Some of these attributes are:

  • Machine type (f1-micro, n1-standard-1, custom)
  • Image (we created one in the previous tip, app-image)
  • Zone (us-west1-a)
  • Tags (we have a firewall rule for tag http)

How to do it…


Once a template is created, we can use it to create a managed instance group, which can auto-scale based on various parameters. Instance templates are typically available globally, as long as we do not specify a restrictive parameter like a specific subnet or disk:

- name: create instance template named app-template
  gce_instance_template:
    name: app-template
    size: f1-micro
    tags: http,http-server
    image: app-image
    state: present
    subnetwork: public-subnet
    subnetwork_region: us-west1
    service_account_email: "{{ service_account_email }}"
    credentials_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
  tags:
  - recipe9


We have specified the machine type, image, subnet, and tags. This template can be used to create instance groups.

Creating managed instance groups


Traditionally, we have managed virtual machines individually. Instance groups let us manage a group of identical virtual machines as a single entity. These virtual machines are created from an instance template, like the one we created in the previous tip. If we have to make a change to the instance configuration, that change is applied to all the instances in the group.

How to do it…


Perhaps the most important feature of an instance group is auto-scaling. In the event of high resource requirements, the instance group can automatically scale up to a predefined number of instances:

- name: create an instance group with autoscaling
  gce_mig:
    name: app-mig
    zone: "{{ zone }}"
    service_account_email: "{{ service_account_email }}"
    credentials_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
    state: present
    size: 2
    named_ports:
    - name: http
      port: 80
    template: app-template
    autoscaling:
      enabled: yes
      name: app-autoscaler
      policy:
        min_instances: 2
        max_instances: 5
        cool_down_period: 90
        cpu_utilization:
          target: 0.6
        load_balancing_utilization:
          target: 0.8
  tags:
  - recipe10

How it works...


The preceding task creates an instance group with an initial size of two instances, defined by size. We have named port 80 as http; this name can be used by other GCE components to route traffic. We have used the template that we created in the previous recipe. We have also enabled autoscaling with a policy that allows scaling up to five instances. At any given point, at least two instances will be running.

We are scaling on two parameters: cpu_utilization, where a target of 0.6 triggers scaling once utilization exceeds 60%, and load_balancing_utilization, where scaling triggers once 80% of the requests-per-minute capacity is reached. Typically, when an instance boots, it takes some time to initialize and start up, and data collected during that period might not make much sense. The cool_down_period parameter indicates that we should start collecting data from the instance only after 90 seconds and should not trigger scaling based on data collected before that.

We learned a few networking tricks to manage public cloud infrastructure effectively. You can learn more about building public cloud infrastructure by referring to the book Ansible 2 Cloud Automation Cookbook.
