Infrastructure as Code Cookbook

You're reading from Infrastructure as Code Cookbook: Automate complex infrastructures

Product type: Paperback
Published in: Feb 2017
Publisher: Packt
ISBN-13: 9781786464910
Length: 440 pages
Edition: 1st
Authors (2):
Pierre Pomès
Stephane Jourdan
Table of Contents (12 chapters)

Preface
1. Vagrant Development Environments
2. Provisioning IaaS with Terraform
3. Going Further with Terraform
4. Automating Complete Infrastructures with Terraform
5. Provisioning the Last Mile with Cloud-Init
6. Fundamentals of Managing Servers with Chef and Puppet
7. Testing and Writing Better Infrastructure Code with Chef and Puppet
8. Maintaining Systems Using Chef and Puppet
9. Working with Docker
10. Maintaining Docker Containers
Index

Simulating a networked three-tier architecture app with Vagrant

Vagrant is a great tool for simulating systems on isolated networks, allowing us to easily mock architectures found in production. The idea behind multiple tiers is to separate the logic and execution of the various elements of the application rather than centralizing everything in one place. A common pattern is a first layer that receives user requests, a second layer that does the application work, and a third layer that stores and retrieves data, usually from a database.

In this simulation, we'll have the traditional three tiers, each running CentOS 7 virtual machines on its own isolated network:

  • Front: NGINX reverse proxy
  • App: a Node.js app running on two nodes
  • Database: Redis

Virtual Machine Name | front_lan IP  | app_lan IP     | db_lan IP
front-1              | 10.10.0.11/24 | 10.20.0.101/24 | N/A
app-1                | N/A           | 10.20.0.11/24  | 10.30.0.101/24
app-2                | N/A           | 10.20.0.12/24  | 10.30.0.102/24
db-1                 | N/A           | N/A            | 10.30.0.11/24
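As a sanity check on the address plan above, here is a small sketch using only Python's standard ipaddress module; it assumes front_lan, app_lan, and db_lan map to 10.10.0.0/24, 10.20.0.0/24, and 10.30.0.0/24 respectively:

```python
import ipaddress

# The three isolated LANs used in this recipe (assumed /24 networks).
lans = {
    "front_lan": ipaddress.ip_network("10.10.0.0/24"),
    "app_lan": ipaddress.ip_network("10.20.0.0/24"),
    "db_lan": ipaddress.ip_network("10.30.0.0/24"),
}

# The interface plan from the table above.
plan = {
    "front-1": {"front_lan": "10.10.0.11", "app_lan": "10.20.0.101"},
    "app-1": {"app_lan": "10.20.0.11", "db_lan": "10.30.0.101"},
    "app-2": {"app_lan": "10.20.0.12", "db_lan": "10.30.0.102"},
    "db-1": {"db_lan": "10.30.0.11"},
}

# Verify every interface IP falls inside its LAN's /24.
for vm, ifaces in plan.items():
    for lan, ip in ifaces.items():
        assert ipaddress.ip_address(ip) in lans[lan], (vm, lan, ip)
print("address plan consistent")
```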

You will access the reverse proxy (NGINX), which alone can contact the application servers (Node.js), which in turn are the only ones able to connect to the database.

Getting ready

To step through this recipe, you will need the following:

  • A working Vagrant installation
  • A working VirtualBox installation
  • An Internet connection

How to do it…

Follow these steps to simulate a networked three-tier architecture app with Vagrant.

Tier 3: the database

The database lives in a db_lan private network with the IP 10.30.0.11/24.

This application will use a simple Redis installation. Installing and configuring Redis is beyond the scope of this book, so we'll keep it as simple as possible (install it, configure it to listen on its db_lan IP in addition to 127.0.0.1, and start it):

  config.vm.define "db-1" do |config|
    config.vm.hostname = "db-1"
    config.vm.network "private_network", ip: "10.30.0.11", virtualbox__intnet: "db_lan"
    config.vm.provision :shell, :inline => "sudo yum install -q -y epel-release"
    config.vm.provision :shell, :inline => "sudo yum install -q -y redis"
    config.vm.provision :shell, :inline => "sudo sed -i 's/bind 127.0.0.1/bind 127.0.0.1 10.30.0.11/' /etc/redis.conf"
    config.vm.provision :shell, :inline => "sudo systemctl enable redis"
    config.vm.provision :shell, :inline => "sudo systemctl start redis"
  end
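The inline sed above rewrites the bind directive so Redis also listens on the db_lan address. A minimal Python sketch of that substitution, assuming the stock /etc/redis.conf line reads `bind 127.0.0.1`:

```python
import re

# The stock redis.conf bind line (assumed), and the same rewrite the
# inline sed performs: append the db_lan IP after 127.0.0.1.
line = "bind 127.0.0.1"
patched = re.sub(r"bind 127\.0\.0\.1", "bind 127.0.0.1 10.30.0.11", line)
print(patched)  # bind 127.0.0.1 10.30.0.11
```

After this change and a restart, Redis accepts connections on both loopback and 10.30.0.11, which is what allows the app servers on db_lan to reach it.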

Tier 2: the application servers

This tier is where our application lives, backed by an application (web) server. The application can connect to the database tier and is available to end users through the tier 1 proxy servers. This is usually where all the application logic runs.

The Node.js application

The application is simulated with the simplest Node.js code I could produce to demonstrate the setup: it displays the server's hostname (the filename is app.js).

First, it creates a connection to the Redis server on the db_lan network:

#!/usr/bin/env node
var os = require("os");
var redis = require('redis');
var client = redis.createClient(6379, '10.30.0.11');
client.on('connect', function() {
    console.log('connected to redis on '+os.hostname()+' 10.30.0.11:6379');
});

Then, if the connection succeeds, it creates an HTTP server listening on :8080 that displays the server's hostname:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Running on '+os.hostname()+'\n');
}).listen(8080);
console.log('HTTP server listening on :8080');

To start the app, here's the simplest possible systemd service file, saved as nodeapp.service (systemd unit files are out of the scope of this book):

[Unit]
Description=Node App
After=network.target

[Service]
ExecStart=/srv/nodeapp/app.js
Restart=always
User=vagrant
Group=vagrant
Environment=PATH=/usr/bin
Environment=NODE_ENV=production
WorkingDirectory=/srv/nodeapp

[Install]
WantedBy=multi-user.target

Let's iterate through the deployment of a number of application servers (in this case, two) to serve the app. Once again, deploying Node.js applications is out of the scope of this book, so I kept it as simple as possible: simple directory and permission creation and systemd unit deployment. In production, this would probably be done through a configuration management tool such as Chef or Ansible, maybe coupled with a proper deployment tool:

# Tier 2: a scalable number of application servers
vm_app_num = 2
  (1..vm_app_num).each do |n|
    app_lan_ip = "10.20.0.#{n+10}"
    db_lan_ip = "10.30.0.#{n+100}"
    config.vm.define "app-#{n}" do |config|
      config.vm.hostname = "app-#{n}"
      config.vm.network "private_network", ip: app_lan_ip, virtualbox__intnet: "app_lan"
      config.vm.network "private_network", ip: db_lan_ip, virtualbox__intnet: "db_lan"
      config.vm.provision :shell, :inline => "sudo yum install -q -y epel-release"
      config.vm.provision :shell, :inline => "sudo yum install -q -y nodejs npm"
      config.vm.provision :shell, :inline => "sudo mkdir /srv/nodeapp"
      config.vm.provision :shell, :inline => "sudo cp /vagrant/app.js /srv/nodeapp"
      config.vm.provision :shell, :inline => "sudo chown -R vagrant.vagrant /srv/"
      config.vm.provision :shell, :inline => "sudo chmod +x /srv/nodeapp/app.js"
      config.vm.provision :shell, :inline => "cd /srv/nodeapp; npm install redis"
      config.vm.provision :shell, :inline => "sudo cp /vagrant/nodeapp.service /etc/systemd/system"
      config.vm.provision :shell, :inline => "sudo systemctl daemon-reload"
      config.vm.provision :shell, :inline => "sudo systemctl start nodeapp"
    end
  end
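The loop above derives each app server's two addresses from its index n. A quick Python sketch of the same scheme:

```python
# Reproduce the Vagrantfile's addressing: app server n gets
# 10.20.0.(n+10) on app_lan and 10.30.0.(n+100) on db_lan.
vm_app_num = 2
addresses = {
    f"app-{n}": {"app_lan": f"10.20.0.{n + 10}", "db_lan": f"10.30.0.{n + 100}"}
    for n in range(1, vm_app_num + 1)
}
print(addresses["app-1"])  # {'app_lan': '10.20.0.11', 'db_lan': '10.30.0.101'}
print(addresses["app-2"])  # {'app_lan': '10.20.0.12', 'db_lan': '10.30.0.102'}
```

Bumping vm_app_num scales the tier: each new VM slots into both LANs without touching the rest of the Vagrantfile (within the limits of the /24 ranges).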

Tier 1: the NGINX reverse proxy

Tier 1 is represented here by an NGINX reverse proxy configuration on CentOS 7, as simple as it could be for this demo. Configuring an NGINX reverse proxy with a pool of servers is out of the scope of this book:

events {
  worker_connections 1024;
}
http {
  upstream app {
    server 10.20.0.11:8080 max_fails=1 fail_timeout=1s;
    server 10.20.0.12:8080 max_fails=1 fail_timeout=1s;
  }
  server {
    listen 80;
    server_name  _;
    location / {
      proxy_set_header   X-Real-IP $remote_addr;
      proxy_set_header   Host      $http_host;
      proxy_pass         http://app;
    }
  }
}
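With no balancing directive in the upstream block, NGINX defaults to round-robin, so successive requests to http://app alternate between the pool members (the max_fails=1 fail_timeout=1s settings just mark a backend unavailable for a second after a single failure). A conceptual sketch, not NGINX's actual implementation:

```python
from itertools import cycle

# Round-robin over the upstream pool declared in nginx.conf:
# each request goes to the next backend in turn.
backends = cycle(["10.20.0.11:8080", "10.20.0.12:8080"])
first_four = [next(backends) for _ in range(4)]
print(first_four)
# ['10.20.0.11:8080', '10.20.0.12:8080', '10.20.0.11:8080', '10.20.0.12:8080']
```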

Now let's create the reverse proxy VM that will serve http://localhost:8080 through the pool of application servers. This VM listens on 10.10.0.11/24 on its own LAN (front_lan), and on 10.20.0.101/24 on the application servers' LAN (app_lan):

  # Tier 1: an NGINX reverse proxy VM, available on http://localhost:8080
  config.vm.define "front-1" do |config|
    config.vm.hostname = "front-1"
    config.vm.network "private_network", ip: "10.10.0.11", virtualbox__intnet: "front_lan"
    config.vm.network "private_network", ip: "10.20.0.101", virtualbox__intnet: "app_lan"
    config.vm.network "forwarded_port", guest: 80, host: 8080
    config.vm.provision :shell, :inline => "sudo yum install -q -y epel-release"
    config.vm.provision :shell, :inline => "sudo yum install -q -y nginx"
    config.vm.provision :shell, :inline => "sudo cp /vagrant/nginx.conf /etc/nginx/nginx.conf"
    config.vm.provision :shell, :inline => "sudo systemctl enable nginx"
    config.vm.provision :shell, :inline => "sudo systemctl start nginx"
  end

Start everything up (vagrant up) and navigate to http://localhost:8080. The app displays the application server's hostname, so refreshing the page confirms that load balancing across networks is working and that the application servers can reach the Redis backend.
