Chapter 7. Cloud Computing
In this chapter, we will cover the following recipes:
- Creating a virtual machine with KVM
- Managing virtual machines with virsh
- Setting up your own cloud with OpenStack
- Adding a cloud image to OpenStack
- Launching a virtual instance with OpenStack
- Installing Juju, a service orchestration framework
- Managing services with Juju
Introduction
Cloud computing has become one of the most important terms in the computing sphere. It has reduced the effort and cost required to set up and operate the overall computing infrastructure. It has helped various businesses quickly start their operations without wasting time planning their IT infrastructure, and has enabled really small teams to scale their businesses with on-demand computing power.
The term cloud is commonly used to refer to a large network of servers connected to the Internet. These servers offer a wide range of services and are available to the general public on a pay-per-use basis. Most cloud resources are available in the form of Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS). SaaS is a software system hosted in the cloud. These systems are generally maintained by large organizations; well-known examples that we commonly use are Gmail and Google Docs. End users can access these applications through their browsers: they can just sign up for the service, pay the required fees, if any, and start using it without any local setup. All data is stored in the cloud and is accessible from any location.
PaaS provides a base platform to develop and run applications in the cloud. The service provider does the hard work of building and maintaining the infrastructure and provides easy-to-use APIs that enable developers to quickly develop and deploy an application. Heroku and Google App Engine are well-known examples of PaaS offerings.
Similarly, IaaS provides access to computing infrastructure. This is the base layer of cloud computing and provides physical or virtual access to compute, storage, and network services. The service provider builds and maintains the actual infrastructure, including hardware assembly, virtualization, backups, and scaling. Examples include Amazon AWS and Google Compute Engine. Heroku is a platform service built on top of the AWS infrastructure.
These cloud services are built on top of virtualization. Virtualization is a software system that enables us to break a large physical server into multiple small virtual servers that can be used independently. One can run multiple isolated operating systems and applications on a single large hardware server. Cloud computing is a set of tools that allows the general public to utilize these virtual resources at a small cost.
Ubuntu offers a wide range of virtualization and cloud computing tools. It supports hypervisors such as KVM, Xen, and QEMU; the free and open source cloud computing platform OpenStack; the service orchestration tool Juju; and the machine provisioning tool MAAS. In this chapter, we will take a brief look at virtualization with KVM. We will install and set up our own cloud with OpenStack and deploy our applications with Juju.
Creating a virtual machine with KVM
Ubuntu Server gives you various options for your virtualization needs. You can choose from KVM, Xen, QEMU, VirtualBox, and various other proprietary and open source tools. KVM, or Kernel-based Virtual Machine, is the default hypervisor on Ubuntu. In this recipe, we will set up a virtual machine with the help of KVM. Ubuntu, being a popular cloud distribution, provides prebuilt cloud images that can be used to start virtual machines in the cloud. We will use one of these prebuilt images to build our own local virtual machine.
Getting ready
As always, you will need access to the root account or an account with sudo privileges.
How to do it…
Follow these steps to install KVM and launch a virtual machine using a cloud image:
- To get started, install the required packages:
$ sudo apt-get install kvm cloud-utils \
genisoimage bridge-utils
Tip
Before using KVM, you need to check whether your CPU supports hardware virtualization, which is required by KVM. Check CPU support with the following command:
$ kvm-ok
You should see output like this:
INFO: /dev/kvm exists
KVM acceleration can be used.
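If you need the same check inside a script, a minimal approximation could look like the following sketch. It inspects only /dev/kvm, which is one of the things kvm-ok looks at; kvm-ok itself also checks CPU flags and BIOS settings:

```python
import os

# Minimal sketch: the kernel exposes /dev/kvm only when the KVM module
# is loaded and hardware virtualization is usable. This is just the
# device-node part of what kvm-ok verifies.
def kvm_available():
    return os.path.exists("/dev/kvm")

if kvm_available():
    print("KVM acceleration can be used.")
else:
    print("KVM acceleration can NOT be used.")
```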
- Next, download the cloud images from the Ubuntu servers. I have selected the Ubuntu 14.04 Trusty image:
$ wget http://cloud-images.ubuntu.com/releases/trusty/release/ubuntu-14.04-server-cloudimg-amd64-disk1.img -O trusty.img.dist
This image is in a compressed format and needs to be converted into an uncompressed format. This is not strictly necessary, but it avoids on-demand decompression whenever the image is used. Use the following command to convert the image:
$ qemu-img convert -O qcow2 trusty.img.dist trusty.img.orig
- Create a copy-on-write image to protect your original image from modifications:
$ qemu-img create -f qcow2 -b trusty.img.orig trusty.img
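The backing-file relationship created above can be pictured as a copy-on-write mapping. The following plain-Python sketch is a conceptual analogy, not qemu code; it shows the behavior that makes trusty.img.orig safe to share:

```python
from collections import ChainMap

# Conceptual sketch: a qcow2 overlay behaves like a copy-on-write
# mapping. Reads fall through to the backing image; writes land in
# the overlay only, leaving the base file untouched.
base = {"sector0": "bootloader", "sector1": "rootfs"}  # trusty.img.orig
overlay = {}                                           # trusty.img
disk = ChainMap(overlay, base)

disk["sector1"] = "guest data"  # a guest write goes to the overlay only

print(disk["sector1"])   # the overlay wins for modified sectors
print(disk["sector0"])   # unmodified sectors fall through to the base
print(base["sector1"])   # the base image itself is never changed
```

This is why deleting trusty.img and recreating it from the same backing file restores the machine to a pristine state.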
- Now that our image is ready, we need a cloud-config disk to initialize this image and set the necessary user details. Create a new file called user-data and add the following data to it:
$ sudo vi user-data
#cloud-config
password: password
chpasswd: { expire: False }
ssh_pwauth: True
This file will set a password for the default user, ubuntu, and enable password authentication in the SSH configuration.
- Create a disk with this configuration written on it:
$ cloud-localds my-seed.img user-data
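If you generate user-data from a script rather than an editor, the only hard requirement is that #cloud-config is the very first line; otherwise cloud-init ignores the file. A minimal sketch, using the exact contents from this recipe:

```python
import os
import tempfile

# Build the user-data contents. The "#cloud-config" header must be the
# first line of the file for cloud-init to process it.
user_data = "\n".join([
    "#cloud-config",
    "password: password",
    "chpasswd: { expire: False }",
    "ssh_pwauth: True",
]) + "\n"

# Write it out (a temp directory here; in the recipe this is ./user-data).
path = os.path.join(tempfile.mkdtemp(), "user-data")
with open(path, "w") as f:
    f.write(user_data)

first_line = open(path).readline().strip()
print(first_line)  # the mandatory cloud-config header
```

The resulting file is what cloud-localds packs into my-seed.img.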
- Next, create a network bridge to be used by virtual machines. Edit /etc/network/interfaces as follows:
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
bridge_ports eth0
Note
On Ubuntu 16.04, you will need to edit files under the /etc/network/interfaces.d directory. Edit the file for eth0 or your default network interface, and create a new file for br0. All files are merged under /etc/network/interfaces.
- Restart the networking service for the changes to take effect. If you are on an SSH connection, your session will get disconnected:
$ sudo service networking restart
- Now that we have all the required data, let's start our image with KVM, as follows:
$ sudo kvm -netdev bridge,id=net0,br=br0 \
-net user -m 256 -nographic \
-hda trusty.img -hdb my-seed.img
This should start a virtual machine and route all input and output to your console. The first boot with cloud-init should take a while. Once the boot process completes, you will get a login prompt. Log in with the username ubuntu and the password specified in user-data.
- Once you get access to the shell, set a new password for the user ubuntu:
$ sudo passwd ubuntu
After that, uninstall the cloud-init tool to stop it running on the next boot:
$ sudo apt-get remove cloud-init
Your virtual machine is now ready to use. The next time you start the machine, you can skip the second disk with the cloud-init details and route the system console to VNC, as follows:
$ sudo kvm -netdev bridge,id=net0,br=br0 \
-hda trusty.img \
-m 256 -vnc 0.0.0.0:1 -daemonize
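When you script these launches, it can help to assemble the kvm argument list programmatically. The following sketch builds both the first-boot and later-boot command lines; the bridge name, memory size, and disk names are the ones used in this recipe, not kvm defaults:

```python
import shlex

# Build a kvm command line. seed attaches the cloud-init disk for the
# first boot; vnc_display switches later boots to headless VNC mode.
def kvm_cmd(disk, seed=None, mem=256, vnc_display=None):
    args = ["kvm", "-netdev", "bridge,id=net0,br=br0",
            "-m", str(mem), "-hda", disk]
    if seed is not None:           # first boot: attach the seed disk
        args += ["-hdb", seed, "-nographic"]
    if vnc_display is not None:    # later boots: console on VNC
        args += ["-vnc", f"0.0.0.0:{vnc_display}", "-daemonize"]
    return shlex.join(args)

first_boot = kvm_cmd("trusty.img", seed="my-seed.img")
later_boot = kvm_cmd("trusty.img", vnc_display=1)
print(first_boot)
print(later_boot)
```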
How it works…
Ubuntu provides various options to create and manage virtual machines. The previous recipe covers basic virtualization with KVM and prebuilt Ubuntu Cloud images. KVM is very similar to desktop virtualization tools such as VirtualBox and VMware. It is a kernel module that works with the QEMU emulator and uses hardware acceleration features of the host CPU to boost the performance of virtual machines. Without hardware support, the machines fall back to plain QEMU software emulation.
After installing KVM, we have used Ubuntu cloud image as our pre-installed boot disk. Cloud images are prebuilt operating system images that do not contain any user data or system configuration. These images need to be initialized before being used. Recent Ubuntu releases contain a program called cloud-init, which is used to initialize the image at first boot. The cloud-init program looks for the metadata service on the network and queries user-data once the service is found. In our case, we have used a secondary disk to pass user data and initialize the cloud image.
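In our setup, the user data arrives via cloud-init's NoCloud path: a volume labeled cidata holding user-data and meta-data files. The sketch below mimics that lookup in plain Python with illustrative paths; it is not cloud-init's actual code:

```python
import os
import tempfile

# Stand-in for the mounted seed volume (my-seed.img in the recipe).
seed = tempfile.mkdtemp()
with open(os.path.join(seed, "user-data"), "w") as f:
    f.write("#cloud-config\npassword: password\n")
with open(os.path.join(seed, "meta-data"), "w") as f:
    f.write("instance-id: ubuntu01\n")

# What the datasource conceptually does: read both files from the
# seed volume and hand their contents to the initialization stages.
def read_seed(mountpoint):
    data = {}
    for name in ("user-data", "meta-data"):
        with open(os.path.join(mountpoint, name)) as f:
            data[name] = f.read()
    return data

seed_data = read_seed(seed)
print(seed_data["meta-data"].strip())
```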
We downloaded the prebuilt image from the Ubuntu image server and converted it to an uncompressed format. Then, we created a new snapshot with the backing image set to the original prebuilt image. This protects our original image from any modifications, so it can be used to create more copies. Whenever you need to restore a machine to its original state, just delete the newly created snapshot image and recreate it. Note that you will need to go through the cloud-init process again during such restores.
This recipe uses prebuilt images, but you can also install the entire operating system on virtual machines. You will need to download the required installation medium and attach a blank hard disk to the VM. For installation, make sure you set up a VNC connection so that you can follow the installation steps.
There's more…
Ubuntu also provides the virt-manager graphical interface to create and manage KVM virtual machines from a GUI. You can install it as follows:
$ sudo apt-get install virt-manager
Alternatively, you can also install Oracle VirtualBox on Ubuntu. Download the .deb file for your Ubuntu version and install it with dpkg -i, or install it from the package manager as follows:
- Add the Oracle repository to your installation sources. Make sure to substitute xenial with the correct Ubuntu version:
$ sudo vi /etc/apt/sources.list
deb http://download.virtualbox.org/virtualbox/debian xenial contrib
- Add the Oracle public keys:
$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
- Install VirtualBox:
$ sudo apt-get update && sudo apt-get install virtualbox-5.0
See also
- VirtualBox downloads: https://www.virtualbox.org/wiki/Linux_Downloads
- Ubuntu Cloud images on a local hypervisor: https://help.ubuntu.com/community/UEC/Images#line-105
- The Ubuntu community page for KVM: https://help.ubuntu.com/community/KVM
Managing virtual machines with virsh
In the previous recipe, we saw how to start and manage virtual machines with KVM. This recipe covers the use of Virsh and virt-install to create and manage virtual machines. The libvirt Linux library exposes various APIs to manage hypervisors and virtual machines. Virsh is a command-line tool that provides an interface to libvirt APIs.
To create a new machine, Virsh needs the machine definition in XML format. virt-install is a Python script to easily create a new virtual machine without manipulating bits of XML. It provides an easy-to-use interface to define a machine, create an XML definition for it, and then load it in Virsh to start it.
In this recipe, we will create a new virtual machine with virt-install and see how it can be managed with various Virsh commands.
Getting ready
You will need access to the root account or an account with sudo privileges.
- Install the required packages, as follows:
$ sudo apt-get update
$ sudo apt-get install -y qemu-kvm libvirt-bin virtinst
- Install packages to create the cloud init disk:
$ sudo apt-get install genisoimage
- Add your user to the libvirtd group and update group membership for the current session:
$ sudo adduser ubuntu libvirtd
$ newgrp libvirtd
How to do it…
We need to create a new virtual machine. This can be done either with an XML definition of the machine or with a tool called virt-install. We will again use the prebuilt Ubuntu Cloud images and initialize them with a secondary disk:
- First, download the Ubuntu Cloud image and prepare it for use:
$ mkdir ubuntuvm && cd ubuntuvm
$ wget -O trusty.img.dist \
http://cloud-images.ubuntu.com/releases/trusty/release/ubuntu-14.04-server-cloudimg-amd64-disk1.img
$ qemu-img convert -O qcow2 trusty.img.dist trusty.img.orig
$ qemu-img create -f qcow2 -b trusty.img.orig trusty.img
- Create the initialization disk to initialize your cloud image:
$ sudo vi user-data
#cloud-config
password: password
chpasswd: { expire: False }
ssh_pwauth: True

$ sudo vi meta-data
instance-id: ubuntu01
local-hostname: ubuntu

$ genisoimage -output cidata.iso -volid cidata -joliet \
-rock user-data meta-data
- Now that we have all the necessary data, let's create a new machine, as follows:
$ virt-install --import --name ubuntu01 \
--ram 256 --vcpus 1 --disk trusty.img \
--disk cidata.iso,device=cdrom \
--network bridge=virbr0 \
--graphics vnc,listen=0.0.0.0 --noautoconsole -v
This should create a virtual machine and start it. A display should be opened on the local VNC port 5900. You can access the VNC from other systems available on the local network with a GUI client.
Tip
You can set up local port forwarding and access VNC from your local system as follows:
$ ssh kvm_hostname_or_ip -L 5900:127.0.0.1:5900
$ vncviewer localhost:5900
- Once the cloud-init process completes, you can log in with the default user, ubuntu, and the password set in user-data.
- Now that the machine is created and running, we can use the virsh command to manage this machine. You may need to connect virsh to qemu before using it:
$ virsh connect qemu:///system
- Get a list of running machines with virsh list. The --all parameter will show all available machines, whether they are running or stopped:
$ virsh list --all
# or virsh --connect qemu:///system list
- You can open a console to a running machine with virsh as follows. This should give you a login prompt inside the virtual machine:
$ virsh console ubuntu01
To close the console, use the Ctrl + ] key combination.
- Once you are done with the machine, you can shut it down with virsh shutdown. This will trigger a shutdown process inside the virtual machine:
$ virsh shutdown ubuntu01
You can also stop the machine without a proper shutdown, as follows:
$ virsh destroy ubuntu01
- To completely remove the machine, use virsh undefine. With this command, the machine definition will be deleted and the machine cannot be used again:
$ virsh undefine ubuntu01
How it works…
The virt-install and virsh commands together give you an easy-to-use virtualization environment. Additionally, the system does not need to support hardware virtualization. When it is available, the virtual machines will use KVM and hardware acceleration; when KVM is not supported, QEMU will be used to emulate virtual hardware.
With virt-install, we have easily created a KVM virtual machine. This command abstracts the XML definition required by libvirt. With a list of various parameters, we can easily define all the components with their respective configurations. You can get a full list of virt-install parameters with the --help flag.
Tip
The virtinst package, which installs virt-install, also contains some more commands, such as virt-clone, virt-admin, and virt-xml. Use tab completion in your bash shell to get a list of all virt-* commands.
Once the machine is defined and running, it can be managed with virsh subcommands. Virsh provides tons of subcommands to manage virtual machines, or domains as they are called by libvirt. You can start or stop machines, pause and resume them, or stop them entirely. You can even modify the machine configuration to add or remove devices as needed, or create a clone of an existing machine. To get a list of all machine (domain) management commands, use virsh help domain.
Once you have your first virtual machine, it becomes easier to create new machines from its XML definition. You can dump the definition with virsh dumpxml machine, edit it as required, and then create a new machine from the edited file with virsh create configuration.xml.
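The dump-edit-create cycle can be scripted. A sketch, with a here-document standing in for real virsh dumpxml output (an actual dump is much larger), that renames the machine and drops the UUID so libvirt assigns a fresh one; on a real host you would also change or remove the MAC address:

```shell
#!/bin/sh
# Stand-in for: virsh dumpxml ubuntu01 > machine.xml
cat > machine.xml <<'EOF'
<domain type='kvm'>
  <name>ubuntu01</name>
  <uuid>11111111-2222-3333-4444-555555555555</uuid>
  <memory unit='KiB'>262144</memory>
</domain>
EOF

# Rename the clone and delete the <uuid> line so libvirt generates a new one.
sed -e 's|<name>ubuntu01</name>|<name>ubuntu02</name>|' \
    -e '/<uuid>/d' machine.xml > ubuntu02.xml

# Then create the clone from the edited definition:
#   virsh create ubuntu02.xml
```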
There are many more options available for the virsh and virt-install commands; check their respective manual pages for more details.
There's more…
In the previous example, we used cloud images to quickly start a virtual machine. You do not need to use cloud images, though; you can install the operating system yourself from the respective installation media.
Download the installation media and then use the following command to start the installation. Make sure you point the -c parameter at the location of the downloaded ISO file:
$ sudo virt-install -n ubuntu -r 1024 \
  --disk path=/var/lib/libvirt/images/ubuntu01.img,bus=virtio,size=4 \
  -c ubuntu-16.04-server-i386.iso \
  --network network=default,model=virtio \
  --graphics vnc,listen=0.0.0.0 --noautoconsole -v
The command will wait for the installation to complete. You can access the GUI installation using a VNC client.
Forward your local port to access VNC on the KVM host. Make sure you replace 5900 with the respective port reported by virsh vncdisplay ubuntu:
$ ssh kvm_hostname_or_ip -L 5900:127.0.0.1:5900
Now you can connect to VNC at localhost:5900.
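virsh vncdisplay prints a display number such as :0 or :1, and the TCP port is 5900 plus that number. A small helper (hypothetical, the domain name is illustrative) to compute the port before setting up the tunnel:

```shell
#!/bin/sh
# Convert a VNC display like ":1" into its TCP port (5900 + display number).
vnc_port() {
    display=${1#:}          # strip the leading colon
    echo $((5900 + display))
}

vnc_port :0    # prints 5900
vnc_port :1    # prints 5901

# Usage against a real KVM host (illustrative):
#   port=$(vnc_port "$(virsh vncdisplay ubuntu01)")
#   ssh kvm_hostname_or_ip -L "$port:127.0.0.1:$port"
```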
Easy cloud images with uvtool
Ubuntu provides another super easy tool named uvtool. This tool focuses on the creation of virtual machines out of Ubuntu Cloud images. It synchronizes cloud images from Ubuntu servers to your local machine. Later, these images can be used to launch virtual machines in minutes. You can install and use uvtool with the following commands:
$ sudo apt-get install uvtool
Download the Xenial image from the cloud images:
$ uvt-simplestreams-libvirt sync release=xenial arch=amd64
Start a virtual machine:
$ uvt-kvm create virtsys01
Finally, get the IP of a running system:
$ uvt-kvm ip virtsys01
Check out the manual page with the man uvtool
command and visit the official uvtool page at https://help.ubuntu.com/lts/serverguide/cloud-images-and-uvtool.html for more details.
See also
- Check out the manual pages for virt-install using
$ man virt-install
- Check out the manual pages for virsh using
$ man virsh
- The official Libvirt site: http://libvirt.org/
- The Libvirt documentation on Ubuntu Server guide: https://help.ubuntu.com/lts/serverguide/libvirt.html
Setting up your own cloud with OpenStack
We have already seen how to create virtual machines with KVM and Qemu, and how to manage them with tools such as virsh and virt-manager. This approach works when you only need a handful of machines on a few hosts. To operate on a larger scale, you need a tool to manage host machines, VM configurations, images, network, and storage, and to monitor the entire environment. OpenStack is an open source initiative to create and manage a large pool of virtual machines (or containers). It is a collection of various tools to deploy IaaS clouds. The official site defines OpenStack as an operating system to control a large pool of compute, network, and storage resources, all managed through a dashboard.
OpenStack was primarily developed and open-sourced by Rackspace, a leading cloud service provider. With its thirteenth release, Mitaka, OpenStack provides tons of tools to manage various components of your infrastructure. A few important components of OpenStack are as follows:
- Nova: Compute controller
- Neutron: OpenStack networking
- Keystone: Identity service
- Glance: OpenStack image service
- Horizon: OpenStack dashboard
- Cinder: Block storage service
- Swift: Object store
- Heat: Orchestration program
OpenStack in itself is quite a big deployment. You need to decide the required components, plan their deployment, and install and configure them to work in sync. The installation itself can be a good topic for a separate book. However, the OpenStack community has developed a set of scripts known as DevStack to support development with faster deployments. In this recipe, we will use the DevStack script to quickly install OpenStack and get an overview of its workings. The official OpenStack documentation provides detailed documents for the Ubuntu based installation and configuration of various components. If you are planning a serious production environment, you should read it thoroughly.
Getting ready
You will need a non-root account with sudo privileges. The default account named ubuntu should work.
The system should have at least two CPU cores with at least 4 GB of RAM and 60 GB of disk space. A static IP address is preferred. If possible, use the minimal installation of Ubuntu.
Tip
If you are performing a fresh installation of Ubuntu Server, press F4 on the first screen to get installation options, and choose Install Minimal System. If you are installing inside a virtual machine, choose Install Minimal Virtual Machine. You may need to go to the installation menu with the Esc key before using F4.
DevStack scripts are available on GitHub. Clone the repository or download and extract it to your installation server. Use the following command to clone:
$ git clone https://git.openstack.org/openstack-dev/devstack \
  -b stable/mitaka --depth 1
$ cd devstack
You can choose to get the latest release by using the master branch; just skip the -b stable/mitaka option from the previous command.
How to do it…
Once you obtain the DevStack source, it's as easy as executing an installation script. Before that, we will create a minimal configuration file for passwords and basic network configuration:
- Copy the sample configuration to the root of the devstack directory:
$ cp samples/local.conf .
- Edit local.conf and update the passwords:
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=$ADMIN_PASSWORD
- Add a basic network configuration as follows. Update the IP address ranges to match your local network, and set FLAT_INTERFACE to your primary Ethernet interface:
FLOATING_RANGE=192.168.1.224/27
FIXED_RANGE=10.11.12.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth0
Save the changes to the configuration file.
- Now, start the installation with the following command. As the Mitaka stable branch has not been tested with Ubuntu Xenial (16.04), we need to set the FORCE variable. If you are using the master branch of DevStack or an older version of Ubuntu, you can start the installation with plain ./stack.sh:
$ FORCE=yes ./stack.sh
The installation should take some time to complete, depending mostly on your network speed. Once the installation completes, the script outputs the dashboard URL, the keystone API endpoint, and the admin password.
- Now, access the OpenStack dashboard and log in with the given username and password. Logging in as admin gives you the administrative interface.
From the dashboard, you can deploy new virtual instances, set up different cloud images, and configure instance flavors.
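Taken together, the local.conf assembled in steps 2 and 3 might look like this (the addresses and passwords are illustrative; adjust them to your network):

```ini
[[local|localrc]]
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=$ADMIN_PASSWORD

FLOATING_RANGE=192.168.1.224/27
FIXED_RANGE=10.11.12.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth0
```

The [[local|localrc]] header is DevStack's marker for variables destined for its localrc configuration; stack.sh reads this file from the devstack directory root.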
How it works…
We used DevStack, an unattended installation script, to install and configure a basic OpenStack deployment with the bare minimum components for deploying virtual machines. By default, DevStack installs the identity service, Nova networking, the compute service, and the image service. The installation process creates two user accounts, namely admin and demo. The admin account gives you administrative access to the OpenStack installation, and the demo account gives you the end-user interface. The DevStack installation also adds a CirrOS image to the image store. This basic, lightweight Linux distribution is a good candidate for testing an OpenStack installation.
The default installation creates a basic flat network. You can also configure DevStack to enable Neutron support, by setting the required options in the configuration. Check out the DevStack documentation for more details.
There's more…
Ubuntu provides its own easy-to-use OpenStack installer. It provides options to install OpenStack, along with LXD support and OpenStack Autopilot, an enterprise offering by Canonical. You can choose to install on your local machine (all-in-one installation) or choose a Metal as a Service (MAAS) setup for a multinode deployment. The single-machine setup will install OpenStack on multiple LXC containers, deployed and managed through Juju. You will need at least 12 GB of main memory and an 8-CPU server. Use the following commands to get started with the Ubuntu OpenStack installer:
$ sudo apt-get update
$ sudo apt-get install conjure-up
$ conjure-up openstack
While DevStack installs a development-focused minimal installation of OpenStack, various other scripts support the automation of the OpenStack installation process. A notable project is OpenStack Ansible. This is an official OpenStack project and provides production-grade deployments. A quick GitHub search should give you a lot more options.
See also
- A step-by-step detailed guide to installing various OpenStack components on Ubuntu server: http://docs.openstack.org/mitaka/install-guide-ubuntu/
- DevStack Neutron configuration: http://docs.openstack.org/developer/devstack/guides/neutron.html
- OpenStack Ansible: https://github.com/openstack/openstack-ansible
- A list of OpenStack resources: https://github.com/ramitsurana/awesome-openstack
- Ubuntu MaaS: http://www.ubuntu.com/cloud/maas
- Ubuntu Juju: http://www.ubuntu.com/cloud/juju
- Read more about LXD and LXC in Chapter 8, Working with Containers
Getting ready
You will need a non-root account with sudo
privileges. The default account named ubuntu
should work.
The system should have at least two CPU cores with at least 4 GB of RAM and 60 GB of disk space. A static IP address is preferred. If possible, use the minimal installation of Ubuntu.
Tip
If you are performing a fresh installation of Ubuntu Server, press F4 on the first screen to get installation options, and choose Install Minimal System. If you are installing inside a virtual machine, choose Install Minimal Virtual Machine. You may need to go to the installation menu with the Esc key before using F4.
DevStack scripts are available on GitHub. Clone the repository or download and extract it to your installation server. Use the following command to clone:
$ git clone https://git.openstack.org/openstack-dev/devstack \ -b stable/mitaka --depth 1 $ cd devstack
You can choose to get the latest release by selecting the master branch. Just skip the -b stable/mitaka
option from the previous command.
How to do it…
Once you obtain the DevStack source, it's as easy as executing an installation script. Before that, we will create a minimal configuration file for passwords and basic network configuration:
- Copy the sample configuration to the root of the
devstack
directory:$ cp samples/local.conf
- Edit
local.conf
and update passwords:ADMIN_PASSWORD=password DATABASE_PASSWORD=password RABBIT_PASSWORD=password SERVICE_PASSWORD=$ADMIN_PASSWORD
- Add basic network configuration as follows. Update IP address range as per your local network configuration and set
FLAT_INTERFACE
to your primary Ethernet interface:FLOATING_RANGE=192.168.1.224/27 FIXED_RANGE=10.11.12.0/24 FIXED_NETWORK_SIZE=256 FLAT_INTERFACE=eth0
Save the changes to the configuration file.
- Now, start the installation with the following command. As the Mitaka stable branch has not been tested with Ubuntu Xenial (16.04), we need to set the FORCE variable. If you are using the master branch of DevStack or an older version of Ubuntu, you can start the installation with plain ./stack.sh:
$ FORCE=yes ./stack.sh
The installation should take some time to complete, mostly depending on your network speed. Once the installation completes, the script should output the dashboard URL, keystone API endpoint, and the admin password:
- Now, access the OpenStack dashboard and log in with the given username and password. The admin account will give you an admin interface. The login screen looks like this:
- Once you log in, your admin interface should look something like this:
Now, from this screen, you can deploy new virtual instances, set up different cloud images, and configure instance flavors.
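For reference, after steps 1 to 3 the complete minimal local.conf should look roughly like this. The [[local|localrc]] header comes from DevStack's sample file; the passwords and address ranges are placeholders to adapt to your environment:

```
[[local|localrc]]
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=$ADMIN_PASSWORD

FLOATING_RANGE=192.168.1.224/27
FIXED_RANGE=10.11.12.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth0
```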
How it works…
We used DevStack, an unattended installation script, to install and configure a basic OpenStack deployment. This installs OpenStack with the bare minimum components needed to deploy virtual machines. By default, DevStack installs the identity service, Nova networking, the compute service, and the image service. The installation process creates two user accounts, namely admin and demo. The admin account gives you administrative access to the OpenStack installation, while the demo account gives you the end user interface. The DevStack installation also adds a CirrOS image to the image store. This is a basic, lightweight Linux distribution and a good candidate for testing your OpenStack installation.
The default installation creates a basic flat network. You can also configure DevStack to enable Neutron support, by setting the required options in the configuration. Check out the DevStack documentation for more details.
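As a sketch of what enabling Neutron involves, the following local.conf additions disable nova-network and enable the core Neutron services. The service names follow the DevStack Neutron guide and should be verified against the documentation for your release:

```
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
```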
There's more…
Ubuntu provides its own easy-to-use OpenStack installer. It provides options to install OpenStack, along with LXD support and OpenStack Autopilot, an enterprise offering by Canonical. You can choose to install on your local machine (all-in-one installation) or choose a Metal as a Service (MAAS) setup for a multinode deployment. The single-machine setup will install OpenStack on multiple LXC containers, deployed and managed through Juju. You will need at least 12 GB of main memory and a server with eight CPU cores. Use the following commands to get started with the Ubuntu OpenStack installer:
$ sudo apt-get update
$ sudo apt-get install conjure-up
$ conjure-up openstack
While DevStack installs a development-focused minimal installation of OpenStack, various other scripts support the automation of the OpenStack installation process. A notable project is OpenStack Ansible. This is an official OpenStack project and provides production-grade deployments. A quick GitHub search should give you a lot more options.
See also
- A step-by-step detailed guide to installing various OpenStack components on Ubuntu server: http://docs.openstack.org/mitaka/install-guide-ubuntu/
- DevStack Neutron configuration: http://docs.openstack.org/developer/devstack/guides/neutron.html
- OpenStack Ansible: https://github.com/openstack/openstack-ansible
- A list of OpenStack resources: https://github.com/ramitsurana/awesome-openstack
- Ubuntu MaaS: http://www.ubuntu.com/cloud/maas
- Ubuntu Juju: http://www.ubuntu.com/cloud/juju
- Read more about LXD and LXC in Chapter 8, Working with Containers
Adding a cloud image to OpenStack
In the previous recipe, we installed and configured OpenStack. Now, to start using the service, we need to upload virtual machine images. The OpenStack installation uploads a test image named Cirros. This is a small Linux distribution designed to be used as a test image in the cloud. We will upload prebuilt cloud images available from Ubuntu.
Getting ready
Make sure you have installed the OpenStack environment and you can access the OpenStack dashboard with valid credentials. It is not necessary to have an admin account to create and upload images.
Select the cloud image of your choice and get its download URL. Here, we will use the Trusty Ubuntu Server image. The selected image format is QCOW2, though OpenStack supports various other image formats. The following is the URL for the selected image:
https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
How to do it…
The OpenStack dashboard provides a separate section for image management. You can see the images that are already available and add or remove your own images. Follow these steps to create your own image:
- Log in to your OpenStack dashboard. On successful login, you should get an Overview page for your account.
- Now, from the left-hand side Project menu, under the Compute submenu, click on Images:
This should show you a list of all publicly available images—something like this:
- Click on the Create Image button to add a new image. This should open a popup box with various details. Here, you can choose to add an image URL or enter an image path if you have downloaded the image to your local machine.
- Fill in the name and other required details. Under Image Source, select the image location, and in the next box, Image Location, enter the URL for the Ubuntu Cloud image.
- Under Format, select the image format of your selected image. In this case, it's QCOW2.
- Enter amd64 under Architecture. Make sure this matches your selected image.
- Enter the minimum disk and RAM sizes. As we have selected an Ubuntu image, the minimum disk size should be 5 GB and the minimum RAM 256 MB. These values affect which instance flavors can be selected when creating a new instance.
- Finally, click on the Create Image button to save the details and add the image to OpenStack. This will download the image from the source URL and save it in the image repository. The resulting image will be listed under the Project tab, as follows:
Now the image is ready and can be used to launch new cloud instances.
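If you download the image locally before uploading it, you can sanity-check that it really is QCOW2 before filling in the Format field. QCOW2 files begin with the magic bytes QFI followed by 0xFB, so a small shell helper (an illustration, not part of OpenStack) can verify the header:

```shell
# Succeeds if the file begins with the QCOW2 magic bytes ("QFI" + 0xFB).
is_qcow2() {
    [ "$(head -c 3 "$1" 2>/dev/null)" = "QFI" ]
}

# Example usage with a hypothetical local file name:
# is_qcow2 trusty-server-cloudimg-amd64-disk1.img && echo "QCOW2 image"
```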
How it works…
OpenStack is a cloud virtualization platform and needs operating system images to launch virtual machines in the cloud. The Glance OpenStack imaging service provides the image-management service. It supports various image types, including QEMU formats, raw disk files, ISO images, and images from other virtualization platforms, as well as Docker images. Like everything else in OpenStack, image management works through the APIs provided by Glance.
OpenStack, being a cloud platform, is expected to have ready-to-use images that can be used to quickly start a virtual instance. It is possible to upload the operating system installation disk and install the OS to a virtual instance, but that would be a waste of resources. Instead, it is preferable to have prebuilt cloud images. Various popular operating systems provide their respective cloud images, which can be imported to cloud systems. In the previous example, we used the Ubuntu Cloud image for the Ubuntu Trusty release.
We imported the image by specifying its source URI. Local image files can also be uploaded by selecting the image file as an image source. You can also build your own images and upload them to the image store to be used in the cloud. Along with the image source, we need to provide a few more parameters, which include the type of the image being uploaded and the minimum resource requirements of that image. Once the image has been uploaded, it can be used to launch a new instance in the cloud. Also, the image can be marked as public so that it is accessible to all OpenStack users. You will need specific rights for your OpenStack account to create public images.
There's more…
OpenStack images can also be managed from the command line with the glance client. To access the respective APIs from the command line, you need to authenticate with the Glance server. Use the following steps to use glance from the command line:
- First, add the authentication parameters to the environment:
export OS_USERNAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.196.69.158/identity
export OS_TENANT_ID=8fe52bb13ca44981aa15d9b62e9133f4
Tip
DevStack makes things even easier by providing a script, openrc. It is located in the root directory of DevStack and can be used as follows:
$ source openrc demo    # source openrc <username>
This sets all of the required variables in one step, without multiple export commands.
- Now, use the following command to obtain the image list for the specified user:
$ glance image-list
You can get a list of available command-line options with glance help.
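With the environment variables set (or openrc sourced), the same upload performed through the dashboard can also be done from the command line. The flags below follow the Mitaka-era glance client; confirm the exact options on your installation with glance help image-create:

```
$ glance image-create --name "ubuntu-trusty" \
    --disk-format qcow2 --container-format bare \
    --min-disk 5 --min-ram 256 \
    --file trusty-server-cloudimg-amd64-disk1.img --progress
```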
See also
- Read more about OpenStack image management: http://docs.openstack.org/image-guide/
- Command-line image management: http://docs.openstack.org/user-guide/common/cli_manage_images.html
- Dashboard image management: http://docs.openstack.org/user-guide/dashboard_manage_images.html
- Glance documentation: http://docs.openstack.org/developer/glance/
Launching a virtual instance with OpenStack
Now that we have OpenStack installed and have set our desired operating system image, we are ready to launch our first instance in a self-hosted cloud.
Getting ready
You will need credentials to access the OpenStack dashboard.
Uploading your own image is not necessary; you can use the default Cirros image to launch the test instance.
Log in to the OpenStack dashboard and set the SSH key pair in the Access & Security tab available under the Projects menu. Here, you can generate a new key pair or import your existing public key.
Note
If you generate a new key pair, a file with the .pem extension will be downloaded to your local system. To use this key with PuTTY, you need to use PuTTYgen to extract the public and private keys.
How to do it…
OpenStack instances are the same virtual machines that we launch from the command line or with desktop tools. OpenStack gives you a web interface to launch your virtual machines from. Follow these steps to create and start a new instance:
- Select the Instance option under the Projects menu and then click on the Launch Instance button on the right-hand side. This should open a modal box with various options, which will look something like this:
- Now, start filling in the necessary details. All fields that are marked with * are required fields. Let's start by naming our instance. Enter the name in the Instance Name field.
- Set the value of Count to the number of instances you want to launch. We will leave it at the default value of 1.
- Next, click on the Source tab. Here, we need to configure the source image for our instance. Set Select Boot Source to Image and select No for Create New Volume. Then, from the Available Images section, search for the desired image and click on the button with the + sign to select it. The list should contain our recently uploaded image. The final screen should look something like this:
- Next, on the Flavor tab, we need to select the desired resources for our instance. Select the desired flavor by clicking on the + button. Make sure that the selected row does not contain any warning signs.
- Now, from the Key Pair tab, select the SSH key pair that we just created. This is required to log in to your instance.
- Finally, click on the Launch Instance button from the bottom of the modal box. A new instance should be created and listed under the instances list. It will take some time to start; wait for the Status column to show Active:
- You are now ready to access your virtual instance. Log in to your host console and try to ping the IP address of your instance. Then, open an SSH session with the following command:
$ ssh -i your_key ubuntu@instance_ip
This should give you a shell inside your new cloud instance. Try to ping an external server, such as an OpenDNS server, from within an instance to ensure connectivity.
To make this instance available on your local network, you will need to assign a floating IP address to it. Click on the drop-down arrow from the Actions column and select Associate Floating IP. This should add one more IP address to your instance and make it available on your local network.
How it works…
OpenStack instances are the same as the virtual machines that we build and operate with common virtualization tools such as VirtualBox and Qemu. OpenStack provides a central console for deploying and managing thousands of such machines on multiple hosts. Under the hood, OpenStack uses the same virtualization tools as the others. The preferred hypervisor is KVM, and if hardware acceleration is not available, Qemu emulation is used. OpenStack supports various other hypervisors, including VMware, Xen, Hyper-V, and Docker. In addition, a lightervisor, LXD, is on its way to a stable release. Beyond virtualization, OpenStack adds various other features, such as image management, block storage, object storage, and various network configurations.
In the previous example, we set various parameters before launching a new instance; these include the instance name, resource constraints, operating system image, and login credentials. All these parameters are passed to the underlying hypervisor to create and start the new virtual machine. A few other options that we have not used are volumes and networks. As we have installed a very basic OpenStack setup, advanced network configurations are not available for use. You can update your DevStack configuration to install the OpenStack networking component, Neutron.
Volumes, on the other hand, are available and can be used to obtain disk images of the desired size and format. You can also attach multiple volumes to a single machine, providing extended storage capacity. Volumes can be created separately and do not depend on the instance. You can reuse an existing volume with a new instance, and all data stored on it will be available to the new instance.
Here, we have used a cloud image to start a new instance. You can also choose a previously stored instance snapshot, create a new volume, or use a volume snapshot. The volume can be a permanent volume, which has its life cycle separate from the instance, or an ephemeral volume, which gets deleted along with the instance. Volumes can also be attached at instance runtime or even removed from an instance, provided they are not a boot source.
Other options include configuration and metadata. The configuration tab provides an option to add initialization scripts that are executed at first boot. This is very similar to cloud-init data. The following is a short example of a cloud-init script:
#cloud-config
package_update: true
package_upgrade: true
password: password
chpasswd: { expire: False }
ssh_pwauth: True
ssh_authorized_keys:
  - your-ssh-public-key-contents
This script will set a password for the default user (ubuntu in the case of Ubuntu images), enable password logins, add an SSH key to the authorized keys, and update and upgrade packages.
The metadata section adds arbitrary data to instances in the form of key-value pairs. This data can be used to identify an instance from a group and automate certain tasks.
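For illustration, metadata can also be set from the command line with the nova client; the instance name and keys here are hypothetical:

```shell
# Sketch: attach key-value metadata to a running instance.
# "web01" and the role/env keys are hypothetical examples.
META_CMD="nova meta web01 set role=webserver env=staging"

# Only run against a live OpenStack; otherwise just print the command.
if command -v nova >/dev/null 2>&1; then
  $META_CMD
else
  echo "$META_CMD"
fi
```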
Once an instance has been started, you have various management options from the Actions menu available on the instance list. From this menu, you can create instance snapshots; start, stop, or pause instances; edit security groups; get the VNC console; and so on.
There's more…
Similar to the glance command-line client, a compute client is available as well, named after the compute component. The nova command can be used to create and manage cloud instances from the command line. You can get detailed parameters and options with the nova help command or, for help with a specific subcommand, nova help <subcommand>.
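The dashboard steps above map roughly onto the nova client as follows; the flavor, image, and key-pair names are placeholders:

```shell
# Sketch: launch an instance from the command line with nova.
# Flavor, image, key-pair, and instance names are placeholders.
BOOT_CMD="nova boot --flavor m1.small --image ubuntu-16.04 \
  --key-name my-key my-first-instance"

# Only run against a live OpenStack; otherwise just print the command.
if command -v nova >/dev/null 2>&1; then
  $BOOT_CMD
  nova list   # wait for the instance status to become ACTIVE
else
  echo "$BOOT_CMD"
fi
```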
See also
- The cloud-init official documentation: https://cloudinit.readthedocs.io/en/latest/
- More on cloud-init: https://help.ubuntu.com/community/CloudInit
- OpenStack instance guide: http://docs.openstack.org/user-guide/dashboard_launch_instances.html
- Command-line cheat sheet: http://docs.openstack.org/user-guide/cli_cheat_sheet.html#compute-nova
Installing Juju a service orchestration framework
Up to now in this chapter, we have learned about virtualization and OpenStack for deploying and managing virtual servers. Now, it's time to look at Juju, a service-modeling tool for Ubuntu. Connect it to any cloud service, model your application, and press deploy. Juju takes care of lower-level configuration, deployments, and scaling, and even monitors your services.
Juju is an open source tool that offers a GUI and command-line interface for modeling your service. Applications are generally deployed as collections of multiple services. For example, to deploy WordPress, you need a web server, a database system, and perhaps a load balancer. Service modeling refers to the relations between these services. Services are defined with the help of charms, which are collections of configurations and deployment instructions, such as dependencies and resource requirements. The Juju store provides more than 300 predefined and ready-to-use charms.
Once you model your application with the required charms and their relationships, these models can be stored as a bundle. A bundle represents a set of charms, their configurations, and their relationships with each other. The entire bundle can be deployed to a cloud or local system with a single command. Also, similar to charms, bundles can be shared and are available on the Juju store.
This recipe covers the installation of Juju on Ubuntu Server. With the release of Xenial, the latest Ubuntu release, Canonical has also updated the Juju platform to version 2.0.
Getting ready
You need access to the root account or an account with sudo privileges.
Make sure you have the SSH keys generated with your user account. You can generate a new key pair with the following command:
$ ssh-keygen -t rsa -b 2048
How to do it…
Juju 2.0 is available in the Ubuntu Xenial repository, so installation is quite easy. Follow these steps to install Juju, along with LXD for local deployments:
- Install Juju, along with the LXD and ZFSUtils packages. On Ubuntu 16, LXD should already be installed:
$ sudo apt-get update $ sudo apt-get install juju-2.0 lxd zfsutils-linux
- The LXD installation creates a new group, lxd, and adds the current user to it. Update your group membership with newgrp so that you don't need to log out and log back in:
$ newgrp lxd
- Now, we need to initialize LXD before using it with Juju. We will create a new ZFS pool for LXD and configure a local lxd bridge for container networking with NAT enabled:
$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxdpool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 20
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
LXD has been successfully configured.
- Now that LXD has been configured, we can bootstrap Juju and create a controller node. The following command will bootstrap Juju with LXD for local deployments:
$ juju bootstrap juju-controller lxd
This command should take some time to finish, as it needs to fetch the container image and install the Juju tools inside the container.
- Once the bootstrap process completes, you can check the list of controllers, as follows:
$ juju list-controllers
CONTROLLER               MODEL    USER         SERVER
local.juju-controller*   default  admin@local  10.155.16.114:17070
- You can also check the LXD container created by Juju using the lxc list command:
$ lxc list
- From Juju 2.0 onwards, every controller will install the Juju GUI by default. This is a web application to manage the controller and its models. The following command will give you the URL of the Juju GUI:
$ juju gui
...
https://10.201.217.65:17070/gui/2331544b-1e16-49ba-8ac7-2f13ea147497/
...
- You may need to use port forwarding to access the web console. Use the following command to quickly set up iptables forwarding:
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 17070 -j DNAT \
  --to-destination 10.201.217.65:17070
- You will also need a username and password to log in to the GUI. To get these details, use the following command:
$ juju show-controller --show-passwords juju-controller
...
accounts:
  admin@local:
    user: admin@local
    password: 8fcb8aca6e22728c6ac59b7cba322f39
When you log in to the web console, it should look something like this:
Now, you are ready to use Juju and deploy your applications either with a command line or from the web console.
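As a hedged sketch of a first deployment, a simple two-service WordPress application could be composed from the command line like this; the charm names are from the Juju store, and the commands should only be run against a working controller:

```shell
# Sketch: model a simple two-service application with Juju.
# Assumes a bootstrapped controller, as set up in this recipe.
deploy_wordpress() {
  juju deploy wordpress               # fetch and deploy the charm
  juju deploy mysql                   # wordpress needs a database
  juju add-relation wordpress mysql   # wire the services together
  juju expose wordpress               # open it to outside traffic
  juju status                         # watch units reach "started"
}

# Only run against a live controller.
if command -v juju >/dev/null 2>&1; then
  deploy_wordpress
fi
```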
How it works…
Here, we installed and configured the Juju framework with LXD as a local deployment backend. Juju is a service-modeling framework that makes it easy to compose and deploy an entire application with just a few commands. The bootstrap process creates a controller node on a selected cloud; in our case, it is LXD. The command provides various optional arguments to configure the controller machine, as well as to pass credentials to the bootstrap process. Check out the bootstrap help menu with the juju bootstrap --help command.
We have used LXD as a local provider, which does not need any special credentials to connect and create new nodes. When using public cloud providers or your own cloud, you will need to provide your username and password or access keys. This can be done with the help of the juju add-credential <cloud> command. All added credentials are stored in a plaintext file, ~/.local/share/juju/credentials.yaml. You can view a list of available cloud credentials with the juju list-credentials command.
The controller node is a special machine created by Juju to host and manage data and models related to an environment. The controller node hosts two models, namely admin and default, and the admin model runs the Juju API server and database system. Juju can use multiple cloud systems simultaneously, and each cloud can have its own controller node.
From version 2.0 onwards, every controller node installs the Juju GUI application by default. The Juju GUI is a web application that provides an easy-to-use visual interface to create and manage various Juju entities. With its simple interface, you can easily create new models, import charms, and set up relations between them. The GUI is still available as a separate charm and can be deployed separately to any machine in a Juju environment. The command-line tools are more than enough to operate Juju, and it is possible to skip the installation of the GUI component using the --no-gui option with the bootstrap command.
There's more…
In the previous example, we used LXD as a local deployment backend for Juju. With LXD, Juju can quickly create new containers to deploy applications. Along with LXD, Juju supports various other cloud providers. You can get a full list of supported cloud providers with the list-clouds option:
$ juju list-clouds
Juju also provides the option to fetch updates to a supported cloud list. With the update-clouds
subcommand, you can update your local cloud with the latest developments from Juju.
Along with public clouds, Juju also supports OpenStack deployments and MaaS-based infrastructures. You can also create your own cloud configuration and add it to Juju with the juju add-cloud
command. Like with LXD, you can use virtual machines or even physical machines for Juju-based deployments. As far as you can access the machine with SSH, you can use it with Juju. Check out the cloud-configuration manual for more details: https://jujucharms.com/docs/devel/clouds-manual
See also
- Read more about Juju concepts at https://jujucharms.com/docs/devel/juju-concepts
- Get to know Juju-supported clouds or how to add your own at https://jujucharms.com/docs/devel/clouds
- The Juju GUI: https://jujucharms.com/docs/devel/controllers-gui
- Juju controllers: https://jujucharms.com/docs/devel/controllers
- Refer to Chapter 8, Working with Containers for more details about LXD containers
- Learn how to connect Juju to a remote LXD server: https://insights.ubuntu.com/2015/11/16/juju-and-remote-lxd-host/
Getting ready
You need access to the root account or an account with sudo privileges.
Make sure you have the SSH keys generated with your user account. You can generate a new key pair with the following command:
$ ssh-keygen -t rsa -b 2048
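If you are not sure whether a key pair already exists, you can guard the generation step so an existing key is never overwritten. A minimal sketch (the KEYDIR override and the empty passphrase are there purely for illustration):

```shell
# Sketch: generate an RSA key pair only if one does not already exist.
# KEYDIR is overridable only for illustration; normally it is ~/.ssh.
KEYDIR="${KEYDIR:-$HOME/.ssh}"
KEYFILE="$KEYDIR/id_rsa"
mkdir -p "$KEYDIR"
chmod 700 "$KEYDIR"
if [ ! -f "$KEYFILE" ]; then
    # -N "" sets an empty passphrase; use a real passphrase in practice
    ssh-keygen -t rsa -b 2048 -N "" -f "$KEYFILE"
fi
ls "$KEYFILE" "$KEYFILE.pub"
```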
How to do it…
Juju 2.0 is available in the Ubuntu Xenial repository, so installation is quite easy. Follow these steps to install Juju, along with LXD for local deployments:
- Install Juju, along with the LXD and ZFSUtils packages. On Ubuntu 16.04, LXD should already be installed:
$ sudo apt-get update
$ sudo apt-get install juju-2.0 lxd zfsutils-linux
- The LXD installation creates a new group, lxd, and adds the current user to it. Update your group membership with newgrp so that you don't need to log out and log back in:
$ newgrp lxd
- Now, we need to initialize LXD before using it with Juju. We will create a new ZFS pool for LXD and configure a local lxd bridge for container networking with NAT enabled:
$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxdpool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 20
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
LXD has been successfully configured.
- Now that LXD has been configured, we can bootstrap Juju and create a controller node. The following command will bootstrap Juju with LXD for local deployments:
$ juju bootstrap juju-controller lxd
This command should take some time to finish as it needs to fetch the container image and install the Juju tools inside the container.
- Once the bootstrap process completes, you can check the list of controllers, as follows:
$ juju list-controllers
CONTROLLER              MODEL    USER         SERVER
local.juju-controller*  default  admin@local  10.155.16.114:17070
- You can also check the LXD container created by Juju using the lxc list command:
$ lxc list
- From Juju 2.0 onwards, every controller will install the Juju GUI by default. This is a web application to manage the controller and its models. The following command will give you the URL of the Juju GUI:
$ juju gui
...
https://10.201.217.65:17070/gui/2331544b-1e16-49ba-8ac7-2f13ea147497/
...
- You may need to use port forwarding to access the web console. Use the following command to quickly set up iptables forwarding:
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 17070 -j DNAT \
  --to-destination 10.201.217.65:17070
- You will also need a username and password to log in to the GUI. To get these details, use the following command:
$ juju show-controller --show-passwords juju-controller
...
accounts:
  admin@local:
    user: admin@local
    password: 8fcb8aca6e22728c6ac59b7cba322f39
When you log in to the web console, you should see the Juju GUI dashboard.
Now, you are ready to use Juju and deploy your applications either with a command line or from the web console.
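If you need the admin password in a script, it can be pulled out of the show-passwords output. A small sketch, using a variable holding the sample output from above in place of the real juju show-controller call, so the snippet is self-contained:

```shell
# Sketch: extract the admin password from `juju show-controller
# --show-passwords` output. $output stands in for the real command.
output='accounts:
  admin@local:
    user: admin@local
    password: 8fcb8aca6e22728c6ac59b7cba322f39'
# The password line has the form "    password: <value>"; awk prints field 2.
password=$(printf '%s\n' "$output" | awk '/password:/ {print $2}')
echo "$password"
```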
How it works…
Here, we installed and configured the Juju framework with LXD as a local deployment backend. Juju is a service-modeling framework that makes it easy to compose and deploy an entire application with just a few commands. The bootstrap process creates a controller node on the selected cloud; in our case, it is LXD. The command provides various optional arguments to configure controller machines, as well as to pass credentials to the bootstrap process. Check out the bootstrap help menu with the juju bootstrap --help command.
We have used LXD as a local provider, which does not need any special credentials to connect and create new nodes. When using public cloud providers or your own cloud, you will need to provide your username and password or access keys. This can be done with the help of the juju add-credential <cloud> command. All added credentials are stored in a plaintext file, ~/.local/share/juju/credentials.yaml. You can view a list of available cloud credentials with the juju list-credentials command.
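As an illustration, a stored access-key credential for AWS in credentials.yaml might look roughly like the following; the credential name is made up and the key values are the placeholder keys from Amazon's documentation, not real secrets:

```yaml
credentials:
  aws:
    my-aws-creds:                  # hypothetical credential name
      auth-type: access-key
      access-key: AKIAIOSFODNN7EXAMPLE
      secret-key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Because this file is plaintext, keep its permissions restricted and never commit it to version control.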
The controller node is a special machine created by Juju to host and manage data and models related to an environment. The controller node hosts two models, namely admin and default, and the admin model runs the Juju API server and database system. Juju can use multiple cloud systems simultaneously, and each cloud can have its own controller node.
From version 2.0 onwards, every controller node installs the Juju GUI application by default. The Juju GUI is a web application that provides an easy-to-use visual interface to create and manage various Juju entities. With its simple interface, you can easily create new models, import charms, and set up relations between them. The GUI is also available as a separate charm and can be deployed separately to any machine in a Juju environment. The command-line tools are more than enough to operate Juju, and it is possible to skip the installation of the GUI component using the --no-gui option with the bootstrap command.
There's more…
In the previous example, we used LXD as a local deployment backend for Juju. With LXD, Juju can quickly create new containers to deploy applications. Along with LXD, Juju supports various other cloud providers. You can get a full list of supported cloud providers with the list-clouds subcommand:
$ juju list-clouds
Juju also provides the option to fetch updates to the supported cloud list. With the update-clouds subcommand, you can refresh your local list of clouds with the latest changes published by Juju.
Along with public clouds, Juju also supports OpenStack deployments and MAAS-based infrastructures. You can also create your own cloud configuration and add it to Juju with the juju add-cloud command. As with LXD, you can use virtual machines or even physical machines for Juju-based deployments. As long as you can access the machine over SSH, you can use it with Juju. Check out the cloud-configuration manual for more details: https://jujucharms.com/docs/devel/clouds-manual
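To sketch what such a custom cloud definition might look like, here is a hypothetical clouds.yaml for a private OpenStack deployment; the cloud name, region, and endpoint are made-up examples:

```yaml
clouds:
  mystack:                         # hypothetical cloud name
    type: openstack
    auth-types: [userpass]
    regions:
      RegionOne:
        endpoint: http://openstack.example.com:5000/v2.0
```

It would then be registered with something like juju add-cloud mystack mystack.yaml, after which juju list-clouds should show it alongside the built-in providers.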
See also
- Read more about Juju concepts at https://jujucharms.com/docs/devel/juju-concepts
- Get to know Juju-supported clouds or how to add your own at https://jujucharms.com/docs/devel/clouds
- The Juju GUI: https://jujucharms.com/docs/devel/controllers-gui
- Juju controllers: https://jujucharms.com/docs/devel/controllers
- Refer to Chapter 8, Working with Containers, for more details about LXD containers
- Learn how to connect Juju to a remote LXD server: https://insights.ubuntu.com/2015/11/16/juju-and-remote-lxd-host/
Managing services with Juju
In the previous recipe, we learned how to install the Juju service orchestration framework. Now, we will look at how to use Juju to deploy and manage a service.
Getting ready
Make sure you have installed and bootstrapped Juju.
How to do it…
We will deploy a sample WordPress installation with a load balancer. The MySQL service will be used as the database for WordPress. Both services are available in the Juju Charm store.
Follow these steps to manage services with Juju:
- Let's start by deploying the WordPress service with juju deploy. This should give you the following output:
$ juju deploy wordpress
Added charm "cs:trusty/wordpress-4" to the model.
Deploying charm "cs:trusty/wordpress-4" with the charm series "trusty".
- Now, deploy a MySQL service to store WordPress contents:
$ juju deploy mysql
Added charm "cs:trusty/mysql-38" to the model.
Deploying charm "cs:trusty/mysql-38" with the charm series "trusty".
- Now, you can use juju status to confirm your deployed services. It should show you the deployed services, their relations, and respective machine statuses, as follows:
$ juju status
- Now that both services have been deployed, we need to connect them together so that wordpress can use the database service. Juju calls this a relation, and it can be created as follows:
$ juju add-relation mysql wordpress
- Finally, we need to expose our wordpress service so that it can be accessed outside our local network. By default, all charms start as unexposed and are accessible only on a local network:
$ juju expose wordpress
You can get the IP address or DNS name of the wordpress instance from the Machines section of the juju status output. Note that in a local LXD environment, you may need a forwarded port to access WordPress.
How it works…
In this example, we deployed two separate services using Juju. Juju creates a separate machine for each of them and deploys the service as per the instructions in the respective charm. These two services need to be connected to each other so that wordpress knows about the existence of the MySQL database. Juju calls these connections relations. Each charm contains a set of hooks that are triggered on given events. When we create a relation between WordPress and MySQL, both services are informed about it through the database-relation-changed hook. At this point, both services can exchange the necessary details, such as MySQL ports and login credentials. The WordPress charm will set up a MySQL connection and initialize a database.
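To make the hook mechanism concrete, here is a heavily simplified sketch of what a database-relation-changed hook might do. In a real charm, the relation-get tool is supplied by the Juju hook environment; the stub function and all values below are made up so the script is self-contained:

```shell
# Simplified sketch of a database-relation-changed hook. relation_get
# stands in for Juju's real relation-get hook tool; its values are
# hypothetical examples, not real charm data.
relation_get() {
    case "$1" in
        host)     echo "10.0.3.15"  ;;  # made-up database address
        user)     echo "wordpress"  ;;
        password) echo "s3cret"     ;;  # made-up credential
    esac
}

DB_HOST=$(relation_get host)
DB_USER=$(relation_get user)
DB_PASS=$(relation_get password)

# A real WordPress charm would now write these values into wp-config.php
# and restart the web service.
echo "database relation ready: $DB_USER@$DB_HOST"
```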
Once both services are ready, we can expose them to be accessed on a public network. Here, we do not need MySQL to be accessible to WordPress users, so we have only exposed the wordpress service. WordPress can access MySQL internally, with the help of the relation.
You can use the Juju GUI to visualize your model and add or remove charms and their relations. At this point, if you open a GUI, you should see your charms plotted on the graph and connected with each other through a small line, indicating a relation. The GUI also provides an option to set constraints on a charm and configure charm settings, if any.
Note that both charms internally contain scaling options. WordPress is installed behind an Nginx reverse proxy and can be scaled with extra units as and when required. You can add new units to the service with a single command, as follows:
$ juju add-unit mysql -n 1
There's more…
When you no longer need these services, the entire model can be destroyed with the juju destroy-model <modelname> command. You can also selectively remove particular services with the remove-service command and remove relations with remove-relation. Check out the Juju manual page for tons of commands that are not listed in the Juju help menu.
See also
- How to create your own charm: https://jujucharms.com/docs/stable/authors-charm-writing
- More about hooks: https://jujucharms.com/docs/stable/authors-hook-environment