Creating a test environment with QEMU and KVM
To learn Ansible, we will need to write quite a few playbooks and run them.
Tip
Doing this directly on your computer would be risky. For this reason, I suggest using virtual machines.
It's possible to create a test environment with cloud providers in a few seconds, but often it is more useful to have those machines locally. To do so, we will use Kernel-based Virtual Machine (KVM) with Quick Emulator (QEMU).
The first thing will be installing qemu-kvm and virt-install. On Fedora, it will be enough to run:
$ sudo dnf install -y @virtualization
On Red Hat/CentOS/Scientific Linux/Unbreakable Linux it will be enough to run:
$ sudo yum install -y qemu-kvm virt-install virt-manager
If you use Ubuntu, you can install it using:
$ sudo apt install virt-manager
On Debian, you'll need to execute:
$ sudo apt install qemu-kvm libvirt-bin
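Whichever distribution you use, it's worth checking that your CPU exposes hardware virtualization extensions and that the libvirt daemon is running before going further. A quick check, assuming a systemd-based system, could look like this:
$ grep -Ec '(vmx|svm)' /proc/cpuinfo
$ sudo systemctl enable --now libvirtd
If the first command prints a number greater than zero, your CPU supports the virtualization extensions (vmx on Intel, svm on AMD) that KVM relies on.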
For our examples, I'll be using CentOS 7. This is for multiple reasons; the main ones are:
- CentOS is free and 100% compatible with Red Hat, Scientific Linux, and Unbreakable Linux
- Many companies use Red Hat/CentOS/Scientific Linux/Unbreakable Linux for their servers
- Those distributions ship with SELinux support built in, and as we have seen earlier, SELinux can help you make your environment much more secure
At the time of writing this book, the most recent CentOS cloud image is http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1603.qcow2, so let's download this image with the help of the following command:
$ wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1603.qcow2
Since we will probably need to create many machines, it's better to create a copy of the image so that the original one is not modified:
$ cp CentOS-7-x86_64-GenericCloud-1603.qcow2 centos_1.qcow2
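If you want to make sure the copy is a valid qcow2 image before moving on, qemu-img (installed along with QEMU) can inspect it:
$ qemu-img info centos_1.qcow2
The output should report file format: qcow2, together with the virtual size of the disk.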
Since the qcow2 images will run cloud-init to set up the networking, users, and so on, we will need to provide a couple of files. Let's start by creating a meta-data file for networking:
instance-id: centos_1
local-hostname: centos_1.local
network-interfaces: |
  iface eth0 inet static
    address (An IP in your virtual bridge class)
    network (The first IP of the virtual bridge class)
    netmask (Your virtual bridge class netmask)
    broadcast (Your virtual bridge class broadcast)
    gateway (Your virtual bridge class gateway)
To find your virtual bridge data, you have to look for a device with a name like virbrX or something similar; in my case it is virbr0, so I can find all of its information using the following command:
$ ip addr show virbr0
The previous command will give this as an output:
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:38:1a:e6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
       valid_lft forever preferred_lft forever
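If no virbrX device appears on your system, the default libvirt network is probably not active; you can usually start it, and make it start automatically at boot, with:
$ sudo virsh net-start default
$ sudo virsh net-autostart default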
So, for me the meta-data file looks like the following:
instance-id: centos_1
local-hostname: centos_1.local
network-interfaces: |
  iface eth0 inet static
    address 192.168.124.10
    network 192.168.124.1
    netmask 255.255.255.0
    broadcast 192.168.124.255
    gateway 192.168.124.1
This file will set up the eth0 interface of the virtual machine at boot time. We also need another file (user-data) to set up the users properly:
#cloud-config
users:
  - name: (yourname)
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - (insert ssh public key here)
For me, the file looks like the following:
#cloud-config
users:
  - name: fale
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDRoZzfNif+wXFqzsmvHg4jJt8+ZO/dQxm5k7pXYAwdWVbiFrZYGhMQl5FPfzC7rkDaC31fod3Y85QkQVgNKCVYUy5QR5LfxUjSQDv+y2Nfao4be/BKla0ffc7JVSzFFAELGGDLn1lMN0e0D9syqQbKgSRdOdvweq/0Et3KNIF9e7XgEdSuAHls17NDtMkWUfyi5yvEtdtMcp9gO4OlG6Vh0iCXOdx+f0QA2hh1JnvePvzJ4a8CeckN5JwL7Q027nlsHPBYq9K1jvv+diUs48FflPJI4fgMq3Zo7zyCpf8qE7Dlx+u7OvR5kxNdrpnOsDgHeAGNkrzfcmxU7kbU29NX4VFgWd0sdlzu1nOWFEH7Cnd547tx5VFxBzJwEAUCh7QSiU2Ne/hCnjFkZuDZ5pN4pNw+yu+Feoz79gV/utoLHuCodYyAvSQlQ7VSfC+djLD/9wHC2yGksvc9ICnSUv3JyQEEEG4K26z6szF9+a3vU0qIq7YYa8QHgWIHtzSxztYRIWJOzTZlwyuNmhbRNYDaMC5BMzvQ8JREv0obMLmrlvolJPWT4gn1N9sDNNXIC6RDRE5yGsIEf0CliYW1X/8XG40U+g9LG+lrYOGWD4OymZ2P/VDIzZbVT6NG/rdSSGnf4D1AwlOGR7eNTv30AK9o0LVjqGaJWKWYUF9zY6I3+Q==
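If you don't have an SSH key pair yet, you can generate one with ssh-keygen, which ships with every standard OpenSSH installation, and then paste the content of the public key file into user-data:
$ ssh-keygen -t rsa -b 4096
$ cat ~/.ssh/id_rsa.pub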
To provide those files at boot time, we will need to create an ISO file containing them:
$ genisoimage -output centos_1.iso -volid cidata -joliet -rock user-data meta-data
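The volume ID has to be exactly cidata, otherwise cloud-init will not recognize the disk as a data source. If you want to double-check the ISO, isoinfo (shipped with genisoimage) can print its volume descriptor:
$ isoinfo -d -i centos_1.iso | grep -i 'volume id'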
After the ISO file is ready, we can instruct virt-install to actually create the virtual machine:
$ virt-install --name CentOS_1 \
    --ram 2048 \
    --disk centos_1.qcow2 \
    --vcpus 2 \
    --os-variant fedora21 \
    --connect qemu:///system \
    --network bridge:virbr0,model=virtio \
    --cdrom centos_1.iso \
    --boot hd
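Once virt-install returns, you can confirm that the machine has been created and is running by listing the domains known to the system connection:
$ virsh --connect qemu:///system list --all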
Since our network configuration is in the ISO file, we will need it at every boot. Sadly, by default this does not happen, so we will need to do a few more steps. Firstly, run virsh:
$ virsh
At this point, a virsh shell should appear with an output like the following:
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh #
This means that we switched from bash (or your shell, if you are not using bash) to the virtualization shell. Issue the following command:
virsh # edit CentOS_1
By doing this, we will be able to tweak the configuration of the CentOS_1 machine. In the disk section, you'll need to find the cdrom device, which should look like this:
<disk type='block' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <target dev='hda' bus='ide'/>
  <readonly/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
You'll need to change it to the following; note that the disk type changes from block to file, and a source line pointing to the ISO is added:
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='(Put here your ISO path)/centos_1.iso'/>
  <target dev='hda' bus='ide'/>
  <readonly/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
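For the new configuration to be picked up, the machine has to be fully stopped and started again; a reboot from inside the guest is not enough. From the virsh shell, you can do it like this (if the guest ignores the ACPI shutdown request, virsh destroy will force the power-off):
virsh # shutdown CentOS_1
virsh # start CentOS_1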
At this point, our virtual machine will always start with the ISO file mounted as a cdrom, and therefore cloud-init will be able to correctly initialize the networking.
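As a final check, you should now be able to SSH into the machine with the user and the IP address configured in the user-data and meta-data files (fale and 192.168.124.10 in my example):
$ ssh fale@192.168.124.10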