In this article by Karan Singh, author of the book Learning Ceph, we will set up a sandbox Ceph environment on VirtualBox virtual machines and deploy a three-node Ceph storage cluster using the ceph-deploy tool.
We can test-deploy Ceph in a sandbox environment using Oracle VirtualBox virtual machines. This virtual setup helps us discover and experiment with Ceph storage clusters as if we were working in a real environment. Since Ceph is open source software-defined storage that is deployed on top of commodity hardware in production, we can imitate a fully functioning Ceph environment on virtual machines, instead of real commodity hardware, for our testing purposes.
Oracle VirtualBox is free software available at http://www.virtualbox.org for Windows, Mac OS X, and Linux. We must fulfil the system requirements for the VirtualBox software so that it can function properly during our testing. We assume that your host operating system is a Unix variant; on Microsoft Windows host machines, use the absolute path to run the VBoxManage command, which is by default C:\Program Files\Oracle\VirtualBox\VBoxManage.exe.
The system requirements for VirtualBox depend upon the number and configuration of virtual machines running on top of it. Your VirtualBox host requires an x86-type processor (Intel or AMD), a few gigabytes of memory (to run three Ceph virtual machines), and a couple of gigabytes of hard drive space. To begin with, download VirtualBox from http://www.virtualbox.org/ and follow its installation procedure. We will also need to download the CentOS 6.4 Server ISO image from http://vault.centos.org/6.4/isos/.
To set up our sandbox environment, we will create a minimum of three virtual machines; you can create even more machines for your Ceph cluster based on the hardware configuration of your host machine. We will first create a single VM and install the OS on it; after this, we will clone this VM twice. This will save us a lot of time and increase our productivity. Let's begin by performing the following steps to create the first virtual machine:
The VirtualBox host machine used throughout this demonstration is Mac OS X, which is a UNIX-type host. If you are performing these steps on a non-UNIX machine, that is, on a Windows-based host, keep in mind that the VirtualBox host-only adapter name will be something like VirtualBox Host-Only Ethernet Adapter #<adapter number>. Please run these commands with the correct adapter names. On Windows-based hosts, you can check the VirtualBox networking options in Oracle VM VirtualBox Manager by navigating to File | VirtualBox Settings | Network | Host-only Networks.
# VBoxManage hostonlyif remove vboxnet1
# VBoxManage hostonlyif create
# VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.57.1 --netmask 255.255.255.0
For Windows-based VirtualBox hosts:
# VBoxManage.exe hostonlyif remove "VirtualBox Host-Only Ethernet Adapter"
# VBoxManage.exe hostonlyif create
# VBoxManage.exe hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.57.1 --netmask 255.255.255.0
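To confirm that the host-only adapter was created and configured with the expected IP address, you can list the host-only interfaces with the standard VBoxManage list subcommand; the exact output varies with your VirtualBox version:
# VBoxManage list hostonlyifs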
The following is the step-by-step process to create virtual machines using CLI commands:
# VBoxManage createvm --name ceph-node1 --ostype RedHat_64 --register
# VBoxManage modifyvm ceph-node1 --memory 1024 --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet1
For Windows VirtualBox hosts:
# VBoxManage.exe modifyvm ceph-node1 --memory 1024 --nic1 nat --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"
# VBoxManage storagectl ceph-node1 --name "IDE Controller" --add ide --controller PIIX4 --hostiocache on --bootable on
# VBoxManage storageattach ceph-node1 --storagectl "IDE Controller" --type dvddrive --port 0 --device 0 --medium CentOS-6.4-x86_64-bin-DVD1.iso
Make sure you execute the preceding command from the same directory where you have saved the CentOS ISO image, or specify the full path to the location where you saved it.
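For example, if the ISO is stored elsewhere, you can pass its absolute path to the --medium option; the path below is a hypothetical placeholder and should be replaced with your own download location:
# VBoxManage storageattach ceph-node1 --storagectl "IDE Controller" --type dvddrive --port 0 --device 0 --medium /path/to/CentOS-6.4-x86_64-bin-DVD1.iso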
# VBoxManage storagectl ceph-node1 --name "SATA Controller" --add sata --controller IntelAHCI --hostiocache on --bootable on
# VBoxManage createhd --filename OS-ceph-node1.vdi --size 10240
# VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium OS-ceph-node1.vdi
# VBoxManage createhd --filename ceph-node1-osd1.vdi --size 10240
# VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium ceph-node1-osd1.vdi
# VBoxManage createhd --filename ceph-node1-osd2.vdi --size 10240
# VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium ceph-node1-osd2.vdi
# VBoxManage createhd --filename ceph-node1-osd3.vdi --size 10240
# VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium ceph-node1-osd3.vdi
# VBoxManage startvm ceph-node1 --type gui
Once the OS installation is complete, edit the /etc/sysconfig/network-scripts/ifcfg-<first interface name> file for the first (NAT) interface and add:
ONBOOT=yes
BOOTPROTO=dhcp
Edit the /etc/sysconfig/network-scripts/ifcfg-<second interface name> file for the second (host-only) interface and add:
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.57.101
NETMASK=255.255.255.0
Edit the /etc/hosts file and add:
192.168.57.101 ceph-node1
192.168.57.102 ceph-node2
192.168.57.103 ceph-node3
# ssh root@192.168.57.101
# VBoxManage clonevm ceph-node1 --name ceph-node2 --register
# VBoxManage clonevm ceph-node1 --name ceph-node3 --register
# VBoxManage startvm ceph-node1
# VBoxManage startvm ceph-node2
# VBoxManage startvm ceph-node3
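As a quick sanity check, you can list the running virtual machines with the standard VBoxManage list subcommand; all three nodes should appear in the output:
# VBoxManage list runningvms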
Log in to the cloned virtual machine (ceph-node2) and edit the /etc/sysconfig/network file to set its hostname:
HOSTNAME=ceph-node2
Edit the /etc/sysconfig/network-scripts/ifcfg-<first interface name> file and add:
DEVICE=<correct device name of your first network interface; check ifconfig -a>
ONBOOT=yes
BOOTPROTO=dhcp
HWADDR=<correct MAC address of your first network interface; check ifconfig -a>
Edit the /etc/sysconfig/network-scripts/ifcfg-<second interface name> file and add:
DEVICE=<correct device name of your second network interface; check ifconfig -a>
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.57.102
NETMASK=255.255.255.0
HWADDR=<correct MAC address of your second network interface; check ifconfig -a>
Edit the /etc/hosts file and add:
192.168.57.101 ceph-node1
192.168.57.102 ceph-node2
192.168.57.103 ceph-node3
After performing these changes, you should restart your virtual machine to bring the new hostname into effect. The restart will also update your network configurations.
Repeat the same changes on ceph-node3. Set HOSTNAME=ceph-node3 in the /etc/sysconfig/network file, then edit the /etc/sysconfig/network-scripts/ifcfg-<first interface name> file and add:
DEVICE=<correct device name of your first network interface; check ifconfig -a>
ONBOOT=yes
BOOTPROTO=dhcp
HWADDR=<correct MAC address of your first network interface; check ifconfig -a>
Edit the /etc/sysconfig/network-scripts/ifcfg-<second interface name> file and add:
DEVICE=<correct device name of your second network interface; check ifconfig -a>
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.57.103
NETMASK=255.255.255.0
HWADDR=<correct MAC address of your second network interface; check ifconfig -a>
Edit the /etc/hosts file and add:
192.168.57.101 ceph-node1
192.168.57.102 ceph-node2
192.168.57.103 ceph-node3
After performing these changes, you should restart your virtual machine to bring the new hostname into effect; the restart will also update your network configurations.
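After the reboot, it is worth confirming that the hostname and the static IP address came up as expected on each node; this is just a sanity check, not a required step:
# hostname
# ifconfig -a | grep 192.168.57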
At this point, we have prepared three virtual machines. Make sure that each VM can communicate with the others; they should also have access to the Internet so that Ceph packages can be installed.
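One simple way to verify this, from ceph-node1 for example, is to ping the other nodes over the host-only network and a public hostname over the NAT interface; the commands below are only an illustration:
# ping -c 2 ceph-node2
# ping -c 2 ceph-node3
# ping -c 2 ceph.com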
To deploy our first Ceph cluster, we will use the ceph-deploy tool to install and configure Ceph on all three virtual machines. The ceph-deploy tool is part of the Ceph software-defined storage suite and is used for the easier deployment and management of a Ceph storage cluster.
Since we created three virtual machines that run CentOS 6.4 and have connectivity to the Internet as well as private network connections, we will configure these machines as a Ceph storage cluster, as shown in the following diagram:
# ssh-keygen
# ssh-copy-id ceph-node2
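This setup relies on passwordless SSH from ceph-node1 to the other nodes; assuming the same applies to ceph-node3, you would also copy the key there and can then verify password-free access to both nodes:
# ssh-copy-id ceph-node3
# ssh ceph-node2 hostname
# ssh ceph-node3 hostname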
# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# yum install ceph-deploy
## Create a directory for Ceph
# mkdir /etc/ceph
# cd /etc/ceph
# ceph-deploy new ceph-node1
The new subcommand of ceph-deploy deploys a new cluster with ceph as the cluster name, which is the default; it generates the cluster configuration and keyring files. List the present working directory; you will find the ceph.conf and ceph.mon.keyring files.
In this testing, we will intentionally install the Emperor release (v0.72) of the Ceph software, which is not the latest release. Later in this book, we will demonstrate the upgrade from the Emperor to the Firefly release of Ceph.
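A minimal sketch of this step, assuming your ceph-deploy version supports pinning a release codename with the --release flag, would be to install Emperor on all three nodes from ceph-node1:
# ceph-deploy install --release emperor ceph-node1 ceph-node2 ceph-node3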
# ceph -v
# ceph-deploy mon create-initial
Once monitor creation is successful, check your cluster status. Your cluster will not be healthy at this stage:
# ceph status
# ceph-deploy disk list ceph-node1
From the output, carefully identify the disks (other than the OS partition disk) on which we should create Ceph OSDs. In our case, the disk names will ideally be sdb, sdc, and sdd.
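If you are unsure which disk holds the operating system, one way to double-check on the node itself is to look at the partition tables; the OS disk (sda in our case) will contain partitions, while the OSD candidates will be blank:
# fdisk -l /dev/sda
# fdisk -l /dev/sdb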
# ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
# ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
# ceph status
At this stage, your cluster will not be healthy. We need to add a few more nodes to the Ceph cluster so that it can set up a distributed, replicated object store and hence become healthy.
Now we have a single-node Ceph cluster. We should scale it up to make it a distributed, reliable storage cluster. To scale up a cluster, we should add more monitor nodes and OSDs. As per our plan, we will now configure the ceph-node2 and ceph-node3 machines as both monitor and OSD nodes.
A Ceph storage cluster requires at least one monitor to run. For high availability, a Ceph storage cluster relies on an odd number of monitors greater than one, for example, 3 or 5, to form a quorum. It uses the Paxos algorithm to maintain a quorum majority. Since we already have one monitor running on ceph-node1, let's create two more monitors for our Ceph cluster:
# service iptables stop
# chkconfig iptables off
# ssh ceph-node2 service iptables stop
# ssh ceph-node2 chkconfig iptables off
# ssh ceph-node3 service iptables stop
# ssh ceph-node3 chkconfig iptables off
# ceph-deploy mon create ceph-node2
# ceph-deploy mon create ceph-node3
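Once the new monitors are deployed, you can confirm that all three of them have joined the quorum; ceph mon stat and ceph quorum_status are standard monitor commands, and the exact output format depends on your Ceph release:
# ceph mon stat
# ceph quorum_status --format json-pretty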
# chkconfig ntpd on
# ssh ceph-node2 chkconfig ntpd on
# ssh ceph-node3 chkconfig ntpd on
# ntpdate pool.ntp.org
# ssh ceph-node2 ntpdate pool.ntp.org
# ssh ceph-node3 ntpdate pool.ntp.org
# /etc/init.d/ntpd start
# ssh ceph-node2 /etc/init.d/ntpd start
# ssh ceph-node3 /etc/init.d/ntpd start
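Ceph monitors are sensitive to clock skew, so it is worth confirming that each node is actually synchronizing with its time source; ntpq -p is the standard way to inspect NTP peers:
# ntpq -p
# ssh ceph-node2 ntpq -p
# ssh ceph-node3 ntpq -p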
At this point, we have a running Ceph cluster with three monitors and three OSDs. Now we will scale our cluster and add more OSDs. To accomplish this, we will run the following commands from the ceph-node1 machine, unless otherwise specified.
We will follow the same method for OSD addition:
# ceph-deploy disk list ceph-node2 ceph-node3
# ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
# ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
# ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
# ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
# ceph status
Check the cluster status for the new OSDs. At this stage, your cluster will be healthy with nine OSDs in and up:
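To see how the nine OSDs are distributed across the three nodes, you can also inspect the OSD tree along with the overall cluster health; both are standard Ceph commands:
# ceph osd tree
# ceph -s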
The software-defined nature of Ceph provides a great deal of flexibility to its adopters. Unlike other proprietary storage systems, which are hardware dependent, Ceph can be easily deployed and tested on almost any computer system available today. Moreover, if getting physical machines is a challenge, you can use virtual machines to install Ceph, as mentioned in this article, but keep in mind that such a setup should only be used for testing purposes.
In this article, we learned how to create a set of virtual machines using the VirtualBox software, followed by Ceph deployment as a three-node cluster using the ceph-deploy tool. We then added more monitors and OSDs to our cluster in order to demonstrate its dynamic scalability. We recommend that you deploy a Ceph cluster of your own using the instructions mentioned in this article.