Mastering OpenStack

You're reading from   Mastering OpenStack Implement the latest techniques for designing and deploying an operational, production-ready private cloud

Product type: Paperback
Published in: Nov 2024
Publisher: Packt
ISBN-13: 9781835468913
Length: 392 pages
Edition: 3rd Edition
Author: Omar Khedher
Table of Contents

  • Preface
  • Part 1: Architecting the OpenStack Ecosystem
      • Chapter 1: Revisiting OpenStack – Design Considerations
      • Chapter 2: Kicking Off the OpenStack Setup – The Right Way (DevSecOps)
      • Chapter 3: OpenStack Control Plane – Shared Services
      • Chapter 4: OpenStack Compute – Compute Capacity and Flavors
      • Chapter 5: OpenStack Storage – Block, Object, and File Shares
      • Chapter 6: OpenStack Networking – Connectivity and Managed Service Options
  • Part 2: Operating the OpenStack Cloud Environment
      • Chapter 7: Running a Highly Available Cloud – Meeting the SLA
      • Chapter 8: Monitoring and Logging – Remediating Proactively
      • Chapter 9: Benchmarking the Infrastructure – Evaluating Resource Capacity and Optimization
  • Part 3: Extending the OpenStack Cloud
      • Chapter 10: OpenStack Hybrid Cloud – Design Patterns
      • Chapter 11: A Hybrid Cloud Hyperscale Use Case – Scaling a Kubernetes Workload
  • Index
  • Other Books You May Enjoy

Deploying in the cloud

Since the first release of OpenStack, several deployment tools and wrappers have been developed to give operators easier, more robust ways to set up a fully running OpenStack environment. With the rise of system management tools such as Chef, Puppet, SaltStack, and Ansible, the OpenStack community has maintained dedicated projects for each of them, developing classes and modules as the OpenStack ecosystem has evolved. Which of these tools to choose comes down to familiarity and the technical requirements of the cloud operators. In this section, Ansible will be our system management tool.

Ansible in a nutshell

Like any other system management tool, Ansible uses its own glossary and terms to define infrastructure components, modules, relationships, and parameters. Unlike other tools, however, Ansible comes with a simple architecture that makes it a popular choice, which can be summarized as follows:

  • It has the flexibility to handle complex, interdependent service modules.
  • It uses an agentless transport mechanism to connect to and update target systems, with no additional packages to install on them. Only the master server where the Ansible software runs is exposed.
  • Modules executed on target systems are cleaned up automatically after they run.
  • It ships with a rich set of core automation modules and can be extended with more.
  • The infrastructure code, written in YAML, is organized into Ansible playbooks. YAML is easier to learn and master than the languages other tools use, such as Ruby for Chef.
  • Because of the simplicity of the Ansible architecture, scaling out is much simpler than with tools that require running agents.
  • It is declarative: it describes the desired end state rather than how to get there and which steps must be taken to reach it.

When dealing with the complexity of the OpenStack ecosystem, with all its components, subcomponents, and parameter flavors, it is worth briefly reviewing the essential Ansible terminology:

  • Playbooks: The main configuration file(s) describing a series of tasks to run on one host or a group of hosts. The tasks are written in YAML and executed in order, from top to bottom, to accomplish a full deployment or a configuration change.
  • Roles: The organizational structure of playbooks, collecting different Ansible assets, such as tasks, variables, and modules, to deploy a service on one host or a group of hosts.
  • Modules: Abstract representations of specific units of code functionality. Modules can be written and customized to control system resources, files, and services. Ansible ships with a module library (the core modules) that can be invoked from customized playbooks.
  • Variables: Dynamic values used in roles and playbooks to reflect the desired configuration. Like variables in programming languages, Ansible variables enable the same role or playbook to propagate different states across different environments.
  • Inventory: A listing of the managed hosts in the target environment. Ansible uses a configuration file in the INI format that defines the names and IPs of the managed target hosts. Hosts can be declared in inventory files using different patterns, such as grouping hosts by role or declaring a series of hosts via ranges and numeric patterns.
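To make these terms concrete, the following is a minimal, hypothetical playbook sketch; the file name, `control` host group, and NTP server value are illustrative assumptions, not taken from the book:

```yaml
# site.yml -- a minimal illustrative playbook (host group and values are hypothetical)
---
- name: Configure NTP on the OpenStack control nodes
  hosts: control                 # a host group defined in the inventory file
  become: true
  vars:
    ntp_server: pool.ntp.org     # a variable that can differ per environment
  tasks:
    - name: Install the chrony package
      ansible.builtin.package:
        name: chrony
        state: present

    - name: Point chrony at the desired NTP server
      ansible.builtin.lineinfile:
        path: /etc/chrony.conf
        regexp: '^server '
        line: "server {{ ntp_server }} iburst"
      notify: Restart chrony

  handlers:
    - name: Restart chrony
      ansible.builtin.service:
        name: chrony
        state: restarted
```

With an INI inventory declaring the `control` group (for example, a `[control]` section listing the controller hostnames), this would run with `ansible-playbook -i inventory site.yml`.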

The CI/CD system triggers Ansible to install and run playbooks across target OpenStack nodes defined in its inventory, as depicted in the following figure:

Figure 2.3 – An OpenStack environment management through Ansible


Integrating system management tools such as Ansible into OpenStack ecosystem life cycle management has significantly changed the way such a complex ecosystem is managed and operated. Not surprisingly, the hunger to adopt a private cloud quickly and maximize the agility of its deployment has surfaced other challenges, rooted in the tight dependencies between OpenStack services.

Thinking long term, rolling out new OpenStack releases and integrating new features into an existing environment is one of the ultimate challenges of running a private cloud. Major upgrades have always been a blocker to seamlessly jumping to a new OpenStack release in a running production setup. That makes an upgrade a very cautious operation, and it puts your Service-Level Agreement (SLA) at risk if one or several dependencies crash due to an overlooked code version incompatibility. Although tests can help identify such anomalies, a massive number of tests for each component would have to be in place in advance, which can be costly in terms of resources and human interaction.

The other facet of these challenges is the lack of rollback mechanisms. Rolling back a change that causes an issue will bring one or several parts of the whole system down while you wait for your management tools to redeploy, restart the affected services, and test and wait for a complete synchronization with the other dependent services before running them again. The most common OpenStack deployment options involve bare-metal or virtual machines, which makes such operational tasks heavier to roll out, requires full management of machine images, and becomes costly when testing in an isolated environment. For example, upgrading or updating the version of an OpenStack component would require all interdependent services to be deployed in a separate testing environment before propagating the change to production.
Within the latest OpenStack releases, several organizations have been experimenting with the rise of containers for a fully containerized OpenStack cloud. Let’s unleash the container dilemma in the next section and explore what opportunities are in store for OpenStack deployment and management.

Containerizing the cloud

Container technology has been around for more than a decade, and companies have started deploying their workloads in containers to take advantage of resource optimization, environment isolation, and portability. Running the different pieces of your software in lightweight, self-contained, independent containers brings more flexibility to managing a complex software system. Because containers run in isolation, operations such as upgrades and rollbacks become much easier and can be performed with more confidence.

Looking back at our infrastructure code, the modular design of the OpenStack software architecture has a great deal to gain from container technology, where each module can live in a separate, stateless environment. Surveying the current container landscape, there is an extended list of container and orchestration engines, such as LXC, Docker, and Kubernetes. The OpenStack community did not miss the chance to adopt containerization early on, when containers started to become the standard for modern software development. In the Antelope and later releases, we can find several deployment methods based on containers, combined with configuration management tools, as summarized in the following table:

| Deployment project | Container technology | Orchestration/configuration tool | Source |
| --- | --- | --- | --- |
| OpenStack-Ansible | LXC | Ansible | https://docs.openstack.org/openstack-ansible/latest/ |
| Kolla-Ansible | Docker | Ansible | https://docs.openstack.org/kolla-ansible/latest/ |
| OpenStack-Helm | Docker | Kubernetes and Helm | https://docs.openstack.org/openstack-helm/latest/ |
| TripleO (currently no longer supported) | Docker | Ansible | https://docs.openstack.org/tripleo-ansible/latest/ |

Table 2.1 – A list of OpenStack deployment tools running containers

The OpenStack-Ansible (OSA) project is one of the most widespread deployments based on LXC containers. The deployment of LXC containers running OpenStack services is orchestrated by Ansible.

In our next deployment, we will adopt another emerging OpenStack project, named Kolla-Ansible. Similar to OSA, the Kolla project uses Docker as its containerization tool, building a Docker container for each OpenStack service. Beyond the design differences between LXC and Docker, Kolla extends the parameterization layout, making its containers more configurable, with an array of choices for the base operating system and template engine. That is not to mention Docker's design advantages as a container technology: the layered nature of its images, versioning, and portability for sharing.

Important note

Kolla has been an official OpenStack project since the Liberty release. As per the official OpenStack definition of the Kolla project, “Kolla’s mission is to provide production-ready containers and deployment tools for operating OpenStack clouds.”
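As a sketch of how Kolla image building is parameterized, the following is a hypothetical `/etc/kolla/kolla-build.conf` fragment; the registry namespace and tag values are illustrative placeholders, not from the book. With such a file in place, an image for a single service can be built with, for example, `kolla-build keystone`:

```ini
# /etc/kolla/kolla-build.conf -- illustrative build settings (values are examples)
[DEFAULT]
# Base distribution for the container images (e.g., ubuntu, centos, debian)
base = ubuntu
# Registry namespace the built images are tagged under (hypothetical registry)
namespace = registry.example.com/kolla
# Image tag, typically matching the target OpenStack release
tag = 2023.1
```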

Those extra advantages make Docker more suitable for our deployment pipeline, where we will treat a container service as an artifact that can be deployed easily through different environments before being promoted to production. As depicted in the following high-level schema, for each merged change in the code repository, the CI/CD tool builds an artifact composed of a software package encapsulated in a Docker image. The generated image is committed to a private Docker registry, from which Ansible downloads it and orchestrates its setup on the designated OpenStack node.

Figure 2.4 – High-level CI/CD pipeline OpenStack deployment


The other component of the deployment tool stack is the Jinja2 templating engine. It is mainly used to render Dockerfiles dynamically, based on parameters defined and generated by Ansible. In the Kolla tooling, Jinja2 templates allow the same Dockerfile sources to build images for both RPM-based and DEB-based container distributions.

Important note

Jinja2 templating in the Kolla context provides an array of ways of building Docker images of different source distributions, including CentOS, Debian, Fedora, Ubuntu, and RHEL container operating systems that can be parameterized per OpenStack service code.
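As a sketch of what such a template looks like, the following hypothetical `Dockerfile.j2` fragment (variable names follow the Kolla style but are simplified here) switches the package commands depending on the base distribution family:

```jinja
{# Dockerfile.j2 -- simplified, illustrative sketch of a Kolla-style template #}
FROM {{ base_image }}:{{ base_tag }}

{% if base_package_type == 'rpm' %}
{# RPM-based distributions (CentOS, Fedora, RHEL) #}
RUN dnf install -y openstack-keystone
{% elif base_package_type == 'deb' %}
{# DEB-based distributions (Debian, Ubuntu) #}
RUN apt-get update && apt-get install -y keystone
{% endif %}
```

Rendering the same template with different variable sets yields per-distribution Dockerfiles from a single source, which is what makes the per-service parameterization described above practical.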

As our main motivation is to reduce the complexity of dependencies and provide a simpler, more scalable deployment experience, Kolla is the way to go; indeed, since its first official stable release, several production deployments have been performed the Kolla way.

Building the picture

Treating OpenStack deployment as IaC will let us inherit and use most of the software tools and processes for delivering code artifacts ready for deployment with more confidence. Having a robust integration and deployment pipeline is vital to ensure that our software-defined data center does not fail on production day. Modern software development techniques involve several tools that increase automation and agility and, hence, shorten the feedback loop for each deployment.

The following tools will be employed for our OpenStack infrastructure code development:

  • A version control system: GitHub will be our code repository for the OpenStack infrastructure code.
  • A CI/CD tool: Jenkins will be installed on the deployer machine and extended with extra plugins to build and run deployment pipelines.
  • A system management tool: The Ansible packages will be installed on the deployer machine and provide the OpenStack playbooks to be deployed inside the containers, in tandem with Kolla.
  • An image builder: Dedicated to OpenStack containers, Kolla builds the container images running the different OpenStack services.
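How these tools hang together can be sketched as a minimal, hypothetical Jenkins declarative pipeline; the stage names, registry address, and inventory paths are illustrative assumptions, not from the book:

```groovy
// Jenkinsfile -- illustrative sketch of the build/deploy/promote flow
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                // Build the Kolla image for the changed service and
                // push it to a (hypothetical) private registry
                sh 'kolla-build --registry registry.example.com --push keystone'
            }
        }
        stage('Deploy to staging') {
            steps {
                // Ansible/Kolla-Ansible rolls the new image out to staging
                sh 'kolla-ansible -i inventory/staging deploy'
            }
        }
        stage('Promote to production') {
            steps {
                // Manual gate before touching the production environment
                input message: 'Deploy to production?'
                sh 'kolla-ansible -i inventory/production deploy'
            }
        }
    }
}
```

The manual `input` gate reflects the cautious upgrade posture discussed earlier: the same artifact is exercised in staging before the production rollout.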

The Kolla-Ansible project code repository will be our starting point for the first OpenStack deployment, with minimum customization to have a rolling deployment pipeline initially. The official latest master stable branch of the project code can be found here: https://github.com/openstack/kolla-ansible.
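Once the repository is cloned and the Kolla-Ansible packages are installed, the deployment is driven largely by the `/etc/kolla/globals.yml` settings file. The following fragment is an illustrative sketch; the VIP address and interface names are placeholder values:

```yaml
# /etc/kolla/globals.yml -- illustrative fragment (addresses and interfaces are placeholders)
kolla_base_distro: "ubuntu"                # base distribution of the Kolla images
kolla_internal_vip_address: "10.0.0.250"   # internal VIP for the API endpoints
network_interface: "eth0"                  # management network interface
neutron_external_interface: "eth1"         # external/provider network interface
```

From there, a typical run sequence (consult the official Kolla-Ansible documentation for the exact steps for your release) is `kolla-ansible -i <inventory> bootstrap-servers`, then `prechecks`, then `deploy`.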

Kolla-Ansible supports both all-in-one and multi-node OpenStack setups. As we are aiming for a production setup, as discussed in the initial design draft in Chapter 1, Revisiting OpenStack – Design Considerations, we will make sure our initial draft is ready as the first deployment iteration.
