Gathering the pieces and building a picture

Let's see how OpenStack works by chaining together the core services covered in the previous sections in a series of steps:

  1. Authentication is the first action performed. This is where Keystone comes into the picture: it authenticates the user based on credentials such as a username and password.
  2. Keystone then provides the service catalog, which contains information about the OpenStack services and their API endpoints.
  3. You can use the OpenStack CLI to retrieve the catalog:
    $ openstack catalog list
The service catalog is a JSON structure returned with the token that exposes the available service endpoints (a Python sketch of the same flow appears after this list).
  4. Typically, once authenticated, you can talk to an API node. There are different APIs in the OpenStack ecosystem (the OpenStack API and the EC2 API).

The following figure shows a high-level view of how OpenStack works:

  5. Another element in the architecture is the instance scheduler. Schedulers are implemented by OpenStack services that are architected around worker daemons. The worker daemons manage the launching of instances on individual nodes and keep track of the resources available on the physical nodes they run on. The scheduler in an OpenStack service looks at the state of the resources on a physical node (as reported by the worker daemons) and decides on the best candidate node on which to launch a virtual instance. Examples of this architecture are nova-scheduler, which selects the compute node on which to run a virtual machine, and the Neutron L3 scheduler, which decides which L3 network node will host a virtual router.
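Before looking more closely at scheduling, the following minimal sketch ties steps 1 to 3 together using the keystoneauth1 library. The endpoint URL, credentials, and domain names are illustrative and would need to match your deployment:

    # Authenticate against Keystone v3 and resolve an endpoint from the catalog.
    # auth_url, credentials, and domain names are illustrative placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)

    # Steps 1-2: a scoped token is issued on the first authenticated call
    print(sess.get_token())

    # Step 3: the token response carries the service catalog; resolve the
    # public endpoint of the compute (Nova) API from it
    print(sess.get_endpoint(service_type='compute', interface='public'))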
The scheduling process in OpenStack Nova can use different algorithms, such as simple, chance, and zone. A more advanced approach is to apply weights and filters, ranking hosts according to their available resources.
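This filter-and-weigh idea can be illustrated with a short conceptual sketch. This is not nova-scheduler's actual code; the host attributes and the weighing function are made up purely for illustration:

    # Conceptual filter-then-weigh host selection; not nova-scheduler's code.
    hosts = [
        {'name': 'compute-01', 'free_ram_mb': 2048, 'free_disk_gb': 40},
        {'name': 'compute-02', 'free_ram_mb': 8192, 'free_disk_gb': 10},
        {'name': 'compute-03', 'free_ram_mb': 4096, 'free_disk_gb': 80},
    ]
    request = {'ram_mb': 2048, 'disk_gb': 20}

    # Filtering: discard hosts that cannot satisfy the request at all
    candidates = [h for h in hosts
                  if h['free_ram_mb'] >= request['ram_mb'] and
                  h['free_disk_gb'] >= request['disk_gb']]

    # Weighing: rank the remaining hosts by their available resources and
    # pick the best candidate
    best = max(candidates,
               key=lambda h: h['free_ram_mb'] + 100 * h['free_disk_gb'])
    print(best['name'])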

Provisioning a VM under the hood

It is important to understand how the different services in OpenStack work together to bring up a running virtual machine. We have already seen how a request is processed in OpenStack via its APIs.

Let's figure out how things work by referring to the following simple architecture diagram:

The process of launching a virtual machine involves the interaction of the main OpenStack services that form the building blocks of an instance, including compute, network, storage, and the base image. As shown in the previous diagram, OpenStack services interact with each other via a message bus to submit and retrieve RPC calls. Information about each step of the provisioning process is verified and passed between the different OpenStack services via the message bus. From an architectural perspective, the subsystem calls are defined and handled by the OpenStack API endpoints of Nova, Glance, Cinder, and Neutron.
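As a rough illustration of what such an RPC call looks like in code, the following sketch uses the oslo.messaging library that OpenStack services build on. The broker URL, topic, method name, and arguments are all illustrative assumptions; a real call only succeeds if a worker is consuming that topic:

    # A minimal RPC client sketch over the message bus using oslo.messaging.
    # Broker URL, topic, and method name are illustrative placeholders.
    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(
        cfg.CONF, url='rabbit://openstack:secret@controller:5672/')
    target = oslo_messaging.Target(topic='demo_scheduler')
    client = oslo_messaging.RPCClient(transport, target, timeout=10)

    # Synchronous RPC: publishes the request on the bus and blocks until a
    # worker consuming the 'demo_scheduler' topic replies (or it times out)
    host = client.call({}, 'select_destination', ram_mb=2048)
    print(host)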

On the other hand, API intercommunication within OpenStack requires a trusted authentication mechanism, which is where Keystone comes in.

Starting with the identity service, the following steps briefly summarize the provisioning workflow based on API calls in OpenStack (a client-side sketch of these calls follows the list):

  • Calling the identity service for authentication
  • Generating a token to be used for subsequent calls
  • Contacting the image service to list and retrieve a base image
  • Processing the request to the compute service API
  • Processing compute service calls to determine security groups and keys
  • Calling the network service API to determine available networks
  • Choosing the hypervisor node by the compute scheduler service
  • Calling the block storage service API to allocate a volume to the instance
  • Spinning up the instance in the hypervisor via the compute service API call
  • Calling the network service API to allocate network resources to the instance
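The following client-side sketch, written against the openstacksdk library, follows roughly the same sequence. The cloud entry in clouds.yaml and the image, flavor, network, and key pair names are assumptions and must exist in your deployment:

    # Boot an instance via the OpenStack APIs using openstacksdk.
    # 'mycloud', image, flavor, network, and key pair names are illustrative.
    import openstack

    # Authenticates against Keystone using the 'mycloud' entry in clouds.yaml
    conn = openstack.connect(cloud='mycloud')

    # Consult the image, compute, and network services through their endpoints
    image = conn.image.find_image('cirros-0.3.5')
    flavor = conn.compute.find_flavor('m1.small')
    network = conn.network.find_network('private')

    # Nova validates the request, its scheduler picks a hypervisor node,
    # Neutron allocates the port, and the instance is spawned
    server = conn.compute.create_server(
        name='demo-instance',
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{'uuid': network.id}],
        key_name='demo-key')
    server = conn.compute.wait_for_server(server)
    print(server.status)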

It is important to keep in mind that the tokens handled in OpenStack on every API call and service request are time-limited. One of the major causes of a failed provisioning operation in OpenStack is the expiration of the token during subsequent API calls. Additionally, token management has changed across OpenStack releases. Prior to the Liberty release, two different approaches were used:

  • Universally Unique Identifier (UUID): With Keystone version 2, a UUID token is generated and passed along with every API call between client services, and sent back to Keystone for validation. This approach has been shown to degrade the performance of the identity service.
  • Public Key Infrastructure (PKI): With Keystone version 3, tokens are no longer validated by Keystone on each API call. API endpoints can verify a token by checking the signature that Keystone added when the token was initially generated.

Starting with the Kilo release, token handling in Keystone has progressed with the introduction of more sophisticated cryptographic token formats, such as Fernet. The new implementation helps tackle the performance issues observed with UUID and PKI tokens. Fernet is fully supported in the Mitaka release, and the community is pushing to adopt it as the default. PKI tokens, on the other hand, are deprecated in favor of Fernet tokens in releases after Kilo.
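Keystone's Fernet tokens are based on the Fernet symmetric-encryption format provided by the Python cryptography library. The following conceptual sketch (not Keystone's actual implementation) shows why such tokens need no database persistence: any node that holds the signing key can both issue and validate them:

    # Conceptual illustration of Fernet-style tokens; not Keystone's code.
    from cryptography.fernet import Fernet

    # In Keystone, this key lives in the Fernet key repository shared by all
    # identity nodes (by default /etc/keystone/fernet-keys/)
    key = Fernet.generate_key()
    f = Fernet(key)

    # "Issue" a token: the payload is encrypted and signed with the key
    token = f.encrypt(b'user=demo;project=demo;scope=project')

    # "Validate" it anywhere the key is available, enforcing a lifetime
    print(f.decrypt(token, ttl=3600))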

More advanced topics regarding additions introduced in Keystone are covered briefly in Chapter 3, OpenStack Cluster – The Cloud Controller and Common Services.