Gathering the pieces and building a picture

Let's see how OpenStack works by chaining together the core services covered in the previous sections in a series of steps:

  1. A user accesses the OpenStack environment via a web interface (HTTP/REST).
  2. Authentication is the first action performed. This is where Keystone comes into the picture.
  3. A conversation is started with Keystone: "Hey, I would like to authenticate; here are my credentials".
  4. Once the credentials have been accepted, Keystone responds: "OK, you are authenticated; here is your token".
  5. You may remember that the service catalog comes along with the token; it is what allows you to access resources. Now you have it!
  6. The service catalog, in its turn, responds: "Here are the resources available, so you can go through and get what you need from your accessible list".

    Note

    The service catalog is a JSON structure that exposes the resources available upon a token request.

    You can use the following example of querying by tenant to get a list of servers:

    $ curl -v -H "X-Auth-Token:token" http://192.168.27.47:8774/v2/tenant_id/servers
    

    Server details are returned, including links that describe how to access each server:

    {
        "server": {
            "adminPass": "verysecuredpassword",
            "id": "5aaee3c3-12ee-7633-b32b-635489236232fbfbf",
            "links": [
                {
                    "href": "http://myopenstack.com/v2/openstack/servers/5aaee3c3-12ee-7633-b32b-635489236232fbfbf",
                    "rel": "self"
                },
                {
                    "href": "http://myopenstack.com/v2/openstack/servers/5aaee3c3-12ee-7633-b32b-635489236232fbfbf",
                    "rel": "bookmark"
                }
            ]
        }
    }
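
    The same exchange can be scripted. As a minimal sketch (not verbatim book code), the following uses Python's requests library against the Identity v2.0 API of this OpenStack release; the endpoints, credentials, and tenant ID are placeholders:

    import requests

    # Placeholder endpoints and credentials -- adjust for your environment
    KEYSTONE_URL = "http://192.168.27.47:5000/v2.0/tokens"
    NOVA_URL = "http://192.168.27.47:8774/v2/tenant_id/servers"

    # Step 1: authenticate against Keystone to obtain a token
    payload = {
        "auth": {
            "tenantName": "demo",
            "passwordCredentials": {"username": "admin",
                                    "password": "secret"},
        }
    }
    resp = requests.post(KEYSTONE_URL, json=payload)
    resp.raise_for_status()
    access = resp.json()["access"]
    token = access["token"]["id"]

    # Step 2: the same response carries the service catalog
    for service in access["serviceCatalog"]:
        print(service["type"], service["endpoints"][0]["publicURL"])

    # Step 3: use the token to list servers, as in the curl example above
    servers = requests.get(NOVA_URL, headers={"X-Auth-Token": token})
    print(servers.json())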
  7. Typically, once authenticated, you can talk to an API node. There are different APIs in the OpenStack ecosystem, such as the OpenStack API and the EC2 API.
  8. Once we authenticate and request access, the following services do the homework under the hood:
    • Compute nodes that deal with the hypervisor
    • Volume services that deal with storage
    • Network services that manage the VLANs and virtual network interfaces so that everything is connected and can talk together
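
    Each of these services can also be driven from code through its client library. Here is a hedged sketch using the python-novaclient library of this era; the credentials and Keystone endpoint are placeholders:

    from novaclient import client as nova_client

    # Placeholder credentials: username, password, tenant, auth URL
    nova = nova_client.Client("2", "admin", "secret", "demo",
                              "http://192.168.27.47:5000/v2.0/")

    # Compute: list the instances running on the hypervisors
    print(nova.servers.list())

    # Flavors describe the CPU/RAM/disk shapes an instance can take
    print(nova.flavors.list())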

    The next figure summarizes the main pieces of how OpenStack works:

    [Figure: Gathering the pieces and building a picture]
  9. However, how do we get these services to talk to each other? In such cases, you should think about the wondrous connector: the RabbitMQ queuing system.

    For anyone who is unfamiliar with queuing systems, consider the example of a busy airport:

    You have booked a flight and have been assigned a specific gate that only you are interested in. This gate gets you directly to your seat on the plane. In the same way, a queuing system allows you to tune in to exactly the server or service that you are interested in.

    A queuing system takes care of questions such as: who wants to do the work? By analogy, although everybody can hear the airport announcement channel, only the passengers bound for that flight act on the information by heading to their gate.

    Now, we have this information in the queue.
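
    To make the analogy concrete, here is a minimal sketch (not OpenStack's actual RPC code) that publishes and consumes a message on RabbitMQ with the pika library; the queue name and message body are illustrative:

    import pika

    # Connect to a RabbitMQ broker (placeholder host)
    connection = pika.BlockingConnection(
        pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="compute.node-1")  # our "gate"

    # A service publishes work for whoever listens at this gate
    channel.basic_publish(exchange="",
                          routing_key="compute.node-1",
                          body="create instance i-000001")

    # Only the worker tuned in to this gate picks the message up
    def on_message(ch, method, properties, body):
        print("Received:", body.decode())

    channel.basic_consume(queue="compute.node-1",
                          on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()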

    Note

    If you have a look at the Python source tree of any service, you will see a network directory for the network code, and an api.py file for each of these services.

    Let's take an example. If you want to create an instance on a compute node, the code effectively says, "import the nova-compute API; there is a method there to create the instance". It then does all the work of going over the wire and spinning up the server instance on the appropriate node.

  10. Another element of the picture is the scheduler, which looks at the services and reports, "this is what you have available as memory, CPU, disk, network, and so on".

    When a new request comes in, the scheduler decides which of the available resources it will get.

    Note

    The scheduling process in OpenStack can use different algorithms, such as simple, chance, and zone. A more advanced approach is filtering and weighting, which ranks hosts according to their available resources.

    Using this option, the scheduler picks the node that will spin up the server while letting you define your own rules. Here, you distribute your servers based on the number of processors and the amount of memory you want for your new servers.
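
    As a toy illustration of the filter-and-weigh idea (this is not the nova-scheduler source; the host data and the weighting rule are made up):

    # Toy filter-and-weigh scheduler -- illustrative only
    hosts = [
        {"name": "compute-1", "free_ram_mb": 4096, "free_vcpus": 2},
        {"name": "compute-2", "free_ram_mb": 16384, "free_vcpus": 8},
        {"name": "compute-3", "free_ram_mb": 2048, "free_vcpus": 4},
    ]
    request = {"ram_mb": 4096, "vcpus": 2}

    # Filter: drop hosts that cannot satisfy the request at all
    candidates = [h for h in hosts
                  if h["free_ram_mb"] >= request["ram_mb"]
                  and h["free_vcpus"] >= request["vcpus"]]

    # Weigh: rank the survivors by the capacity they have left
    def weight(host):
        return host["free_ram_mb"] + 1024 * host["free_vcpus"]

    best = max(candidates, key=weight)
    print("Scheduling instance on", best["name"])  # compute-2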

    The last piece of this picture is getting the information back. We have all these services doing something; remember that each has its special airport gate. The queue sends notifications as these actions occur, and services can subscribe to them to find out certain things, such as whether the network is up, the server is ready, or the server has crashed.

Provisioning a flow under the hood

It is important to understand how the different services in OpenStack work together, leading to a running virtual machine. We have already seen how a request is processed in OpenStack via APIs. Now, we can go further and closely check how these services and subsystems, which include authentication, compute, images, networking, queuing, and databases, work in tandem to perform a complete workflow and provide an instance in OpenStack. The next series of steps describes how the service components work together once an instance provisioning request has been submitted:

  1. A client enters the user credentials via Horizon, which makes the REST call to Keystone for authentication.
  2. The authentication request will be handled by Keystone, which generates and sends back an authentication token. The token will be stored by Keystone and used to authenticate against the rest of the OpenStack components via their APIs.
  3. The Launch Instance action in the dashboard converts the new instance creation request into an API request, which is sent to the nova-api service.
  4. The nova-api service receives the request and sends the token to Keystone for validation and access permission.
  5. Keystone checks the token and sends an authentication validation, which includes roles and permissions.
  6. The nova-api service then creates an initial entry for the instance in the database and contacts the queuing system via an RPC call (rpc.cast); a sketch of the cast/call pattern follows this list. The call request will be sent to nova-scheduler to determine which host will run the instance.
  7. The nova-scheduler picks up the new instance request from the queue.
  8. The nova-scheduler gathers information from the database to find the most appropriate host, based on its weighting and filtering algorithms.
  9. Once a host has been chosen, the nova-scheduler sends an RPC call (rpc.cast) to the queue to launch the instance on that host.
  10. The nova-compute service contacts the queue and picks up the call issued by the nova-scheduler. It then takes up the new instance request and sends an RPC call (rpc.call) to get instance-related information, such as the instance characteristics (CPU, RAM, and disk) and the host ID. The RPC call remains in the queue.
  11. The nova-conductor contacts the queue and picks up the call.
  12. The nova-conductor interrogates the database to get the instance information and publishes its state in the queue.
  13. The nova-compute picks up the instance information from the queue and sends an authentication token in a REST call to the glance-api to get a specific image URI from Glance.

    The image URI is resolved by the image ID to locate the requested image in the image repository.

  14. The glance-api will verify the authentication token with Keystone.
  15. Once validated, the glance-api returns the image URI, including its metadata, which specifies the location details of the requested image.

    Note

    If the images are stored in a Swift cluster, they will be requested as Swift objects via REST calls. Keep in mind that it is not the job of nova-compute to fetch from the Swift storage; Swift will interface via APIs to perform the object requests. More details about this will be covered in Chapter 4, Learning OpenStack Storage – Deploying the Hybrid Storage Model.

  16. The nova-compute sends the authentication token to a neutron-server via a REST call to configure the network for the instance.
  17. The neutron-server checks the token with Keystone.
  18. Once validated, the neutron-server contacts its agents, such as the neutron-l2-agent and neutron-dhcp-agent, by submitting the request in the queue.
  19. The Neutron agents pick up the calls from the queue and reply by sending network information pertaining to the instance. For example, the neutron-l2-agent gets the L2 configuration from Libvirt and publishes it in the queue, while the neutron-dhcp-agent contacts dnsmasq for the IP allocation and returns an IP reply in the queue.

    Note

    dnsmasq is software that provides network infrastructure services, such as a DNS forwarder and a DHCP server.

  20. The neutron-server collects all the network settings from the queue and records them in the database. It then sends an RPC call back to the queue along with all the network details.
  21. Nova-compute contacts the queue and grabs the instance network configuration.
  22. Nova-compute sends the authentication token to cinder-api via a REST call to get the volume, which will be attached to the instance.
  23. The cinder-api checks the token with Keystone.
  24. Once validated, the cinder-api returns the volume information to the queue.
  25. Nova-compute contacts the queue and grabs the block storage information.
  26. At this stage, the nova-compute executes a request to the specified hypervisor via Libvirt to start the virtual machine.
  27. In order to get the instance state, nova-compute sends an RPC call (rpc.call) to nova-conductor.
  28. The nova-conductor picks the call from the queue and replies to the queue by mentioning the new instance state.
  29. Polling of the instance state is performed via nova-api, which consults the database to get the state information and sends it back to the client.
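
The rpc.cast versus rpc.call distinction that runs through these steps is simple: a cast is fire-and-forget, while a call blocks until the remote side replies. The following is a minimal sketch of that pattern using the oslo.messaging library that OpenStack services build on; the transport URL, topic, and method names here are illustrative, not nova's actual ones:

    import oslo_messaging as messaging
    from oslo_config import cfg

    # Placeholder transport URL pointing at the RabbitMQ broker
    transport = messaging.get_transport(
        cfg.CONF, url="rabbit://guest:guest@localhost:5672/")

    # Address the compute topic, much as nova-scheduler would
    target = messaging.Target(topic="compute")
    client = messaging.RPCClient(transport, target)
    ctxt = {}  # simplified request context

    # rpc.cast: fire-and-forget, no reply expected
    client.cast(ctxt, "run_instance", instance_id="i-000001")

    # rpc.call: blocks until the remote service returns a result
    state = client.call(ctxt, "get_instance_state", instance_id="i-000001")
    print(state)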

Let's see how all of these pieces fit together by referring to the following simple architecture diagram:

[Figure: Provisioning a flow under the hood]

Expanding the picture

As your deployment grows, you may hit certain limitations that are typically associated with network switches. Network switches have to carry a large number of VLANs and virtual networks, which can become a constraint as more traffic enters the data center.

Let's imagine a scenario with 250 compute hosts. You can conclude that a mesh of rack servers will be placed in the data center.

Now, we take the step to grow our data center and become geographically redundant across Europe and Africa: a data center each in London, Amsterdam, and Tunis.

We have a data center in each of these new locations, and each location is able to communicate with the others. At this point, a new term is introduced: the cell concept.

To scale this out even further, we take the entire system into consideration: we take just the worker nodes and put them in separate cells.

A special scheduler works at the top-level cell and routes each request into a child cell. The child cells then do the work, and they can worry about the VLAN and network issues.
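
As a hedged illustration drawn from the Havana-era cells (v1) setup referenced in the note at the end of this section, the parent and child cells are wired up in nova.conf roughly as follows; the cell names are placeholders:

    # nova.conf on the top-level (API) cell
    [cells]
    enable = True
    name = api
    cell_type = api

    # nova.conf on a child (compute) cell, for example in London
    [cells]
    enable = True
    name = london
    cell_type = compute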

The cells can share certain pieces of infrastructure, such as the database, the Keystone authentication service, and some of the Glance image services. This is depicted in the following diagram:

[Figure: Expanding the picture]

Note

More information about the concept of cells and their configuration in OpenStack (Havana release) can be found at the following reference: http://docs.openstack.org/havana/config-reference/content/section_compute-cells.html.
