Configuring cloud routing

Instances within the same virtual tenant network can reach each other, but by default, a tenant network cannot reach other tenant networks or external networks. Deploying virtual routers is the way to enable L3 network communication: a tenant virtual network gains connectivity by associating one of its subnets with a router.

Routing tenant traffic

Under the hood, a port attached to a tenant virtual network is assigned the IP address of the subnet gateway. Instances on different virtual networks reach each other by communicating through the virtual router, using the gateway IP address as the next hop while their private IP addresses are carried in the packets; traffic leaving the cloud is additionally handled by a NAT (network address translation) mechanism. In OpenStack networking, the Neutron L3 agent manages virtual routers. A virtual router forwards IP packets to the different self-service and external networks through the following router interfaces:

  • qr: Carries the tenant network gateway IP address and is dedicated to routing traffic between the self-service networks
  • qg: Carries the external network gateway IP address and is dedicated to routing traffic out onto the external provider network

Upon the initiation of a virtual router instance, a network namespace is created on the Neutron node; it defines the connections to self-service and external provider networks through routing tables, packet forwarding, and iptables rules. As shown in the following diagram, a router namespace represents a virtual router attached to multiple bridge ports in an OVS configuration (qr and qg). Traffic between instances hosted on the same or different compute nodes is routed through the virtual router.

Figure 6.8 – Neutron router namespace connectivity based on OVS implementation

In the preceding OVS implementation, the L3 agent must be up and running before virtual routers can be created and managed.
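
To confirm that the agent is up, the Neutron agent list can be filtered by type; a quick check, assuming the OpenStack CLI is available on the cloud controller node:

$ openstack network agent list --agent-type l3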

Important note

The router service plugin should be enabled by default once the Neutron L3 agent is deployed using kolla-ansible. The service plugin can be verified in the /etc/neutron/neutron.conf file on the Neutron cloud controller node by checking for the service_plugins = router line.
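
A minimal excerpt of what to expect in /etc/neutron/neutron.conf (your deployment may list additional plugins alongside router):

[DEFAULT]
service_plugins = router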

Optionally, virtual routers can be managed through Horizon. The kolla-ansible run for the Neutron deployment should enable the router module in the dashboard. That can be verified in the cloud controller node’s /etc/openstack-dashboard/local_settings.py file with the following settings:

...
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
...

In the next exercise, we will create a tenant virtual network with the ML2 plugin configured for OVS. A virtual router will then be attached to the tenant network using the OpenStack CLI, as follows:

  1. Create a tenant virtual network using the OpenStack network CLI:
    $ openstack network create network_pp
  2. Create a subnet with an IP range of 10.10.0.0/24, DHCP enabled (the default), and 8.8.8.8 as the DNS nameserver:
    $ openstack subnet create --subnet-range 10.10.0.0/24 --network network_pp --dns-nameserver 8.8.8.8 priv_subnet
  3. Create a router and attach it to the created tenant network:
    $ openstack router create router_pp
    $ openstack router add subnet router_pp priv_subnet
  4. The router attachment to the tenant network assigns a private IP address to the router’s internal interface. By default, if no IP address is specified in the attachment command line, the internal interface is assigned the default gateway address of the subnet. The following command verifies the assigned IP address of the router’s internal interface:
    $ openstack port list --router router_pp

    It gives the following output:

Figure 6.9 – Virtual router port listing

  5. The router interface, prefixed with qr-, can be checked within the router namespace by running the following command:
    $ ip netns

    The output looks like the following:

Figure 6.10 – The network namespaces list

  6. Copy the qrouter namespace ID from the previous output, and then run the following command line to show the created interface of the router and the IP address assigned from the internal network:
    $ ip netns exec qrouter-3a211622-11da-9687-bda1-acae3d74ad12 ip addr show

    We will get the following output:

Figure 6.11 – A virtual router namespace internal interface listing
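
As a cross-check, the address shown on the qr- interface should match the subnet gateway; a quick hedged verification:

$ openstack subnet show priv_subnet -c gateway_ip -c allocation_pools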

To enable instances to reach the external network, a second interface should be created on the virtual router and attached to the provider’s external network. A common networking setup places an external network device, or a device integrated with a firewall appliance, in front of the OpenStack endpoint (such as the load balancer). Internet access can then be configured using the OpenStack CLI, as follows:

  1. Create an external provider network. As configured in the ML2 plugin for OVS, we can create the network as a VLAN type, with a segmentation ID of 40 and physnet1 as the physical network attribute:
    $ openstack network create --external --provider-network-type vlan --provider-segment 40 --provider-physical-network physnet1 external_pp
  2. Create the subnet of the external network with a network range of 10.20.0.0/24 and a default gateway of 10.20.0.1 (assigned automatically as the first address of the range), disable DHCP, and set an allocation pool of 10.20.0.10-10.20.0.100:
    $ openstack subnet create --subnet-range 10.20.0.0/24 --no-dhcp --network external_pp --allocation-pool start=10.20.0.10,end=10.20.0.100 pub_subnet
  3. Attach the router to an external provider network by running the following command line:
    $ openstack router set --external-gateway external_pp router_pp
  4. The attachment operation will assign an external IP from the external IP pool to the router, which can be checked by running the following command line:
    $ openstack port list --router router_pp

    Here is the output:

Figure 6.12 – A created external port of the virtual router

  5. The last attachment creates a second interface in the router namespace, prefixed with qg-, which can be verified with the following command line:
    $ ip netns exec qrouter-3a211622-11da-9687-bda1-acae3d74ad12 ip addr show

    Here is the output:

Figure 6.13 – A virtual router namespace external interface listing

The next part of our connectivity demonstration involves creating security groups and rules to allow ingress and egress traffic at the network port level. To reach the internet and access the instances, we will create a new security group and add ICMP and SSH access:

  1. Using the OpenStack CLI, create a new security group to be applied to your instances:
    $ openstack security group create SG_pp
  2. Create the rules associated with the created security group for SSH and ICMP, respectively:
    $ openstack security group rule create SG_pp --protocol tcp --dst-port 22
    $ openstack security group rule create SG_pp --protocol icmp
  3. Using the OpenStack CLI, create a new test instance with a tiny flavor and a cirros image that is connected to the private tenant network:
    $ openstack server create --flavor tiny --image cirros-0.5.2 --nic net-id=network_pp --security-group SG_pp instance_pp

Important note

Make sure to adjust the Glance image name and flavor based on your existing resources. The openstack server create command line will fail if any of the assigned arguments do not exist. CirrOS is a minimal Linux distribution, useful for quick testing and proofs of concept. The default session username is cirros and the password is gocubsgo.
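
Once the instance is spawned, it is worth confirming that the security group was actually applied; a quick hedged check:

$ openstack server show instance_pp -c security_groups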

  4. To test connectivity from the instance to the internet, first make sure that the instance state is ACTIVE, as follows:
    $ openstack server list

    Here is the output:

Figure 6.14 – The instance listing

  5. The created instance can be accessed in different ways, using the virsh console command line from the compute node or simply via SSH from the router namespace. Make sure to use the default CirrOS image credentials (username cirros, password gocubsgo), and run a simple ping to reach 8.8.8.8:
    $ ip netns exec qrouter-3a211622-11da-9687-bda1-acae3d74ad12 ssh cirros@10.10.0.12

    Here is the output:

Figure 6.15 – Testing external connectivity

  6. The instance uses the virtual router’s internal IP address as its default gateway to route traffic toward the external network. The internet is reached through SNAT (source NAT), performed by the router. A quick run of the ip route command in the instance shows the default gateway in the network routing table:

Figure 6.16 – Listing the default gateway
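
Under the hood, the SNAT translation is implemented as iptables rules inside the router namespace; a hedged way to inspect them, reusing the namespace ID from the earlier listing:

$ ip netns exec qrouter-3a211622-11da-9687-bda1-acae3d74ad12 iptables -t nat -S | grep SNAT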

The next part of our walk-through demonstrates how resources hosted in external networks can reach instances in an OpenStack environment. By default, spawned instances are assigned IP addresses that are not visible outside of the tenant network. Neutron provides floating IP addresses that implement DNAT (destination NAT): the router forwards incoming packets reaching its external interface to the destination instance by checking its configured DNAT rules. Response traffic from an instance to external resources has its source IP address translated to the floating IP, as demonstrated in the following steps:

  1. Extract the port ID of the created instance so that a floating IP address can be associated with it:
    $ openstack port list --server instance_pp

    Here is the output:

Figure 6.17 – Listing the instance port

  2. Copy the port ID, and run the following command line, pasting the instance’s port ID after the --port option:
    $ openstack floating ip create --port 524ead12-33da-dc11-e3a1-dc34e6da1c81 external_pp

    Here is the output:

Figure 6.18 – Assigning the floating IP address to the external port

  3. Under the hood, the router namespace configures a secondary address on the external interface, prefixed with qg-:
    $ ip netns exec qrouter-3a211622-11da-9687-bda1-acae3d74ad12 ip addr show

    Here is the output:

Figure 6.19 – Associating the IP address with the virtual router external interface

Traffic routed through the external network provider can reach the instance via the assigned floating IP.
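
The corresponding DNAT rules can be inspected in the same router namespace, and a host attached to the external provider network should now be able to reach the instance; a hedged check (the floating IP shown is an example from the 10.20.0.0/24 pool):

$ ip netns exec qrouter-3a211622-11da-9687-bda1-acae3d74ad12 iptables -t nat -S | grep DNAT
$ ping -c 3 10.20.0.15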

So far, we have explored a standard routing implementation in OpenStack. The next section will uncover another way of performing routing, via dynamic routing, in Neutron.

Neutron dynamic routing

Dynamic routing in OpenStack networking is based on BGP (Border Gateway Protocol), enabling tenant networks to advertise their network prefixes to physical or virtual routers and network devices that support BGP. The addition of BGP to Neutron eliminates the need for floating IPs and removes the reliance on network administrators to advertise tenant networks upstream. Dynamic routing was introduced in Neutron with the Mitaka release. The adoption of the BGP routing mechanism varies from one cloud environment to another, depending on the networking setup, mostly due to the requirement for direct connectivity between the network node and the physical network gateway device to peer with (such as a LAN or WAN peer). To avoid IP overlaps when advertising IP prefixes, dynamic routing relies on address scopes and subnet pools, the Neutron mechanisms that control subnet address allocation and prevent the use of overlapping addresses.
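
To illustrate those mechanisms, the following hedged sketch creates a shared address scope, a subnet pool bound to it, and a subnet allocated from the pool (the names scope_pp, pool_pp, and scoped_subnet and the 10.30.0.0/16 prefix are illustrative):

$ openstack address scope create --share --ip-version 4 scope_pp
$ openstack subnet pool create --pool-prefix 10.30.0.0/16 --address-scope scope_pp pool_pp
$ openstack subnet create --subnet-pool pool_pp --prefix-length 24 --network network_pp scoped_subnet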

At the heart of the BGP implementation, Neutron introduces the BGP speaker, which enables peering between tenant networks and external router devices. The BGP speaker advertises the tenant networks to the external peer, with the tenant router as the first hop. The BGP speaker in Neutron is not a router instance, nor does it manipulate BGP routes; it mainly orchestrates the BGP peering information between the tenant routers and the external ones, and it requires a network or cloud operator to configure the peering endpoints. As shown in the following diagram, for successful BGP dynamic routing in OpenStack, the Neutron virtual router must be attached to both the tenant subnet and the external provider network.

Figure 6.20 – Neutron BGP peering and router connectivity for dynamic routing

Both the BGP speaker and the external provider device must be peered with (connected to) the Neutron virtual router. Finally, both the tenant and the external networks must be in the same address scope. Adding BGP dynamic routing using kolla-ansible is straightforward; we will configure dynamic routing based on the OVS implementation.

Important note

Since the Antelope release, Neutron has supported dynamic routing with the OVN mechanism driver.

As Neutron provides a BGP agent with a default configuration, we just need to enable the agent installation on the network node by adding the following lines to the /ansible/inventory/multi_packtpub_prod inventory file:

...
[neutron-bgp-dragent:children]
neutron

Enable the agent installation in the globals.yml file, as follows:

enable_neutron_bgp_dragent: "yes"
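
If the change is not rolled out through a CI pipeline, a hedged equivalent is to run the deployment manually against the same inventory, limiting the run to Neutron with a tag filter:

$ kolla-ansible -i /ansible/inventory/multi_packtpub_prod deploy --tags neutron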

Launch the pipeline to roll out the BGP agent installation on the network node. Once deployed, the installed BGP agent can be checked by running the following command line:

$ openstack network agent list --agent-type bgp

Here is the output:

Figure 6.21 – Listing the BGP network agent

The Neutron BGP CLI to manage BGP speakers can be found at https://docs.openstack.org/python-openstackclient/latest/cli/plugin-commands/neutron.html.
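
As a hedged sketch of what the peering setup looks like with that CLI (the AS numbers, peer address, and names below are illustrative, not taken from this deployment):

$ openstack bgp speaker create --local-as 64512 --ip-version 4 bgp_speaker_pp
$ openstack bgp peer create --peer-ip 10.20.0.1 --remote-as 64513 bgp_peer_pp
$ openstack bgp speaker add peer bgp_speaker_pp bgp_peer_pp
$ openstack bgp speaker add network bgp_speaker_pp external_pp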

Virtual routers are among the building blocks of OpenStack networking, providing a variety of connectivity options to tenants. It is important to note that running a standalone router presents a single point of failure; Chapter 7, Running a Highly Available Cloud – Meeting the SLA, will discuss the Neutron implementation of highly available routers. Dynamic routing via BGP is a powerful routing addition to Neutron, which has been enriched and reworked across OpenStack releases. One of the major changes is the development of new networking services in OpenStack, which will be discussed in the next section.
