
Learning OpenStack Networking: Build a solid foundation in virtual networking technologies for OpenStack-based clouds, Third Edition



Introduction to OpenStack Networking

In today's data centers, networks are composed of more devices than ever before. Servers, switches, routers, storage systems, and security appliances that once consumed rows and rows of data center space now exist as virtual machines and virtual network appliances. These devices place a large strain on traditional network management systems, which cannot provide a scalable, automated approach to managing next-generation networks. Users now expect more control over and flexibility in the infrastructure, with quicker provisioning, all of which OpenStack promises to deliver.

This chapter will introduce many features that OpenStack Networking provides, as well as various network architectures supported by OpenStack. Some topics that will be covered include the following:

  • Features of OpenStack Networking
  • Physical infrastructure requirements
  • Service separation

What is OpenStack Networking?

OpenStack Networking is a pluggable, scalable, and API-driven system to manage networks in an OpenStack-based cloud. Like other core OpenStack components, OpenStack Networking can be used by administrators and users to increase the value and maximize the utilization of existing data center resources.

Neutron, the project name for the OpenStack Networking service, complements other core OpenStack services such as Compute (Nova), Image (Glance), Identity (Keystone), Block Storage (Cinder), Object Storage (Swift), and Dashboard (Horizon) to provide a complete cloud solution.

OpenStack Networking exposes an application programming interface (API) to users and passes requests to the configured network plugins for additional processing. Users are able to define network connectivity in the cloud, and cloud operators can leverage different networking technologies to enhance and power the cloud.
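
As a concrete illustration, the following minimal sketch uses the OpenStack command-line client to define a network and subnet through the Networking API. The resource names and address range are arbitrary examples, not values required by Neutron:

    # Create a network and an IPv4 subnet on it (names and CIDR are examples)
    openstack network create my-network
    openstack subnet create --network my-network \
      --subnet-range 192.168.100.0/24 my-subnet

    # Confirm the network is visible to the current project
    openstack network list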

OpenStack Networking services can be split between multiple hosts to provide resiliency and redundancy, or they can be configured to operate on a single node. Like many other OpenStack services, Neutron requires access to a database for persistent storage of the network configuration. A simplified example of the architecture can be seen here:

Figure 1.1

In figure 1.1, the Neutron server connects to a database where the logical network configuration persists. The Neutron server can take API requests from users and services and communicate with agents via a message queue. In a typical environment, network agents will be scattered across controller and compute nodes and perform duties on their respective node.

Features of OpenStack Networking

OpenStack Networking includes many technologies you would find in the data center, including switching, routing, load balancing, firewalling, and virtual private networks.

These features can be configured to leverage open source or commercial software and provide a cloud operator with all the tools necessary to build a functional and self-contained cloud networking stack. OpenStack Networking also provides a framework for third-party vendors to build on and enhance the capabilities of the cloud.

Switching

A virtual switch is defined as a software application or service that connects virtual machines to virtual networks at the data link layer of the OSI model, also known as layer 2. Neutron supports multiple virtual switching platforms, including Linux bridges provided by the bridge kernel module and Open vSwitch. Open vSwitch, also known as OVS, is an open source virtual switch that supports standard management interfaces and protocols, including NetFlow, SPAN, RSPAN, LACP, and 802.1q VLAN tagging. However, many of these features are not exposed to the user through the OpenStack API. In addition to VLAN tagging, users can build overlay networks in software using L2-in-L3 tunneling protocols, such as GRE or VXLAN. Virtual switches can be used to facilitate communication between instances and devices outside the control of OpenStack, which include hardware switches, network firewalls, storage devices, bare-metal servers, and more.
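
Which switching technology is in use on a given host can be verified with standard tooling. The commands below are a brief sketch and assume the relevant packages (bridge-utils for Linux bridges, openvswitch for OVS) are installed on the node:

    # Linux bridge hosts: list bridges and their member interfaces
    brctl show
    ip link show type bridge

    # Open vSwitch hosts: list bridges, ports, and configured tunnels
    ovs-vsctl show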

Additional information on the use of Linux bridges and Open vSwitch as switching platforms for OpenStack can be found in Chapter 4, Virtual Network Infrastructure Using Linux Bridges, and Chapter 5, Building a Virtual Switching Infrastructure Using Open vSwitch, respectively.

Routing

OpenStack Networking provides routing and NAT capabilities through the use of IP forwarding, iptables, and network namespaces. Each network namespace has its own routing table, interfaces, and iptables rules that provide filtering and network address translation. By leveraging network namespaces to separate networks, there is no need to worry about overlapping subnets between networks created by users. Configuring a router within Neutron enables instances to interact and communicate with outside networks or other networks in the cloud.
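
This behavior can be observed on any node running the Neutron L3 agent. The sketch below assumes at least one router has been created; the namespace name is a placeholder, as Neutron names router namespaces qrouter- followed by the router's UUID:

    # List the network namespaces created by Neutron agents
    ip netns list

    # Inspect the routing table and NAT rules inside a router namespace
    # (substitute a router UUID from your environment)
    ip netns exec qrouter-<router-uuid> ip route
    ip netns exec qrouter-<router-uuid> iptables -t nat -S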

More information on routing within OpenStack can be found in Chapter 10, Creating Standalone Routers with Neutron, Chapter 11, Router Redundancy Using VRRP, and Chapter 12, Distributed Virtual Routers.

Load balancing

Load Balancing as a Service (LBaaS), first introduced in the Grizzly release of OpenStack, provides users with the ability to distribute client requests across multiple instances or servers. Users can create monitors, set connection limits, and apply persistence profiles to traffic traversing a virtual load balancer. OpenStack Networking is equipped with a plugin for LBaaS v2 that utilizes HAProxy in the open source reference implementation, but plugins are available that manage virtual and physical load-balancing appliances from third-party network vendors.
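
The exact client commands vary by release, but with the neutron client and the LBaaS v2 extension, building a simple HTTP load balancer has historically resembled the following sketch; the names, subnet, and member address are placeholders:

    # Create a load balancer on an existing subnet, then a listener, pool, and member
    neutron lbaas-loadbalancer-create --name lb1 web-subnet
    neutron lbaas-listener-create --name listener1 --loadbalancer lb1 \
      --protocol HTTP --protocol-port 80
    neutron lbaas-pool-create --name pool1 --listener listener1 \
      --protocol HTTP --lb-algorithm ROUND_ROBIN
    neutron lbaas-member-create --subnet web-subnet --address 10.30.0.5 \
      --protocol-port 80 pool1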

More information on the use of load balancers within Neutron can be found in Chapter 13, Load Balancing Traffic to Instances.

Firewalling

OpenStack Networking provides two API-driven methods of securing network traffic to instances: security groups and Firewall as a Service (FWaaS). Security groups find their roots in nova-network, the original networking stack for OpenStack built into the Compute service, and are based on Amazon's EC2 security groups. When using security groups in OpenStack, instances are placed into groups that share common functionality and rule sets. In a reference implementation, security group rules are implemented at the instance port level using drivers that leverage iptables or OpenFlow. Security policies built using FWaaS are also implemented at the port level, but can be applied to ports of routers as well as instances. The original FWaaS v1 API implemented firewall rules inside Neutron router namespaces, but that behavior has been removed in the v2 API.
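
As a short example, the commands below create a security group and allow inbound SSH and ICMP traffic to any port associated with it; the group name is arbitrary:

    # Create a security group and permit inbound SSH and ICMP from anywhere
    openstack security group create web-servers
    openstack security group rule create --protocol tcp --dst-port 22 \
      --remote-ip 0.0.0.0/0 web-servers
    openstack security group rule create --protocol icmp web-servers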

More information on securing instance traffic can be found in Chapter 8, Managing Security Groups. The use of FWaaS is outside the scope of this book.

Virtual private networks

A virtual private network (VPN) extends a private network across a public network such as the internet. A VPN enables a computer to send and receive data across public networks as if it were directly connected to the private network. Neutron provides a set of APIs to allow users to create IPSec-based VPN tunnels from Neutron routers to remote gateways when using the open source reference implementation. The use of VPN as a Service is outside the scope of this book.

Network functions virtualization

Network functions virtualization (NFV) is a network architecture concept that proposes virtualizing network appliances used for various network functions. These functions include intrusion detection, caching, gateways, WAN accelerators, firewalls, and more. In OpenStack, such workloads often rely on single root I/O virtualization (SR-IOV) for high-performance connectivity. Using SR-IOV, instances are no longer required to use para-virtualized drivers or to be connected to virtual bridges within the host. Instead, the instance is attached to a Neutron port that is associated with a virtual function (VF) on the NIC, allowing the instance to access the NIC hardware directly. Configuring and implementing SR-IOV with Neutron is outside the scope of this book.

OpenStack Networking resources

OpenStack gives users the ability to create and configure networks and subnets and to instruct other services, such as Compute, to attach virtual devices to ports on these networks. The Identity service gives cloud operators the ability to segregate users into projects. OpenStack Networking supports project-owned resources, allowing each project to have multiple private networks and routers. Projects can be left to choose their own IP addressing scheme, even if those addresses overlap with other project networks, or administrators can place limits on the size of subnets and the addresses available for allocation.

There are two types of networks that can be expressed in OpenStack:

  • Project/tenant network: A virtual network created by a project or administrator on behalf of a project. The physical details of the network are not exposed to the project.
  • Provider network: A virtual network created to map to a physical network. Provider networks are typically created to enable access to physical network resources outside of the cloud, such as network gateways and other services, and usually map to VLANs. Projects can be given access to provider networks.
The terms project and tenant are used interchangeably within the OpenStack community, with the former being the newer and preferred nomenclature.

A project network provides connectivity to resources in a project. Users can create, modify, and delete project networks. Each project network is isolated from other project networks by a boundary such as a VLAN or other segmentation ID. A provider network, on the other hand, provides connectivity to networks outside of the cloud and is typically created and managed by a cloud administrator.

The primary differences between project and provider networks can be seen during the network provisioning process. Provider networks are created by administrators on behalf of projects and can be dedicated to a particular project, shared by a subset of projects, or shared by all projects. Project networks are created by projects for use by their instances and cannot be shared with all projects, though sharing with certain projects may be accomplished using role-based access control (RBAC) policies. When a provider network is created, the administrator can provide specific details that aren't available to ordinary users, including the network type, the physical network interface, and the network segmentation identifier, such as a VLAN ID or VXLAN VNI. Project networks have these same attributes, but users cannot specify them. Instead, they are automatically determined by Neutron.
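
The difference is visible on the command line. In the hedged sketch below, an administrator creates a VLAN-based provider network by supplying physical details, while an ordinary user simply requests a network and lets Neutron choose the type and segment; the physical network label physnet1 and the VLAN ID are examples that must match your environment:

    # Administrator: create a provider network mapped to VLAN 100 on physnet1
    openstack network create --provider-network-type vlan \
      --provider-physical-network physnet1 --provider-segment 100 \
      provider-vlan100

    # Ordinary user: create a project network; Neutron selects the segmentation
    openstack network create my-project-network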

There are other foundational network resources that will be covered in further detail later in this book, but they are summarized here for your convenience:

  • Subnet: A block of IP addresses used to allocate ports created on the network.
  • Port: A connection point for attaching a single device, such as the virtual network interface card (vNIC) of a virtual instance, to a virtual network. Port attributes include the MAC address and the fixed IP address on the subnet.
  • Router: A virtual device that provides routing between self-service networks and provider networks.
  • Security group: A set of virtual firewall rules that control ingress and egress traffic at the port level.
  • DHCP: An agent that manages IP addresses for instances on provider and self-service networks.
  • Metadata: A service that provides data to instances during boot.

Virtual network interfaces

OpenStack deployments are most often configured to use the libvirt KVM/QEMU driver to provide platform virtualization. When an instance is booted for the first time, OpenStack creates a port for each network interface attached to the instance. A virtual network interface called a tap interface is created on the compute node hosting the instance. The tap interface corresponds directly to a network interface within the guest instance and has the properties of the port created in Neutron, including the MAC and IP address. Through the use of a bridge, the host can expose the guest instance to the physical network. Neutron allows users to specify alternatives to the standard tap interface, such as Macvtap and SR-IOV, by defining special attributes on ports and attaching them to instances.
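
These interfaces can be correlated with their Neutron ports on the compute node. The sketch below is illustrative only; the server name is an example, and in the reference implementations the tap device name is derived from the first characters of the port's UUID:

    # List the Neutron ports attached to an instance
    openstack port list --server my-instance

    # On the compute node, list tap devices and see where they are attached
    ip -o link show | grep tap
    brctl show                      # Linux bridge hosts
    ovs-vsctl list-ports br-int     # Open vSwitch hosts (integration bridge)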

Virtual network switches

OpenStack Networking supports many types of virtual and physical switches, and includes built-in support for Linux bridges and Open vSwitch virtual switches. This book will cover both technologies and their respective drivers and agents.

The terms bridge and switch are often used interchangeably in the context of OpenStack Networking, and may be used in the same way throughout this book.

Overlay networks

Neutron supports overlay networking technologies that provide network isolation at scale with little to no modification of the underlying physical infrastructure. To accomplish this, Neutron leverages L2-in-L3 overlay networking technologies such as GRE, VXLAN, and GENEVE. When configured accordingly, Neutron builds point-to-point tunnels between all network and compute nodes in the cloud using a predefined interface. These point-to-point tunnels create what is called a mesh network, where every host is connected to every other host. A cloud consisting of one combined controller and network node, and three compute nodes, would have a fully meshed overlay network that resembles figure 1.2:

Figure 1.2

Using the overlay network pictured in figure 1.2, traffic between instances or other virtual devices on any given host will travel between layer 3 endpoints on each of the underlying hosts without regard for the layer 2 network beneath them. Due to encapsulation, Neutron routers may be needed to facilitate communication between different project networks as well as networks outside of the cloud.

Virtual Extensible Local Area Network (VXLAN)

This book focuses primarily on VXLAN, an overlay technology that helps address scalability issues with VLANs. VXLAN encapsulates layer 2 Ethernet frames inside layer 4 UDP packets that can be forwarded or routed between hosts. This means that a virtual network can be transparently extended across a large network without any changes to the end hosts. In the case of OpenStack Networking, however, a VXLAN mesh network is commonly constructed only between nodes that exist in the same cloud.

Rather than use VLAN IDs to differentiate between networks, VXLAN uses a VXLAN Network Identifier (VNI) to serve as the unique identifier on a link that potentially carries traffic for tens of thousands of networks, or more. An 802.1q VLAN header supports up to 4,096 unique IDs, whereas a VXLAN header supports approximately 16 million unique IDs. Within an OpenStack cloud, virtual machine instances are unaware that VXLAN is used to forward traffic between hosts. The VXLAN Tunnel Endpoint (VTEP) on the physical node handles the encapsulation and decapsulation of traffic without the instance ever knowing.
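
Although the Neutron agents manage VTEPs automatically, the underlying mechanism can be demonstrated with iproute2 alone. The following sketch creates a standalone VXLAN interface with VNI 100 on top of a physical interface; the interface name, multicast group, and underlay device are examples and are not part of a Neutron deployment:

    # Create a VXLAN interface (VNI 100) over eth1 using the IANA-assigned UDP port
    ip link add vxlan100 type vxlan id 100 group 239.1.1.1 dev eth1 dstport 4789
    ip link set vxlan100 up

    # Frames sent through vxlan100 are encapsulated in UDP and routed between VTEPs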

Because VXLAN network traffic is encapsulated, many network devices cannot participate in these networks without additional configuration, if at all. As a result, VXLAN networks are effectively isolated from other networks in the cloud and require the use of a Neutron router to provide access to connected instances. More information on creating Neutron routers begins in Chapter 10, Creating Standalone Routers with Neutron.

While not as performant as VLAN or flat networks on some hardware, the use of VXLAN is becoming more popular in cloud network architectures where scalability and self-service are major drivers. Newer networking hardware that offers VXLAN offloading capabilities should be leveraged if you are considering implementing VXLAN-based overlay networks in your cloud.

More information on how VXLAN encapsulation works is described in RFC 7348, available at the following URL: https://tools.ietf.org/html/rfc7348

Generic Routing Encapsulation (GRE)

A GRE network is similar to a VXLAN network in that traffic from one instance to another is encapsulated and sent over a layer 3 network. A unique segmentation ID is used to differentiate traffic from other GRE networks. Rather than use UDP as the transport mechanism, GRE uses IP protocol 47. For various reasons, the use of GRE for encapsulating tenant network traffic has fallen out of favor now that VXLAN is supported by both Open vSwitch and Linux Bridge network agents.

More information on how GRE encapsulation works is described in RFC 2784 available at the following URL: https://tools.ietf.org/html/rfc2784

As of the Pike release of OpenStack, the Open vSwitch mechanism driver is the only commonly used driver that supports GRE.

Generic Network Virtualization Encapsulation (GENEVE)

GENEVE is an emerging overlay technology that resembles VXLAN and GRE, in that packets between hosts are designed to be transmitted using standard networking equipment without having to modify the client or host applications. Like VXLAN, GENEVE encapsulates packets with a unique header and uses UDP as its transport mechanism. GENEVE leverages the benefits of multiple overlay technologies such as VXLAN, NVGRE, and STT, and may supplant those technologies over time. The Open Virtual Network (OVN) mechanism driver relies on GENEVE as its overlay technology, which may speed up the adoption of GENEVE in later releases of OpenStack.

Preparing the physical infrastructure

Most OpenStack clouds are made up of physical infrastructure nodes that fit into one of the following four categories:

  • Controller node: Controller nodes traditionally run the API services for all of the OpenStack components, including Glance, Nova, Keystone, Neutron, and more. In addition, controller nodes run the database and messaging servers, and are often the point of management of the cloud via the Horizon dashboard. Most OpenStack API services can be installed on multiple controller nodes and can be load balanced to scale the OpenStack control plane.
  • Network node: Network nodes traditionally run DHCP and metadata services and can also host virtual routers when the Neutron L3 agent is installed. In smaller environments, it is not uncommon to see controller and network node services collapsed onto the same server or set of servers. As the cloud grows in size, most network services can be broken out between other servers or installed on their own server for optimal performance.
  • Compute node: Compute nodes traditionally run a hypervisor such as KVM, Hyper-V, or Xen, or container software such as LXC or Docker. In some cases, a compute node may also host virtual routers, especially when Distributed Virtual Routing (DVR) is configured. In proof-of-concept or test environments, it is not uncommon to see controller, network, and compute node services collapsed onto the same machine. This is especially common when using DevStack, a software package designed for developing and testing OpenStack code. All-in-one installations are not recommended for production use.
  • Storage node: Storage nodes are traditionally limited to running software related to storage such as Cinder, Ceph, or Swift. Storage nodes do not usually host any type of Neutron networking service or agent and will not be discussed in this book.

When Neutron services are broken out between many hosts, the layout of services will often resemble the following:

Figure 1.3

In figure 1.3, the Neutron API service, neutron-server, is installed on the controller node, while Neutron agents responsible for implementing certain virtual networking resources are installed on a dedicated network node. Each compute node hosts a network plugin agent responsible for implementing the network plumbing on that host. Neutron supports a highly available API service with a shared database backend, and it is recommended that the cloud operator load balance traffic to the Neutron API service when possible. Multiple DHCP, metadata, L3, and LBaaS agents should be implemented on separate network nodes whenever possible. Virtual networks, routers, and load balancers can be scheduled to one or more agents to provide a basic level of redundancy when an agent fails. Neutron even includes a built-in scheduler that can reschedule certain resources when an agent failure is detected.
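
Once the environment is running, the placement and health of these agents can be checked from any machine with the OpenStack client installed; the output will naturally vary by deployment:

    # Show every Neutron agent, the host it runs on, and whether it is alive
    openstack network agent list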

Configuring the physical infrastructure

Before the installation of OpenStack can begin, the physical network infrastructure must be configured to support the networks needed for an operational cloud. In a production environment, this will likely include a dedicated management VLAN used for server management and API traffic, a VLAN dedicated to overlay network traffic, and one or more VLANs that will be used for provider and VLAN-based project networks. Each of these networks can be configured on separate interfaces, or they can be collapsed onto a single interface if desired.
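
Where several of these networks share a single physical interface, VLAN subinterfaces are a common way to keep them separate. The following sketch uses iproute2 with example interface names and VLAN IDs; persistent configuration belongs in your distribution's network configuration files:

    # Tag management (VLAN 10) and overlay (VLAN 20) traffic on one interface
    ip link add link eth0 name eth0.10 type vlan id 10
    ip link add link eth0 name eth0.20 type vlan id 20
    ip link set eth0.10 up
    ip link set eth0.20 up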

The reference architecture for OpenStack Networking defines at least four distinct types of traffic that will be seen on the network:

  • Management
  • API
  • External
  • Guest

These traffic types are often categorized as control plane or data plane, terms used in networking to describe the purpose of the traffic. In this case, control plane traffic describes management, API, and other non-VM-related traffic, while data plane traffic represents traffic generated by, or directed to, virtual machine instances.

Although I have taken the liberty of splitting out the network traffic onto dedicated interfaces in this book, it is not necessary to do so to create an operational OpenStack cloud. In fact, many administrators and distributions choose to collapse multiple traffic types onto single or bonded interfaces using VLAN tagging. Depending on the chosen deployment model, the administrator may spread networking services across multiple nodes or collapse them onto a single node. The security requirements of the enterprise deploying the cloud will often dictate how the cloud is built. The various network and service configurations will be discussed in the upcoming sections.

Management network

The management network, also referred to as the internal network in some distributions, is used for internal communication between hosts for services such as the messaging service and database service, and can be considered as part of the control plane.

All hosts will communicate with each other over this network. In many cases, this same interface may be used to facilitate image transfers between hosts or some other bandwidth-intensive traffic. The management network can be configured as an isolated network on a dedicated interface or combined with another network as described in the following section.

API network

The API network is used to expose OpenStack APIs to users of the cloud and services within the cloud and can be considered as part of the control plane. Endpoint addresses for API services such as Keystone, Neutron, Glance, and Horizon are procured from the API network.

It is common practice to utilize a single interface and IP address for API endpoints and management access to the host itself over SSH. A diagram of this configuration is provided later in this chapter.

It is recommended, though not required, that you physically separate management and API traffic from other traffic types, such as storage traffic, to avoid issues with network congestion that may affect operational stability.

External network

An external network is a provider network that provides Neutron routers with external network access. Once a router has been configured and attached to the external network, the network becomes the source of floating IP addresses for instances and other network resources attached to the router. IP addresses in an external network are expected to be routable and reachable by clients on a corporate network or the internet. Multiple external provider networks can be segmented using VLANs and trunked to the same physical interface. Neutron is responsible for tagging the VLAN based on the network configuration provided by the administrator. Since external networks are utilized by VMs, they can be considered as part of the data plane.
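
An external network is typically created by an administrator with the external flag set, after which users can allocate floating IPs from it. The names, VLAN ID, and address range in the sketch below are examples only:

    # Administrator: create an external provider network and its subnet
    openstack network create --external --provider-network-type vlan \
      --provider-physical-network physnet1 --provider-segment 30 ext-net
    openstack subnet create --network ext-net --subnet-range 203.0.113.0/24 \
      --no-dhcp --gateway 203.0.113.1 ext-subnet

    # User: allocate a floating IP from the external network
    openstack floating ip create ext-net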

Guest network

The guest network is a network dedicated to instance traffic. Options for guest networks include local networks restricted to a particular node, flat, or VLAN-tagged networks, or virtual overlay networks made possible with GRE, VXLAN, or GENEVE encapsulation. For more information on guest networks, refer to Chapter 6, Building Networks with Neutron. Since guest networks provide connectivity to VMs, they can be considered part of the data plane.

The physical interfaces used for external and guest networks can be dedicated interfaces or ones that are shared with other types of traffic. Each approach has its benefits and drawbacks, and they are described in more detail later in this chapter. In the next few chapters, I will define networks and VLANs that will be used throughout the book to demonstrate the various components of OpenStack Networking. Generic information on the configuration of switch ports, routers, or firewalls will also be provided.

Physical server connections

The number of interfaces needed per host is dependent on the purpose of the cloud, the security and performance requirements of the organization, and the cost and availability of hardware. A single interface per server that results in a combined control and data plane is all that is needed for a fully operational OpenStack cloud. Many organizations choose to deploy their cloud this way, especially when port density is at a premium, the environment is simply used for testing, or network failure at the node level is a non-impacting event. When possible, however, it is recommended that you split control and data traffic across multiple interfaces to reduce the chances of network failure.

Single interface

For hosts using a single interface, all traffic to and from instances as well as internal OpenStack, SSH management, and API traffic traverse the same physical interface. This configuration can result in severe performance penalties, as a service or guest can potentially consume all available bandwidth. A single interface is recommended only for non-production clouds.

The following list shows the networks and services traversing a single interface over multiple VLANs:

  • SSH: Host management. Interface: eth0; VLAN: 10
  • APIs: Access to OpenStack APIs. Interface: eth0; VLAN: 15
  • Overlay network: Used to tunnel overlay (VXLAN, GRE, GENEVE) traffic between hosts. Interface: eth0; VLAN: 20
  • Guest/external network(s): Used to provide access to external cloud resources and for VLAN-based project networks. Interface: eth0; VLANs: multiple

Multiple interfaces

To reduce the likelihood of guest traffic impacting management traffic, segregation of traffic between multiple physical interfaces is recommended. At a minimum, two interfaces should be used: one that serves as a dedicated interface for management and API traffic (control plane), and another that serves as a dedicated interface for external and guest traffic (data plane). Additional interfaces can be used to further segregate traffic, such as storage.

The following list shows the networks and services traversing two interfaces with multiple VLANs:

  • SSH: Host management. Interface: eth0; VLAN: 10
  • APIs: Access to OpenStack APIs. Interface: eth0; VLAN: 15
  • Overlay network: Used to tunnel overlay (VXLAN, GRE, GENEVE) traffic between hosts. Interface: eth1; VLAN: 20
  • Guest/external network(s): Used to provide access to external cloud resources and for VLAN-based project networks. Interface: eth1; VLANs: multiple

Bonding

The use of multiple interfaces can be expanded to utilize bonds instead of individual network interfaces. The following common bond modes are supported:

  • Mode 1 (active-backup): Mode 1 bonding sets all interfaces in the bond to a backup state while one interface remains active. When the active interface fails, a backup interface replaces it. The same MAC address is used upon failover to avoid issues with the physical network switch. Mode 1 bonding is supported by most switching vendors, as it does not require any special configuration on the switch to implement.
  • Mode 4 (active-active): Mode 4 bonding involves the use of aggregation groups, a group in which all interfaces share an identical configuration and are grouped together to form a single logical interface. The interfaces are aggregated using the IEEE 802.3ad Link Aggregation Control Protocol (LACP). Traffic is load balanced across the links using methods negotiated by the physical node and the connected switch or switches. The physical switching infrastructure must be capable of supporting this type of bond. While some switching platforms require that multiple links of an LACP bond be connected to the same switch, others support technology known as Multi-Chassis Link Aggregation (MLAG) that allows multiple physical switches to be configured as a single logical switch. This allows links of a bond to be connected to multiple switches that provide hardware redundancy while allowing users the full bandwidth of the bond under normal operating conditions, all with no additional changes to the server configuration.

Bonding can be configured within the Linux operating system using tools such as iproute2, ifupdown, and Open vSwitch, among others. The configuration of bonded interfaces is outside the scope of OpenStack and this book.

Bonding configurations vary greatly between Linux distributions. Refer to the respective documentation of your Linux distribution for assistance in configuring bonding.
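
As a minimal illustration using iproute2, one of the tools mentioned above, the following sketch creates an 802.3ad (mode 4) bond from two example interfaces; a production configuration should be made persistent through your distribution's own tooling:

    # Create an LACP bond and enslave two interfaces (names are examples)
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up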

The following list shows the use of two bonds instead of two individual interfaces:

  • SSH: Host management. Interface: bond0; VLAN: 10
  • APIs: Access to OpenStack APIs. Interface: bond0; VLAN: 15
  • Overlay network: Used to tunnel overlay (VXLAN, GRE, GENEVE) traffic between hosts. Interface: bond1; VLAN: 20
  • Guest/external network(s): Used to provide access to external cloud resources and for VLAN-based project networks. Interface: bond1; VLANs: multiple

In this book, an environment will be built using three non-bonded interfaces: one for management and API traffic, one for VLAN-based provider or project networks, and another for overlay network traffic. The following interfaces and VLAN IDs will be used:

  • SSH and APIs: Host management and access to OpenStack APIs. Interface: eth0 / ens160; VLAN: 10
  • Overlay network: Used to tunnel overlay (VXLAN, GRE, GENEVE) traffic between hosts. Interface: eth1 / ens192; VLAN: 20
  • Guest/external network(s): Used to provide access to external cloud resources and for VLAN-based project networks. Interface: eth2 / ens224; VLANs: 30, 40-43

When an environment is virtualized in VMware, interface names may differ from the standard eth0, eth1, ethX naming convention. The interface names provided in the table reflect the interface naming convention seen on controller and compute nodes that exist as virtual machines, rather than bare-metal machines.

Separating services across nodes

Like other OpenStack services, cloud operators can split OpenStack Networking services across multiple nodes. Small deployments may use a single node to host all services, including networking, compute, database, and messaging. Others might find benefit in using a dedicated controller node and a dedicated network node to handle guest traffic routed through software routers and to offload Neutron DHCP and metadata services. The following sections describe a few common service deployment models.

Using a single controller node

In an environment consisting of a single controller and one or more compute nodes, the controller will likely handle all networking services and other OpenStack services while the compute nodes strictly provide compute resources.

The following diagram demonstrates a controller node hosting all OpenStack management and networking services where the Neutron layer 3 agent is not utilized. Two physical interfaces are used to separate management (control plane) and instance (data plane) network traffic:

Figure 1.4

The preceding diagram reflects the use of a single combined controller/network node and one or more compute nodes, with Neutron providing only layer 2 connectivity between instances and external gateway devices. An external router is needed to handle routing between network segments.

The following diagram demonstrates a controller node hosting all OpenStack management and networking services, including the Neutron L3 agent. Three physical interfaces are used to provide separate control and data planes:

Figure 1.5

The preceding diagram reflects the use of a single combined controller/network node and one or more compute nodes in a network configuration that utilizes the Neutron L3 agent. Software routers created with Neutron reside on the controller node, and handle routing between connected project networks and external provider networks.

Using a dedicated network node

A network node is dedicated to handling most or all of the OpenStack networking services, including the L3 agent, DHCP agent, metadata agent, and more. The use of a dedicated network node provides additional security and resilience, as the controller node will be at less risk of network and resource saturation. Some Neutron services, such as the L3 and DHCP agents and the Neutron API service, can be scaled out across multiple nodes for redundancy and increased performance, especially when distributed virtual routers are used.

The following diagram demonstrates a network node hosting all OpenStack networking services, including the Neutron L3, DHCP, metadata, and LBaaS agents. The Neutron API service, however, remains installed on the controller node. Three physical interfaces are used where necessary to provide separate control and data planes:

Figure 1.6

The environment built out in this book will be composed of five hosts, including the following:

  • A single controller node running all OpenStack network services and the Linux bridge network agent
  • A single compute node running the Nova compute service and the Linux bridge network agent
  • Two compute nodes running the Nova compute service and the Open vSwitch network agent
  • A single network node running the Open vSwitch network agent and the L3 agent

Not all hosts are required should you choose not to complete the exercises described in the upcoming chapters.

Summary

OpenStack Networking offers the ability to create and manage different technologies found in a data center in a virtualized and programmable manner. If the built-in features and reference implementations are not enough, the pluggable architecture of OpenStack Networking allows additional functionality to be provided by third-party commercial and open source vendors. The security requirements of the organization building the cloud, as well as the use cases of the cloud, will ultimately dictate the physical layout and separation of services across the infrastructure nodes.

To successfully deploy Neutron and harness all it has to offer, it is important to have a strong understanding of core networking concepts. In this book, we will cover some fundamental network concepts around Neutron and build a foundation for deploying instances.

In the next chapter, we will begin a package-based installation of OpenStack on the Ubuntu 16.04 LTS operating system. Topics covered include the installation, configuration, and verification of many core OpenStack projects, including Identity, Image, Dashboard, and Compute. The installation and configuration of base OpenStack Networking services, including the Neutron API, can be found in Chapter 3, Installing Neutron.


Key benefits

  • Learn the difference between Open vSwitch and Linux bridge switching technologies
  • Connect virtual machine instances to virtual networks, subnets, and ports
  • Implement virtual load balancers, firewalls, and routers in your network

Description

OpenStack Networking is a pluggable, scalable, and API-driven system to manage physical and virtual networking resources in an OpenStack-based cloud. Like other core OpenStack components, OpenStack Networking can be used by administrators and users to increase the value and maximize the use of existing data center resources. This third edition of Learning OpenStack Networking walks you through the installation of OpenStack and provides you with a foundation that can be used to build a scalable and production-ready OpenStack cloud. In the initial chapters, you will review the physical network requirements and architectures necessary for an OpenStack environment that provide core cloud functionality. Then, you'll move through the installation of the new release of OpenStack using packages from the Ubuntu repository. An overview of Neutron networking foundational concepts, including networks, subnets, and ports, will segue into advanced topics such as security groups, distributed virtual routers, virtual load balancers, and VLAN tagging within instances. By the end of this book, you will have built a network infrastructure for your cloud using OpenStack Neutron.

Who is this book for?

If you are an OpenStack-based cloud operator or administrator who is new to Neutron networking and wants to build your very own OpenStack cloud, then this book is for you. Prior networking experience and a physical server and network infrastructure are recommended to follow along with the concepts demonstrated in the book.

What you will learn

  • Get familiar with Neutron constructs, including agents and plugins
  • Build foundational Neutron resources to provide connectivity to instances
  • Work with legacy Neutron routers and troubleshoot traffic through them
  • Explore high-availability routing capabilities utilizing Virtual Router Redundancy Protocol (VRRP)
  • Create and manage load balancers and associated components
  • Manage security groups as a method of securing traffic to and from instances

Product Details

Publication date: Aug 31, 2018
Length: 462 pages
Edition: 3rd
Language: English
ISBN-13: 9781788392495
Vendor: OpenStack



Table of Contents

  1. Introduction to OpenStack Networking
  2. Installing OpenStack
  3. Installing Neutron
  4. Virtual Network Infrastructure Using Linux Bridges
  5. Building a Virtual Switching Infrastructure Using Open vSwitch
  6. Building Networks with Neutron
  7. Attaching Instances to Networks
  8. Managing Security Groups
  9. Role-Based Access Control
  10. Creating Standalone Routers with Neutron
  11. Router Redundancy Using VRRP
  12. Distributed Virtual Routers
  13. Load Balancing Traffic to Instances
  14. Advanced Networking Topics
  15. Other Books You May Enjoy

