Installing Neutron

  • 15 min read
  • 04 Nov 2015


We will learn about OpenStack networking in this article by James Denton, the author of the book Learning OpenStack Networking (Neutron) - Second Edition. OpenStack Networking, also known as Neutron, provides a networking-as-a-service platform to users of the cloud. In this article, I will guide you through the installation of Neutron networking services on top of an existing OpenStack environment.

Components to be installed include:

  • Neutron API server
  • Modular Layer 2 (ML2) plugin


By the end of this article, you will have a basic understanding of the function and operation of various Neutron plugins and agents, as well as a foundation on top of which a virtual switching infrastructure can be built.



Basic networking elements in Neutron


Neutron constructs the virtual network using elements that are familiar to all system and network administrators, including networks, subnets, ports, routers, load balancers, and more.

Using version 2.0 of the core Neutron API, users can build a network foundation composed of the following entities:

  • Network: A network is an isolated layer 2 broadcast domain. Typically reserved for the tenants that created them, networks can be shared among tenants if configured accordingly. The network is the core entity of the Neutron API. Subnets and ports must always be associated with a network.
  • Subnet: A subnet is an IPv4 or IPv6 address block from which IP addresses can be assigned to virtual machine instances. Each subnet must have a CIDR and must be associated with a network. Multiple subnets can be associated with a single network and can be noncontiguous. A DHCP allocation range can be set for a subnet that limits the addresses provided to instances.
  • Port: A port in Neutron represents a virtual switch port on a logical virtual switch. Virtual machine interfaces are mapped to Neutron ports, and the ports define both the MAC address and the IP address to be assigned to the interfaces plugged into them. Neutron port definitions are stored in the Neutron database, which is then used by the respective plugin agent to build and connect the virtual switching infrastructure.


Cloud operators and users alike can configure network topologies by creating and configuring networks and subnets, and then instruct services such as Nova to attach virtual devices to ports on these networks. Users can create multiple networks, subnets, and ports, but are limited to thresholds defined by per-tenant quotas set by the cloud administrator.
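
For illustration, the following commands show how these objects map to CLI operations once Neutron is up and running. The network name, subnet name, CIDR, and allocation pool used here are examples only:

# neutron net-create demo-net
# neutron subnet-create demo-net 192.168.100.0/24 --name demo-subnet \
--allocation-pool start=192.168.100.50,end=192.168.100.254
# neutron port-create demo-net --name demo-port

Each command returns the attributes of the created object, including the UUID that other objects and services reference.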

Extending functionality with plugins


Neutron introduces support for third-party plugins and drivers that extend network functionality and implementation of the Neutron API. Plugins and drivers can be created that use a variety of software- and hardware-based technologies to implement the network built by operators and users.

There are two major plugin types within the Neutron architecture:

  • Core plugin
  • Service plugin


A core plugin implements the core Neutron API and is responsible for adapting the logical network described by networks, ports, and subnets into something that can be implemented by the L2 agent and IP address management system running on the host.

A service plugin provides additional network services such as routing, load balancing, firewalling, and more.

The Neutron API provides a consistent experience to the user despite the chosen networking plugin. For more information on interacting with the Neutron API, visit http://developer.openstack.org/api-ref-networking-v2.html.
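
Once the Neutron API server is running (it is installed later in this article), the API can also be exercised directly over HTTP. The following is a minimal sketch, assuming admin credentials are loaded into the environment and that the API endpoint is http://controller01:9696 as configured later in this article:

# TOKEN=$(openstack token issue -f value -c id)
# curl -s -H "X-Auth-Token: $TOKEN" http://controller01:9696/v2.0/networks

The request returns a JSON document listing the networks visible to the authenticated user.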

Modular Layer 2 plugin


Prior to the inclusion of the Modular Layer 2 (ML2) plugin in the Havana release of OpenStack, Neutron was limited to using a single core plugin at a time. The ML2 plugin replaces two monolithic plugins in its reference implementation: the LinuxBridge plugin and the Open vSwitch plugin. Their respective agents, however, continue to be utilized and can be configured to work with the ML2 plugin.

Drivers


The ML2 plugin introduced the concept of type drivers and mechanism drivers to separate the types of networks being implemented and the mechanisms for implementing networks of those types.

Type drivers


An ML2 type driver maintains type-specific network state, validates provider network attributes, and describes network segments using provider attributes. Provider attributes include network interface labels, segmentation IDs, and network types. Supported network types include local, flat, vlan, gre, and vxlan.

Mechanism drivers


An ML2 mechanism driver is responsible for taking information established by the type driver and ensuring that it is properly implemented. Multiple mechanism drivers can be configured to operate simultaneously, and can be described using three types of models:

  • Agent-based: This includes LinuxBridge, Open vSwitch, and others
  • Controller-based: This includes OpenDaylight, VMware NSX, and others
  • Top-of-Rack: This includes Cisco Nexus, Arista, Mellanox, and others


The LinuxBridge and Open vSwitch ML2 mechanism drivers are used to configure their respective switching technologies within nodes that host instances and network services. The LinuxBridge driver supports local, flat, vlan, and vxlan network types, while the Open vSwitch driver supports all of those as well as the gre network type.

The L2 population driver is used to limit the amount of broadcast traffic that is forwarded across the overlay network fabric. Under normal circumstances, unknown unicast, multicast, and broadcast traffic floods out all tunnels to other compute nodes. This behavior can have a negative impact on the overlay network fabric, especially as the number of hosts in the cloud scales out. As an authority on what instances and other network resources exist in the cloud, Neutron can prepopulate forwarding databases on all hosts to avoid a costly learning operation. When ARP proxy is used, Neutron prepopulates the ARP table on all hosts in a similar manner to prevent ARP traffic from being broadcast across the overlay fabric.
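
As an illustration only, since plugin configuration is not covered in this article, an ML2 configuration that pairs the LinuxBridge mechanism driver with the L2 population driver might resemble the following snippet from /etc/neutron/plugins/ml2/ml2_conf.ini. The VNI range is an example value:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population

[ml2_type_vxlan]
vni_ranges = 1:1000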

ML2 architecture


The following diagram demonstrates at a high level how the Neutron API service interacts with the various plugins and agents responsible for constructing the virtual and physical network:

[Figure 3.1: Interaction between the Neutron API service, plugins, drivers, and agents]


The preceding diagram demonstrates the interaction between the Neutron API, Neutron plugins and drivers, and services such as the L2 and L3 agents. For more information on the Neutron ML2 plugin architecture, refer to the OpenStack Neutron Modular Layer 2 Plugin Deep Dive video from the 2013 OpenStack Summit in Hong Kong available at https://www.youtube.com/watch?v=whmcQ-vHams.

Third-party support


Third-party vendors such as PLUMgrid and OpenContrail have implemented support for their respective SDN technologies by developing their own monolithic or ML2 plugins that implement the Neutron API and extended network services. Others, including Cisco, Arista, Brocade, Radware, F5, VMware, and more, have created plugins that allow Neutron to interface with OpenFlow controllers, load balancers, switches, and other network hardware. For a look at some of the commands related to these plugins, refer to Appendix, Additional Neutron Commands.

The configuration and use of these plugins is outside the scope of this article. For more information on the available plugins for Neutron, visit http://docs.openstack.org/admin-guide-cloud/content/section_plugin-arch.html.

Network namespaces


OpenStack was designed with multitenancy in mind and provides users with the ability to create and manage their own compute and network resources. Neutron supports each tenant having multiple private networks, routers, firewalls, load balancers, and other networking resources. It is able to isolate many of those objects through the use of network namespaces.

A network namespace is defined as a logical copy of the network stack, with its own routes, firewall rules, and network interface devices. When using the open source reference plugins and drivers, every network, router, and load balancer created by a user is represented by a network namespace. When network namespaces are enabled, Neutron is able to provide isolated DHCP and routing services to each network. These services allow users to create networks that overlap with those of other users in other projects, and even with other networks in the same project.

The following naming convention for network namespaces should be observed:

  • DHCP namespace: qdhcp-<network UUID>
  • Router namespace: qrouter-<router UUID>
  • Load Balancer namespace: qlbaas-<load balancer UUID>


A qdhcp namespace contains a DHCP service that provides IP addresses to instances using the DHCP protocol. In a reference implementation, dnsmasq is the process that services DHCP requests. The qdhcp namespace has an interface plugged into the virtual switch and is able to communicate with instances and other devices in the same network or subnet. A qdhcp namespace is created for every network where the associated subnet(s) have DHCP enabled.

A qrouter namespace represents a virtual router and is responsible for routing traffic to and from instances in the subnets it is connected to. Like the qdhcp namespace, the qrouter namespace is connected to one or more virtual switches depending on the configuration.

A qlbaas namespace represents a virtual load balancer and may run a service such as HAProxy that load balances traffic to instances. The qlbaas namespace is connected to a virtual switch and can communicate with instances and other devices in the same network or subnet.

The leading q in the name of the network namespaces stands for Quantum, the original name for the OpenStack Networking service.


Network namespaces of the types mentioned earlier will only be seen on nodes running the Neutron DHCP, L3, and LBaaS agents, respectively. These services are typically configured only on controllers or dedicated network nodes. The ip netns list command can be used to list available namespaces, and commands can be executed within the namespace using the following syntax:

ip netns exec NAMESPACE_NAME <command>


Commands that can be executed in the namespace include ip, route, iptables, and more. The output of these commands corresponds to data specific to the namespace they are executed in.
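
For example, the following commands list the available namespaces and then inspect the interfaces, routes, and NAT rules inside DHCP and router namespaces. Substitute the UUIDs found in your own environment:

# ip netns list
# ip netns exec qdhcp-<network UUID> ip addr
# ip netns exec qrouter-<router UUID> ip route
# ip netns exec qrouter-<router UUID> iptables -t nat -L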

For more information on network namespaces, see the man page for ip netns at http://man7.org/linux/man-pages/man8/ip-netns.8.html.

Installing and configuring Neutron services


In this installation, the various services that make up OpenStack Networking will be installed on the controller node rather than a dedicated networking node. The compute nodes will run L2 agents that interface with the controller node and provide virtual switch connections to instances.

Remember that the configuration settings recommended here and online at docs.openstack.org may not be appropriate for production systems.


To install the Neutron API server, the DHCP and metadata agents, and the ML2 plugin on the controller, issue the following command:

# apt-get install neutron-server neutron-dhcp-agent \
neutron-metadata-agent neutron-plugin-ml2 neutron-common \
python-neutronclient


On the compute nodes, only the ML2 plugin is required:

# apt-get install neutron-plugin-ml2


Creating the Neutron database


Using the mysql client, create the Neutron database and associated user. When prompted for the root password, use openstack:

# mysql -u root -p


Enter the following SQL statements at the MariaDB [(none)]> prompt:

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
quit;
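
As a quick sanity check, you can confirm that the new user is able to connect and see the database. The password here matches the value used in the GRANT statements above:

# mysql -u neutron -pneutron -e "SHOW DATABASES;"

The neutron database should appear in the output.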


Update the [database] section of the Neutron configuration file at /etc/neutron/neutron.conf on all nodes to use the proper MySQL database connection string based on the preceding values rather than the default value:

[database]
connection = mysql://neutron:neutron@controller01/neutron
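
Depending on your distribution and release, the Neutron database schema may also need to be populated manually once the plugin configuration is in place. A typical invocation on the controller looks similar to the following; treat it as a sketch and adjust the configuration file paths to your environment:

# neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head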

Configuring the Neutron user, role, and endpoint in Keystone


Neutron requires that you create a user, role, and endpoint in Keystone in order to function properly. When executed from the controller node, the following commands will create a user called neutron in Keystone, associate the admin role with the neutron user, and add the neutron user to the service project:

# openstack user create neutron --password neutron
# openstack role add --project service --user neutron admin


Create a service in Keystone that describes the OpenStack Networking service by executing the following command on the controller node:

# openstack service create --name neutron \
--description "OpenStack Networking" network


The service create command will result in the following output:

[Figure 3.2: Output of the openstack service create command]

To create the endpoint, use the following openstack endpoint create command:

# openstack endpoint create \
     --publicurl http://controller01:9696 \
     --adminurl http://controller01:9696 \
     --internalurl http://controller01:9696 \
     --region RegionOne \
     network


The resulting endpoint is as follows:

[Figure 3.3: Output of the openstack endpoint create command]
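
To confirm that the service and endpoint were registered correctly, the following commands list the entries stored in Keystone:

# openstack service list
# openstack endpoint list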


Enabling packet forwarding


Before the nodes can properly forward or route traffic for virtual machine instances, there are three kernel parameters that must be configured on all nodes:

  • net.ipv4.ip_forward
  • net.ipv4.conf.all.rp_filter
  • net.ipv4.conf.default.rp_filter


The net.ipv4.ip_forward kernel parameter allows the nodes to forward traffic from the instances to the network. The default value is 0 and should be set to 1 to enable IP forwarding. Use the following command on all nodes to implement this change:

# sysctl -w "net.ipv4.ip_forward=1"


The net.ipv4.conf.default.rp_filter and net.ipv4.conf.all.rp_filter kernel parameters are related to reverse path filtering, a mechanism intended to prevent certain types of denial-of-service attacks. When enabled, the Linux kernel examines every packet to ensure that the source address is routable back through the interface on which it arrived. Without this validation, a router can be used to forward malicious packets from a sender that has spoofed the source address, preventing the target machine from responding properly.

In OpenStack, anti-spoofing rules are implemented by Neutron on each compute node within iptables. Therefore, the preferred configuration for these two rp_filter values is to disable them by setting them to 0. Use the following sysctl commands on all nodes to implement this change:

# sysctl -w "net.ipv4.conf.default.rp_filter=0"
# sysctl -w "net.ipv4.conf.all.rp_filter=0"


Using sysctl -w makes the changes take effect immediately. However, the changes are not persistent across reboots. To make the changes persistent, edit the /etc/sysctl.conf file on all hosts and add the following lines:

net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0


Load the changes into memory on all nodes with the following sysctl command:

# sysctl -p
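
To confirm that the running values match what was configured, query the parameters with sysctl:

# sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter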

Configuring Neutron to use Keystone


The Neutron configuration file found at /etc/neutron/neutron.conf has dozens of settings that can be modified to meet the needs of the OpenStack cloud administrator. A handful of these settings must be changed from their defaults as part of this installation.

To specify Keystone as the authentication method for Neutron, update the [DEFAULT] section of the Neutron configuration file on all hosts with the following setting:

[DEFAULT]
auth_strategy = keystone


Neutron must also be configured with the appropriate Keystone authentication settings. The username and password for the neutron user in Keystone were set earlier in this article. Update the [keystone_authtoken] section of the Neutron configuration file on all hosts with the following settings:

[keystone_authtoken]
auth_uri = http://controller01:5000
auth_url = http://controller01:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron

Configuring Neutron to use a messaging service


Neutron communicates with various OpenStack services on the AMQP messaging bus. Update the [DEFAULT] and [oslo_messaging_rabbit] sections of the Neutron configuration file on all hosts to specify RabbitMQ as the messaging broker:

[DEFAULT]
rpc_backend = rabbit


The RabbitMQ authentication settings should match what was previously configured for the other OpenStack services:

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = rabbit
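
If you need to verify that the openstack user exists on the broker, you can list the configured RabbitMQ users on the controller, assuming rabbitmqctl is available there:

# rabbitmqctl list_users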

Configuring Nova to utilize Neutron networking


Before Neutron can be utilized as the network manager for Nova Compute services, the appropriate configuration options must be set in the Nova configuration file located at /etc/nova/nova.conf on all hosts.

Start by updating the following sections with information on the Neutron API class and URL:

[DEFAULT]
network_api_class = nova.network.neutronv2.api.API

[neutron]
url = http://controller01:9696


Then, update the [neutron] section with the proper Neutron credentials:

[neutron]
auth_strategy = keystone
admin_tenant_name = service
admin_username = neutron
admin_password = neutron
admin_auth_url = http://controller01:35357/v2.0


Nova uses the firewall_driver configuration option to determine how to implement firewalling. As the option is meant for use with the nova-network networking service, it should be set to nova.virt.firewall.NoopFirewallDriver to instruct Nova not to implement firewalling when Neutron is in use:

[DEFAULT]
firewall_driver = nova.virt.firewall.NoopFirewallDriver


The security_group_api configuration option specifies which API Nova should use when working with security groups. For installations using Neutron instead of nova-network, this option should be set to neutron as follows:

[DEFAULT]
security_group_api = neutron


Nova requires additional configuration once a mechanism driver has been determined.

Configuring Neutron to notify Nova


Neutron must be configured to notify Nova of network topology changes. Update the [DEFAULT] and [nova] sections of the Neutron configuration file on the controller node located at /etc/neutron/neutron.conf with the following settings:

[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller01:8774/v2

[nova]
auth_url = http://controller01:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = nova
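
Once these configuration files have been updated, the affected services must be restarted to pick up the changes. On an Ubuntu-based installation such as this one, the following service names are typical; adjust them to your init system if necessary. On the controller node:

# service nova-api restart
# service neutron-server restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart

On the compute nodes:

# service nova-compute restart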

Summary


Neutron has seen major internal architectural improvements over the last few releases. These improvements have made developing and implementing network features easier for developers and operators, respectively. Neutron maintains the logical network architecture in its database, and network plugins and agents on each node are responsible for configuring virtual and physical network devices accordingly. With the introduction of the ML2 plugin, developers can spend less time implementing the core Neutron API functionality and more time developing value-added features.

Now that OpenStack Networking services have been installed across all nodes in the environment, configuration of a layer 2 networking plugin is all that remains before instances can be created.
