OpenDaylight Cookbook

OpenDaylight Fundamentals

OpenDaylight is a collaborative platform supported by leaders in the networking industry and hosted by the Linux Foundation. The goal of the platform is to enable the adoption of software-defined networking (SDN) and create a solid base for network functions virtualization (NFV).

In this chapter, we will cover the following recipes:

  • Connecting OpenFlow switches
  • Mounting a NETCONF device
  • Browsing data models with YANGUI
  • Basic distributed switching
  • Bonding links using LACP
  • Changing user authentication
  • OpenDaylight clustering

Introduction

OpenDaylight is an open source project aiming to be a common tool across the networking industry - for enterprises, service providers, and manufacturers. It provides a highly available, multi-protocol infrastructure geared toward building and managing software-defined networking deployments. Built on a Model-Driven Service Abstraction Layer (MD-SAL), the platform is extensible and allows users to create applications that communicate with a wide variety of southbound protocols and hardware.

In other words, OpenDaylight is a framework used to solve networking-related use cases in both the software-defined networking and network functions virtualization domains.

To download the OpenDaylight software, select the Beryllium-SR4 release available at this link:

https://www.opendaylight.org/downloads

Download the ZIP or the tarball and extract it. Then change into the extracted folder on the command line, and you are ready to work through the recipes.

The recipes in this chapter will present fundamental use cases that one can solve using OpenDaylight.

Mininet, a common and widely used network emulator, is required to perform various recipes within this book.

Before starting any recipe, you will need a running instance of Mininet. To set one up, please follow the steps explained in the Mininet documentation:

http://mininet.org/download/

For REST API access, use the username admin and the password admin.
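The REST requests throughout this chapter are described by their type, headers, and URL. As a minimal sketch of how to issue such a request from the command line (assuming curl is available and OpenDaylight runs locally on its default RESTCONF port, 8181), note that the Authorization header value is simply the Base64 encoding of admin:admin:

# Compute the Basic auth token used in the Headers sections of this chapter
$ echo -n 'admin:admin' | base64
YWRtaW46YWRtaW4=
# Issue a GET request against RESTCONF; -u admin:admin builds the same header for you
$ curl -u admin:admin -H "Accept: application/json" \
  http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/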

Connecting OpenFlow switches

OpenFlow is a vendor-neutral, standard communications interface defined to enable the interaction between the control and forwarding channels of an SDN architecture. The OpenFlowPlugin project intends to support implementations of the OpenFlow specification as it evolves. It currently supports OpenFlow versions 1.0 and 1.3.2. In addition to supporting the core OpenFlow specification, OpenDaylight Beryllium also includes preliminary support for the Table Type Patterns and OF-CONFIG specifications.

The OpenFlow southbound plugin currently provides the following components:

  • Flow management
  • Group management
  • Meter management
  • Statistics polling

Let's connect an OpenFlow switch to OpenDaylight.

Getting ready

This recipe requires an OpenFlow switch. If you don't have any, you can use a Mininet-VM with OvS installed. You can download Mininet-VM from the following website:

https://github.com/mininet/mininet/wiki/Mininet-VM-Images

Any version should work.

The following recipe will be presented using a Mininet-VM with OvS 2.0.2.

How to do it...

Perform the following steps:

  1. Start the OpenDaylight distribution using the karaf script. Using this script will give you access to the Karaf CLI:
$ ./bin/karaf
  2. Install the user-facing feature responsible for pulling in all dependencies needed to connect an OpenFlow switch:
opendaylight-user@root>feature:install odl-openflowplugin-all  

It might take a minute or so to complete the installation.

  3. Connect an OpenFlow switch to OpenDaylight.

As mentioned in the Getting ready section, we will use Mininet-VM as our OpenFlow switch, as this VM runs an instance of Open vSwitch:

  • Log in to Mininet-VM using:
    • Username: mininet
    • Password: mininet
  • Let's create a bridge:
mininet@mininet-vm:~$ sudo ovs-vsctl add-br br0  
  • Now let's connect OpenDaylight as the controller of br0:
mininet@mininet-vm:~$ sudo ovs-vsctl set-controller br0 tcp:${CONTROLLER_IP}:6633
  • Let's look at our topology:
mininet@mininet-vm:~$ sudo ovs-vsctl show
0b8ed0aa-67ac-4405-af13-70249a7e8a96
    Bridge "br0"
        Controller "tcp:${CONTROLLER_IP}:6633"
            is_connected: true
        Port "br0"
            Interface "br0"
                type: internal
    ovs_version: "2.0.2"

${CONTROLLER_IP} is the IP address of the host running OpenDaylight.

We're establishing a TCP connection. For a more secure connection we could use the TLS protocol; however, that is beyond the scope of this book.

  4. Have a look at the created OpenFlow node.

Once the OpenFlow switch is connected, send the following request to get information regarding the switch:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/

This will list all the nodes under the opendaylight-inventory subtree of MD-SAL that stores OpenFlow switch information. As we connected our first switch, we should have only one node there. It will contain all the information that the OpenFlow switch has, including its tables, its ports, flow statistics, and so on.

How it works...

Once the feature is installed, OpenDaylight listens for connections on ports 6633 and 6640. Setting up the controller on the OpenFlow-capable switch will immediately trigger a callback on OpenDaylight. It will create the communication pipeline between the switch and OpenDaylight so they can communicate in a scalable and non-blocking way.
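As a quick, optional sanity check (a sketch assuming the iproute2 ss utility is available on the OpenDaylight host; netstat works similarly), you can verify that the OpenFlow listeners are up:

# On the host running OpenDaylight: confirm the OpenFlow ports are open
$ ss -tln | grep -E ':6633|:6640'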

Mounting a NETCONF device

The OpenDaylight component responsible for connecting remote NETCONF devices is called the NETCONF southbound plugin, aka the netconf-connector. Creating an instance of the netconf-connector will connect a NETCONF device. The NETCONF device will be seen as a mount point in the MD-SAL, exposing the device configuration and operational data store and its capabilities. These mount points allow applications and remote users (over RESTCONF) to interact with the mounted devices.

The netconf-connector currently supports RFC-6241, RFC-5277, and RFC-6022.

The following recipe will explain how to connect a NETCONF device to OpenDaylight.

Getting ready

This recipe can be completed with any NETCONF device. If you don't have one available, you can use the NETCONF test tool provided by OpenDaylight to simulate a device, as shown in the following steps.

How to do it...

Perform the following steps:

  1. Start the OpenDaylight Karaf distribution using the karaf script. Using this script will give you access to the Karaf CLI:
$ ./bin/karaf  

  2. Install the user-facing feature responsible for pulling in all dependencies needed to connect a NETCONF device:
opendaylight-user@root>feature:install odl-netconf-topology odl-restconf  

It might take a minute or so to complete the installation.

  3. Start your NETCONF device.

If you want to use the NETCONF test tool, it is time to simulate a NETCONF device using the following command:

$ java -jar netconf-testtool-1.0.1-Beryllium-SR4-executable.jar --device-count 1 

This will simulate one device that will be bound to port 17830.
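If you want to double-check that the simulated device is up (a sketch assuming the ss utility is available; the port matches the test tool output above):

# The simulated NETCONF device should be listening on port 17830
$ ss -tln | grep 17830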

  4. Configure a new netconf-connector.

Send the following request using RESTCONF:

  • Type: PUT
  • URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

By looking closer at the URL you will notice that the last part is new-netconf-device. This must match the node-id that we will define in the payload.

  • Headers:

Accept: application/xml

Content-Type: application/xml

Authorization: Basic YWRtaW46YWRtaW4=

  • Payload:
<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
  <node-id>new-netconf-device</node-id>
  <host xmlns="urn:opendaylight:netconf-node-topology">127.0.0.1</host>
  <port xmlns="urn:opendaylight:netconf-node-topology">17830</port>
  <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
  <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
  <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
</node>
  5. Let's have a closer look at this payload:
  • node-id: Defines the name of the netconf-connector.
  • host: Defines the IP address (or hostname) of the NETCONF device.
  • port: Defines the port for the NETCONF session.
  • username: Defines the username of the NETCONF session. This should be provided by the NETCONF device configuration.
  • password: Defines the password of the NETCONF session. As for the username, this should be provided by the NETCONF device configuration.
  • tcp-only: Defines whether the NETCONF session should use plain TCP or SSH. If set to true, it will use TCP.
This is the default configuration of the netconf-connector; it actually has more configurable elements that we will look at later.

Once you have composed the request, send it. This will spawn a new netconf-connector that connects to the NETCONF device at the provided IP address and port, using the provided credentials.
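As a minimal sketch of that request on the command line (assuming curl is available and the payload above is saved in a file named new-netconf-device.xml, a name chosen here purely for illustration):

# PUT the netconf-connector configuration; the node-id in the URL must match the payload
$ curl -u admin:admin -X PUT \
  -H "Content-Type: application/xml" -H "Accept: application/xml" \
  -d @new-netconf-device.xml \
  http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device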

  6. Verify that the netconf-connector has correctly been pushed and get information about the connected NETCONF device.

First, you could look at the log to see if any errors occurred. If no error has occurred, you will see the following:

2016-05-07 11:37:42,470 | INFO  | sing-executor-11 | NetconfDevice                    | 253 - org.opendaylight.netconf.sal-netconf-connector - 1.3.0.Beryllium | RemoteDevice{new-netconf-device}: Netconf connector initialized successfully 

Once the new netconf-connector is created, some useful metadata is written into the MD-SAL's operational data store under the network-topology subtree. To retrieve this information, you should send the following request:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

We're using new-netconf-device as the node-id because this is the name we assigned to the netconf-connector in a previous step.

This request will provide information about the connection status and device capabilities. The device capabilities are all the YANG models the NETCONF device provides in its hello-message; they are used to create the schema context.
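For a quick command-line check of the same information (a sketch assuming curl and python are available; the exact field filter is illustrative and depends on the fields exposed by the netconf topology model):

# Fetch the operational state of the mount point and pick out the connection status
$ curl -s -u admin:admin -H "Accept: application/json" \
  http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device \
  | python -m json.tool | grep -i "connection-status"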

  7. More configuration for the netconf-connector.

As mentioned previously, the netconf-connector contains various configuration elements. Those fields are non-mandatory, with default values. If you do not wish to override any of these values, you shouldn't provide them:

  • schema-cache-directory: This corresponds to the destination schema repository for YANG files downloaded from the NETCONF device. By default, those schemas are saved in the cache directory ($ODL_ROOT/cache/schema). Using this configuration will define where to save the downloaded schemas relative to the cache directory. For instance, if you assigned new-schema-cache, schemas related to this device would be located under $ODL_ROOT/cache/new-schema-cache/.
  • reconnect-on-changed-schema: If set to true, the connector will auto disconnect/reconnect when schemas are changed in the remote device. The netconf-connector will subscribe to base NETCONF notifications and listen for netconf-capability-change notifications. The default value is false.
  • connection-timeout-millis: Timeout in milliseconds after which the connection must be established. The default value is 20000 milliseconds.
  • default-request-timeout-millis: Timeout for blocking operations within transactions. Once this timer is reached, if the request is not yet finished, it will be canceled. The default value is 60000 milliseconds.
  • max-connection-attempts: Maximum number of connection attempts. Nonpositive or null values are interpreted as infinity. The default value is 0, which means it will retry forever.
  • between-attempts-timeout-millis: Initial timeout in milliseconds between connection attempts. This will be multiplied by the sleep-factor for every new attempt. The default value is 2000 milliseconds.
  • sleep-factor: Back-off factor used to increase the delay between connection attempt(s). The default value is 1.5.
  • keepalive-delay: netconf-connector sends keep-alive RPCs while the session is idle to ensure session connectivity. This delay specifies the timeout between keep-alive RPCs in seconds. Providing a 0 value will disable this mechanism. The default value is 120 seconds.

Using this configuration, your payload would look like this:

<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
  <node-id>new-netconf-device</node-id>
  <host xmlns="urn:opendaylight:netconf-node-topology">127.0.0.1</host>
  <port xmlns="urn:opendaylight:netconf-node-topology">17830</port>
  <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
  <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
  <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
  <schema-cache-directory xmlns="urn:opendaylight:netconf-node-topology">new_netconf_device_cache</schema-cache-directory>
  <reconnect-on-changed-schema xmlns="urn:opendaylight:netconf-node-topology">false</reconnect-on-changed-schema>
  <connection-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">20000</connection-timeout-millis>
  <default-request-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">60000</default-request-timeout-millis>
  <max-connection-attempts xmlns="urn:opendaylight:netconf-node-topology">0</max-connection-attempts>
  <between-attempts-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">2000</between-attempts-timeout-millis>
  <sleep-factor xmlns="urn:opendaylight:netconf-node-topology">1.5</sleep-factor>
  <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">120</keepalive-delay>
</node>

How it works...

Once the request to connect a new NETCONF device is sent, OpenDaylight will set up the communication channel used for managing and interacting with the device. At first, the remote NETCONF device will send its hello-message defining all of the capabilities it has. Based on this, the netconf-connector will download all the YANG files provided by the device. All those YANG files will define the schema context of the device.

At the end of the process, some exposed capabilities might end up as unavailable, for two possible reasons:

  1. The NETCONF device provided a capability in its hello-message, but hasn't provided the schema.
  2. OpenDaylight failed to mount a given schema due to YANG violation(s).

OpenDaylight parses YANG models as per RFC 6020; if a schema does not respect the RFC, it could end up as an unavailable-capability.

If you encounter one of these situations, looking at the logs will pinpoint the reason for such a failure.

There's more...

Once the NETCONF device is connected, all its capabilities are available through the mount point. View it as a pass-through directly to the NETCONF device.

GET data store

To see the data contained in the device data store, use the following request:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/

Adding yang-ext:mount/ to the URL will access the mount point created for new-netconf-device. This will show the configuration data store. If you want to see the operational one, replace config with operational in the URL.

If your device defines the YANG model, you can access its data using the following request:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<container>

The <module> represents a schema defining the <container>. The <container> can be either a list or a container. It is not possible to access a single leaf. You can access containers/lists within containers/lists; the last part of the URL would look like this:

.../yang-ext:mount/<module>:<container>/<sub-container>
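For example (a sketch only: it assumes the mounted device advertises the standard ietf-interfaces model, which is not guaranteed for every device), reading a top-level container through the mount point looks like this:

# Read the interfaces container of the mounted device's configuration data store
$ curl -u admin:admin -H "Accept: application/xml" \
  http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/ietf-interfaces:interfaces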

Invoking RPC

In order to invoke an RPC on the remote device, you should use the following request:

  • Type: POST
  • Headers:

Accept: application/xml

Content-Type: application/xml

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<operation>

This URL is accessing the mount point of new-netconf-device, and through this mount point we're accessing the <module> to call its <operation>. The <module> represents a schema defining the RPC and <operation> represents the RPC to call.
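A minimal sketch of such a call (purely illustrative: example-module, its restart-service RPC, and the urn:example:module namespace are hypothetical names standing in for whatever RPCs your device actually defines):

# Invoke a hypothetical RPC on the mounted device; the input element uses the module's namespace
$ curl -u admin:admin -X POST \
  -H "Content-Type: application/xml" -H "Accept: application/xml" \
  -d '<input xmlns="urn:example:module"/>' \
  http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/example-module:restart-service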

Deleting a netconf-connector

Removing a netconf-connector will drop the NETCONF session and all resources will be cleaned. To perform such an operation, use the following request:

  • Type: DELETE
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

By looking closer at the URL, you can see that we are removing the netconf-connector whose node-id is new-netconf-device.
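On the command line, this is a one-liner (a sketch assuming curl, matching the DELETE request above):

# Remove the netconf-connector; the NETCONF session is dropped and its resources cleaned up
$ curl -u admin:admin -X DELETE \
  http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device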

Browsing data models with YANGUI

YANGUI is a user interface application through which one can navigate among all the YANG models available in the OpenDaylight controller. Not only does it aggregate all data models, it also enables their usage. Using this interface, you can create, read, update, and delete any part of the model-driven data store. It provides a nice, smooth user interface making it easier to browse through the model(s).

This recipe will guide you through those functionalities.

Getting ready

This recipe only requires the OpenDaylight controller and a web browser.

How to do it...

Perform the following steps:

  1. Start your OpenDaylight distribution using the karaf script. Using this script will give you access to the Karaf CLI:
$ ./bin/karaf
  2. Install the user-facing feature responsible for pulling in all dependencies needed to use YANGUI:
opendaylight-user@root>feature:install odl-dlux-yangui  

It might take a minute or so to complete the installation.

  3. Navigate to http://localhost:8181/index.html#/yangui/index:
    • Username: admin
    • Password: admin

Once logged in, all modules will be loaded until you see this message at the bottom of the screen:

Loading completed successfully

You should see the API tab listing all YANG models in the following format:

<module-name> rev.<revision-date>

For instance:

  • cluster-admin rev.2015-10-13
  • config rev.2013-04-05
  • credential-store rev.2015-02-26

By default, there isn't much you can do with the provided YANG models, so let's connect an OpenFlow switch to better understand how to use YANGUI. To do so, please refer to the first recipe, Connecting OpenFlow switches.

Once done, refresh your web page to load newly added modules.

  4. Look for opendaylight-inventory rev.2013-08-19 and select the operational tab, as nothing will yet be in the config data store. Then click on nodes and you'll see a request bar at the bottom of the page with multiple options.

You can either copy the request to the clipboard to use it in your browser, send it, show a preview of it, or define a custom API request.

For now, we will only send the request.

You should see Request sent successfully and under this message should be the retrieved data. As we only have one switch connected, there is only one node. All the switch operational information is now printed on your screen.

You could do the same request by specifying the node-id in the request. To do that you will need to expand nodes and click on node {id}, which will enable a more fine-grained search.

How it works...

OpenDaylight has a model-driven architecture, which means that all of its components are modeled using YANG. While installing features, OpenDaylight loads YANG models, making them available within the MD-SAL data store.

YANGUI is a representation of this data store. Each schema represents a subtree based on the name of the module and its revision-date. YANGUI aggregates and parses all those models. It also acts as a REST client; through its web interface we can execute functions such as GET, POST, PUT, and DELETE.

There's more...

The example shown previously can be improved upon, as no user-defined YANG model was loaded. For instance, if you mount a NETCONF device containing its own YANG model, you could interact with it through YANGUI.

You would use the config data store to push/update some data, and you would see the operational data store updated accordingly. In addition, accessing your data would be much easier than having to define the exact URL, as mentioned in the Mounting a NETCONF device recipe.

See also

  • Using API doc as a REST API client

Basic distributed switching

Basic distributed switching in OpenDaylight is provided by the L2Switch project, which delivers layer 2 switch functionality. This project is built on top of the OpenFlowPlugin project, as it uses its capabilities to connect to and interact with OpenFlow switches.

The L2Switch project has the following features/components:

  • Packet handler: Decodes the incoming packets, and dispatches them appropriately. It defines a packet lifecycle in three stages:
    1. Decode
    2. Modify
    3. Transmit
  • Loop remover: Detects loops in the network and removes them.
  • ARP handler: Handles ARP packets provided by the packet handler.
  • Address tracker: Gathers MAC and IP addresses from network entities.
  • Host tracker: Tracks hosts' locations in the network.
  • L2Switch main: Installs flows on the switches present in the network.

Getting ready

This recipe requires an OpenFlow switch. If you don't have any, you can use a Mininet-VM with OvS installed.

You can download Mininet-VM from their website https://github.com/mininet/mininet/wiki/Mininet-VM-Images. All versions should work.

This recipe will be presented using a Mininet-VM with OvS 2.0.2.

How to do it...

Perform the following steps:

  1. Start your OpenDaylight distribution using the karaf script. Using this script will give you access to the Karaf CLI:
$ ./bin/karaf 
  2. Install the user-facing feature responsible for pulling in all dependencies needed to enable basic distributed switching:
opendaylight-user@root>feature:install odl-l2switch-switch-ui  

It might take a few minutes to complete the installation.

  3. Creating a network using Mininet:
  • Log in to Mininet-VM using:
    • Username: mininet
    • Password: mininet
  • Clean current Mininet state:

If you're using the same instance as before, you want to clear its state. We previously created one bridge, br0, so let's delete it:

mininet@mininet-vm:~$ sudo ovs-vsctl del-br br0  
  • Create the topology:

In order to do so, use the following command:

mininet@mininet-vm:~$ sudo mn --controller=remote,ip=${CONTROLLER_IP} --topo=linear,3 --switch ovsk,protocols=OpenFlow13

Using this command will create a virtual network provisioned with three switches that will connect to the controller specified by ${CONTROLLER_IP}. The previous command will also set up links between switches and hosts.

We will end up with three OpenFlow nodes in the opendaylight-inventory:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/restconf/operational/opendaylight-inventory:nodes

This request will return the following:

--[cut]--
{
    "id": "openflow:1",
    --[cut]--
},
{
    "id": "openflow:2",
    --[cut]--
},
{
    "id": "openflow:3",
    --[cut]--
  4. Generate network traffic using Mininet.

Between two hosts using ping:

mininet> h1 ping h2 

The preceding command will cause host1 (h1) to ping host2 (h2), and we can see that host1 is able to reach h2.

Between all hosts:

mininet> pingall 

The pingall command will make all hosts ping all other hosts.

  5. Checking address observations.

This is done thanks to the address tracker that observes address tuples on a switch's port (node-connector).

This information will be present in the OpenFlow node connector and can be retrieved using the following request (here for port 1 of openflow:2, which is switch 2):

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:2/node-connector/openflow:2:1

This request will return the following:

{
    "nodes": {
        "node": [
            {
                "id": "openflow:2",
                "node-connector": [
                    {
                        "id": "openflow:2:1",
                        --[cut]--
                        "address-tracker:addresses": [
                            {
                                "id": 0,
                                "first-seen": 1462650320161,
                                "mac": "7a:e4:ba:4d:bc:35",
                                "last-seen": 1462650320161,
                                "ip": "10.0.0.2"
                            }
                        ]
                    },
                    --[cut]--

This result means that the host with the MAC address 7a:e4:ba:4d:bc:35 has sent a packet to switch 2, and that port 1 of switch 2 handled the incoming packet.

  6. Checking the host address and attachment point to the node/switch:
  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/restconf/operational/network-topology:network-topology/topology/flow:1/

This will return the following:

--[cut]--
<node>
    <node-id>host:c2:5f:c0:14:f3:1d</node-id>
    <termination-point>
        <tp-id>host:c2:5f:c0:14:f3:1d</tp-id>
    </termination-point>
    <attachment-points>
        <tp-id>openflow:3:1</tp-id>
        <corresponding-tp>host:c2:5f:c0:14:f3:1d</corresponding-tp>
        <active>true</active>
    </attachment-points>
    <addresses>
        <id>2</id>
        <mac>c2:5f:c0:14:f3:1d</mac>
        <last-seen>1462650434613</last-seen>
        <ip>10.0.0.3</ip>
        <first-seen>1462650434613</first-seen>
    </addresses>
    <id>c2:5f:c0:14:f3:1d</id>
</node>
--[cut]--

The addresses element contains the mapping between the MAC address and the IP address, and attachment-points defines the mapping between the MAC address and the switch port.

  7. Checking the spanning tree protocol status for each link.

The spanning tree protocol status can be either forwarding, meaning packets are flowing on an active link, or discarding, indicating packets are not sent as the link is inactive.

To check the link status, send this request:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:2/node-connector/openflow:2:2

This will return the following:

{
    "node-connector": [
        {
            "id": "openflow:2:2",
            --[cut]--
            "stp-status-aware-node-connector:status": "forwarding",
            "opendaylight-port-statistics:flow-capable-node-connector-statistics": {}
        }
    ]
}

In this case, all packets coming in port 2 of switch 2 will be forwarded on the established link.
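If you want to scan the STP status of every port of a switch at once (a sketch assuming curl, python, and grep are available; it simply filters the node's operational data for the status field shown above):

# List the STP status reported for all node-connectors of switch openflow:2
$ curl -s -u admin:admin -H "Accept: application/json" \
  http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:2 \
  | python -m json.tool | grep "stp-status"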

  8. Checking created links.

In order to check the links created, we are going to send the same request as the one sent at step 6, but we will focus on a different part of the response:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/restconf/operational/network-topology:network-topology/topology/flow:1/

The different part this time is the following:

--[cut]--
<link>
    <link-id>host:7a:e4:ba:4d:bc:35/openflow:2:1</link-id>
    <source>
        <source-tp>host:7a:e4:ba:4d:bc:35</source-tp>
        <source-node>host:7a:e4:ba:4d:bc:35</source-node>
    </source>
    <destination>
        <dest-node>openflow:2</dest-node>
        <dest-tp>openflow:2:1</dest-tp>
    </destination>
</link>
<link>
    <link-id>openflow:3:1/host:c2:5f:c0:14:f3:1d</link-id>
    <source>
        <source-tp>openflow:3:1</source-tp>
        <source-node>openflow:3</source-node>
    </source>
    <destination>
        <dest-node>host:c2:5f:c0:14:f3:1d</dest-node>
        <dest-tp>host:c2:5f:c0:14:f3:1d</dest-tp>
    </destination>
</link>
--[cut]--

This represents the links that were established while setting up the topology earlier. It also provides the source node, destination node, and termination points.

How it works...

The L2Switch project leverages the OpenFlowPlugin project, which provides the basic communication channel between OpenFlow-capable switches and OpenDaylight. The layer 2 discovery is handled by an ARP listener/responder; using it, OpenDaylight is able to learn and track network entity addresses. Finally, using graph algorithms, it is able to detect the shortest path and remove loops within the network.

There's more...

It is possible to change or extend the basic configuration of the L2Switch component to perform more accurate operations.

Configuring L2Switch

We have presented L2Switch usage with the default configuration.

To change the configuration, here are the steps to follow:

  1. Execute the first two steps mentioned previously (start OpenDaylight and install the L2Switch feature).
  2. Stop OpenDaylight:
opendaylight-user@root>logout
  3. Navigate to $ODL_ROOT/etc/opendaylight/karaf/.
  4. Open the configuration file you want to modify.
  5. Perform your modification.
Be very careful with the configuration files and their values: change only what is needed, or else you could break functionality.
  6. Save the file and re-execute the steps mentioned in the How to do it section.

The new configuration should now be applied.
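As a minimal sketch of that workflow from the distribution's root directory (the exact file names under etc/opendaylight/karaf/ vary with the installed features, so the listing below is there to discover them):

# List the configuration files shipped by the installed features
$ ls $ODL_ROOT/etc/opendaylight/karaf/
# After editing the relevant file, restart the controller to apply the change
$ cd $ODL_ROOT && ./bin/karaf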

Bonding links using LACP

The Link Aggregation Control Protocol (LACP) project within OpenDaylight implements LACP.

It is used to auto-discover and aggregate links between the OpenDaylight-managed network and external equipment such as LACP-capable endpoints or switches. Using LACP increases the resilience of the link(s) and aggregates the bandwidth.

The LACP protocol was first released as the IEEE Ethernet specification 802.3ad, but was later moved to the Bridging and Management group as the 802.1AX specification.

The LACP module will listen for LACP control packets that are generated from legacy switches (non-OpenFlow enabled).

Getting ready

This recipe requires an OpenFlow switch. If you don't have any, you can use a Mininet-VM with OvS installed.

You can download Mininet-VM from their website:

https://github.com/mininet/mininet/wiki/Mininet-VM-Images

OvS users:
You must use OvS version 2.1 or later so that it can handle group tables. If you previously downloaded a Mininet-VM, you can create a new VM using its disk and then update the OvS version within Mininet. To perform the update, run the following commands inside the Mininet VM:
$ cd /home/mininet/mininet/util
$ ./install.sh -V 2.3.1
This script will try to update your packages, but this operation can fail. If it does, run the following command yourself, then re-execute the script:
$ sudo apt-get update --fix-missing
Then rerun the install script. After a couple of minutes, the new version of OvS should be installed:
mininet@mininet-vm:~$ sudo ovs-vsctl show
1077578e-f495-46a1-a96b-441223e7cc22
    ovs_version: "2.3.1"

This recipe will be presented using a Mininet-VM with OvS 2.3.1.

In order to use LACP, you have to ensure that legacy (non-OpenFlow) switches are configured with LACP mode active and a long timeout, to allow the LACP plugin to respond to their messages.

The sample code for this recipe is available at:

https://github.com/jgoodyear/OpenDaylightCookbook/tree/master/chapter1/chapter1-recipe5

How to do it...

Perform the following steps:

  1. Start your OpenDaylight distribution using the karaf script. Using this script will give you access to the Karaf CLI:
$ ./bin/karaf 
  2. Install the user-facing feature responsible for pulling in all dependencies needed to enable LACP functionality:
opendaylight-user@root>feature:install odl-lacp-ui  

It might take a few minutes to complete the installation.

  3. Creating a network using Mininet:
  • Log in to Mininet-VM using:
    • Username: mininet
    • Password: mininet
  • Create the topology:

In order to do so, use the following command:

mininet@mininet-vm:~$ sudo mn --controller=remote,ip=${CONTROLLER_IP} --topo=linear,1 --switch ovsk,protocols=OpenFlow13  

This command will create a virtual network containing one switch, connected to ${CONTROLLER_IP}.

We will end up with one OpenFlow node in the opendaylight-inventory:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/restconf/operational/opendaylight-inventory:nodes

This request will return the following:

--[cut]--
{
    "id": "openflow:1",
    --[cut]--
}
  4. Open a new terminal to access your Mininet instance and verify that the flow entry handling LACP packets is installed:
mininet@mininet-vm:~$ sudo ovs-ofctl -O OpenFlow13 dump-flows s1
OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x3000000000000003, duration=185.98s, table=0, n_packets=0, n_bytes=0, priority=5,dl_dst=01:80:c2:00:00:02,dl_type=0x8809 actions=CONTROLLER:65535

The flow is using ether type 0x8809, which is the one defined for LACP.

  5. From the Mininet CLI, let's add a new link between switch1 (s1) and host1 (h1), and then aggregate the two links. The Mininet CLI is where you ended up after creating the topology in step 3:
mininet> py net.addLink(s1, net.get('h1'))
<mininet.link.Link object at 0x7fe1fa0f17d0>
mininet> py s1.attach('s1-eth2')
  6. Configure host1 (h1) to act as your legacy switch. To do that, we will create a bond interface with its mode set to LACP. In order to do so, we need to create a new file under /etc/modprobe.d in your Mininet instance.

Use the terminal window opened at step 4 to access this directory and create a file bonding.conf with this content:

alias bond0 bonding 
options bonding mode=4

mode=4 refers to LACP, and by default the timeout is set to be long.

  7. Using the Mininet CLI, let's create and configure the bond interface, add both physical interfaces of the host, h1-eth0 and h1-eth1, as members of the bond interface, and then set the interface up:
mininet> py net.get('h1').cmd('modprobe bonding')
mininet> py net.get('h1').cmd('ip link add bond0 type bond')
mininet> py net.get('h1').cmd('ip link set bond0 address ${MAC_ADDRESS}')
mininet> py net.get('h1').cmd('ip link set h1-eth0 down')
mininet> py net.get('h1').cmd('ip link set h1-eth0 master bond0')
mininet> py net.get('h1').cmd('ip link set h1-eth1 down')
mininet> py net.get('h1').cmd('ip link set h1-eth1 master bond0')
mininet> py net.get('h1').cmd('ip link set bond0 up')

Make sure to change ${MAC_ADDRESS} with an appropriate MAC address.

Once the bond0 interface is up, host1 will send LACP packets to switch1. OpenDaylight's LACP module will then create the link aggregation group on switch1 (s1).

To visualize the bond interface, you can use the following command:

mininet> py net.get('h1').cmd('cat /proc/net/bonding/bond0')  
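If you want to confirm that LACP frames are actually reaching the switch (a sketch assuming tcpdump is installed in the Mininet VM; s1-eth2 is the switch port added in step 5):

# LACP frames are slow-protocol frames sent to 01:80:c2:00:00:02 with ether type 0x8809
mininet@mininet-vm:~$ sudo tcpdump -i s1-eth2 -e ether proto 0x8809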
  8. Finally, let's look at switch1's group table; there should be a new entry with type=select:
mininet@mininet-vm:~$ sudo ovs-ofctl -O OpenFlow13 dump-groups s1
OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):
group_id=41238,type=select,bucket=weight:0,actions=output:1,bucket=weight:0,actions=output:2
group_id=48742,type=all,bucket=weight:0,actions=drop

Let's focus on the first entry: the group type is select, which means each packet is processed by a single bucket in the group; here the group has two buckets assigned the same weight. Each bucket represents a given port on the switch, port 1 (s1-eth1) and port 2 (s1-eth2) respectively in this example.

  9. To apply the link aggregation group on the switch, flows should reference the group_id of the established group table entry, which in our case is group_id=41238. The flow presented here matches ARP Ethernet frames (dl_type=0x0806):
sudo ovs-ofctl -O OpenFlow13 add-flow s1 dl_type=0x0806,dl_src=SRC_MAC,dl_dst=DST_MAC,actions=group:41238

How it works...

The LACP project leverages the OpenFlowPlugin project, which provides the basic communication channel between OpenFlow-capable switches and OpenDaylight. It implements the Link Aggregation Control Protocol as a service in MD-SAL. Using the packet processing service, it receives and processes LACP packets. Based on a periodic state machine, it decides whether or not to maintain an aggregation.

Changing user authentication

OpenDaylight's security is, in part, provided by the AAA project, which implements mechanisms for:

  • Authentication: Used to authenticate the users
  • Authorization: Used to authorize access to resources for a given user
  • Accounting: Used to record users' access to resources

When you install any feature, AAA authentication is installed along with it. It provides two users by default:

  • User admin with password admin
  • User user with password user

Getting ready

How to do it...

Perform the following steps:

  1. Start your OpenDaylight distribution using the karaf script. Using this script will give you access to the Karaf CLI:
$ ./bin/karaf 
  2. Install the user-facing feature responsible for pulling in all dependencies needed to enable user authentication:
opendaylight-user@root>feature:install odl-aaa-authn  

It might take a few minutes to complete the installation.

  3. To retrieve the list of existing users, send the following request:
  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://localhost:8181/auth/v1/users

This request will return the following:

{
    "users": [
        {
            "userid": "admin@sdn",
            "name": "admin",
            "description": "admin user",
            "enabled": true,
            "email": "",
            "password": "**********",
            "salt": "**********",
            "domainid": "sdn"
        },
        {
            "userid": "user@sdn",
            "name": "user",
            "description": "user user",
            "enabled": true,
            "email": "",
            "password": "**********",
            "salt": "**********",
            "domainid": "sdn"
        }
    ]
}
  4. Update the configuration of a user.

First, you need the userid that can be retrieved using the previous request. For this tutorial, we will use userid=user@sdn.

To update the password for this user, send the following request:

  • Type: PUT
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

This is the basic admin/admin authorization. We will not modify this one.

  • Payload:
{
    "userid": "user@sdn",
    "name": "user",
    "description": "user user",
    "enabled": true,
    "email": "",
    "password": "newpassword",
    "domainid": "sdn"
}
  • URL: http://localhost:8181/auth/v1/users/user@sdn

Once sent, you will receive the updated payload back as an acknowledgment.
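As a command-line sketch of this update (assuming curl, with the payload above saved in a file named update-user.json, a name chosen here purely for illustration):

# Update the user@sdn entry; authentication still uses the unchanged admin account
$ curl -u admin:admin -X PUT \
  -H "Content-Type: application/json" \
  -d @update-user.json \
  http://localhost:8181/auth/v1/users/user@sdn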

  5. Try your new user's password. Open your browser and go to http://localhost:8181/auth/v1/users; you should be asked for credentials. Use:
    • Username: user
    • Password: newpassword

You should now be logged in with the new, updated password for the user.

How it works...

The AAA project supports role-based access control (RBAC) based on the Apache Shiro permissions system. It defines a REST application used to interact with the H2 database. Each table has its own REST endpoint that can be used with a REST client to modify the H2 database content, such as user information.

OpenDaylight clustering

The objective of OpenDaylight clustering is to have a set of nodes providing a fault-tolerant, decentralized, peer-to-peer membership with no single point of failure. From a networking perspective, clustering is when you have a group of compute nodes working together to achieve a common function or objective.

Getting ready

This recipe requires three VMs. To create them, we will use Vagrant and the cluster-nodes repository available at https://github.com/adetalhouet/cluster-nodes, as shown in the following steps.

How to do it...

Perform the following steps:

  1. Create three VMs.

The repository mentioned in the Getting ready section provides a Vagrantfile that spawns VMs with the following network characteristics:

  • Adapter 1: NAT
  • Adapter 2: Bridge en0: Wi-Fi (AirPort)
  • Static IP address: 192.168.50.15X (X being the number of the node)
  • Adapter type: paravirtualized

These are the steps to follow:

$ git clone https://github.com/adetalhouet/cluster-nodes.git  
$ cd cluster-nodes
$ export NUM_OF_NODES=3
$ vagrant up

After a few minutes, to make sure the VMs are correctly running, execute the following command in the cluster-nodes folder:

$ vagrant status 
Current machine states:
node-1 running (virtualbox)
node-2 running (virtualbox)
node-3 running (virtualbox)

This environment represents multiple VMs. The VMs are all listed above with their current state. For more information about a specific VM, run vagrant status NAME.

The credentials of the VMs are:

  • User: vagrant
  • Password: vagrant

We now have three VMs available at those IP addresses:

  • 192.168.50.151
  • 192.168.50.152
  • 192.168.50.153
  2. Prepare the cluster deployment.

In order to deploy the cluster, we will use the cluster-deployer script provided by OpenDaylight:

$ git clone https://git.opendaylight.org/gerrit/integration/test.git 
$ cd test/tools/clustering/cluster-deployer/

You will need the following information:

  • Your VMs/containers IP addresses:

192.168.50.151, 192.168.50.152, 192.168.50.153

  • Their credentials (must be the same for all the VMs/containers):

vagrant/vagrant

  • The path to the distribution to deploy:

$ODL_ROOT

  • The cluster's configuration files, located under the templates/multi-node-test directory:
$ cd templates/multi-node-test/ 
$ ls -1
akka.conf.template
jolokia.xml.template
module-shards.conf.template
modules.conf.template
org.apache.karaf.features.cfg.template
org.apache.karaf.management.cfg.template
  3. Deploy the cluster.

We are currently located in the cluster-deployer folder:

$ pwd 
test/tools/clustering/cluster-deployer

We need to create a temp folder, so the deployment script can put some temporary files in there:

$ mkdir temp 

Your directory tree should look like this:

$ tree 
.
├── cluster-nodes
├── distribution-karaf-0.4.0-Beryllium.zip
└── test
└── tools
└── clustering
└── cluster-deployer
├── deploy.py
├── kill_controller.sh
├── remote_host.py
├── remote_host.pyc
├── restart.py
├── temp
└── templates
└── multi-node-test

Now let's deploy the cluster using this command:

$ python deploy.py --clean --distribution=../../../../distribution-karaf-0.4.0-Beryllium.zip --rootdir=/home/vagrant --hosts=192.168.50.151,192.168.50.152,192.168.50.153 --user=vagrant --password=vagrant --template=multi-node-test 

If the process went fine, you should see deployment logs similar to the ones available at:

https://github.com/jgoodyear/OpenDaylightCookbook/tree/master/chapter1/chapter1-recipe8

  4. Verify the deployment.

Let's use Jolokia to read the cluster nodes' data stores. First, request the network-topology shard of the config data store on node 1, located at 192.168.50.151:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://192.168.50.151:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-topology-config,type=DistributedConfigDatastore

This request will return the following:

{
    "request": {
        "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-topology-config,type=DistributedConfigDatastore",
        "type": "read"
    },
    "status": 200,
    "timestamp": 1462739174,
    "value": {
        --[cut]--
        "FollowerInfo": [
            {
                "active": true,
                "id": "member-2-shard-topology-config",
                "matchIndex": -1,
                "nextIndex": 0,
                "timeSinceLastActivity": "00:00:00.066"
            },
            {
                "active": true,
                "id": "member-3-shard-topology-config",
                "matchIndex": -1,
                "nextIndex": 0,
                "timeSinceLastActivity": "00:00:00.067"
            }
        ],
        --[cut]--
        "Leader": "member-1-shard-topology-config",
        "PeerAddresses": "member-2-shard-topology-config: akka.tcp://opendaylight-cluster-data@192.168.50.152:2550/user/shardmanager-config/member-2-shard-topology-config, member-3-shard-topology-config: akka.tcp://opendaylight-cluster-data@192.168.50.153:2550/user/shardmanager-config/member-3-shard-topology-config",
        "RaftState": "Leader",
        --[cut]--
        "ShardName": "member-1-shard-topology-config",
        "VotedFor": "member-1-shard-topology-config",
        --[cut]--
}

The result presents a lot of interesting information, such as the leader of the requested shard, which can be seen under Leader. We can also see the current state (under active) of the followers for this particular shard, each represented by its id. Finally, it provides the addresses of the peers. These are Akka addresses, as Akka is the toolkit used to wire the nodes together within the cluster.
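To compare the Raft state of this shard across all three members in one go, a small loop works well (a sketch assuming curl and grep are available; as noted below, the member number in the MBean name must match the node being queried):

# Query each member's view of the topology-config shard and print its Raft role
$ for i in 1 2 3; do
    curl -s -u admin:admin \
      "http://192.168.50.15$i:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-$i-shard-topology-config,type=DistributedConfigDatastore" \
      | grep -o '"RaftState":"[^"]*"'
  done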

By requesting the same shard on another peer, you would see similar information. For instance, for node 2 located under 192.168.50.152:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://192.168.50.152:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-config,type=DistributedConfigDatastore
Make sure to update the digit after member- in the shard name, as it must match the node you're querying. This request will return the following:

{
    "request": {
        "mbean": "org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-config,type=DistributedConfigDatastore",
        "type": "read"
    },
    "status": 200,
    "timestamp": 1462739791,
    "value": {
        --[cut]--
        "Leader": "member-1-shard-topology-config",
        "PeerAddresses": "member-1-shard-topology-config: akka.tcp://opendaylight-cluster-data@192.168.50.151:2550/user/shardmanager-config/member-1-shard-topology-config, member-3-shard-topology-config: akka.tcp://opendaylight-cluster-data@192.168.50.153:2550/user/shardmanager-config/member-3-shard-topology-config",
        "RaftState": "Follower",
        --[cut]--
        "ShardName": "member-2-shard-topology-config",
        "VotedFor": "member-1-shard-topology-config",
        --[cut]--
    }
}

We can see the peers for this shard, and that this node voted for node 1 to be elected the shard leader (VotedFor); its RaftState is Follower.

How it works...

OpenDaylight clustering relies heavily on Akka to provide the building blocks for the clustering components, especially for operations on remote shards. The main reason for using Akka is that it suits the existing design of MD-SAL, which is already based on the actor model.

OpenDaylight clustering components include:

  • ClusteringConfiguration: Defines information about the members of the cluster and what data they contain.
  • ClusteringService: Reads the cluster configuration, resolves each member's name to its IP address/hostname, and maintains the registration of components that are interested in being notified of member status changes.
  • DistributedDataStore: Responsible for the implementation of the DOMStore, which replaces the InMemoryDataStore. It creates the local shard actors in accordance with the cluster configuration, and creates the listener wrapper actors when a consumer registers a listener.
  • Shard: A processor that contains some of the data in the system. A shard is an actor, communicating via messages that are very similar to the operations on the DOMStore interface. When a shard receives a message, it logs the event in a journal, which can then be used to recover the state of the data store; the state itself is maintained in an InMemoryDataStore object.

See also
