OpenDaylight Cookbook
Deploy and operate software-defined networking in your organization

Product type: Paperback
Published in: Jun 2017
Publisher: Packt
ISBN-13: 9781786462305
Length: 336 pages
Edition: 1st Edition
Authors (6): Rashmi Pujar, Icaro Camelo, Mohamed Elserngawy, Jamie Goodyear, Mathieu Lemay, Yrineu Rodrigues
Table of Contents (9)

Preface
1. OpenDaylight Fundamentals
2. Virtual Customer Edge
3. Dynamic Interconnects
4. Network Virtualization
5. Virtual Core and Aggregation
6. Intent and Policy Networking
7. OpenDaylight Container Customizations
8. Authentication and Authorization

OpenDaylight clustering

The objective of OpenDaylight clustering is to have a set of nodes providing fault-tolerant, decentralized, peer-to-peer membership with no single point of failure. From a networking perspective, clustering means having a group of compute nodes working together to achieve a common function or objective.

Getting ready

This recipe requires Vagrant and VirtualBox to create the virtual machines, the cluster-nodes repository available at https://github.com/adetalhouet/cluster-nodes, and an OpenDaylight Beryllium distribution; $ODL_ROOT refers to the path of that distribution.

How to do it...

Perform the following steps:

  1. Create three VMs.

The repository mentioned in the Getting ready section provides a Vagrantfile that spawns VMs with the following network characteristics (a sketch of such a Vagrantfile follows the list):

  • Adapter 1: NAT
  • Adapter 2: Bridge en0: Wi-Fi (AirPort)
  • Static IP address: 192.168.50.15X (where X is the node number)
  • Adapter type: paravirtualized
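
For reference, here is a minimal sketch of what such a Vagrantfile could look like. It is an illustration based on the characteristics above, not the actual file from the cluster-nodes repository; in particular, the base box name is an assumption:

# Illustrative Vagrantfile sketch; the real one in cluster-nodes may differ.
NUM_OF_NODES = (ENV['NUM_OF_NODES'] || 3).to_i

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64" # assumed base box

  (1..NUM_OF_NODES).each do |i|
    config.vm.define "node-#{i}" do |node|
      # Adapter 2: bridged to en0 with a static IP (adapter 1 is Vagrant's default NAT).
      node.vm.network "public_network",
                      bridge: "en0: Wi-Fi (AirPort)",
                      ip: "192.168.50.15#{i}"
      node.vm.provider "virtualbox" do |vb|
        # Use the paravirtualized (virtio) type for the bridged adapter.
        vb.customize ["modifyvm", :id, "--nictype2", "virtio"]
      end
    end
  end
end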

To create the VMs, clone the repository and bring up the nodes:

$ git clone https://github.com/adetalhouet/cluster-nodes.git  
$ cd cluster-nodes
$ export NUM_OF_NODES=3
$ vagrant up

After a few minutes, to make sure the VMs are correctly running, execute the following command in the cluster-nodes folder:

$ vagrant status
Current machine states:

node-1                    running (virtualbox)
node-2                    running (virtualbox)
node-3                    running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

The credentials of the VMs are:

  • User: vagrant
  • Password: vagrant

We now have three VMs available at those IP addresses:

  • 192.168.50.151
  • 192.168.50.152
  • 192.168.50.153
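
As a quick sanity check (not part of the original recipe), you can verify that each node is reachable and accepts SSH:

$ for i in 1 2 3; do ping -c 1 192.168.50.15$i; done
$ ssh vagrant@192.168.50.151
(password: vagrant)
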
  2. Prepare the cluster deployment.

In order to deploy the cluster, we will use the cluster-deployer script provided by OpenDaylight:

$ git clone https://git.opendaylight.org/gerrit/integration/test.git 
$ cd test/tools/clustering/cluster-deployer/

You will need the following information:

  • The IP addresses of your VMs/containers:

192.168.50.151, 192.168.50.152, 192.168.50.153

  • Their credentials (must be the same for all the VMs/containers):

vagrant/vagrant

  • The path to the distribution to deploy:

$ODL_ROOT

  • The cluster's configuration files, located under the templates/multi-node-test directory:
$ cd templates/multi-node-test/ 
$ ls -1
akka.conf.template
jolokia.xml.template
module-shards.conf.template
modules.conf.template
org.apache.karaf.features.cfg.template
org.apache.karaf.management.cfg.template
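
To give an idea of what these templates produce, the following is a trimmed, illustrative excerpt of the kind of akka.conf the deployer renders for node 1. The values follow this recipe's addressing and match the PeerAddresses shown later in this recipe, but the exact template content may differ:

odl-cluster-data {
  akka {
    remote {
      netty.tcp {
        hostname = "192.168.50.151"    # this node's address
        port = 2550                    # the clustering port
      }
    }
    cluster {
      seed-nodes = ["akka.tcp://opendaylight-cluster-data@192.168.50.151:2550",
                    "akka.tcp://opendaylight-cluster-data@192.168.50.152:2550",
                    "akka.tcp://opendaylight-cluster-data@192.168.50.153:2550"]
      roles = ["member-1"]             # yields shard names such as member-1-shard-topology-config
    }
  }
}
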
  3. Deploy the cluster.

We are currently located in the cluster-deployer folder:

$ pwd 
test/tools/clustering/cluster-deployer

We need to create a temp folder so that the deployment script can store its temporary files there:

$ mkdir temp 

Your directory tree should look like this:

$ tree
.
├── cluster-nodes
├── distribution-karaf-0.4.0-Beryllium.zip
└── test
    └── tools
        └── clustering
            └── cluster-deployer
                ├── deploy.py
                ├── kill_controller.sh
                ├── remote_host.py
                ├── remote_host.pyc
                ├── restart.py
                ├── temp
                └── templates
                    └── multi-node-test

Now let's deploy the cluster using this command:

$ python deploy.py --clean --distribution=../../../../distribution-karaf-0.4.0-Beryllium.zip --rootdir=/home/vagrant --hosts=192.168.50.151,192.168.50.152,192.168.50.153 --user=vagrant --password=vagrant --template=multi-node-test 

If the process goes well, you should see deployment logs similar to those available at:

https://github.com/jgoodyear/OpenDaylightCookbook/tree/master/chapter1/chapter1-recipe8
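
As an additional, hypothetical verification, you can confirm that a Karaf process is running on each node before querying it:

$ ssh vagrant@192.168.50.151 "ps aux | grep -i karaf | grep -v grep"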

  4. Verify the deployment.

Let's use Jolokia to read the cluster nodes' data stores. First, request the network-topology shard of the config data store on node 1, located at 192.168.50.151:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://192.168.50.151:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-topology-config,type=DistributedConfigDatastore
{
  "request": {
    "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-topology-config,type=DistributedConfigDatastore",
    "type": "read"
  },
  "status": 200,
  "timestamp": 1462739174,
  "value": {
    --[cut]--
    "FollowerInfo": [
      {
        "active": true,
        "id": "member-2-shard-topology-config",
        "matchIndex": -1,
        "nextIndex": 0,
        "timeSinceLastActivity": "00:00:00.066"
      },
      {
        "active": true,
        "id": "member-3-shard-topology-config",
        "matchIndex": -1,
        "nextIndex": 0,
        "timeSinceLastActivity": "00:00:00.067"
      }
    ],
    --[cut]--
    "Leader": "member-1-shard-topology-config",
    "PeerAddresses": "member-2-shard-topology-config: akka.tcp://opendaylight-cluster-data@192.168.50.152:2550/user/shardmanager-config/member-2-shard-topology-config, member-3-shard-topology-config: akka.tcp://opendaylight-cluster-data@192.168.50.153:2550/user/shardmanager-config/member-3-shard-topology-config",
    "RaftState": "Leader",
    --[cut]--
    "ShardName": "member-1-shard-topology-config",
    "VotedFor": "member-1-shard-topology-config",
    --[cut]--
  }
}

The result presents a lot of interesting information, such as the leader of the requested shard, shown under Leader. We can also see the current state (under active) of each follower of this particular shard, identified by its id. Finally, it provides the addresses of the peers; those addresses live in the Akka domain, as Akka is the toolkit used to wire the nodes together within the cluster.
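
The same request can be issued from the command line, for example with curl (the Authorization header above is simply the Base64 encoding of the admin:admin credentials):

$ curl -u admin:admin "http://192.168.50.151:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-topology-config,type=DistributedConfigDatastore"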

By requesting the same shard on another peer, you would see similar information. For instance, for node 2, located at 192.168.50.152:

  • Type: GET
  • Headers:

Authorization: Basic YWRtaW46YWRtaW4=

  • URL: http://192.168.50.152:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-config,type=DistributedConfigDatastore
Make sure to update the digit after member- in the shard name, as it must match the node you're querying:
{
  "request": {
    "mbean": "org.opendaylight.controller:Category=Shards,name=member-2-shard-topology-config,type=DistributedConfigDatastore",
    "type": "read"
  },
  "status": 200,
  "timestamp": 1462739791,
  "value": {
    --[cut]--
    "Leader": "member-1-shard-topology-config",
    "PeerAddresses": "member-1-shard-topology-config: akka.tcp://opendaylight-cluster-data@192.168.50.151:2550/user/shardmanager-config/member-1-shard-topology-config, member-3-shard-topology-config: akka.tcp://opendaylight-cluster-data@192.168.50.153:2550/user/shardmanager-config/member-3-shard-topology-config",
    "RaftState": "Follower",
    --[cut]--
    "ShardName": "member-2-shard-topology-config",
    "VotedFor": "member-1-shard-topology-config",
    --[cut]--
  }
}

We can see the peers of this shard, and that this node voted for node 1 to be elected the shard leader.

How it works...

OpenDaylight clustering relies heavily on Akka to provide the building blocks for its clustering components, especially for operations on remote shards. The main reason for using Akka is that it suits the existing design of MD-SAL, which is already based on the actor model.

OpenDaylight clustering components include:

  • ClusteringConfiguration: The ClusteringConfiguration defines information about the members of the cluster, and what data they contain.
  • ClusteringService: The ClusteringService reads the cluster configuration, resolves the member's name to its IP address/hostname and maintains the registration of the components that are interested in being notified of member status changes.
  • DistributedDataStore: The DistributedDataStore is responsible for the implementation of the DOMStore, which replaces the InMemoryDataStore. It creates the local shard actors in accordance with the cluster configuration and creates the listener wrapper actors when a consumer registers a listener.
  • Shard: A shard is a processor that contains a portion of the system's data. A shard is an actor that communicates via messages, which are very similar to the operations on the DOMStore interface. When a shard receives a message, it logs the event in a journal, which can later be used to recover the state of the data store; the state itself is maintained in an InMemoryDataStore object.
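
To see which local shards the DistributedDataStore created on a member, you can query the shard manager MBean through Jolokia, in the same way the shard MBeans were queried earlier (the MBean name below follows the same naming pattern; verify it against your release):

$ curl -u admin:admin "http://192.168.50.151:8181/jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore"

Among the returned attributes, LocalShards should list the shard actors created from the cluster configuration.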

See also
