Learning Karaf Cellar: Build and implement a complete clustering solution for the Apache Karaf OSGi container

Multiple Apache Karaf containers

Natively, Apache Karaf provides a high availability mechanism based on a locking system. It's a master/slave configuration, following an active/passive pattern. Apache Karaf supports two kinds of locks, which are as follows:

  • Lock on the filesystem
  • Lock on a database (JDBC)

When the first Apache Karaf instance starts, if the lock is available, the instance acquires the lock and becomes the master.

If another instance starts, as the lock is not available (held by the master), the instance is in standby (slave) mode and periodically checks the lock.

When you use a lock on a filesystem, all instances have to share the same filesystem. The lock is a simple file. If the Apache Karaf instances are located on different machines, it means that the filesystem storing the lock has to be available for all machines (using NFS, CIFS, SAN, and so on).

In order to enable the filesystem locking system, you have to update the etc/system.properties configuration file as follows:

karaf.lock=true
karaf.lock.class=org.apache.karaf.main.SimpleFileLock
karaf.lock.dir=/path/to/lockfile
karaf.lock.delay=10
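
As an illustration, here is a minimal sketch of how two instances sharing the same lock file might be started; the installation path and the shared mount point are hypothetical:

# both machines point karaf.lock.dir at the same shared mount (NFS, CIFS, SAN, and so on)
# on node1: started first, it acquires the lock file and becomes the master
/opt/karaf/bin/start

# on node2: the lock is already held, so the instance stays in standby
# and re-checks the lock at every karaf.lock.delay interval
/opt/karaf/bin/start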

When a shared filesystem is not an option (for security or infrastructure reasons, for instance), you can use a database to store the lock. With database locking, Apache Karaf uses a lock on a table (the KARAF_LOCK table by default). Any database that supports JDBC can be used.

The configuration is also defined in the etc/system.properties configuration file as follows:

karaf.lock=true
karaf.lock.class=org.apache.karaf.main.DefaultJDBCLock
karaf.lock.level=50
karaf.lock.delay=10
karaf.lock.jdbc.url=jdbc:derby://dbserver:1527/sample
karaf.lock.jdbc.driver=org.apache.derby.jdbc.ClientDriver
karaf.lock.jdbc.user=user
karaf.lock.jdbc.password=password
karaf.lock.jdbc.table=KARAF_LOCK
karaf.lock.jdbc.clustername=karaf
karaf.lock.jdbc.timeout=30

You have to copy the JDBC driver JAR file into the lib/ext folder. Apache Karaf provides JDBC lock implementations dedicated to specific databases: DefaultJDBCLock is the generic one, while OracleJDBCLock, DerbyJDBCLock, MySQLJDBCLock, PostgreSQLJDBCLock, and SQLServerJDBCLock target Oracle, Derby, MySQL, PostgreSQL, and Microsoft SQL Server databases, respectively.
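
For instance, with the Derby configuration shown previously, copying the driver could look like the following; the driver file name and the Karaf installation path are illustrative:

# copy the Derby client JDBC driver into Karaf's lib/ext folder
cp derbyclient.jar /opt/karaf/lib/ext/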

The Apache Karaf locking mechanism provides a good solution for high availability. However, only one Apache Karaf instance is active (the master); all the other instances are inactive (standby/slave).

In order to provide both high availability and performance scalability, having multiple active Apache Karaf instances is a great advantage.

Provisioning clusters

Imagine you have a farm of Apache Karaf containers, each on a different machine. If you want to provision an OSGi application on the container instances, you have to connect to each container and install the features.

This means that you have to perform the following tasks:

  • Log on to each container in order to perform the same action again and again (as sketched after this list)
  • If necessary, adapt the configuration to each local instance (port number, file path, and so on)
  • Repeat the same actions whenever a new instance is added
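
To illustrate how repetitive this is, the following sketch installs the same feature on every container of a hypothetical three-node farm through the Karaf SSH console; the host names, the feature name, and the Karaf 3.x-style feature:install command are assumptions, not taken from this chapter:

# connect to the SSH console of each container (default port 8101)
# and install the same feature, one container at a time
for host in karaf-node1 karaf-node2 karaf-node3; do
  ssh -p 8101 karaf@$host "feature:install my-app-feature"
done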

Basically, this means a lot of manual actions with a real risk of error. This is where a provisioning cluster helps.

The purpose of a provisioning cluster is to keep multiple container instances synchronized. For Apache Karaf, it means that a change in the state of a resource is broadcast to all the containers that are members of the same cluster.

A resource can be a bundle, a feature, a configuration, or any other kind of resource local to a node. A local action on such a resource sends an event that updates the other members of the cluster.

On the other hand, it's also possible to create a cluster event that is sent to all the members to update them.

Basically, this means that a provisioning cluster performs the following tasks:

  • Creates an event: The event can result from a local change or be created by hand
  • Broadcasts the event: The event is sent to all the members of the cluster

While provisioning is the primary purpose of a provisioning cluster, it can also provide additional features that are useful in a cluster topology; for instance, centralized logs, load balancers, and session replication can be provided on top of a provisioning cluster. In the next chapters, we will see Karaf Cellar as a provisioning cluster solution.
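
As a teaser of what the next chapters cover, the following sketch shows how a single cluster command, run on any node, could replace the per-container loop shown earlier; the cluster group name (default), the feature name, and the exact command syntax (which may vary by Cellar version) are illustrative:

# a cluster command creates a cluster event and broadcasts it to all the
# members of the cluster group, so every node installs the feature
ssh -p 8101 karaf@karaf-node1 "cluster:feature-install default my-app-feature"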
