Apache Karaf Cookbook: Over 60 recipes to help you get the most out of your Apache Karaf deployments

Chapter 1. Apache Karaf for System Builders

In this chapter, we will cover the following topics:

  • Configuring production-ready logging in Apache Karaf
  • Creating our own custom Karaf command using a Maven archetype
  • Branding the Apache Karaf console
  • Deploying applications as a feature
  • Using JMX to monitor and administer Apache Karaf
  • Reconfiguring SSH access to Apache Karaf
  • Installing Apache Karaf as a service
  • Setting up Apache Karaf for high availability

Introduction

Experienced users of Apache Karaf will tell you that out of the box, Karaf provides you with the features and tools you'll need to deploy your application. However, to build a production-ready environment, you'll want to tweak things.

The recipes in this chapter are devoted to systems builders, the people who need to make their Apache Karaf instance production-ready and applications within it manageable.

Tip

New to Apache Karaf and OSGi?

Readers interested in obtaining a deeper understanding of Apache Karaf and its underlying technologies should consult Instant OSGi Starter by Jamie Goodyear and Johan Edstrom, and Learning Apache Karaf by Jamie Goodyear, Johan Edstrom, and Heath Kesler, both published by Packt Publishing.

Configuring production-ready logging in Apache Karaf

One of the first tasks administrators of Apache Karaf undertake is changing the default logging configuration to more production-ready settings.

To improve the default logging configuration, we'll perform the following tasks:

  • Update the logfile location to be outside the data folder. This helps administrators avoid accidentally wiping out logfiles when deleting runtime data.
  • Increase the logfile size. The default size of 1 MB is too small for most production deployments. Generally, we set this to 50 or 100 MB, depending on the available disk space.
  • Increase the number of logfiles we retain. There is no correct number of logfiles to retain. However, when disk space is cheap and available, keeping a large number of files is a preferred configuration.

How to do it…

Configuring Karaf's logging mechanism requires you to edit the etc/org.ops4j.pax.logging.cfg file. Open the file with your preferred editor and alter the entries shown in the following listing:

# Root logger
log4j.rootLogger=INFO, out, osgi:*
log4j.throwableRenderer=org.apache.log4j.OsgiThrowableRenderer

# File appender
log4j.appender.out=org.apache.log4j.RollingFileAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n
log4j.appender.out.file=${karaf.base}/log/karaf.log
log4j.appender.out.append=true
log4j.appender.out.maxFileSize=10MB
log4j.appender.out.maxBackupIndex=100

In the preceding configuration, we instruct Karaf to write logs to a log folder in the base installation directory, increase the logfile size to 10 MB, and increase the number of retained logfiles to 100.

When finished editing the file, save the changes. They will take effect shortly.

Tip

We can change the verbosity of logging by altering the log4j.rootLogger entry from INFO to DEBUG, WARN, ERROR, or TRACE.
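
Finer-grained verbosity can also be configured without touching the root logger. The following entry is not part of the book's listing; it is a minimal example using the log4j 1.x logger syntax that Pax Logging accepts, with a purely illustrative package name:

log4j.logger.org.apache.camel=DEBUG

The same change can be made at runtime from the console with the log:set command, for example log:set DEBUG org.apache.camel, without editing the file at all.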

How it works…

The logging system for Karaf is based on OPS4J Pax Logging with the log4j library acting as its backend. The configuration file, etc/org.ops4j.pax.logging.cfg, is used to define appenders, log levels, and so on. Let's take a look at the following default appender configuration and how we'll tweak it to become more production-ready:

# Root logger
log4j.rootLogger=INFO, out, osgi:*
log4j.throwableRenderer=org.apache.log4j.OsgiThrowableRenderer

# File appender
#log4j.appender.out=org.apache.log4j.RollingFileAppender
#log4j.appender.out.layout=org.apache.log4j.PatternLayout
#log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n
#log4j.appender.out.file=${karaf.data}/log/karaf.log
#log4j.appender.out.append=true
#log4j.appender.out.maxFileSize=1MB
#log4j.appender.out.maxBackupIndex=10

In the previous code, the File appender configuration sets up the default Karaf logging behavior. The initial configuration sets RollingFileAppender and constructs a log entry pattern. The remaining options dictate the location of the logfile, its size, and the number of logfiles to retain.

Karaf monitors the configuration file in the KARAF_HOME/etc folder. When the updates to the configuration file are read, the logging service is updated with the new value(s). The mechanism that allows this behavior is provided by File Install (available at http://felix.apache.org/site/apache-felix-file-install.html) and the OSGi Configuration Admin service. Have a look at the following figure:

[Figure: File Install monitors the KARAF_HOME/etc folder and feeds configuration changes to the OSGi Configuration Admin service]

As illustrated in the preceding figure, when a file in the KARAF_HOME/etc directory is created, deleted, or modified, the file scanner will pick up the event. Given a configuration file change (a file in the Java properties format), a configuration processor will process the entries and update the OSGi Configuration Admin service.
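
Your own bundles can hook into the same mechanism. The following class is not from the book; it is a minimal sketch using the standard OSGi Configuration Admin API, in which a ManagedService is called back whenever a matching etc/com.packt.sample.cfg file is created or changed (the PID and the greeting property are purely illustrative):

import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedService;

public class SampleConfigListener implements BundleActivator, ManagedService {

    public void start(BundleContext context) {
        // Register under the PID that matches the etc/<pid>.cfg file name
        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put(Constants.SERVICE_PID, "com.packt.sample");
        context.registerService(ManagedService.class.getName(), this, props);
    }

    public void stop(BundleContext context) {
        // The service registration is cleaned up automatically when the bundle stops
    }

    // Called by Configuration Admin with the current properties, including every
    // time File Install re-reads etc/com.packt.sample.cfg
    public void updated(Dictionary<String, ?> properties) throws ConfigurationException {
        if (properties != null) {
            System.out.println("greeting is now: " + properties.get("greeting"));
        }
    }
}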

Tip

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

There's more…

To further improve logging, you can provide the log4j library with an external logging location, separating the I/O requirements of logging from the base system at the expense of increased network traffic. This architecture is shown in the following figure:

[Figure: logging to an external volume, separating log I/O from the base system]

To achieve this logging architecture, you'll need to mount the external volume on the server on which Karaf is running.
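
For example, if the external volume is mounted at /mnt/logs (a hypothetical mount point), only the appender's file entry from the earlier listing needs to change:

log4j.appender.out.file=/mnt/logs/karaf/karaf.log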

See also

  • The Creating our own custom Karaf command using a Maven archetype recipe.

Creating our own custom Karaf command using a Maven archetype

The Karaf console provides a multitude of useful commands to interact with the OSGi runtime and manage deployed applications. As a systems builder, you may want to develop custom commands that integrate directly into Karaf so that you can automate tasks or interact directly with your applications.

Custom Karaf commands will appear in your container as a fully integrated component of the console, as shown in the following screenshot:

[Screenshot: the sample cookbook command accepting an option flag and an argument in the Karaf console]

The previous screenshot illustrates our sample cookbook command accepting an option flag and an argument. Let's dive into building your own command.

Getting ready

The ingredients of this recipe include the Apache Karaf distribution kit, access to a JDK, Maven, and a source code editor. The sample code for this recipe is available at https://github.com/jgoodyear/ApacheKarafCookbook/tree/master/chapter1/chapter1-recipe2.

How to do it…

  1. The first step is generating a template command project. To encourage building custom commands, the community has provided the following Maven archetype invocation to generate Karaf command projects:
    mvn archetype:generate \
      -DarchetypeGroupId=org.apache.karaf.archetypes \
      -DarchetypeArtifactId=karaf-command-archetype \
      -DarchetypeVersion=3.0.0 \
      -DgroupId=com.packt.chapter1 \
      -DartifactId=command \
      -Dversion=1.0.0-SNAPSHOT \
      -Dpackage=com.packt
    

    In the preceding archetype invocation, we supply the Maven project group and artifact names. The process will request you to supply a command name. Maven then generates a project template for your command.

  2. The next step is implementing your custom code. The custom command template project will supply you with a Maven POM file, Blueprint wiring (in the src/main/resources/OSGI-INF/blueprint directory), and custom command stub implementation (in the src/main/java/ directory). Edit these files as required to add your custom actions.
  3. The last step is building and deploying the custom command in Karaf. We build our command via the Maven invocation mvn install. Deploying it in Karaf only requires issuing a well-formed install command; to do this, invoke install -s mvn:groupId/artifactId in the Karaf console. Consider the following invocation:
    karaf@root()> install -s mvn:com.packt.chapter1/command
     Bundle ID: 88
    karaf@root()>
    

    The preceding invocation has the groupId value as com.packt.chapter1 and the artifactId value as command.

How it works…

The Maven archetype generates the POM build file, Java code, and Blueprint file for your custom command. Let's take a look at these key components.

The generated POM file contains all of the essential dependencies a Karaf command requires and sets up a basic Maven Bundle Plugin configuration. Edit this file to bring in additional libraries your command requires. Make sure that you update your bundle's build parameters accordingly. When this project is built, a bundle will be produced that can be installed directly into Karaf.
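
For instance, to call an additional library from your command, you would add its dependency to the generated POM; the artifact below is only an illustration and is not part of the archetype output:

<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-lang3</artifactId>
  <version>3.3.2</version>
</dependency>

If the library is itself an OSGi bundle, the default Import-Package instruction of * lets the Maven Bundle Plugin generate the required import automatically; otherwise the JAR has to be embedded or wrapped.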

Our custom command logic resides in the generated Java source file, which will be named after the command name you supplied. The generated command extends Karaf's OsgiCommandSupport class, which provides us with access to the underlying command session and OSGi environment. A Command annotation adorns our code; this provides the runtime with the command's scope, name, and description. Karaf also provides the Argument and Option annotations to simplify command-line argument and option processing.
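
As an illustration only (this is not the archetype's generated code), a completed action might look like the following minimal sketch, assuming the Karaf 3.0.x shell API packages and a hypothetical cookbook:greet command:

package com.packt;

import org.apache.karaf.shell.commands.Argument;
import org.apache.karaf.shell.commands.Command;
import org.apache.karaf.shell.commands.Option;
import org.apache.karaf.shell.console.OsgiCommandSupport;

@Command(scope = "cookbook", name = "greet", description = "Prints a greeting")
public class GreetCommand extends OsgiCommandSupport {

    // --loud switches the greeting to upper case
    @Option(name = "-l", aliases = { "--loud" }, description = "Shout the greeting", required = false, multiValued = false)
    boolean loud;

    // The first positional argument names who we greet
    @Argument(index = 0, name = "who", description = "Who to greet", required = true, multiValued = false)
    String who;

    @Override
    protected Object doExecute() throws Exception {
        String greeting = "Hello, " + who + "!";
        System.out.println(loud ? greeting.toUpperCase() : greeting);
        return null;
    }
}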

The Blueprint container wires our command implementation into the set of commands available in Karaf's console.
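
The Blueprint descriptor that does this wiring is short. The following is a rough sketch of its shape for the class above; the exact shell namespace version may differ between Karaf releases:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <command-bundle xmlns="http://karaf.apache.org/xmlns/shell/v1.1.0">
        <command>
            <action class="com.packt.GreetCommand"/>
        </command>
    </command-bundle>
</blueprint>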

Tip

For more information on extending Karaf's console, see http://karaf.apache.org/manual/latest/developers-guide/extending.html.

There's more…

Thanks to Apache Karaf's SSHD service and remote client, your custom commands can be leveraged to provide external command and control of your applications. Just pass your command and its parameters to the remote client and monitor the returned results.
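
For example, assuming the hypothetical cookbook:greet command shown earlier is installed, and the default SSH port and credentials, a script could run it through the remote client shipped in Karaf's bin folder:

bin/client -a 8101 -u karaf "cookbook:greet --loud world"

The output and exit code of bin/client can then be inspected by the calling script.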

See also

  • The Branding the Apache Karaf console recipe

Branding the Apache Karaf console

Apache Karaf is used as the runtime environment for production application platforms. In such deployments, it is common to have Karaf sporting a custom branding.

The Karaf community has made rebranding the runtime a simple task. Let's make our own for this book.

Getting ready

The ingredients of this recipe include the Apache Karaf distribution kit, access to a JDK, Maven, and a source code editor. The sample code for this recipe is available at https://github.com/jgoodyear/ApacheKarafCookbook/tree/master/chapter1/chapter1-recipe3.

How to do it…

  1. The first step is generating a Maven-based project structure. For this recipe, we only need to create a bare-bones Maven POM file, set its packaging to bundle, and include a build section.
  2. The next step is adding a resource directive to our POM file's build section. In our POM file, we add a resource directive to our build section, as shown in the following code:
    <resource>
      <directory>
        ${project.basedir}/src/main/resources
      </directory>
      <filtering>true</filtering>
      <includes>
        <include>**/*</include>
      </includes>
    </resource>

    We add a resource directive to our build section to instruct Maven to process the contents of our resources folder, filter any wildcards, and include the result in the generated bundle.

  3. Next, we configure the Maven Bundle Plugin as shown in the following code:
    <configuration>
      <instructions>
        <Bundle-SymbolicName>
          ${project.artifactId}
        </Bundle-SymbolicName>
        <Import-Package>*</Import-Package>
        <Private-Package>!*</Private-Package>
        <Export-Package>
          org.apache.karaf.branding
        </Export-Package>
        <Spring-Context>
          *;publish-context:=false
        </Spring-Context>
      </instructions>
    </configuration>

    We configured the Maven Bundle Plugin to set Bundle-SymbolicName to the project's artifactId and to export the org.apache.karaf.branding package. Using the artifactId as the symbolic name is a common convention among Karaf bundle developers. We export the Karaf branding package so that the Karaf runtime will identify the bundle as containing custom branding.

  4. The next step is creating our custom branding resource file. Returning to our project, we'll create a branding.properties file in the src/main/resources/org/apache/karaf/branding directory. This .properties file will contain ASCII and Jansi text characters, organized to produce your custom look. Using Maven resource filtering, you can use variable substitutions in the ${variable} format, as shown in the following code:
    ##
    welcome = \
    \u001B[33m\u001B[0m\n\
    \u001B[33m      _       ___  ____    ______  \u001B[0m\n\
    \u001B[33m     / \\    |_  ||_  _|  .' ___  | \u001B[0m\n\
    \u001B[33m    / _ \\     | |_/ /   / .'   \\_| \u001B[0m\n\
    \u001B[33m   / ___ \\    |  __'.   | |        \u001B[0m\n\
    \u001B[33m _/ /   \\ \\_ _| |  \\  \\_ \\ '.___.'\\ \u001B[0m\n\
    \u001B[33m|____| |____||____||____| '.____ .' \u001B[0m\n\
    \u001B[33m                                   \u001B[0m\n\
    \u001B[33m       Apache Karaf Cookbook       \u001B[0m\n\
    \u001B[33m Packt Publishing - http://www.packtpub.com\u001B[0m\n\
    \u001B[33m       (version ${project.version})\u001B[0m\n\
    \u001B[33m\u001B[0m\n\
    \u001B[33mHit '\u001B[1m<tab>\u001B[0m' for a list of available commands\u001B[0m\n\
    \u001B[33mand '\u001B[1m[cmd] --help\u001B[0m' for help on a specific command.\u001B[0m\n\
    \u001B[33mHit '\u001B[1m<ctrl-d>\u001B[0m' or '\u001B[1mosgi:shutdown\u001B[0m' to shutdown\u001B[0m\n\
    \u001B[33m\u001B[0m\n\

    In the preceding code, we use a combination of ASCII characters and Jansi text markup in the branding.properties file to produce simple text effects in Karaf, as shown in the following screenshot:

    [Screenshot: the custom branding banner displayed at Karaf startup]
  5. The final step is building and deploying our custom branding. We build our branding via the Maven invocation mvn install. After we build our branding bundle, we place a copy inside the KARAF_HOME/lib folder and then start the container. When the container boots, you will see our custom branding displayed.

How it works…

At boot, Apache Karaf checks its lib folder for any bundle that exports the org.apache.karaf.branding package. Upon detecting this resource, it reads the branding.properties file content and displays it as part of the runtime startup routine.

There's more…

The Apache Karaf community maintains a web console that may also be branded to reflect your organization's branding. See https://karaf.apache.org/index/subprojects/webconsole.html for more details.

Deploying applications as a feature

Managing the assembly and deployment of repository locations, bundles, configuration, and other artifacts quickly becomes a major headache for system builders. To combat this, the Karaf community has developed the concept of features. The following figure describes the concept of features:

[Figure: the concept of features: bundles, configuration, and other artifacts grouped for installation into Karaf]

A feature descriptor is an XML-based file that describes a collection of artifacts to be installed together into the Karaf container. In this recipe, we'll learn how to make a feature, add it to Karaf, and then use it to install bundles.

Getting ready

The ingredients of this recipe include the Apache Karaf distribution kit, access to a JDK, Maven, and a source code editor. The sample code for this recipe is available at https://github.com/jgoodyear/ApacheKarafCookbook/tree/master/chapter1/chapter1-recipe4.

How to do it…

  1. The first step is generating a Maven-based project. For this recipe, we need to create a Maven POM file, set its packaging to bundle, and include a build section.
  2. The next step is editing the POM file's build directives. We add a resources directive to our POM file's build section and maven-resources-plugin and build-helper-maven-plugin to its plugin list. Consider the following code:
    <resources>
        <resource>
            <directory>src/main/resources</directory>
            <filtering>true</filtering>
        </resource>
    </resources>

    In the preceding code, the resources directive indicates the location of the features file we'll create for processing. Now, consider the following code:

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-resources-plugin</artifactId>
        <executions>
            <execution>
                <id>filter</id>
                <phase>generate-resources</phase>
                <goals>
                    <goal>resources</goal>
                </goals>
            </execution>
        </executions>
    </plugin>

    In the preceding code, maven-resources-plugin is configured to process our resources. Now, consider the following code:

    <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>build-helper-maven-plugin</artifactId>
        <executions>
            <execution>
                <id>attach-artifacts</id>
                <phase>package</phase>
                <goals>
                    <goal>attach-artifact</goal>
                </goals>
                <configuration>
                    <artifacts>
                        <artifact>
                            <file>${project.build.directory}/classes/${features.file}</file>
                            <type>xml</type>
                            <classifier>features</classifier>
                        </artifact>
                    </artifacts>
                </configuration>
            </execution>
        </executions>
    </plugin>

    Finally, build-helper-maven-plugin completes the build of our features.xml file as described in the preceding code.

  3. The third step is creating a features.xml resource. In the src/main/resources folder, add a file named features.xml with the details of your bundles, as shown in the following code:
    <?xml version="1.0" encoding="UTF-8"?>
    
    <features>
    
      <feature name='moduleA' version='${project.version}'>
        <bundle>
          mvn:com.packt/moduleA/${project.version}
        </bundle>
      </feature>
    
      <feature name='moduleB' version='${project.version}'>
        <bundle>
          mvn:com.packt/moduleB/${project.version}
        </bundle>
      </feature>
    
      <feature name='recipe4-all-modules' version='${project.version}'>
        <feature version='${project.version}'>moduleA</feature>
        <feature version='${project.version}'>moduleB</feature>
      </feature>
    
    </features>

    We provide each feature with a name that Karaf will use as a reference to install each element specified in the named feature's configuration. Features may reference other features, thus providing fine-grained control over installation. In the preceding features file, we can see three named features: moduleA, moduleB, and recipe4-all-modules. The recipe4-all-modules feature includes the content of the other two features.

    Tip

    If you need to include a JAR file that is not offered as a bundle, try using the wrap protocol to automatically provide the file with the OSGi manifest headers. For more information, see https://ops4j1.jira.com/wiki/display/paxurl/Wrap+Protocol.

  4. The final step is building and deploying our feature. Using our sample recipe project, we will build our feature by executing mvn install. This performs all of the feature file variable substitutions and installs a processed copy in your local m2 repository.

    To make our feature available to Karaf, we'll add the feature file's Maven coordinates as follows:

    karaf@root()>feature:repo-add mvn:com.packt/features-file/1.0.0-SNAPSHOT/xml/features
    

    Now, we can use Karaf's feature commands to install moduleA and moduleB, as shown in the following command-line snippet:

    karaf@root()>feature:install recipe4-all-modules
    Apache Karaf starting moduleA bundle
    Apache Karaf starting moduleB bundle
    karaf@root()>
    

    Using feature:install in this fashion helps to promote repeatable deployments and avoid missing component installations that are not caught by the OSGi environment (if no bundle dependencies are missing, then as far as the container is concerned, all is well). We can verify whether our feature is installed by invoking the following command:

    karaf@root()>feature:list | grep -i "recipe"
    

    We can then observe whether our feature is listed or not.

How it works…

When Karaf processes a feature descriptor as a bundle, hot deployment, or via a system start-up property, the same processing and assembly functions occur, as shown in the following figure:

[Figure: feature descriptor processing and assembly in Karaf]

The feature descriptor invocation is transformed into a list of artifacts to be installed in the OSGi container. At the lowest level, individual elements in a feature have a handler to obtain the described artifact (such as a bundle, JAR file, or configuration file). Our sample feature uses Maven coordinates to obtain bundles, and the Maven handler will be called to process these resources. If an HTTP URL was specified, then the HTTP handler is called. Each artifact in the specified feature will be installed until the entire list is processed.
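
To illustrate the point, a single feature is free to mix these handlers; the artifacts below are hypothetical and only show the different URL forms involved:

<feature name='mixed-example' version='1.0.0'>
  <!-- resolved from a Maven repository by the Maven handler -->
  <bundle>mvn:com.example/moduleC/1.0.0</bundle>
  <!-- wrap adds OSGi manifest headers to a plain JAR on the fly -->
  <bundle>wrap:mvn:com.example/plain-jar/1.0.0</bundle>
  <!-- fetched directly by the HTTP handler -->
  <bundle>http://repo.example.org/bundles/moduleD-1.0.0.jar</bundle>
</feature>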

There's more…

The How to do it… section of this recipe outlines a general methodology to produce a feature file for your projects and automate the filtering of resource versions. From Apache Karaf's point of view, it just processes a well-formatted features file, so you can also handwrite the file and deploy it directly into Karaf.

Feature files have additional attributes that can be used to set bundle start levels, flag bundles as being dependencies, and set configuration properties. For more information, visit http://karaf.apache.org/manual/latest/users-guide/provisioning.html.
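
For example (the artifacts here are hypothetical and not from the sample project), the fragment below assigns one bundle an explicit start level, marks another as a dependency, and creates a configuration at installation time; consult the provisioning documentation linked above for the authoritative schema:

<feature name='moduleA-configured' version='1.0.0'>
  <config name="com.packt.moduleA">
    greeting=Hello from a feature
  </config>
  <bundle start-level="80">mvn:com.packt/moduleA/1.0.0</bundle>
  <bundle dependency="true">mvn:com.packt/moduleA-api/1.0.0</bundle>
</feature>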

An advanced use case of Karaf feature files is to build a KAraf aRchive (KAR). A KAR file is the processed form of a feature file, collecting all the required artifacts into a single deployable form. This archive is ideal for deployment when your Karaf instance will not have access to remote repositories, as all required resources are packaged in the KAR file.
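
For example, assuming your build produces a KAR with the hypothetical coordinates shown below, it can be added from the console with Karaf 3's kar commands, or simply by dropping the .kar file into the deploy folder:

karaf@root()> kar:install mvn:com.packt/recipe4-kar/1.0.0-SNAPSHOT/kar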

See also

  • We'll be using the features concept of Apache Karaf in several chapters of this book to simplify the installation of Apache Camel, ActiveMQ, and CXF, among other projects.

Using JMX to monitor and administer Apache Karaf

By default, Apache Karaf can be administered via Java Management Extensions (JMX). However, systems builders often need to tweak the default configurations to get their deployment integrated into their network. In this recipe, we'll show you how to make these changes.

Getting ready

The ingredients of this recipe include the Apache Karaf distribution kit, access to a JDK, and a source code editor. The sample configuration for this recipe is available at https://github.com/jgoodyear/ApacheKarafCookbook/tree/master/chapter1/chapter1-recipe5.

Tip

Administrators should take care when exposing JMX access to their Karaf instance. Enabling SSL and using strong passwords is recommended.

How to do it…

  1. The first step is editing the management configuration. Apache Karaf ships with a default management configuration. To make our modifications, we update the etc/org.apache.karaf.management.cfg file. Consider the following code:
    #
    # Port number for RMI registry connection
    #
    rmiRegistryPort = 11099
    
    #
    # Port number for RMI server connection
    #
    rmiServerPort = 44445

    The default ports, 1099 and 44444, are usually fine for general deployment. Change these ports only if you are experiencing port conflicts on your deployment. Now, consider the following snippet:

    #
    # Role name used for JMX access authorization
    # If not set, this defaults to the ${karaf.admin.role} configured in etc/system.properties
    #
    jmxRole=admin

    Towards the bottom of the configuration file, there will be a commented-out entry for jmxRole; enable this by removing the hash character.

  2. The next step is updating the user's file. We must now update the etc/users.properties file with the following code:
    karaf = karaf,_g_:admingroup
    _g_\:admingroup = group,admin,manager,viewer,jmxRole
    

    The users.properties file is used to configure users, groups, and roles in Karaf. We append jmxRole to the admin group. The syntax for this file follows the Username = password, groups format.

  3. The last step is testing our configuration. After making the previous configuration changes, we'll need to restart our Karaf instance. Now, we can test our JMX setup. Have a look at the following screenshot:
    [Screenshot: JConsole connected to the Karaf JMX service]

    After restarting Karaf, use a JMX-based admin tool of your choice (the previous screenshot shows JConsole) to connect to the container. Due to image size restrictions, the full URL couldn't be displayed. The full URL is service:jmx:rmi://127.0.0.1:44445/jndi/rmi://127.0.0.1:11099/karaf-root. The syntax of the URL is service:jmx:rmi://host:${rmiServerPort}/jndi/rmi://host:${rmiRegistryPort}/${karaf-instance-name}.
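
Programmatic access uses the same URL. The following standalone class is not from the book; it is a minimal sketch using the plain JMX remote API, with the default karaf/karaf credentials and the ports configured previously:

import java.util.HashMap;
import java.util.Map;

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class KarafJmxClient {

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi://127.0.0.1:44445/jndi/rmi://127.0.0.1:11099/karaf-root");

        // Credentials are supplied as a String[] of {user, password}
        Map<String, Object> env = new HashMap<String, Object>();
        env.put(JMXConnector.CREDENTIALS, new String[] { "karaf", "karaf" });

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // A quick sanity check: count the MBeans and list the domains Karaf exposes
            System.out.println("MBean count: " + connection.getMBeanCount());
            for (String domain : connection.getDomains()) {
                System.out.println("Domain: " + domain);
            }
        } finally {
            connector.close();
        }
    }
}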

Reconfiguring SSH access to Apache Karaf

Using Apache Karaf via its local console provides the user with superb command and control capabilities over their OSGi container. Apache Karaf's remote console extends this experience to remote users and, as such, presents systems builders with an opportunity to further harden their systems. In this recipe, we'll change Karaf's default remote connection parameters.

Getting ready

The ingredients of this recipe include the Apache Karaf distribution kit, access to a JDK, and a source code editor. The sample configuration for this recipe is available at https://github.com/jgoodyear/ApacheKarafCookbook/tree/master/chapter1/chapter1-recipe6.

How to do it…

  1. The first step is editing the shell configuration. Apache Karaf ships with a default shell configuration file. It's a good practice to edit entries in the etc/org.apache.karaf.shell.cfg file to point to the non-default ports as a security precaution. Consider the following code:
    #
    # Via sshPort and sshHost you define the address you can login into Karaf.
    #
    sshPort = 8102
    sshHost = 192.168.1.110

    In the preceding sample configuration, we set the port for SSH access to 8102 and set sshHost to an IP address of the host machine (the default value, 0.0.0.0, means the SSHD service is bound to all network interfaces). Restricting access to particular network interfaces can help reduce unwanted access.

  2. The next step is restarting Karaf. After editing the configuration, we must restart Karaf. Once restarted, you'll be able to connect to Karaf using an SSH client command as follows:
    ssh -p 8102 karaf@127.0.0.1
    

    Upon connection, you'll be prompted for your password.

There's more…

Changing the default remote access configuration is a good start. However, system builders should also consider changing the default karaf/karaf username and password combination found in the users.properties file.

You might also decide to generate a server SSH key file to simplify remote access. Information regarding this configuration can be found at http://karaf.apache.org/manual/latest/users-guide/remote.html.

Installing Apache Karaf as a service

When we install Apache Karaf, we'll want it to operate as a system service on our host platform (whether Windows or Linux). In this recipe, we'll set up Karaf to start when your system boots up.

Getting ready

The ingredients of this recipe include the Apache Karaf distribution kit, access to a JDK, and a source code editor. The sample wrapper configuration for this recipe is available at https://github.com/jgoodyear/ApacheKarafCookbook/tree/master/chapter1/chapter1-recipe7.

How to do it…

  1. The first step is installing the service wrapper feature. Apache Karaf utilizes a service wrapper feature to handle gathering and deploying of the required resources for your host operating environment. We begin its installation by invoking the following command:
    karaf@root()>feature:install service-wrapper
    

    The service wrapper feature URL is included in Karaf by default; so, no additional step is required to make it available.

  2. The next step is installing the wrapper service. Now, we must instruct the wrapper to configure and install the appropriate service scripts and resources for us. Consider the following command:
    karaf@root()>wrapper:install -s AUTO_START -n Karaf3 -D "Apache Karaf Cookbook"
    

    The preceding wrapper:install command invocation includes three flags: -s for the start type, -n for the service name, and -D for the service description. The start type can be one of two options: AUTO_START, to automatically start the service on boot, and DEMAND_START, to start only when manually invoked. The service name is used as an identifier in the host's service registry. The description provides system administrators with a brief description of your Karaf installation. After executing the install command, the Karaf console will display the libraries, scripts, and configuration files that the wrapper generates. You'll now need to exit Karaf to continue the service installation.

  3. The final step is integrating it into the host operating system. This step will require administrator-level permissions to execute the generated Karaf service wrapper installation scripts.

    The following command installs the service natively into Windows:

    C:> C:\Path\To\apache-karaf-3.0.0\bin\Karaf3-service.bat install
    

    The following net commands allow an administrator to start or stop the Karaf service:

    C:> net start "Karaf3"
    C:> net stop "Karaf3"
    

    Linux integration will vary based on distribution. The following commands will work on Debian- or Ubuntu-based systems:

    jgoodyear@ubuntu1204:~$ ln -s /Path/To/apache-karaf-3.0.0/bin/Karaf3-service /etc/init.d
    jgoodyear@ubuntu1204:~$ update-rc.d Karaf3-service defaults
    jgoodyear@ubuntu1204:~$ /etc/init.d/Karaf3-service start
    jgoodyear@ubuntu1204:~$ /etc/init.d/Karaf3-service stop
    

    The first command creates a symbolic link from the service script in Karaf's bin folder to the init.d directory and then updates the startup scripts to include the Karaf service to automatically start during boot. The remaining two commands can be used to manually start or stop the Karaf service.

How it works…

The service wrapper feature integrates Karaf into the host operating system's service mechanism. This means that on a Windows- or Linux-based system, the wrapper will detect faults, crashes, process freezes, out-of-memory conditions, and similar events, and automatically attempt to restart Karaf.

See also

  • The Setting up Apache Karaf for high availability recipe

Setting up Apache Karaf for high availability

To help provide higher service availability, Karaf provides the option to set up a secondary instance of Apache Karaf to fail over to in case of an operating environment error. In this recipe, we'll configure a Master/Slave failover deployment and briefly discuss how you can expand the recipe to multiple hosts.

Getting ready

The ingredients of this recipe include the Apache Karaf distribution kit, access to a JDK, and a source code editor. The sample configuration for this recipe is available at https://github.com/jgoodyear/ApacheKarafCookbook/tree/master/chapter1/chapter1-recipe8.

How to do it…

  1. The first step is editing the system properties file. To enable a Master/Slave failover, we edit the etc/system.properties file of two or more Karaf instances to include the following Karaf locking configuration:
    ##
    ## Sample lock configuration
    ##
    karaf.lock=true
    karaf.lock.class=org.apache.karaf.main.lock.SimpleFileLock
    # specify path to lock directory
    karaf.lock.dir=[PathToLockFileDirectory]
    karaf.lock.delay=10

    The previous configuration sample contains the essential entries for a file-based locking mechanism, that is, two or more Karaf instances attempt to gain exclusive ownership of a file over a shared filesystem.

  2. The next step is providing locking resources. If using a shared locking file approach is suitable to your deployment, then all you must do at this time is mount the filesystem on each machine that'll host Karaf instances in the Master/Slave deployment.

    Tip

    If you plan to use the shared file lock, consider using an NFSv4 filesystem, as it implements flock correctly.

    Each Karaf instance will include the same lock directory location on a shared filesystem common to each Karaf installation. If a shared filesystem is not practical between systems, then a JDBC locking mechanism can be used. This is described in the following code:

    karaf.lock=true
    karaf.lock.class=org.apache.karaf.main.lock.DefaultJDBCLock
    karaf.lock.delay=10
    karaf.lock.jdbc.url=jdbc:derby://dbserver:1527/sample
    karaf.lock.jdbc.driver=org.apache.derby.jdbc.ClientDriver
    karaf.lock.jdbc.user=user
    karaf.lock.jdbc.password=password
    karaf.lock.jdbc.table=KARAF_LOCK
    karaf.lock.jdbc.clustername=karaf
    karaf.lock.jdbc.timeout=30

    The JDBC configuration is similar to the SimpleFileLock configuration. However, it is expanded to contain the JDBC url, driver, timeout, user, and password options. Two additional JDBC options are included to allow multiple Master/Slave Karaf deployments to use a single database: the JDBC table and clustername options. The JDBC table property sets the database table to use for the lock, and the JDBC clustername property specifies which pairing group a Karaf instance belongs to (for example, hosts A and B belong to cluster prod, and hosts C and D belong to cluster dev).

    When using the JDBC locking mechanism, you'll have to provide the relevant JDBC driver JAR file to Karaf's lib/ext folder. For specific database configurations, consult Karaf's user manual (http://karaf.apache.org/manual/latest/index.html).

  3. The final step is verifying the lock behavior. Once you have configured each Karaf instance to be a participant of the Master/Slave deployment and ensured that any locking resources have been made available (mounted filesystems or database drivers/connectivity), you must now validate that it is all working as desired. The general test to perform is to start one instance of Karaf, allow it to gain the lock (you'll see this recorded in the logfile), and then start all additional instances. Only the first instance should be fully booted; the others should be trying to gain the lock. Stopping this first instance should result in another instance becoming the Master. This verification step is vital. Most Master/Slave deployment failures occur due to misconfigurations or shared resource permissions.

How it works…

Each instance of Apache Karaf contains a copy of the locking configuration in its etc/system.properties file. This is described in the following figure:

[Figure: Master and Slave Karaf instances sharing a lock via their system.properties configuration]

In the case of a SimpleFileLock configuration, Karaf attempts to take an exclusive lock on a file to determine which Karaf instance will operate as the live (Master) container. The other instances in the set keep retrying to acquire the lock, pausing for the configured karaf.lock.delay between attempts. This can be easily simulated on a single host machine with two Karaf installations both configured to use the same locking file. If the lock file is located on a shared NFSv4 filesystem, then multiple servers may be able to use this configuration. However, a JDBC-based lock is most often used in multihost architectures.

There's more…

Karaf failover describes an active/passive approach to high availability. A related active/active architecture is available via Apache Karaf Cellar.

Description

This book is intended for developers who have some familiarity with Apache Karaf and who want a quick reference of practical, proven tips on how to perform common tasks, such as configuring Pax modules deployed in Apache Karaf or extending HttpService with Apache Karaf. You should have a working knowledge of Apache Karaf, as the book provides a deeper understanding of its capabilities.

Product Details

Publication date: Aug 25, 2014
Length: 260 pages
Edition: 1st
Language: English
ISBN-13: 9781783985081
Vendor: Apache


Table of Contents

11 Chapters
1. Apache Karaf for System Builders
2. Making Smart Routers with Apache Camel
3. Deploying a Message Broker with Apache ActiveMQ
4. Hosting a Web Server with Pax Web
5. Hosting Web Services with Apache CXF
6. Distributing a Clustered Container with Apache Karaf Cellar
7. Providing a Persistence Layer with Apache Aries and OpenJPA
8. Providing a Big Data Integration Layer with Apache Cassandra
9. Providing a Big Data Integration Layer with Apache Hadoop
10. Testing Apache Karaf with Pax Exam
Index
