
How-To Tutorials - Application Development

357 Articles

Prerequisites

Packt
25 Mar 2015
6 min read
In this article by Deepak Vohra, author of the book Advanced Java® EE Development with WildFly®, you will see how to create a Java EE project and its prerequisites. (For more resources related to this topic, see here.) The objective of the EJB 3.x specification is to simplify EJB development by improving the EJB architecture. This simplification is achieved by providing metadata annotations to replace XML configuration, by providing default configuration values, by making entity and session beans POJOs (Plain Old Java Objects), and by making component and home interfaces redundant. EJB 2.x entity beans are replaced with EJB 3.x entities. EJB 3.0 also introduced the Java Persistence API (JPA) for object-relational mapping of Java objects. WildFly 8.x supports the EJB 3.2 and JPA 2.1 specifications from Java EE 7. The sample application is based on Java EE 6 and EJB 3.1. The configuration of EJB 3.x with Java EE 7 is also discussed, and the sample application can be used or modified to run on a Java EE 7 project. We have used the Hibernate 4.3 persistence provider. Unlike some of the other persistence providers, the Hibernate persistence provider supports automatic generation of relational database tables, including join tables. In this article, we will create an EJB 3.x project. This article has the following topics: Setting up the environment Creating a WildFly runtime Creating a Java EE project Setting up the environment We need to download and install the following software: WildFly 8.1.0.Final: Download wildfly-8.1.0.Final.zip from http://wildfly.org/downloads/. MySQL 5.6 Database-Community Edition: Download this edition from http://dev.mysql.com/downloads/mysql/. When installing MySQL, also install Connector/J. Eclipse IDE for Java EE Developers: Download Eclipse Luna from https://www.eclipse.org/downloads/packages/release/Luna/SR1. JBoss Tools (Luna) 4.2.0.Final: Install this as a plug-in to Eclipse from the Eclipse Marketplace (http://tools.jboss.org/downloads/installation.html). The latest version from the Eclipse Marketplace is likely to be different from 4.2.0. Apache Maven: Download version 3.0.5 or higher from http://maven.apache.org/download.cgi. Java 7: Download Java 7 from http://www.oracle.com/technetwork/java/javase/downloads/index.html?ssSourceSiteId=ocomcn. Set the environment variables: JAVA_HOME, JBOSS_HOME, MAVEN_HOME, and MYSQL_HOME. Add %JAVA_HOME%/bin, %MAVEN_HOME%/bin, %JBOSS_HOME%/bin, and %MYSQL_HOME%/bin to the PATH environment variable. The environment settings used are C:\wildfly-8.1.0.Final for JBOSS_HOME, C:\Program Files\MySQL\MySQL Server 5.6.21 for MYSQL_HOME, C:\maven\apache-maven-3.0.5 for MAVEN_HOME, and C:\Program Files\Java\jdk1.7.0_51 for JAVA_HOME. Run the add-user.bat script from the %JBOSS_HOME%/bin directory to create a user for the WildFly administrator console. When prompted with What type of user do you wish to add?, select a) Management User. The other option is b) Application User. A Management User is used to log in to the Administration Console, and an Application User is used to access applications. Subsequently, specify the Username and Password for the new user. When prompted with the question Is this user going to be used for one AS process to connect to another AS..?, enter the answer as no. When installing and configuring the MySQL database, specify a password for the root user (the password mysql is used in the sample application). 
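Before moving on, the following minimal sketch illustrates the EJB 3.x simplification discussed above; the class and property names are illustrative and are not taken from the book's sample application:

```java
// File: Catalog.java - a JPA entity is a plain POJO; annotations replace XML mapping files
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Catalog {

    @Id
    @GeneratedValue
    private Long id;

    private String journal;

    public Long getId() { return id; }
    public String getJournal() { return journal; }
    public void setJournal(String journal) { this.journal = journal; }
}

// File: CatalogSessionBean.java - a stateless session bean needs only the @Stateless
// annotation; no home or component interfaces and no ejb-jar.xml entries are required
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class CatalogSessionBean {

    @PersistenceContext
    private EntityManager entityManager;

    public void persistCatalog(Catalog catalog) {
        entityManager.persist(catalog);
    }
}
```

With a persistence provider such as Hibernate configured, persisting the entity through the session bean is enough for the corresponding table to be generated automatically.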
Creating a WildFly runtime As the application is run on WildFly 8.1, we need to create a runtime environment for WildFly 8.1 in Eclipse. Select Window | Preferences in Eclipse. In Preferences, select Server | Runtime Environment. Click on the Add button to add a new runtime environment, as shown in the following screenshot: In New Server Runtime Environment, select JBoss Community | WildFly 8.x Runtime. Click on Next: In WildFly Application Server 8.x, which appears below New Server Runtime Environment, specify a Name for the new runtime or choose the default name, which is WildFly 8.x Runtime. Select the Home Directory for the WildFly 8.x server using the Browse button. The Home Directory is the directory where WildFly 8.1 is installed. The default path is C:\wildfly-8.1.0.Final. Select the Runtime JRE as JavaSE-1.7. If the JDK location is not added to the runtime list, first add it from the JRE preferences screen in Eclipse. In Configuration base directory, select standalone as the default setting. In Configuration file, select standalone.xml as the default setting. Click on Finish: A new server runtime environment for WildFly 8.x Runtime gets created, as shown in the following screenshot. Click on OK: Creating a Server Runtime Environment for WildFly 8.x is a prerequisite for creating a Java EE project in Eclipse. In the next topic, we will create a new Java EE project for an EJB 3.x application. Creating a Java EE project JBoss Tools provides project templates for different types of JBoss projects. In this topic, we will create a Java EE project for an EJB 3.x application. Select File | New | Other in Eclipse IDE. In the New wizard, select the JBoss Central | Java EE EAR Project wizard. Click on the Next button: The Java EE EAR Project wizard gets started. By default, a Java EE 6 project is created. A Java EE EAR Project is a Maven project. The New Project Example window lists the requirements and runs a test for the requirements. The JBoss AS runtime is required and some plugins (including the JBoss Maven Tools plugin) are required for a Java EE project. Select Target Runtime as WildFly 8.x Runtime, which was created in the preceding topic. Then, check the Create a blank project checkbox. Click on the Next button: Specify Project name as jboss-ejb3, Package as org.jboss.ejb3, and tick the Use default Workspace location box. Click on the Next button: Specify Group Id as org.jboss.ejb3, Artifact Id as jboss-ejb3, Version as 1.0.0, and Package as org.jboss.ejb3.model. Click on Finish: A Java EE project gets created, as shown in the following Project Explorer window. The jboss-ejb3 project consists of three subprojects: jboss-ejb3-ear, jboss-ejb3-ejb, and jboss-ejb3-web. Each subproject consists of a pom.xml file for Maven. The jboss-ejb3-ejb subproject consists of a META-INF/persistence.xml file within the src/main/resources source folder for the JPA database persistence configuration. Summary In this article, we learned how to create a Java EE project and its prerequisites. Resources for Article: Further resources on this subject: Common performance issues [article] Running our first web application [article] Various subsystem configurations [article]


Your first FuelPHP application in 7 easy steps

Packt
04 Mar 2015
12 min read
In this article by Sébastien Drouyer, author of the book FuelPHP Application Development Blueprints, we will see that FuelPHP is an open source PHP framework using the latest technologies. Its large community regularly creates and improves packages and extensions, and the framework’s core is constantly evolving. As a result, FuelPHP is a very complete solution for developing web applications. (For more resources related to this topic, see here.) In this article, we will also see how easy it is for developers to create their first website using the PHP oil utility. The target application Suppose you are a zoo manager and you want to keep track of the monkeys you are looking after. For each monkey, you want to save: Its name If it is still in the zoo Its height A description input where you can enter custom information You want a very simple interface with five major features. You want to be able to: Create new monkeys Edit existing ones List all monkeys View a detailed file for each monkey Delete monkeys These five major features, which are very common in computer applications, are part of the Create, Read, Update and Delete (CRUD) basic operations. Installing the environment The FuelPHP framework needs the following three components: Webserver: The most common solution is Apache PHP interpreter: Version 5.3 or above Database: We will use the most popular one, MySQL The installation and configuration procedures of these components will depend on the operating system you use. We will provide here some directions to get you started in case you are not used to installing your development environment. Please note though that these are very generic guidelines. Feel free to search the web for more information, as there are countless resources on the topic. Windows A complete and very popular solution is to install WAMP. This will install Apache, MySQL, and PHP; in other words, everything you need to get started. It can be accessed at the following URL: http://www.wampserver.com/en/ Mac PHP and Apache are generally installed on the latest version of the OS, so you just have to install MySQL. To do that, you are recommended to read the official documentation: http://dev.mysql.com/doc/refman/5.1/en/macosx-installation.html A very convenient solution for those of you with limited system administration skills is to install MAMP, the equivalent of WAMP but for the Mac operating system. It can be downloaded through the following URL: http://www.mamp.info/en/downloads/ Ubuntu As this is the most popular Linux distribution, we will limit our instructions to Ubuntu. You can install a complete environment by executing the following command lines: # Apache, MySQL, PHP sudo apt-get install lamp-server^   # PHPMyAdmin allows you to handle the administration of MySQL DB sudo apt-get install phpmyadmin   # Curl is useful for doing web requests sudo apt-get install curl libcurl3 libcurl3-dev php5-curl   # Enabling the rewrite module as it is needed by FuelPHP sudo a2enmod rewrite   # Restarting Apache to apply the new configuration sudo service apache2 restart Getting the FuelPHP framework There are four common ways to download FuelPHP: Downloading and unzipping the compressed package which can be found on the FuelPHP website. Executing the FuelPHP quick command-line installer. Downloading and installing FuelPHP using Composer. Cloning the FuelPHP GitHub repository. It is a little bit more complicated but allows you to select exactly the version (or even the commit) you want to install. The quick installer and Composer options are sketched below for reference. 
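For reference, the quick installer and Composer options look roughly like the following; the exact URL and package name may have changed since this was written, so check the official installation instructions if these commands fail:

```bash
# Option 2: the quick command-line installer (installs the oil helper, then creates a project)
curl get.fuelphp.com/oil | sh
oil create myproject

# Option 3: installing through Composer
composer create-project fuel/fuel myproject
```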
The easiest way is to download and unzip the compressed package located at: http://fuelphp.com/files/download/28 You can get more information about this step in Chapter 1 of FuelPHP Application Development Blueprints, which can be accessed freely. It is also well-documented on the website installation instructions page: http://fuelphp.com/docs/installation/instructions.html Installation directory and Apache configuration Now that you know how to install FuelPHP in a given directory, we will explain where to install it and how to configure Apache. The simplest way The simplest way is to install FuelPHP in the root folder of your web server (generally the /var/www directory on Linux systems). If you install fuel in the DIR directory inside the root folder (/var/www/DIR), you will be able to access your project on the following URL: http://localhost/DIR/public/ However, be warned that fuel has not been implemented to support this, and if you publish your project this way on a production server, it will introduce security issues you will have to handle. In such cases, you are recommended to use the second way, explained in the section below, although if you plan to use a shared host to publish your project, you might not have the choice. Complete and up-to-date documentation about this issue can be found on the Fuel installation instructions page: http://fuelphp.com/docs/installation/instructions.html By setting up a virtual host Another way is to create a virtual host to access your application. You will need a *nix environment and a little more Apache and system administration skill, but the benefit is that it is more secure and you will be able to choose your working directory. You will need to change two files: Your Apache virtual host file(s), in order to link a virtual host to your application Your system hosts file, in order to redirect the desired URL to your virtual host In both cases, the file locations will depend heavily on your operating system and the server environment you are using, so you will have to figure out their locations yourself (if you are using a common configuration, you won’t have any problem finding instructions on the web). In the following example, we will set up your system to call your application when requesting the my.app URL on your local environment. Let’s first edit the virtual host file(s); add the following code at the end: <VirtualHost *:80>    ServerName my.app    DocumentRoot YOUR_APP_PATH/public    SetEnv FUEL_ENV "development"    <Directory YOUR_APP_PATH/public>        DirectoryIndex index.php        AllowOverride All        Order allow,deny        Allow from all    </Directory> </VirtualHost> Then, open your system hosts file and add the following line at the end: 127.0.0.1 my.app Depending on your environment, you might need to restart Apache after that. You can now access your website on the following URL: http://my.app/ Checking that everything works Whether you used a virtual host or not, the following should now appear when accessing your website: Congratulations! You have successfully installed the FuelPHP framework. The welcome page shows some recommended directions to continue your project. Database configuration As we will store our monkeys in a MySQL database, it is time to configure FuelPHP to use our local database. 
If you open fuel/app/config/db.php, all you will see is an empty array, but this configuration file is merged with fuel/app/config/ENV/db.php, ENV being the current Fuel environment, which in this case is development. You should therefore open fuel/app/config/development/db.php: <?php //... return array( 'default' => array(    'connection' => array(      'dsn'       => 'mysql:host=localhost;dbname=fuel_dev',      'username'   => 'root',      'password'   => 'root',    ), ), ); You should adapt this array to your local configuration, particularly the database name (currently set to fuel_dev), the username, and the password. You must create your project’s database manually. Scaffolding Now that the database configuration is set, we will be able to generate a scaffold. For that, we will use the generate feature of the oil utility. Open the command-line utility and go to your website root directory. To generate a scaffold for a new model, you will need to enter the following line: php oil generate scaffold/crud MODEL ATTR_1:TYPE_1 ATTR_2:TYPE_2 ... Where: MODEL is the model name ATTR_1, ATTR_2… are the model’s attribute names TYPE_1, TYPE_2… are the types of those attributes In our case, it should be: php oil generate scaffold/crud monkey name:string still_here:bool height:float description:text Here we are telling oil to generate a scaffold for the monkey model with the following attributes: name: The name of the monkey. Its type is string and the associated MySQL column type will be VARCHAR(255). still_here: Whether or not the monkey is still in the facility. Its type is boolean and the associated MySQL column type will be TINYINT(1). height: Height of the monkey. Its type is float and its associated MySQL column type will be FLOAT. description: Description of the monkey. Its type is text and its associated MySQL column type will be TEXT. You can do much more using the oil generate feature, such as generating models, controllers, migrations, tasks, packages, and so on. We will see some of these in the FuelPHP Application Development Blueprints book and you are also recommended to take a look at the official documentation: http://fuelphp.com/docs/packages/oil/generate.html When you press Enter, you will see the following lines appear: Creating migration: APPPATH/migrations/001_create_monkeys.php Creating model: APPPATH/classes/model/monkey.php Creating controller: APPPATH/classes/controller/monkey.php Creating view: APPPATH/views/monkey/index.php Creating view: APPPATH/views/monkey/view.php Creating view: APPPATH/views/monkey/create.php Creating view: APPPATH/views/monkey/edit.php Creating view: APPPATH/views/monkey/_form.php Creating view: APPPATH/views/template.php Where APPPATH is your website directory/fuel/app. Oil has generated nine files for us: A migration file, containing all the necessary information to create the model’s associated table The model A controller Five view files and a template file More explanation about these files and how they interact with each other can be accessed in Chapter 1 of the FuelPHP Application Development Blueprints book, freely available. For those of you who are not yet familiar with MVC and HMVC frameworks, don’t worry; the chapter contains an introduction to the most important concepts. Migrating One of the generated files was APPPATH/migrations/001_create_monkeys.php. It is a migration file and contains the required information to create our monkey table. Notice that the name is structured as VER_NAME, where VER is the version number and NAME is the name of the migration. 
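For orientation, the generated 001_create_monkeys.php file looks roughly like the following sketch; the exact column definitions produced by your version of oil may differ slightly:

```php
<?php

namespace Fuel\Migrations;

class Create_monkeys
{
    public function up()
    {
        // Creates the monkeys table with the scaffolded columns plus id and the timestamps
        \DBUtil::create_table('monkeys', array(
            'id' => array('constraint' => 11, 'type' => 'int', 'auto_increment' => true, 'unsigned' => true),
            'name' => array('constraint' => 255, 'type' => 'varchar'),
            'still_here' => array('type' => 'bool'),
            'height' => array('type' => 'float'),
            'description' => array('type' => 'text'),
            'created_at' => array('constraint' => 11, 'type' => 'int', 'null' => true),
            'updated_at' => array('constraint' => 11, 'type' => 'int', 'null' => true),
        ), array('id'));
    }

    public function down()
    {
        // Reverses the migration by dropping the table
        \DBUtil::drop_table('monkeys');
    }
}
```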
If you execute the following command line: php oil refine migrate All migration files that have not yet been executed will be executed, from the oldest version to the latest (001, 002, 003, and so on). Once all files are executed, oil will display the latest version number. Once executed, if you take a look at your database, you will observe that not one, but two tables have been created: monkeys: As expected, a table has been created to handle your monkeys. Notice that the table name is the plural version of the word we typed for generating the scaffold; such a transformation was internally done using the Inflector::pluralize method. The table will contain the specified columns (name, still_here, height, and description), the id column, but also created_at and updated_at. These columns respectively store the time an object was created and updated, and are added by default each time you generate your models. It is possible, though, to skip generating them by using the --no-timestamp argument. migration: This other table was automatically created. It keeps track of the migrations that were executed. If you look into its content, you will see that it already contains one row; this is the migration you just executed. You can notice that the row does not only indicate the migration itself, but also a type and a name. This is because migration files can be placed in many places, such as modules or packages. The oil utility allows you to do much more. Don’t hesitate to take a look at the official documentation: http://fuelphp.com/docs/packages/oil/intro.html Or, again, read FuelPHP Application Development Blueprints’ Chapter 1, which is available for free. Using your application Now that we generated the code and migrated the database, our application is ready to be used. Request the following URL: If you created a virtual host: http://my.app/monkey Otherwise (don’t forget to replace DIR): http://localhost/DIR/public/monkey As you can see, this webpage is intended to display the list of all monkeys, but since none have been added yet, the list is empty. Let’s add a new monkey by clicking on the Add new Monkey button. The following webpage should appear: You can enter your monkey’s information here. The form is certainly not perfect - for instance, the Still here field uses a standard text input although a checkbox would be more appropriate - but it is a great start. All we will have to do is refine the code a little bit (a small example of such a refinement is sketched below). Once you have added several monkeys, you can again take a look at the listing page: Again, this is a great start, though we might want to refine it. Each item on the list has three associated actions: View, Edit, and Delete. Let’s first click on View: Again a great start, though we will refine this webpage. You can return to the listing by clicking on Back or edit the monkey file by clicking on Edit. Whether accessed from the listing page or the view page, Edit will display the same form as when creating a new monkey, except that the form will be prefilled, of course. Finally, if you click on Delete, a confirmation box will appear to prevent any misclicking. Want to learn more? Don’t hesitate to check out FuelPHP Application Development Blueprints’ Chapter 1, which is freely available on Packt Publishing’s website. In this chapter, you will find a more thorough introduction to FuelPHP and we will show how to improve this first application. 
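As a small example of the kind of refinement mentioned above, the still_here text input in the generated fuel/app/views/monkey/_form.php view could be swapped for a checkbox along these lines; the generated markup varies between FuelPHP versions, so treat this as an illustrative sketch rather than a drop-in replacement:

```php
<?php
// In fuel/app/views/monkey/_form.php, replace the generated text input for still_here with:
echo Form::label('Still here', 'still_here');
echo Form::checkbox(
    'still_here',
    1,
    Input::post('still_here', isset($monkey) ? (bool) $monkey->still_here : false)
);
```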
You are also recommended to explore the FuelPHP website, which contains a lot of useful information and excellent documentation: http://www.fuelphp.com There is much more to discover about this wonderful framework. Summary In this article, we learned about the installation of the FuelPHP environment and the choice of installation directory. Resources for Article: Further resources on this subject: PHP Magic Features [Article] FuelPHP [Article] Building a To-do List with Ajax [Article]


Entity Framework DB First – Inheritance Relationships between Entities

Packt
02 Mar 2015
19 min read
This article is written by Rahul Rajat Singh, the author of Mastering Entity Framework. So far, we have seen how we can use various approaches of Entity Framework, how we can manage database table relationships, and how to perform model validations using Entity Framework. In this article, we will see how we can implement the inheritance relationship between the entities. We will see how we can change the generated conceptual model to implement the inheritance relationship, and how it will benefit us in using the entities in an object-oriented manner and the database tables in a relational manner. (For more resources related to this topic, see here.) Domain modeling using inheritance in Entity Framework One of the major challenges while using a relational database is to manage the domain logic in an object-oriented manner when the database itself is implemented in a relational manner. ORMs like Entity Framework provide the strongly typed objects, that is, entities for the relational tables. However, it might be possible that the entities generated for the database tables are logically related to each other, and they can be better modeled using inheritance relationships rather than having independent entities. Entity Framework lets us create inheritance relationships between the entities, so that we can work with the entities in an object-oriented manner, and internally, the data will get persisted in the respective tables. Entity Framework provides us three ways of object relational domain modeling using the inheritance relationship: The Table per Type (TPT) inheritance The Table per Class Hierarchy (TPH) inheritance The Table per Concrete Class (TPC) inheritance Let's now take a look at the scenarios where the generated entities are not logically related, and how we can use these inheritance relationships to create a better domain model by implementing inheritance relationships between entities using the Entity Framework Database First approach. The Table per Type inheritance The Table per Type (TPT) inheritance is useful when our database has tables that are related to each other using a one-to-one relationship. This relation is being maintained in the database by a shared primary key. To illustrate this, let's take a look at an example scenario. Let's assume a scenario where an organization maintains a database of all the people who work in a department. Some of them are employees getting a fixed salary, and some of them are vendors who are hired at an hourly rate. This is modeled in the database by having all the common data in a table called Person, and there are separate tables for the data that is specific to the employees and vendors. Let's visualize this scenario by looking at the database schema: The database schema showing the TPT inheritance database schema The ID column for the People table can be an auto-increment identity column, but it should not be an auto-increment identity column for the Employee and Vendors tables. In the preceding figure, the People table contains all the data common to both type of worker. The Employee table contains the data specific to the employees and the Vendors table contains the data specific to the vendors. These tables have a shared primary key and thus, there is a one-to-one relationship between the tables. To implement the TPT inheritance, we need to perform the following steps in our application: Generate the default Entity Data Model. Delete the default relationships. Add the inheritance relationship between the entities. 
Use the entities via the DBContext object. Generating the default Entity Data Model Let's add a new ADO.NET Entity Data Model to our application, and generate the conceptual Entity Model for these tables. The default generated Entity Model will look like this: The generated Entity Data Model where the TPT inheritance could be used Looking at the preceding conceptual model, we can see that Entity Framework is able to figure out the one-to-one relationship between the tables and creates the entities with the same relationship. However, if we take a look at the generated entities from our application domain perspective, it is fairly evident that these entities can be better managed if they have an inheritance relationship between them. So, let's see how we can modify the generated conceptual model to implement the inheritance relationship, and Entity Framework will take care of updating the data in the respective tables. Deleting default relationships The first thing we need to do to create the inheritance relationship is to delete the existing relationship from the Entity Model. This can be done by right-clicking on the relationship and selecting Delete from Model as follows: Deleting an existing relationship from the Entity Model Adding inheritance relationships between entities Once the relationships are deleted, we can add the new inheritance relationships in our Entity Model as follows: Adding inheritance relationships in the Entity Model When we add an inheritance relationship, the Visual Entity Designer will ask for the base class and derived class as follows: Selecting the base class and derived class participating in the inheritance relationship Once the inheritance relationship is created, the Entity Model will look like this: Inheritance relationship in the Entity Model After creating the inheritance relationship, we will get a compile error that the ID property is defined in all the entities. To resolve this problem, we need to delete the ID column from the derived classes. This will still keep the ID column that maps the derived classes as it is. So, from the application perspective, the ID column is defined in the base class but from the mapping perspective, it is mapped in both the base class and derived class, so that the data will get inserted into tables mapped in both the base and derived entities. With this inheritance relationship in place, the entities can be used in an object-oriented manner, and Entity Framework will take care of updating the respective tables for each entity. Using the entities via the DBContext object As we know, DbContext is the primary class that should be used to perform various operations on entities. Let's try to use our SampleDbContext class to create an Employee and a Vendor using this Entity Model and see how the data gets updated in the database: using (SampleDbEntities db = new SampleDbEntities()) { Employee employee = new Employee(); employee.FirstName = "Employee 1"; employee.LastName = "Employee 1"; employee.PhoneNumber = "1234567"; employee.Salary = 50000; employee.EmailID = "employee1@test.com"; Vendor vendor = new Vendor(); vendor.FirstName = "vendor 1"; vendor.LastName = "vendor 1"; vendor.PhoneNumber = "1234567"; vendor.HourlyRate = 100; vendor.EmailID = "vendor1@test.com"; db.Workers.Add(employee); db.Workers.Add(vendor); db.SaveChanges(); } In the preceding code, what we are doing is creating an object of the Employee and Vendor type, and then adding them to People using the DbContext object. 
What Entity Framework will do internally is that it will look at the mappings of the base entity and the derived entities, and then push the respective data into the respective tables. So, if we take a look at the data inserted in the database, it will look like the following: A database snapshot of the inserted data It is clearly visible from the preceding database snapshot that Entity Framework looks at our inheritance relationship and pushes the data into the Person, Employee, and Vendor tables. The Table per Class Hierarchy inheritance The Table per Class Hierarchy (TPH) inheritance is modeled by having a single database table for all the entity classes in the inheritance hierarchy. The TPH inheritance is useful in cases where all the information about the related entities is stored in a single table. For example, using the earlier scenario, let's try to model the database in such a way that it will only contain a single table called Workers to store the Employee and Vendor details. Let's try to visualize this table: A database schema showing the TPH inheritance database schema Now what will happen in this case is that the common fields will be populated whenever we create a type of worker. Salary will only contain a value if the worker is of type Employee. The HourlyRate field will be null in this case. If the worker is of type Vendor, then the HourlyRate field will have a value, and Salary will be null. This pattern is not very elegant from a database perspective. Since we are trying to keep unrelated data in a single table, our table is not normalized. There will always be some redundant columns that contain null values if we use this approach. We should try not to use this pattern unless it is absolutely needed. To implement the TPH inheritance relationship using the preceding table structure, we need to perform the following activities: Generate the default Entity Data Model. Add concrete classes to the Entity Data Model. Map the concrete class properties to their respective tables and columns. Make the base class entity abstract. Use the entities via the DBContext object. Let's discuss this in detail. Generating the default Entity Data Model Let's now generate the Entity Data Model for this table. The Entity Framework will create a single entity, Worker, for this table: The generated model for the table created for implementing the TPH inheritance Adding concrete classes to the Entity Data Model From the application perspective, it would be a much better solution if we have classes such as Employee and Vendor, which are derived from the Worker entity. The Worker class will contain all the common properties, and Employee and Vendor will contain their respective properties. So, let's add new entities for Employee and Vendor. While creating the entity, we can specify the base class entity as Worker, which is as follows: Adding a new entity in the Entity Data Model using a base class type Similarly, we will add the Vendor entity to our Entity Data Model, and specify the Worker entity as its base class entity. Once the entities are generated, our conceptual model will look like this: The Entity Data Model after adding the derived entities Next, we have to remove the Salary and HourlyRate properties from the Worker entity, and put them in the Employee and the Vendor entities respectively. 
So, once the properties are put into the respective entities, our final Entity Data Model will look like this: The Entity Data Model after moving the respective properties into the derived entities Mapping the concrete class properties to the respective tables and columns After this, we have to define the column mappings in the derived classes to let the derived classes know which table and columns should be used to store the data. We also need to specify the mapping condition. The Employee entity should save the Salary property's value in the Salary column of the Workers table when the Salary property is Not Null and HourlyRate is Null: Table mapping and conditions to map the Employee entity to the respective tables Once this mapping is done, we have to mark the Salary property as Nullable=false in the entity property window. This will let Entity Framework know that if someone is creating an object of the Employee type, then the Salary field is mandatory: Setting the Employee entity properties as Nullable Similarly, the Vendor entity should save the HourlyRate property's value in the HourlyRate column of the Workers table when Salary is Null and HourlyRate is Not Null: Table mapping and conditions to map the Vendor entity to the respective tables And similar to the Employee class, we also have to mark the HourlyRate property as Nullable=false in the Entity Property window. This will help Entity Framework know that if someone is creating an object of the Vendor type, then the HourlyRate field is mandatory: Setting the Vendor entity properties to Nullable Making the base class entity abstract There is one last change needed to be able to use these models: we need to mark the base class as abstract, so that Entity Framework is able to resolve objects of the Employee and Vendor types to the Workers table. Making the base class Workers as abstract This will also be a better model from the application perspective because the Worker entity itself has no meaning from the application domain perspective. Using the entities via the DBContext object Now we have our Entity Data Model configured to use the TPH inheritance. Let's try to create an Employee object and a Vendor object, and add them to the database using the TPH inheritance hierarchy: using (SampleDbEntities db = new SampleDbEntities()) { Employee employee = new Employee(); employee.FirstName = "Employee 1"; employee.LastName = "Employee 1"; employee.PhoneNumber = "1234567"; employee.Salary = 50000; employee.EmailID = "employee1@test.com"; Vendor vendor = new Vendor(); vendor.FirstName = "vendor 1"; vendor.LastName = "vendor 1"; vendor.PhoneNumber = "1234567"; vendor.HourlyRate = 100; vendor.EmailID = "vendor1@test.com"; db.Workers.Add(employee); db.Workers.Add(vendor); db.SaveChanges(); } In the preceding code, we created objects of the Employee and Vendor types, and then added them to the Workers collection using the DbContext object. Entity Framework will look at the mappings of the base entity and the derived entities, will check the mapping conditions and the actual values of the properties, and then push the data to the respective tables. So, let's take a look at the data inserted in the Workers table: A database snapshot after inserting the data using the Employee and Vendor entities So, we can see that for our Employee and Vendor models, the actual data is being kept in the same table using Entity Framework's TPH inheritance. 
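Reading the data back polymorphically is not shown in the original walkthrough, but the following sketch (using the SampleDbEntities context and Workers set from the example above) relies on the standard LINQ OfType<T>() operator, which Entity Framework translates into the discriminating conditions defined in the mapping; it requires using directives for System, System.Collections.Generic, and System.Linq:

```csharp
using (SampleDbEntities db = new SampleDbEntities())
{
    // All workers, regardless of their concrete type
    var allWorkers = db.Workers.ToList();

    // Only the employees; EF applies the mapping condition
    // (Salary is not null and HourlyRate is null) in the generated SQL
    var employees = db.Workers.OfType<Employee>().ToList();

    foreach (var employee in employees)
    {
        Console.WriteLine("{0} {1}: {2}",
            employee.FirstName, employee.LastName, employee.Salary);
    }
}
```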
The Table per Concrete Class inheritance The Table per Concrete Class (TPC) inheritance can be used when the database contains separate tables for all the logical entities, and these tables have some common fields. In our existing example, if there are two separate tables of Employee and Vendor, then the database schema would look like the following: The database schema showing the TPC inheritance database schema One of the major problems in such a database design is the duplication of columns in the tables, which is not recommended from the database normalization perspective. To implement the TPC inheritance, we need to perform the following tasks: Generate the default Entity Data Model. Create the abstract class. Modify the CDSL to cater to the change. Specify the mapping to implement the TPT inheritance. Use the entities via the DBContext object. Generating the default Entity Data Model Let's now take a look at the generated entities for this database schema: The default generated entities for the TPC inheritance database schema Entity Framework has given us separate entities for these two tables. From our application domain perspective, we can use these entities in a better way if all the common properties are moved to a common abstract class. The Employee and Vendor entities will contain the properties specific to them and inherit from this abstract class to use all the common properties. Creating the abstract class Let's add a new entity called Worker to our conceptual model and move the common properties into this entity: Adding a base class for all the common properties Next, we have to mark this class as abstract from the properties window: Marking the base class as abstract class Modifying the CDSL to cater to the change Next, we have to specify the mapping for these tables. Unfortunately, the Visual Entity Designer has no support for this type of mapping, so we need to perform this mapping ourselves in the EDMX XML file. The conceptual schema definition language (CSDL) part of the EDMX file is all set since we have already moved the common properties into the abstract class. So, now we should be able to use these properties with an abstract class handle. The problem will come in the storage schema definition language (SSDL) and mapping specification language (MSL). The first thing that we need to do is to change the SSDL to let Entity Framework know that the abstract class Worker is capable of saving the data in two tables. This can be done by setting the EntitySet name in the EntityContainer tags as follows: <EntityContainer Name="todoDbModelStoreContainer">   <EntitySet Name="Employee" EntityType="Self.Employee" Schema="dbo" store_Type="Tables" />   <EntitySet Name="Vendor" EntityType="Self.Vendor" Schema="dbo" store_Type="Tables" /></EntityContainer> Specifying the mapping to implement the TPT inheritance Next, we need to change the MSL to properly map the properties to the respective tables based on the actual type of object. For this, we have to specify EntitySetMapping. 
The EntitySetMapping should look like the following: <EntityContainerMapping StorageEntityContainer="todoDbModelStoreContainer" CdmEntityContainer="SampleDbEntities">    <EntitySetMapping Name="Workers">   <EntityTypeMapping TypeName="IsTypeOf(SampleDbModel.Vendor)">       <MappingFragment StoreEntitySet="Vendor">       <ScalarProperty Name="HourlyRate" ColumnName="HourlyRate" />       <ScalarProperty Name="EMailId" ColumnName="EMailId" />       <ScalarProperty Name="PhoneNumber" ColumnName="PhoneNumber" />       <ScalarProperty Name="LastName" ColumnName="LastName" />       <ScalarProperty Name="FirstName" ColumnName="FirstName" />       <ScalarProperty Name="ID" ColumnName="ID" />       </MappingFragment>   </EntityTypeMapping>      <EntityTypeMapping TypeName="IsTypeOf(SampleDbModel.Employee)">       <MappingFragment StoreEntitySet="Employee">       <ScalarProperty Name="ID" ColumnName="ID" />       <ScalarProperty Name="Salary" ColumnName="Salary" />       <ScalarProperty Name="EMailId" ColumnName="EMailId" />       <ScalarProperty Name="PhoneNumber" ColumnName="PhoneNumber" />       <ScalarProperty Name="LastName" ColumnName="LastName" />       <ScalarProperty Name="FirstName" ColumnName="FirstName" />       </MappingFragment>   </EntityTypeMapping>   </EntitySetMapping></EntityContainerMapping> In the preceding code, we specified that if the actual type of object is Vendor, then the properties should map to the columns in the Vendor table, and if the actual type of entity is Employee, the properties should map to the Employee table, as shown in the following screenshot: After EDMX modifications, the mapping are visible in Visual Entity Designer If we now open the EDMX file again, we can see the properties being mapped to the respective tables in the respective entities. Doing this mapping from Visual Entity Designer is not possible, unfortunately. Using the entities via the DBContext object Let's use these "entities from our code: using (SampleDbEntities db = new SampleDbEntities()) { Employee employee = new Employee(); employee.FirstName = "Employee 1"; employee.LastName = "Employee 1"; employee.PhoneNumber = "1234567"; employee.Salary = 50000; employee.EMailId = "employee1@test.com"; Vendor vendor = new Vendor(); vendor.FirstName = "vendor 1"; vendor.LastName = "vendor 1"; vendor.PhoneNumber = "1234567"; vendor.HourlyRate = 100; vendor.EMailId = "vendor1@test.com"; db.Workers.Add(employee); db.Workers.Add(vendor); db.SaveChanges(); } In the preceding code, we created objects of the Employee and Vendor types and saved them using the Workers entity set, which is actually an abstract class. If we take a look at the inserted database, we will see the following: Database snapshot of the inserted data using TPC inheritance From the preceding screenshot, it is clear that the data is being pushed to the respective tables. The insert operation we saw in the previous code is successful but there will be an exception in the application. This exception is because when Entity Framework tries to access the values that are in the abstract class, it finds two records with same ID, and since the ID column is specified as a primary key, two records with the same value is a problem in this scenario. This exception clearly shows that the store/database generated identity columns will not work with the TPC inheritance. 
If we want to use the TPC inheritance, then we either need to use GUID-based IDs, or pass the ID from the application, or perhaps use some database mechanism that can maintain the uniqueness of auto-generated columns across multiple tables. Choosing the inheritance strategy Now that we know about all the inheritance strategies supported by Entity Framework, let's try to analyze these approaches. The most important thing is that there is no single strategy that will work for all scenarios, especially if we have a legacy database. The best option would be to analyze the application requirements and then look at the existing table structure to see which approach is best suited. The Table per Class Hierarchy inheritance tends to give us denormalized tables with redundant columns. We should only use it when the number of properties in the derived classes is very small, so that the number of redundant columns is also small and the denormalized structure will not create problems over a period of time. Contrary to TPH, if we have a lot of properties specific to derived classes and only a few common properties, we can use the Table per Concrete Class inheritance. However, in this approach, we will end up with some properties being repeated in all the tables. Also, this approach imposes some limitations, such as not being able to use auto-increment identity columns in the database. If we have a lot of common properties that could go into a base class and a lot of properties specific to derived classes, then perhaps Table per Type is the best option to go with. In any case, complex inheritance relationships that become unmanageable in the long run should be avoided. One alternative could be to have separate domain models to implement the application logic in an object-oriented manner, and then use mappers to map these domain models to Entity Framework's generated entity models. Summary In this article, we looked at the various types of inheritance relationships in Entity Framework. We saw how these inheritance relationships can be implemented, along with some guidelines on which should be used in which scenario. Resources for Article: Further resources on this subject: Working with Zend Framework 2.0 [article] Hosting the service in IIS using the TCP protocol [article] Applying LINQ to Entities to a WCF Service [article]


Financial Management with Microsoft Dynamics AX 2012 R3

Packt
11 Feb 2015
4 min read
In this article by Mohamed Aamer, author of Microsoft Dynamics AX 2012 R3 Financial Management, we will see that financial management is the core foundation of Enterprise Resource Planning (ERP); it is vital to understand the financial characteristics of Microsoft Dynamics AX 2012 R3 from a practical perspective, grounded in how the application actually works. It is important to cover the following topics: Understanding financial management aspects in Microsoft Dynamics AX 2012 R3 Covering the business rationale, basic setups, and configuration Real-life business requirements and their solutions Implementation tips and tricks, in addition to the key consideration points during analysis, design, deployment, and operation (For more resources related to this topic, see here.) The Microsoft Dynamics AX 2012 R3 Financial Management book covers the main characteristics of the general ledger and its integration with the other subledgers (Accounts payable, Accounts receivable, fixed assets, cash and bank management, and inventory). It also covers the core features of main accounts, the categorization of accounts and their controls, along with the opening balance process and concept, and the closing procedure. It then discusses subledger functionality (Accounts payable, Accounts receivable, fixed assets, cash and bank management, cash flow management, and inventory) in more detail by walking through the master data, controls, and transactions and their effects on the general ledger. It also explores financial reporting, which is one of the basic cornerstones of an implementation. The main principles for reporting are reliability of business information and the ability to get the right information at the right time for the right person. Reports that analyze ERP data in an expressive way represent the output of the ERP implementation; they are considered the cream of the implementation: the next level of value that solution stakeholders should target. This ultimate outcome results from building all reports based on a single point of information. Planning reporting needs for ERP The Microsoft Dynamics AX implementation team should challenge management's reporting needs during the analysis phase of the implementation, with a particular focus on exploring the data required to build reports. These data requirements should then be cross-checked with the real data entry activities that end users will execute to ensure that business users will get vital information from the reports. The reporting levels are as follows: Operational management Middle management Top management Understanding information technology value chain The model of a management information system is most applicable to the Information Technology (IT) manager or Chief Information Officer (CIO) of a business. Business owners likely don't care as much about the specifics as long as these aspects of the solution deliver the required results. The following are the basic layers of the value chain: Database management Business processes Business Intelligence Frontend Understanding Microsoft Dynamics AX information source blocks This section explores the information sources that eventually determine the strategic value of Business Intelligence (BI) reporting and analytics. These are divided into three blocks. 
Detailed transactions block Business Intelligence block Executive decisions block Discovering Microsoft Dynamics AX reporting The reporting options offered by Microsoft Dynamics AX are: Inquiry forms SQL Reporting Services (SSRS) reports The original transaction The Original document function Audit trail Reporting currency Companies report their transactions in a specific currency that is known as accounting currency or local currency. It is normal to post transactions in a different currency, and this amount of money is translated to the home currency using the current exchange rate. Autoreports The Autoreport wizard is a user-friendly tool. The end user can easily generate a report starting from every form in Microsoft Dynamics AX. This wizard helps the user to create a report based on the information in the form and save the report. Summary In this article, we covered financial reporting from planning to consideration of reporting levels. We covered important points that affect reporting quality by considering the reporting value chain, which consists of infrastructure, database management, business processes, business intelligence, and the frontend. We also discussed the information source blocks, which consist of the detailed transactions block, business intelligence block, and executive decisions block. Then we learned about the reporting possibilities in Microsoft Dynamics AX such as inquiry forms and SSRS reports, and autoreport capabilities in Microsoft Dynamics AX 2012 R3. Resources for Article: Further resources on this subject: Customization in Microsoft Dynamics CRM [Article] Getting Started with Microsoft Dynamics CRM 2013 Marketing [Article] SOP Module Setup in Microsoft Dynamics GP [Article]


How to Build a Koa Web Application - Part 2

Christoffer Hallas
08 Feb 2015
5 min read
In Part 1 of this series, we got everything in place for our Koa app using Jade and Mongel. In this post, we will cover Jade templates and how to use listing and viewing pages. Please note that this series requires that you use Node.js version 0.11+. Jade templates Rendering HTML is always an important part of any web application. Luckily, when using Node.js there are many great choices, and for this article we’ve chosen Jade. Keep in mind though that we will only touch on a tiny fraction of the Jade functionality. Let’s create our first Jade template. Create a file called create.jade and put in the following: create.jade doctype html html(lang='en') head title Create Page body h1 Create Page form(method='POST', action='/create') input(type='text', name='title', placeholder='Title') input(type='text', name='contents', placeholder='Contents') input(type='submit') For all the Jade questions you have that we won’t answer in this series, I refer you to the excellent official Jade website at http://jade-lang.com . If you add the following statement app.listen(3000); to the end of index.js, then you should be able to run the program from your terminal using the following command and by visiting http://localhost:3000 in your browser. $ node --harmony index.js The --harmony flag just tells the node program that we need support for generators in our program: Listing and viewing pages Now that we can create a page in our MongoDB database, it is time to actually list and view these pages. For this purpose we need to add another middleware to our index.js file after the first middleware: app.use(function* () { if (this.method != 'GET') { this.status = 405; this.body = 'Method Not Allowed'; return } … }); As you can probably already tell, this new middleware is very similar to the first one we added that handled the creation of pages. At first we make sure that the method of the request is GET, and if not, we respond appropriately and return the following: var params = this.path.split('/').slice(1); var id = params[0]; if (id.length == 0) { var pages = yield Page.find(); var html = jade.renderFile('list.jade', { pages: pages }); this.body = html; return } Then, we proceed to inspect the path attribute of the Koa context, looking for an ID that represents the page in the database. Remember how we redirected using the ID in the previous middleware. We inspect the path by splitting it into an array of strings separated by the forward slashes of a URL; this way the path /1234 becomes an array of ‘’ and ‘1234.’ Because the path starts with a forward slash, the first item in the array will always be the empty string, so we just discard that by default. Then we check the length of the ID parameter, and if it’s zero we know that there is in fact no ID in the path, and we should just look for the pages in the database and render our list.jade template with those pages made available to the template as the variable pages. Making data available in templates is also known as providing locals to the template. 
list.jade doctype html html(lang="en") head title Your Web Application body h1 Your Web Application ul - each page in pages li a(href='/#{page._id}')= page.title But if the length of id was not zero, we assume that it's an ID and we try to load that specific page from the database instead of all the pages, and we proceed to render our view.jade template with that page as its locals: var page = yield Page.findById(id); var html = jade.renderFile('view.jade', page); this.body = html; view.jade doctype html html(lang="en") head title= title body h1= title p= contents That's it! You should now be able to run the app as previously described and create a page, list all of your pages, and view them. If you want to, you can continue and build a simple CMS system. Koa is very simple to use and doesn't enforce a lot of functionality on you, allowing you to pick and choose between libraries that you need and want to use. There are many possibilities and that is one of Koa's biggest strengths. Find even more Node.js content on our Node.js page. Featuring our latest titles and most popular tutorials, it's the perfect place to learn more about Node.js. About the author Christoffer Hallas is a software developer and entrepreneur from Copenhagen, Denmark. He is a computer polyglot and contributes to and maintains a number of open source projects. When not contemplating his next grand idea (which remains an idea), he enjoys music, sports, and design of all kinds. Christoffer can be found on GitHub as hallas and at Twitter as @hamderhallas.


Contexts and Dependency Injection in NetBeans

Packt
06 Feb 2015
18 min read
In this article by David R. Heffelfinger, the author of Java EE 7 Development with NetBeans 8, we will introduce Contexts and Dependency Injection (CDI) and other aspects of it. CDI can be used to simplify integrating the different layers of a Java EE application. For example, CDI allows us to use a session bean as a managed bean, so that we can take advantage of the EJB features, such as transactions, directly in our managed beans. In this article, we will cover the following topics: Introduction to CDI Qualifiers Stereotypes Interceptor binding types Custom scopes (For more resources related to this topic, see here.) Introduction to CDI JavaServer Faces (JSF) web applications employing CDI are very similar to JSF applications without CDI; the main difference is that instead of using JSF managed beans for our model and controllers, we use CDI named beans. What makes CDI applications easier to develop and maintain are the excellent dependency injection capabilities of the CDI API. Just as with other JSF applications, CDI applications use facelets as their view technology. The following example illustrates a typical markup for a JSF page using CDI: <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html      >    <h:head>        <title>Create New Customer</title>    </h:head>    <h:body>        <h:form>            <h3>Create New Customer</h3>            <h:panelGrid columns="3">                <h:outputLabel for="firstName" value="First Name"/>                <h:inputText id="firstName" value="#{customer.firstName}"/>                <h:message for="firstName"/>                  <h:outputLabel for="middleName" value="Middle Name"/>                <h:inputText id="middleName"                  value="#{customer.middleName}"/>                <h:message for="middleName"/>                  <h:outputLabel for="lastName" value="Last Name"/>                <h:inputText id="lastName" value="#{customer.lastName}"/>                <h:message for="lastName"/>                  <h:outputLabel for="email" value="Email Address"/>                <h:inputText id="email" value="#{customer.email}"/>                <h:message for="email"/>                <h:panelGroup/>                <h:commandButton value="Submit"                  action="#{customerController.navigateToConfirmation}"/>            </h:panelGrid>        </h:form>    </h:body> </html> As we can see, the preceding markup doesn't look any different from the markup used for a JSF application that does not use CDI. The page renders as follows (shown after entering some data): In our page markup, we have JSF components that use Unified Expression Language expressions to bind themselves to CDI named bean properties and methods. 
Let's take a look at the customer bean first: package com.ensode.cdiintro.model;   import java.io.Serializable; import javax.enterprise.context.RequestScoped; import javax.inject.Named;   @Named @RequestScoped public class Customer implements Serializable {      private String firstName;    private String middleName;    private String lastName;    private String email;      public Customer() {    }      public String getFirstName() {        return firstName;    }      public void setFirstName(String firstName) {        this.firstName = firstName;    }      public String getMiddleName() {        return middleName;    }      public void setMiddleName(String middleName) {        this.middleName = middleName;    }      public String getLastName() {        return lastName;    }      public void setLastName(String lastName) {        this.lastName = lastName;    }      public String getEmail() {        return email;    }      public void setEmail(String email) {        this.email = email;    } } The @Named annotation marks this class as a CDI named bean. By default, the bean's name will be the class name with its first character switched to lowercase (in our example, the name of the bean is "customer", since the class name is Customer). We can override this behavior if we wish by passing the desired name to the value attribute of the @Named annotation, as follows: @Named(value="customerBean") A CDI named bean's methods and properties are accessible via facelets, just like regular JSF managed beans. Just like JSF managed beans, CDI named beans can have one of several scopes as listed in the following table. The preceding named bean has a scope of request, as denoted by the @RequestScoped annotation. Scope Annotation Description Request @RequestScoped Request scoped beans are shared through the duration of a single request. A single request could refer to an HTTP request, an invocation to a method in an EJB, a web service invocation, or sending a JMS message to a message-driven bean. Session @SessionScoped Session scoped beans are shared across all requests in an HTTP session. Each user of an application gets their own instance of a session scoped bean. Application @ApplicationScoped Application scoped beans live through the whole application lifetime. Beans in this scope are shared across user sessions. Conversation @ConversationScoped The conversation scope can span multiple requests, and is typically shorter than the session scope. Dependent @Dependent Dependent scoped beans are not shared. Any time a dependent scoped bean is injected, a new instance is created. As we can see, CDI has equivalent scopes to all JSF scopes. Additionally, CDI adds two additional scopes. The first CDI-specific scope is the conversation scope, which allows us to have a scope that spans across multiple requests, but is shorter than the session scope. The second CDI-specific scope is the dependent scope, which is a pseudo scope. A CDI bean in the dependent scope is a dependent object of another object; beans in this scope are instantiated when the object they belong to is instantiated and they are destroyed when the object they belong to is destroyed. Our application has two CDI named beans. We already discussed the customer bean. 
The other CDI named bean in our application is the controller bean: package com.ensode.cdiintro.controller;   import com.ensode.cdiintro.model.Customer; import javax.enterprise.context.RequestScoped; import javax.inject.Inject; import javax.inject.Named;   @Named @RequestScoped public class CustomerController {      @Inject    private Customer customer;      public Customer getCustomer() {        return customer;    }      public void setCustomer(Customer customer) {        this.customer = customer;    }      public String navigateToConfirmation() {        //In a real application we would        //Save customer data to the database here.          return "confirmation";    } } In the preceding class, an instance of the Customer class is injected at runtime; this is accomplished via the @Inject annotation. This annotation allows us to easily use dependency injection in CDI applications. Since the Customer class is annotated with the @RequestScoped annotation, a new instance of Customer will be injected for every request. The navigateToConfirmation() method in the preceding class is invoked when the user clicks on the Submit button on the page. The navigateToConfirmation() method works just like an equivalent method in a JSF managed bean would, that is, it returns a string and the application navigates to an appropriate page based on the value of that string. Like with JSF, by default, the target page's name with an .xhtml extension is the return value of this method. For example, if no exceptions are thrown in the navigateToConfirmation() method, the user is directed to a page named confirmation.xhtml: <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html      >    <h:head>        <title>Success</title>    </h:head>    <h:body>        New Customer created successfully.        <h:panelGrid columns="2" border="1" cellspacing="0">            <h:outputLabel for="firstName" value="First Name"/>            <h:outputText id="firstName" value="#{customer.firstName}"/>              <h:outputLabel for="middleName" value="Middle Name"/>            <h:outputText id="middleName"              value="#{customer.middleName}"/>              <h:outputLabel for="lastName" value="Last Name"/>            <h:outputText id="lastName" value="#{customer.lastName}"/>              <h:outputLabel for="email" value="Email Address"/>            <h:outputText id="email" value="#{customer.email}"/>          </h:panelGrid>    </h:body> </html> Again, there is nothing special we need to do to access the named beans properties from the preceding markup. It works just as if the bean was a JSF managed bean. The preceding page renders as follows: As we can see, CDI applications work just like JSF applications. However, CDI applications have several advantages over JSF, for example (as we mentioned previously) CDI beans have additional scopes not found in JSF. Additionally, using CDI allows us to decouple our Java code from the JSF API. Also, as we mentioned previously, CDI allows us to use session beans as named beans. Qualifiers In some instances, the type of bean we wish to inject into our code may be an interface or a Java superclass, but we may be interested in injecting a subclass or a class implementing the interface. For cases like this, CDI provides qualifiers we can use to indicate the specific type we wish to inject into our code. 
A CDI qualifier is an annotation that must be decorated with the @Qualifier annotation. This annotation can then be used to decorate the specific subclass or interface. In this section, we will develop a Premium qualifier for our customer bean; premium customers could get perks that are not available to regular customers, for example, discounts. Creating a CDI qualifier with NetBeans is very easy; all we need to do is go to File | New File, select the Contexts and Dependency Injection category, and select the Qualifier Type file type. In the next step in the wizard, we need to enter a name and a package for our qualifier. After these two simple steps, NetBeans generates the code for our qualifier: package com.ensode.cdiintro.qualifier;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.FIELD; import static java.lang.annotation.ElementType.PARAMETER; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.inject.Qualifier;   @Qualifier @Retention(RUNTIME) @Target({METHOD, FIELD, PARAMETER, TYPE}) public @interface Premium { } Qualifiers are standard Java annotations. Typically, they have retention of runtime and can target methods, fields, parameters, or types. The only difference between a qualifier and a standard annotation is that qualifiers are decorated with the @Qualifier annotation. Once we have our qualifier in place, we need to use it to decorate the specific subclass or interface implementation, as shown in the following code: package com.ensode.cdiintro.model;   import com.ensode.cdiintro.qualifier.Premium; import javax.enterprise.context.RequestScoped; import javax.inject.Named;   @Named @RequestScoped @Premium public class PremiumCustomer extends Customer {      private Integer discountCode;      public Integer getDiscountCode() {        return discountCode;    }      public void setDiscountCode(Integer discountCode) {        this.discountCode = discountCode;    } } Once we have decorated the specific instance we need to qualify, we can use our qualifiers in the client code to specify the exact type of dependency we need: package com.ensode.cdiintro.controller;   import com.ensode.cdiintro.model.Customer; import com.ensode.cdiintro.model.PremiumCustomer; import com.ensode.cdiintro.qualifier.Premium;   import java.util.logging.Level; import java.util.logging.Logger; import javax.enterprise.context.RequestScoped; import javax.inject.Inject; import javax.inject.Named;   @Named @RequestScoped public class PremiumCustomerController {      private static final Logger logger = Logger.getLogger(            PremiumCustomerController.class.getName());    @Inject    @Premium    private Customer customer;      public String saveCustomer() {          PremiumCustomer premiumCustomer =          (PremiumCustomer) customer;          logger.log(Level.INFO, "Saving the following information n"                + "{0} {1}, discount code = {2}",                new Object[]{premiumCustomer.getFirstName(),                    premiumCustomer.getLastName(),                    premiumCustomer.getDiscountCode()});          //If this was a real application, we would have code to save        //customer data to the database here.          
return "premium_customer_confirmation";    } } Since we used our @Premium qualifier to decorate the customer field, an instance of the PremiumCustomer class is injected into that field. This is because this class is also decorated with the @Premium qualifier. As far as our JSF pages go, we simply access our named bean as usual using its name, as shown in the following code; <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html      >    <h:head>        <title>Create New Premium Customer</title>    </h:head>    <h:body>        <h:form>            <h3>Create New Premium Customer</h3>            <h:panelGrid columns="3">                <h:outputLabel for="firstName" value="First Name"/>                 <h:inputText id="firstName"                    value="#{premiumCustomer.firstName}"/>                <h:message for="firstName"/>                  <h:outputLabel for="middleName" value="Middle Name"/>                <h:inputText id="middleName"                     value="#{premiumCustomer.middleName}"/>                <h:message for="middleName"/>                  <h:outputLabel for="lastName" value="Last Name"/>                <h:inputText id="lastName"                    value="#{premiumCustomer.lastName}"/>                <h:message for="lastName"/>                  <h:outputLabel for="email" value="Email Address"/>                <h:inputText id="email"                    value="#{premiumCustomer.email}"/>                <h:message for="email"/>                  <h:outputLabel for="discountCode" value="Discount Code"/>                <h:inputText id="discountCode"                    value="#{premiumCustomer.discountCode}"/>                <h:message for="discountCode"/>                   <h:panelGroup/>                <h:commandButton value="Submit"                      action="#{premiumCustomerController.saveCustomer}"/>            </h:panelGrid>        </h:form>    </h:body> </html> In this example, we are using the default name for our bean, which is the class name with the first letter switched to lowercase. Now, we are ready to test our application: After submitting the page, we can see the confirmation page. Stereotypes A CDI stereotype allows us to create new annotations that bundle up several CDI annotations. For example, if we need to create several CDI named beans with a scope of session, we would have to use two annotations in each of these beans, namely @Named and @SessionScoped. Instead of having to add two annotations to each of our beans, we could create a stereotype and annotate our beans with it. To create a CDI stereotype in NetBeans, we simply need to create a new file by selecting the Contexts and Dependency Injection category and the Stereotype file type. Then, we need to enter a name and package for our new stereotype. 
At this point, NetBeans generates the following code: package com.ensode.cdiintro.stereotype;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.FIELD; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.enterprise.inject.Stereotype;   @Stereotype @Retention(RUNTIME) @Target({METHOD, FIELD, TYPE}) public @interface NamedSessionScoped { } Now, we simply need to add the CDI annotations that we want the classes annotated with our stereotype to use. In our case, we want them to be named beans and have a scope of session; therefore, we add the @Named and @SessionScoped annotations as shown in the following code: package com.ensode.cdiintro.stereotype;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.FIELD; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.enterprise.context.SessionScoped; import javax.enterprise.inject.Stereotype; import javax.inject.Named;   @Named @SessionScoped @Stereotype @Retention(RUNTIME) @Target({METHOD, FIELD, TYPE}) public @interface NamedSessionScoped { } Now we can use our stereotype in our own code: package com.ensode.cdiintro.beans;   import com.ensode.cdiintro.stereotype.NamedSessionScoped; import java.io.Serializable;   @NamedSessionScoped public class StereotypeClient implements Serializable {      private String property1;    private String property2;      public String getProperty1() {        return property1;    }      public void setProperty1(String property1) {        this.property1 = property1;    }      public String getProperty2() {        return property2;    }      public void setProperty2(String property2) {        this.property2 = property2;    } } We annotated the StereotypeClient class with our NamedSessionScoped stereotype, which is equivalent to using the @Named and @SessionScoped annotations. Interceptor binding types One of the advantages of EJBs is that they allow us to easily perform aspect-oriented programming (AOP) via interceptors. CDI allows us to write interceptor binding types; this lets us bind interceptors to beans and the beans do not have to depend on the interceptor directly. Interceptor binding types are annotations that are themselves annotated with @InterceptorBinding. Creating an interceptor binding type in NetBeans involves creating a new file, selecting the Contexts and Dependency Injection category, and selecting the Interceptor Binding Type file type. Then, we need to enter a class name and select or enter a package for our new interceptor binding type. At this point, NetBeans generates the code for our interceptor binding type: package com.ensode.cdiintro.interceptorbinding;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Inherited; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.interceptor.InterceptorBinding;   @Inherited @InterceptorBinding @Retention(RUNTIME) @Target({METHOD, TYPE}) public @interface LoggingInterceptorBinding { } The generated code is fully functional; we don't need to add anything to it. 
In order to use our interceptor binding type, we need to write an interceptor and annotate it with our interceptor binding type, as shown in the following code: package com.ensode.cdiintro.interceptor;   import com.ensode.cdiintro.interceptorbinding.LoggingInterceptorBinding; import java.io.Serializable; import java.util.logging.Level; import java.util.logging.Logger; import javax.interceptor.AroundInvoke; import javax.interceptor.Interceptor; import javax.interceptor.InvocationContext;   @LoggingInterceptorBinding @Interceptor public class LoggingInterceptor implements Serializable{      private static final Logger logger = Logger.getLogger(            LoggingInterceptor.class.getName());      @AroundInvoke    public Object logMethodCall(InvocationContext invocationContext)            throws Exception {          logger.log(Level.INFO, new StringBuilder("entering ").append(                invocationContext.getMethod().getName()).append(                " method").toString());          Object retVal = invocationContext.proceed();          logger.log(Level.INFO, new StringBuilder("leaving ").append(                invocationContext.getMethod().getName()).append(                " method").toString());          return retVal;    } } As we can see, other than being annotated with our interceptor binding type, the preceding class is a standard interceptor similar to the ones we use with EJB session beans. In order for our interceptor binding type to work properly, we need to add a CDI configuration file (beans.xml) to our project. Then, we need to register our interceptor in beans.xml as follows: <?xml version="1.0" encoding="UTF-8"?> <beans               xsi_schemaLocation="http://>    <interceptors>          <class>        com.ensode.cdiintro.interceptor.LoggingInterceptor      </class>    </interceptors> </beans> To register our interceptor, we need to set bean-discovery-mode to all in the generated beans.xml and add the <interceptor> tag in beans.xml, with one or more nested <class> tags containing the fully qualified names of our interceptors. The final step before we can use our interceptor binding type is to annotate the class to be intercepted with our interceptor binding type: package com.ensode.cdiintro.controller;   import com.ensode.cdiintro.interceptorbinding.LoggingInterceptorBinding; import com.ensode.cdiintro.model.Customer; import com.ensode.cdiintro.model.PremiumCustomer; import com.ensode.cdiintro.qualifier.Premium; import java.util.logging.Level; import java.util.logging.Logger; import javax.enterprise.context.RequestScoped; import javax.inject.Inject; import javax.inject.Named;   @LoggingInterceptorBinding @Named @RequestScoped public class PremiumCustomerController {      private static final Logger logger = Logger.getLogger(            PremiumCustomerController.class.getName());    @Inject    @Premium    private Customer customer;      public String saveCustomer() {          PremiumCustomer premiumCustomer = (PremiumCustomer) customer;          logger.log(Level.INFO, "Saving the following information n"                + "{0} {1}, discount code = {2}",                new Object[]{premiumCustomer.getFirstName(),                    premiumCustomer.getLastName(),                    premiumCustomer.getDiscountCode()});          //If this was a real application, we would have code to save        //customer data to the database here.          return "premium_customer_confirmation";    } } Now, we are ready to use our interceptor. 
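One note on the beans.xml descriptor shown above: it omits the namespace declarations. A complete CDI 1.1 descriptor with bean-discovery-mode set to all generally looks like the following sketch; the schema location is the standard Java EE 7 one and is an assumption here rather than a copy of the book's exact file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                           http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       version="1.1"
       bean-discovery-mode="all">
    <interceptors>
        <class>com.ensode.cdiintro.interceptor.LoggingInterceptor</class>
    </interceptors>
</beans>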
After executing the preceding code and examining the GlassFish log, we can see our interceptor binding type in action. The lines entering saveCustomer method and leaving saveCustomer method were added to the log by our interceptor, which was indirectly invoked by our interceptor binding type. Custom scopes In addition to providing several prebuilt scopes, CDI allows us to define our own custom scopes. This functionality is primarily meant for developers building frameworks on top of CDI, not for application developers. Nevertheless, NetBeans provides a wizard for us to create our own CDI custom scopes. To create a new CDI custom scope, we need to go to File | New File, select the Contexts and Dependency Injection category, and select the Scope Type file type. Then, we need to enter a package and a name for our custom scope. After clicking on Finish, our new custom scope is created, as shown in the following code: package com.ensode.cdiintro.scopes;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.FIELD; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Inherited; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.inject.Scope;   @Inherited @Scope // or @javax.enterprise.context.NormalScope @Retention(RUNTIME) @Target({METHOD, FIELD, TYPE}) public @interface CustomScope { } To actually use our scope in our CDI applications, we would need to create a custom context which, as mentioned previously, is primarily a concern for framework developers and not for Java EE application developers. Therefore, it is beyond the scope of this article. Interested readers can refer to JBoss Weld CDI for Java Platform, Ken Finnigan, Packt Publishing. (JBoss Weld is a popular CDI implementation and it is included with GlassFish.) Summary In this article, we covered NetBeans support for CDI, a new Java EE API introduced in Java EE 6. We provided an introduction to CDI and explained additional functionality that the CDI API provides over standard JSF. We also covered how to disambiguate CDI injected beans via CDI Qualifiers. Additionally, we covered how to group together CDI annotations via CDI stereotypes. We also, we saw how CDI can help us with AOP via interceptor binding types. Finally, we covered how NetBeans can help us create custom CDI scopes. Resources for Article: Further resources on this subject: Java EE 7 Performance Tuning and Optimization [article] Java EE 7 Developer Handbook [article] Java EE 7 with GlassFish 4 Application Server [article]

Working with WebStart and the Browser Plugin

Packt
06 Feb 2015
12 min read
 In this article by Alex Kasko, Stanislav Kobyl yanskiy, and Alexey Mironchenko, authors of the book OpenJDK Cookbook, we will cover the following topics: Building the IcedTea browser plugin on Linux Using the IcedTea Java WebStart implementation on Linux Preparing the IcedTea Java WebStart implementation for Mac OS X Preparing the IcedTea Java WebStart implementation for Windows Introduction For a long time, for end users, the Java applets technology was the face of the whole Java world. For a lot of non-developers, the word Java itself is a synonym for the Java browser plugin that allows running Java applets inside web browsers. The Java WebStart technology is similar to the Java browser plugin but runs remotely on loaded Java applications as separate applications outside of web browsers. The OpenJDK open source project does not contain the implementations for the browser plugin nor for the WebStart technologies. The Oracle Java distribution, otherwise matching closely to OpenJDK codebases, provided its own closed source implementation for these technologies. The IcedTea-Web project contains free and open source implementations of the browser plugin and WebStart technologies. The IcedTea-Web browser plugin supports only GNU/Linux operating systems and the WebStart implementation is cross-platform. While the IcedTea implementation of WebStart is well-tested and production-ready, it has numerous incompatibilities with the Oracle WebStart implementation. These differences can be seen as corner cases; some of them are: Different behavior when parsing not well-formed JNLP descriptor files: The Oracle implementation is generally more lenient for malformed descriptors. Differences in JAR (re)downloading and caching behavior: The Oracle implementation uses caching more aggressively. Differences in sound support: This is due to differences in sound support between Oracle Java and IcedTea on Linux. Linux historically has multiple different sound providers (ALSA, PulseAudio, and so on) and IcedTea has more wide support for different providers, which can lead to sound misconfiguration. The IcedTea-Web browser plugin (as it is built on WebStart) has these incompatibilities too. On top of them, it can have more incompatibilities in relation to browser integration. User interface forms and general browser-related operations such as access from/to JavaScript code should work fine with both implementations. But historically, the browser plugin was widely used for security-critical applications like online bank clients. Such applications usually require security facilities from browsers, such as access to certificate stores or hardware crypto-devices that can differ from browser to browser, depending on the OS (for example, supports only Windows), browser version, Java version, and so on. Because of that, many real-world applications can have problems running the IcedTea-Web browser plugin on Linux. Both WebStart and the browser plugin are built on the idea of downloading (possibly untrusted) code from remote locations, and proper privilege checking and sandboxed execution of that code is a notoriously complex task. Usually reported security issues in the Oracle browser plugin (most widely known are issues during the year 2012) are also fixed separately in IcedTea-Web. Building the IcedTea browser plugin on Linux The IcedTea-Web project is not inherently cross-platform; it is developed on Linux and for Linux, and so it can be built quite easily on popular Linux distributions. 
The two main parts of it (stored in corresponding directories in the source code repository) are netx and plugin. NetX is a pure Java implementation of the WebStart technology. We will look at it more thoroughly in the following recipes of this article. Plugin is an implementation of the browser plugin using the NPAPI plugin architecture that is supported by multiple browsers. Plugin is written partly in Java and partly in native code (C++), and it officially supports only Linux-based operating systems. There exists an opinion about NPAPI that this architecture is dated, overcomplicated, and insecure, and that modern web browsers have enough built-in capabilities to not require external plugins. And browsers have gradually reduced support for NPAPI. Despite that, at the time of writing this book, the IcedTea-Web browser plugin worked on all major Linux browsers (Firefox and derivatives, Chromium and derivatives, and Konqueror). We will build the IcedTea-Web browser plugin from sources using Ubuntu 12.04 LTS amd64. Getting ready For this recipe, we will need a clean Ubuntu 12.04 running with the Firefox web browser installed. How to do it... The following procedure will help you to build the IcedTea-Web browser plugin: Install prepackaged binaries of OpenJDK 7: sudo apt-get install openjdk-7-jdk Install the GCC toolchain and build dependencies: sudo apt-get build-dep openjdk-7 Install the specific dependency for the browser plugin: sudo apt-get install firefox-dev Download and decompress the IcedTea-Web source code tarball: wget http://icedtea.wildebeest.org/download/source/icedtea-web-1.4.2.tar.gz tar xzvf icedtea-web-1.4.2.tar.gz Run the configure script to set up the build environment: ./configure Run the build process: make Install the newly built plugin into the /usr/local directory: sudo make install Configure the Firefox web browser to use the newly built plugin library: mkdir ~/.mozilla/plugins cd ~/.mozilla/plugins ln -s /usr/local/IcedTeaPlugin.so libjavaplugin.so Check whether the IcedTea-Web plugin has appeared under Tools | Add-ons | Plugins. Open the http://java.com/en/download/installed.jsp web page to verify that the browser plugin works. How it works... The IcedTea browser plugin requires the IcedTea Java implementation to be compiled successfully. The prepackaged OpenJDK 7 binaries in Ubuntu 12.04 are based on IcedTea, so we installed them first. The plugin uses the GNU Autconf build system that is common between free software tools. The xulrunner-dev package is required to access the NPAPI headers. The built plugin may be installed into Firefox for the current user only without requiring administrator privileges. For that, we created a symbolic link to our plugin in the place where Firefox expects to find the libjavaplugin.so plugin library. There's more... The plugin can also be installed into other browsers with NPAPI support, but installation instructions can be different for different browsers and different Linux distributions. As the NPAPI architecture does not depend on the operating system, in theory, a plugin can be built for non-Linux operating systems. But currently, no such ports are planned. Using the IcedTea Java WebStart implementation on Linux On the Java platform, the JVM needs to perform the class load process for each class it wants to use. This process is opaque for the JVM and actual bytecode for loaded classes may come from one of many sources. 
For example, this method allows the Java Applet classes to be loaded from a remote server to the Java process inside the web browser. Remote class loading also may be used to run remotely loaded Java applications in standalone mode without integration with the web browser. This technique is called Java WebStart and was developed under Java Specification Request (JSR) number 56. To run the Java application remotely, WebStart requires an application descriptor file that should be written using the Java Network Launching Protocol (JNLP) syntax. This file is used to define the remote server to load the application form along with some metainformation. The WebStart application may be launched from the web page by clicking on the JNLP link, or without the web browser using the JNLP file obtained beforehand. In either case, running the application is completely separate from the web browser, but uses a sandboxed security model similar to Java Applets. The OpenJDK project does not contain the WebStart implementation; the Oracle Java distribution provides its own closed-source WebStart implementation. The open source WebStart implementation exists as part of the IcedTea-Web project. It was initially based on the NETwork eXecute (NetX) project. Contrary to the Applet technology, WebStart does not require any web browser integration. This allowed developers to implement the NetX module using pure Java without native code. For integration with Linux-based operating systems, IcedTea-Web implements the javaws command as shell script that launches the netx.jar file with proper arguments. In this recipe, we will build the NetX module from the official IcedTea-Web source tarball. Getting ready For this recipe, we will need a clean Ubuntu 12.04 running with the Firefox web browser installed. How to do it... The following procedure will help you to build a NetX module: Install prepackaged binaries of OpenJDK 7: sudo apt-get install openjdk-7-jdk Install the GCC toolchain and build dependencies: sudo apt-get build-dep openjdk-7 Download and decompress the IcedTea-Web source code tarball: wget http://icedtea.wildebeest.org/download/source/icedtea-web-1.4.2.tar.gz tar xzvf icedtea-web-1.4.2.tar.gz Run the configure script to set up a build environment excluding the browser plugin from the build: ./configure –disable-plugin Run the build process: make Install the newly-built plugin into the /usr/local directory: sudo make install Run the WebStart application example from the Java tutorial: javaws http://docs.oracle.com/javase/tutorialJWS/samples/ deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp How it works... The javaws shell script is installed into the /usr/local/* directory. When launched with a path or a link to the JNLP file, javaws launches the netx.jar file, adding it to the boot classpath (for security reasons) and providing the JNLP link as an argument. Preparing the IcedTea Java WebStart implementation for Mac OS X The NetX WebStart implementation from the IcedTea-Web project is written in pure Java, so it can also be used on Mac OS X. IcedTea-Web provides the javaws launcher implementation only for Linux-based operating systems. In this recipe, we will create a simple implementation of the WebStart launcher script for Mac OS X. Getting ready For this recipe, we will need Mac OS X Lion with Java 7 (the prebuilt OpenJDK or Oracle one) installed. We will also need the netx.jar module from the IcedTea-Web project, which can be built using instructions from the previous recipe. 
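Because every recipe that follows revolves around a JNLP descriptor, it helps to see what such a file looks like. The following is a minimal hedged example; the codebase URL, JAR name, and main class are placeholders rather than values from the book:

<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://example.com/app" href="myapp.jnlp">
    <information>
        <title>My WebStart Application</title>
        <vendor>Example Vendor</vendor>
    </information>
    <resources>
        <j2se version="1.7+"/>
        <jar href="myapp.jar" main="true"/>
    </resources>
    <application-desc main-class="com.example.Main"/>
</jnlp>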
How to do it... The following procedure will help you to run WebStart applications on Mac OS X: Download the JNLP descriptor example from the Java tutorials at http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp. Test that this application can be run from the terminal using netx.jar: java -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot dynamictree_webstart.jnlp Create the wslauncher.sh bash script with the following contents: #!/bin/bash if [ "x$JAVA_HOME" = "x" ] ; then JAVA="$( which java 2>/dev/null )" else JAVA="$JAVA_HOME"/bin/java fi if [ "x$JAVA" = "x" ] ; then echo "Java executable not found" exit 1 fi if [ "x$1" = "x" ] ; then echo "Please provide JNLP file as first argument" exit 1 fi $JAVA -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot $1 Mark the launcher script as executable: chmod 755 wslauncher.sh Run the application using the launcher script: ./wslauncher.sh dynamictree_webstart.jnlp How it works... The next.jar file contains a Java application that can read JNLP files and download and run classes described in JNLP. But for security reasons, next.jar cannot be launched directly as an application (using the java -jar netx.jar syntax). Instead, netx.jar is added to the privileged boot classpath and is run specifying the main class directly. This allows us to download applications in sandbox mode. The wslauncher.sh script tries to find the Java executable file using the PATH and JAVA_HOME environment variables and then launches specified JNLP through netx.jar. There's more... The wslauncher.sh script provides a basic solution to run WebStart applications from the terminal. To integrate netx.jar into your operating system environment properly (to be able to launch WebStart apps using JNLP links from the web browser), a native launcher or custom platform scripting solution may be used. Such solutions lay down the scope of this book. Preparing the IcedTea Java WebStart implementation for Windows The NetX WebStart implementation from the IcedTea-Web project is written in pure Java, so it can also be used on Windows; we also used it on Linux and Mac OS X in previous recipes in this article. In this recipe, we will create a simple implementation of the WebStart launcher script for Windows. Getting ready For this recipe, we will need a version of Windows running with Java 7 (the prebuilt OpenJDK or Oracle one) installed. We will also need the netx.jar module from the IcedTea-Web project, which can be built using instructions from the previous recipe in this article. How to do it... The following procedure will help you to run WebStart applications on Windows: Download the JNLP descriptor example from the Java tutorials at http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp. 
Test that this application can be run from the terminal using netx.jar: java -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot dynamictree_webstart.jnlp Create the wslauncher.sh bash script with the following contents: #!/bin/bash if [ "x$JAVA_HOME" = "x" ] ; then JAVA="$( which java 2>/dev/null )" else JAVA="$JAVA_HOME"/bin/java fi if [ "x$JAVA" = "x" ] ; then echo "Java executable not found" exit 1 fi if [ "x$1" = "x" ] ; then echo "Please provide JNLP file as first argument" exit 1 fi $JAVA -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot $1 Mark the launcher script as executable: chmod 755 wslauncher.sh Run the application using the launcher script: ./wslauncher.sh dynamictree_webstart.jnlp How it works... The netx.jar module must be added to the boot classpath as it cannot be run directly because of security reasons. The wslauncher.bat script tries to find the Java executable using the JAVA_HOME environment variable and then launches specified JNLP through netx.jar. There's more... The wslauncher.bat script may be registered as a default application to run the JNLP files. This will allow you to run WebStart applications from the web browser. But the current script will show the batch window for a short period of time before launching the application. It also does not support looking for Java executables in the Windows Registry. A more advanced script without those problems may be written using Visual Basic script (or any other native scripting solution) or as a native executable launcher. Such solutions lay down the scope of this book. Summary In this article we covered the configuration and installation of WebStart and browser plugin components, which are the biggest parts of the Iced Tea project.
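On Windows, the launcher referred to in the How it works... section (wslauncher.bat) would be a batch file rather than the bash script shown in the steps. The following is a rough, hypothetical sketch of such a batch launcher following the same approach; it is an illustration, not the book's actual script:

@echo off
rem Hypothetical WebStart launcher sketch for Windows (wslauncher.bat)

if "%JAVA_HOME%"=="" (
    set JAVA=java
) else (
    set JAVA=%JAVA_HOME%\bin\java
)

if "%~1"=="" (
    echo Please provide a JNLP file as the first argument
    exit /b 1
)

rem netx.jar goes on the boot classpath for the same security reasons as on Linux
"%JAVA%" -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot %1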


How to Build a Koa Web Application - Part 1

Christoffer Hallas
15 Dec 2014
8 min read
You may be a seasoned or novice web developer, but no matter your level of experience, you must always be able to set up a basic MVC application. This two part series will briefly show you how to use Koa, a bleeding edge Node.js web application framework to create a web application using MongoDB as its database. Koa has a low footprint and tries to be as unbiased as possible. For this series, we will also use Jade and Mongel, two Node.js libraries that provide HTML template rendering and MongoDB model interfacing, respectively. Note that this series requires you to use Node.js version 0.11+. At the end of the series, we will have a small and basic app where you can create pages with a title and content, list your pages, and view them. Let’s get going! Using NPM and Node.js If you do not already have Node.js installed, you can download installation packages at the official Node.js website, http://nodejs.org. I strongly suggest that you install Node.js in order to code along with the article. Once installed, Node.js will add two new programs to your computer that you can access from your terminal; they’re node and npm. The first program is the main Node.js program and is used to run Node.js applications, and the second program is the Node Package Manager and it’s used to install Node.js packages. For this application we start out in an empty folder by using npm to install four libraries: $ npm install koa jade mongel co-body Once this is done, open your favorite text editor and create an index.js file in the folder in which we will now start our creating our application. We start by using the require function to load the four libraries we just installed: var koa = require('koa'); var jade = require('jade'); var mongel = require('mongel'); var parse = require(‘co-body'); This simply loads the functionality of the libraries into the respective variables. This lets us create our Page model and our Koa app variables: var Page = mongel('pages', ‘mongodb://localhost/app'); var app = koa(); As you can see, we now use the variables mongel and koa that we previously loaded into our program using require. To create a model with mongel, all we have to do is give the name of our MongoDB collection and a MongoDB connection URI that represents the network location of the database; in this case we’re using a local installation of MongoDB and a database called app. It’s simple to create a basic Koa application, and as seen in the code above, all we do is create a new variable called app that is the result of calling the Koa library function. Middleware, generators, and JavaScript Koa uses a new feature in JavaScript called generators. Generators are not widely available in browsers yet except for some versions of Google Chrome, but since Node.js is built on the same JavaScript as Google Chrome it can use generators. The generators function is much like a regular JavaScript function, but it has a special ability to yield several values along with the normal ability of returning a single value. Some expert JavaScript programmers used this to create a new and improved way of writing asynchronous code in JavaScript, which is required when building a networked application such as a web application. The generators function is a complex subject and we won’t cover it in detail. We’ll just show you how to use it in our small and basic app. In Koa, generators are used as something called middleware, a concept that may be familiar to you from other languages such as Ruby and Python. 
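Before looking at how Koa stacks middleware, a minimal sketch of a plain generator may make the yield mechanics more concrete; the counter function below is purely illustrative and not part of the application we are building:

// A generator can yield several values; each call to next() resumes
// execution right after the previous yield.
function* counter() {
  yield 1;
  yield 2;
  return 3;
}

var it = counter();
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: 2, done: false }
console.log(it.next()); // { value: 3, done: true }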
Think of middleware as a stack of functions through which an HTTP request must travel in order to create an appropriate response. Middleware should be created so that the functionality of a given middleware is encapsulated together. In our case, this means we’ll be creating two pieces of middleware: one to create pages and one to list pages or show a page. Let’s create our first middleware: app.use(function* (next) { … }); As you can see, we start by calling the app.use function, which takes a generator as its argument, and this effectively pushes the generator into the stack. To create a generator, we use a special function syntax where an asterisk is added as seen in the previous code snippet. We let our generator take a single argument called next, which represents the next middleware in the stack, if any. From here on, it is simply a matter of checking and responding to the parameters of the HTTP request, which are accessible to us in the Koa context. This is also the function context, which in JavaScript is the keyword this, similar to other languages and the keyword self: if (this.path != '/create') { yield next; return } Since we’re creating some middleware that helps us create pages, we make sure that this request is for the right path, in our case, /create; if not, we use the yield keyword and the next argument to pass the control of the program to the next middleware. Please note the return keyword that we also use; this is very important in this case as the middleware would otherwise continue while also passing control to the next middleware. This is not something you want to happen unless the middleware you’re in will not modify the Koa context or HTTP response, because subsequent middleware will always expect that they’re now in control. Now that we have checked that the path is correct, we still have to check the method to see if we’re just showing the form to create a page, or if we should actually create a page in the database: if (this.method == 'POST') { var body = yield parse.form(this); var page = yield Page.createOne({    title: body.title,    contents: body.contents }); this.redirect('/' + page._id); return } else if (this.method != 'GET') { this.status = 405; this.body = 'Method Not Allowed'; return } To check the method, we use the Koa context again and the method attribute. If we’re handling a POST request we now know how to create a page, but this also means that we must extract extra information from the request. Koa does not process the body of a request, only the headers, so we use the co-body library that we downloaded early and loaded in as the parse variable. Notice how we yield on the parse.form function; this is because this is an asynchronous function and we have to wait until it is done before we continue the program. Then we proceed to use our mongel model Page to create a page using the data we found in the body of the request, again this is an asynchronous function and we use yield to wait before we finally redirect the request using the page’s database id. If it turns out the method was not POST, we still want to use this middleware to show the form that is actually used to issue the request. That means we have to make sure that the method is GET, so we added an else if statement to the original check, and if the request is neither POST or GET we respond with an HTTP status 405 and the message Method Not Allowed, which is the appropriate response for this case. 
Notice how we don’t yield next; this is because the middleware was able to determine a satisfying response for the request and it requires no further processing. Finally, if the method was actually POST, we use the Jade library that we also installed using npm to render a create.jade template in HTML: var html = jade.renderFile('create.jade'); this.body = html; Notice how we set the Koa context’s body attribute to the rendered HTML from Jade; all this does is tell Koa that we want to send that back to the browser that sent the request. Wrapping up You are well on your way to creating your Koa app. In Part 2 we will implement Jade templates and list and view pages. Ready for the next step? Read Part 2 here. Explore all of our top Node.js content in one place - visit our Node.js page today! About the author Christoffer Hallas is a software developer and entrepreneur from Copenhagen, Denmark. He is a computer polyglot and contributes to and maintains a number of open source projects. When not contemplating his next grand idea (which remains an idea) he enjoys music, sports, and design of all kinds. Christoffer can be found on GitHub as hallas and at Twitter as @hamderhallas.


Configuring Distributed Rails Applications with Chef: Part 2

Rahmal Conda
07 Nov 2014
9 min read
In my Part 1 post, I gave you the low down about Chef. I covered what it’s for and what it’s capable of. Now let’s get into some real code and take a look at how you install and run Chef Solo and Chef Server. What we want to accomplish First let’s make a list of some goals. What are we trying to get out of deploying and provisioning with Chef? Once we have it set up, provisioning a new server should be simple; no more than a few simple commands. We want it to be platform-agnostic so we can deploy any VPS provider we choose with the same scripts. We want it to be easy to follow and understand. Any new developer coming later should have no problem figuring out what’s going on. We want the server to be nearly automated. It should take care of itself as much as possible, and alert us if anything goes wrong. Before we start, let’s decide on a stack. You should feel free to run any stack you choose. This is just what I’m using for this post setup: Ubuntu 12.04 LTS RVM Ruby 1.9.3+ Rails 3.2+ Postgres 9.3+ Redis 3.1+ Chef Git Now that we’ve got that out of the way, let’s get started! Step 1: Install the tools First, make sure that all of the packages we download to our VPS are up to date: ~$ sudo apt-get update Next, we'll install RVM (Ruby Version Manager). RVM is a great tool for installing Ruby. It allows you to use several versions of Ruby on one server. Don't get ahead of yourself though; at this point, we only care about one version. To install RVM, we’ll need curl: ~$ sudo apt-get install curl We also need to install Git. Git is an open source distributed version control system, primarily used to maintain software projects. (If you didn't know that much, you're probably reading the wrong post. But I digress!): ~$ sudo apt-get install git Now install RVM with this curl command: ~$ curl -sSL https://get.rvm.io | bash -s stable You’ll need to source RVM (you can add this to your bash profile): ~$ source ~/.rvm/scripts/rvm In order for it to work, RVM has some of its own dependencies that need to be installed. To automatically install them, use the following command: ~$ rvm requirements Once we have RVM set up, installing Ruby is simple: ~$ rvm install 1.9.3 Ruby 1.9.3 is now installed! Since we'll be accessing it through a tool that can potentially have a variety of Ruby versions loaded, we need to tell the system to use this version as the default: ~$ rvm use 1.9.3 --default Next we'll make sure that we can install any Ruby Gem we need into this new environment. We'll stick with RVM for installing gems as well. This'll ensure they get loaded into our Ruby version properly. Run this command: ~$ rvm rubygems current Don’t worry if it seems like you’re setting up a lot of things manually now. Once Chef is set up, all of this will be part of your cookbooks, so you’ll only have to do this once. Step 2: Install Chef and friends First, we'll start off by cloning the Opscode Chef repository: ~$ git clone git://github.com/opscode/chef-repo.git chef With Ruby and RubyGems set up, we can install some gems! We’ll start with a gem called Librarian-Chef. Librarian-Chef is sort of a Rails Bundler for Chef cookbooks. It'll download and manage cookbooks that you specify in Cheffile. Many useful cookbooks are published by different sources within the Chef community. You'll want to make use of them as you build out your own Chef environment. 
~$ gem install librarian-chef  Initialize Librarian in your Chef repository with this command: ~$ cd chef ~/chef$ librarian-chef init This command will create a Cheffile in your Chef repository. All of your dependencies should be specified in that file. To deploy the stack we just built, your Cheffile should look like this: 1 site 'http://community.opscode.com/api/v1' 2 cookbook 'sudo' 3 cookbook 'apt' 4 cookbook 'user' 5 cookbook 'git' 6 cookbook 'rvm' 7 cookbook 'postgresql' 8 cookbook 'rails' ~ Now use Librarian to pull these community cookbooks: ~/chef$ librarian-chef install Librarian will pull the cookbooks you specify, along with their dependencies, to the cookbooks folder and create a Cheffile.lock file. Commit both Cheffile and Cheffile.lock to your repo: ~/chef$ git add Cheffile Cheffile.lock ~/chef$ git commit -m “updated cookbooks list” There is no need to commit the cookbooks folder, because you can always use the install command and Librarian will pull the same group of cookbooks with the correct versions. You should not touch the cookbooks folder—let Librarian manage it for you. Librarian will overwrite any changes you make inside that folder. If you want to manually create and manage cookbooks, outside of Librarian, add a new folder, like local-cookbooks, for instance. Step 3: Cooking up somethin’ good! Now that you see how to get the cookbooks, you can create your roles. You use roles to determine what role a server instance would have in you server stack, and you specify what that role would need. For instance, your Database Server role would most likely need a Postgresql server (or you DB of choice), a DB client, user authorization and management, while your Web Server would need Apache (or Nginx), Unicorn, Passenger, and so on. You can also make base roles, to have a basic provision that all your servers would have. Given what we’ve installed so far, our basic configuration might look something like this: name "base" description "Basic configuration for all nodes" run_list( 'recipe[git]', 'recipe[sudo]', 'recipe[apt]', 'recipe[rvm::user]', 'recipe[postgresql::client]' ) override_attributes( authorization: { sudo: { users: ['ubuntu'], passwordless: true } }, rvm: { rubies: ['ruby-1.9.3-p125'], default_ruby: 'ruby-1.9.3-p125', global_gems: ['bundler', 'rake'] } ) ~ Deploying locally with Chef Solo: Chef Solo is a Ruby gem that runs a self-contained Chef instance. Solo is great for running your recipes locally to test them, or to provision development machines. If you don’t have a hosted Chef Server set up, you can use Chef Solo to set up remote servers too. If your architecture is still pretty small, this might be just what you need. We need to create a Chef configuration file, so we’ll call it deploy.rb: root = File.absolute_path(File.dirname(__FILE__)) roles = File.join(root, 'cookbooks') books = File.join(root, 'roles') file_cache_path root cookbook_path books role_path roles ~ We’ll also need a JSON-formatted configuration file. Let’s call this one deploy.json: { "run_list": ["recipe[base]"] } ~ Now run Chef with this command: ~/chef$ sudo chef-solo -j deploy.json -c deploy.rb Deploying to a new Amazon EC2 instance: You’ll need the Chef server for this step. First you need to create a new VPS instance for your Chef server and configure it with a static IP or a domain name, if possible. We won’t go through that here, but you can find instructions for setting up a server instance on EC2 with a public IP and configuring a domain name in the documentation for your VPS. 
Once you have your server instance set up, SSH onto the instance and install Chef server. Start by downloading the dep package using the wget tool: ~$ wget https://opscode-omnibus-packages.s3.amazonaws.com/ ubuntu/12.04/x86_64/chef-server_11.0.10-1.ubuntu.12.04_amd64.deb Once the dep package has downloaded, install Chef server like so: ~$ sudo dpkg -i chef-server* When it completes, it will print to the screen an instruction that you need to run this next command to actually configure the service for your specific machine. This command will configure everything automatically: ~$ sudo chef-server-ctl reconfigure Once the configuration step is complete, the Chef server should be up and running. You can access the web interface immediately by browsing to your server's domain name or IP address. Now that you’ve got Chef up and running, install the knife EC2 plugin. This will also install the knife gem as a dependency: ~$ gem install knife-ec2 You now have everything you need! So create another VPS to provision with Chef. Once you do that, you’ll need to copy your SSH keys over: ~$ ssh-copy-id root@yourserverip You can finally provision your server! Start by installing Chef on your new machine: ~$ knife solo prepare root@yourserverip This will generate a file, nodes/yourserverip.json. You need to change this file to add your own environment settings. For instance, you will need to add username and password for monit. You will also need to add a password for postgresql to the file. Run the openssl command again to create a password for postgresql. Take the generated password, and add it to the file. Now, you can finally provision your server! Start the Chef command: ~$ knife solo cook root@yourserverip Now just sit back, relax and watch Chef cook up your tasty app server. This process may take a while. But once it completes, you’ll have a server ready for a Rails, Postgres, and Redis! I hope these posts helped you get an idea of how much Chef can simplify your life and your deployments. Here’s a couple of links with more information and references about Chef: Chef community site:http://cookbooks.opscode.com/ Chef Wiki:https://wiki.opscode.com/display/chef/Home Chef Supermarket:https://community.opscode.com/cookbooks?utf8=%E2%9C%93&q=user Chef cookbooks for busy Ruby developers:http://teohm.com/blog/2013/04/17/chef-cookbooks-for-busy-ruby-developers/ Deploying Rails apps with Chef and Capistrano:http://www.slideshare.net/SmartLogic/guided-exploration-deploying-rails-apps-with-chef-and-capistrano About the author Rahmal Conda is a Software Development Professional and Ruby aficionado from Chicago. After 10 years working in web and application development, he moved out to the Bay Area, eager to join the startup scene. He had a taste of the startup life in Chicago working at a small personal finance company. After that he knew it was the life he had been looking for. So he moved his family out west. Since then he's made a name for himself in the social space at some high profile Silicon Valley startups. Right now he's the one of the Co-founders and Platform Architect of Boxes, a mobile marketplace for the world's hidden treasures.


Configuring Distributed Rails Applications with Chef: Part 1

Rahmal Conda
31 Oct 2014
4 min read
Between 2005 and 2010, Rails (and Ruby by extension) went from a niche web application framework to the center of a robust web application platform. To do this it needed more than Ruby and a few complementary gems. Anyone who has ever tried to deploy a Rails application into a production environment knows that Rails doesn't run in a vacuum. Rails still needs a web server in front of it, like Apache or Nginx, to help manage requests. Oops, you'll need Unicorn or Passenger too. Almost all Rails apps are backed by some sort of data persistence layer, usually a relational database, and more and more often a NoSQL DB like MongoDB. Depending on the application, you're probably going to deploy a caching strategy at some point: Memcached, Redis, the list goes on. What about background jobs? You'll need another server instance for that too, and not just one either. High availability systems need to be redundant. If you're lucky enough to get a lot of traffic, you'll need a way to scale all of this.

Why Chef?

Chances are that you're managing all of this manually. Don't feel bad, everyone starts out that way. But as you grow, how do you manage all of this without going insane? Most Rails developers start off with Capistrano, which is a great choice. Capistrano is a remote server automation tool. It's used most often as a deployment tool for Rails. For the most part it's a great solution for managing the multiple servers that make up your Rails stack. It's only when your architecture reaches a certain size that I'd recommend choosing Chef over Capistrano. But really, there's no reason to choose one over the other, since they actually work pretty well together and they are both similar regarding deployment. Where Chef excels, however, is when you need to provision multiple servers with different roles and changing software stacks. This is what I'm going to focus on in this post. But let's introduce Chef first.

What is Chef anyway?

Basically, Chef is a Ruby-based configuration management engine. It is a software configuration management tool, used for provisioning servers for certain roles within a platform stack and deploying applications to those servers. It is used to automate server configuration and integration into your infrastructure. You define your infrastructure in configuration files written in Chef's Ruby DSL, and Chef takes care of setting up individual machines and linking them together.

Chef server

You set up one of your server instances (virtual or otherwise) as the server, and all your other instances are clients that communicate with the Chef "server" via REST over HTTPS. The server is an application that stores cookbooks for your nodes.

Recipes and cookbooks

Recipes are files that contain sets of instructions written in Chef's Ruby DSL. These instructions perform some kind of procedure, usually installing software and configuring some service. These recipes are bound together, along with configuration file templates, resources, and helper scripts, as cookbooks. Cookbooks generally correspond to a specific server configuration. For instance, a Postgres cookbook might contain a recipe for the Postgres server, a recipe for the Postgres client, maybe PostGIS, and some configuration files for how the DB instance should be provisioned.
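To make that concrete, here is a minimal, hypothetical sketch of what a single recipe inside such a cookbook might look like. The package, template, and service resources are standard Chef DSL; the file names and the template source are made up for illustration:

# recipes/server.rb -- install PostgreSQL, render its config, and keep the service running
package 'postgresql'

template '/etc/postgresql/postgresql.conf' do
  source 'postgresql.conf.erb'   # ERB template shipped inside the cookbook
  owner  'postgres'
  group  'postgres'
  mode   '0644'
  notifies :restart, 'service[postgresql]'
end

service 'postgresql' do
  action [:enable, :start]
end

A cookbook bundles a handful of recipes like this together with the templates and attributes they reference.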
Chef Solo

For stacks that don't necessarily need a full Chef server setup, but use cookbooks to set up Rails and DB servers, there's Chef Solo. Chef Solo is a local standalone Chef application that can be used to remotely deploy servers and applications.

Wait, where is the code?

In Part 2 of this post I'm going to walk you through setting up a Rails application with Chef Solo, then I'll expand to show a full Chef server setup. While Chef can be used for many different application stacks, I'm going to focus on Rails configuration and deployment, provisioning and deploying the entire stack. See you next time!

About the Author

Rahmal Conda is a Software Development Professional and Ruby aficionado from Chicago. After 10 years working in web and application development, he moved out to the Bay Area, eager to join the startup scene. He had a taste of the startup life in Chicago working at a small personal finance company. After that he knew it was the life he had been looking for. So he moved his family out west. Since then he's made a name for himself in the social space at some high profile Silicon Valley startups. Right now he's one of the co-founders and Platform Architect of Boxes, a mobile marketplace for the world's hidden treasures.

Using OSGi Services

Packt
26 Aug 2014
14 min read
This article, created by Dr Alex Blewitt, the author of Mastering Eclipse Plug-in Development, presents OSGi services as a means to communicate with and connect applications. Unlike the Eclipse extension point mechanism, OSGi services can have multiple versions available at runtime and can work in other OSGi environments, such as Felix or other commercial OSGi runtimes.

Overview of services

In an Eclipse or OSGi runtime, each individual bundle is its own separate module, which has explicit dependencies on library code via Import-Package, Require-Bundle, or Require-Capability. These express static relationships and provide a way of configuring the bundle's classpath. However, this presents a problem. If bundles are independent, how can they use contributions provided by other bundles? In Eclipse's case, the extension registry provides a means for code to look up providers. In a standalone OSGi environment, OSGi services provide a similar mechanism.

A service is an instance of a class that implements a service interface. When a service is created, it is registered with the services framework under one (or more) interfaces, along with a set of properties. Consumers can then get the service by asking the framework for implementers of that specific interface. Services can also be registered under an abstract class, but this is not recommended. Providing a service interface exposed as an abstract class can lead to unnecessary coupling of client to implementation. The following diagram gives an overview of services:

This separation allows the consumer and producer to depend on a common API bundle, but otherwise be completely decoupled from one another. This allows both the consumer and producer to be mocked out or exchanged for different implementations in the future.

Registering a service programmatically

To register a service, an instance of the implementation class needs to be created and registered with the framework. Interactions with the framework are performed with an instance of BundleContext, typically provided in the BundleActivator.start method and stored for later use. The *FeedParser classes will be extended to support registration as a service instead of the Equinox extension registry.

Creating an activator

A bundle's activator is a class that is instantiated and coupled to the lifetime of the bundle. When a bundle is started, if a manifest entry Bundle-Activator exists, then the corresponding class is instantiated. As long as it implements the BundleActivator interface, the start method will be called. This method is passed an instance of BundleContext, which is the bundle's connection to the hosting OSGi framework. Create a class in the com.packtpub.e4.advanced.feeds project called com.packtpub.e4.advanced.feeds.internal.FeedsActivator, which implements the org.osgi.framework.BundleActivator interface. The quick fix may suggest adding org.osgi.framework as an imported package. Accept this, and modify the META-INF/MANIFEST.MF file as follows:

Import-Package: org.osgi.framework
Bundle-Activator: com.packtpub.e4.advanced.feeds.internal.FeedsActivator

The framework will automatically invoke the start method of the FeedsActivator when the bundle is started, and correspondingly, the stop method when the bundle is stopped.
Test this by inserting a pair of println calls:

public class FeedsActivator implements BundleActivator {
  public void start(BundleContext context) throws Exception {
    System.out.println("Bundle started");
  }
  public void stop(BundleContext context) throws Exception {
    System.out.println("Bundle stopped");
  }
}

Now run the project as an OSGi framework with the feeds bundle, the Equinox console, and the Gogo shell. The required dependencies can be added by clicking on Add Required Bundles, although the Include optional dependencies checkbox does not need to be selected. Ensure that the other workspace and target bundles are deselected with the Deselect all button, as shown in the following screenshot:

The required bundles are as follows:

com.packtpub.e4.advanced.feeds
org.apache.felix.gogo.command
org.apache.felix.gogo.runtime
org.apache.felix.gogo.shell
org.eclipse.equinox.console
org.eclipse.osgi

On the console, when the bundle is started (which happens automatically if the Default Auto-Start is set to true), the Bundle started message should be seen. If the bundle does not start, ss in the console will print a list of bundles and start 2 will start the bundle with the ID 2. Afterwards, stop 2 can be used to stop bundle 2. Bundles can be stopped/started dynamically in an OSGi framework.

Registering the service

Once the FeedsActivator instance is created, a BundleContext instance will be available for interaction with the framework. This can be persisted for subsequent use in an instance field and can also be used directly to register a service. The BundleContext class provides a registerService method, which takes an interface, an instance, and an optional Dictionary instance of key/value pairs. This can be used to register instances of the feed parser at runtime. Modify the start method as follows:

public void start(BundleContext context) throws Exception {
  context.registerService(IFeedParser.class, new RSSFeedParser(), null);
  context.registerService(IFeedParser.class, new AtomFeedParser(), null);
  context.registerService(IFeedParser.class, new MockFeedParser(), null);
}

Now start the framework again. In the console that is launched, look for the bundle corresponding to the feeds bundle:

osgi> bundles | grep feeds
com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4]
  {com.packtpub.e4.advanced.feeds.IFeedParser}={service.id=56}
  {com.packtpub.e4.advanced.feeds.IFeedParser}={service.id=57}
  {com.packtpub.e4.advanced.feeds.IFeedParser}={service.id=58}

This shows that bundle 4 has started three services, using the interface com.packtpub.e4.advanced.feeds.IFeedParser, and with service IDs 56, 57, and 58. It is also possible to query the runtime framework for services of a known interface type directly using the services command and an LDAP style filter:

osgi> services (objectClass=com.packtpub.e4.advanced.feeds.IFeedParser)
{com.packtpub.e4.advanced.feeds.IFeedParser}={service.id=56}
  "Registered by bundle:" com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4]
  "No bundles using service."
{com.packtpub.e4.advanced.feeds.IFeedParser}={service.id=57}
  "Registered by bundle:" com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4]
  "No bundles using service."
{com.packtpub.e4.advanced.feeds.IFeedParser}={service.id=58}
  "Registered by bundle:" com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4]
  "No bundles using service."

The results displayed represent the three services instantiated.
They can be introspected using the service command, passing the service.id:

osgi> service 56
com.packtpub.e4.advanced.feeds.internal.RSSFeedParser@52ba638e
osgi> service 57
com.packtpub.e4.advanced.feeds.internal.AtomFeedParser@3e64c3a
osgi> service 58
com.packtpub.e4.advanced.feeds.internal.MockFeedParser@49d5e6da

Priority of services

Services have an implicit order, based on the order in which they were instantiated. Each time a service is registered, a global service.id is incremented. It is possible to define an explicit service ranking with an integer property. This is used to ensure relative priority between services, regardless of the order in which they are registered. For services with equal service.ranking values, the service.id values are compared. OSGi R6 adds an additional property, service.bundleid, which is used to denote the ID of the bundle that provides the service. This is not used to order services, and is for informational purposes only. Eclipse Luna uses OSGi R6.

To pass a priority into the service registration, create a helper method called priority, which takes an int value and stores it in a Hashtable with the key service.ranking. This can be used to pass a priority to the service registration methods. The following code illustrates this:

private Dictionary<String,Object> priority(int priority) {
  Hashtable<String, Object> dict = new Hashtable<String,Object>();
  dict.put("service.ranking", new Integer(priority));
  return dict;
}
public void start(BundleContext context) throws Exception {
  context.registerService(IFeedParser.class, new RSSFeedParser(), priority(1));
  context.registerService(IFeedParser.class, new MockFeedParser(), priority(-1));
  context.registerService(IFeedParser.class, new AtomFeedParser(), priority(2));
}

Now when the framework starts, the services are displayed in order of priority:

osgi> services (objectClass=com.packtpub.e4.advanced.feeds.IFeedParser)
{com.packtpub.e4.advanced.feeds.IFeedParser}={service.ranking=2, service.id=58}
  "Registered by bundle:" com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4]
  "No bundles using service."
{com.packtpub.e4.advanced.feeds.IFeedParser}={service.ranking=1, service.id=56}
  "Registered by bundle:" com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4]
  "No bundles using service."
{com.packtpub.e4.advanced.feeds.IFeedParser}={service.ranking=-1, service.id=57}
  "Registered by bundle:" com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4]
  "No bundles using service."

Dictionary was the original Java Map interface, and Hashtable the original HashMap implementation. They fell out of favor in Java 1.2 when Map and HashMap were introduced (mainly because they weren't synchronized by default), but OSGi was developed to run on early releases of Java (JSR 8 proposed adding OSGi as a standard for the Java platform). Not only that, early low-powered Java mobile devices didn't support the full Java platform, instead exposing the original Java 1.1 data structures. Because of this history, many APIs in OSGi refer to only Java 1.1 data structures so that low-powered devices can still run OSGi systems.

Using the services

The BundleContext instance can be used to acquire services as well as register them. FeedParserFactory, which originally used the extension registry, can be upgraded to refer to services instead. To obtain an instance of BundleContext, store it in the FeedsActivator.start method as a static variable. That way, classes elsewhere in the bundle will be able to acquire the context.
An accessor method provides an easy way to do this:

public class FeedsActivator implements BundleActivator {
  private static BundleContext bundleContext;
  public static BundleContext getContext() {
    return bundleContext;
  }
  public void start(BundleContext context) throws Exception {
    // register methods as before
    bundleContext = context;
  }
  public void stop(BundleContext context) throws Exception {
    bundleContext = null;
  }
}

Now the FeedParserFactory class can be updated to acquire the services. OSGi services are represented via a ServiceReference instance (which is a sharable object representing a handle to the service) and can be used to acquire a service instance:

public class FeedParserFactory {
  public List<IFeedParser> getFeedParsers() {
    List<IFeedParser> parsers = new ArrayList<IFeedParser>();
    BundleContext context = FeedsActivator.getContext();
    try {
      Collection<ServiceReference<IFeedParser>> references =
        context.getServiceReferences(IFeedParser.class, null);
      for (ServiceReference<IFeedParser> reference : references) {
        parsers.add(context.getService(reference));
        context.ungetService(reference);
      }
    } catch (InvalidSyntaxException e) {
      // ignore
    }
    return parsers;
  }
}

In this case, the service references are obtained from the bundle context with a call to context.getServiceReferences(IFeedParser.class, null). The service references can be used to access the service's properties, and to acquire the service. The service instance is acquired with the context.getService(ServiceReference) call. The contract is that the caller "borrows" the service, and when finished, should return it with an ungetService(ServiceReference) call. Technically, the service is only supposed to be used between the getService and ungetService calls as its lifetime may be invalid afterwards; instead of returning an array of service references, the common pattern is to pass in a unit of work that accepts the service and then call ungetService afterwards. However, to fit in with the existing API, the service is acquired, added to the list, and then released immediately afterwards.

Lazy activation of bundles

Now run the project as an Eclipse application, with the feeds and feeds.ui bundles installed. When a new feed is created by navigating to File | New | Other | Feeds | Feed, and a feed such as http://alblue.bandlem.com/atom.xml is entered, the feeds will be shown in the navigator view. When drilling down, a NullPointerException may be seen in the logs, as shown in the following:

!MESSAGE An exception occurred invoking extension:
  com.packtpub.e4.advanced.feeds.ui.feedNavigatorContent for object
  com.packtpub.e4.advanced.feeds.Feed@770def59
!STACK 0
java.lang.NullPointerException
  at com.packtpub.e4.advanced.feeds.FeedParserFactory.getFeedParsers(FeedParserFactory.java:31)
  at com.packtpub.e4.advanced.feeds.ui.FeedContentProvider.getChildren(FeedContentProvider.java:80)
  at org.eclipse.ui.internal.navigator.extensions.SafeDelegateTreeContentProvider.getChildren(SafeDelegateTreeContentProvider.java:96)

Tracing through the code indicates that the bundleContext is null, which implies that the feeds bundle has not yet been started. This can be seen in the console of the running Eclipse application by executing the following code:

osgi> ss | grep feeds
866 ACTIVE com.packtpub.e4.advanced.feeds.ui_1.0.0.qualifier
992 RESOLVED com.packtpub.e4.advanced.feeds_1.0.0.qualifier

While the feeds.ui bundle is active, the feeds bundle is not. Therefore, the services haven't been instantiated, and bundleContext has not been cached.
By default, bundles are not started when they are accessed for the first time. If the bundle needs its activator to be called prior to using any of the classes in the package, it needs to be marked as having an activation policy of lazy. This is done by adding the following entry to the MANIFEST.MF file:

Bundle-ActivationPolicy: lazy

The manifest editor can be used to add this configuration line by selecting Activate this plug-in when one of its classes is loaded, as shown in the following screenshot:

Now, when the application is run, the feeds will resolve appropriately.

Comparison of services and extension points

Both mechanisms (using the extension registry and using the services) allow for a list of feed parsers to be contributed and used by the application. What are the differences between them, and are there any advantages to one or the other? Both the registry and services approaches can be used outside of an Eclipse runtime. They work the same way when used in other OSGi implementations (such as Felix) and can be used interchangeably. The registry approach can also be used outside of OSGi, although that is far less common.

The registry encodes its information in the plugin.xml file by default, which means that it is typically edited as part of a bundle's install (it is possible to create registry entries from alternative implementations if desired, but this rarely happens). The registry has a notification system, which can listen to contributions being added and removed.

The services approach uses the OSGi framework to store and maintain a list of services. These services don't have an explicit configuration file and, in fact, can be contributed by code (such as the registerService calls) or by declarative representations. The separation of how the service is created versus how the service is registered is a key difference between the service and the registry approach. Like the registry, the OSGi services system can generate notifications when services come and go.

One key difference in an OSGi runtime is that bundles depending on the Eclipse registry must be declared as singletons; that is, they have to use the ;singleton:=true directive on Bundle-SymbolicName. This means that there can only be one version of a bundle that exposes registry entries in a runtime, as opposed to multiple versions in the case of general services.

While the registry does provide mechanisms to be able to instantiate extensions from factories, these typically involve simple configurations and/or properties that are hard-coded in the plugin.xml files themselves. They would not be appropriate to store sensitive details such as passwords. On the other hand, a service can be instantiated from whatever external configuration information is necessary and then registered, such as a JDBC connection for a database.

Finally, extensions in the registry are declarative by default and are activated on demand. This allows Eclipse to start quickly, because it does not need to build the full set of class loader objects or run code, and can then bring up services on demand. Although the approach shown previously didn't use declarative services, it is possible to do so.

Summary

This article introduced OSGi services as a means to extend an application's functionality. It also shed light on how to register a service programmatically.


Apache Karaf – Provisioning and Clusters

Packt
18 Jul 2014
12 min read
In this article, we will cover the following topics:

- What is OSGi and what are its key features?
- The role of the OSGi framework
- The OSGi base artifact, the OSGi bundle, and the concept of dependencies between bundles
- The Apache Karaf OSGi container and the provisioning of applications in the container
- How to manage provisioning on multiple Karaf instances

What is OSGi?

Developers are always looking for very dynamic, flexible, and agile software components. The purposes of doing so are as follows:

- Reuse: This feature states that instead of duplicating the code, a component should be shared by other components, and multiple versions of the same component should be able to cohabit.
- Visibility: This feature specifies that a component should not use the implementation from another component directly. The implementation should be hidden, and the client module should use the interface provided by another component.
- Agility: This feature specifies that the deployment of a new version of a component should not require you to restart the platform. Moreover, a configuration change should not require a restart. For instance, it's not acceptable to restart a production platform just to change a log level. A minor change such as a log level should be dynamic, and the platform should be agile enough to reload the components that should be reloaded.
- Discovery: This feature states that a component should be able to discover other components. It's a kind of Plug and Play system: as soon as a component needs another component, it just looks for it and uses it.

OSGi has been created to address the preceding points. The core concept is to force developers to use a very modular architecture in order to reduce complexity. As this paradigm is applicable to most modern systems, OSGi is now used for small embedded devices as well as for very large systems. Different applications and systems use OSGi, for example, desktop applications, application servers, frameworks, embedded devices, and so on.

The OSGi framework

OSGi is designed to run in Java. In order to provide these features and deploy OSGi applications, a core layer has to be deployed in the Java Virtual Machine (JVM): the OSGi framework. This framework manages the life cycle and the relationships between the different OSGi components and artifacts.

The OSGi bundle

In OSGi, the components are packaged as OSGi bundles. An OSGi bundle is a simple Java JAR (Java ARchive) file that contains additional metadata used by the OSGi framework. This metadata is stored in the manifest file of the JAR file. The following is an example of the metadata:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Version: 2.1.6
Bundle-Name: My Logger
Bundle-SymbolicName: my_logger
Export-Package: org.my.osgi.logger;version=2.1
Import-Package: org.apache.log4j;version="[1.2,2)"
Private-Package: org.my.osgi.logger.internal

We can see that OSGi is very descriptive and verbose. We explicitly describe all the OSGi metadata (headers), including the packages that we export or import with a specified version or version range. As the OSGi headers are defined in the META-INF/MANIFEST file contained in the JAR file, an OSGi bundle is a regular JAR file that you can use outside of OSGi. The life cycle layer of the OSGi framework is an API to install, start, stop, update, and uninstall OSGi bundles.

Dependency between bundles

An OSGi bundle can use other bundles from the OSGi framework in two ways. The first way is static code sharing.
When we say that a bundle exports packages, it means the bundle exposes some code for other bundles. On the other hand, when we say that a bundle imports packages, it means the bundle can use code from other bundles. For instance, we have the bundle A (packaged as the bundleA.jar file) with the following META-INF/MANIFEST file:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Version: 1.0.0
Bundle-Name: Bundle A
Bundle-SymbolicName: bundle_a
Export-Package: com.bundle.a;version=1.0

We can see that the bundle A exposes (exports) the com.bundle.a package with version 1.0. On the other hand, we have the bundle B (packaged as the bundleB.jar file) with the following META-INF/MANIFEST file:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Version: 2.0.0
Bundle-Name: Bundle B
Bundle-SymbolicName: bundle_b
Import-Package: com.bundle.a;version="[1.0,2)"

We can see that the bundle B imports (so, it will use) the com.bundle.a package in any version between 1.0 and 2 (excluded). This means that the OSGi framework will wire the bundles, as the bundle A provides the package used by the bundle B (so, the constraint is resolved). This mechanism is similar to regular Java applications, but instead of embedding the required JAR files in your application, you can just declare the expected code. The OSGi framework is responsible for the link between the different bundles; it's done by the modules layer of the OSGi framework. This approach is interesting when you want to use code which is not natively designed for OSGi. It's a step forward for the reuse of components. However, it provides a limited answer to the purposes seen earlier in the article, especially visibility and discovery.

The second way in which an OSGi bundle can use other bundles from the OSGi framework is more interesting. It uses Service-Oriented Architecture (SOA) for low-level components. Here, more than exposing the code, an OSGi bundle exposes an OSGi service. On the other hand, another bundle can use an OSGi service. The services layer of the OSGi framework provides a service registry and all the plumbing mechanisms to wire the services. The OSGi services provide a very dynamic system, offering a Publish-Find-Bind model for the bundles.

The OSGi container

The OSGi container provides a set of additional features on top of the OSGi framework, as shown in the following diagram:

Apache Karaf provides the following features:

It provides an abstraction of the OSGi framework. If you write an OSGi application, you have to package your application tightly coupled with the OSGi framework (such as the Apache Felix framework or Eclipse Equinox). Most of the time, you have to prepare the scripts, configuration files, and so on in order to provide a complete, ready-to-use application. Apache Karaf allows you to focus only on your application. Karaf, by default, provides the packaging (including scripts and so on), and it also abstracts the OSGi framework. Thanks to Karaf, it's very easy to switch from Apache Felix (the default framework in Karaf) to Eclipse Equinox.

It provides support for the OSGi Blueprint and Spring frameworks. Apache Karaf allows you to directly use Blueprint or Spring as the dependency framework in your bundles. In the new version of Karaf (starting from Karaf 3.0.1), it also supports new dependency frameworks (such as DS, CDI, and so on).

Apache Karaf provides a complete, Unix-like shell console where you have a lot of commands available to manage and monitor your running container.
This shell console works on any system supporting Java and provides a complete Unix-like environment, including completion, contextual help, key bindings, and more. You can access the shell console using SSH. Apache Karaf also provides a complete management layer (using JMX) that is remotely accessible, which means you can perform the same actions as with the shell commands using several MBeans.

In addition to the default root Apache Karaf container, for convenience, Apache Karaf allows you to manage multiple container instances. Apache Karaf provides dedicated commands and MBeans to create the instances, control the instances, and so on.

Logging is a key layer for any kind of software container. Apache Karaf provides a powerful and very dynamic logging system powered by Pax Logging. In your OSGi application, you are not coupled to a specific logging framework; you can use the framework of your choice (slf4j, log4j, logback, commons-logging, and so on). Apache Karaf uses a central configuration file irrespective of the logging frameworks in use. All changes in this configuration file are made on the fly; there is no need to restart anything. Again, Apache Karaf provides commands and MBeans dedicated to log management (changing the log level, direct display of the log in the shell console, and so on).

Hot deployment is also an interesting feature provided by Apache Karaf. By default, the container periodically monitors a deploy folder. When a new file is dropped in the deploy folder, Apache Karaf checks the file type and delegates the deployment logic for this file to a deployer. Apache Karaf provides different deployers by default (spring, blueprint, features, war, and so on).

While Java Authentication and Authorization Service (JAAS) is the Java implementation of Pluggable Authentication Modules (PAM), it's not very OSGi compliant by default. Apache Karaf leverages JAAS, exposing realms and login modules as OSGi services. Again, Apache Karaf provides dedicated JAAS shell commands and MBeans. The security framework is very flexible, allowing you to define the chain of login modules that you want for authentication. By default, Apache Karaf uses a PropertiesLoginModule using the etc/users.properties file for storage. The security framework also provides support for password encryption (you just have to enable encryption in the etc/org.apache.karaf.jaas.cfg configuration file). The new Apache Karaf version (3.0.0) also provides a complete Role Based Access Control (RBAC) system, allowing you to configure the users who can run commands, call MBeans, and so on.

Apache Karaf is an enterprise-ready container and provides features dedicated to the enterprise. The following enterprise features are not installed by default (to minimize the size and footprint of the container), but a simple command allows you to extend the container with enterprise functionalities:

WebContainer allows you to deploy a Web Application Bundle (WAB) or WAR file. Apache Karaf is a complete HTTP server with JSP/servlet support, thanks to Pax Web.

Java Naming and Directory Interface (JNDI) adds naming context support in Apache Karaf. You can bind an OSGi service to a JNDI name and look up these services using the name, thanks to Aries and Xbean naming.

Java Transaction API (JTA) allows you to add a transaction engine (exposed as an OSGi service) in Apache Karaf, thanks to Aries JTA.

Java Persistence API (JPA) allows you to add a persistence adapter (exposed as an OSGi service) in Apache Karaf, thanks to Aries JPA.
Ready-to-use persistence engines can also be installed very easily (especially Apache OpenJPA and Hibernate).

Java Database Connectivity (JDBC) and Java Message Service (JMS) are convenient features, allowing you to easily create JDBC DataSources or JMS ConnectionFactories and use them directly in the shell console.

While you can completely administer Apache Karaf using the shell commands and the JMX MBeans, you can also install the Web Console. This Web Console uses the Felix Web Console and allows you to manage Karaf with a simple browser. Thanks to these features, Apache Karaf is a complete, rich, and enterprise-ready container. We can consider Apache Karaf an OSGi application server.

Provisioning in Apache Karaf

In addition, Apache Karaf provides three core functionalities that can be used both internally in Apache Karaf and by external applications deployed in the container:

- OSGi bundle management
- Configuration management
- Provisioning using Karaf Features

As we learned earlier, the default artifact in OSGi is the bundle. Again, it's a regular JAR file with additional OSGi metadata in the MANIFEST file. The bundles are directly managed by the OSGi framework, but for convenience, Apache Karaf wraps the usage of bundles in specific commands and MBeans. A bundle has a specific life cycle. In particular, when you install a bundle, the OSGi framework tries to resolve all the dependencies required by your bundle to promote it to the resolved state. The following is the life cycle of a bundle:

The OSGi framework checks whether other bundles provide the packages imported by your bundle. The equivalent action for the OSGi services is performed when you start your bundle. It means that a bundle may require a lot of other bundles to start, and so on for the transitive bundles. Moreover, a bundle may require configuration to work. Apache Karaf proposes a very convenient way to manage the configurations. The etc folder is periodically monitored to discover new configuration files and load the corresponding configurations. On the other hand, you have dedicated shell commands and MBeans to manage configurations (and configuration files). If a bundle requires a configuration to work, you first have to create a configuration file in the etc folder (with the expected filename) or use the config:* shell commands or ConfigMBean to create the configuration.

Considering that an OSGi application is a set of bundles, the installation of an OSGi application can be long and painful by hand. The deployment of an OSGi application is called provisioning, as it gathers the following:

- The installation of a set of bundles, including transitive bundles
- The installation of a set of configurations required by these bundles

OBR

OSGi Bundle Repository (OBR) can be the first option to consider in order to solve this problem. Apache Karaf can connect to an OBR server. The OBR server stores all the metadata for all the bundles, which includes the capabilities, packages, and services provided by a bundle and the requirements, packages, and services needed by a bundle. When you install a bundle via OBR, the OBR server checks the requirements of the installed bundle and finds the bundles that provide the capabilities matching the requirements. The OBR server can automatically install the bundles required for the first one.


Part 1: Deploying Multiple Applications with Capistrano from a Single Project

Rodrigo Rosenfeld
01 Jul 2014
9 min read
Capistrano is a deployment tool written in Ruby that is able to deploy projects using any language or framework, through a set of recipes, which are also written in Ruby. Capistrano expects an application to have a single repository, and it is able to run arbitrary commands on the server through a non-interactive SSH session.

Capistrano was designed assuming that an application is completely described by a single repository with all code belonging to it. For example, your web application is written with Ruby on Rails and simply serving that application would be enough. But what if you decide to use a separate application for managing your users, in a separate language and framework? Or maybe some issue tracker application? You could set up a proxy server to properly deliver each request to the right application based upon the request path, for example. But the problem remains: how do you use Capistrano to manage more complex scenarios like this if it supports a single repository?

The typical approach is to integrate Capistrano into each of the component applications and then switch between those projects before deploying those components. Not only is this a lot of work to deploy all of these components, but it may also lead to a duplication of settings. For example, if your main application and the user management application both use the same database for a given environment, you'd have to duplicate this setting in each of the components.

For the Market Tracker product, used by LexisNexis clients (which we develop at e-Core for Matterhorn Transactions Inc.), we were looking for a better way to manage many component applications, in lots of environments and servers. We wanted to manage all of them from a single repository, instead of adding Capistrano integration to each of our components' repositories and having to worry about keeping the recipes in sync between each of the maintained repository branches.

Motivation

The Market Tracker application we maintain consists of three different applications: the main one, another to export search results to Excel files, and an administrative interface to manage users and other entities. We host the application on three servers: two for the real thing and another backup server. The first two are identical and allow us to have redundancy and zero-downtime deployments, except for a few cases where we change our database schema in ways that are incompatible with previous versions. To add to the complexity of deploying our three component applications to each of those servers, we also need to deploy them multiple times for different environments like production, certification, staging, and experimental. All of them run on the same server, on separate ports, and they are running separate databases, Solr, and Redis instances.

This is already complex enough to manage when you integrate Capistrano into each of your projects, but it gets worse. Sometimes you find bugs in production and have to release quick fixes, but you can't deploy the version in the master branch that has several other changes. At other times you find bugs in your Capistrano recipes themselves and fix them on the master. Or maybe you are changing your deploy settings rather than the application's code. When you have to deploy to production, depending on how your Capistrano recipes work, you may have to change to the production branch, backport any changes for the Capistrano recipes from the master, and finally deploy the latest fixes.
This happens if your recipe uses any project files as a template and they moved to another place in the master branch, for example.

We decided to try another approach, similar to what we do with our database migrations. Instead of integrating the database migrations into the main application (the default on Rails, Django, Grails, and similar web frameworks), we prefer to handle them as a separate project. In our case we use the active_record_migrations gem, which brings standalone support for ActiveRecord migrations (the same ones bundled with Rails apps by default). Our database is shared between the administrative interface project and the main web application, and we feel it's better to be able to manage our database schema independently from the projects using the database. We add the migrations project to the other applications as a submodule so that we know what database schema is expected to work for a particular commit of the application, but that's all.

We wanted to apply the same principles to our Capistrano recipes. We wanted to manage all of our applications on different servers and environments from a single project containing the Capistrano recipes. We also wanted to store the common settings in a single place to avoid code duplication, which makes it hard to add new environments or update existing ones.

Grouping all applications' Capistrano recipes in a single project

It seems we were not the first to want all Capistrano recipes for all of our applications in a single project. We first tried a project called caphub. It worked fine initially, and its inheritance model would allow us to avoid our code duplication. Well, not entirely. The problem is that we needed some kind of multiple inheritance or mixins. We have some settings, like a token private key, that are unique across environments, like Certification and Production. But we also have other settings that are common within a server. For example, the database host name will be the same for all applications and environments inside our colocation facility, but it will be different in our backup server at Amazon EC2. CapHub didn't help us get rid of the duplication in such cases, but it certainly helped us find a simple solution to get what we wanted. Let's explore how Capistrano 3 allows us to easily manage such complex scenarios, which are more common than you might think.

Capistrano stages

Since Capistrano 3, multistage support is built in (there was a multistage extension for Capistrano 2). That means you can write cap stage_name task_name, for example cap production deploy. By default, cap install will generate two stages: production and staging. You can generate as many as you want, for example:

cap install STAGES=production,cert,staging,experimental,integrator

But how do we deploy each of those stages to our multiple servers, since the settings for each stage may be different across the servers? Also, how can we manage separate applications? Even though those settings are called "stages" by Capistrano, you can use them as you want. For example, suppose our servers are named m1, m2, and ec2 and the applications are named web, exporter, and admin. We can create settings like m1_staging_web, ec2_production_admin, and so on. This will result in lots of files (specifically 45 = 5 x 3 x 3 to support five environments, three applications, and three servers), but it's not a big deal if you consider that the settings files can be really small, as the examples will demonstrate later on in this article by using mixins.
Usually people will start with staging and production only, and then gradually add other environments. Also, they usually start with one or two servers and keep growing as they feel the need. So supporting 45 combinations is not such a pain, since you don't write all of them at once. On the other hand, if you have enough resources to have a separate server for each of your environments, Capistrano will allow you to add multiple "server" declarations and assign roles to them, which can be quite useful if you're running a cluster of servers. In our case, to avoid downtime we don't upgrade all servers in our cluster at once. We also don't have the budget to host 45 virtual machines or even 15. So the little effort of generating 45 small settings files is well compensated by the savings in hosting expenses.

Using mixins

My next post will create an example deployment project from scratch, providing detail for everything that has been discussed in this post. But first, let me introduce the concept of what we call a mixin in our project. Capistrano 3 is simply a wrapper on top of Rake. Rake is a build tool written in Ruby, similar to "make". It has targets, and targets have prerequisites. This fits nicely with the way Capistrano works, where some deployment tasks will depend on other tasks. Instead of a Rakefile (Rake's Makefile), Capistrano will use a Capfile, but other than that it works almost the same way. The Domain Specific Language (DSL) in a Capfile is enhanced as you include Capistrano extensions to the Rake DSL. Here's a sample Capfile, generated by cap install, when you install Capistrano:

# Load DSL and Setup Up Stages
require 'capistrano/setup'

# Includes default deployment tasks
require 'capistrano/deploy'

# Includes tasks from other gems included in your Gemfile
#
# For documentation on these, see for example:
#
# https://github.com/capistrano/rvm
# https://github.com/capistrano/rbenv
# https://github.com/capistrano/chruby
# https://github.com/capistrano/bundler
# https://github.com/capistrano/rails
#
# require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
# require 'capistrano/bundler'
# require 'capistrano/rails/assets'
# require 'capistrano/rails/migrations'

# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }

Just like a Rakefile, a Capfile is valid Ruby code, which you can easily extend using regular Ruby code. So, to support a mixin DSL, we simply need to extend the DSL, like this:

def mixin(path)
  load File.join('config', 'mixins', path + '.rb')
end

Pretty simple, right? We prefer to add this to a separate file, like lib/mixin.rb, and add this to the Capfile:

$:.unshift File.dirname(__FILE__)
require 'lib/mixin'

After that, calling mixin 'environments/staging' should load settings that are common for the staging environment from a file called config/mixins/environments/staging.rb in the root of the Capistrano-enabled project. This is the base to set up our deployment project that we will create in the next post.
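As a purely illustrative sketch (the values below are hypothetical, not settings from the real project described here), such a staging mixin could just be a handful of set calls shared by every server and application in that environment:

# config/mixins/environments/staging.rb -- hypothetical example values
set :branch, 'staging'
set :rails_env, 'staging'
set :database_name, 'app_staging'
set :app_port, 4000

Because the file contains nothing but settings, adding a new environment later is mostly a matter of dropping in another small file like this one.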
About the author

Rodrigo Rosenfeld Rosas lives in Vitória-ES, Brazil, with his lovely wife and daughter. He graduated in Electrical Engineering with a Master's degree in Robotics and Real-time Systems. For the past five years Rodrigo has focused on building and maintaining single page web applications. He is the author of some gems including active_record_migrations, rails-web-console, the JS specs runner oojspec, sequel-devise, and the Linux X11 utility ktrayshortcut. Rodrigo was hired by e-Core (Porto Alegre - RS, Brazil) to work from home, building and maintaining software for Matterhorn Transactions Inc. with a team of great developers. Matterhorn's main product, the Market Tracker, is used by LexisNexis clients.

Part 2: Deploying Multiple Applications with Capistrano from a Single Project

Rodrigo Rosenfeld
01 Jul 2014
8 min read
In part 1, we covered Capistrano and why you would use it. We also covered mixins, which provide the base for what we will do in this post, which is to deploy a sample project using Capistrano. For this project, suppose our user interface is a combination of two applications, app1 and app2. They should be deployed to the servers do and ec2, and we'll provide two environments, production and cert. Make sure Ruby and Bundler are installed before you start. First, we create a new directory for our project, and add a Gemfile to it with capistrano as a dependency. Then we will create the Capistrano directory structure:

mkdir capsample
cd capsample
bundle init
echo "gem 'capistrano'" >> Gemfile
bundle
bundle exec cap install STAGES="do_prod_app1,do_prod_app2,do_cert_app1,do_cert_app2,ec2_prod_app1,ec2_prod_app2,ec2_cert_app1,ec2_cert_app2"

This will create eight files under config/deploy, one for each server/environment/application group. This is just to demonstrate the idea; we'll completely override their content later on. It will also create a Capfile that works in a similar way to a regular Rakefile. With Rake, you can get a list of the available tasks with rake -T. With Capistrano you can get the same using:

bundle exec cap -T

Behind the scenes, cap is a binary distributed with the capistrano gem that will run Rake with the Capfile set as the Rakefile, supporting a few other options like --roles. Now create a new file, lib/mixin.rb, with the content mentioned in the Using mixins section in part 1. Then add this to the top of the Capfile:

$:.unshift File.dirname(__FILE__)
require 'lib/mixin'

Each of the files under config/deploy will look very similar to the others. For instance, ec2_prod_app1 would look like this:

mixin 'servers/ec2'
mixin 'environments/production'
mixin 'applications/app1'

Then config/mixins/servers/ec2.rb would look like this:

server 'ec2.mydomain.com', roles: [:main]
set :database_host, 'ec2-db.mydomain.com'

This file contains definitions that are valid (or default) for the whole server, no matter what environment or application we're deploying. In this example the database host is shared by all applications and environments hosted on our ec2 server. Something to note here is that we're adding a single role named main to our server. If we specified all roles, like [:web, :db, :assets, :puma], then they would be shared with all recipes relying on this server mixin. So, a better approach is to add them in the application's recipe, if required. For instance, you might want to add something like set :server_name, 'ec2.mydomain.com' to your server definitions. Then you can dynamically set the role in the application's recipe by calling role :db, [fetch(:server_name)], and so on for all required roles. However, this is usually not necessary for third-party recipes, as they let you decide which role the recipe should act on. For example, if you want to deploy your application with Puma you can write set :puma_role, :main.

Before we discuss a full example for the application recipe, let's look at what config/mixins/environments/production.rb might look like:

set :branch, 'production'
set :encoding_key, '098f6bcd4621d373cade4e832627b4f6'
set :database_name, 'app_production'
set :app1_port, 3000
set :app2_port, 3001
set :redis_port, 6379
set :solr_port, 8080

In this example, we're assuming that the ports for app1 and app2, Redis, and Solr will be the same for production on all servers, as well as the database name.
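The cert environment would get a mixin of its own following exactly the same pattern; the values below are only illustrative placeholders, not settings from a real project:

# config/mixins/environments/cert.rb -- hypothetical example values
set :branch, 'cert'
set :database_name, 'app_cert'
set :app1_port, 3100
set :app2_port, 3101
set :redis_port, 6380
set :solr_port, 8081

Because each environment mixin is this small, adding a new environment later is mostly a matter of creating one more file like this.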
Finally, the recipes themselves, which tell Capistrano how to set up an application, will be defined by config/mixins/applications/app1.rb. Here's an example for a simple Rails application:

Rake::Task['load:defaults'].invoke
Rake::Task['load:defaults'].clear

require 'capistrano/rails'
require 'capistrano/puma'

Rake::Task['load:defaults'].reenable
Rake::Task['load:defaults'].invoke

set :application, 'app1'
set :repo_url, 'git@example.com:me/app1.git'
set :rails_env, 'production'
set :assets_roles, :main
set :migration_role, :main
set :puma_role, :main
set :puma_bind, "tcp://0.0.0.0:#{fetch :app1_port}"

namespace :rails do
  desc 'Generate settings file'
  task :generate_settings do
    on roles(:all) do
      template = "config/templates/database.yml.erb"
      dbconfig = StringIO.new(ERB.new(File.read template).result binding)
      upload! dbconfig, release_path.join('config', 'database.yml')
    end
  end
end

before 'deploy:migrate', 'rails:generate_settings'

# Create directories expected by Puma default settings:
before 'puma:restart', 'create_log_and_tmp' do
  on roles(:all) do
    within shared_path do
      execute :mkdir, '-p', 'log', 'tmp/pids'
    end
  end
end

Make sure you remove the lines that set application and repo_url in the config/deploy.rb file generated by cap install. Also, if you're deploying a Rails application using this recipe, you should add the capistrano-rails and capistrano3-puma gems to your Gemfile and run bundle again. In case you're using rbenv or rvm to install Ruby on the server, make sure you include either the capistrano-rbenv or capistrano-rvm gem and require it in the recipe. You may also need to provide more information in this case. For rbenv, you'd need to tell it which version to use with set :rbenv_ruby, '2.1.2'.

Sometimes you'll find out that some settings are valid for all applications under all environments on all servers. The most important one to notice is the location for our applications, as they must not conflict with each other. Another setting that could be shared across all combinations is the private key used to connect to all servers. For such cases, you should add those settings directly to config/deploy.rb:

set :deploy_to, -> { "/home/vagrant/apps/#{fetch :environment}/#{fetch :application}" }
set :ssh_options, { keys: %w(~/.vagrant.d/insecure_private_key) }

I strongly recommend connecting to your servers with a regular account rather than root. For our applications we use rbenv to manage our Ruby versions, so we're able to deploy them as regular users as long as our applications listen on high port numbers. We then set up our proxy server (nginx in our case) to forward the requests on ports 80 and 443 to each application's port according to the requested domains and paths. This is set up by some Chef recipes. Those recipes run as root on our servers. To connect using another user, just pass it in the server declaration. To connect to vagrant@192.168.33.10, this is how you'd set it up:

server '192.168.33.10', user: 'vagrant', roles: [:main]
set :ssh_options, { keys: %w(~/.vagrant.d/insecure_private_key) }

Finally, we create a config/database.yml that's suited for our environment on demand, before running the migrations task.
Here's what the template config/templates/database.ymlcould look like: production: adapter: postgresql encoding: unicode pool: 30 database: <%= fetch :database_name %> host: <%= fetch :database_host %> I've omitted the settings for app2 , but in case it was another Rails application, we could extract the common logic between them to another common_rails mixin. Also notice that because we're not requiring capistrano/rails and capistrano/puma in the Capfile, their default values won't be set as Capistrano has already invoked the load:defaults task before our mixins are loaded. That's why we clear that task, require the recipes, and then re-enable and re-run the task so that the default for those recipes have the opportunity to load. Another approach is to require those recipes directly in the Capfile. But unless the recipes are carefully crafted to only run their commands for very specific roles, it's likely that you can get unexpected behavior if you deploy an application with Rails, another one with Grails, and yet another with NodeJS. If any of them has commands that run for all roles, or if the role names between them conflict somehow you'd be in trouble. So, unless you have total control and understanding about all your third-party recipes, I'd recommend that you use the approach outlined in the examples above. Conclusion All the techniques presented here are used to manage our real complex scenario at e-Core, where we support multiple applications in lots of environments that are replicated in three servers. We found that this allowed us to quickly add new environments or servers as needed to recreate our application in no time. Also, I'd like to thank Juan Ibiapina, who worked with me on all these recipes to ensure our deployment procedures are fully automated—almost. We still manage our databases and documents manually because we prefer to. About the author Rodrigo Rosenfeld Rosas lives in Vitória-ES, Brazil, with his lovely wife and daughter. He graduated in Electrical Engineering with a Master’s degree in Robotics and Real-time Systems. For the past five years Rodrigo has focused on building and maintaining single page web applications. He is the author of some gems includingactive_record_migrations,rails-web-console, the JS specs runner oojspec, sequel-devise, and the Linux X11 utility ktrayshortcut. Rodrigo was hired by e-Core (Porto Alegre-RS, Brazil) to work from home, building and maintaining software for Matterhorn Transactions Inc. with a team of great developers. Matterhorn's main product, the Market Tracker, is used by LexisNexis clients .

Part 1: Managing Multiple Apps and Environments with Capistrano 3 and Chef Solo

Rodrigo Rosenfeld
30 Jun 2014
8 min read
In my previous two posts, I explored how to use Capistrano to deploy multiple applications in different environments and servers. This, however, is only one part of our deployment procedures. It takes care of the applications themselves, but we still rely on the server being properly set up so that our Capistrano recipes work. In these two posts I'll explain how to use Chef to manage servers, and how to integrate it with Capistrano so that all of your deployment procedures can be performed from a single project.

Introducing the sample deployment project

After I wrote the previous two posts, I realized I was not fully happy with a few aspects of our company's deployment strategy:

Duplicate settings: This was the main issue puzzling me. I didn't like the fact that we had to duplicate some settings, like an application's binding port, in both the Chef and Capistrano projects.

Too many required files (45 to support 3 servers, 5 environments, and 3 applications): While the files were really small, I felt this could be improved by adopting some conventions.

So, I decided to work on a proof-of-concept project that would integrate both Chef and Capistrano and fix these issues. After a weekend working (almost) full time on it, I came up with a sample project that you can fork and adapt to your deployment scenario. The main goal of this project hasn't changed from my previous article: we want to be able to support new environments and servers very quickly by simply adding some settings to the project.

Go ahead and clone it. Follow the instructions in the README and it should deploy the Rails Devise sample application into a VirtualBox virtual machine (VM) using Vagrant. The following sections explain how it works and the reasons behind its design.

The overall idea

While it's possible to accomplish all of your deployment tasks with either Chef or Capistrano alone, I feel that they are better suited to different tasks. There are many existing recipes that you can take advantage of for both projects, but they usually don't overlap much. There are Chef community cookbooks available to help you install nginx, apache2, java, databases, and much more. You probably want to use Chef to perform administrative tasks like managing services, backing up the server, installing software, and so on.

Capistrano, on the other hand, helps you deploy the applications themselves once the server is ready to go, after your Chef recipes have run. This includes creating releases of your application, which allows you to easily roll back to a previous working version, for example. You'll find existing Capistrano recipes to help you with several application-related tasks, like running Bundler, switching between Ruby versions with rbenv, rvm, or chruby, running Rails migrations and assets precompilation, and so on.

Capistrano recipes are well integrated with the Capistrano deploy flow. For instance, the capistrano-puma recipe will automatically generate a settings file if it is missing and start Puma after the remaining deployment tasks have finished, by including this in its recipes:

after 'deploy:check', 'puma:check'
after 'deploy:finished', 'puma:smart_restart'

Another difference between sysadmin and deployment tasks is that the former usually requires superuser privileges, while the latter is best performed by a regular user.
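As an aside, the same before/after hooks are how you would attach your own tasks to this flow. Here is a minimal, illustrative sketch; the task, the :app_port setting, and the command it runs are my assumptions and not part of the sample project:

# Hypothetical task hooked into the deploy flow (everything here is illustrative).
namespace :app do
  desc 'Warm up the application after Puma restarts'
  task :warm_up do
    on roles(:main) do
      # Hit the application's local port once so the first real request is fast.
      # :app_port is an assumed setting; 3000 is used when it is not defined.
      execute :curl, '-s', '-o', '/dev/null', "http://localhost:#{fetch(:app_port, 3000)}/"
    end
  end
end

after 'puma:smart_restart', 'app:warm_up'

Because the task is restricted to the :main role, it won't run commands on servers that don't host the application.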
Running deployments as a regular user also means you can feel safer when running Capistrano recipes, since you know they won't affect the server itself, except for the applications managed by that user account. And deploying an application is far more common than installing and configuring programs or changing the proxy's settings.

Some of the settings required by the Chef and Capistrano recipes overlap. One example is a Chef recipe that generates an nginx settings file to proxy requests to a Rails application listening on a local port. In this scenario, the binding address used by the Capistrano puma recipe needs to match the port declared in the proxy settings of the nginx configuration file.

Managing deployment settings

Capistrano and Chef provide different built-in ways of managing their settings. Capistrano uses a Domain Specific Language (DSL) with set/fetch, while Chef reads attributes following a well-described precedence. I strongly advise you to keep to those approaches for settings that are specific to each project. To remove the duplication caused by overlapping deployment settings, I introduced another configuration framework for the shared settings, using the configatron gem and taking advantage of the fact that both Chef and Capistrano are written in Ruby.

Take a look at the settings directory in the sample project:

settings/
├── applications
│   └── rails-devise.rb
├── common.rb
├── environments
│   ├── development.rb
│   └── production.rb
└── servers
    └── vagrant.rb

The settings are split into common settings and those specific to each application, environment, and server. As you would expect, the Rails Devise application deployed to the production environment on the vagrant server reads its settings from common.rb, servers/vagrant.rb, environments/production.rb, and applications/rails-devise.rb. If some of your settings apply to Rails Devise running on a given server or environment (or both), it's possible to override the specific settings in other files like rails-devise_production.rb, vagrant_production.rb, or vagrant_production_rails-devise.rb. Here's the definition of load_app_settings in common_helpers/settings_loader.rb:

def load_app_settings(app_name, app_server, app_env)
  cfg.app_name = app_name
  cfg.app_server = app_server
  cfg.app_env = app_env
  [
    'common',
    "servers/#{app_server}",
    "environments/#{app_env}",
    "applications/#{app_name}",
    "#{app_server}_#{app_env}",
    "#{app_server}_#{app_name}",
    "#{app_name}_#{app_env}",
    "#{app_server}_#{app_env}_#{app_name}",
  ].each{|s| load_settings s }
  cfg.lock!
end

Feel free to change the load order; settings loaded later take precedence over earlier ones. So if the binding port is usually 3000 in production but 4000 on your ec2 server, you can add cfg.my_app.binding_port = 3000 to environments/production.rb and override it in ec2_production.rb. Once the settings are loaded, they are locked and can't be changed by the deployment recipes.

As a final note, the settings can also be set using a hash notation, which can be useful when a setting's name is built dynamically. Here's an example: cfg[:my_app]["binding_#{'port'}"] = 3000. It isn't really useful in this particular case, but it illustrates the capability.

Calculated settings

Two types of calculated settings are supported in this project: delayed and dynamic. Delayed attributes are lazily evaluated the first time they are requested, while dynamic attributes are evaluated every time.
They are useful for providing default values for settings that could be overridden by other settings files. I prefer to use delayed attributes for settings that are meant to be overridden and dynamic attributes for those that are meant to be calculated, even though delayed attributes would be suitable for both cases. Here's the common.rb from the sample project to illustrate the idea:

require 'set'

cfg.chef_runlist = Set.new
cfg.deploy_user = 'deploy'
cfg.deployment_repo_url = 'git@github.com:rosenfeld/capistrano-chef-deployment.git'
cfg.deployment_repo_host = 'github.com'
cfg.deployment_repo_symlink = false
cfg.nginx.default = false

# Delayed attributes: they take the block values unless explicitly set to another value
cfg.database_name = delayed_attr{ "app1_#{cfg.app_env}" }
cfg.nginx.subdomain = delayed_attr{ cfg.app_env }

# Dynamic/calculated attributes: these are always evaluated by the block
# and are not meant to be overridable
cfg.nginx.host = dyn_attr{ "#{cfg.nginx.subdomain}.mydomain.com" }

cfg.nginx.host, in this instance, is not meant to be overridden by any other settings file, since it follows the company's policy. But it would be okay to override the production database name to app1 instead of using the default app1_production. This is just a guideline, but it should give you a good idea of some of the ways Chef and Capistrano can be used together.

Conclusion

I hope you found this post as useful as I did. Being able to fully deploy the whole application stack from a single repository saves us a lot of time and simplifies our deployment considerably. In the next post, Part 2, I will walk you through that deployment.

About The Author

Rodrigo Rosenfeld Rosas lives in Vitória-ES, Brazil, with his lovely wife and daughter. He graduated in Electrical Engineering with a Master's degree in Robotics and Real-time Systems. For the past five years Rodrigo has focused on building and maintaining single page web applications. He is the author of several gems, including active_record_migrations, rails-web-console, the JS specs runner oojspec, sequel-devise, and the Linux X11 utility ktrayshortcut. Rodrigo was hired by e-Core (Porto Alegre-RS, Brazil) to work from home, building and maintaining software for Matterhorn Transactions Inc. with a team of great developers. Matterhorn's main product, the Market Tracker, is used by LexisNexis clients.