
How-To Tutorials - Application Development


Java Hibernate Collections, Associations, and Advanced Concepts

Packt
15 Sep 2015
16 min read
In this article, Yogesh Prajapati and Vishal Ranapariya, the authors of the book Java Hibernate Cookbook, provide a complete guide to the following recipes:

Working with a first-level cache
One-to-one mapping using a common join table
Persisting Map

(For more resources related to this topic, see here.)

Working with a first-level cache

Whenever we execute a query using hibernate, it hits the database. As this process may be very expensive, hibernate provides the facility to cache objects within a certain boundary. The basic actions performed in each database transaction are as follows:

The request reaches the database server via the network.
The database server processes the query and prepares a query plan.
The database server executes the processed query.
The database server returns the result to the querying application through the network.
Finally, the application processes the results.

This process is repeated every time we request a database operation, even for a simple or small query, and it is always costly to hit the database for the same records multiple times. Sometimes, we also face delays in receiving the results because of network routing issues. Other parameters may contribute to the delay, but network routing issues play a major role in this cycle. To overcome this, databases use a mechanism that stores the result of a repeatedly executed query and reuses it when the same data is requested using the same query; these operations are done on the database side.

Hibernate provides an in-built caching mechanism known as the first-level cache (L1 cache). Following are some properties of the first-level cache:

It is enabled by default, and we cannot disable it even if we want to.
The scope of the first-level cache is limited to a particular Session object only; other Session objects cannot access it.
All cached objects are destroyed once the session is closed.
If we request an object, hibernate returns it from the cache only if the requested object is found there; otherwise, a database call is initiated.
We can use Session.evict(Object object) to remove a single object from the session cache.
The Session.clear() method is used to clear all the cached objects from the session.

Getting ready

Let's take a look at how the L1 cache works.

Creating the classes

For this recipe, we will create an Employee class and also insert some records into the table:

Source file: Employee.java

@Entity
@Table
public class Employee {

    @Id
    @GeneratedValue
    private long id;

    @Column(name = "name")
    private String name;

    // getters and setters

    @Override
    public String toString() {
        return "Employee: " + "\n\t Id: " + this.id + "\n\t Name: " + this.name;
    }
}

Creating the tables

Use the following table script if the hibernate.hbm2ddl.auto configuration property is not set to create.

Use the following script to create the employee table:

CREATE TABLE `employee` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
);

We will assume that two records are already inserted, as shown in the following employee table:

id    name
1     Yogesh
2     Aarush

Now, let's take a look at some scenarios that show how the first-level cache works.

How to do it…

Here is the code to see how caching works.
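The snippets that follow assume an already open hibernate Session stored in a session variable; the recipe does not show how it is obtained. A minimal bootstrap sketch (assuming a standard hibernate.cfg.xml on the classpath; the exact bootstrap API varies between hibernate versions) might look like this:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class SessionHelper {

    // Builds a SessionFactory from hibernate.cfg.xml and opens a Session.
    // This is only a sketch; in a real application, the SessionFactory should be
    // created once and reused, and the Session should be closed when finished.
    public static Session openSession() {
        Configuration configuration = new Configuration().configure();
        SessionFactory sessionFactory = configuration.buildSessionFactory();
        return sessionFactory.openSession();
    }
}

A session obtained in this way is the session variable referenced in the following snippets.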
In the code, we will load employee#1 and employee#2 once; after that, we will try to load the same employees again and see what happens: Code System.out.println("nLoading employee#1..."); /* Line 2 */ Employee employee1 = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1.toString()); System.out.println("nLoading employee#2..."); /* Line 6 */ Employee employee2 = (Employee) session.load(Employee.class, new Long(2)); System.out.println(employee2.toString()); System.out.println("nLoading employee#1 again..."); /* Line 10 */ Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1_dummy.toString()); System.out.println("nLoading employee#2 again..."); /* Line 15 */ Employee employee2_dummy = (Employee) session.load(Employee.class, new Long(2)); System.out.println(employee2_dummy.toString()); Output Loading employee#1... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Loading employee#2... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 2 Name: Aarush Loading employee#1 again... Employee: Id: 1 Name: Yogesh Loading employee#2 again... Employee: Id: 2 Name: Aarush How it works… Here, we loaded Employee#1 and Employee#2 as shown in Line 2 and 6 respectively and also the print output for both. It's clear from the output that hibernate will hit the database to load Employee#1 and Employee#2 because at startup, no object is cached in hibernate. Now, in Line 10, we tried to load Employee#1 again. At this time, hibernate did not hit the database but simply use the cached object because Employee#1 is already loaded and this object is still in the session. The same thing happened with Employee#2. Hibernate stores an object in the cache only if one of the following operations is completed: Save Update Get Load List There's more… In the previous section, we took a look at how caching works. Now, we will discuss some other methods used to remove a cached object from the session. There are two more methods that are used to remove a cached object: evict(Object object): This method removes a particular object from the session clear(): This method removes all the objects from the session evict (Object object) This method is used to remove a particular object from the session. It is very useful. The object is no longer available in the session once this method is invoked and the request for the object hits the database: Code System.out.println("nLoading employee#1..."); /* Line 2 */ Employee employee1 = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1.toString()); /* Line 5 */ session.evict(employee1); System.out.println("nEmployee#1 removed using evict(…)..."); System.out.println("nLoading employee#1 again..."); /* Line 9*/ Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1_dummy.toString()); Output Loading employee#1... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Employee#1 removed using evict(…)... Loading employee#1 again... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Here, we loaded an Employee#1, as shown in Line 2. 
This object was then cached in the session, but we explicitly removed it from the session cache in Line 5. So, the loading of Employee#1 will again hit the database. clear() This method is used to remove all the cached objects from the session cache. They will no longer be available in the session once this method is invoked and the request for the objects hits the database: Code System.out.println("nLoading employee#1..."); /* Line 2 */ Employee employee1 = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1.toString()); System.out.println("nLoading employee#2..."); /* Line 6 */ Employee employee2 = (Employee) session.load(Employee.class, new Long(2)); System.out.println(employee2.toString()); /* Line 9 */ session.clear(); System.out.println("nAll objects removed from session cache using clear()..."); System.out.println("nLoading employee#1 again..."); /* Line 13 */ Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1_dummy.toString()); System.out.println("nLoading employee#2 again..."); /* Line 17 */ Employee employee2_dummy = (Employee) session.load(Employee.class, new Long(2)); System.out.println(employee2_dummy.toString()); Output Loading employee#1... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Loading employee#2... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 2 Name: Aarush All objects removed from session cache using clear()... Loading employee#1 again... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Loading employee#2 again... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 2 Name: Aarush Here, Line 2 and 6 show how to load Employee#1 and Employee#2 respectively. Now, we removed all the objects from the session cache using the clear() method. As a result, the loading of both Employee#1 and Employee#2 will again result in a database hit, as shown in Line 13 and 17. One-to-one mapping using a common join table In this method, we will use a third table that contains the relationship between the employee and detail tables. In other words, the third table will hold a primary key value of both tables to represent a relationship between them. Getting ready Use the following script to create the tables and classes. 
Here, we use Employee and EmployeeDetail to show a one-to-one mapping using a common join table: Creating the tables Use the following script to create the tables if you are not using hbm2dll=create|update: Use the following script to create the detail table: CREATE TABLE `detail` ( `detail_id` bigint(20) NOT NULL AUTO_INCREMENT, `city` varchar(255) DEFAULT NULL, PRIMARY KEY (`detail_id`) ); Use the following script to create the employee table: CREATE TABLE `employee` ( `employee_id` BIGINT(20) NOT NULL AUTO_INCREMENT, `name` VARCHAR(255) DEFAULT NULL, PRIMARY KEY (`employee_id`) ); Use the following script to create the employee_detail table: CREATE TABLE `employee_detail` ( `detail_id` BIGINT(20) DEFAULT NULL, `employee_id` BIGINT(20) NOT NULL, PRIMARY KEY (`employee_id`), KEY `FK_DETAIL_ID` (`detail_id`), KEY `FK_EMPLOYEE_ID` (`employee_id`), CONSTRAINT `FK_EMPLOYEE_ID` FOREIGN KEY (`employee_id`) REFERENCES `employee` (`employee_id`), CONSTRAINT `FK_DETAIL_ID` FOREIGN KEY (`detail_id`) REFERENCES `detail` (`detail_id`) ); Creating the classes Use the following code to create the classes: Source file: Employee.java @Entity @Table(name = "employee") public class Employee { @Id @GeneratedValue @Column(name = "employee_id") private long id; @Column(name = "name") private String name; @OneToOne(cascade = CascadeType.ALL) @JoinTable( name="employee_detail" , joinColumns=@JoinColumn(name="employee_id") , inverseJoinColumns=@JoinColumn(name="detail_id") ) private Detail employeeDetail; public long getId() { return id; } public void setId(long id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public Detail getEmployeeDetail() { return employeeDetail; } public void setEmployeeDetail(Detail employeeDetail) { this.employeeDetail = employeeDetail; } @Override public String toString() { return "Employee" +"n Id: " + this.id +"n Name: " + this.name +"n Employee Detail " + "nt Id: " + this.employeeDetail.getId() + "nt City: " + this.employeeDetail.getCity(); } } Source file: Detail.java @Entity @Table(name = "detail") public class Detail { @Id @GeneratedValue @Column(name = "detail_id") private long id; @Column(name = "city") private String city; @OneToOne(cascade = CascadeType.ALL) @JoinTable( name="employee_detail" , joinColumns=@JoinColumn(name="detail_id") , inverseJoinColumns=@JoinColumn(name="employee_id") ) private Employee employee; public Employee getEmployee() { return employee; } public void setEmployee(Employee employee) { this.employee = employee; } public String getCity() { return city; } public void setCity(String city) { this.city = city; } public long getId() { return id; } public void setId(long id) { this.id = id; } @Override public String toString() { return "Employee Detail" +"n Id: " + this.id +"n City: " + this.city +"n Employee " + "nt Id: " + this.employee.getId() + "nt Name: " + this.employee.getName(); } } How to do it… In this section, we will take a look at how to insert a record step by step. Inserting a record Using the following code, we will insert an Employee record with a Detail object: Code Detail detail = new Detail(); detail.setCity("AHM"); Employee employee = new Employee(); employee.setName("vishal"); employee.setEmployeeDetail(detail); Transaction transaction = session.getTransaction(); transaction.begin(); session.save(employee); transaction.commit(); Output Hibernate: insert into detail (city) values (?) Hibernate: insert into employee (name) values (?) 
Hibernate: insert into employee_detail (detail_id, employee_id) values (?,?)

Hibernate saves one record in the detail table and one in the employee table and then inserts a record into the third table, employee_detail, using the primary key column values of the detail and employee tables.

How it works…

From the output, it's clear how this method works. The code is the same as in the other methods of configuring a one-to-one relationship, but here, hibernate reacts differently. The first two statements of the output insert the records into the detail and employee tables respectively, and the third statement inserts the mapping record into the third table, employee_detail, using the primary key column values of both tables. Let's take a look at the options used in the previous code in detail:

@JoinTable: This annotation, written on the Employee class, contains the name="employee_detail" attribute and shows that a new intermediate table is created with the name "employee_detail".
joinColumns=@JoinColumn(name="employee_id"): This shows that a reference column is created in employee_detail with the name "employee_id", which is the primary key of the employee table.
inverseJoinColumns=@JoinColumn(name="detail_id"): This shows that a reference column is created in the employee_detail table with the name "detail_id", which is the primary key of the detail table.

Ultimately, the third table, employee_detail, is created with two columns: one is "employee_id" and the other is "detail_id".

Persisting Map

Map is used when we want to persist a collection of key/value pairs where the key is always unique. Some common implementations of java.util.Map are java.util.HashMap, java.util.LinkedHashMap, and so on. For this recipe, we will use java.util.HashMap.

Getting ready

Now, let's assume that we have a scenario where we are going to implement Map<String, String>; here, the String key is the e-mail address label, and the String value is the e-mail address. For example, we will try to construct a data structure similar to <"Personal e-mail", "emailaddress2@provider2.com">, <"Business e-mail", "emailaddress1@provider1.com">. This means that we will create an alias for the actual e-mail address so that we can easily get the e-mail address using the alias and can document it in a more readable form. This type of implementation depends on the custom requirement; here, we can easily get a business e-mail using the Business email key. Use the following code to create the required tables and classes.

Creating tables

Use the following script to create the tables if you are not using hibernate.hbm2ddl.auto=create|update.
This script is for the tables that are generated by hibernate: Use the following code to create the email table: CREATE TABLE `email` ( `Employee_id` BIGINT(20) NOT NULL, `emails` VARCHAR(255) DEFAULT NULL, `emails_KEY` VARCHAR(255) NOT NULL DEFAULT '', PRIMARY KEY (`Employee_id`,`emails_KEY`), KEY `FK5C24B9C38F47B40` (`Employee_id`), CONSTRAINT `FK5C24B9C38F47B40` FOREIGN KEY (`Employee_id`) REFERENCES `employee` (`id`) ); Use the following code to create the employee table: CREATE TABLE `employee` ( `id` BIGINT(20) NOT NULL AUTO_INCREMENT, `name` VARCHAR(255) DEFAULT NULL, PRIMARY KEY (`id`) ); Creating a class Source file: Employee.java @Entity @Table(name = "employee") public class Employee { @Id @GeneratedValue @Column(name = "id") private long id; @Column(name = "name") private String name; @ElementCollection @CollectionTable(name = "email") private Map<String, String> emails; public long getId() { return id; } public void setId(long id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public Map<String, String> getEmails() { return emails; } public void setEmails(Map<String, String> emails) { this.emails = emails; } @Override public String toString() { return "Employee" + "ntId: " + this.id + "ntName: " + this.name + "ntEmails: " + this.emails; } } How to do it… Here, we will consider how to work with Map and its manipulation operations, such as inserting, retrieving, deleting, and updating. Inserting a record Here, we will create one employee record with two e-mail addresses: Code Employee employee = new Employee(); employee.setName("yogesh"); Map<String, String> emails = new HashMap<String, String>(); emails.put("Business email", "emailaddress1@provider1.com"); emails.put("Personal email", "emailaddress2@provider2.com"); employee.setEmails(emails); session.getTransaction().begin(); session.save(employee); session.getTransaction().commit(); Output Hibernate: insert into employee (name) values (?) Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?,?,?) Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?,?,?) When the code is executed, it inserts one record into the employee table and two records into the email table and also sets a primary key value for the employee record in each record of the email table as a reference. Retrieving a record Here, we know that our record is inserted with id 1. So, we will try to get only that record and understand how Map works in our case. Code Employee employee = (Employee) session.get(Employee.class, 1l); System.out.println(employee.toString()); System.out.println("Business email: " + employee.getEmails().get("Business email")); Output Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from employee employee0_ where employee0_.id=? Hibernate: select emails0_.Employee_id as Employee1_0_0_, emails0_.emails as emails0_, emails0_.emails_KEY as emails3_0_ from email emails0_ where emails0_.Employee_id=? Employee Id: 1 Name: yogesh Emails: {Personal email=emailaddress2@provider2.com, Business email=emailaddress1@provider1.com} Business email: emailaddress1@provider1.com Here, we can easily get a business e-mail address using the Business email key from the map of e-mail addresses. This is just a simple scenario created to demonstrate how to persist Map in hibernate. 
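Element collections such as the emails map are fetched lazily by default, so the map is only populated when it is first accessed while the session is still open; accessing it after the session has been closed will fail. A minimal sketch of reading the map inside an open session (reusing the session variable assumed by the previous snippets):

session.getTransaction().begin();
Employee employee = (Employee) session.get(Employee.class, 1l);
// Touch the collection while the session is open so hibernate can load it.
for (java.util.Map.Entry<String, String> email : employee.getEmails().entrySet()) {
    System.out.println(email.getKey() + " -> " + email.getValue());
}
session.getTransaction().commit();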
Updating a record Here, we will try to add one more e-mail address to Employee#1: Code Employee employee = (Employee) session.get(Employee.class, 1l); Map<String, String> emails = employee.getEmails(); emails.put("Personal email 1", "emailaddress3@provider3.com"); session.getTransaction().begin(); session.saveOrUpdate(employee); session.getTransaction().commit(); System.out.println(employee.toString()); Output Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from employee employee0_ where employee0_.id=? Hibernate: select emails0_.Employee_id as Employee1_0_0_, emails0_.emails as emails0_, emails0_.emails_KEY as emails3_0_ from email emails0_ where emails0_.Employee_id=? Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?, ?, ?) Employee Id: 2 Name: yogesh Emails: {Personal email 1= emailaddress3@provider3.com, Personal email=emailaddress2@provider2.com, Business email=emailaddress1@provider1.com} Here, we added a new e-mail address with the Personal email 1 key and the value is emailaddress3@provider3.com. Deleting a record Here again, we will try to delete the records of Employee#1 using the following code: Code Employee employee = new Employee(); employee.setId(1); session.getTransaction().begin(); session.delete(employee); session.getTransaction().commit(); Output Hibernate: delete from email where Employee_id=? Hibernate: delete from employee where id=? While deleting the object, hibernate will delete the child records (here, e-mail addresses) as well. How it works… Here again, we need to understand the table structures created by hibernate: Hibernate creates a composite primary key in the email table using two fields: employee_id and emails_KEY. Summary In this article you familiarized yourself with recipes such as working with a first-level cache, one-to-one mapping using a common join table, and persisting map. Resources for Article: Further resources on this subject: PostgreSQL in Action[article] OpenShift for Java Developers[article] Oracle 12c SQL and PL/SQL New Features [article]


The NetBeans Developer's Life Cycle

Packt
08 Sep 2015
30 min read
In this article by David Salter, the author of Mastering NetBeans, we'll cover the following topics: Running applications Debugging applications Profiling applications Testing applications On a day-to-day basis, developers spend much of their time writing and running applications. While writing applications, they typically debug, test, and profile them to ensure that they provide the best possible application to customers. Running, debugging, profiling, and testing are all integral parts of the development life cycle, and NetBeans provides excellent tooling to help us in all these areas. (For more resources related to this topic, see here.) Running applications Executing applications from within NetBeans is as simple as either pressing the F6 button on the keyboard or selecting the Run menu item or Project Context menu item. Choosing either of these options will launch your application without specifying any additional Java command-line parameters using the default platform JDK that NetBeans is currently using. Sometimes we want to change the options that are used for launching applications. NetBeans allows these options to be easily specified by a project's properties. Right-clicking on a project in the Projects window and selecting the Properties menu option opens the Project Properties dialog. Selecting the Run category allows the configuration options to be defined for launching an application. From this dialog, we can define and select multiple run configurations for the project via the Configuration dropdown. Selecting the New… button to the right of the Configuration dropdown allows us to enter a name for a new configuration. Once a new configuration is created, it is automatically selected as the active configuration. The Delete button can be used for removing any unwanted configurations. The preceding screenshot shows the Project Properties dialog for a standard Java project. Different project types (for example, web or mobile projects) have different options in the Project Properties window. As can be seen from the preceding Project Properties dialog, several pieces of information can be defined for a standard Java project, which together make up the launch configuration for a project: Runtime Platform: This option allows us to define which Java platform we will use when launching the application. From here, we can select from all the Java platforms that are configured within NetBeans. Selecting the Manage Platforms… button opens the Java Platform Manager dialog, allowing full configuration of the different Java platforms available (both Java Standard Edition and Remote Java Standard Edition). Selecting this button has the same effect as selecting the Tools and then Java Platforms menu options. Main Class: This option defines the main class that is used to launch the application. If the project has more than one main class, selecting the Browse… button will cause the Browse Main Classes dialog to be displayed, listing all the main classes defined in the project. Arguments: Different command-line arguments can be passed to the main class as defined in this option. Working Directory: This option allows the working directory for the application to be specified. VM Options: If different VM options (such as heap size) require setting, they can be specified by this option. Selecting the Customize button displays a dialog listing the different standard VM options available which can be selected (ticked) as required. Custom VM properties can also be defined in the dialog. 
For more information on the different VM properties for Java, check out http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html. From here, the VM properties for Java 7 (and earlier versions) and Java 8 for Windows, Solaris, Linux, and Mac OS X can be referenced. Run with Java Web Start: Selecting this option allows the application to be executed using Java Web Start technologies. This option is only available if Web Start is enabled in the Application | Web Start category. When running a web application, the project properties are different from those of a standalone Java application. In fact, the project properties for a Maven web application are different from those of a standard NetBeans web application. The following screenshot shows the properties for a Maven-based web application; as discussed previously, Maven is the standard project management tool for Java applications, and the recommended tool for creating and managing web applications: Debugging applications In the previous section, we saw how NetBeans provides the easy-to-use features to allow developers to launch their applications, but then it also provides more powerful additional features. The same is true for debugging applications. For simple debugging, NetBeans provides the standard facilities you would expect, such as stepping into or over methods, setting line breakpoints, and monitoring the values of variables. When debugging applications, NetBeans provides several different windows, enabling different types of information to be displayed and manipulated by the developer: Breakpoints Variables Call stack Loaded classes Sessions Threads Sources Debugging Analyze stack All of these windows are accessible from the Window and then Debugging main menu within NetBeans. Breakpoints NetBeans provides a simple approach to set breakpoints and a more comprehensive approach that provides many more useful features. Breakpoints can be easily added into Java source code by clicking on the gutter on the left-hand side of a line of Java source code. When a breakpoint is set, a small pink square is shown in the gutter and the entire line of source code is also highlighted in the same color. Clicking on the breakpoint square in the gutter toggles the breakpoint on and off. Once a breakpoint has been created, instead of removing it altogether, it can be disabled by right-clicking on the bookmark in the gutter and selecting the Breakpoint and then Enabled menu options. This has the effect of keeping the breakpoint within your codebase, but execution of the application does not stop when the breakpoint is hit. Creating a simple breakpoint like this can be a very powerful way of debugging applications. It allows you to stop the execution of an application when a line of code is hit. If we want to add a bit more control onto a simple breakpoint, we can edit the breakpoint's properties by right-clicking on the breakpoint in the gutter and selecting the Breakpoint and then Properties menu options. This causes the Breakpoint Properties dialog to be displayed: In this dialog, we can see the line number and the file that the breakpoint belongs to. The line number can be edited to move the breakpoint if it has been created on the wrong line. However, what's more interesting is the conditions that we can apply to the breakpoint. The Condition entry allows us to define a condition that has to be met for the breakpoint to stop the code execution. 
For example, we can stop the code when the variable i is equal to 20 by adding a condition, i==20. When we add conditions to a breakpoint, the breakpoint becomes known as a conditional breakpoint, and the icon in the gutter changes to a square with the lower-right quadrant removed. We can also cause the execution of the application to halt at a breakpoint when the breakpoint has been hit a certain number of times. The Break when hit count is condition can be set to Equal to, Greater than, or Multiple of to halt the execution of the application when the breakpoint has been hit the requisite number of times. Finally, we can specify what actions occur when a breakpoint is hit. The Suspend dropdown allows us to define what threads are suspended when a breakpoint is hit. NetBeans can suspend All threads, Breakpoint thread, or no threads at all. The text that is displayed in the Output window can be defined via the Print Text edit box and different breakpoint groups can be enabled or disabled via the Enable Group and Disable Group drop-down boxes. But what exactly is a breakpoint group? Simply put, a breakpoint group is a collection of breakpoints that can all be set or unset at the same time. It is a way of categorizing breakpoints into similar collections, for example, all the breakpoints in a particular file, or all the breakpoints relating to exceptions or unit tests. Breakpoint groups are created in the Breakpoints window. This is accessible by selecting the Debugging and then Breakpoints menu options from within the main NetBeans Window menu. To create a new breakpoint group, simply right-click on an existing breakpoint in the Breakpoints window and select the Move Into Group… and then New… menu options. The Set the Name of Breakpoints Group dialog is displayed in which the name of the new breakpoint group can be entered. After creating a breakpoint group and assigning one or more breakpoints into it, the entire group of breakpoints can be enabled or disabled, or even deleted by right-clicking on the group in the Breakpoints window and selecting the appropriate option. Any newly created breakpoint groups will also be available in the Breakpoint Properties window. So far, we've seen how to create breakpoints that stop on a single line of code, and also how to create conditional breakpoints so that we can cause an application to stop when certain conditions occur for a breakpoint. These are excellent techniques to help debug applications. NetBeans, however, also provides the ability to create more advanced breakpoints so that we can get even more control of when the execution of applications is halted by breakpoints. So, how do we create these breakpoints? These different types of breakpoints are all created from in the Breakpoints window by right-clicking and selecting the New Breakpoint… menu option. In the New Breakpoint dialog, we can create different types of breakpoints by selecting the appropriate entry from the Breakpoint Type drop-down list. The preceding screenshot shows an example of creating a Class breakpoint. The following types of breakpoints can be created: Class: This creates a breakpoint that halts execution when a class is loaded, unloaded, or either event occurs. Exception: This stops execution when the specified exception is caught, uncaught, or either event occurs. Field: This creates a breakpoint that halts execution when a field on a class is accessed, modified, or either event occurs. Line: This stops execution when the specified line of code is executed. 
It acts the same way as creating a breakpoint by clicking on the gutter of the Java source code editor window. Method: This creates a breakpoint that halts execution when a method is entered, exited, or when either event occurs. Optionally, the breakpoint can be created for all methods inside a specified class rather than a single method. Thread: This creates a breakpoint that stops execution when a thread is started, finished, or either event occurs. AWT/Swing Component: This creates a breakpoint that stops execution when a GUI component is accessed. For each of these different types of breakpoints, conditions and actions can be specified in the same way as on simple line-based breakpoints. The Variables debug window The Variables debug window lists all the variables that are currently within  the scope of execution of the application. This is therefore thread-specific, so if multiple threads are running at one time, the Variables window will only display variables in scope for the currently selected thread. In the Variables window, we can see the variables currently in scope for the selected thread, their type, and value. To display variables for a different thread to that currently selected, we must select an alternative thread via the Debugging window. Using the triangle button to the left of each variable, we can expand variables and drill down into the properties within them. When a variable is a simple primitive (for example, integers or strings), we can modify it or any property within it by altering the value in the Value column in the Variables window. The variable's value will then be changed within the running application to the newly entered value. By default, the Variables window shows three columns (Name, Type, and Value). We can modify which columns are visible by pressing the selection icon () at the top-right of the window. Selecting this displays the Change Visible Columns dialog, from which we can select from the Name, String value, Type, and Value columns: The Watches window The Watches window allows us to see the contents of variables and expressions during a debugging session, as can be seen in the following screenshot: In this screenshot, we can see that the variable i is being displayed along with the expressions 10+10 and i+20. New expressions can be watched by clicking on the <Enter new watch> option or by right-clicking on the Java source code editor and selecting the New Watch… menu option. Evaluating expressions In addition to watching variables in a debugging session, NetBeans also provides the facility to evaluate expressions. Expressions can contain any Java code that is valid for the running scope of the application. So, for example, local variables, class variables, or new instances of classes can be evaluated. To evaluate variables, open the Evaluate Expression window by selecting the Debug and then Evaluate Expression menu options. Enter an expression to be evaluated in this window and press the Evaluate Code Fragment button at the bottom-right corner of the window. As a shortcut, pressing the Ctrl + Enter keys will also evaluate the code fragment. Once an expression has been evaluated, it is shown in the Evaluation Result window. The Evaluation Result window shows a history of each expression that has previously been evaluated. Expressions can be added to the list of watched variables by right-clicking on the expression and selecting the Create Fixed Watch expression. 
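The breakpoint, watch, and expression examples in the preceding sections all refer to a loop variable i. A small, hypothetical target class (not taken from the book) that can be used to try out a conditional breakpoint such as i==20 and watches such as i+20 might look like this:

public class DebugTarget {

    public static void main(String[] args) {
        int total = 0;
        for (int i = 0; i < 100; i++) {
            // Set a line breakpoint on the next line and add the condition i==20
            // to halt execution only when i reaches 20.
            total = total + i;
        }
        System.out.println("Total: " + total);
    }
}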
The Call Stack window The Call Stack window displays the call stack for the currently executing thread: The call stack is displayed from top to bottom with the currently executing frame at the top of the list. Double-clicking on any entry in the call stack opens up the corresponding source code in the Java editor within NetBeans. Right-clicking on an entry in the call stack displays a pop-up menu with the choice to: Make Current: This makes the selected thread the current thread Pop To Here: This pops the execution of the call stack to the selected location Go To Source: This displays the selected code within the Java source editor Copy Stack: This copies the stack trace to the clipboard for use elsewhere When debugging, it can be useful to change the stack frame of the currently executing thread by selecting the Pop To Here option from within the stack trace window. Imagine the following code: // Get some magic int magic = getSomeMagicNumber(); // Perform calculation performCalculation(magic); During a debugging session, if after stepping over the getSomeMagicNumber() method, we decided that the method has not worked as expected, our course of action would probably be to debug into the getSomeMagicNumber() method. But, we've just stepped over the method, so what can we do? Well, we can stop the debugging session and start again or repeat the operation that called this section of code and hope there are no changes to the application state that affect the method we want to debug. A better solution, however, would be to select the line of code that calls the getSomeMagicNumber() method and pop the stack frame using the Pop To Here option. This would have the effect of rewinding the code execution so that we can then step into the method and see what is happening inside it. As well as using the Pop To Here functionality, NetBeans also offers several menu options for manipulating the stack frame, namely: Make Callee Current: This makes the callee of the current method the currently executing stack frame Make Caller Current: This makes the caller of the current method the currently executing stack frame Pop Topmost Call: This pops one stack frame, making the calling method the currently executing stack frame When moving around the call stack using these techniques, any operations performed by the currently executing method are not undone. So, for example, strange results may be seen if global or class-based variables are altered within a method and then an entry is popped from the call stack. Popping entries in the call stack is safest when no state changes are made within a method. The call stack displayed in the Debugging window for each thread behaves in the same way as in the Call Stack window itself. The Loaded Classes window The Loaded Classes window displays a list of all the classes that are currently loaded, showing how many instances there are of each class as a number and as a percentage of the total number of classes loaded. Depending upon the number of external libraries (including the standard Java runtime libraries) being used, you may find it difficult to locate instances of your own classes in this window. Fortunately, the filter at the bottom of the window allows the list of classes to be filtered, based upon an entered string. So, for example, entering the filter String will show all the classes with String in the fully qualified class name that are currently loaded, including java.lang.String and java.lang.StringBuffer. 
Since the filter works on the fully qualified name of a class, entering a package name will show all the classes listed in that package and subpackages. So, for example, entering a filter value as com.davidsalter.multithread would show only the classes listed in that package and subpackages. The Sessions window Within NetBeans, it is possible to perform multiple debugging sessions where either one project is being debugged multiple times, or more commonly, multiple projects are being debugged at the same time, where one is acting as a client application and the other is acting as a server application. The Sessions window displays a list of the currently running debug sessions, allowing the developer control over which one is the current session. Right-clicking on any of the sessions listed in the window provides the following options: Make Current: This makes the selected session the currently active debugging session Scope: This debugs the current thread or all the threads in the selected session Language: This options shows the language of the application being debugged—Java Finish: This finishes the selected debugging session Finish All: This finishes all the debugging sessions The Sessions window shows the name of the debug session (for example the main class being executed), its state (whether the application is Stopped or Running) and language being debugged. Clicking the selection icon () at the top-right of the window allows the user to choose which columns are displayed in the window. The default choice is to display all columns except for the Host Name column, which displays the name of the computer the session is running on. The Threads window The Threads window displays a hierarchical list of threads in use by the application currently being debugged. The current thread is displayed in bold. Double-clicking on any of the threads in the hierarchy makes the thread current. Similar to the Debugging window, threads can be made current, suspended, or interrupted by right-clicking on the thread and selecting the appropriate option. The default display for the Threads window is to show the thread's name and its state (Running, Waiting, or Sleeping). Clicking the selection icon () at the top-right of the window allows the user to choose which columns are displayed in the window. The Sources window The Sources window simply lists all of the source roots that NetBeans considers for the selected project. These are the only locations that NetBeans will search when looking for source code while debugging an application. If you find that you are debugging an application, and you cannot step into code, the most likely scenario is that the source root for the code you wish to debug is not included in the Sources window. To add a new source root, right-click in the Sources window and select the Add Source Root option. The Debugging window The Debugging window allows us to see which threads are running while debugging our application. This window is, therefore, particularly useful when debugging multithreaded applications. In this window, we can see the different threads that are running within our application. For each thread, we can see the name of the thread and the call stack leading to the breakpoint. The current thread is highlighted with a green band along the left-hand side edge of the window. Other threads created within our application are denoted with a yellow band along the left-hand side edge of the window. System threads are denoted with a gray band. 
We can make any of the threads the current thread by right-clicking on it and selecting the Make Current menu option. When we do this, the Variables and Call Stack windows are updated to show new information for the selected thread. The current thread can also be selected by clicking on the Debug and then Set Current Thread… menu options. Upon selecting this, a list of running threads is shown from which the current thread can be selected. Right-clicking on a thread and selecting the Resume option will cause the selected thread to continue execution until it hits another breakpoint. For each thread that is running, we can also Suspend, Interrupt, and Resume the thread by right-clicking on the thread and choosing the appropriate action. In each thread listing, the current methods call stack is displayed for each thread. This can be manipulated in the same way as from the Call Stack window. When debugging multithreaded applications, new breakpoints can be hit within different threads at any time. NetBeans helps us with multithreaded debugging by not automatically switching the user interface to a different thread when a breakpoint is hit on the non-current thread. When a breakpoint is hit on any thread other than the current thread, an indication is displayed at the bottom of the Debugging window, stating New Breakpoint Hit (an example of this can be seen in the previous window). Clicking on the icon to the right of the message shows all the breakpoints that have been hit together with the thread name in which they occur. Selecting the alternate thread will cause the relevant breakpoint to be opened within NetBeans and highlighted in the appropriate Java source code file. NetBeans provides several filters on the Debugging window so that we can show more/less information as appropriate. From left to right, these images allow us to: Show less (suspended and current threads only) Show thread groups Show suspend/resume table Show system threads Show monitors Show qualified names Sort by suspended/resumed state Sort by name Sort by default Debugging multithreaded applications can be a lot easier if you give your threads names. The thread's name is displayed in the Debugging window, and it's a lot easier to understand what a thread with a proper name is doing as opposed to a thread called Thread-1. Deadlock detection When debugging multithreaded applications, one of the problems that we can see is that a deadlock occurs between executing threads. A deadlock occurs when two or more threads become blocked forever because they are both waiting for a shared resource to become available. In Java, this typically occurs when the synchronized keyword is used. NetBeans allows us to easily check for deadlocks using the Check for Deadlock tool available on the Debug menu. When a deadlock is detected using this tool, the state of the deadlocked threads is set to On Monitor in the Threads window. Additionally, the threads are marked as deadlocked in the Debugging window. Each deadlocked thread is displayed with a red band on the left-hand side border and the Deadlock detected warning message is displayed. Analyze Stack Window When running an application within NetBeans, if an exception is thrown and not caught, the stack trace will be displayed in the Output window, allowing the developer to see exactly where errors have occurred. From the following screenshot, we can easily see that a NullPointerException was thrown from within the FaultyImplementation class in the doUntestedOperation() method at line 16. 
Looking before this in the stack trace (that is the entry underneath), we can see that the doUntestedOperation() method was called from within the main() method of the Main class at line 21: In the preceding example, the FaultyImplementation class is defined as follows: public class FaultyImplementation { public void doUntestedOperation() { throw new NullPointerException(); } } Java is providing an invaluable feature to developers, allowing us to easily see where exceptions are thrown and what the sequence of events was that led to the exception being thrown. NetBeans, however, enhances the display of the stack traces by making the class and line numbers clickable hyperlinks which, when clicked on, will navigate to the appropriate line in the code. This allows us to easily delve into a stack trace and view the code at all the levels of the stack trace. In the previous screenshot, we can click on the hyperlinks FaultyImplementation.java:16 and Main.java:21 to take us to the appropriate line in the appropriate Java file. This is an excellent time-saving feature when developing applications, but what do we do when someone e-mails us a stack trace to look at an error in a production system? How do we manage stack traces that are stored in log files? Fortunately, NetBeans provides an easy way to allow a stack trace to be turned into clickable hyperlinks so that we can browse through the stack trace without running the application. To load and manage stack traces into NetBeans, the first step is to copy the stack trace onto the system clipboard. Once the stack trace has been copied onto the clipboard, Analyze Stack Window can be opened within NetBeans by selecting the Window and then Debugging and then Analyze Stack menu options (the default installation for NetBeans has no keyboard shortcut assigned to this operation). Analyze Stack Window will default to showing the stack trace that is currently in the system clipboard. If no stack trace is in the clipboard, or any other data is in the system's clipboard, Analyze Stack Window will be displayed with no contents. To populate the window, copy a stack trace into the system's clipboard and select the Insert StackTrace From Clipboard button. Once a stack trace has been displayed in Analyze Stack Window, clicking on the hyperlinks in it will navigate to the appropriate location in the Java source files just as it does from the Output window when an exception is thrown from a running application. You can only navigate to source code from a stack trace if the project containing the relevant source code is open in the selected project group. Variable formatters When debugging an application, the NetBeans debugger can display the values of simple primitives in the Variables window. As we saw previously, we can also display the toString() representation of a variable if we select the appropriate columns to display in the window. Sometimes when debugging, however, the toString() representation is not the best way to display formatted information in the Variables window. In this window, we are showing the value of a complex number class that we have used in high school math. 
The ComplexNumber class being debugged in this example is defined as: public class ComplexNumber { private double realPart; private double imaginaryPart; public ComplexNumber(double realPart, double imaginaryPart) { this.realPart = realPart; this.imaginaryPart = imaginaryPart; } @Override public String toString() { return "ComplexNumber{" + "realPart=" + realPart + ", imaginaryPart=" + imaginaryPart + '}'; } // Getters and Setters omitted for brevity… } Looking at this class, we can see that it essentially holds two members—realPart and imaginaryPart. The toString() method outputs a string, detailing the name of the object and its parameters which would be very useful when writing ComplexNumbers to log files, for example: ComplexNumber{realPart=1.0, imaginaryPart=2.0} When debugging, however, this is a fairly complicated string to look at and comprehend—particularly, when there is a lot of debugging information being displayed. NetBeans, however, allows us to define custom formatters for classes that detail how an object will be displayed in the Variables window when being debugged. To define a custom formatter, select the Java option from the NetBeans Options dialog and then select the Java Debugger tab. From this tab, select the Variable Formatters category. On this screen, all the variable formatters that are defined within NetBeans are shown. To create a new variable formatter, select the Add… button to display the Add Variable Formatter dialog. In the Add Variable Formatter dialog, we need to enter Formatter Name and a list of Class types that NetBeans will apply the formatting to when displaying values in the debugger. To apply the formatter to multiple classes, enter the different classes, separated by commas. The value that is to be formatted is entered in the Value formatted as a result of code snippet field. This field takes the scope of the object being debugged. So, for example, to output the ComplexNumber class, we can enter the custom formatter as: "("+realPart+", "+imaginaryPart+"i)" We can see that the formatter is built up from concatenating static strings and the values of the members realPart and imaginaryPart. We can see the results of debugging variables using custom formatters in the following screenshot: Debugging remote applications The NetBeans debugger provides rapid access for debugging local applications that are executing within the same JVM as NetBeans. What happens though when we want to debug a remote application? A remote application isn't necessarily hosted on a separate server to your development machine, but is defined as any application running outside of the local JVM (that is the one that is running NetBeans). To debug a remote application, the NetBeans debugger can be "attached" to the remote application. Then, to all intents, the application can be debugged in exactly the same way as a local application is debugged, as described in the previous sections of this article. To attach to a remote application, select the Debug and then Attach Debugger… menu options. On the Attach dialog, the connector (SocketAttach, ProcessAttach, or SocketListen) must be specified to connect to the remote application. The appropriate connection details must then be entered to attach the debugger. For example, the process ID must be entered for the ProcessAttach connector and the host and port must be specified for the SocketAttach connector. Profiling applications Learning how to debug applications is an essential technique in software development. 
Another essential technique that is often overlooked is profiling applications. Profiling applications involves measuring various metrics such as the amount of heap memory used or the number of loaded classes or running threads. By profiling applications, we can gain an understanding of what our applications are actually doing and as such we can optimize them and make them function better. NetBeans provides first class profiling tools that are easy to use and provide results that are easy to interpret. The NetBeans profiler allows us to profile three specific areas: Application monitoring Performance monitoring Memory monitoring Each of these monitoring tools is accessible from the Profile menu within NetBeans. To commence profiling, select the Profile and then Profile Project menu options. After instructing NetBeans to profile a project, the profiler starts providing the choice of the type of profiling to perform. Testing applications Writing tests for applications is probably one of the most important aspects of modern software development. NetBeans provides the facility to write and run both JUnit and TestNG tests and test suites. In this section, we'll provide details on how NetBeans allows us to write and run these types of tests, but we'll assume that you have some knowledge of either JUnit or TestNG. TestNG support is provided by default with NetBeans, however, due to license concerns, JUnit may not have been installed when you installed NetBeans. If JUnit support is not installed, it can easily be added through the NetBeans Plugins system. In a project, NetBeans creates two separate source roots: one for application sources and the other for test sources. This allows us to keep tests separate from application source code so that when we ship applications, we do not need to ship tests with them. This separation of application source code and test source code enables us to write better tests and have less coupling between tests and applications. The best situation is for the test source root to have a dependency on application classes and the application classes to have no dependency on the tests that we have written. To write a test, we must first have a project. Any type of Java project can have tests added into it. To add tests into a project, we can use the New File wizard. In the Unit Tests category, there are templates for: JUnit Tests Tests for Existing Class (this is for JUnit tests) Test Suite (this is for JUnit tests) TestNG Test Case TestNG Test Suite When creating classes for these types of tests, NetBeans provides the option to automatically generate code; this is usually a good starting point for writing classes. When executing tests, NetBeans iterates through the test packages in a project looking for the classes that are suffixed with the word Test. It is therefore essential to properly name tests to ensure they are executed correctly. Once tests have been created, NetBeans provides several methods for running the tests. The first method is to run all the tests that we have defined for an application. Selecting the Run and then Test Project menu options runs all of the tests defined for a project. The type of the project doesn't matter (Java SE or Java EE), nor whether a project uses Maven or the NetBeans project build system (Ant projects are even supported if they have a valid test activity), all tests for the project will be run when selecting this option. 
After running the tests, the Test Results window will be displayed, highlighting successful tests in green and failed tests in red. In the Test Results window, we have several options to help categorize and manage the tests:

Rerun all of the tests
Rerun the failed tests
Show only the passed tests
Show only the failed tests
Show errors
Show aborted tests
Show skipped tests
Locate previous failure
Locate next failure
Always open test result window
Always open test results in a new tab

The second option within NetBeans for running tests is to run all the tests in a package or class. To perform these operations, simply right-click on a package in the Projects window and select Test Package or right-click on a Java class in the Projects window and select Test File. The final option for running tests is to execute a single test in a class. To perform this operation, right-click on a test in the Java source code editor and select the Run Focussed Test Method menu option. After creating tests, how do we keep them up to date when we add new methods to application code? We can keep test suites up to date by manually editing them and adding new methods corresponding to new application code, or we can use the Create/Update Tests menu. Selecting the Tools and then Create/Update Tests menu options displays the Create Tests dialog that allows us to edit the existing test classes and add new methods into them, based upon the existing application classes.

Summary

In this article, we looked at the typical tasks that a developer does on a day-to-day basis when writing applications. We saw how NetBeans can help us to run and debug applications and how to profile applications and write tests for them. Finally, we took a brief look at TDD, and saw how the Red-Green-Refactor cycle can be used to help us develop more stable applications.

Resources for Article: Further resources on this subject: Contexts and Dependency Injection in NetBeans [article] Creating a JSF composite component [article] Getting to know NetBeans [article]

Introduction to Odoo

Packt
04 Sep 2015
12 min read
 In this article by Greg Moss, author of Working with Odoo, he explains that Odoo is a very feature-filled business application framework with literally hundreds of applications and modules available. We have done our best to cover the most essential features of the Odoo applications that you are most likely to use in your business. Setting up an Odoo system is no easy task. Many companies get into trouble believing that they can just install the software and throw in some data. Inevitably, the scope of the project grows and what was supposed to be a simple system ends up being a confusing mess. Fortunately, Odoo's modular design will allow you to take a systematic approach to implementing Odoo for your business. (For more resources related to this topic, see here.) What is an ERP system? An Enterprise Resource Planning (ERP) system is essentially a suite of business applications that are integrated together to assist a company in collecting, managing, and reporting information throughout core business processes. These business applications, typically called modules, can often be independently installed and configured based on the specific needs of the business. As the needs of the business change and grow, additional modules can be incorporated into an existing ERP system to better handle the new business requirements. This modular design of most ERP systems gives companies great flexibility in how they implement the system. In the past, ERP systems were primarily utilized in manufacturing operations. Over the years, the scope of ERP systems have grown to encompass a wide range of business-related functions. Recently, ERP systems have started to include more sophisticated communication and social networking features. Common ERP modules The core applications of an ERP system typically include: Sales Orders Purchase Orders Accounting and Finance Manufacturing Resource Planning (MRP) Customer Relationship Management (CRM) Human Resources (HR) Let's take a brief look at each of these modules and how they address specific business needs. Selling products to your customer Sales Orders, commonly abbreviated as SO, are documents that a business generates when they sell products and services to a customer. In an ERP system, the Sales Order module will usually allow management of customers and products to optimize efficiency for data entry of the sales order. Many sales orders begin as customer quotes. Quotes allow a salesperson to collect order information that may change as the customer makes decisions on what they want in their final order. Once a customer has decided exactly what they wish to purchase, the quote is turned into a sales order and is confirmed for processing. Depending on the requirements of the business, there are a variety of methods to determine when a customer is invoiced or billed for the order. This preceding screenshot shows a sample sales order in Odoo. Purchasing products from suppliers Purchase Orders, often known as PO, are documents that a business generates when they purchase products from a vendor. The Purchase Order module in an ERP system will typically include management of vendors (also called suppliers) as well as management of the products that the vendor carries. Much like sales order quotes, a purchase order system will allow a purchasing department to create draft purchase orders before they are finalized into a specific purchasing request. 
Often, a business will configure the Sales Order and Purchase Order modules to work together to streamline business operations. When a valid sales order is entered, most ERP systems will allow you to configure the system so that a purchase order can be automatically generated if the required products are not in stock to fulfill the sales order. ERP systems will allow you to set minimum quantities on-hand or order limits that will automatically generate purchase orders when inventory falls below a predetermined level. When properly configured, a purchase order system can save a significant amount of time in purchasing operations and assist in preventing supply shortages. Managing your accounts and financing in Odoo Accounting and finance modules integrate with an ERP system to organize and report business transactions. In many ERP systems, the accounting and finance module is known as GL for General Ledger. All accounting and finance modules are built around a structure known as the chart of accounts. The chart of accounts organizes groups of transactions into categories such as assets, liabilities, income, and expenses. ERP systems provide a lot of flexibility in defining the structure of your chart of accounts to meet the specific requirements for your business. Accounting transactions are grouped by date into periods (typically by month) for reporting purposes. These reports are most often known as financial statements. Common financial statements include balance sheets, income statements, cash flow statements, and statements of owner's equity. Handling your manufacturing operations The Manufacturing Resource Planning (MRP) module manages all the various business operations that go into the manufacturing of products. The fundamental transaction of an MRP module is a manufacturing order, which is also known as a production order in some ERP systems. A manufacturing order describes the raw products or subcomponents, steps, and routings required to produce a finished product. The raw products or subcomponents required to produce the finished product are typically broken down into a detailed list called a bill of materials or BOM. A BOM describes the exact quantities required of each component and are often used to define the raw material costs that go into manufacturing the final products for a company. Often an MRP module will incorporate several submodules that are necessary to define all the required operations. Warehouse management is used to define locations and sublocations to store materials and products as they move through the various manufacturing operations. For example, you may receive raw materials in one warehouse location, assemble those raw materials into subcomponents and store them in another location, then ultimately manufacture the end products and store them in a final location before delivering them to the customer. Managing customer relations in Odoo In today's business environment, quality customer service is essential to being competitive in most markets. A Customer Relationship Management (CRM) module assists a business in better handling the interactions they may have with each customer. Most CRM systems also incorporate a presales component that will manage opportunities, leads, and various marketing campaigns. Typically, a CRM system is utilized the most by the sales and marketing departments within a company. For this reason, CRM systems are often considered to be sales force automation tools or SFA tools. 
Sales personnel can set up appointments, schedule call backs, and employ tools to manage their communication. More modern CRM systems have started to incorporate social networking features to assist sales personnel in utilizing these newly emerging technologies. Configuring human resource applications in Odoo Human Resource modules, commonly known as HR, manage the workforce- or employee-related information in a business. Some of the processes ordinarily covered by HR systems are payroll, time and attendance, benefits administration, recruitment, and knowledge management. Increased regulations and complexities in payroll and benefits have led to HR modules becoming a major component of most ERP systems. Modern HR modules typically include employee kiosk functions to allow employees to self-administer many tasks such as putting in a leave request or checking on their available vacation time. Finding additional modules for your business requirements In addition to core ERP modules, Odoo has many more official and community-developed modules available. At the time of this article's publication, the Odoo application repository had 1,348 modules listed for version 7! Many of these modules provide small enhancements to improve usability like adding payment type to a sales order. Other modules offer e-commerce integration or complete application solutions, such as managing a school or hospital. Here is a short list of the more common modules you may wish to include in an Odoo installation: Point of Sale Project Management Analytic Accounting Document Management System Outlook Plug-in Country-Specific Accounting Templates OpenOffice Report Designer You will be introduced to various Odoo modules that extend the functionality of the base Odoo system. You can find a complete list of Odoo modules at http://apps.Odoo.com/. This preceding screenshot shows the module selection page in Odoo. Getting quickly into Odoo Do you want to jump in right now and get a look at Odoo 7 without any complex installations? Well, you are lucky! You can access an online installation of Odoo, where you can get a peek at many of the core modules right from your web browser. The installation is shared publicly, so you will not want to use this for any sensitive information. It is ideal, however, to get a quick overview of the software and to get an idea for how the interface functions. You can access a trial version of Odoo at https://www.Odoo.com/start. Odoo – an open source ERP solution Odoo is a collection of business applications that are available under an open source license. For this reason, Odoo can be used without paying license fees and can be customized to suit the specific needs of a business. There are many advantages to open source software solutions. We will discuss some of these advantages shortly. Free your company from expensive software license fees One of the primary downsides of most ERP systems is they often involve expensive license fees. Increasingly, companies must pay these license fees on an annual basis just to receive bug fixes and product updates. Because ERP systems can require companies to devote great amounts of time and money for setup, data conversion, integration, and training, it can be very expensive, often prohibitively so, to change ERP systems. For this reason, many companies feel trapped as their current ERP vendors increase license fees. Choosing open source software solutions, frees a company from the real possibility that a vendor will increase license fees in the years ahead. 
Modify the software to meet your business needs With proprietary ERP solutions, you are often forced to accept the software solution the vendor provides chiefly "as is". While you may have customization options and can sometimes pay the company to make specific changes, you rarely have the freedom to make changes directly to the source code yourself. The advantages to having the source code available to enterprise companies can be very significant. In a highly competitive market, being able to develop solutions that improve business processes and give your company the flexibility to meet future demands can make all the difference. Collaborative development Open source software does not rely on a group of developers who work secretly to write proprietary code. Instead, developers from all around the world work together transparently to develop modules, prepare bug fixes, and increase software usability. In the case of Odoo, the entire source code is available on Launchpad.net. Here, developers submit their code changes through a structure called branches. Changes can be peer reviewed, and once the changes are approved, they are incorporated into the final source code product. Odoo – AGPL open source license The term open source covers a wide range of open source licenses that have their own specific rights and limitations. Odoo and all of its modules are released under the Affero General Public License (AGPL) version 3. One key feature of this license is that any custom-developed module running under Odoo must be released with the source code. This stipulation protects the Odoo community as a whole from developers who may have a desire to hide their code from everyone else. This may have changed or has been appended recently with an alternative license. You can find the full AGPL license at http://www.gnu.org/licenses/agpl-3.0.html. A real-world case study using Odoo The goal is to do more than just walk through the various screens and reports of Odoo. Instead, we want to give you a solid understanding of how you would implement Odoo to solve real-world business problems. For this reason, this article will present a real-life case study in which Odoo was actually utilized to improve specific business processes. Silkworm, Inc. – a mid-sized screen printing company Silkworm, Inc. is a highly respected mid-sized silkscreen printer in the Midwest that manufactures and sells a variety of custom apparel products. They have been kind enough to allow us to include some basic aspects of their business processes as a set of real-world examples implementing Odoo into a manufacturing operation. Using Odoo, we will set up the company records (or system) from scratch and begin by walking through their most basic sales order process, selling T-shirts. From there, we will move on to manufacturing operations, where custom art designs are developed and then screen printed onto raw materials for shipment to customers. We will come back to this real-world example so that you can see how Odoo can be used to solve real-world business solutions. Although Silkworm is actively implementing Odoo, Silkworm, Inc. does not directly endorse or recommend Odoo for any specific business solution. Every company must do their own research to determine whether Odoo is a good fit for their operation. Summary In this article, we have learned about the ERP system and common ERP modules. An introduction about Odoo and features of it. 
Resources for Article: Further resources on this subject: Getting Started with Odoo Development[article] Machine Learning in IPython with scikit-learn [article] Making Goods with Manufacturing Resource Planning [article]

Introducing Liferay for Your Intranet

Packt
04 Sep 2015
32 min read
In this article by Navin Agarwal, author of the book Liferay Portal 6.2 Enterprise Intranets, we will learn that Liferay is an enterprise application solution. It provides a lot of functionalities, which helps an organization to grow and is a one-solution package as a portal and content management solution. In this article, we will look at the following topics: The complete features you want your organization's intranet solution to have Reasons why Liferay is an excellent choice to build your intranet Where and how Liferay is used besides intranet portals Easy integration with other open source tools and applications Getting into more technical information about what Liferay is and how it works So, let's start looking at exactly what kind of site we're going to build. (For more resources related to this topic, see here.) Liferay Portal makes life easy We're going to build a complete corporate intranet solution using Liferay. Let's discuss some of the features your intranet portal will have. Hosted discussions Are you still using e-mail for group discussions? Then, it's time you found a better way! Running group discussions over e-mail clogs up the team's inbox—this means you have to choose your distribution list in advance, and that makes it hard for team members to opt in and out of the discussion. Using Liferay, we will build a range of discussion boards for discussion within and between teams. The discussions are archived in one place, which means that it's always possible to go back and refer to them later. On one level, it's just more convenient to move e-mail discussions to a discussion forum designed for the purpose. But once the forum is in place, you will find that a more productive group discussion takes place here than it ever did over e-mail. Collaborative documents using wikis Your company probably has guideline documents that should be updated regularly but swiftly lose their relevance as practices and procedures change. Even worse, each of your staff will know useful, productive tricks and techniques—but there's probably no easy way to record that knowledge in a way that is easy for others to find and use. We will see how to host wikis within Liferay. A wiki enables anybody to create and edit web pages and link all of those web pages together without requiring any HTML or programming skills. You can put your guideline documents into a wiki, and as practices change, your frontline staff can quickly and effortlessly update the guideline documentation. Wikis can also act as a shared notebook, enabling team members to collaborate and share ideas and findings and work together on documents. Team and individual blogs Your company probably needs frequent, chronological publications of personal thoughts and web links in the intranet. Your company probably has teams and individuals working on specific projects in order to share files and blogs about a project process and more. By using the Liferay Blog features, you can use HTML text editors to create or update files and blogs and to provide RSS feeds. Liferay provides an easy way for teams and individuals to share files with the help of blogs. Blogs provide a straightforward blogging solution with features such as RSS, user and guest comments, browsable categories, tags and labels, and a rating system. Liferay's RSS with the subscription feature provides the ability to frequently read RSS feeds from within the portal framework. 
At the same time, What You See Is What You Get (WYSIWYG) editors provide the ability to edit web content, including the blogs' content. Less technical people can use the WYSIWYG editor instead of sifting through complex code. Shared calendars Many companies require calendar information and share the calendar among users from different departments. We will see how to share a calendar within Liferay. The shared calendar can satisfy the basic business requirements incorporated into a featured business intranet, such as scheduling meetings, sending meeting invitations, checking for attendees' availability, and so on. Therefore, you can provide an environment for users to manage events and share calendars. Document management – CMS When there is a need for document sharing and document management, Liferay's Documents and Media library helps you with lots of features. The Documents and Media portlet allows you to add folders and subfolders for documents and media files, and also allows users to publish documents. It serves as a repository for all types of files and makes Content management systems (CMSes) available for intranets. The Documents and Media library portlet is equipped with customizable folders and acts as a web-based solution to share documents and media files among all your team members—just as a shared drive would. All the intranet users will be able to access the files from anywhere, and the content is accessible only by those authorized by administrators. All the files are secured by the permission layer by the administrator. Web content management – WCM Your company may have a lot of images and documents, and you may need to manage all these images and documents as well. Therefore, you require the ability to manage a lot of web content and then publish web content in intranets. We will see how to manage web content and how to publish web content within Liferay. Liferay Journal (Web Content) not only provides high availability to publish, manage, and maintain web content and documents, but it also separates content from the layout. Liferay WCM allows us to create, edit, and publish web content (articles). It also allows quick changes in the preview of the web content by changing the layout. It has built-in functionality, such as workflow, search, article versioning, scheduling, and metadata. Personalization and internalization All users can get a personal space that can be either made public (published as a website with a unique, friendly URL) or kept private. You can also customize how the space looks, what tools and applications are included, what goes into Documents and Media, and who can view and access all of this content. In addition, Liferay supports multiple languages, where you can select your own language. Multilingual organizations get out-of-the-box support for up to 45 languages. Users can toggle among different language settings with just one click and produce/publish multilingual documents and web content. Users can make use of the internalization feature to define the specific site in a localized language. Workflow, staging, scheduling, and publishing You can use a workflow to manage definitions, instances, and predetermined sequences of connected steps. Workflow can be used for web content management, assets, and so on. Liferay's built-in workflow engine is called Kaleo. It allows users to set up the review and publishing process on the web content article of any document that needs to end up on the live site. 
Liferay 6.2 integrates with the powerful features of the workflow and data capabilities of dynamic data lists in Kaleo Forms; it's only available in Liferay Enterprise Edition. Staging environments are integrated with Liferay's workflow engine. To have a review process for staged pages, you need to make sure you have a workflow engine configured and you have a staging setup in the workflow. As a content creator, you can update what you've created and publish it in a staging workflow. Other users can then review and modify it. Moreover, content editors can make a decision on whether to publish web content from staging to live, that is, you can easily create and manage everything from a simple article of text and images to fully functional websites in staging and then publish them live. Before going live, you can schedule web content as well. For instance, you can publish web content immediately or schedule it for publishing on a specific date. Social networks and Social Office Liferay Portal supports social networks—you can easily manage your Google Plus, Facebook, MySpace, Twitter, and other social network accounts in Liferay. In addition, you can manage your instant messenger accounts, such as AIM, ICQ, Jabber, MSN, Skype, YM, and so on smoothly from inside Liferay. Liferay Social Office gives us a social collaboration on top of the portal—a fully virtual workspace that streamlines communication and builds up group cohesion. It provides holistic enhancement to the way you and your colleagues work together. All components in Social Office are tied together seamlessly, getting everyone on the same page by sharing the same look and feel. More importantly, the dynamic activity tracking gives us a bird's-eye view of who has been doing what and when within each individual site. Using Liferay Social Office, you can enhance your existing personal workflow with social tools, keep your team up to date, and turn collective knowledge into collective action. Note that Liferay 6.2 supports the Liferay Social Office 3.0 current version. Liferay Sync and Marketplace Liferay Sync is Liferay's newest product, designed to make file sharing as easy as a simple drag and drop! Liferay Sync is an add-on product for Liferay 6.1 CE, EE, and later versions, which makes it a more raw boost product and enables the end user to publish and access documents and files from multiple environments and devices, including Windows and MacOS systems, and iOS-based mobile platforms. Liferay Sync is one of the best features, and it is fully integrated into the Liferay platform. Liferay 6.1 introduced the new concept of the marketplace, which leverages the developers to develop any components or functionality and release and share it with other users. It's a user-friendly and one-stop place to share apps. Liferay Marketplace provides the portal product with add-on features with a new hub to share, browse, and download Liferay-compatible applications. In Liferay 6.2, Marketplace comes under App Manager, where all the app-related controls can be possible. More features The intranet also arranges staff members into teams and sites, provides a way of real-time IM and chatting, and gives each user an appropriate level of access. This means that they can get all the information they need and edit and add content as necessary but won't be able to mess with sensitive information that they have no reason to see. In particular, the portal provides an integrating framework so that you can integrate external applications easily. 
For example, you can integrate external applications with the portal, such as Alfresco, OpenX, LDAP, SSO CAS, Orbeon Forms, Konakart, PayPal, Solr, and so on. In a word, the portal offers compelling benefits to today's enterprises—reduced operational costs, improved customer satisfaction, and streamlined business processes. Everything in one place All of these features are useful on their own. However, it gets better when you consider that all of these features will be combined into one easy-to-use searchable portal. A user of the intranet, for example, can search for a topic—let's say financial report—and find the following in one go: Any group discussions about financial reports Blog entries within the intranet concerning financial reports Documents and files—perhaps the financial reports themselves Wiki entries with guidelines on preparing financial reports Calendar entries for meetings to discuss the financial report Of course, users can also restrict their search to just one area if they already know exactly what they are looking for. Liferay provides other features, such as tagging, in order to make it even easier to organize information across the whole intranet. We will do all of this and more. Introducing Palm Tree Publications We are going to build an intranet for a fictional company as an example, focusing on how to install, configure, and integrate it with other applications and also implement portals and plugins (portlets, themes, layout templates, hooks, and webs) within Liferay. By applying the instructions to your own business, you will be able to build an intranet to meet your own company's needs. "Palm Tree Publications" needs an intranet of its own, which we will call bookpub.com. The enterprise's global headquarters are in the United States. It has several departments—editorial, website, engineering, marketing, executive, and human resources. Each department has staff in the U.S., Germany, and India or in all three places. The intranet site provides a site called "Book Street and Book Workshop" consisting of users who have an interest in reading books. The enterprise needs to integrate collaboration tools, such as wikis, discussion forums, blogs, instant messaging, mail, RSS, shared calendars, tagging, and so on. Palm Tree Publications has more advanced needs too: a workflow to edit, approve, and publish books. Furthermore, the enterprise has a lot of content, such as books stored and managed alfresco currently. 
In order to build the intranet site, the following functionality should be considered: Installing the portal, experiencing the portal and portlets, and customizing the portal and personal web pages Bringing the features of enabling document sharing, calendar sharing, and other collaboration within a business to the users of the portal Discussion forums—employees should be able to discuss book ideas and proposals Wikis—keeping track of information about editorial guidance and other resources that require frequent editing Dissemination of information via blogs—small teams working on specific projects share files and blogs about a project process Sharing a calendar among employees Web content management creation by the content author and getting approved by the publisher Document repository—using effective content management systems (CMSes), a natural fit for a portal for secure access, permissions, and distinct roles (such as writers, editors, designers, administrators, and so on) Collaborative chat and instant messaging, social network, Social Office, and knowledge management tools Managing a site named Book Street and Book Workshop that consists of users who have the same interest in reading books as staging, scheduling, and publishing web content related to books Federated search for discussion forum entries, blog posts, wiki articles, users in the directory, and content in both the Document and Media libraries; search by tags Integrating back-of-the-house software applications, such as Alfresco, Orbeon Forms, the Drools rule server, Jasper Server, and BI/Reporting Pentaho; strong authentication and authorization with LDAP; and single authentication to access various company sites besides the intranet site The enterprise can have the following groups of people: Admin: This group installs systems, manages membership, users, user groups, organizations, roles and permissions, security on resources, workflow, servers and instances, and integrates with third-party systems Executives: Executive management handles approvals Marketing: This group handles websites, company brochures, marketing campaigns, projects, and digital assets Sales: This group makes presentations, contracts, documents, and reports Website editors: This group manages pages of the intranet—writes articles, reviews articles, designs the layout of articles, and publishes articles Book editors: This group writes, reviews, and publishes books and approves and rejects the publishing of books Human resources: This group manages corporate policy documents Finance: This group manages accounts documents, scanned invoices and checks accounts Corporate communications: This group manages external public relations, internal news releases, and syndication Engineering: This group sets up the development environment and collaborates on engineering projects and presentation templates Introducing Liferay Portal's architecture and framework Liferay Portal's architecture supports high availability for mission-critical applications using clustering and the fully distributed cache and replication support across multiple servers. The following diagram has been taken from the Liferay forum written by Jorge Ferrer. This diagram depicts the various architectural layers and functionalities of portlets: Figure 1.1: The Liferay architecture The preceding image was taken from https://www.liferay.com/web/jorge.ferrer/blog/-/blogs/liferay-s-architecture-the-beginning-of-a-blog-series site blog. 
The Liferay Portal architecture is designed in such a way that it provides tons of features at one place: Frontend layer: This layer is the end user's interface Service layer: This contains the great majority of the business logic for the portal platform and all of the portlets included out of the box Persistence layer: Liferay relies on Hibernate to do most of its database access Web services API layer: This handles web services, such as JSON and SOAP In Liferay, the service layer, persistence layer, and web services API layer are built automatically by that wonderful tool called Service Builder. Service Builder is the tool that glues together all of Liferay's layers and that hides the complexities of using Spring or Hibernate under the hood. Service-oriented architecture Liferay Portal uses service-oriented architecture (SOA) design principles throughout and provides the tools and framework to extend SOA to other enterprise applications. Under the Liferay enterprise architecture, not only can the users access the portal from traditional and wireless devices, but developers can also access it from the exposed APIs via REST, SOAP, RMI, XML-RPC, XML, JSON, Hessian, and Burlap. Liferay Portal is designed to deploy portlets that adhere to the portlet API compliant with both JSR-168 and JSR-286. A set of useful portlets are bundled with the portal, including Documents and Media, Calendar, Message Boards, Blogs, Wikis, and so on. They can be used as examples to add custom portlets. In a word, the key features of Liferay include using SOA design principles throughout, such as reliable security, integrating the portal with SSO and LDAP, multitier and limitless clustering, high availability, caching pages, dynamic virtual hosting, and so on. Understanding Enterprise Service Bus Enterprise Service Bus (ESB) is a central connection manager that allows applications and services to be added quickly to an enterprise infrastructure. When an application needs to be replaced, it can easily be disconnected from the bus at a single point. Liferay Portal uses Mule or ServiceMix as ESB. Through ESB, the portal can integrate with SharePoint, BPM (such as the jBPM workflow engine and Intalio | BPMS engine), BI Xforms reporting, JCR repository, and so on. It supports JSR 170 for content management systems with the integration of JCR repositories, such as Jackrabbit. It also uses Hibernate and JDBC to connect to any database. Furthermore, it supports an event system with synchronous and asynchronous messaging and a lightweight message bus. Liferay Portal uses the Spring framework for its business and data services layers. It also uses the Spring framework for its transaction management. Based on service interfaces, portal-impl is implemented and exposed only for internal usage—for example, they are used for the extension environment. portal-kernel and portal-service are provided for external usage (or for internal usage)—for example, they are used for the Plugins SDK environment. Custom portlets, both JSR-168 and JSR-286, and web services can be built based on portal-kernel and portal-service. In addition, the Web 2.0 Mail portlet and the Web 2.0 Chat portlet are supported as well. More interestingly, scheduled staging and remote staging and publishing serve as a foundation through the tunnel web for web content management and publishing. Liferay Portal supports web services to make it easy for different applications in an enterprise to communicate with each other. 
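To make this concrete, here is a rough sketch of how an external Java application might call one of these services over HTTP. The host, port, service path, company ID, and credentials below are assumptions for a local Liferay 6.2 installation rather than values taken from this article — consult the JSON web services listing that your own portal publishes (typically under /api/jsonws) for the exact method names and parameters.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class LiferayJsonWsClient {

    public static void main(String[] args) throws Exception {
        // Assumed JSON web service endpoint on a local portal; the service
        // path and parameter names depend on your Liferay version and setup.
        URL url = new URL("http://localhost:8080/api/jsonws/user/get-user-by-email-address"
                + "?companyId=10154&emailAddress=test%40liferay.com");

        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        // Remote services normally require authentication; HTTP basic
        // authentication with a portal account is used in this sketch.
        String credentials = "test@liferay.com:test";
        String encoded = Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        connection.setRequestProperty("Authorization", "Basic " + encoded);

        // Read the JSON response; a real client would parse it rather than print it.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder response = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                response.append(line);
            }
            System.out.println("HTTP " + connection.getResponseCode());
            System.out.println(response);
        }
    }
}

Because the same service layer is also exposed over SOAP and through the Java API, a .NET client or another enterprise system can reach it in an equivalent way, which is the interoperability discussed next.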
Java, .NET, and proprietary applications can work together easily because web services use XML standards. It also supports REST-style JSON web services for lightweight, maintainable code and supports AJAX-based user interfaces. Liferay Portal uses industry-standard, government-grade encryption technologies, including advanced algorithms, such as DES, MD5, and RSA. Liferay was benchmarked as one of the most secure portal platforms using LogicLibrary's Logiscan suite. Liferay offers customizable single sign-on (SSO) that integrates into Yale CAS, JAAS, LDAP, NTLM, CA Siteminder, Novell Identity Manager, OpenSSO, and more. Open ID, OpenAuth, Yale CAS, Siteminder, and OpenAM integration are offered by it out of the box. In short, Liferay Portal uses ESB in general with an abstraction layer on top of an enterprise messaging system. It allows integration architects to exploit the value of messaging systems, such as reporting, e-commerce, and advertisements. Understanding the advantages of using Liferay to build an intranet Of course, there are lots of ways to build a company intranet. What makes Liferay such a good choice to create an intranet portal? It has got the features we need All of the features we outlined for our intranet come built into Liferay: discussions, wikis, calendars, blogs, and so on are part of what Liferay is designed to do. It is also designed to tie all of these features together into one searchable portal, so we won't be dealing with lots of separate components when we build and use our intranet. Every part will work together with others. Easy to set up and use Liferay has an intuitive interface that uses icons, clear labels, and drag and drop to make it easy to configure and use the intranet. Setting up the intranet will require a bit more work than using it, of course. However, you will be pleasantly surprised by how simple it is—no programming is required to get your intranet up and running. Free and open source How much does Liferay cost? Nothing! It's a free, open source tool. Here, being free means that you can go to Liferay's website and download it without paying anything. You can then go ahead and install it and use it. Liferay comes with an enterprise edition too, for which users need to pay. In addition, Liferay provides full support and access to additional enterprise edition plugins/applications. Liferay makes its money by providing additional services, including training. However, the standard use of Liferay is completely free. Now you probably won't have to pay another penny to get your intranet working. Being open source means that the program code that makes Liferay work is available to anybody to look at and change. Even if you're not a programmer, this is still good for you: If you need Liferay to do something new, then you can hire a programmer to modify Liferay to do it. There are lots of developers studying the source code, looking for ways to make it better. Lots of improvements get incorporated into Liferay's main code. Developers are always working to create plugins—programs that work together with Liferay to add new features. Probably, for now, the big deal here is that it doesn't cost any money. However, as you use Liferay more, you will come to understand the other benefits of open source software for you. Grows with you Liferay is designed in a way that means it can work with thousands and thousands of users at once. 
No matter how big your business is or how much it grows, Liferay will still work and handle all of the information you throw at it. It also has features especially suited to large, international businesses. Are you opening offices in non-English speaking countries? No problem! Liferay has internationalization features tailored to many of the world's popular languages. Works with other tools Liferay is designed to work with other software tools—the ones that you're already using and the ones that you might use in the future—for instance: You can hook up Liferay to your LDAP directory server and SSO so that user details and login credentials are added to Liferay automatically Liferay can work with Alfresco—a popular and powerful Enterprise CMS (used to provide extremely advanced document management capabilities, which are far beyond what Liferay does on its own) Based on "standards" This is a more technical benefit; however, it is a very useful one if you ever want to use Liferay in a more specialized way. Liferay is based on standard technologies that are popular with developers and other IT experts and that confer the following benefits on users: Built using Java: Java is a popular programming language that can run on just about any computer. There are millions of Java programmers in the world, so it won't be too hard to find developers who can customize Liferay. Based on tried and tested components: With any tool, there's a danger of bugs. Liferay uses lots of well-known, widely tested components to minimize the likelihood of bugs creeping in. If you are interested, here are some of the well-known components and technologies Liferay uses—Apache ServiceMix, Mule, ehcache, Hibernate, ICEfaces, Java J2EE/JEE, jBPM, Activiti, JGroups, Alloy UI, Lucene, PHP, Ruby, Seam, Spring and AOP, Struts and Tiles, Tapestry, Velocity, and FreeMarker. Uses standard ways to communicate with other software: There are various standards established to share data between pieces of software. Liferay uses these so that you can easily get information from Liferay into other systems. The standards implemented by Liferay include AJAX, iCalendar and Microformat, JSR-168, JSR-127, JSR-170, JSR-286 (Portlet 2.0), JSR-314 (JSF 2.0), OpenSearch, the Open platform with support for web services, including JSON, Hessian, Burlap, REST, RMI, and WSRP, WebDAV, and CalDAV. Makes publication and collaboration tools Web Content Accessibility Guidelines 2.0 (WCAG 2.0) compliant: The new W3C recommendation is to make web content accessible to a wide range of people with disabilities, including blindness and low vision, deafness and hearing loss, learning disabilities, cognitive limitations, limited movement, speech disabilities, photosensitivity, and combinations of these. For example, the portal integrates CKEditor-standards support, such as W3C (WAI-AA and WCAG), 508 (Section 508). Alloy UI: The Liferay UI supports HTML 5, CSS 3, and Yahoo! User Interface Library 3 (YUI 3). Supports Apache Ant 1.8 and Maven 2: Liferay Portal can be built through Apache Ant by default, where you can build services; clean, compile, and build JavaScript CMD; build language native to ASCII, deploy, fast deploy; and so on. Moreover, Liferay supports Maven 2 SDK, providing Community Edition (CE) releases through public maven repositories as well as Enterprise Edition (EE) customers to install maven artifacts in their local maven repository. Bootstrap: Liferay 6.2 provides support for Twitter Bootstrap out of the box. 
With its fully responsive UI, the benefit of bootstrap is that it will support any device to render the content. Even content authors can use bootstrap markup and styles to make the content nicer. Many of these standards are things that you will never need to know much about, so don't worry if you've never heard of them. Liferay is better for using them, but mostly, you won't even know they are there. Other advantages of Liferay Liferay isn't just for intranets! Users and developers are building all kinds of different websites and systems based on Liferay. Corporate extranets An intranet is great for collaboration and information sharing within a company. An extranet extends this facility to suppliers and customers, who usually log in over the Internet. In many ways, this is similar to an intranet—however, there are a few technical differences. The main difference is that you create user accounts for people who are not part of your company. Collaborative websites Collaborative websites not only provide a secure and administrated framework, but they also empower users with collaborative tools, such as blogs, instant e-mail, message boards, instant messaging, shared calendars, and so on. Moreover, they encourage users to use other tools, such as tag administration, fine-grained permissions, delegable administrator privileges, enterprise taxonomy, and ad hoc user groups. By means of these tools, as an administrator, you can ultimately control what people can and cannot do in Liferay. In many ways, this is similar to an intranet too; however, there are a few technical differences. The main difference is that you use collaborative tools simply, such as blogs, instant e-mail, message boards, instant messaging, shared calendars, and so on. Content management and web publishing You can also use Liferay to run your public company website with content management and web publishing. Content management and web publishing are useful features in websites. It is a fact that the volume of digital content for any organization is increasing on a daily basis. Therefore, an effective CMS is a vital part of any organization. Meanwhile, document management is also useful and more effective when repositories have to be assigned to different departments and groups within the organization. Content management and document management are effective in Liferay. Moreover, when managing and publishing content, we may have to answer many questions, such as "who should be able to update and delete a document from the system?". Fortunately, Liferay's security and permissions model can satisfy the need for secure access and permissions and distinct roles (for example, writer, editor, designer, and administrator). Furthermore, Liferay integrates with the workflow engine. Thus, users can follow a flow to edit, approve, and publish content in the website. Content management and web publishing are similar to an intranet; however, there are a few technical differences. The main difference is that you can manage content and publish web content smoothly. Infrastructure portals Infrastructure portals integrate all possible functions, as we stated previously. This covers collaboration and information sharing within a company in the form of collaborative tools, content management, and web publishing. In infrastructure portals, users can create a unified interface to work with content, regardless of source via content interaction APIs. 
Furthermore, using the same API and the same interface as that of the built-in CMS, users can also manage content and publish web content from third-party systems, such as Alfresco, Vignette, Magnolia, FatWire, Microsoft SharePoint, and so on. Infrastructure portals are similar to an intranet; there are a few technical differences though. The main difference is that you can use collaborative tools, manage content, publish web content, and integrate other systems in one place. Why do you need a portal? The main reason is that a portal can serve as a framework to aggregate content and applications. A portal normally provides a secure and manageable framework where users can easily make new and existing enterprise applications available. In order to build an infrastructure portal smoothly, Liferay Portal provides an SOA-based framework to integrate third-party systems. Out-of-the-box portlets and features Liferay provides out-of-the-box (OOTB) portlets that have key features and can be used in the enterprise intranet very efficiently. These portlets are very scalable and powerful and provide the developer with the tools to customize it very easily. Let's see some of the most frequently used portlets in Liferay Portal. Content management Content management is a common feature in any web-based portal or website: The Web Content portlet has the features of full web publishing, office integration, and the asset library, which contains documents, images, and videos. This portlet also has the structure and templates that help with the designing of the web content's look and feel. Structure can be designed with the help of a visual editor with drag and drop. It has the integrated help feature with tooltips to name the attributes of the fields. The Asset Publisher portlet provides you with the feature to select any type of content/asset, such as wiki pages, web content, calendar events, message board messages, documents, media documents, and many more. It also allows us to use filter on them by types, categories, tags, and sources. The display settings provide configurable settings, which helps the content to be displayed to the end users perfectly. The Document and Media portlet is one of the most usable portlets to store any type of document. It allows you to store and manage your documents. It allows you to manage Liferay documents from your own machine's filesystem with the help of WebDAV integration. It has lots of new, built-in features, such as the inline document preview, image preview, and video player. Document metadata is displayed in document details, which makes it easier for you to review the metadata of the document. Also, Document and Media has features named checkin and checkout that helps editing the document in a group very easily. The Document and Media portlet has the multi-repository integration feature, which allows you to configure or mount any other repository very easily, such as SharePoint, Documentum, and Alfresco, utilizing the CMIS standard. Collaboration Collaboration features are generally ways in which users communicate with each other, such as the ones shown in the following list: The Dynamic data list portlet provides you with the facility of not writing a single line of code to create the form or data list. Say, for example, your corporate intranet needs the job posting done on a daily basis by the HR administrator. The administrator needs to develop the custom portlet to fulfill that requirement. 
Now, the dynamic data list portlet will allow the administrator to create a form for job posting. It's very easy to create and display new data types. The Blog portlet is one of the best features of Liferay. Blog portlets have two other related portlets, namely Recent Bloggers and Blogs Aggregator. The blog portlet provides the best possible ways for chronological publications of personal thoughts and web links in the intranet. Blog portlets can be placed for users of different sites/departments under the respective site//department page. The Calendar portlet provides the feature to create the event and schedule the event. It has many features that help the users in viewing the meeting schedule. The Message Board portlet is a full-featured forum solution with threaded views, categories, RSS capability, avatars, file attachments, previews, dynamic lists of recent posts, and forum statistics. Message Board portlets work with the fine-grained permissions and role-based access control model to give detailed levels of control to administrators and users. The Wiki portlet, like the Message Boards portlet, provides a straightforward wiki solution for both intranet and extranet portals that provides knowledge management among the users. It has all of the features you would expect in a state-of-the-art wiki. Again, it has the features of a file attachment preview, publishing the content, and versioning, and works with a fine-grained permission and role-based access control model. This again takes all the features of the Liferay platform. The Social Activity portlet allows you to tweak the measurements used to calculate user involvement within a site. The contribution and participation values determine the reward value of an action. It uses the blog entry, wiki, and message board points to calculate the user involvement in the site. The Marketplace portlet is placed inside the control panel. It's a hub for the applications provided by Liferay and other partners. You can find that many applications are free, and for certain applications, you need to pay an amount. It's more like an app store. This feature was introduced in Liferay Version 6.1. In the Liferay 6.2 control panel, under the Apps | Store link section, you will see apps that are stored in the Marketplace portlet. Liferay 6.2 comes with a new control panel that is very easy to manage for the portal's Admin users. Liferay Sync is not a portlet; it's a new feature of Liferay that allows you to synchronize documents of Liferay Document and Media with your local system. Liferay provide the Liferay Sync application, which has to be installed in your local system or mobile device. News RSS portlets provide RSS feeds. RSS portlets are used for the publishers by letting them syndicate content automatically. They benefit readers who want to subscribe to timely updates from their favorite websites or to aggregate feeds from many sites into one place. A Liferay RSS portlet is fully customizable, and it allows you to set the URL from which site you would like to get feeds. Social Activities portlets display portal-wide user activity, such as posting on message boards, creating wikis, and adding documents to Documents and Media. There are more portlets for social categories, such as User Statistics portlets, Group Statistics portlets, and Requests portlets. All these portlets are used for the social media. Tools The Search portlet provides faceted search features. 
When a search is performed, facet information will appear based on the results of the search. The number of each asset type and the most frequently occurring tags and categories as well as their frequency will all appear in the left-hand side column of the portlet. It searches through Bookmarks, Blogs Entries, Web Content Articles, Document Library Files, Users, Message Board, and Wiki. Finding more information on Liferay In this article, we looked at what Liferay can do for your corporate intranet and briefly saw why it's a good choice. If you want more background information on Liferay, the best place to start is the Liferay corporate website (http://www.liferay.com) itself. You can find the latest news and events, various training programs offered worldwide, presentations, demonstrations, and hosted trails. More interestingly, Liferay eats its own dog food; corporate websites within forums (called message boards), blogs, and wikis are built by Liferay using its own products. It is a real demo of Liferay Portal's software. Liferay is 100 percent open source and all downloads are available from the Liferay Portal website at http://www.liferay.com/web/guest/downloads/portal and the SourceForge website at http://sourceforge.net/projects/lportal/files. The source code repository is available at https://github.com/liferay. The Liferay website's wiki (http://www.liferay.com/web/guest/community/wiki) contains documentation, including a tutorial, user guide, developer guide, administrator guide, roadmap, and so on. The Liferay website's discussion forums can be accessed at http://www.liferay.com/web/guest/community/forums and the blogs at http://www.liferay.com/community/blogs/highlighted. The official plugins and the community plugins are available at http://www.liferay.com/marketplace and are the best place to share your thoughts, get tips and tricks about Liferay implementation, and use and contribute community plugins. If you would like to file a bug or know more about the fixes in a specific release, then you must visit the bug-tracking system at http://issues.liferay.com/. Summary In this article, we looked at what Liferay can offer your intranet and what we should consider while designing the company's enterprise site. We saw that our final intranet will provide shared documents, discussions, collaborative wikis, and more in a single, searchable portal. Well, Liferay is a great choice for an intranet because it provides so many features and is easy to use, free and open source, extensible, and well-integrated with other tools and standards. We also saw the other kinds of sites Liferay is good for, such as extranets, collaborative websites, content management, web publishing, and infrastructure portals. For the best example of an intranet and extranet, you can visit www.liferay.com. It will provide you with more background information. Resources for Article: Further resources on this subject: Working with a Liferay User / User Group / Organization[article] Liferay, its Installation and setup[article] Building your First Liferay Site [article]

Working with Entity Client and Entity SQL

Packt
21 Aug 2015
11 min read
In this article by Joydip Kanjilal, author of the book Entity Framework Tutorial - Second Edition explains how Entity Framework contains a powerful client-side query engine that allows you to execute queries against the conceptual model of data, irrespective of the underlying data store in use. This query engine works with a rich functional language called Entity SQL (or E-SQL for short), a derivative of Transact SQL (T-SQL), that enables you to query entities or a collection of entities. (For more resources related to this topic, see here.) An overview of the E-SQL language Entity Framework allows you to write programs against the EDM and also add a level of abstraction on top of the relational model. This isolation of the logical view of data from the Object Model is accomplished by expressing queries in terms of abstractions using an enhanced query language called E-SQL. This language is specially designed to query data from the EDM. E-SQL was designed to address the need for a language that can query data from its conceptual view, rather than its logical view. From T-SQL to E-SQL SQL is the primary language that has been in use for years for querying databases. Remember, SQL is a standard and not owned by any particular database vendor. SQL-92 is a standard, and is the most popular SQL standard currently in use. This standard was released in 1992. The 92 in the name reflects this fact. Different database vendors implemented their own flavors of the SQL-92 standard. The T-SQL language was designed by Microsoft as an SQL Server implementation of the SQL-92 standard. Similar to other SQL languages implemented by different database vendors, the E-SQL language is Entity Framework implementation of the SQL-92 standard that can be used to query data from the EDM. E-SQL is a text-based, provider independent, query language used by Entity Framework to express queries in terms of EDM abstractions and to query data from the conceptual layer of the EDM. One of the major differences between E-SQL and T-SQL is in nested queries. Note that you should always enclose your nested queries in E-SQL using parentheses as seen here: SELECT d, (SELECT DEREF (e) FROM NAVIGATE (d, PayrollEntities.FK_Employee_Department) AS e) AS Employees FROM PayrollEntities.Department AS d; The Select VALUE... statement is used to retrieve singleton values. It is also used to retrieve values that don't have any column names. However, the Select ROW... statement is used to select one or more rows. As an example, if you want a value as a collection from an entity without the column name, you can use the VALUE keyword in the SELECT statement as shown here: SELECT VALUE emp.EmployeeName FROM PayrollEntities.Employee as emp The preceding statement will return the employee names from the Employee entity as a collection of strings. In T-SQL, you can have the ORDER BY clause at the end of the last query when using UNION ALL. SELECT EmployeeID, EmployeeName From Employee UNION ALL SELECT EmployeeID, Basic, Allowances FROM Salary ORDER BY EmployeeID On the contrary, you do not have the ORDER BY clause in the UNION ALL operator in E-SQL. Why E-SQL when I already have LINQ to Entities? LINQ to Entities is a new version of LINQ, well suited for Entity Framework. But why do you need E-SQL when you already have LINQ to Entities available to you? LINQ to Entities queries are verified at the time of compilation. Therefore, it is not at all suited for building and executing dynamic queries. 
On the contrary, E-SQL queries are verified at runtime, so they can be used for building and executing dynamic queries. You now have a new ADO.NET provider in E-SQL, which is a sophisticated query engine that can be used to query your data from the conceptual model. It should be noted, however, that both LINQ and E-SQL queries are converted into canonical command trees that are in turn translated into database-specific query statements based on the underlying database provider in use, as shown in the following diagram: We will now take a quick look at the features of E-SQL before we delve deep into this language. Features of E-SQL These are the features of E-SQL: Provider neutrality: E-SQL is independent of the underlying ADO.NET data provider in use because it works on top of the conceptual model. SQL like: The syntax of E-SQL statements resemble T-SQL. Expressive with support for entities and types: You can write your E-SQL queries in terms of EDM abstractions. Composable and orthogonal: You can use a subquery wherever you have support for an expression of that type. The subqueries are all treated uniformly regardless of where they have been used. In the sections that follow, we will take a look at the E-SQL language in depth. We will discuss the following points: Operators Expressions Identifiers Variables Parameters Canonical functions Operators in E-SQL An operator is one that operates on a particular operand to perform an operation. Operators in E-SQL can broadly be classified into the following categories: Arithmetic operators: These are used to perform arithmetic operations. Comparison operators: You can use these to compare the values of two operands. Logical operators: These are used to perform logical operations. Reference operators: These act as logical pointers to a particular entity belonging to a particular entity set. Type operators: These can operate on the type of an expression. Case operators: These operate on a set of Boolean expressions. Set operators: These operate on set operations. Arithmetic operators Here is an example of an arithmetic operator: SELECT VALUE s FROM PayrollEntities.Salary AS s where s.Basic = 5000 + 1000 The following arithmetic operators are available in E-SQL: + (add) - (subtract) / (divide) % (modulo) * (multiply) Comparison operators Here is an example of a comparison operator: SELECT VALUE e FROM PayrollEntities.Employee AS e where e.EmployeeID = 1 The following is a list of the comparison operators available in E-SQL: = (equals) != (not equal to) <> (not equal to) > (greater than) < (less than) >= (greater than or equal to) <= (less than or equal to) Logical operators Here is an example of using logical operators in E-SQL: SELECT VALUE s FROM PayrollEntities.Salary AS s where s.Basic > 5000 && s.Allowances > 3000 This is a list of the logical operators available in E-SQL: && (And) ! 
(Not) || (Or) Reference operators The following is an example of how you can use a reference operator in E-SQL: SELECT VALUE REF(e).FirstName FROM PayrollEntities.Employee as e The following is a list of the reference operators available in E-SQL: Key Ref CreateRef DeRef Type operators Here is an example of a type operator that returns a collection of employees from a collection of persons: SELECT VALUE e FROM OFTYPE(PayrollEntities.Person, PayrollEntities.Employee) AS e The following is a list of the type operators available in E-SQL: OfType Cast Is [Not] Of Treat Set operators This is an example of how you can use a set operator in E-SQL: (Select VALUE e from PayrollEntities.Employee as e where e.FirstName Like 'J%') Union All ( select VALUE s from PayrollEntities.Employee as s where s.DepartmentID = 1) Here is a list of the set operators available in E-SQL: Set Union Element AnyElement Except [Not] Exists [Not] In Overlaps Intersect Operator precedence When you have multiple operators operating in a sequence, the order in which the operators will be executed will be determined by the operator precedence. The following table shows the operator, operator type, and their precedence levels in E-SQL language: Operators Operator type Precedence level . , [] () Primary Level 1 ! not Unary Level 2 * / % Multiplicative Level 3 + and - Additive Level 4 < > <= >= Relational Level 5 = != <> Equality Level 6 && Conditional And Level 7 || Conditional Or Level 8 Expressions in E-SQL Expressions are the building blocks of the E-SQL language. Here are some examples of how expressions are represented: 1; //This represents one scalar item {2}; //This represents a collection of one element {3, 4, 5} //This represents a collection of multiple elements Query expressions in E-SQL Query expressions are used in conjunction with query operators to perform a certain operation and return a result set. Query expressions in E-SQL are actually a series of clauses that are represented using one or more of the following: SELECT: This clause is used to specify or limit the number of elements that are returned when a query is executed in E-SQL. FROM: This clause is used to specify the source or collection for retrieval of the elements in a query. WHERE: This clause is used to specify a particular expression. HAVING: This clause is used to specify a filter condition for retrieval of the result set. GROUP BY: This clause is used to group the elements returned by a query. ORDER BY: This clause is used to order the elements returned in either ascending or descending order. Here is the complete syntax of query expressions in E-SQL: SELECT VALUE [ ALL | DISTINCT ] FROM expression [ ,...n ] as C [ WHERE expression ] [ GROUP BY expression [ ,...n ] ] [ HAVING search_condition ] [ ORDER BY expression] And here is an example of a typical E-SQL query with all clause types being used: SELECT emp.FirstName FROM PayrollEntities.Employee emp, PayrollEntities.Department dept Group By dept.DepartmentName Where emp.DepartmentID = dept.DepartmentID Having emp.EmployeeID > 5 Identifiers, variables, parameters, and types in E-SQL Identifiers in E-SQL are of the following two types: Simple identifiers Quoted identifiers Simple identifiers are a sequence of alphanumeric or underscore characters. Note that an identifier should always begin with an alphabetical character. 
As an example, the following are valid identifiers: a12_ab M_09cd W0001m However, the following are invalid identifiers: 9abcd _xyz 0_pqr Quoted identifiers are those that are enclosed within square brackets ([]). Here are some examples of quoted identifiers: SELECT emp.EmployeeName AS [Employee Name] FROM Employee as emp SELECT dept.DepartmentName AS [Department Name] FROM Department as dept Quoted identifiers cannot contain a new line, tab, backspace, or carriage return characters. In E-SQL, a variable is a reference to a named expression. Note that the naming conventions for variables follow the same rules for an identifier. In other words, a valid variable reference to a named expression in E-SQL should be a valid identifier too. Here is an example: SELECT emp FROM Employee as emp; In the preceding example, emp is a variable reference. Types can be of three versions: Primitive types like integers and strings Nominal types such as entity types, entity sets, and relationships Transient types like rows, collections, and references The E-SQL language supports the following type categories: Rows Collections References Row A row, which is also known as a tuple, has no identity or behavior and cannot be inherited. The following statement returns one row that contains six elements: ROW (1, 'Joydip'); Collections Collections represent zero or more instances of other instances. You can use SET () to retrieve unique values from a collection of values. Here is an example: SET({1,1,2,2,3,3,4,4,5,5,6,6}) The preceding example will return the unique values from the set. Specifically, 2, 3, 4, 5, and 6. This is equivalent to the following statement: Select Value Distinct x from {1,1,2,2,3,3,4,4,5,5,6,6} As x; You can create collections using MULTISET () or even using {} as shown in the following examples: MULTISET (1, 2, 3, 4, 5, 6) The following represents the same as the preceding example: {1, 2, 3, 4, 5, 6} Here is how you can return a collection of 10 identical rows each with six elements in them: SELECT ROW(1,'Joydip') from {1,2,3,4,5,6,7,8,9,10} To return a collection of all rows from the employee set, you can use the following: Select emp from PayrollEntities.Employee as emp; Similarly, to select all rows from the department set, you use the following: Select dept from PayrollEntities.Department as dept; Reference A reference denotes a logical pointer or reference, to a particular entity. In essence, it is a foreign key to a specific entity set. Operators are used to perform operations on one or more operands. In E-SQL, the following operators are available to construct, deconstruct, and also navigate through references: KEY REF CREATEREF DEREF To create a reference to an instance of Employee, you can use REF() as shown here: SELECT REF (emp) FROM PayrollEntities.Employee as emp Once you have created a reference to an entity using REF(), you can also dereference the entity using DREF() as shown: DEREF (CREATEREF(PayrollEntities.Employee, ROW(@EmployeeID))) Summary In this article, we explored E-SQL and how it can be used with the Entity Client provider to perform CRUD operations in our applications. We discussed the differences between E-SQL and T-SQL and the differences between E-SQL and LINQ. We also discussed when one should choose E-SQL instead of LINQ to query data in applications. 
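As a compact illustration of the Entity Client workflow discussed in this article, the following sketch shows how an E-SQL query might be executed through the EntityClient provider. This is a sketch rather than code from the book: the entity connection string name, the PayrollEntities container, and the filter property are assumptions based on the examples above, and the EntityClient types shown live in System.Data.EntityClient (in EF6 they moved to System.Data.Entity.Core.EntityClient).

using System;
using System.Data;
using System.Data.Common;
using System.Data.EntityClient; // System.Data.Entity.Core.EntityClient in EF6

class EntitySqlSample
{
    static void Main()
    {
        // "name=PayrollEntities" assumes an entity connection string of that
        // name in the configuration file (an assumption for this sketch).
        using (var connection = new EntityConnection("name=PayrollEntities"))
        {
            connection.Open();

            using (EntityCommand command = connection.CreateCommand())
            {
                // The E-SQL text targets the conceptual model, not the store schema.
                command.CommandText =
                    "SELECT VALUE emp.EmployeeName " +
                    "FROM PayrollEntities.Employee AS emp " +
                    "WHERE emp.DepartmentID = @DeptId";
                command.Parameters.Add(
                    new EntityParameter("DeptId", DbType.Int32) { Value = 1 });

                // EntityCommand requires SequentialAccess when executing a reader.
                using (DbDataReader reader =
                    command.ExecuteReader(CommandBehavior.SequentialAccess))
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(reader.GetString(0));
                    }
                }
            }
        }
    }
}

Because the query is plain text, it can be composed at runtime, which is exactly the dynamic-query scenario where E-SQL has the edge over LINQ to Entities.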
Resources for Article: Further resources on this subject: Hosting the service in IIS using the TCP protocol [article] Entity Framework Code-First: Accessing Database Views and Stored Procedures [article] Entity Framework DB First – Inheritance Relationships between Entities [article]

Advanced Data Access Patterns

Packt
11 Aug 2015
25 min read
In this article by, Suhas Chatekar, author of the book Learning NHibernate 4, we would dig deeper into that statement and try to understand what those downsides are and what can be done about them. In our attempt to address the downsides of repository, we would present two data access patterns, namely specification pattern and query object pattern. Specification pattern is a pattern adopted into data access layer from a general purpose pattern used for effectively filtering in-memory data. Before we begin, let me reiterate – repository pattern is not bad or wrong choice in every situation. If you are building a small and simple application involving a handful of entities then repository pattern can serve you well. But if you are building complex domain logic with intricate database interaction then repository may not do justice to your code. The patterns presented can be used in both simple and complex applications, and if you feel that repository is doing the job perfectly then there is no need to move away from it. (For more resources related to this topic, see here.) Problems with repository pattern A lot has been written all over the Internet about what is wrong with repository pattern. A simple Google search would give you lot of interesting articles to read and ponder about. We would spend some time trying to understand problems introduced by repository pattern. Generalization FindAll takes name of the employee as input along with some other parameters required for performing the search. When we started putting together a repository, we said that Repository<T> is a common repository class that can be used for any entity. But now FindAll takes a parameter that is only available on Employee, thus locking the implementation of FindAll to the Employee entity only. In order to keep the repository still reusable by other entities, we would need to part ways from the common Repository<T> class and implement a more specific EmployeeRepository class with Employee specific querying methods. This fixes the immediate problem but introduces another one. The new EmployeeRepository breaks the contract offered by IRepository<T> as the FindAll method cannot be pushed on the IRepository<T> interface. We would need to add a new interface IEmployeeRepository. Do you notice where this is going? You would end up implementing lot of repository classes with complex inheritance relationships between them. While this may seem to work, I have experienced that there are better ways of solving this problem. Unclear and confusing contract What happens if there is a need to query employees by a different criteria for a different business requirement? Say, we now need to fetch a single Employee instance by its employee number. Even if we ignore the above issue and be ready to add a repository class per entity, we would need to add a method that is specific to fetching the Employee instance matching the employee number. This adds another dimension to the code maintenance problem. Imagine how many such methods we would end up adding for a complex domain every time someone needs to query an entity using a new criteria. With several methods on repository contract that query same entity using different criteria makes the contract less clear and confusing for new developers. Such a pattern also makes it difficult to reuse code even if two methods are only slightly different from each other. 
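To make the contract-growth problem concrete, here is a small illustrative sketch; the method names are invented for this example and are not taken from the book's code, and Employee is the domain entity used throughout the article.

using System.Collections.Generic;

// Illustrative only: each new business question adds yet another
// criteria-specific method, and the interface drifts further away from
// the generic IRepository<T> contract it started from.
public interface IEmployeeRepository
{
    Employee GetById(int id);
    Employee GetByEmployeeNumber(string employeeNumber);
    IEnumerable<Employee> FindAllByName(string name);
    IEnumerable<Employee> FindAllLivingIn(string city);
    IEnumerable<Employee> FindAllHavingOptedForBenefits();
    // ...and so on, one method per new query criterion
}

Every one of these methods hides a slightly different query, which is exactly the duplication and contract bloat described above.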
Leaky abstraction In order to make methods on repositories reusable in different situations, lot of developers tend to add a single method on repository that does not take any input and return an IQueryable<T> by calling ISession.Query<T> inside it, as shown next: public IQueryable<T> FindAll() {    return session.Query<T>(); } IQueryable<T> returned by this method can then be used to construct any query that you want outside of repository. This is a classic case of leaky abstraction. Repository is supposed to abstract away any concerns around querying the database, but now what we are doing here is returning an IQueryable<T> to the consuming code and asking it to build the queries, thus leaking the abstraction that is supposed to be hidden into repository. IQueryable<T> returned by the preceding method holds an instance of ISession that would be used to ultimately interact with database. Since repository has no control over how and when this IQueryable would invoke database interaction, you might get in trouble. If you are using "session per request" kind of pattern then you are safeguarded against it but if you are not using that pattern for any reason then you need to watch out for errors due to closed or disposed session objects. God object anti-pattern A god object is an object that does too many things. Sometimes, there is a single class in an application that does everything. Such an implementation is almost always bad as it majorly breaks the famous single responsibility principle (SRP) and reduces testability and maintainability of code. A lot can be written about SRP and god object anti-pattern but since it is not the primary topic, I would leave the topic with underscoring the importance of staying away from god object anti-pattern. Avid readers can Google on the topic if they are interested. Repositories by nature tend to become single point of database interaction. Any new database interaction goes through repository. Over time, repositories grow organically with large number of methods doing too many things. You may spot the anti-pattern and decide to break the repository into multiple small repositories but the original single repository would be tightly integrated with your code in so many places that splitting it would be a difficult job. For a contained and trivial domain model, repository pattern can be a good choice. So do not abandon repositories entirely. It is around complex and changing domain that repositories start exhibiting the problems just discussed. You might still argue that repository is an unneeded abstraction and we can very well use NHibernate directly for a trivial domain model. But I would caution against any design that uses NHibernate directly from domain or domain services layer. No matter what design I use for data access, I would always adhere to "explicitly declare capabilities required" principle. The abstraction that offers required capability can be a repository interface or some other abstractions that we would learn. Specification pattern Specification pattern is a reusable and object-oriented way of applying business rules on domain entities. The primary use of specification pattern is to select subset of entities from a larger collection of entities based on some rules. An important characteristic of specification pattern is combining multiple rules by chaining them together. Specification pattern was in existence before ORMs and other data access patterns had set their feet in the development community. 
The original form of specification pattern dealt with in-memory collections of entities. The pattern was then adopted to work with ORMs such as NHibernate as people started seeing the benefits that specification pattern could bring about. We would first discuss specification pattern in its original form. That would give us a good understanding of the pattern. We would then modify the implementation to make it fit with NHibernate. Specification pattern in its original form Let's look into an example of specification pattern in its original form. A specification defines a rule that must be satisfied by domain objects. This can be generalized using an interface definition, as follows: public interface ISpecification<T> { bool IsSatisfiedBy(T entity); } ISpecification<T> defines a single method IsSatisifedBy. This method takes the entity instance of type T as input and returns a Boolean value depending on whether the entity passed satisfies the rule or not. If we were to write a rule for employees living in London then we can implement a specification as follows: public class EmployeesLivingIn : ISpecification<Employee> { public bool IsSatisfiedBy(Employee entity) {    return entity.ResidentialAddress.City == "London"; } } The EmployeesLivingIn class implements ISpecification<Employee> telling us that this is a specification for the Employee entity. This specification compares the city from the employee's ResidentialAddress property with literal string "London" and returns true if it matches. You may be wondering why I have named this class as EmployeesLivingIn. Well, I had some refactoring in mind and I wanted to make my final code read nicely. Let's see what I mean. We have hardcoded literal string "London" in the preceding specification. This effectively stops this class from being reusable. What if we need a specification for all employees living in Paris? Ideal thing to do would be to accept "London" as a parameter during instantiation of this class and then use that parameter value in the implementation of the IsSatisfiedBy method. Following code listing shows the modified code: public class EmployeesLivingIn : ISpecification<Employee> { private readonly string city;   public EmployeesLivingIn(string city) {    this.city = city; }   public bool IsSatisfiedBy(Employee entity) {    return entity.ResidentialAddress.City == city; } } This looks good without any hardcoded string literals. Now if I wanted my original specification for employees living in London then following is how I could build it: var specification = new EmployeesLivingIn("London"); Did you notice how the preceding code reads in plain English because of the way class is named? Now, let's see how to use this specification class. Usual scenario where specifications are used is when you have got a list of entities that you are working with and you want to run a rule and find out which of the entities in the list satisfy that rule. Following code listing shows a very simple use of the specification we just implemented: List<Employee> employees = //Loaded from somewhere List<Employee> employeesLivingInLondon = new List<Employee>(); var specification = new EmployeesLivingIn("London");   foreach(var employee in employees) { if(specification.IsSatisfiedBy(employee)) {    employeesLivingInLondon.Add(employee); } } We have a list of employees loaded from somewhere and we want to filter this list and get another list comprising of employees living in London. 
Till this point, the only benefit we have had from specification pattern is that we have managed to encapsulate the rule into a specification class which can be reused anywhere now. For complex rules, this can be very useful. But for simple rules, specification pattern may look like lot of plumbing code unless we overlook the composability of specifications. Most power of specification pattern comes from ability to chain multiple rules together to form a complex rule. Let's write another specification for employees who have opted for any benefit: public class EmployeesHavingOptedForBenefits : ISpecification<Employee> { public bool IsSatisfiedBy(Employee entity) {    return entity.Benefits.Count > 0; } } In this rule, there is no need to supply any literal value from outside so the implementation is quite simple. We just check if the Benefits collection on the passed employee instance has count greater than zero. You can use this specification in exactly the same way as earlier specification was used. Now if there is a need to apply both of these specifications to an employee collection, then very little modification to our code is needed. Let's start with adding an And method to the ISpecification<T> interface, as shown next: public interface ISpecification<T> { bool IsSatisfiedBy(T entity); ISpecification<T> And(ISpecification<T> specification); } The And method accepts an instance of ISpecification<T> and returns another instance of the same type. As you would have guessed, the specification that is returned from the And method would effectively perform a logical AND operation between the specification on which the And method is invoked and specification that is passed into the And method. The actual implementation of the And method comes down to calling the IsSatisfiedBy method on both the specification objects and logically ANDing their results. Since this logic does not change from specification to specification, we can introduce a base class that implements this logic. All specification implementations can then derive from this new base class. Following is the code for the base class: public abstract class Specification<T> : ISpecification<T> { public abstract bool IsSatisfiedBy(T entity);   public ISpecification<T> And(ISpecification<T> specification) {    return new AndSpecification<T>(this, specification); } } We have marked Specification<T> as abstract as this class does not represent any meaningful business specification and hence we do not want anyone to inadvertently use this class directly. Accordingly, the IsSatisfiedBy method is marked abstract as well. In the implementation of the And method, we are instantiating a new class AndSepcification. This class takes two specification objects as inputs. We pass the current instance and one that is passed to the And method. The definition of AndSpecification is very simple. public class AndSpecification<T> : Specification<T> { private readonly Specification<T> specification1; private readonly ISpecification<T> specification2;   public AndSpecification(Specification<T> specification1, ISpecification<T> specification2) {    this.specification1 = specification1;    this.specification2 = specification2; }   public override bool IsSatisfiedBy(T entity) {    return specification1.IsSatisfiedBy(entity) &&    specification2.IsSatisfiedBy(entity); } } AndSpecification<T> inherits from abstract class Specification<T> which is obvious. 
IsSatisfiedBy is simply performing a logical AND operation on the outputs of the ISatisfiedBy method on each of the specification objects passed into AndSpecification<T>. After we change our previous two business specification implementations to inherit from abstract class Specification<T> instead of interface ISpecification<T>, following is how we can chain two specifications using the And method that we just introduced: List<Employee> employees = null; //= Load from somewhere List<Employee> employeesLivingInLondon = new List<Employee>(); var specification = new EmployeesLivingIn("London")                                     .And(new EmployeesHavingOptedForBenefits());   foreach (var employee in employees) { if (specification.IsSatisfiedBy(employee)) {    employeesLivingInLondon.Add(employee); } } There is literally nothing changed in how the specification is used in business logic. The only thing that is changed is construction and chaining together of two specifications as depicted in bold previously. We can go on and implement other chaining methods but point to take home here is composability that the specification pattern offers. Now let's look into how specification pattern sits beside NHibernate and helps in fixing some of pain points of repository pattern. Specification pattern for NHibernate Fundamental difference between original specification pattern and the pattern applied to NHibernate is that we had an in-memory list of objects to work with in the former case. In case of NHibernate we do not have the list of objects in the memory. We have got the list in the database and we want to be able to specify rules that can be used to generate appropriate SQL to fetch the records from database that satisfy the rule. Owing to this difference, we cannot use the original specification pattern as is when we are working with NHibernate. Let me show you what this means when it comes to writing code that makes use of specification pattern. A query, in its most basic form, to retrieve all employees living in London would look something as follows: var employees = session.Query<Employee>()                .Where(e => e.ResidentialAddress.City == "London"); The lambda expression passed to the Where method is our rule. We want all the Employee instances from database that satisfy this rule. We want to be able to push this rule behind some kind of abstraction such as ISpecification<T> so that this rule can be reused. We would need a method on ISpecification<T> that does not take any input (there are no entities in-memory to pass) and returns a lambda expression that can be passed into the Where method. Following is how that method could look: public interface ISpecification<T> where T : EntityBase<T> { Expression<Func<T, bool>> IsSatisfied(); } Note the differences from the previous version. We have changed the method name from IsSatisfiedBy to IsSatisfied as there is no entity being passed into this method that would warrant use of word By in the end. This method returns an Expression<Fund<T, bool>>. If you have dealt with situations where you pass lambda expressions around then you know what this type means. If you are new to expression trees, let me give you a brief explanation. Func<T, bool> is a usual function pointer. This pointer specifically points to a function that takes an instance of type T as input and returns a Boolean output. Expression<Func<T, bool>> takes this function pointer and converts it into a lambda expression. 
An implementation of this new interface would make things clearer. The next code listing shows the specification for employees living in London written against the new contract:

public class EmployeesLivingIn : ISpecification<Employee>
{
  private readonly string city;

  public EmployeesLivingIn(string city)
  {
    this.city = city;
  }

  public Expression<Func<Employee, bool>> IsSatisfied()
  {
    return e => e.ResidentialAddress.City == city;
  }
}

There is not much changed here compared to the previous implementation. The definition of IsSatisfied now returns a lambda expression instead of a bool. This lambda is exactly the same as the one we used in the ISession example. If I had to rewrite that example using the preceding specification, then following is how that would look:

var specification = new EmployeesLivingIn("London");
var employees = session.Query<Employee>()
                       .Where(specification.IsSatisfied());

We now have a specification wrapped in a reusable object that we can send straight to NHibernate's ISession interface. Now let's think about how we can use this from within domain services where we used repositories before. We do not want to reference ISession or any other NHibernate type from domain services as that would break onion architecture. We have two options. We can declare a new capability that can take a specification and execute it against the ISession interface. We can then make domain service classes take a dependency on this new capability. Or we can use the existing IRepository capability and add a method on it which takes the specification and executes it. We started this article with a statement that repositories have a downside, specifically when it comes to querying entities using different criteria. But now we are considering an option to enrich the repositories with specifications. Is that contradictory? Remember that one of the problems with repository was that every time there was a new criterion to query an entity, we needed a new method on the repository. Specification pattern fixes that problem. Specification pattern has taken the criterion out of the repository and moved it into its own class, so we only ever need a single method on the repository that takes in an ISpecification<T> and executes it. So using a repository is not as bad as it sounds. Following is how the new method on the repository interface would look:

public interface IRepository<T> where T : EntityBase<T>
{
  void Save(T entity);
  void Update(int id, T entity);
  T GetById(int id);
  IEnumerable<T> Apply(ISpecification<T> specification);
}

The Apply method is the new method that works with specifications. Note that we have removed all other methods that ran various different queries and replaced them with this new method. Methods to save and update the entities are still there. Even the method GetById is there, as the mechanism used to get an entity by ID is not the same as the one used by specifications. So we retain that method. One thing I have experimented with in some projects is to split read operations from write operations. The IRepository interface represents something that is capable of both reading from the database and writing to the database. Sometimes, we only need a capability to read from the database, in which case IRepository looks like an unnecessarily heavy object with capabilities we do not need. In such a situation, declaring a new capability to execute specifications makes more sense. I would leave the actual code for this as a self-exercise for our readers.
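For readers who would like a starting point for that exercise, the following is one possible sketch of such a read-only capability. The interface name and shape are my own; it builds on the ISpecification<T> contract and the EntityBase<T> constraint shown in this article, and on NHibernate's LINQ provider.

using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq; // brings the Query<T>() extension method on ISession

// A read-side-only capability: domain services depend on this interface,
// while the implementation below is the only place that knows about NHibernate.
public interface ISpecificationExecutor<T> where T : EntityBase<T>
{
    IList<T> Apply(ISpecification<T> specification);
}

public class SpecificationExecutor<T> : ISpecificationExecutor<T>
    where T : EntityBase<T>
{
    private readonly ISession session;

    public SpecificationExecutor(ISession session)
    {
        this.session = session;
    }

    public IList<T> Apply(ISpecification<T> specification)
    {
        // The specification supplies the filter; this class only executes it.
        return session.Query<T>()
                      .Where(specification.IsSatisfied())
                      .ToList();
    }
}

With this in place, the repository can stay focused on writes while all querying goes through the executor, for example: var londoners = executor.Apply(new EmployeesLivingIn("London"));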
Specification chaining In the original implementation of specification pattern, chaining was simply a matter of carrying out logical AND between the outputs of the IsSatisfiedBy method on the specification objects involved in chaining. In case of NHibernate adopted version of specification pattern, the end result boils down to the same but actual implementation is slightly more complex than just ANDing the results. Similar to original specification pattern, we would need an abstract base class Specification<T> and a specialized AndSepcificatin<T> class. I would just skip these details. Let's go straight into the implementation of the IsSatisifed method on AndSpecification where actual logical ANDing happens. public override Expression<Func<T, bool>> IsSatisfied() { var p = Expression.Parameter(typeof(T), "arg1"); return Expression.Lambda<Func<T, bool>>(Expression.AndAlso(          Expression.Invoke(specification1.IsSatisfied(), p),          Expression.Invoke(specification2.IsSatisfied(), p)), p); } Logical ANDing of two lambda expression is not a straightforward operation. We need to make use of static methods available on helper class System.Linq.Expressions.Expression. Let's try to go from inside out. That way it is easier to understand what is happening here. Following is the reproduction of innermost call to the Expression class: Expression.Invoke(specification1.IsSatisfied(), parameterName) In the preceding code, we are calling the Invoke method on the Expression class by passing the output of the IsSatisfied method on the first specification. Second parameter passed to this method is a temporary parameter of type T that we created to satisfy the method signature of Invoke. The Invoke method returns an InvocationExpression which represents the invocation of the lambda expression that was used to construct it. Note that actual lambda expression is not invoked yet. We do the same with second specification in question. Outputs of both these operations are then passed into another method on the Expression class as follows: Expression.AndAlso( Expression.Invoke(specification1.IsSatisfied(), parameterName), Expression.Invoke(specification2.IsSatisfied(), parameterName) ) Expression.AndAlso takes the output from both specification objects in the form of InvocationExpression type and builds a special type called BinaryExpression which represents a logical AND between the two expressions that were passed to it. Next we convert this BinaryExpression into an Expression<Func<T, bool>> by passing it to the Expression.Lambda<Func<T, bool>> method. This explanation is not very easy to follow and if you have never used, built, or modified lambda expressions programmatically like this before, then you would find it very hard to follow. In that case, I would recommend not bothering yourself too much with this. Following code snippet shows how logical ORing of two specifications can be implemented. Note that the code snippet only shows the implementation of the IsSatisfied method. public override Expression<Func<T, bool>> IsSatisfied() { var parameterName = Expression.Parameter(typeof(T), "arg1"); return Expression.Lambda<Func<T, bool>>(Expression.OrElse( Expression.Invoke(specification1.IsSatisfied(), parameterName), Expression.Invoke(specification2.IsSatisfied(), parameterName)), parameterName); } Rest of the infrastructure around chaining is exactly same as the one presented during discussion of original specification pattern. 
I have avoided giving full class definitions here to save space but you can download the code to look at complete implementation. That brings us to end of specification pattern. Though specification pattern is a great leap forward from where repository left us, it does have some limitations of its own. Next, we would look into what these limitations are. Limitations Specification pattern is great and unlike repository pattern, I am not going to tell you that it has some downsides and you should try to avoid it. You should not. You should absolutely use it wherever it fits. I would only like to highlight two limitations of specification pattern. Specification pattern only works with lambda expressions. You cannot use LINQ syntax. There may be times when you would prefer LINQ syntax over lambda expressions. One such situation is when you want to go for theta joins which are not possible with lambda expressions. Another situation is when lambda expressions do not generate optimal SQL. I will show you a quick example to understand this better. Suppose we want to write a specification for employees who have opted for season ticket loan benefit. Following code listing shows how that specification could be written: public class EmployeeHavingTakenSeasonTicketLoanSepcification :Specification<Employee> { public override Expression<Func<Employee, bool>> IsSatisfied() {    return e => e.Benefits.Any(b => b is SeasonTicketLoan); } } It is a very simple specification. Note the use of Any to iterate over the Benefits collection to check if any of the Benefit in that collection is of type SeasonTicketLoan. Following SQL is generated when the preceding specification is run: SELECT employee0_.Id           AS Id0_,        employee0_.Firstname     AS Firstname0_,        employee0_.Lastname     AS Lastname0_,        employee0_.EmailAddress AS EmailAdd5_0_,        employee0_.DateOfBirth   AS DateOfBi6_0_,        employee0_.DateOfJoining AS DateOfJo7_0_,        employee0_.IsAdmin       AS IsAdmin0_,        employee0_.Password     AS Password0_ FROM   Employee employee0_ WHERE EXISTS (SELECT benefits1_.Id FROM   Benefit benefits1_ LEFT OUTER JOIN Leave benefits1_1_ ON benefits1_.Id = benefits1_1_.Id LEFT OUTER JOIN SkillsEnhancementAllowance benefits1_2_ ON benefits1_.Id = benefits1_2_.Id LEFT OUTER JOIN SeasonTicketLoan benefits1_3_ ON benefits1_.Id = benefits1_3_.Id WHERE employee0_.Id = benefits1_.Employee_Id AND CASE WHEN benefits1_1_.Id IS NOT NULL THEN 1      WHEN benefits1_2_.Id IS NOT NULL THEN 2      WHEN benefits1_3_.Id IS NOT NULL THEN 3       WHEN benefits1_.Id IS NOT NULL THEN 0      END = 3) Isn't that SQL too complex? It is not only complex on your eyes but this is not how I would have written the needed SQL in absence of NHibernate. I would have just inner-joined the Employee, Benefit, and SeasonTicketLoan tables to get the records I need. On large databases, the preceding query may be too slow. There are some other such situations where queries written using lambda expressions tend to generate complex or not so optimal SQL. If we use LINQ syntax instead of lambda expressions, then we can get NHibernate to generate just the SQL. Unfortunately, there is no way of fixing this with specification pattern. Summary Repository pattern has been around for long time but suffers through some issues. General nature of its implementation comes in the way of extending repository pattern to use it with complex domain models involving large number of entities. 
Repository contract can be limiting and confusing when there is a need to write complex and very specific queries. Trying to fix these issues with repositories may result in leaky abstraction which can bite us later. Moreover, repositories maintained with less care have a tendency to grow into god objects, and maintaining them beyond that point becomes a challenge. Specification pattern and query object pattern solve these issues on the read side of things. Different applications have different data access requirements. Some applications are write-heavy while others are read-heavy, but only a small number of applications fit into the former category. A large number of applications developed these days are read-heavy. I have worked on applications in which more than 90 percent of database operations queried data and less than 10 percent actually inserted or updated data in the database. Having this knowledge about the application you are developing can be very useful in determining how you are going to design your data access layer. That brings us to the end of our NHibernate journey. Not quite, but yes, in a way. Resources for Article: Further resources on this subject: NHibernate 3: Creating a Sample Application [article] NHibernate 3.0: Using LINQ Specifications in the data access layer [article] NHibernate 2: Mapping relationships and Fluent Mapping [article]

Entering People Information

Packt
24 Jun 2015
9 min read
In this article by Pravin Ingawale, author of the book Oracle E-Business Suite R12.x HRMS – A Functionality Guide, will learn about entering a person's information in Oracle HRMS. We will understand the hiring process in Oracle. This, actually, is part of the Oracle I-recruitment module in Oracle apps. Then we will see how to create an employee in Core HR. Then, we will learn the concept of person types and defining person types. We will also learn about entering information for an employee, including additional information. Let's see how to create an employee in core HR. (For more resources related to this topic, see here.) Creating an employee An employee is the most important entity in an organization. Before creating an employee, the HR officer must know the date from which the employee will be active in the organization. In Oracle terminology, you can call it the employee's hire date. Apart from this, the HR officer must know basic details of the employee such as first name, last name, date of birth, and so on. Navigate to US HRMS Manager | People | Enter and Maintain. This is the basic form, called People in Oracle HRMS, which is used to create an employee in the application. As you can see in the form, there is a field named Last, which is marked in yellow. This indicates that this is mandatory to create an employee record. First, you need to set the effective date on the form. You can set this by clicking on the icon, as shown in the following screenshot: You need to enter the mandatory field data along with additional data. The following screenshot shows the data entered: Once you enter the required data, you need to specify the action for the entered record. The action we have selected is Create Employment. The Create Employment action will create an employee in the application. There are other actions such as Create Applicant, which is used to create an applicant for I-Recruitment. The Create Placement action is used to create a contingent worker in your enterprise. Once you select this action, it will prompt you to enter the person type of this employee as in the following screenshot. Select the Person Type as Employee and save the record. We will see the concept of person type in the next section. Once you select the employee person type and then save the record, the system will automatically generate the employee number for the person. In our case, the system has generated an employee number 10160. So now, we have created an employee in the application. Concept of person types In any organization, you need to identify different types of people. Here, you can say that you need to group different types of people. There are basically three types of people you capture in HRMS system. They are as follows: Employees: These include current employees and past employees. Past employees are those who were part of your enterprise earlier and are no longer active in the system. You can call them terminated or ex-employees. Applicants: If you are using I-recruitment, applicants can be created. External people: Contact is a special category of external type. Contacts are associated with an employee or an applicant. For example, there might be a need to record the name, address, and phone number of an emergency contact for each employee in your organization. There might also be a need to keep information on dependents of an employee for medical insurance purposes or for some payments in payroll processing. Using person types There are predefined person types in Oracle HRMS. 
You can add more person types as per your requirements. You can also change the name of existing person types when you install the system. Let's take an example for your understanding. Your organization has employees. There might be employees of different types; you might have regular employees and employees who are contractors in your organization. Hence, you can categorize employees in your organization into two types: Regular employees Consultants The reason for creating these categories is to easily identify the employee type and store different types of information for each category. Similarly, if you are using I-recruitment, then you will have candidates. Hence, you can categorize candidates into two types. One will be internal candidate and the other will be external candidate. Internal candidates will be employees within your organization who can apply for an opening within your organization. An external candidate is an applicant who does not work for your organization but is applying for a position that is open in your company. Defining person types In an earlier section, you learned the concept of person types, and now you will learn how to define person types in the system. Navigate to US HRMS Manager | Other Definitions | Person Types. In the preceding screenshot, you can see four fields, that is, User Name, System Name, Active, and Default flag. There are eight person types recognized by the system and identified by a system name. For each system name, there are predefined usernames. A username can be changed as per your needs. There must be one username that should be the default. While creating an employee, the person types that are marked by the default flag will come by default. To change a username for a person type, delete the contents of the User Name field and type the name you'd prefer to keep. To add a new username to a person type system name: Select New Record from the Edit menu. Enter a unique username and select the system name you want to use. Deactivating person types You cannot delete person types, but you can deactivate them by unchecking the Active checkbox. Entering personal and additional information Until now, you learned how to create an employee by entering basic details such as title, gender, and date of birth. In addition to this, you can enter some other information for an employee. As you can see on the people form, there are various tabs such as Employment, Office details, Background, and so on. Each tab has some fields that can store information. For example, in our case, we have stored the e-mail address of the employee in the Office Details tab. Whenever you enter any data for an employee and then click on the Save button, it will give you two options as shown in the following screenshot: You have to select one of the options to save the data. The differences between both the options are explained with an example. Let's say you have hired a new employee as of 01-Jan-2014. Hence, a new record will be created in the application with the start date as 01-Jan-2014. This is called an effective start date of the record. There is no end date for this record, so Oracle gives it a default end date, which is 31-Dec-4712. This is called the effective end date of the record. Now, in our case, Oracle has created a single record with the start date and end date as 01-Jan-2014 and 31-Dec-4712, respectively. 
When we try to enter additional data for this record (in our case, it is the phone number), Oracle will prompt you to select the Correction or Update option. This is called the date-track option. If you select the correction mode, then Oracle will update the existing record in the application. Now, if you date track to, say, 01-Aug-2014, enter the phone number, and select the update mode, then it will end-date the historical record with the new date minus one and create a new record with the start date 01-Aug-2014 and the phone number that you have entered. Thus, the historical data will be preserved and a new record will be created with the start date 01-Aug-2014 and a phone number. The following tabular representation will help you understand this better. In Correction mode:

Employee Number | Last Name | Effective Start Date | Effective End Date | Phone Number
10160 | Test010114 | 01-Jan-2014 | 31-Dec-4712 | +0099999999

Now, if you change the phone number as of 01-Aug-2014 in Update mode (date tracked to 01-Aug-2014), then the records will be as follows:

Employee Number | Last Name | Effective Start Date | Effective End Date | Phone Number
10160 | Test010114 | 01-Jan-2014 | 31-Jul-2014 | +0099999999
10160 | Test010114 | 01-Aug-2014 | 31-Dec-4712 | +0088888888

Thus, in update mode, you can see that the historical data is intact. If HR wants to view some historical data, then the HR employee can easily view this data. Everything associated with Oracle HRMS is date-tracked. Every characteristic about the organization, person, position, salary, and benefits is tightly date-tracked. This concept is very important in Oracle and is used in almost all the forms in which you store employee-related information. Thus, you have learned about the date-tracking concept in Oracle Apps. There are some additional fields, which can be configured as per your requirements. Additional personal data can be stored in these fields. These are called descriptive flexfields (DFF) in Oracle. We created a personal DFF to store data about Years of Industry Experience and whether an employee is Oracle Certified or not. This data can be stored in the People form DFF as marked in the following screenshot: When you click on the box, it will open a new form as shown in the following screenshot. Here, you can enter the additional data. This is called the Additional Personal Details DFF. It is stored in personal data; this is normally referred to as the People form DFF. We have created a Special Information Type (SIT) to store information on the languages known by an employee. This data has two attributes, namely, the language known and the fluency. This can be entered by navigating to US HRMS Manager | People | Enter and Maintain | Special Info. Click on the Details section. This will open a new form to enter the required details. Each record in the SIT is date-tracked. You can enter the start date and the end date. Thus, we have seen the DFF, in which you store additional person data, and the KFF (key flexfield), where you enter the SIT data.

Summary

In this article, you have learned about creating a new employee, entering employee data, and entering additional data using DFF and KFF. You also learned the concept of person type. Resources for Article: Further resources on this subject: Knowing the prebuilt marketing, sales, and service organizations [article] Oracle E-Business Suite with Desktop Integration [article] Oracle Integration and Consolidation Products [article]

Working with a Liferay User / User Group / Organization

Packt
04 Jun 2015
23 min read
In this article by Piotr Filipowicz and Katarzyna Ziółkowska, authors of the book Liferay 6.x Portal Enterprise Intranets Cookbook, we will cover the basic functionalities that will allow us to manage the structure and users of the intranet. In this article, we will cover the following topics: Managing an organization structure Creating a new user group Adding a new user Assigning users to organizations Assigning users to user groups Exporting users (For more resources related to this topic, see here.) The first step in creating an intranet, beyond answering the question of who the users will be, is to determine its structure. The structure of the intranet is often a derivative of the organizational structure of the company or institution. Liferay Portal CMS provides several tools that allow mapping of a company's structure in the system. The hierarchy is built by organizations that match functional or localization departments of the company. Each organization represents one department or localization and assembles users who represent employees of these departments. However, sometimes, there are other groups of employees in the company. These groups exist beyond the company's organizational structure, and can be reflected in the system by the User Groups functionality. Managing an organization structure Building an organizational structure in Liferay resembles the process of managing folders on a computer drive. An organization may have its suborganizations and—except the first level organization—at the same time, it can be a suborganization of another one. This folder-similar mechanism allows you to create a tree structure of organizations. Let's imagine that we are obliged to create an intranet for a software development company. The company's headquarter is located in London. There are also two other offices in Liverpool and Glasgow. The company is divided into finance, marketing, sales, IT, human resources, and legal departments. Employees from Glasgow and Liverpool belong to the IT department. How to do it… In order to create a structure described previously, these are the steps: Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations. Click on the Add button. Choose the type of organization you want to create (in our example, it will be a regular organization called software development company, but it is also possible to choose a location). Provide a name for the top-level organization. Choose the parent organization (if a top-level organization is created, this must be skipped). Click on the Save button: Click on the Change button and upload a file, with a graphic representation of your company (for example, logo). Use the right column menu to navigate to data sections you want to fill in with the information. Click on the Save button. Go back to the Users and Organizations list by clicking on the back icon (the left-arrow icon next to the Edit Software Development Company header). Click on the Actions button, located near the name of the newly created organization. Choose the Add Regular Organization option. Provide a name for the organization (in our example, it is IT). Click on the Save button. Go back to the Users and Organizations list by clicking on the back icon (left-arrow icon next to Edit IT header). Click on the Actions button, located near the name of the newly created organization (in our case, it is IT). Choose the Add Location option. Provide a name for the organization (for instance, IT Liverpool). Provide a country. 
Provide a region (if available). Click on the Save button. How it works… Let's take a look at what we did throughout the previous recipe. In steps 1 through 6, we created a new top-level organization called software development company. With steps 7 through 9, we defined a set of attributes of the newly created organization. Starting from step 11, we created suborganizations: standard organization (IT) and its location (IT Liverpool). Creating an organization There are two types of organizations: regular organizations and locations. The regular organization provides the possibility to create a multilevel structure, each unit of which can have parent organizations and suborganizations (there is one exception: the top-level organization cannot have any parent organizations). The localization is a special kind of organization that allows us to provide some additional data, such as country and region. However, it does not enable us to create suborganizations. When creating the tree of organizations, it is possible to combine regular organizations and locations, where, for instance, the top-level organization will be the regular organization and, both locations and regular organizations will be used as child organizations. When creating a new organization, it is very important to choose the organization type wisely, because it is the only organization parameter, which cannot be modified further. As was described previously, organizations can be arranged in a tree structure. The position of the organization in a tree is determined by the parent organization parameter, which is set by creating a new organization or by editing an existing one. If the parent organization is not set, a top-level organization is always created. There are two ways of creating a suborganization. It is possible to add a new organization by using the Add button and choosing a parent organization manually. The other way is to go to a specific organization's action menu and choose the Add Regular Organization action. While creating a new organization using this option, the parent organization parameter will be set automatically. Setting attributes Similarly, just like its counterpart in reality, every organization in Liferay has a set of attributes that are grouped and can be modified through the organization profile form. This form is available after clicking on the Edit button from the organization's action list (see the There's more… section). All the available attributes are divided into the following groups: The ORGANIZATION INFORMATION group, which contains the following sections: The Details section, which allows us to change the organization name, parent organization, country, or region (available for locations only). The name of the organization is the only required organization parameter. It is used by the search mechanism to search for organizations. It is also a part of an URL address of the organization's sites. The Organization Sites section, which allows us to enable the private and public pages of the organization's website. The Categorization section, which provides tags and categories. They can be assigned to an organization. IDENTIFICATION, which groups the Addresses, Phone Numbers, Additional Email Addresses, Websites, and Services sections. 
MISCELLANEOUS, which consists of: The Comments section, which allows us to manage an organization's comments The Reminder Queries section, in which reminder queries for different languages can be set The Custom Fields section, which provides a tool to manage values of custom attributes defined for the organization Customizing an organization functionalities Liferay provides the possibility to customize an organization's functionality. In the portal.properties file located in the portal-impl/src folder, there is a section called Organizations. All these settings can be overridden in the portal-ext.properties file. We mentioned that top-level organization cannot have any parent organizations. If we look deeper into portal settings, we can dig out the following properties: organizations.rootable[regular-Organization]=true organizations.rootable[location]=false These properties determine which type of organization can be created as a root organization. In many cases, users want to add a new organization's type. To achieve this goal, it is necessary to set a few properties that describe a new type: organizations.types=regular-Organization,location,my-Organization organizations.rootable[my-organization]=false organizations.children.types[my-organization]=location organizations.country.enabled[my-organization]=false organizations.country.required[my-organization]=false The first property defines a list of available types. The second one denies the possibility to create an organization as a root. The next one specifies a list of types that we can create as children. In our case, this is only the location type. The last two properties turn off the country list in the creation process. This option is useful when the location is not important. Another interesting feature is the ability to customize an organization's profile form. It is possible to indicate which sections are available on the creation form and which are available on the modification form. The following properties aggregate this feature: organizations.form.add.main=details,organization-site organizations.form.add.identification= organizations.form.add.miscellaneous=   organizations.form.update.main=details,organization-site,categorization organizations.form.update.identification=addresses,phone-numbers,additional-email-addresses,websites,services organizations.form.update.miscellaneous=comments,reminder-queries,custom-fields There's more… It is also possible to modify an existing organization and its attributes and to manage its members using actions available in the organization Actions menu. There are several possible actions that can be performed on an organization: The Edit action allows us to modify the attributes of an organization. The Manage Site action redirects the user to the Site Settings section in Control Panel and allows us to manage the organization's public and private sites (if the organization site has been already created). The Assign Organization Roles action allows us to set organization roles to members of an organization. The Assign Users action allows us to assign users already existing in the Liferay database to the specific organization. The Add User action allows us to create a new user, who will be automatically assigned to this specific organization. The Add Regular Organization action enables us to create a new child regular organization (the current organization will be automatically set as a parent organization of a new one). 
The Add Location action enables us to create a new location (the current organization will be automatically set as a parent organization of a new one). The Delete action allows us to remove an organization. While removing an organization, all pages with portlets and content are also removed. An organization cannot be removed if there are suborganizations or users assigned to it. In order to edit an organization, assign or add users, create a new suborganization (regular organization or location) or delete an organization. Perform the following steps: Log in as an administrator and go to Admin | Control panel | Users | Users and Organizations. Click on the Actions button, located near the name of the organization you want to modify. Click on the name of the chosen action. Creating a new user group Sometimes, in addition to the hierarchy, within the company, there are other groups of people linked by common interests or occupations, such as people working on a specific project, people occupying the same post, and so on. Such groups in Liferay are represented by user groups. This functionality is similar to the LDAP users group where it is possible to set group permissions. One user can be assigned into many user groups. How to do it… In order to create a new user group, follow these steps: Log in as an administrator and go to Admin | Control panel | Users | User Groups. Click on the Add button. Provide Name (required) and Description of the user group. Leave the default values in the User Group Site section. Click on the Save button. How it works… The user groups functionality allows us to create a collection of users and provide them with a public and/or private site, which contain a bunch of tools for collaboration. Unlike the organization, the user group cannot be used to produce a multilevel structure. It enables us to create non-hierarchical groups of users, which can be used by other functionalities. For example, a user group can be used as an additional information targeting tool for the announcements portlet, which presents short messages sent by authorized users (the announcements portlet allows us to direct a message to all users from a specific organization or user group). It is also possible to set permissions to a user group and decide which actions can be performed by which roles within this particular user group. It is worth noting that user groups can assemble users who are already members of organizations. This mechanism is often used when, aside from the company organizational structure, there exist other groups of people who need a common place to store data or for information exchange. There's more… It is also possible to modify an existing user group and its attributes and to manage its members using actions available in the user group Actions menu. There are several possible actions that can be performed on a user group. 
They are as follows: The Edit action allows us to modify attributes of a user group The Permissions action allows us to decide which roles can assign members of this user group, delete the user group, manage announcements, set permissions, and update or view the user group The Manage Site Pages action redirects the user to the site settings section in Control Panel and allows us to manage the user group's public and private sites The Go to the Site's Public Pages action opens the user group's public pages in a new window (if any public pages of User Group Site has been created) The Go to the Site's Private Pages action opens the user group's private pages in a new window (if any public pages of User Group Site has been created) The Assign Members action allows us to assign users already existing in the Liferay database to this specific user group The Delete action allows us to delete a user group A user group cannot be removed if there are users assigned to it. In order to edit a user group, set permissions, assign members, manage site pages, or delete a user group, perform these steps: Go to Admin | Control panel | Users | User Groups. Click on the Actions button, located near the name of the user group you want to modify: Click on the name of the chosen action. Adding a new user Each system is created for users. Liferay Portal CMS provides a few different ways of adding users to the system that can be enabled or disabled depending on the requirements. The first way is to enable users by creating their own accounts via the Create Account form. This functionality allows all users who can enter the site containing the form to register and gain access to the designated content of the website. In this case, the system automatically assigns the default user account parameters, which indicate the range of activities that may be carried by them in the system. The second solution (which we presented in this recipe) is to reserve the users' account creation to the administrators, who will decide what parameters should be assigned to each account. How to do it… To add a new user, you need to follow these steps: Log in as an administrator and go to Admin | Control panel | Users | Users and Organizations. Click on the Add button. Choose the User option. Fill in the form by providing the user's details in the Email Address (Required), Title, First Name (Required), Middle Name, Last Name, Suffix, Birthday, and Job Title fields (if the Autogenerated User Screen Names option in the Portal Settings | Users section is disabled, the screen name field will be available): Click on the Save button: Using the right column menu, navigate to the data sections you want to fill in with the information. Click on the Save button. How it works… In steps 1 through 5, we created a new user. With steps 6 and 7, we defined a set of attributes of the newly created user. This user is active and can already perform activities according to their memberships and roles. To understand all the mechanisms that influence the user's possible behavior in the system, we have to take a deeper look at these attributes. User as a member of organizations, user groups, and sites The first and most important thing to know about users is that they can be members of organizations, user groups, and sites. The range of activities performed by users within each organization, user group, or site they belong to is determined by the roles assigned to them. All the roles must be assigned for each user of an organization and site individually. 
This means it is possible, for instance, to make a user the administrator of one organization and only a power user of another. User attributes Each user in Liferay has a set of attributes that are grouped and can be modified through the user profile form. This form is available after clicking on the Edit button from the user's actions list (see, the There's more… section). All the available attributes are divided into the following groups: USER INFORMATION, which contains the following sections: The Details section enables us to provide basic user information, such as Screen Name, Email Address, Title, First Name, Middle Name, Last Name, Suffix, Birthday, Job Title, and Avatar The Password section allows us to set a new password or force a user to change their current password The Organizations section enables us to choose the organizations of which the user is a member The Sites section enables us to choose the sites of which the user is a member The User Groups section enables us to choose user groups of which the user is a member The Roles tab allows us to assign user roles The Personal Site section helps direct the public and private sites to the user The Categorization section provides tags and categories, which can be assigned to a user IDENTIFICATION allows us to to set additional user information, such as Addresses, Phone Numbers, Additional Email Addresses, Websites, Instant Messenger, Social Network, SMS, and OpenID MISCELLANEOUS, which contains the following sections: The Announcements section allows us to set the delivery options for alerts and announcements The Display Settings section covers the Language, Time Zone, and Greeting text options The Comments section allows us to manage the user's comments The Custom Fields section provides a tool to manage values of custom attributes defined for the user User site As it was mentioned earlier, each user in Liferay may have access to different kinds of sites: organization sites, user group sites, and standalone sites. In addition to these, however, users may also have their own public and private sites, which can be managed by them. The user's public and private sites can be reached from the user's menu located on the dockbar (the My Profile and My Dashboard links). It is also possible to enter these sites using their addresses, which are /web/username/home and /user/username/home, respectively. Customizing users Liferay gives us a whole bunch of settings in portal.properties under the Users section. If you want to override some of the properties, put them into the portal-ext.properties file. It is possible to deny deleting a user by setting the following property: users.delete=false As in the case of organizations, there is a functionality that lets us customize sections on the creation or modification form: users.form.add.main=details,Organizations,personal-site users.form.add.identification= users.form.add.miscellaneous=   users.form.update.main=details,password,Organizations,sites,user-groups,roles,personal-site,categorization users.form.update.identification=addresses,phone-numbers,additional-email-addresses,websites,instant-messenger,social-network,sms,open-id users.form.update.miscellaneous=announcements,display-settings,comments,custom-fields There are many other properties, but we will not discuss all of them. In portal.properties, located in the portal-impl/src folder, under the Users section, it is possible to find all the settings, and every line is documented by comment. 
There's more… Each user in the system can be active or inactive. An active user can log into their user account and use all resources available to them within their roles and memberships. An inactive user cannot log into their account, access places, or perform activities that are reserved for authorized and authenticated users only. It is worth noticing that active users cannot be deleted. In order to remove a user from Liferay, you need to deactivate them first. To deactivate a user, follow these steps: Log in as an administrator and go to Admin | Control panel | Users | Users and Organizations. Go to the All Users tab. Find the active user you want to deactivate. Click on the Actions button located near the name of the user. Click on the Deactivate button. Confirm this action by clicking on the Ok button. To activate a user, follow these steps: Log in as an administrator and go to Admin | Control panel | Users | Users and Organizations. Go to the All Users tab. Find the inactive user you want to activate. Click on the Actions button located near the name of the user. Click on the Activate button. Sometimes, when using the system, users report some irregularities or get a little confused and require assistance. You need to look at the page through the user's eyes. Liferay provides a very useful functionality that allows authorized users to impersonate another user. In order to use this functionality, perform these steps: Log in as an administrator and go to Control Panel | Users | Users and Organizations. Click on the Actions button located near the name of the user. Click on the Impersonate user button. See also For more information on managing users, refer to the Exporting users recipe from this article Assigning users to organizations There are several ways a user can be assigned to an organization. It can be done by editing the user account that has already been created (see the User attributes section in the Adding a new user recipe) or using the Assign Users action from the organization actions menu. In this recipe, we will show you how to assign a user to an organization using the option available in the organization actions menu. Getting ready To go through this recipe, you will need an organization and a user (refer to the Managing an organization structure and Adding a new user recipes from this article). How to do it… In order to assign a user to an organization from the organization menu, follow these steps: Log in as an administrator and go to Admin | Control panel | Users | Users and Organizations. Click on the Actions button located near the name of the organization to which you want to assign the user. Choose the Assign Users option. Click on the Available tab. Mark a user or group of users you want to assign. Click on the Update Associations button. How it works… Each user in Liferay can be assigned to as many regular organizations as required and to exactly one location. When a user is assigned to the organization, they appear on the list of users of the organization. They become members of the organization and gain access to the organization's public and private pages according to the assigned roles and permissions. As was shown in the previous recipe, while editing the list of assigned users in the organization menu, it is possible to assign multiple users. It is worth noting that, by default, an administrator of an organization can only assign the users of the organizations and suborganizations that she or he can manage. 
To allow any administrator of an organization to be able to assign any user to that organization, set the following property in the portal-ext.properties file: organizations.assignment.strict=false In many cases, when our organizations have a tree structure, it is not necessary that a member of a child organization has access to the ancestral ones. To disable this behavior, set the following property: organizations.membership.strict=true See also For information on how to create user accounts, refer to the Adding a new user recipe from this article For information on assigning users to user groups, refer to the Assigning users to a user group recipe from this article Assigning users to a user group In addition to being a member of the organization, each user can be a member of one or more user groups. As a member of a user group, a user benefits by getting access to the user group's sites or other information directed exclusively to its members, for instance, messages sent by the Announcements portlet. A user becomes a member of the group when they are assigned to it. This assignment can be done by editing the user account that has already been created (see the User attributes description in the Adding a new user recipe) or using the Assign Members action from the User Groups actions menu. In this recipe, we will show you how to assign a user to a user group using the option available in the User Groups actions menu. Getting ready To step through this recipe, first, you have to create a user group and a user (see the Creating a new user group and Adding a new user recipes). How to do it… In order to assign a user to a user group from the User Groups menu, perform these steps: Log in as an administrator and go to Admin | Control panel | Users | User Groups. Click on the Actions button located near the name of the user group to which you want to assign the user. Click on the Assign Members button. Click on the Available tab. Mark a user or group of users you want to assign. Click on the Update Associations button. How it works… As was shown in this recipe, one or more users can be assigned to a user group by editing the list of assigned users in the user group menu. Each user assigned to a user group becomes a member of this group and gains access to the user group's public and private pages according to assigned roles and permissions. See also For information on how to create user accounts, refer to the Adding a new user recipe from this article For information about assigning users to organizations, refer to the Assigning users to organizations recipe from this article Exporting users Liferay Portal CMS provides a simple export mechanism, which allows us to export a list of all the users stored in the database or a list of all the users from a specific organization to a file. How to do it… In order to export the list of all users from the database to a file, follow these steps: Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations. Click on the Export Users button. In order to export the list of all users from a specific organization to a file, follow these steps: Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations. Click on the All Organizations tab. Click on the name of the organization from which the users are supposed to be exported. Click on the Export Users button. How it works… As mentioned previously, Liferay allows us to export users from a particular organization to a .csv file. 
The .csv file contains a list of user names and corresponding e-mail addresses. It is also possible to export all the users by clicking on the Export Users button located on the All Users tab. You will find this tab by going to Admin | Control panel | Users | Users and Organizations. See also For information on how to create user accounts, refer to the Adding a new user recipe from this article For information on how to assign users to organizations, refer to the Assigning users to organizations recipe from this article Summary In this article, you have learnt how to manage an organization structure by creating users and assigning them to organizations and user groups. You have also learnt how to export users using Liferay's export mechanism. Resources for Article: Further resources on this subject: Cache replication [article] Portlet [article] Liferay, its Installation and setup [article]

Mailing with Spring Mail

Packt
04 Jun 2015
19 min read
In this article, by Anjana Mankale, author of the book Mastering Spring Application Development we shall see how we can use the Spring mail template to e-mail recipients. We shall also demonstrate using Spring mailing template configurations using different scenarios. (For more resources related to this topic, see here.) Spring mail message handling process The following diagram depicts the flow of a Spring mail message process. With this, we can clearly understand the process of sending mail using a Spring mailing template. A message is created and sent to the transport protocol, which interacts with internet protocols. Then, the message is received by the recipients. The Spring mail framework requires a mail configuration, or SMTP configuration, as the input and message that needs to be sent. The mail API interacts with internet protocols to send messages. In the next section, we shall look at the classes and interfaces in the Spring mail framework. Interfaces and classes used for sending mails with Spring The package org.springframework.mail is used for mail configuration in the spring application. The following are the three main interfaces that are used for sending mail: MailSender: This interface is used to send simple mail messages. JavaMailSender: This interface is a subinterface of the MailSender interface and supports sending mail messages. MimeMessagePreparator: This interface is a callback interface that supports the JavaMailSender interface in the preparation of mail messages. The following classes are used for sending mails using Spring: SimpleMailMessage: This is a class which has properties such as to, from, cc, bcc, sentDate, and many others. The SimpleMailMessage interface sends mail with MailSenderImp classes. JavaMailSenderImpl: This class is an implementation class of the JavaMailSender interface. MimeMessageHelper: This class helps with preparing MIME messages. Sending mail using the @Configuration annotation We shall demonstrate here how we can send mail using the Spring mail API. First, we provide all the SMTP details in the .properties file and read it to the class file with the @Configuration annotation. The name of the class is MailConfiguration. mail.properties file contents are shown below: mail.protocol=smtp mail.host=localhost mail.port=25 mail.smtp.auth=false mail.smtp.starttls.enable=false mail.from=me@localhost mail.username= mail.password=   @Configuration @PropertySource("classpath:mail.properties") public class MailConfiguration { @Value("${mail.protocol}") private String protocol; @Value("${mail.host}") private String host; @Value("${mail.port}") private int port; @Value("${mail.smtp.auth}") private boolean auth; @Value("${mail.smtp.starttls.enable}") private boolean starttls; @Value("${mail.from}") private String from; @Value("${mail.username}") private String username; @Value("${mail.password}") private String password;   @Bean public JavaMailSender javaMailSender() {    JavaMailSenderImpl mailSender = new JavaMailSenderImpl();    Properties mailProperties = new Properties();    mailProperties.put("mail.smtp.auth", auth);    mailProperties.put("mail.smtp.starttls.enable", starttls);    mailSender.setJavaMailProperties(mailProperties);    mailSender.setHost(host);    mailSender.setPort(port);    mailSender.setProtocol(protocol);    mailSender.setUsername(username);    mailSender.setPassword(password);    return mailSender; } } The next step is to create a rest controller to send mail; to do so, click on Submit. 
We shall use the SimpleMailMessage interface since we don't have any attachment. @RestController class MailSendingController { private final JavaMailSender javaMailSender; @Autowired MailSubmissionController(JavaMailSender javaMailSender) {    this.javaMailSender = javaMailSender; } @RequestMapping("/mail") @ResponseStatus(HttpStatus.CREATED) SimpleMailMessage send() {    SimpleMailMessage mailMessage = new SimpleMailMessage();    mailMessage.setTo("packt@localhost");    mailMessage.setReplyTo("anjana@localhost");    mailMessage.setFrom("Sonali@localhost");    mailMessage.setSubject("Vani veena Pani");  mailMessage.setText("MuthuLakshmi how are you?Call      Me Please [...]");    javaMailSender.send(mailMessage);    return mailMessage; } } Sending mail using MailSender and Simple Mail Message with XML configuration "Simple mail message" means the e-mail sent will only be text-based with no HTML formatting, no images, and no attachments. In this section, consider a scenario where we are sending a welcome mail to the user as soon as the user gets their order placed in the application. In this scenario, the mail will be sent after the database insertion operation is successful. Create a separate folder, called com.packt.mailService, for the mail service. The following are the steps for sending mail using the MailSender interface and SimpleMailMessage class. Create a new Maven web project with the name Spring4MongoDB_MailChapter3. We have also used the same Eshop db database with MongoDB for CRUD operations on Customer, Order, and Product. We have also used the same mvc configurations and source files. Use the same dependencies as used previously. We need to add dependencies to the pom.xml file: <dependency> <groupId>org.springframework.integration</groupId> <artifactId>spring-integration-mail</artifactId> <version>3.0.2.RELEASE</version> <scope>runtime</scope> </dependency> <dependency> <groupId>javax.activation</groupId> <artifactId>activation</artifactId> <version>1.1-rev-1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> <version>1.4.3</version> </dependency> Compile the Maven project. Create a separate folder called com.packt.mailService for the mail service. Create a simple class named MailSenderService and autowire the MailSender and SimpleMailMessage classes. The basic skeleton is shown here: public class MailSenderService { @Autowired private MailSender mailSender; @AutoWired private SimpleMailMessage simplemailmessage; public void sendmail(String from, String to, String    subject, String body){    /*Code */ }   } Next, create an object of SimpleMailMessage and set mail properties, such as from, to, and subject to it. public void sendmail(String from, String to, String subject, String body){ SimpleMailMessage message=new SimpleMailMessage(); message.setFrom(from); message.setSubject(subject); message.setText(body); mailSender.send(message); } We need to configure the SMTP details. Spring Mail Support provides this flexibility of configuring SMTP details in the XML file. <bean id="mailSender" class="org.springframework.mail.javamail. 
JavaMailSenderImpl"> <property name="host" value="smtp.gmail.com" /> <property name="port" value="587" /> <property name="username" value="username" /> <property name="password" value="password" />   <property name="javaMailProperties"> <props>    <prop key="mail.smtp.auth">true</prop>    <prop key="mail.smtp.starttls.enable">true</prop> </props> </property> </bean>   <bean id="mailSenderService" class=" com.packt.mailserviceMailSenderService "> <property name="mailSender" ref="mailSender" /> </bean>   </beans> We need to send mail to the customer after the order has been placed successfully in the MongoDB database. Update the addorder() method as follows: @RequestMapping(value = "/order/save", method = RequestMethod.POST) // request insert order recordh public String addorder(@ModelAttribute("Order")    Order order,Map<String, Object> model) {    Customer cust=new Customer();    cust=customer_respository.getObject      (order.getCustomer().getCust_id());      order.setCustomer(cust);    order.setProduct(product_respository.getObject      (order.getProduct().getProdid()));    respository.saveObject(order);    mailSenderService.sendmail      ("anjana.mprasad@gmail.com",cust.getEmail(),      "Dear"+cust.getName()+"Your order      details",order.getProduct().getName()+"-price-"+order      .getProduct().getPrice());    model.put("customerList", customerList);    model.put("productList", productList);    return "order"; } Sending mail to multiple recipients If you want to intimate the user regarding the latest products or promotions in the application, you can create a mail sending group and send mail to multiple recipients using Spring mail sending support. We have created an overloaded method in the same class, MailSenderService, which will accept string arrays. The code snippet in the class will look like this: public class MailSenderService { @Autowired private MailSender mailSender; @AutoWired private SimpleMailMessage simplemailmessage; public void sendmail(String from, String to, String subject,    String body){    /*Code */ }   public void sendmail(String from, String []to, String subject,    String body){    /*Code */ }   } The following is the code snippet for listing the set of users from MongoDB who have subscribed to promotional e-mails: public List<Customer> getAllObjectsby_emailsubscription(String    status) {    return mongoTemplate.find(query(      where("email_subscribe").is("yes")), Customer.class); } Sending MIME messages Multipurpose Internet Mail Extension (MIME) allows attachments to be sent over the Internet. This class just demonstrates how we can send mail with MIME messages. Using a MIME message sender type class is not advisible if you are not sending any attachments with the mail message. In the next section, we will look at the details of how we can send mail with attachments. Update the MailSenderService class with another method. We have used the MIME message preparator and have overridden the prepare method() to set properties for the mail. 
public class MailSenderService { @Autowired private MailSender mailSender; @AutoWired private SimpleMailMessage simplemailmessage;   public void sendmail(String from, String to, String subject,    String body){    /*Code */ } public void sendmail(String from, String []to, String subject,    String body){    /*Code */ } public void sendmime_mail(final String from, final String to,    final String subject, final String body) throws MailException{    MimeMessagePreparator message = new MimeMessagePreparator() {      public void prepare(MimeMessage mimeMessage)        throws Exception {        mimeMessage.setRecipient(Message.RecipientType.TO,new          InternetAddress(to));        mimeMessage.setFrom(new InternetAddress(from));        mimeMessage.setSubject(subject);        mimeMessage.setText(msg);    } }; mailSender.send(message); } Sending attachments with mail We can also attach various kinds of files to the mail. This functionality is supported by the MimeMessageHelper class. If you just want to send a MIME message without an attachment, you can opt for MimeMesagePreparator. If the requirement is to have an attachment to be sent with the mail, we can go for the MimeMessageHelper class with file APIs. Spring provides a file class named org.springframework.core.io.FileSystemResource, which has a parameterized constructor that accepts file objects. public class SendMailwithAttachment { public static void main(String[] args)    throws MessagingException {    AnnotationConfigApplicationContext ctx =      new AnnotationConfigApplicationContext();    ctx.register(AppConfig.class);    ctx.refresh();    JavaMailSenderImpl mailSender =      ctx.getBean(JavaMailSenderImpl.class);    MimeMessage mimeMessage = mailSender.createMimeMessage();    //Pass true flag for multipart message    MimeMessageHelper mailMsg = new MimeMessageHelper(mimeMessage,      true);    mailMsg.setFrom("ANJUANJU02@gmail.com");    mailMsg.setTo("RAGHY03@gmail.com");    mailMsg.setSubject("Test mail with Attachment");    mailMsg.setText("Please find Attachment.");    //FileSystemResource object for Attachment    FileSystemResource file = new FileSystemResource(new      File("D:/cp/ GODGOD. jpg"));    mailMsg.addAttachment("GODGOD.jpg", file);    mailSender.send(mimeMessage);    System.out.println("---Done---"); }   } Sending preconfigured mail In this example, we shall provide a message that is to be sent in the mail, and we will configure it in an XML file. Sometimes when it comes to web applications, you may have to send messages on maintenance. Think of a scenario where the content of the mail changes, but the sender and receiver are preconfigured. In such a case, you can add another overloaded method to the MailSender class. We have fixed the subject of the mail, and the content can be sent by the user. Think of it as "an application which sends mails to users whenever the build fails". 
<?xml version="1.0" encoding="UTF-8"?> <beans xsi_schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/ context/spring-context-3.0.xsd"> <context:component-scan base-package="com.packt" /> <!-- SET default mail properties --> <bean id="mailSender" class= "org.springframework.mail.javamail.JavaMailSenderImpl"> <property name="host" value="smtp.gmail.com"/> <property name="port" value="25"/> <property name="username" value="anju@gmail.com"/> <property name="password" value="password"/> <property name="javaMailProperties"> <props>    <prop key="mail.transport.protocol">smtp</prop>    <prop key="mail.smtp.auth">true</prop>    <prop key="mail.smtp.starttls.enable">true</prop>    <prop key="mail.debug">true</prop> </props> </property> </bean>   <!-- You can have some pre-configured messagess also which are ready to send --> <bean id="preConfiguredMessage" class= "org.springframework.mail.SimpleMailMessage"> <property name="to" value="packt@gmail.com"></property> <property name="from" value="anju@gmail.com"></property> <property name="subject" value="FATAL ERROR- APPLICATION AUTO    MAINTENANCE STARTED-BUILD FAILED!!"/> </bean> </beans> Now we shall sent two different bodies for the subjects. public class MyMailer { public static void main(String[] args){    try{      //Create the application context      ApplicationContext context = new        FileSystemXmlApplicationContext(        "application-context.xml");        //Get the mailer instance      ApplicationMailer mailer = (ApplicationMailer)        context.getBean("mailService");      //Send a composed mail      mailer.sendMail("nikhil@gmail.com", "Test Subject",        "Testing body");    }catch(Exception e){      //Send a pre-configured mail      mailer.sendPreConfiguredMail("build failed exception occured        check console or logs"+e.getMessage());    } } } Using Spring templates with Velocity to send HTML mails Velocity is the templating language provided by Apache. It can be integrated into the Spring view layer easily. The latest Velocity version used during this book is 1.7. In the previous section, we demonstrated using Velocity to send e-mails using the @Bean and @Configuration annotations. In this section, we shall see how we can configure Velocity to send mails using XML configuration. All that needs to be done is to add the following bean definition to the .xml file. In the case of mvc, you can add it to the dispatcher-servlet.xml file. <bean id="velocityEngine" class= "org.springframework.ui.velocity.VelocityEngineFactoryBean"> <property name="velocityProperties"> <value>    resource.loader=class    class.resource.loader.class=org.apache.velocity    .runtime.resource.loader.ClasspathResourceLoader </value> </property> </bean> Create a new Maven web project with the name Spring4MongoDB_Mail_VelocityChapter3. Create a package and name it com.packt.velocity.templates. Create a file with the name orderconfirmation.vm. <html> <body> <h3> Dear Customer,<h3> <p>${customer.firstName} ${customer.lastName}</p> <p>We have dispatched your order at address.</p> ${Customer.address} </body> </html> Use all the dependencies that we have added in the previous sections. 
To the existing Maven project, add this dependency: <dependency> <groupId>org.apache.velocity</groupId> <artifactId>velocity</artifactId> <version>1.7</version> </dependency> To ensure that Velocity gets loaded on application startup, we shall create a class. Let's name the class VelocityConfiguration.java. We have used the annotations @Configuration and @Bean with the class. import java.io.IOException; import java.util.Properties;   import org.apache.velocity.app.VelocityEngine; import org.apache.velocity.exception.VelocityException; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.ui.velocity.VelocityEngineFactory; @Configuration public class VelocityConfiguration { @Bean public VelocityEngine getVelocityEngine() throws VelocityException, IOException{    VelocityEngineFactory velocityEngineFactory = new      VelocityEngineFactory();    Properties props = new Properties();    props.put("resource.loader", "class");    props.put("class.resource.loader.class",      "org.apache.velocity.runtime.resource.loader." +      "ClasspathResourceLoader");    velocityEngineFactory.setVelocityProperties(props);    return factory.createVelocityEngine(); } } Use the same MailSenderService class and add another overloaded sendMail() method in the class. public void sendmail(final Customer customer){ MimeMessagePreparator preparator = new    MimeMessagePreparator() {    public void prepare(MimeMessage mimeMessage)    throws Exception {      MimeMessageHelper message =        new MimeMessageHelper(mimeMessage);      message.setTo(user.getEmailAddress());      message.setFrom("webmaster@packt.com"); // could be        parameterized      Map model = new HashMap();      model.put("customer", customer);      String text =        VelocityEngineUtils.mergeTemplateIntoString(        velocityEngine, "com/packt/velocity/templates/        orderconfirmation.vm", model);      message.setText(text, true);    } }; this.mailSender.send(preparator); } Update the controller class to send mail using the Velocity template. @RequestMapping(value = "/order/save", method = RequestMethod.POST) // request insert order recordh public String addorder(@ModelAttribute("Order") Order order,Map<String, Object> model) { Customer cust=new Customer(); cust=customer_respository.getObject(order.getCustomer()    .getCust_id());   order.setCustomer(cust); order.setProduct(product_respository.getObject    (order.getProduct().getProdid())); respository.saveObject(order); // to send mail using velocity template. mailSenderService.sendmail(cust);   return "order"; } Sending Spring mail over a different thread There are other options for sending Spring mail asynchronously. One way is to have a separate thread to the mail sending job. Spring comes with the taskExecutor package, which offers us a thread pooling functionality. Create a class called MailSenderAsyncService that implements the MailSender interface. Import the org.springframework.core.task.TaskExecutor package. Create a private class called MailRunnable. 
Here is the complete code for MailSenderAsyncService: public class MailSenderAsyncService implements MailSender{ @Resource(name = "mailSender") private MailSender mailSender;   private TaskExecutor taskExecutor;   @Autowired public MailSenderAsyncService(TaskExecutor taskExecutor){    this.taskExecutor = taskExecutor; } public void send(SimpleMailMessage simpleMessage) throws    MailException {    taskExecutor.execute(new MailRunnable(simpleMessage)); }   public void send(SimpleMailMessage[] simpleMessages)    throws MailException {    for (SimpleMailMessage message : simpleMessages) {      send(message);    } }   private class SimpleMailMessageRunnable implements    Runnable {    private SimpleMailMessage simpleMailMessage;    private SimpleMailMessageRunnable(SimpleMailMessage      simpleMailMessage) {      this.simpleMailMessage = simpleMailMessage;    }      public void run() {    mailSender.send(simpleMailMessage);    } } private class SimpleMailMessagesRunnable implements    Runnable {    private SimpleMailMessage[] simpleMessages;    private SimpleMailMessagesRunnable(SimpleMailMessage[]      simpleMessages) {      this.simpleMessages = simpleMessages;    }      public void run() {      mailSender.send(simpleMessages);    } } } Configure the ThreadPool executor in the .xml file. <bean id="taskExecutor" class="org.springframework. scheduling.concurrent.ThreadPoolTaskExecutor" p_corePoolSize="5" p_maxPoolSize="10" p_queueCapacity="100"    p_waitForTasksToCompleteOnShutdown="true"/> Test the source code. import javax.annotation.Resource;   import org.springframework.mail.MailSender; import org.springframework.mail.SimpleMailMessage; import org.springframework.test.context.ContextConfiguration;   @ContextConfiguration public class MailSenderAsyncService { @Resource(name = " mailSender ") private MailSender mailSender; public void testSendMails() throws Exception {    SimpleMailMessage[] mailMessages = new      SimpleMailMessage[5];      for (int i = 0; i < mailMessages.length; i++) {      SimpleMailMessage message = new SimpleMailMessage();      message.setSubject(String.valueOf(i));      mailMessages[i] = message;    }    mailSender.send(mailMessages); } public static void main (String args[]){    MailSenderAsyncService asyncservice=new      MailSenderAsyncService();    Asyncservice. testSendMails(); } } Sending Spring mail with AOP We can also send mails by integrating the mailing functionality with Aspect Oriented Programming (AOP). This can be used to send mails after the user registers with an application. Think of a scenario where the user receives an activation mail after registration. This can also be used to send information about an order placed on an application. Use the following steps to create a MailAdvice class using AOP: Create a package called com.packt.aop. Create a class called MailAdvice. public class MailAdvice { public void advice (final ProceedingJoinPoint    proceedingJoinPoint) {    new Thread(new Runnable() {    public void run() {      System.out.println("proceedingJoinPoint:"+        proceedingJoinPoint);      try {        proceedingJoinPoint.proceed();      } catch (Throwable t) {        // All we can do is log the error.         System.out.println(t);      }    } }).start(); } } This class creates a new thread and starts it. In the run method, the proceedingJoinPoint.proceed() method is called. ProceddingJoinPoint is a class available in AspectJ.jar. Update the dispatcher-servlet.xml file with aop configurations. 
Update the xmlns namespace declarations in the dispatcher-servlet.xml file to include the aop schema (xmlns:aop="http://www.springframework.org/schema/aop" plus the corresponding schemaLocation entry), and then register the advice around JavaMailSenderImpl.send(..). Assuming the MailAdvice class is declared as a bean with the id mailAdvice, the configuration looks like this:
<aop:config>
  <aop:aspect ref="mailAdvice">
    <aop:around method="advice"
      pointcut="execution(* org.springframework.mail.javamail.JavaMailSenderImpl.send(..))"/>
  </aop:aspect>
</aop:config>
Summary In this article, we demonstrated how to create a mailing service and configure it using the Spring API. We also demonstrated how to send mails with attachments using MIME messages. We also demonstrated how to send mails on a separate thread using Spring's TaskExecutor thread pool. We saw an example in which mail can be sent to multiple recipients, and saw an implementation of using the Velocity engine to create templates and send mails to recipients. In the last section, we demonstrated how mails can be sent using Spring AOP and threads. Resources for Article: Further resources on this subject: Time Travelling with Spring [article] Welcome to the Spring Framework [article] Creating a Spring Application [article]

Map/Reduce API

Packt
02 Jun 2015
10 min read
 In this article by Wagner Roberto dos Santos, author of the book Infinispan Data Grid Platform Definitive Guide, we will see the usage of Map/Reduce API and its introduction in Infinispan. Using the Map/Reduce API According to Gartner, from now on in-memory data grids and in-memory computing will be racing towards mainstream adoption and the market for this kind of technology is going to reach 1 billion by 2016. Thinking along these lines, Infinispan already provides a MapReduce API for distributed computing, which means that we can use Infinispan cache to process all the data stored in heap memory across all Infinispan instances in parallel. If you're new to MapReduce, don't worry, we're going to describe it in the next section in a way that gets you up to speed quickly. An introduction to Map/Reduce MapReduce is a programming model introduced by Google, which allows for massive scalability across hundreds or thousands of servers in a data grid. It's a simple concept to understand for those who are familiar with distributed computing and clustered environments for data processing solutions. You can find the paper about MapReduce in the following link:http://research.google.com/archive/mapreduce.html The MapReduce has two distinct computational phases; as the name states, the phases are map and reduce: In the map phase, a function called Map is executed, which is designed to take a set of data in a given cache and simultaneously perform filtering, sorting operations, and outputs another set of data on all nodes. In the reduce phase, a function called Reduce is executed, which is designed to reduce the final form of the results of the map phase in one output. The reduce function is always performed after the map phase. Map/Reduce in the Infinispan platform The Infinispan MapReduce model is an adaptation of the Google original MapReduce model. There are four main components in each map reduce task, they are as follows: MapReduceTask: This is a distributed task allowing a large-scale computation to be transparently parallelized across Infinispan cluster nodes. This class provides a constructor that takes a cache whose data will be used as the input for this task. The MapReduceTask orchestrates the execution of the Mapper and Reducer seamlessly across Infinispan nodes. Mapper: A Mapper is used to process each input cache entry K,V. A Mapper is invoked by MapReduceTask and is migrated to an Infinispan node, to transform the K,V input pair into intermediate keys before emitting them to a Collector. Reducer: A Reducer is used to process a set of intermediate key results from the map phase. Each execution node will invoke one instance of Reducer and each instance of the Reducer only reduces intermediate keys results that are locally stored on the execution node. Collator: This collates results from reducers executed on the Infinispan cluster and assembles a final result returned to an invoker of MapReduceTask. The following image shows that in a distributed environment, an Infinispan MapReduceTask is responsible for starting the process for a given cache, unless you specify an onKeys(Object...) 
filter, all available key/value pairs of the cache will be used as input data for the map reduce task:   In the preceding image, the Map/Reduce processes are performing the following steps: The MapReduceTask in the Master Task Node will start the Map Phase by hashing the task input keys and grouping them by the execution node they belong to and then, the Infinispan master node will send a map function and input keys to each node. In each destination, the map will be locally loaded with the corresponding value using the given keys. The map function is executed on each node, resulting in a map< KOut, VOut > object on each node. The Combine Phase is initiated when all results are collected, if a combiner is specified (via combineWith(Reducer<KOut, VOut> combiner) method), the combiner will extract the KOut keys and invoke the reduce phase on keys. Before starting the Reduce Phase, Infinispan will execute an intermediate migration phase, where all intermediate keys and values are grouped. At the end of the Combine Phase, a list of KOut keys are returned to the initial Master Task Node. At this stage, values (VOut) are not returned, because they are not needed in the master node. At this point, Infinispan is ready to start the Reduce Phase; the Master Task Node will group KOut keys by the execution node and send a reduce command to each node where keys are hashed. The reducer is invoked and for each KOut key, the reducer will grab a list of VOut values from a temporary cache belonging to MapReduceTask, wraps it with an iterator, and invokes the reduce method on it. Each reducer will return one map with the KOut/VOut result values. The reduce command will return to the Master Task Node, which in turn will combine all resulting maps into one single map and return it as a result of MapReduceTask. Sample application – find a destination Now that we have seen what map and reduce are, and how the Infinispan model works, let's create a Find Destination application that illustrates the concepts we have discussed. To demonstrate how CDI works, in the last section, we created a web service that provides weather information. Now, based on this same weather information service, let's create a map/reduce engine for the best destination based on simple business rules, such as destination type (sun destination, golf, skiing, and so on). 
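At a high level, the wiring we are going to build mirrors the phases described previously: a MapReduceTask is created over the weather cache and chained with a Mapper and a Reducer. The following snippet previews that final call; weatherCache, DestinationMapper, and DestinationReducer are the objects built step by step in the rest of this section:

// Preview of the task wiring; map phase classifies each entry, reduce phase keeps the best match per type.
MapReduceTask<String, WeatherInfo, DestinationTypeEnum, WeatherInfo> task =
    new MapReduceTask<String, WeatherInfo, DestinationTypeEnum, WeatherInfo>(weatherCache);
Map<DestinationTypeEnum, WeatherInfo> bestDestinations = task
    .mappedWith(new DestinationMapper())
    .reducedWith(new DestinationReducer())
    .execute();

The execute() call blocks until the reduce phase has completed and returns the collated map of results.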
So, the first step is to create the WeatherInfo cache object that will hold information about the weather:
public class WeatherInfo implements Serializable {
  private static final long serialVersionUID = -3479816816724167384L;
  private String country;
  private String city;
  private Date day;
  private Double temp;
  private Double tempMax;
  private Double tempMin;
  public WeatherInfo(String country, String city, Date day, Double temp) {
    this(country, city, day, temp, temp + 5, temp - 5);
  }
  public WeatherInfo(String country, String city, Date day, Double temp, Double tempMax, Double tempMin) {
    super();
    this.country = country;
    this.city = city;
    this.day = day;
    this.temp = temp;
    this.tempMax = tempMax;
    this.tempMin = tempMin;
  }
  // Getters and setters omitted
  @Override
  public String toString() {
    return "{WeatherInfo:{ country:" + country + ", city:" + city + ", day:" + day + ", temp:" + temp + ", tempMax:" + tempMax + ", tempMin:" + tempMin + "}";
  }
}
Now, let's create an enum object to define the type of destination a user can select and the rules associated with each destination. To keep it simple, we are going to have only two destinations, sun and skiing. The temperature value will be used to evaluate whether a destination can be considered to be of the corresponding type:
public enum DestinationTypeEnum {
  SUN(18d, "Sun Destination"), SKIING(-5d, "Skiing Destination");
  private Double temperature;
  private String description;
  DestinationTypeEnum(Double temperature, String description) {
    this.temperature = temperature;
    this.description = description;
  }
  public Double getTemperature() {
    return temperature;
  }
  public String getDescription() {
    return description;
  }
}
Now it's time to create the Mapper class; this class is going to be responsible for validating whether each cache entry fits the destination requirements. To define the DestinationMapper class, just implement the Mapper<KIn, VIn, KOut, VOut> interface and provide your algorithm in the map method (SUN and SKIING are statically imported from DestinationTypeEnum):
public class DestinationMapper implements Mapper<String, WeatherInfo, DestinationTypeEnum, WeatherInfo> {
  private static final long serialVersionUID = -3418976303227050166L;
  public void map(String key, WeatherInfo weather, Collector<DestinationTypeEnum, WeatherInfo> c) {
    if (weather.getTemp() >= SUN.getTemperature()) {
      c.emit(SUN, weather);
    } else if (weather.getTemp() <= SKIING.getTemperature()) {
      c.emit(SKIING, weather);
    }
  }
}
The role of the Reducer class in our application is to return the best destination among all the destinations emitted by the map phase, based on the highest temperature for sun destinations and the lowest temperature for skiing destinations. To implement the Reducer class, you'll need to implement the Reducer<KOut, VOut> interface:
public class DestinationReducer implements Reducer<DestinationTypeEnum, WeatherInfo> {
  private static final long serialVersionUID = 7711240429951976280L;
  public WeatherInfo reduce(DestinationTypeEnum key, Iterator<WeatherInfo> it) {
    WeatherInfo bestPlace = null;
    if (key.equals(SUN)) {
      while (it.hasNext()) {
        WeatherInfo w = it.next();
        if (bestPlace == null || w.getTemp() > bestPlace.getTemp()) {
          bestPlace = w;
        }
      }
    } else { // best for skiing
      while (it.hasNext()) {
        WeatherInfo w = it.next();
        if (bestPlace == null || w.getTemp() < bestPlace.getTemp()) {
          bestPlace = w;
        }
      }
    }
    return bestPlace;
  }
}
Finally, to execute our sample application, we can create a JUnit test case with the MapReduceTask. 
But first, we have to create a couple of cache entries before executing the task, which we are doing in the setUp() method: public class WeatherInfoReduceTest {private static final Log logger =LogFactory.getLog(WeatherInfoReduceTest.class);private Cache<String, WeatherInfo> weatherCache;@Beforepublic void setUp() throws Exception {Date today = new Date();EmbeddedCacheManager manager = new DefaultCacheManager();Configuration config = new ConfigurationBuilder().clustering().cacheMode(CacheMode.LOCAL).build();manager.defineConfiguration("weatherCache", config);weatherCache = manager.getCache("weatherCache");WeatherInfoweatherCache.put("1", new WeatherInfo("Germany", "Berlin",today, 12d));weatherCache.put("2", new WeatherInfo("Germany","Stuttgart", today, 11d));weatherCache.put("3", new WeatherInfo("England", "London",today, 8d));weatherCache.put("4", new WeatherInfo("England","Manchester", today, 6d));weatherCache.put("5", new WeatherInfo("Italy", "Rome",today, 17d));weatherCache.put("6", new WeatherInfo("Italy", "Napoli",today, 18d));weatherCache.put("7", new WeatherInfo("Ireland", "Belfast",today, 9d));weatherCache.put("8", new WeatherInfo("Ireland", "Dublin",today, 7d));weatherCache.put("9", new WeatherInfo("Spain", "Madrid",today, 19d));weatherCache.put("10", new WeatherInfo("Spain", "Barcelona",today, 21d));weatherCache.put("11", new WeatherInfo("France", "Paris",today, 11d));weatherCache.put("12", new WeatherInfo("France","Marseille", today, -8d));weatherCache.put("13", new WeatherInfo("Netherlands","Amsterdam", today, 11d));weatherCache.put("14", new WeatherInfo("Portugal", "Lisbon",today, 13d));weatherCache.put("15", new WeatherInfo("Switzerland","Zurich", today, -12d));}@Testpublic void execute() {MapReduceTask<String, WeatherInfo, DestinationTypeEnum,WeatherInfo> task = new MapReduceTask<String, WeatherInfo,DestinationTypeEnum, WeatherInfo>(weatherCache);task.mappedWith(new DestinationMapper()).reducedWith(newDestinationReducer());Map<DestinationTypeEnum, WeatherInfo> destination =task.execute();assertNotNull(destination);assertEquals(destination.keySet().size(), 2);logger.info("********** PRINTING RESULTS FOR WEATHER CACHE*************");for (DestinationTypeEnum destinationType :destination.keySet()){logger.infof("%s - Best Place: %sn",destinationType.getDescription(),destination.get(destinationType));}}} When we execute the application, you should expect to see the following output: INFO: Skiing DestinationBest Place: {WeatherInfo:{ country:Switzerland, city:Zurich,day:Mon Jun 02 19:42:22 IST 2014, temp:-12.0, tempMax:-7.0,tempMin:-17.0}INFO: Sun DestinationBest Place: {WeatherInfo:{ country:Spain, city:Barcelona, day:MonJun 02 19:42:22 IST 2014, temp:21.0, tempMax:26.0, tempMin:16.0} Summary In this article, you learned how to work with applications in modern distributed server architecture, using the Map Reduce API, and how it can abstract parallel programming into two simple primitives, the map and reduce methods. We have seen a sample use case Find Destination that demonstrated how use map reduce almost in real time. Resources for Article: Further resources on this subject: MapReduce functions [Article] Hadoop and MapReduce [Article] Introduction to MapReduce [Article]

Creating a Spring Application

Packt
25 May 2015
18 min read
In this article by Jérôme Jaglale, author of the book Spring Cookbook , we will cover the following recipes: Installing Java, Maven, Tomcat, and Eclipse on Mac OS Installing Java, Maven, Tomcat, and Eclipse on Ubuntu Installing Java, Maven, Tomcat, and Eclipse on Windows Creating a Spring web application Running a Spring web application Using Spring in a standard Java application (For more resources related to this topic, see here.) Introduction In this article, we will first cover the installation of some of the tools for Spring development: Java: Spring is a Java framework. Maven: This is a build tool similar to Ant. It makes it easy to add Spring libraries to a project. Gradle is another option as a build tool. Tomcat: This is a web server for Java web applications. You can also use JBoss, Jetty, GlassFish, or WebSphere. Eclipse: This is an IDE. You can also use NetBeans, IntelliJ IDEA, and so on. Then, we will build a Springweb application and run it with Tomcat. Finally, we'll see how Spring can also be used in a standard Java application (not a web application). Installing Java, Maven, Tomcat, and Eclipse on Mac OS We will first install Java 8 because it's not installed by default on Mac OS 10.9 or higher version. Then, we will install Maven 3, a build tool similar to Ant, to manage the external Java libraries that we will use (Spring, Hibernate, and so on). Maven 3 also compiles source files and generates JAR and WAR files. We will also install Tomcat 8, a popular web server for Java web applications, which we will use throughout this book. JBoss, Jetty, GlassFish, or WebSphere could be used instead. Finally, we will install the Eclipse IDE, but you could also use NetBeans, IntelliJ IDEA, and so on. How to do it… Install Java first, then Maven, Tomcat, and Eclipse. Installing Java Download Java from the Oracle website http://oracle.com. In the Java SE downloads section, choose the Java SE 8 SDK. Select Accept the License Agreement and download the Mac OS X x64 package. The direct link to the page is http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html. Open the downloaded file, launch it, and complete the installation. In your ~/.bash_profile file, set the JAVA_HOME environment variable. Change jdk1.8.0_40.jdk to the actual folder name on your system (this depends on the version of Java you are using, which is updated regularly): export JAVA_HOME="/Library/Java/JavaVirtualMachines/ jdk1.8.0_40.jdk/Contents/Home" Open a new terminal and test whether it's working: $ java -versionjava version "1.8.0_40"Java(TM) SE Runtime Environment (build 1.8.0_40-b26)Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25, mixed mode) Installing Maven Download Maven from the Apache website http://maven.apache.org/download.cgi. Choose the Binary zip file of the current stable version: Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin). In your ~/.bash_profile file, add a MAVEN HOME environment variable pointing to that folder. For example: export MAVEN_HOME=~/bin/apache-maven-3.3.1 Add the bin subfolder to your PATH environment variable: export PATH=$PATH:$MAVEN_HOME/bin Open a new terminal and test whether it's working: $ mvn –vApache Maven 3.3.1 (12a6b3...Maven home: /Users/jerome/bin/apache-maven-3.3.1Java version: 1.8.0_40, vendor: Oracle CorporationJava home: /Library/Java/JavaVirtualMachines/jdk1.8.0_...Default locale: en_US, platform encoding: UTF-8OS name: "mac os x", version: "10.9.5", arch... 
… Installing Tomcat Download Tomcat from the Apache website http://tomcat.apache.org/download-80.cgi and choose the Core binary distribution. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin). Make the scripts in the bin subfolder executable: chmod +x bin/*.sh Launch Tomcat using the catalina.sh script: $ bin/catalina.sh runUsing CATALINA_BASE:   /Users/jerome/bin/apache-tomcat-7.0.54...INFO: Server startup in 852 ms Tomcat runs on the 8080 port by default. In a web browser, go to http://localhost:8080/ to check whether it's working. Installing Eclipse Download Eclipse from http://www.eclipse.org/downloads/. Choose the Mac OS X 64 Bit version of Eclipse IDE for Java EE Developers. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin). Launch Eclipse by executing the eclipse binary: ./eclipse There's more… Tomcat can be run as a background process using these two scripts: bin/startup.shbin/shutdown.sh On a development machine, it's convenient to put Tomcat's folder somewhere in the home directory (for example, ~/bin) so that its contents can be updated without root privileges. Installing Java, Maven, Tomcat, and Eclipse on Ubuntu We will first install Java 8. Then, we will install Maven 3, a build tool similar to Ant, to manage the external Java libraries that we will use (Spring, Hibernate, so on). Maven 3 also compiles source files and generates JAR and WAR files. We will also install Tomcat 8, a popular web server for Java web applications, which we will use throughout this book. JBoss, Jetty, GlassFish, or WebSphere could be used instead. Finally, we will install the EclipseIDE, but you could also use NetBeans, IntelliJ IDEA, and so on. How to do it… Install Java first, then Maven, Tomcat, and Eclipse. Installing Java Add this PPA (Personal Package Archive): sudo add-apt-repository -y ppa:webupd8team/java Refresh the list of the available packages: sudo apt-get update Download and install Java 8: sudo apt-get install –y oracle-java8-installer Test whether it's working: $ java -versionjava version "1.8.0_40"Java(TM) SE Runtime Environment (build 1.8.0_40-b25)...Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25… Installing Maven Download Maven from the Apache website http://maven.apache.org/download.cgi. Choose the Binary zip file of the current stable version:   Uncompress the downloaded file and move the resulting folder to a convenient location (for example, ~/bin). In your ~/.bash_profile file, add a MAVEN HOME environment variable pointing to that folder. For example: export MAVEN_HOME=~/bin/apache-maven-3.3.1 Add the bin subfolder to your PATH environment variable: export PATH=$PATH:$MAVEN_HOME/bin Open a new terminal and test whether it's working: $ mvn –vApache Maven 3.3.1 (12a6b3...Maven home: /home/jerome/bin/apache-maven-3.3.1Java version: 1.8.0_40, vendor: Oracle Corporation... Installing Tomcat Download Tomcat from the Apache website http://tomcat.apache.org/download-80.cgi and choose the Core binary distribution.   Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin). Make the scripts in the bin subfolder executable: chmod +x bin/*.sh Launch Tomcat using the catalina.sh script: $ bin/catalina.sh run Using CATALINA_BASE:   /Users/jerome/bin/apache-tomcat-7.0.54 ... INFO: Server startup in 852 ms Tomcat runs on the 8080 port by default. Go to http://localhost:8080/ to check whether it's working. 
Installing Eclipse Download Eclipse from http://www.eclipse.org/downloads/. Choose the Linux 64 Bit version of Eclipse IDE for Java EE Developers.   Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin). Launch Eclipse by executing the eclipse binary: ./eclipse There's more… Tomcat can be run as a background process using these two scripts: bin/startup.sh bin/shutdown.sh On a development machine, it's convenient to put Tomcat's folder somewhere in the home directory (for example, ~/bin) so that its contents can be updated without root privileges. Installing Java, Maven, Tomcat, and Eclipse on Windows We will first install Java 8. Then, we will install Maven 3, a build tool similar to Ant, to manage the external Java libraries that we will use (Spring, Hibernate, and so on). Maven 3 also compiles source files and generates JAR and WAR files. We will also install Tomcat 8, a popular web server for Java web applications, which we will use throughout this book. JBoss, Jetty, GlassFish, or WebSphere could be used instead. Finally, we will install the Eclipse IDE, but you could also use NetBeans, IntelliJ IDEA, and so on. How to do it… Install Java first, then Maven, Tomcat, and Eclipse. Installing Java Download Java from the Oracle website http://oracle.com. In the Java SE downloads section, choose the Java SE 8 SDK. Select Accept the License Agreement and download the Windows x64 package. The direct link to the page is http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.   Open the downloaded file, launch it, and complete the installation. Navigate to Control Panel | System and Security | System | Advanced system settings | Environment Variables…. Add a JAVA_HOME system variable with the C:Program FilesJavajdk1.8.0_40 value. Change jdk1.8.0_40 to the actual folder name on your system (this depends on the version of Java, which is updated regularly). Test whether it's working by opening Command Prompt and entering java –version. Installing Maven Download Maven from the Apache website http://maven.apache.org/download.cgi. Choose the Binary zip file of the current stable version:   Uncompress the downloaded file. Create a Programs folder in your user folder. Move the extracted folder to it. Navigate to Control Panel | System and Security | System | Advanced system settings | Environment Variables…. Add a MAVEN_HOME system variable with the path to the Maven folder. For example, C:UsersjeromeProgramsapache-maven-3.2.1. Open the Path system variable. Append ;%MAVEN_HOME%bin to it.   Test whether it's working by opening a Command Prompt and entering mvn –v.   Installing Tomcat Download Tomcat from the Apache website http://tomcat.apache.org/download-80.cgi and choose the 32-bit/64-bit Windows Service Installer binary distribution.   Launch and complete the installation. Tomcat runs on the 8080 port by default. Go to http://localhost:8080/ to check whether it's working. Installing Eclipse Download Eclipse from http://www.eclipse.org/downloads/. Choose the Windows 64 Bit version of Eclipse IDE for Java EE Developers.   Uncompress the downloaded file. Launch the eclipse program. Creating a Spring web application In this recipe, we will build a simple Spring web application with Eclipse. We will: Create a new Maven project Add Spring to it Add two Java classes to configure Spring Create a "Hello World" web page In the next recipe, we will compile and run this web application. 
How to do it… In this section, we will create a Spring web application in Eclipse. Creating a new Maven project in Eclipse In Eclipse, in the File menu, select New | Project…. Under Maven, select Maven Project and click on Next >. Select the Create a simple project (skip archetype selection) checkbox and click on Next >. For the Group Id field, enter com.springcookbook. For the Artifact Id field, enter springwebapp. For Packaging, select war and click on Finish. Adding Spring to the project using Maven Open Maven's pom.xml configuration file at the root of the project. Select the pom.xml tab to edit the XML source code directly. Under the project XML node, define the versions for Java and Spring. Also add the Servlet API, Spring Core, and Spring MVC dependencies: <properties> <java.version>1.8</java.version> <spring.version>4.1.5.RELEASE</spring.version> </properties>   <dependencies> <!-- Servlet API --> <dependency>    <groupId>javax.servlet</groupId>    <artifactId>javax.servlet-api</artifactId>    <version>3.1.0</version>    <scope>provided</scope> </dependency>   <!-- Spring Core --> <dependency>    <groupId>org.springframework</groupId>    <artifactId>spring-context</artifactId>    <version>${spring.version}</version> </dependency>   <!-- Spring MVC --> <dependency>    <groupId>org.springframework</groupId>    <artifactId>spring-webmvc</artifactId>    <version>${spring.version}</version> </dependency> </dependencies> Creating the configuration classes for Spring Create the Java packages com.springcookbook.config and com.springcookbook.controller; in the left-hand side pane Package Explorer, right-click on the project folder and select New | Package…. In the com.springcookbook.config package, create the AppConfig class. In the Source menu, select Organize Imports to add the needed import declarations: package com.springcookbook.config; @Configuration @EnableWebMvc @ComponentScan (basePackages = {"com.springcookbook.controller"}) public class AppConfig { } Still in the com.springcookbook.config package, create the ServletInitializer class. Add the needed import declarations similarly: package com.springcookbook.config;   public class ServletInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {    @Override    protected Class<?>[] getRootConfigClasses() {        return new Class<?>[0];    }       @Override    protected Class<?>[] getServletConfigClasses() {        return new Class<?>[]{AppConfig.class};    }      @Override    protected String[] getServletMappings() {        return new String[]{"/"};    } } Creating a "Hello World" web page In the com.springcookbook.controller package, create the HelloController class and its hi() method: @Controller public class HelloController { @RequestMapping("hi") @ResponseBody public String hi() {      return "Hello, world."; } } How it works… This section will give more you details of what happened at every step. Creating a new Maven project in Eclipse The generated Maven project is a pom.xml configuration file along with a hierarchy of empty directories: pom.xml src |- main    |- java    |- resources    |- webapp |- test    |- java    |- resources Adding Spring to the project using Maven The declared Maven libraries and their dependencies are automatically downloaded in the background by Eclipse. They are listed under Maven Dependencies in the left-hand side pane Package Explorer. Tomcat provides the Servlet API dependency, but we still declared it because our code needs it to compile. 
Maven will not include it in the generated .war file because of the <scope>provided</scope> declaration. Creating the configuration classes for Spring AppConfig is a Spring configuration class. It is a standard Java class annotated with: @Configuration: This declares it as a Spring configuration class @EnableWebMvc: This enables Spring's ability to receive and process web requests @ComponentScan(basePackages = {"com.springcookbook.controller"}): This scans the com.springcookbook.controller package for Spring components ServletInitializer is a configuration class for Spring's servlet; it replaces the standard web.xml file. It will be detected automatically by SpringServletContainerInitializer, which is automatically called by any Servlet 3. ServletInitializer extends the AbstractAnnotationConfigDispatcherServletInitializer abstract class and implements the required methods: getServletMappings(): This declares the servlet root URI. getServletConfigClasses(): This declares the Spring configuration classes. Here, we declared the AppConfig class that was previously defined. Creating a "Hello World" web page We created a controller class in the com.springcookbook.controller package, which we declared in AppConfig. When navigating to http://localhost:8080/hi, the hi()method will be called and Hello, world. will be displayed in the browser. Running a Spring web application In this recipe, we will use the Spring web application from the previous recipe. We will compile it with Maven and run it with Tomcat. How to do it… Here are the steps to compile and run a Spring web application: In pom.xml, add this boilerplate code under the project XML node. It will allow Maven to generate .war files without requiring a web.xml file: <build>    <finalName>springwebapp</finalName> <plugins>    <plugin>      <groupId>org.apache.maven.plugins</groupId>      <artifactId>maven-war-plugin</artifactId>      <version>2.5</version>      <configuration>       <failOnMissingWebXml>false</failOnMissingWebXml>      </configuration>    </plugin> </plugins> </build> In Eclipse, in the left-hand side pane Package Explorer, select the springwebapp project folder. In the Run menu, select Run and choose Maven install or you can execute mvn clean install in a terminal at the root of the project folder. In both cases, a target folder will be generated with the springwebapp.war file in it. Copy the target/springwebapp.war file to Tomcat's webapps folder. Launch Tomcat. In a web browser, go to http://localhost:8080/springwebapp/hi to check whether it's working.   How it works… In pom.xml the boilerplate code prevents Maven from throwing an error because there's no web.xml file. A web.xml file was required in Java web applications; however, since Servlet specification 3.0 (implemented in Tomcat 7 and higher versions), it's not required anymore. There's more… On Mac OS and Linux, you can create a symbolic link in Tomcat's webapps folder pointing to the.war file in your project folder. For example: ln -s ~/eclipse_workspace/spring_webapp/target/springwebapp.war ~/bin/apache-tomcat/webapps/springwebapp.war So, when the.war file is updated in your project folder, Tomcat will detect that it has been modified and will reload the application automatically. Using Spring in a standard Java application In this recipe, we will build a standard Java application (not a web application) using Spring. 
We will: Create a new Maven project Add Spring to it Add a class to configure Spring Add a User class Define a User singleton in the Spring configuration class Use the User singleton in the main() method How to do it… In this section, we will cover the steps to use Spring in a standard (not web) Java application. Creating a new Maven project in Eclipse In Eclipse, in the File menu, select New | Project.... Under Maven, select Maven Project and click on Next >. Select the Create a simple project (skip archetype selection) checkbox and click on Next >. For the Group Id field, enter com.springcookbook. For the Artifact Id field, enter springapp. Click on Finish. Adding Spring to the project using Maven Open Maven's pom.xml configuration file at the root of the project. Select the pom.xml tab to edit the XML source code directly. Under the project XML node, define the Java and Spring versions and add the Spring Core dependency: <properties> <java.version>1.8</java.version> <spring.version>4.1.5.RELEASE</spring.version> </properties>   <dependencies> <!-- Spring Core --> <dependency>    <groupId>org.springframework</groupId>    <artifactId>spring-context</artifactId>    <version>${spring.version}</version> </dependency> </dependencies> Creating a configuration class for Spring Create the com.springcookbook.config Java package; in the left-hand side pane Package Explorer, right-click on the project and select New | Package…. In the com.springcookbook.config package, create the AppConfig class. In the Source menu, select Organize Imports to add the needed import declarations: @Configuration public class AppConfig { } Creating the User class Create a User Java class with two String fields: public class User { private String name; private String skill; public String getName() {    return name; } public void setName(String name) {  this.name = name; } public String getSkill() {    return skill; } public void setSkill(String skill) {    this.skill = skill; } } Defining a User singleton in the Spring configuration class In the AppConfig class, define a User bean: @Bean public User admin(){    User u = new User();    u.setName("Merlin");    u.setSkill("Magic");    return u; } Using the User singleton in the main() method Create the com.springcookbook.main package with the Main class containing the main() method: package com.springcookbook.main; public class Main { public static void main(String[] args) { } } In the main() method, retrieve the User singleton and print its properties: AnnotationConfigApplicationContext springContext = new AnnotationConfigApplicationContext(AppConfig.class);   User admin = (User) springContext.getBean("admin");   System.out.println("admin name: " + admin.getName()); System.out.println("admin skill: " + admin.getSkill());   springContext.close(); Test whether it's working; in the Run menu, select Run.   How it works... We created a Java project to which we added Spring. We defined a User bean called admin (the bean name is by default the bean method name). In the Main class, we created a Spring context object from the AppConfig class and retrieved the admin bean from it. We used the bean and finally, closed the Spring context. Summary In this article, we have learned how to install some of the tools for Spring development. Then, we learned how to build a Springweb application and run it with Tomcat. Finally, we saw how Spring can also be used in a standard Java application.

Financial Derivative – Options

Packt
22 May 2015
27 min read
In this article by Michael Heydt, author of Mastering pandas for Finance, we will examine working with options data provided by Yahoo! Finance using pandas. Options are a type of financial derivative and can be very complicated to price and use in investment portfolios. Because of their level of complexity, there have been many books written that are very heavy on the mathematics of options. Our goal will not be to cover the mathematics in detail but to focus on understanding several core concepts in options, retrieving options data from the Internet, manipulating it using pandas, including determining their value, and being able to check the validity of the prices offered in the market. (For more resources related to this topic, see here.) Introducing options An option is a contract that gives the buyer the right, but not the obligation, to buy or sell an underlying security at a specific price on or before a certain date. Options are considered derivatives as their price is derived from one or more underlying securities. Options involve two parties: the buyer and the seller. The parties buy and sell the option, not the underlying security. There are two general types of options: the call and the put. Let's look at them in detail: Call: This gives the holder of the option the right to buy an underlying security at a certain price within a specific period of time. They are similar to having a long position on a stock. The buyer of a call is hoping that the value of the underlying security will increase substantially before the expiration of the option and, therefore, they can buy the security at a discount from the future value. Put: This gives the option holder the right to sell an underlying security at a certain price within a specific period of time. A put is similar to having a short position on a stock. The buyer of a put is betting that the price of the underlying security will fall before the expiration of the option and they will, thereby, be able to gain a profit by benefitting from receiving the payment in excess of the future market value. The basic idea is that one side of the party believes that the underlying security will increase in value and the other believes it will decrease. They will agree upon a price known as the strike price, where they place their bet on whether the price of the underlying security finishes above or below this strike price on the expiration date of the option. Through the contract of the option, the option seller agrees to give the buyer the underlying security on the expiry of the option if the price is above the strike price (for a call). The price of the option is referred to as the premium. This is the amount the buyer will pay to the seller to receive the option. This price of an option depends upon many factors, of which the following are the primary factors: The current price of the underlying security How long the option needs to be held before it expires (the expiry date) The strike price on the expiry date of the option The interest rate of capital in the market The volatility of the underlying security There being an adequate interest between buyer and seller around the given option The premium is often established so that the buyer can speculate on the future value of the underlying security and be able to gain rights to the underlying security in the future at a discount in the present. 
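Although this article does not work through the pricing mathematics, it can help to see how the factors listed above feed into a theoretical premium. The following is only a rough sketch using the well-known Black-Scholes formula; the function itself and the sample inputs (a 30-day horizon, a 1 percent rate, and 25 percent volatility) are illustrative assumptions and are not part of the article's notebook:

# Illustrative sketch only: Black-Scholes premium from the factors above.
# S = underlying price, K = strike, T = years to expiry,
# r = risk-free rate, sigma = annualized volatility.
import numpy as np
from scipy.stats import norm

def bs_premium(S, K, T, r, sigma, kind='call'):
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    if kind == 'call':
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

bs_premium(128.79, 130, 30 / 365., 0.01, 0.25)   # hypothetical inputs

The point of the sketch is simply that every factor in the list above appears as an input; we will rely on the market-quoted prices rather than this model for the rest of the article.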
The holder of the option, known as the buyer, is not obliged to exercise the option on its expiration date, but the writer, also referred to as the seller, however, is obliged to buy or sell the instrument if the option is exercised. Options can provide a variety of benefits such as the ability to limit risk and the advantage of providing leverage. They are often used to diversify an investment portfolio to lower risk during times of rising or falling markets. There are four types of participants in an options market: Buyers of calls Sellers of calls Buyers of puts Sellers of puts Buyers of calls believe that the underlying security will exceed a certain level and are not only willing to pay a certain amount to see whether that happens, but also lose their entire premium if it does not. Their goal is that the resulting payout of the option exceeds their initial premium and they, therefore, make a profit. However, they are willing to forgo their premium in its entirety if it does not clear the strike price. This then becomes a game of managing the risk of the profit versus the fixed potential loss. Sellers of calls are on the other side of buyers. They believe the price will drop and that the amount they receive in payment for the premium will exceed any loss in the price. Normally, the seller of a call would already own the stock. They do not believe the price will exceed the strike price and that they will be able to keep the underlying security and profit if the underlying security stays below the strike by an amount that does not exceed the received premium. Loss is potentially unbounded as the stock increases in price above the strike price, but that is the risk for an upfront receipt of cash and potential gains on loss of price in the underlying instrument. A buyer of a put is betting that the price of the stock will drop beyond a certain level. By buying a put they gain the option to force someone to buy the underlying instrument at a fixed price. By doing this, they are betting that they can force the sale of the underlying instrument at a strike price that is higher than the market price and in excess of the premium that they pay to the seller of the put option. On the other hand, the seller of the put is betting that they can make an offer on an instrument that is perceived to lose value in the future. They will offer the option for a price that gives them cash upfront, and they plan that at maturity of the option, they will not be forced to purchase the underlying instrument. Therefore, it keeps the premium as pure profit. Or, the price of the underlying instruments drops only a small amount so that the price of buying the underlying instrument relative to its market price does not exceed the premium that they received. Notebook setup The examples in this article will be based on the following configuration in IPython: In [1]:    import pandas as pd    import numpy as np    import pandas.io.data as web    from datetime import datetime      import matplotlib.pyplot as plt    %matplotlib inline      pd.set_option('display.notebook_repr_html', False)    pd.set_option('display.max_columns', 7)    pd.set_option('display.max_rows', 15)    pd.set_option('display.width', 82)    pd.set_option('precision', 3) Options data from Yahoo! Finance Options data can be obtained from several sources. Publicly listed options are exchanged on the Chicago Board Options Exchange (CBOE) and can be obtained from their website. 
Through the DataReader class, pandas also provides built-in (although in the documentation referred to as experimental) access to options data. The following command reads all currently available options data for AAPL: In [2]:    aapl_options = web.Options('AAPL', 'yahoo') aapl_options = aapl_options.get_all_data().reset_index() This operation can take a while as it downloads quite a bit of data. Fortunately, it is cached so that subsequent calls will be quicker, and there are other calls to limit the types of data downloaded (such as getting just puts). For convenience, the following command will save this data to a file for quick reload at a later time. Also, it helps with repeatability of the examples. The data retrieved changes very frequently, so the actual examples in the book will use the data in the file provided with the book. It saves the data for later use (it's commented out for now so as not to overwrite the existing file). Here's the command we are talking about: In [3]:    #aapl_options.to_csv('aapl_options.csv') This data file can be reloaded with the following command: In [4]:    aapl_options = pd.read_csv('aapl_options.csv',                              parse_dates=['Expiry']) Whether from the Web or the file, the following command restructures and tidies the data into a format best used in the examples to follow: In [5]:    aos = aapl_options.sort(['Expiry', 'Strike'])[      ['Expiry', 'Strike', 'Type', 'IV', 'Bid',          'Ask', 'Underlying_Price']]    aos['IV'] = aos['IV'].apply(lambda x: float(x.strip('%'))) Now, we can take a look at the data retrieved: In [6]:    aos   Out[6]:            Expiry Strike Type     IV   Bid   Ask Underlying_Price    158 2015-02-27     75 call 271.88 53.60 53.85           128.79    159 2015-02-27     75 put 193.75 0.00 0.01           128.79    190 2015-02-27     80 call 225.78 48.65 48.80           128.79    191 2015-02-27     80 put 171.88 0.00 0.01           128.79    226 2015-02-27     85 call 199.22 43.65 43.80           128.79 There are 1,103 rows of options data available. The data is sorted by Expiry and then Strike price to help demonstrate examples. Expiry is the data at which the particular option will expire and potentially be exercised. We have the following expiry dates that were retrieved. Options typically are offered by an exchange on a monthly basis and within a short overall duration from several days to perhaps two years. In this dataset, we have the following expiry dates: In [7]:    aos['Expiry'].unique()   Out[7]:    array(['2015-02-26T17:00:00.000000000-0700',          '2015-03-05T17:00:00.000000000-0700',          '2015-03-12T18:00:00.000000000-0600',          '2015-03-19T18:00:00.000000000-0600',          '2015-03-26T18:00:00.000000000-0600',          '2015-04-01T18:00:00.000000000-0600',          '2015-04-16T18:00:00.000000000-0600',          '2015-05-14T18:00:00.000000000-0600',          '2015-07-16T18:00:00.000000000-0600',          '2015-10-15T18:00:00.000000000-0600',          '2016-01-14T17:00:00.000000000-0700',          '2017-01-19T17:00:00.000000000-0700'], dtype='datetime64[ns]') For each option's expiration date, there are multiple options available, split between puts and calls, and with different strike values, prices, and associated risk values. As an example, the option with the index 158 that expires on 2015-02-27 is for buying a call on AAPL with a strike price of $75. The price we would pay for each share of AAPL would be the bid price of $53.60. 
Options typically sell 100 units of the underlying security, and, therefore, this would mean that this option would cost of 100 x $53.60 or $5,360 upfront: In [8]:    aos.loc[158]   Out[8]:    Expiry             2015-02-27 00:00:00    Strike                               75    Type                              call    IV                                 272    Bid                               53.6    Ask                               53.9    Underlying_Price                   129    Name: 158, dtype: object This $5,360 does not buy us the 100 shares of AAPL. It gives us the right to buy 100 shares of AAPL on 2015-02-27 at $75 per share. We should only buy if the price of AAPL is above $75 on 2015-02-27. If not, we will have lost our premium of $5360 and purchasing below will only increase our loss. Also, note that these quotes were retrieved on 2015-02-25. This specific option has only two days until it expires. That has a huge effect on the pricing: We have paid $5,360 for the option to buy 100 shares of AAPL on 2015-02-27 if the price of AAPL is above $75 on that date. The price of AAPL when the option was priced was $128.79 per share. If we were to buy 100 shares of AAPL now, we would have paid $12,879 now. If AAPL is above $75 on 2015-02-27, we can buy 100 shares for $7500. There is not a lot of time between the quote and Expiry of this option. With AAPL being at $128.79, it is very likely that the price will be above $75 in two days. Therefore, in two days: We can walk away if the price is $75 or above. Since we paid $5360, we probably wouldn't want to do that. At $75 or above, we can force execution of the option, where we give the seller $7,500 and receive 100 shares of AAPL. If the price of AAPL is still $128.79 per share, then we will have bought $12,879 of AAPL for $7,500+$5,360, or $12,860 in total. In technicality, we will have saved $19 over two days! But only if the price didn't drop. If for some reason, AAPL dropped below $75 in two days, we kept our loss to our premium of $5,360. This is not great, but if we had bought $12,879 of AAPL on 2015-02-5 and it dropped to $74.99 on 2015-02-27, we would have lost $12,879 – $7,499, or $5,380. So, we actually would have saved $20 in loss by buying the call option. It is interesting how this math works out. Excluding transaction fees, options are a zero-loss game. It just comes down to how much risk is involved in the option versus your upfront premium and how the market moves. If you feel you know something, it can be quite profitable. Of course, it can also be devastatingly unprofitable. We will not examine the put side of this example. It would suffice to say it works out similarly from the side of the seller. Implied volatility There is one more field in our dataset that we didn't look at—implied volatility (IV). We won't get into the details of the mathematics of how this is calculated, but this reflects the amount of volatility that the market has factored into the option. This is different than historical volatility (typically the standard deviation of the previous year of returns). In general, it is informative to examine the IV relative to the strike price on a particular Expiry date. 
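For comparison, a historical (realized) volatility can be estimated from the underlying's past returns. The following sketch is not part of the article's dataset; it assumes we pull roughly a year of daily AAPL closes through pandas' DataReader and annualize the standard deviation of the daily returns:

# Illustrative sketch: realized volatility from ~1 year of daily closes.
# The date range is an assumption chosen to end at the quote date.
aapl = web.DataReader('AAPL', 'yahoo',
                      datetime(2014, 2, 25), datetime(2015, 2, 25))
daily_returns = aapl['Adj Close'].pct_change().dropna()
hist_vol = daily_returns.std() * np.sqrt(252) * 100   # annualized, in percent
hist_vol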
The following command shows this in tabular form for calls on 2015-02-27: In [9]:    calls1 = aos[(aos.Expiry=='2015-02-27') & (aos.Type=='call')]    calls1[:5]   Out[9]:            Expiry Strike Type     IV   Bid   Ask Underlying_Price    158 2015-02-27     75 call 271.88 53.60 53.85           128.79    159 2015-02-27     75   put 193.75 0.00   0.01           128.79    190 2015-02-27     80 call 225.78 48.65 48.80           128.79    191 2015-02-27     80   put 171.88 0.00   0.01           128.79    226 2015-02-27     85 call 199.22 43.65 43.80           128.79 It appears that as the strike price approaches the underlying price, the implied volatility decreases. Plotting this shows it even more clearly: In [10]:    ax = aos[(aos.Expiry=='2015-02-27') & (aos.Type=='call')] \            .set_index('Strike')[['IV']].plot(figsize=(12,8))    ax.axvline(calls1.Underlying_Price.iloc[0], color='g'); The shape of this curve is important as it defines points where options are considered to be either in or out of the money. A call option is referred to as in the money when the options strike price is below the market price of the underlying instrument. A put option is in the money when the strike price is above the market price of the underlying instrument. Being in the money does not mean that you will profit; it simply means that the option is worth exercising. Where and when an option is in our out of the money can be visualized by examining the shape of its implied volatility curve. Because of this curved shape, it is generally referred to as a volatility smile as both ends tend to turn upwards on both ends, particularly, if the curve has a uniform shape around its lowest point. This is demonstrated in the following graph, which shows the nature of in/out of the money for both puts and calls: A skew on the smile demonstrates a relative demand that is greater toward the option being in or out of the money. When this occurs, the skew is often referred to as a smirk. Volatility smirks Smirks can either be reverse or forward. The following graph demonstrates a reverse skew, similar to what we have seen with our AAPL 2015-02-27 call: In a reverse-skew smirk, the volatility for options at lower strikes is higher than at higher strikes. This is the case with our AAPL options expiring on 2015-02-27. This means that the in-the-money calls and out-of-the-money puts are more expensive than out-of-the-money calls and in-the-money puts. A popular explanation for the manifestation of the reverse volatility skew is that investors are generally worried about market crashes and buy puts for protection. One piece of evidence supporting this argument is the fact that the reverse skew did not show up for equity options until after the crash of 1987. Another possible explanation is that in-the-money calls have become popular alternatives to outright stock purchases as they offer leverage and, hence, increased ROI. This leads to greater demand for in-the-money calls and, therefore, increased IV at the lower strikes. The other variant of the volatility smirk is the forward skew. In the forward-skew pattern, the IV for options at the lower strikes is lower than the IV at higher strikes. This suggests that out-of-the-money calls and in-the-money puts are in greater demand compared to in-the-money calls and out-of-the-money puts: The forward-skew pattern is common for options in the commodities market. When supply is tight, businesses would rather pay more to secure supply than to risk supply disruption. 
For example, if weather reports indicate a heightened possibility of an impending frost, fear of supply disruption will cause businesses to drive up demand for out-of-the-money calls for the affected crops. Calculating payoff on options The payoff of an option is a relatively straightforward calculation based upon the type of the option and is derived from the price of the underlying security on expiry relative to the strike price. The formula for the call option payoff is as follows: The formula for the put option payoff is as follows: We will model both of these functions and visualize their payouts. The call option payoff calculation An option gives the buyer of the option the right to buy (a call option) or sell (a put option) an underlying security at a point in the future and at a predetermined price. A call option is basically a bet on whether or not the price of the underlying instrument will exceed the strike price. Your bet is the price of the option (the premium). On the expiry date of a call, the value of the option is 0 if the strike price has not been exceeded. If it has been exceeded, its value is the market value of the underlying security. The general value of a call option can be calculated with the following function: In [11]:    def call_payoff(price_at_maturity, strike_price):        return max(0, price_at_maturity - strike_price) When the price of the underlying instrument is below the strike price, the value is 0 (out of the money). This can be seen here: In [12]:    call_payoff(25, 30)   Out[12]:    0 When it is above the strike price (in the money), it will be the difference of the price and the strike price: In [13]:    call_payoff(35, 30)   Out[13]:    5 The following function returns a DataFrame object that calculates the return for an option over a range of maturity prices. It uses np.vectorize() to efficiently apply the call_payoff() function to each item in the specific column of the DataFrame: In [14]:    def call_payoffs(min_maturity_price, max_maturity_price,                    strike_price, step=1):        maturities = np.arange(min_maturity_price,                              max_maturity_price + step, step)        payoffs = np.vectorize(call_payoff)(maturities, strike_price)        df = pd.DataFrame({'Strike': strike_price, 'Payoff': payoffs},                          index=maturities)        df.index.name = 'Maturity Price'    return df The following command demonstrates the use of this function to calculate payoff of an underlying security at finishing prices ranging from 10 to 25 and with a strike price of 15: In [15]:    call_payoffs(10, 25, 15)   Out[15]:                    Payoff Strike    Maturity Price                  10                   0     15    11                   0     15    12                   0     15    13                   0     15    14                   0     15    ...               ...     ...    
21                   6     15    22                  7     15    23                   8     15    24                   9     15    25                 10     15      [16 rows x 2 columns] Using this result, we can visualize the payoffs using the following function: In [16]:    def plot_call_payoffs(min_maturity_price, max_maturity_price,                          strike_price, step=1):        payoffs = call_payoffs(min_maturity_price, max_maturity_price,                              strike_price, step)        plt.ylim(payoffs.Payoff.min() - 10, payoffs.Payoff.max() + 10)        plt.ylabel("Payoff")        plt.xlabel("Maturity Price")        plt.title('Payoff of call option, Strike={0}'                  .format(strike_price))        plt.xlim(min_maturity_price, max_maturity_price)        plt.plot(payoffs.index, payoffs.Payoff.values); The payoffs are visualized as follows: In [17]:    plot_call_payoffs(10, 25, 15) The put option payoff calculation The value of a put option can be calculated with the following function: In [18]:    def put_payoff(price_at_maturity, strike_price):        return max(0, strike_price - price_at_maturity) While the price of the underlying is below the strike price, the value is 0: In [19]:    put_payoff(25, 20)   Out[19]:    0 When the price is below the strike price, the value of the option is the difference between the strike price and the price: In [20]:    put_payoff(15, 20)   Out [20]:    5 This payoff for a series of prices can be calculated with the following function: In [21]:    def put_payoffs(min_maturity_price, max_maturity_price,                    strike_price, step=1):        maturities = np.arange(min_maturity_price,                              max_maturity_price + step, step)        payoffs = np.vectorize(put_payoff)(maturities, strike_price)       df = pd.DataFrame({'Payoff': payoffs, 'Strike': strike_price},                          index=maturities)        df.index.name = 'Maturity Price'        return df The following command demonstrates the values of the put payoffs for prices of 10 through 25 with a strike price of 25: In [22]:    put_payoffs(10, 25, 15)   Out [22]:                    Payoff Strike    Maturity Price                  10                   5     15    11                   4     15    12                   3     15    13                  2     15    14                   1     15    ...               ...     ...    
21                   0     15    22                   0     15    23                   0     15    24                   0     15    25                   0      15      [16 rows x 2 columns] The following function will generate a graph of payoffs: In [23]:    def plot_put_payoffs(min_maturity_price,                        max_maturity_price,                        strike_price,                        step=1):        payoffs = put_payoffs(min_maturity_price,                              max_maturity_price,                              strike_price, step)        plt.ylim(payoffs.Payoff.min() - 10, payoffs.Payoff.max() + 10)        plt.ylabel("Payoff")      plt.xlabel("Maturity Price")        plt.title('Payoff of put option, Strike={0}'                  .format(strike_price))        plt.xlim(min_maturity_price, max_maturity_price)        plt.plot(payoffs.index, payoffs.Payoff.values); The following command demonstrates the payoffs for prices between 10 and 25 with a strike price of 15: In [24]:    plot_put_payoffs(10, 25, 15) Summary In this article, we examined several techniques for using pandas to calculate the prices of options, their payoffs, and profit and loss for the various combinations of calls and puts for both buyers and sellers. Resources for Article: Further resources on this subject: Why Big Data in the Financial Sector? [article] Building Financial Functions into Excel 2010 [article] Using indexes to manipulate pandas objects [article]

Welcome to the Spring Framework

Packt
30 Apr 2015
17 min read
In this article by Ravi Kant Soni, author of the book Learning Spring Application Development, you will be closely acquainted with the Spring Framework. Spring is an open source framework created by Rod Johnson to address the complexity of enterprise application development. Spring is now a long time de facto standard for Java enterprise software development. The framework was designed with developer productivity in mind and this makes it easier to work with the existing Java and JEE APIs. Using Spring, we can develop standalone applications, desktop applications, two tier applications, web applications, distributed applications, enterprise applications, and so on. (For more resources related to this topic, see here.) Features of the Spring Framework Lightweight: Spring is described as a lightweight framework when it comes to size and transparency. Lightweight frameworks reduce complexity in application code and also avoid unnecessary complexity in their own functioning. Non intrusive: Non intrusive means that your domain logic code has no dependencies on the framework itself. Spring is designed to be non intrusive. Container: Spring's container is a lightweight container, which contains and manages the life cycle and configuration of application objects. Inversion of control (IoC): Inversion of Control is an architectural pattern. This describes the Dependency Injection that needs to be performed by external entities instead of creating dependencies by the component itself. Aspect-oriented programming (AOP): Aspect-oriented programming refers to the programming paradigm that isolates supporting functions from the main program's business logic. It allows developers to build the core functionality of a system without making it aware of the secondary requirements of this system. JDBC exception handling: The JDBC abstraction layer of the Spring Framework offers a exceptional hierarchy that simplifies the error handling strategy. Spring MVC Framework: Spring comes with an MVC web application framework to build robust and maintainable web applications. Spring Security: Spring Security offers a declarative security mechanism for Spring-based applications, which is a critical aspect of many applications. ApplicationContext ApplicationContext is defined by the org.springframework.context.ApplicationContext interface. BeanFactory provides a basic functionality, while ApplicationContext provides advance features to our spring applications, which make them enterprise-level applications. Create ApplicationContext by using the ClassPathXmlApplicationContext framework API. This API loads the beans configuration file and it takes care of creating and initializing all the beans mentioned in the configuration file: import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext;   public class MainApp {   public static void main(String[] args) {      ApplicationContext context =    new ClassPathXmlApplicationContext("beans.xml");      HelloWorld helloWorld =    (HelloWorld) context.getBean("helloworld");      helloWorld.getMessage(); } } Autowiring modes There are five modes of autowiring that can be used to instruct Spring Container to use autowiring for Dependency Injection. You use the autowire attribute of the <bean/> element to specify the autowire mode for a bean definition. 
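For example (the bean names and classes below are hypothetical), the autowire attribute is set directly on the bean definition; with byName autowiring, the container injects the bean whose name matches the property without an explicit ref:

<!-- Hypothetical example: textEditor has a spellChecker property, so the
     container injects the bean named spellChecker automatically. -->
<bean id="spellChecker" class="com.example.SpellChecker" />

<bean id="textEditor" class="com.example.TextEditor" autowire="byName" />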
The following table explains the different modes of autowire: Mode Description no By default, the Spring bean autowiring is turned off, meaning no autowiring is to be performed. You should use the explicit bean reference called ref for wiring purposes. byName This autowires by the property name. If the bean property is the same as the other bean name, autowire it. The setter method is used for this type of autowiring to inject dependency. byType Data type is used for this type of autowiring. If the data type bean property is compatible with the data type of the other bean, autowire it. Only one bean should be configured for this type in the configuration file; otherwise, a fatal exception will be thrown. constructor This is similar to the byType autowire, but here a constructor is used to inject dependencies. autodetect Spring first tries to autowire by constructor; if this does not work, then it tries to autowire by byType. This option is deprecated. Stereotype annotation Generally, @Component, a parent stereotype annotation, can define all beans. The following table explains the different stereotype annotations: Annotation Use Description @Component Type This is a generic stereotype annotation for any Spring-managed component. @Service Type This stereotypes a component as a service and is used when defining a class that handles the business logic. @Controller Type This stereotypes a component as a Spring MVC controller. It is used when defining a controller class, which composes of a presentation layer and is available only on Spring MVC. @Repository Type This stereotypes a component as a repository and is used when defining a class that handles the data access logic and provide translations on the exception occurred at the persistence layer. Annotation-based container configuration For a Spring IoC container to recognize annotation, the following definition must be added to the configuration file: <?xml version="1.0" encoding="UTF-8"?> <beans xsi_schemaLocation="http://www.springframework.org/schema/beans    http://www.springframework.org/schema/beans/spring-beans.xsd    http://www.springframework.org/schema/context    http://www.springframework.org/schema/context/spring-context-    3.2.xsd">   <context:annotation-config />                             </beans> Aspect-oriented programming (AOP) supports in Spring AOP is used in Spring to provide declarative enterprise services, especially as a replacement for EJB declarative services. Application objects do what they're supposed to do—perform business logic—and nothing more. They are not responsible for (or even aware of) other system concerns, such as logging, security, auditing, locking, and event handling. AOP is a methodology of applying middleware services, such as security services, transaction management services, and so on on the Spring application. Declaring an aspect An aspect can be declared by annotating the POJO class with the @Aspect annotation. This aspect is required to import the org.aspectj.lang.annotation.aspect package. The following code snippet represents the aspect declaration in the @AspectJ form: import org.aspectj.lang.annotation.Aspect; import org.springframework.stereotype.Component;   @Aspect @Component ("myAspect") public class AspectModule { // ... } JDBC with the Spring Framework The DriverManagerDataSource class is used to configure the DataSource for application, which is defined in the Spring.xml configuration file. 
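A minimal configuration sketch of this wiring is shown below; the MySQL driver class, URL, and credentials are placeholder assumptions:

<!-- Sketch of a Spring.xml fragment; the driver and credentials are placeholders. -->
<bean id="dataSource"
      class="org.springframework.jdbc.datasource.DriverManagerDataSource">
  <property name="driverClassName" value="com.mysql.jdbc.Driver" />
  <property name="url" value="jdbc:mysql://localhost:3306/test" />
  <property name="username" value="root" />
  <property name="password" value="" />
</bean>

<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
  <property name="dataSource" ref="dataSource" />
</bean>

With the dataSource bean in place, the JdbcTemplate bean simply references it, which is what the JDBC examples discussed next build on.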
The central class of Spring JDBC's abstraction framework is the JdbcTemplate class that includes the most common logic in using the JDBC API to access data (such as handling the creation of connection, creation of statement, execution of statement, and release of resources). The JdbcTemplate class resides in the org.springframework.jdbc.core package. JdbcTemplate can be used to execute different types of SQL statements. DML is an abbreviation of data manipulation language and is used to retrieve, modify, insert, update, and delete data in a database. Examples of DML are SELECT, INSERT, or UPDATE statements. DDL is an abbreviation of data definition language and is used to create or modify the structure of database objects in a database. Examples of DDL are CREATE, ALTER, and DROP statements. The JDBC batch operation in Spring The JDBC batch operation allows you to submit multiple SQL DataSource to process at once. Submitting multiple SQL DataSource together instead of separately improves the performance: JDBC with batch processing Hibernate with the Spring Framework Data persistence is an ability of an object to save its state so that it can regain the same state. Hibernate is one of the ORM libraries that is available to the open source community. Hibernate is the main component available for a Java developer with features such as POJO-based approach and supports relationship definitions. The object query language used by Hibernate is called as Hibernate Query Language (HQL). HQL is an SQL-like textual query language working at a class level or a field level. Let's start learning the architecture of Hibernate. Hibernate annotations is the powerful way to provide the metadata for the object and relational table mapping. Hibernate provides an implementation of the Java Persistence API so that we can use JPA annotations with model beans. Hibernate will take care of configuring it to be used in CRUD operations. The following table explains JPA annotations: JPA annotation Description @Entity The javax.persistence.Entity annotation is used to mark a class as an entity bean that can be persisted by Hibernate, as Hibernate provides the JPA implementation. @Table The javax.persistence.Table annotation is used to define table mapping and unique constraints for various columns. The @Table annotation provides four attributes, which allows you to override the name of the table, its catalogue, and its schema. This annotation also allows you to enforce unique constraints on columns in the table. For now, we will just use the table name as Employee. @Id Each entity bean will have a primary key, which you annotate on the class with the @Id annotation. The javax.persistence.Id annotation is used to define the primary key for the table. By default, the @Id annotation will automatically determine the most appropriate primary key generation strategy to be used. @GeneratedValue javax.persistence.GeneratedValue is used to define the field that will be autogenerated. It takes two parameters, that is, strategy and generator. The GenerationType.IDENTITY strategy is used so that the generated id value is mapped to the bean and can be retrieved in the Java program. @Column javax.persistence.Column is used to map the field with the table column. We can also specify the length, nullable, and uniqueness for the bean properties. Object-relational mapping (ORM, O/RM, and O/R mapping) ORM stands for Object-relational Mapping. ORM is the process of persisting objects in a relational database such as RDBMS. 
ORM bridges the gap between object and relational schemas, allowing object-oriented application to persist objects directly without having the need to convert object to and from a relational format: Hibernate Query Language (HQL) Hibernate Query Language (HQL) is an object-oriented query language that works on persistence object and their properties instead of operating on tables and columns. To use HQL, we need to use a query object. Query interface is an object-oriented representation of HQL. The query interface provides many methods; let's take a look at a few of them: Method Description public int executeUpdate() This is used to execute the update or delete query public List list() This returns the result of the relation as a list public Query setFirstResult(int rowno) This specifies the row number from where a record will be retrieved public Query setMaxResult(int rowno) This specifies the number of records to be retrieved from the relation (table) public Query setParameter(int position, Object value) This sets the value to the JDBC style query parameter public Query setParameter(String name, Object value) This sets the value to a named query parameter The Spring Web MVC Framework Spring Framework supports web application development by providing comprehensive and intensive support. The Spring MVC framework is a robust, flexible, and well-designed framework used to develop web applications. It's designed in such a way that development of a web application is highly configurable to Model, View, and Controller. In an MVC design pattern, Model represents the data of a web application, View represents the UI, that is, user interface components, such as checkbox, textbox, and so on, that are used to display web pages, and Controller processes the user request. Spring MVC framework supports the integration of other frameworks, such as Struts and WebWork, in a Spring application. This framework also helps in integrating other view technologies, such as Java Server Pages (JSP), velocity, tiles, and FreeMarker in a Spring application. The Spring MVC Framework is designed around a DispatcherServlet. The DispatcherServlet dispatches the http request to handler, which is a very simple controller interface. The Spring MVC Framework provides a set of the following web support features: Powerful configuration of framework and application classes: The Spring MVC Framework provides a powerful and straightforward configuration of framework and application classes (such as JavaBeans). Easier testing: Most of the Spring classes are designed as JavaBeans, which enable you to inject the test data using the setter method of these JavaBeans classes. The Spring MVC framework also provides classes to handle the Hyper Text Transfer Protocol (HTTP) requests (HttpServletRequest), which makes the unit testing of the web application much simpler. Separation of roles: Each component of a Spring MVC Framework performs a different role during request handling. A request is handled by components (such as controller, validator, model object, view resolver, and the HandlerMapping interface). The whole task is dependent on these components and provides a clear separation of roles. No need of the duplication of code: In the Spring MVC Framework, we can use the existing business code in any component of the Spring MVC application. Therefore, no duplicity of code arises in a Spring MVC application. Specific validation and binding: Validation errors are displayed when any mismatched data is entered in a form. 
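To make the separation of roles concrete, a handler in Spring MVC is typically just an annotated class. The following minimal sketch (the class, mapping, and view names are hypothetical) shows a controller that the DispatcherServlet described next would dispatch requests to:

// Hypothetical controller sketch: the front controller routes /welcome here
// and the configured view resolver renders the "welcome" view.
@Controller
public class WelcomeController {

    @RequestMapping("/welcome")
    public String welcome(Model model) {
        model.addAttribute("message", "Welcome to the Spring Framework!");
        return "welcome";
    }
}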
DispatcherServlet in Spring MVC
The DispatcherServlet of the Spring MVC Framework is an implementation of the front controller pattern and is a Java Servlet component for Spring MVC applications. DispatcherServlet is a front controller class that receives all incoming HTTP client requests for the Spring MVC application. DispatcherServlet is also responsible for initializing the framework components that will be used to process the request at various stages. The following code snippet declares the DispatcherServlet in the web.xml deployment descriptor:

<servlet>
  <servlet-name>SpringDispatcher</servlet-name>
  <servlet-class>
    org.springframework.web.servlet.DispatcherServlet
  </servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
  <servlet-name>SpringDispatcher</servlet-name>
  <url-pattern>/</url-pattern>
</servlet-mapping>

In the preceding code snippet, the user-defined name of the DispatcherServlet is SpringDispatcher, which is enclosed within the <servlet-name> element. When our newly created SpringDispatcher servlet is loaded in a web application, it loads an application context from an XML file. DispatcherServlet will try to load the application context from a file named SpringDispatcher-servlet.xml, which will be located in the application's WEB-INF directory:

<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:context="http://www.springframework.org/schema/context"
  xmlns:mvc="http://www.springframework.org/schema/mvc"
  xsi:schemaLocation="
    http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/context
    http://www.springframework.org/schema/context/spring-context-3.0.xsd
    http://www.springframework.org/schema/mvc
    http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">

  <mvc:annotation-driven />

  <context:component-scan base-package="org.packt.Spring.chapter7.springmvc" />

  <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix" value="/WEB-INF/views/" />
    <property name="suffix" value=".jsp" />
  </bean>

</beans>

Spring Security
The Spring Security framework is the de facto standard for securing Spring-based applications. The Spring Security framework provides security services for enterprise Java software applications by handling authentication and authorization. The Spring Security framework handles authentication and authorization at the web request level and at the method invocation level. The two major operations provided by Spring Security are as follows:

Authentication: Authentication is the process of ensuring that a user is the one who he/she claims to be. It's a combination of identification and verification. The identification process can be performed in a number of different ways, for example, using a username and password that can be stored in a database, LDAP, or CAS (a single sign-on protocol), and so on. Spring Security provides a password encoder interface to make sure that the user's password is hashed.
Authorization: Authorization provides access control to an authenticated user. It's the process of ensuring that the authenticated user is allowed to access only those resources that he/she is authorized to use.

Let's take a look at the example of an HR payroll application, where some parts of the application are accessible only to HR and other parts are accessible to all employees. The access rights given to the users of the system determine the access rules. In a web-based application, this is often done with URL-based security, which is implemented using filters that play a primary role in securing the Spring web application.
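As a brief, hedged illustration of URL-based security, the following Spring Security namespace configuration restricts a hypothetical /admin/** path to an assumed ROLE_ADMIN authority and everything else to ROLE_USER; the URL patterns and role names are examples only, not part of the original application, and the snippet assumes the security namespace is declared on the enclosing beans element.

<http auto-config="true">
  <!-- Only administrators may reach the admin area -->
  <intercept-url pattern="/admin/**" access="ROLE_ADMIN" />
  <!-- Every other URL requires an authenticated user -->
  <intercept-url pattern="/**" access="ROLE_USER" />
</http>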
Sometimes, URL-based security is not enough in a web application because URLs can be manipulated and can contain relative paths. So, Spring Security also provides method-level security. An authorized user will only be able to invoke the methods that he or she has been granted access to.

Securing web application's URL access
HttpServletRequest is the starting point of a Java web application. To configure web security, it's required to set up a filter that provides various security features. In order to enable Spring Security, add the filter and its mapping in the web.xml file:

<!-- Spring Security -->
<filter>
  <filter-name>springSecurityFilterChain</filter-name>
  <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>

<filter-mapping>
  <filter-name>springSecurityFilterChain</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Logging in to a web application
There are multiple ways supported by Spring Security for users to log in to a web application:

HTTP basic authentication: This is supported by Spring Security by processing the basic credentials presented in the header of the HTTP request. It's generally used with stateless clients, who pass their credentials on each request.
Form-based login service: Spring Security supports the form-based login service by providing a default login form page for users to log in to the web application.
Logout service: Spring Security supports logout services that allow users to log out of the application.
Anonymous login: This service, provided by Spring Security, grants authority to an anonymous user, just as it would to a normal user.
Remember-me support: This is also supported by Spring Security and remembers the identity of a user across multiple browser sessions.

Encrypting passwords
Spring Security supports several hashing algorithms, such as MD5 (Md5PasswordEncoder), SHA (ShaPasswordEncoder), and BCrypt (BCryptPasswordEncoder), for password encryption. To enable the password encoder, use the <password-encoder/> element and set the hash attribute, as shown in the following code snippet:

<authentication-manager>
  <authentication-provider>
    <password-encoder hash="md5" />
    <jdbc-user-service data-source-ref="dataSource"
    . . .
  </authentication-provider>
</authentication-manager>

Mail support in the Spring Framework
The Spring Framework provides a simplified API and plugin for full e-mail support, which minimizes the effect of the underlying e-mailing system specifications. Spring's e-mail support provides an abstract, easy-to-use, and implementation-independent API to send e-mails. The Spring Framework provides an API to simplify the use of the JavaMail API; its classes handle initialization, cleanup operations, and exceptions. The packages for the JavaMail API support provided by the Spring Framework are as follows:

org.springframework.mail: This defines the basic set of classes and interfaces to send e-mails.
org.springframework.mail.javamail: This defines JavaMail API-specific classes and interfaces to send e-mails.

Spring's Java Messaging Service (JMS)
Java Message Service is a Java Message-oriented middleware (MOM) API responsible for sending messages between two or more clients. JMS is a part of the Java Enterprise Edition. JMS acts as a broker, similar to a postman, sitting as middleware between the message sender and the receiver. A message is nothing but bytes of data or information exchanged between two parties. Depending on the specification used, a message can be described in various ways.
At its core, however, a message is simply a unit of communication. A message can be used to transfer a piece of information from one application to another, and the two applications may or may not run on the same platform.

The JMS application
Consider a sample JMS application (illustrated by a diagram in the original article). We have a Sender and a Receiver. The Sender is responsible for sending a message and the Receiver is responsible for receiving a message. We need a broker, or MOM, between the Sender and the Receiver, which takes the sender's message and passes it across the network to the receiver. Message-oriented middleware (MOM) is basically an MQ application such as ActiveMQ or IBM MQ, which are two different message providers. The sender and receiver are loosely coupled: the sender can be a .NET or mainframe-based application, and the receiver can be a Java or Spring-based application that can send messages back to the sender as well. This is a two-way, loosely coupled communication.

Summary
This article covered the architecture of the Spring Framework and how to set up the key components of the Spring application development environment.

Resources for Article:
Further resources on this subject: Creating an Extension in Yii 2 [article], Serving and processing forms [article], Time Travelling with Spring [article]
Custom Coding with Apex
Packt
27 Apr 2015
18 min read
In this article by Chamil Madusanka, author of the book Learning Force.com Application Development, you will learn about the custom coding in Apex and also about triggers. We have used many declarative methods such as creating the object's structure, relationships, workflow rules, and approval process to develop the Force.com application. The declarative development method doesn't require any coding skill and specific Integrated Development Environment (IDE). This article will show you how to extend the declarative capabilities using custom coding of the Force.com platform. Apex controllers and Apex triggers will be explained with examples of the sample application. The Force.com platform query language and data manipulation language will be described with syntaxes and examples. At the end of the article, there will be a section to describe bulk data handling methods in Apex. This article covers the following topics: Introducing Apex Working with Apex (For more resources related to this topic, see here.) Introducing Apex Apex is the world's first on-demand programming language that allows developers to implement and execute business flows, business logic, and transactions on the Force.com platform. There are two types of Force.com application development methods: declarative developments and programmatic developments. Apex is categorized under the programmatic development method. Since Apex is a strongly-typed, object-based language, it is connected with data in the Force.com platform and data manipulation using the query language and the search language. The Apex language has the following features: Apex provides a lot of built-in support for the Force.com platform features such as: Data Manipulation Language (DML) with the built-in exception handling (DmlException) to manipulate the data during the execution of the business logic. Salesforce Object Query Language (SOQL) and Salesforce Object Search Language (SOSL) to query and retrieve the list of sObjects records. Bulk data processing on multiple records at a time. Apex allows handling errors and warning using an in-built error-handling mechanism. Apex has its own record-locking mechanism to prevent conflicts of record updates. Apex allows building custom public Force.com APIs from stored Apex methods. Apex runs in a multitenant environment. The Force.com platform has multitenant architecture. Therefore, the Apex runtime engine obeys the multitenant environment. It prevents monopolizing of shared resources using the guard with limits. If any particular Apex code violates the limits, error messages will be displayed. Apex is hosted in the Force.com platform. Therefore, the Force.com platform interprets, executes, and controls Apex. Automatically upgradable and versioned: Apex codes are stored as metadata in the platform. Therefore, they are automatically upgraded with the platform. You don't need to rewrite your code when the platform gets updated. Each code is saved with the current upgrade version. You can manually change the version. It is easy to maintain the Apex code with the versioned mechanism. Apex can be used easily. Apex is similar to Java syntax and variables. The syntaxes and semantics of Apex are easy to understand and write codes. Apex is a data-focused programming language. Apex is designed for multithreaded query and DML statements in a single execution context on the Force.com servers. Many developers can use database stored procedures to run multiple transaction statements on the database server. 
Apex is different from other databases when it comes to stored procedures; it doesn't attempt to provide general support for rendering elements in the user interface. The execution context is one of the key concepts in Apex programming. It influences every aspect of software development on the Force.com platform. Apex is a strongly-typed language that directly refers to schema objects and object fields. If there is any error, it fails the compilation. All the objects, fields, classes, and pages are stored in metadata after successful compilation. Easy to perform unit testing. Apex provides a built-in feature for unit testing and test execution with the code coverage. Apex allows developers to write the logic in two ways: As an Apex class: The developer can write classes in the Force.com platform using Apex code. An Apex class includes action methods which related to the logic implementation. An Apex class can be called from a trigger. A class can be associated with a Visualforce page (Visualforce Controllers/Extensions) or can act as a supporting class (WebService, Email-to-Apex service/Helper classes, Batch Apex, and Schedules). Therefore, Apex classes are explicitly called from different places on the Force.com platform. As a database trigger: A trigger is executed related to a particular database interaction of a Force.com object. For example, you can create a trigger on the Leave Type object that fires whenever the Leave Type record is inserted. Therefore, triggers are implicitly called from a database action. Apex is included in the Unlimited Edition, Developer Edition, Enterprise Edition, Database.com, and Performance Edition. The developer can write Apex classes or Apex triggers in a developer organization or a sandbox of a production organization. After you finish the development of the Apex code, you can deploy the particular Apex code to the production organization. Before you deploy the Apex code, you have to write test methods to cover the implemented Apex code. Apex code in the runtime environment You already know that Apex code is stored and executed on the Force.com platform. Apex code also has a compile time and a runtime. When you attempt to save an Apex code, it checks for errors, and if there are no errors, it saves with the compilation. The code is compiled into a set of instructions that are about to execute at runtime. Apex always adheres to built-in governor limits of the Force.com platform. These governor limits protect the multitenant environment from runaway processes. Apex code and unit testing Unit testing is important because it checks the code and executes the particular method or trigger for failures and exceptions during test execution. It provides a structured development environment. We gain two good requirements for this unit testing, namely, best practice for development and best practice for maintaining the Apex code. The Force.com platform forces you to cover the Apex code you implemented. Therefore, the Force.com platform ensures that you follow the best practices on the platform. Apex governors and limits Apex codes are executed on the Force.com multitenant infrastructure and the shared resources are used across all customers, partners, and developers. When we are writing custom code using Apex, it is important that the Apex code uses the shared resources efficiently. Apex governors are responsible for enforcing runtime limits set by Salesforce. It discontinues the misbehaviors of the particular Apex code. 
If the code exceeds a limit, a runtime exception is thrown that cannot be handled. This error will be seen by the end user. Limit warnings can be sent via e-mail, but they also appear in the logs. Governor limits are specific to a namespace, so AppExchange certified managed applications have their own set of limits, independent of the other applications running in the same organization. Therefore, the governor limits have their own scope. The limit scope will start from the beginning of the code execution. It will be run through the subsequent blocks of code until the particular code terminates. Apex code and security The Force.com platform has a component-based security, record-based security and rich security framework, including profiles, record ownership, and sharing. Normally, Apex codes are executed as a system mode (not as a user mode), which means the Apex code has access to all data and components. However, you can make the Apex class run in user mode by defining the Apex class with the sharing keyword. The with sharing/without sharing keywords are employed to designate that the sharing rules for the running user are considered for the particular Apex class. Use the with sharing keyword when declaring a class to enforce the sharing rules that apply to the current user. Use the without sharing keyword when declaring a class to ensure that the sharing rules for the current user are not enforced. For example, you may want to explicitly turn off sharing rule enforcement when a class acquires sharing rules after it is called from another class that is declared using with sharing. The profile also can maintain the permission for developing Apex code and accessing Apex classes. The author's Apex permission is required to develop Apex codes and we can limit the access of Apex classes through the profile by adding or removing the granted Apex classes. Although triggers are built using Apex code, the execution of triggers cannot be controlled by the user. They depend on the particular operation, and if the user has permission for the particular operation, then the trigger will be fired. Apex code and web services Like other programming languages, Apex supports communication with the outside world through web services. Apex methods can be exposed as a web service. Therefore, an external system can invoke the Apex web service to execute the particular logic. When you write a web service method, you must use the webservice keyword at the beginning of the method declaration. The variables can also be exposed with the webservice keyword. After you create the webservice method, you can generate the Web Service Definition Language (WSDL), which can be consumed by an external application. Apex supports both Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) web services. Apex and metadata Because Apex is a proprietary language, it is strongly typed to Salesforce metadata. The same sObject and fields that are created through the declarative setup menu can be referred to through Apex. Like other Force.com features, the system will provide an error if you try to delete an object or field that is used within Apex. Apex is not technically autoupgraded with each new Salesforce release, as it is saved with a specific version of the API. Therefore, Apex, like other Force.com features, will automatically work with future versions of Salesforce applications. Force.com application development tools use the metadata. 
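Before moving on to the language basics, here is a minimal, hedged sketch that ties together two of the keywords just described: a class declared with sharing whose method is exposed with the webservice keyword. The class name, the Employee__c object, and the field used are illustrative assumptions and are not code taken from the book's sample application.

global with sharing class EmployeeService {
    // Exposed as a SOAP web service; a WSDL can be generated for external callers.
    // Because the class is declared "with sharing", the running user's sharing
    // rules are enforced when the query below executes.
    webservice static String getEmployeeName(Id employeeId) {
        Employee__c emp = [SELECT Name FROM Employee__c WHERE Id = :employeeId LIMIT 1];
        return emp.Name;
    }
}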
Working with Apex
Before you start coding with Apex, you need to learn a few basic things.

Apex basics
Apex comes with its own syntactical framework. Similar to Java, Apex is strongly typed and is an object-based language. If you have some experience with Java, it will be easy to understand Apex. The following points summarize the similarities and differences between Apex and Java:

Similarities:
Both languages have classes, inheritance, polymorphism, and other common object-oriented programming features.
Both languages have extremely similar syntax and notations.
Both languages are compiled, strongly typed, and transactional.

Differences:
Apex runs in a multitenant environment and is very controlled in its invocations and governor limits.
Apex is case insensitive.
Apex is on-demand and is compiled and executed in the cloud.
Apex is not a general-purpose programming language, but is instead a proprietary language used for specific business logic functions.
Apex requires unit testing for deployment into a production environment.

This section will not discuss everything that is included in the Apex documentation from Salesforce, but it will cover the topics that are essential for understanding the concepts discussed in this article. With this basic knowledge of Apex, you can create Apex code in the Force.com platform.

Apex data types
In Apex classes and triggers, we use variables that contain data values. Every variable must be bound to a data type and can hold only values of that data type. All variables and expressions have one of the following data types:

Primitives
Enums
sObjects
Collections
An object created from a user-defined or system-defined class
Null (for the null constant)

Primitive data types
Apex uses the same primitive data types as the web services API, most of which are similar to their Java counterparts. It may seem that Apex primitive variables are passed by value, but they actually use immutable references, similar to Java string behavior. The following are the primitive data types of Apex (a short declaration sketch follows this list):

Boolean: A value that can only be assigned true, false, or null.
Date, Datetime, and Time: A Date value indicates a particular day and does not contain any information about time. A Datetime value indicates a particular day and time. A Time value indicates a particular time. Date, Datetime, and Time values must always be created with a system static method.
ID: Any valid record identifier, in its 18-character or 15-character form.
Integer, Long, Double, and Decimal: Integer is a 32-bit number that does not include a decimal point. Integers have a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647. Long is a 64-bit number that does not include a decimal point; use this data type when you need a range of values wider than those provided by Integer. Double is a 64-bit number that includes a decimal point. Both Long and Double have a minimum value of -2^63 and a maximum value of 2^63-1. Decimal is a number that includes a decimal point; Decimal is an arbitrary precision number.
String: A String is any set of characters surrounded by single quotes. Strings have no limit on the number of characters that can be included, but the heap size limit is used to ensure that a particular Apex program does not grow too large.
Blob: A Blob is a collection of binary data stored as a single object. A Blob can be accepted as a web service argument, stored in a document, or sent as an attachment.
Object: This can be used as the base type for any other data type. Objects are supported for casting.
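The following is a minimal, hedged sketch showing how these primitive types can be declared and initialized, for example in an anonymous Apex block; the variable names and values are illustrative only and are not part of the original sample application.

// Hypothetical declarations illustrating the primitive data types
Boolean isActive = true;
Date hireDate = Date.newInstance(2015, 4, 27);    // created with a system static method
Datetime createdAt = Datetime.now();
Time lunchTime = Time.newInstance(12, 30, 0, 0);  // hour, minute, second, millisecond
Integer headCount = 42;
Long bigNumber = 2147483648L;                     // wider range than Integer
Decimal salary = 13204.65;
String firstName = 'Mike';
Blob payload = Blob.valueOf(firstName);           // binary data created from a String
Object anything = salary;                         // Object can hold any other type
System.debug('Hired on: ' + hireDate);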
Enum data types
An Enum (or enumerated list) is an abstract data type that stores one value of a finite set of specified identifiers. To define an Enum, use the enum keyword in the variable declaration and then define the list of values. You can define and use an enum in the following way:

public enum Status {NEW, APPROVED, REJECTED, CANCELLED}

The preceding enum has four values: NEW, APPROVED, REJECTED, and CANCELLED. By creating this enum, you have created a new data type called Status that can be used as any other data type for variables, return types, and method arguments.

Status leaveStatus = Status.NEW;

Apex provides Enums for built-in concepts such as API errors (System.StatusCode). System-defined enums cannot be used in web service methods.

sObject data types
sObjects (short for Salesforce Objects) are standard or custom objects that store record data in the Force.com database. There is also an sObject data type in Apex that is the programmatic representation of these sObjects and their data in code. Developers refer to sObjects and their fields by their API names, which can be found in the schema browser. sObject and field references within Apex are validated against actual object and field names when code is written. Force.com tracks the objects and fields used within Apex to prevent users from making the following changes:

Changing a field or object name
Converting from one data type to another
Deleting a field or object
Organization-wide changes such as record sharing

It is possible to declare variables of the generic sObject data type. The new operator still requires a concrete sObject type, so the instances are all specific sObjects. The following is a code example:

sObject s = new Employee__c();

Casting will be applied as expected, as each row knows its runtime type and can be cast back to that type. The following casting works fine:

Employee__c e = (Employee__c)s;

However, the following casting will generate a runtime exception for the data type collision:

Leave__c leave = (Leave__c)s;

The sObject superclass only has the ID variable, so we can only access the ID via the sObject class. Generic sObjects can also be used with collections and DML operations, although only concrete types can be instantiated. Collections will be described in the upcoming section, and DML operations will be discussed in the Data manipulation section on the Force.com platform. Let's have a look at the following code:

sObject[] sList = new Employee__c[0];
List<Employee__c> eList = (List<Employee__c>)sList;
Database.insert(sList);

Collection data types
Collection data types store groups of elements of other primitive, composite, or collection data types. There are three different types of collections in Apex:

List: A list is an ordered collection of primitives or composite data types distinguished by its index. Each element in a list contains two pieces of information: an index (an integer) and a value (the data). The index of the first element is zero. You can define an Apex list in the following way:

List<DataType> listName = new List<DataType>();
List<String> sList = new List<String>();

There are built-in methods that can be used with lists, including adding/removing elements from the end of the list, getting/setting values at a particular index, and sizing the list by obtaining the number of elements. A full set of list methods is listed at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_list.htm.
The Apex list is defined in the following way:

List<String> sList = new List<String>();
sList.add('string1');
sList.add('string2');
sList.add('string3');
sList.add('string4');
Integer sListSize = sList.size(); // this will return the value as 4
sList.get(3); // this method will return the value 'string4'

Apex allows developers familiar with the standard array syntax to use it interchangeably with the list syntax. The main difference is the use of square brackets, which is shown in the following code:

String[] sList = new String[4];
sList[0] = 'string1';
sList[1] = 'string2';
sList[2] = 'string3';
sList[3] = 'string4';
Integer sListSize = sList.size(); // this will return the value as 4

Lists, as well as maps, can be nested up to five levels deep. Therefore, you can create a list of lists in the following way:

List<List<String>> nestedList = new List<List<String>>();

Set: A set is an unordered collection of data of one primitive data type or sObjects that must have unique values. The Set methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex230/Content/apex_methods_system_set.htm. Similar to the declaration of a List, you can define a Set in the following way:

Set<DataType> setName = new Set<DataType>();
Set<String> setName = new Set<String>();

There are built-in methods for sets, including adding/removing elements to/from the set, checking whether the set contains certain elements, and obtaining the size of the set.

Map: A map is an unordered collection of unique keys of one primitive data type and their corresponding values. The Map methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_map.htm. You can define a Map in the following way:

Map<PrimitiveKeyDataType, DataType> mapName = new Map<PrimitiveKeyDataType, DataType>();
Map<Integer, String> mapName = new Map<Integer, String>();
Map<Integer, List<String>> sMap = new Map<Integer, List<String>>();

Maps are often used to map IDs to sObjects. There are built-in methods that you can use with maps, including adding/removing elements on the map, getting values for a particular key, and checking whether the map contains certain keys. You can use these methods as follows:

Map<Integer, String> sMap = new Map<Integer, String>();
sMap.put(1, 'string1'); // put a key and value pair
sMap.put(2, 'string2');
sMap.put(3, 'string3');
sMap.put(4, 'string4');
sMap.get(2); // retrieve the value of key 2

Apex logic and loops
Like all programming languages, the Apex language has the syntax to implement conditional logic (IF-THEN-ELSE) and loops (for, do-while, while). The following sections explain the conditional logic and loops in Apex:

IF
Conditional IF statements in Apex are similar to Java. The IF-THEN statement is the most basic of all the control flow statements. It tells your program to execute a certain section of code only if a particular test evaluates to true. The IF-THEN-ELSE statement provides a secondary path of execution when an IF clause evaluates to false.
if (Boolean_expression){ statement; statement; statement; statement;} else { statement; statement;} For There are three variations of the FOR loop in Apex, which are as follows: FOR(initialization;Boolean_exit_condition;increment) {     statement; }   FOR(variable : list_or_set) {     statement; }   FOR(variable : [inline_soql_query]) {     statement; } All loops allow for the following commands: break: This is used to exit the loop continue: This is used to skip to the next iteration of the loop While The while loop is similar, but the condition is checked before the first loop, as shown in the following code: while (Boolean_condition) { code_block; }; Do-While The do-while loop repeatedly executes as long as a particular Boolean condition remains true. The condition is not checked until after the first pass is executed, as shown in the following code: do { //code_block; } while (Boolean_condition); Summary In this article, you have learned to develop custom coding in the Force.com platform, including the Apex classes and triggers. And you learned two query languages in the Force.com platform. Resources for Article: Further resources on this subject: Force.com: Data Management [article] Configuration in Salesforce CRM [article] Learning to Fly with Force.com [article]

Getting Ready with CoffeeScript
Packt
02 Apr 2015
20 min read
In this article by Mike Hatfield, author of the book, CoffeeScript Application Development Cookbook, we will see that JavaScript, though very successful, can be a difficult language to work with. JavaScript was designed by Brendan Eich in a mere 10 days in 1995 while working at Netscape. As a result, some might claim that JavaScript is not as well rounded as some other languages, a point well illustrated by Douglas Crockford in his book titled JavaScript: The Good Parts, O'Reilly Media. These pitfalls found in the JavaScript language led Jeremy Ashkenas to create CoffeeScript, a language that attempts to expose the good parts of JavaScript in a simple way. CoffeeScript compiles into JavaScript and helps us avoid the bad parts of JavaScript. (For more resources related to this topic, see here.) There are many reasons to use CoffeeScript as your development language of choice. Some of these reasons include: CoffeeScript helps protect us from the bad parts of JavaScript by creating function closures that isolate our code from the global namespace by reducing the curly braces and semicolon clutter and by helping tame JavaScript's notorious this keyword CoffeeScript helps us be more productive by providing features such as list comprehensions, classes with inheritance, and many others Properly written CoffeeScript also helps us write code that is more readable and can be more easily maintained As Jeremy Ashkenas says: "CoffeeScript is just JavaScript." We can use CoffeeScript when working with the large ecosystem of JavaScript libraries and frameworks on all aspects of our applications, including those listed in the following table: Part Some options User interfaces UI frameworks including jQuery, Backbone.js, AngularJS, and Kendo UI Databases Node.js drivers to access SQLite, Redis, MongoDB, and CouchDB Internal/external services Node.js with Node Package Manager (NPM) packages to create internal services and interfacing with external services Testing Unit and end-to-end testing with Jasmine, Qunit, integration testing with Zombie, and mocking with Persona Hosting Easy API and application hosting with Heroku and Windows Azure Tooling Create scripts to automate routine tasks and using Grunt Configuring your environment and tools One significant aspect to being a productive CoffeeScript developer is having a proper development environment. This environment typically consists of the following: Node.js and the NPM CoffeeScript Code editor Debugger In this recipe, we will look at installing and configuring the base components and tools necessary to develop CoffeeScript applications. Getting ready In this section, we will install the software necessary to develop applications with CoffeeScript. One of the appealing aspects of developing applications using CoffeeScript is that it is well supported on Mac, Windows, and Linux machines. To get started, you need only a PC and an Internet connection. How to do it... CoffeeScript runs on top of Node.js—the event-driven, non-blocking I/O platform built on Chrome's JavaScript runtime. If you do not have Node.js installed, you can download an installation package for your Mac OS X, Linux, and Windows machines from the start page of the Node.js website (http://nodejs.org/). To begin, install Node.js using an official prebuilt installer; it will also install the NPM. Next, we will use NPM to install CoffeeScript. 
Open a terminal or command window and enter the following command: npm install -g coffee-script This will install the necessary files needed to work with CoffeeScript, including the coffee command that provides an interactive Read Evaluate Print Loop (REPL)—a command to execute CoffeeScript files and a compiler to generate JavaScript. It is important to use the -g option when installing CoffeeScript, as this installs the CoffeeScript package as a global NPM module. This will add the necessary commands to our path. On some Windows machines, you might need to add the NPM binary directory to your path. You can do this by editing the environment variables and appending ;%APPDATA%npm to the end of the system's PATH variable. Configuring Sublime Text What you use to edit code can be a very personal choice, as you, like countless others, might use the tools dictated by your team or manager. Fortunately, most popular editing tools either support CoffeeScript out of the box or can be easily extended by installing add-ons, packages, or extensions. In this recipe, we will look at adding CoffeeScript support to Sublime Text and Visual Studio. Getting ready This section assumes that you have Sublime Text or Visual Studio installed. Sublime Text is a very popular text editor that is geared to working with code and projects. You can download a fully functional evaluation version from http://www.sublimetext.com. If you find it useful and decide to continue to use it, you will be encouraged to purchase a license, but there is currently no enforced time limit. How to do it... Sublime Text does not support CoffeeScript out of the box. Thankfully, a package manager exists for Sublime Text; this package manager provides access to hundreds of extension packages, including ones that provide helpful and productive tools to work with CoffeeScript. Sublime Text does not come with this package manager, but it can be easily added by following the instructions on the Package Control website at https://sublime.wbond.net/installation. With Package Control installed, you can easily install the CoffeeScript packages that are available using the Package Control option under the Preferences menu. Select the Install Package option. You can also access this command by pressing Ctrl + Shift + P, and in the command list that appears, start typing install. This will help you find the Install Package command quickly. To install the CoffeeScript package, open the Install Package window and enter CoffeeScript. This will display the CoffeeScript-related packages. We will use the Better CoffeeScript package: As you can see, the CoffeeScript package includes syntax highlighting, commands, shortcuts, snippets, and compilation. How it works... In this section, we will explain the different keyboard shortcuts and code snippets available with the Better CoffeeScript package for Sublime. Commands You can run the desired command by entering the command into the Sublime command pallet or by pressing the related keyboard shortcut. Remember to press Ctrl + Shift + P to display the command pallet window. Some useful CoffeeScript commands include the following: Command Keyboard shortcut Description Coffee: Check Syntax Alt + Shift + S This checks the syntax of the file you are editing or the currently selected code. The result will display in the status bar at the bottom. Coffee: Compile File Alt + Shift + C This compiles the file being edited into JavaScript. 
Coffee: Run Script Alt + Shift + R This executes the selected code and displays a buffer of the output. The keyboard shortcuts are associated with the file type. If you are editing a new CoffeeScript file that has not been saved yet, you can specify the file type by choosing CoffeeScript in the list of file types in the bottom-left corner of the screen. Snippets Snippets allow you to use short tokens that are recognized by Sublime Text. When you enter the code and press the Tab key, Sublime Text will automatically expand the snippet into the full form. Some useful CoffeeScript code snippets include the following: Token Expands to log[Tab] console.log cla class Name constructor: (arguments) ->    # ... forin for i in array # ... if if condition # ... ifel if condition # ... else # ... swi switch object when value    # ... try try # ... catch e # ... The snippets are associated with the file type. If you are editing a new CoffeeScript file that has not been saved yet, you can specify the file type by selecting CoffeeScript in the list of file types in the bottom-left corner of the screen. Configuring Visual Studio In this recipe, we will demonstrate how to add CoffeeScript support to Visual Studio. Getting ready If you are on the Windows platform, you can use Microsoft's Visual Studio software. You can download Microsoft's free Express edition (Express 2013 for Web) from http://www.microsoft.com/express. How to do it... If you are a Visual Studio user, Version 2010 and above can work quite effectively with CoffeeScript through the use of Visual Studio extensions. If you are doing any form of web development with Visual Studio, the Web Essentials extension is a must-have. To install Web Essentials, perform the following steps: Launch Visual Studio. Click on the Tools menu and select the Extensions and Updates menu option. This will display the Extensions and Updates window (shown in the next screenshot). Select Online in the tree on the left-hand side to display the most popular downloads. Select Web Essentials 2012 from the list of available packages and then click on the Download button. This will download the package and install it automatically. Once the installation is finished, restart Visual Studio by clicking on the Restart Now button. You will likely find Web Essentials 2012 ranked highly in the list of Most Popular packages. If you do not see it, you can search for Web Essentials using the Search box in the top-right corner of the window. Once installed, the Web Essentials package provides many web development productivity features, including CSS helpers, tools to work with Less CSS, enhancements to work with JavaScript, and, of course, a set of CoffeeScript helpers. To add a new CoffeeScript file to your project, you can navigate to File | New Item or press Ctrl + Shift + A. This will display the Add New Item dialog, as seen in the following screenshot. Under the Web templates, you will see a new CoffeeScript File option. Select this option and give it a filename, as shown here: When we have our CoffeeScript file open, Web Essentials will display the file in a split-screen editor. We can edit our code in the left-hand pane, while Web Essentials displays a live preview of the JavaScript code that will be generated for us. The Web Essentials CoffeeScript compiler will create two JavaScript files each time we save our CoffeeScript file: a basic JavaScript file and a minified version. 
For example, if we save a CoffeeScript file named employee.coffee, the compiler will create employee.js and employee.min.js files. Though I have only described two editors to work with CoffeeScript files, there are CoffeeScript packages and plugins for most popular text editors, including Emacs, Vim, TextMate, and WebMatrix. A quick dive into CoffeeScript In this recipe, we will take a quick look at the CoffeeScript language and command line. How to do it... CoffeeScript is a highly expressive programming language that does away with much of the ceremony required by JavaScript. It uses whitespace to define blocks of code and provides shortcuts for many of the programming constructs found in JavaScript. For example, we can declare variables and functions without the var keyword: firstName = 'Mike' We can define functions using the following syntax: multiply = (a, b) -> a * b Here, we defined a function named multiply. It takes two arguments, a and b. Inside the function, we multiplied the two values. Note that there is no return statement. CoffeeScript will always return the value of the last expression that is evaluated inside a function. The preceding function is equivalent to the following JavaScript snippet: var multiply = function(a, b) { return a * b; }; It's worth noting that the CoffeeScript code is only 28 characters long, whereas the JavaScript code is 50 characters long; that's 44 percent less code. We can call our multiply function in the following way: result = multiply 4, 7 In CoffeeScript, using parenthesis is optional when calling a function with parameters, as you can see in our function call. However, note that parenthesis are required when executing a function without parameters, as shown in the following example: displayGreeting = -> console.log 'Hello, world!' displayGreeting() In this example, we must call the displayGreeting() function with parenthesis. You might also wish to use parenthesis to make your code more readable. Just because they are optional, it doesn't mean you should sacrifice the readability of your code to save a couple of keystrokes. For example, in the following code, we used parenthesis even though they are not required: $('div.menu-item').removeClass 'selected' Like functions, we can define JavaScript literal objects without the need for curly braces, as seen in the following employee object: employee = firstName: 'Mike' lastName: 'Hatfield' salesYtd: 13204.65 Notice that in our object definition, we also did not need to use a comma to separate our properties. CoffeeScript supports the common if conditional as well as an unless conditional inspired by the Ruby language. Like Ruby, CoffeeScript also provides English keywords for logical operations such as is, isnt, or, and and. The following example demonstrates the use of these keywords: isEven = (value) -> if value % 2 is 0    'is' else    'is not'   console.log '3 ' + isEven(3) + ' even' In the preceding code, we have an if statement to determine whether a value is even or not. If the value is even, the remainder of value % 2 will be 0. We used the is keyword to make this determination. JavaScript has a nasty behavior when determining equality between two values. In other languages, the double equal sign is used, such as value == 0. In JavaScript, the double equal operator will use type coercion when making this determination. This means that 0 == '0'; in fact, 0 == '' is also true. CoffeeScript avoids this using JavaScript's triple equals (===) operator. 
This evaluation compares value and type such that 0 === '0' will be false. We can use if and unless as expression modifiers as well. They allow us to tack if and unless at the end of a statement to make simple one-liners. For example, we can so something like the following: console.log 'Value is even' if value % 2 is 0 Alternatively, we can have something like this: console.log 'Value is odd' unless value % 2 is 0 We can also use the if...then combination for a one-liner if statement, as shown in the following code: if value % 2 is 0 then console.log 'Value is even' CoffeeScript has a switch control statement that performs certain actions based on a list of possible values. The following lines of code show a simple switch statement with four branching conditions: switch task when 1    console.log 'Case 1' when 2    console.log 'Case 2' when 3, 4, 5    console.log 'Case 3, 4, 5' else    console.log 'Default case' In this sample, if the value of a task is 1, case 1 will be displayed. If the value of a task is 3, 4, or 5, then case 3, 4, or 5 is displayed, respectively. If there are no matching values, we can use an optional else condition to handle any exceptions. If your switch statements have short operations, you can turn them into one-liners, as shown in the following code: switch value when 1 then console.log 'Case 1' when 2 then console.log 'Case 2' when 3, 4, 5 then console.log 'Case 3, 4, 5' else console.log 'Default case' CoffeeScript provides a number of syntactic shortcuts to help us be more productive while writing more expressive code. Some people have claimed that this can sometimes make our applications more difficult to read, which will, in turn, make our code less maintainable. The key to highly readable and maintainable code is to use a consistent style when coding. I recommend that you follow the guidance provided by Polar in their CoffeeScript style guide at http://github.com/polarmobile/coffeescript-style-guide. There's more... With CoffeeScript installed, you can use the coffee command-line utility to execute CoffeeScript files, compile CoffeeScript files into JavaScript, or run an interactive CoffeeScript command shell. In this section, we will look at the various options available when using the CoffeeScript command-line utility. We can see a list of available commands by executing the following command in a command or terminal window: coffee --help This will produce the following output: As you can see, the coffee command-line utility provides a number of options. Of these, the most common ones include the following: Option Argument Example Description None None coffee This launches the REPL-interactive shell. None Filename coffee sample.coffee This command will execute the CoffeeScript file. -c, --compile Filename coffee -c sample.coffee This command will compile the CoffeeScript file into a JavaScript file with the same base name,; sample.js, as in our example. -i, --interactive   coffee -i This command will also launch the REPL-interactive shell. -m, --map Filename coffee--m sample.coffee This command generates a source map with the same base name, sample.js.map, as in our example. -p, --print Filename coffee -p sample.coffee This command will display the compiled output or compile errors to the terminal window. -v, --version None coffee -v This command will display the correct version of CoffeeScript. -w, --watch Filename coffee -w -c sample.coffee This command will watch for file changes, and with each change, the requested action will be performed. 
In our example, our sample.coffee file will be compiled each time we save it.

The CoffeeScript REPL
As we have seen, CoffeeScript has an interactive shell that allows us to execute CoffeeScript commands. In this section, we will learn how to use the REPL shell. The REPL shell can be an excellent way to get familiar with CoffeeScript. To launch the CoffeeScript REPL, open a command window and execute the coffee command. This will start the interactive shell and display the coffee> prompt. In the coffee> prompt, we can assign values to variables, create functions, and evaluate results. When we enter an expression and press the return key, it is immediately evaluated and the value is displayed. For example, if we enter the expression x = 4 and press return, we would see what is shown in the following screenshot: This did two things. First, it created a new variable named x and assigned the value of 4 to it. Second, it displayed the result of the command. Next, enter timesSeven = (value) -> value * 7 and press return: You can see that the result of this line was the creation of a new function named timesSeven(). We can call our new function now: By default, the REPL shell will evaluate each expression when you press the return key. What if we want to create a function or expression that spans multiple lines? We can enter the REPL multiline mode by pressing Ctrl + V. This will change our coffee> prompt to a ------> prompt. This allows us to enter an expression that spans multiple lines, such as the following function: When we are finished with our multiline expression, press Ctrl + V again to have the expression evaluated. We can then call our new function: The CoffeeScript REPL offers some handy helpers such as expression history and tab completion. Pressing the up arrow key on your keyboard will cycle through the expressions we previously entered. Using the Tab key will autocomplete our function or variable name. For example, with the isEvenOrOdd() function, we can enter isEven and press Tab to have the REPL complete the function name for us.

Debugging CoffeeScript using source maps
If you have spent any time in the JavaScript community, you would have, no doubt, seen some discussions or rants regarding the weak debugging story for CoffeeScript. In fact, this is often a top argument some give for not using CoffeeScript at all. In this recipe, we will examine how to debug our CoffeeScript application using source maps.

Getting ready
The problem in debugging CoffeeScript stems from the fact that CoffeeScript compiles into JavaScript, which is what the browser executes. If an error arises, the line that has caused the error sometimes cannot be traced back to the CoffeeScript source file very easily. Also, the error message is sometimes confusing, making troubleshooting that much more difficult. Recent developments in the web development community have helped improve the debugging experience for CoffeeScript by making use of a concept known as a source map. In this section, we will demonstrate how to generate and use source maps to help make our CoffeeScript debugging easier. To use source maps, you need only a base installation of CoffeeScript.

How to do it...
You can generate a source map for your CoffeeScript code using the -m option on the CoffeeScript command:

coffee -m -c employee.coffee

How it works...
Source maps provide information used by browsers such as Google Chrome that tells the browser how to map a line from the compiled JavaScript code back to its origin in the CoffeeScript file. Source maps allow you to place breakpoints in your CoffeeScript file, analyze variables, and execute functions in your CoffeeScript module. The preceding command creates a JavaScript file called employee.js and a source map called employee.js.map. If you look at the last line of the generated employee.js file, you will see the reference to the source map:

//# sourceMappingURL=employee.js.map

Google Chrome uses this JavaScript comment to load the source map. The following screenshot demonstrates an active breakpoint and console in Google Chrome:

Debugging CoffeeScript using Node Inspector
Source maps and Chrome's developer tools can help troubleshoot our CoffeeScript that is destined for the Web. In this recipe, we will demonstrate how to debug CoffeeScript that is designed to run on the server.

Getting ready
Begin by installing the Node Inspector NPM module with the following command:

npm install -g node-inspector

How to do it...
To use Node Inspector, we will use the coffee command to compile the CoffeeScript code we wish to debug and generate the source map. In our example, we will use the following simple source code in a file named counting.coffee:

for i in [1..10]
  if i % 2 is 0
    console.log "#{i} is even!"
  else
    console.log "#{i} is odd!"

To use Node Inspector, we will compile our file and use the source map parameter with the following command:

coffee -c -m counting.coffee

Next, we will launch Node Inspector with the following command:

node-debug counting.js

How it works...
When we run Node Inspector, it does two things. First, it launches the Node debugger. This is a debugging service that allows us to step through code, hit breakpoints, and evaluate variables. This is a built-in service that comes with Node. Second, it launches an HTTP handler and opens a browser that allows us to use Chrome's built-in debugging tools to set breakpoints, step over and into code, and evaluate variables. Node Inspector works well with source maps. This allows us to see our native CoffeeScript code and is an effective tool to debug server-side code. The following screenshot displays our Chrome window with an active breakpoint. In the local variables tool window on the right-hand side, you can see that the current value of i is 2. The highlighted line in the preceding screenshot depicts the log message.

Summary
This article introduced CoffeeScript and laid the foundation to use CoffeeScript to develop all aspects of modern cloud-based applications.

Resources for Article:
Further resources on this subject: Writing Your First Lines of CoffeeScript [article], Why CoffeeScript? [article], ASP.Net Site Performance: Improving JavaScript Loading [article]