Introducing JAX-RS API

Packt
21 Sep 2015
25 min read
 In this article by Jobinesh Purushothaman, author of the book, RESTful Java Web Services, Second Edition, we will see that there are many tools and frameworks available in the market today for building RESTful web services. There are some recent developments with respect to the standardization of various framework APIs by providing unified interfaces for a variety of implementations. Let's take a quick look at this effort. (For more resources related to this topic, see here.) As you may know, Java EE is the industry standard for developing portable, robust, scalable, and secure server-side Java applications. The Java EE 6 release took the first step towards standardizing RESTful web service APIs by introducing a Java API for RESTful web services (JAX-RS). JAX-RS is an integral part of the Java EE platform, which ensures portability of your REST API code across all Java EE-compliant application servers. The first release of JAX-RS was based on JSR 311. The latest version is JAX-RS 2 (based on JSR 339), which was released as part of the Java EE 7 platform. There are multiple JAX-RS implementations available today by various vendors. Some of the popular JAX-RS implementations are as follows: Jersey RESTful web service framework: This framework is an open source framework for developing RESTful web services in Java. It serves as a JAX-RS reference implementation. You can learn more about this project at https://jersey.java.net. Apache CXF: This framework is an open source web services framework. CXF supports both JAX-WS and JAX-RS web services. To learn more about CXF, refer to http://cxf.apache.org. RESTEasy: This framework is an open source project from JBoss, which provides various modules to help you build a RESTful web service. To learn more about RESTEasy, refer to http://resteasy.jboss.org. Restlet: This framework is a lightweight, open source RESTful web service framework. It has good support for building both scalable RESTful web service APIs and lightweight REST clients, which suits mobile platforms well. You can learn more about Restlet at http://restlet.com. Remember that you are not locked down to any specific vendor here, the RESTful web service APIs that you build using JAX-RS will run on any JAX-RS implementation as long as you do not use any vendor-specific APIs in the code. JAX-RS annotations                                      The main goal of the JAX-RS specification is to make the RESTful web service development easier than it has been in the past. As JAX-RS is a part of the Java EE platform, your code becomes portable across all Java EE-compliant servers. Specifying the dependency of the JAX-RS API To use JAX-RS APIs in your project, you need to add the javax.ws.rs-api JAR file to the class path. If the consuming project uses Maven for building the source, the dependency entry for the javax.ws.rs-api JAR file in the Project Object Model (POM) file may look like the following: <dependency> <groupId>javax.ws.rs</groupId> <artifactId>javax.ws.rs-api</artifactId> <version>2.0.1</version><!-- set the tight version --> <scope>provided</scope><!-- compile time dependency --> </dependency> Using JAX-RS annotations to build RESTful web services Java annotations provide the metadata for your Java class, which can be used during compilation, during deployment, or at runtime in order to perform designated tasks. The use of annotations allows us to create RESTful web services as easily as we develop a POJO class. 
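Before walking through the individual annotations, here is a minimal sketch of that development model. The class names and the "webresources" application path are illustrative assumptions (they are not taken from the original text), and each class would normally live in its own source file:

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Application;

// Activates JAX-RS and supplies the <application-path> part of the base URI
@ApplicationPath("webresources")
public class HelloApplication extends Application {
    // An empty subclass is enough; annotated resource classes are discovered automatically
}

// A plain POJO turned into a REST resource purely through annotations
@Path("hello")
public class HelloService {
    @GET
    @Produces("text/plain")
    public String sayHello() {
        return "Hello, JAX-RS!";
    }
}

Deployed on a Java EE 7-compliant server, this sketch would respond to GET requests sent to http://host:port/<context-root>/webresources/hello.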
Here, we leave the interception of the HTTP requests and representation negotiations to the framework and concentrate on the business rules necessary to solve the problem at hand. If you are not familiar with Java annotations, go through the tutorial available at http://docs.oracle.com/javase/tutorial/java/annotations/. Annotations for defining a RESTful resource REST resources are the fundamental elements of any RESTful web service. A REST resource can be defined as an object that is of a specific type with the associated data and is optionally associated to other resources. It also exposes a set of standard operations corresponding to the HTTP method types such as the HEAD, GET, POST, PUT, and DELETE methods. @Path The @javax.ws.rs.Path annotation indicates the URI path to which a resource class or a class method will respond. The value that you specify for the @Path annotation is relative to the URI of the server where the REST resource is hosted. This annotation can be applied at both the class and the method levels. A @Path annotation value is not required to have leading or trailing slashes (/), as you may see in some examples. The JAX-RS runtime will parse the URI path templates in the same way even if they have leading or trailing slashes. Specifying the @Path annotation on a resource class The following code snippet illustrates how you can make a POJO class respond to a URI path template containing the /departments path fragment: import javax.ws.rs.Path; @Path("departments") public class DepartmentService { //Rest of the code goes here } The /department path fragment that you see in this example is relative to the base path in the URI. The base path typically takes the following URI pattern: http://host:port/<context-root>/<application-path>. Specifying the @Path annotation on a resource class method The following code snippet shows how you can specify @Path on a method in a REST resource class. Note that for an annotated method, the base URI is the effective URI of the containing class. For instance, you will use the URI of the following form to invoke the getTotalDepartments() method defined in the DepartmentService class: /departments/count, where departments is the @Path annotation set on the class. import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; @Path("departments") public class DepartmentService { @GET @Path("count") @Produces("text/plain") public Integer getTotalDepartments() { return findTotalRecordCount(); } //Rest of the code goes here } Specifying variables in the URI path template It is very common that a client wants to retrieve data for a specific object by passing the desired parameter to the server. JAX-RS allows you to do this via the URI path variables as discussed here. The URI path template allows you to define variables that appear as placeholders in the URI. These variables would be replaced at runtime with the values set by the client. The following example illustrates the use of the path variable to request for a specific department resource. The URI path template looks like /departments/{id}. At runtime, the client can pass an appropriate value for the id parameter to get the desired resource from the server. For instance, the URI path of the /departments/10 format returns the IT department details to the caller. The following code snippet illustrates how you can pass the department ID as a path variable for deleting a specific department record. The path URI looks like /departments/10. 
import javax.ws.rs.Path; import javax.ws.rs.DELETE; @Path("departments") public class DepartmentService { @DELETE @Path("{id}") public void removeDepartment(@PathParam("id") short id) { removeDepartmentEntity(id); } //Other methods removed for brevity } In the preceding code snippet, the @PathParam annotation is used for copying the value of the path variable to the method parameter. Restricting values for path variables with regular expressions JAX-RS lets you use regular expressions in the URI path template for restricting the values set for the path variables at runtime by the client. By default, the JAX-RS runtime ensures that all the URI variables match the following regular expression: [^/]+?. The default regular expression allows the path variable to take any character except the forward slash (/). What if you want to override this default regular expression imposed on the path variable values? Good news is that JAX-RS lets you specify your own regular expression for the path variables. For example, you can set the regular expression as given in the following code snippet in order to ensure that the department name variable present in the URI path consists only of lowercase and uppercase alphanumeric characters: @DELETE @Path("{name: [a-zA-Z][a-zA-Z_0-9]}") public void removeDepartmentByName(@PathParam("name") String deptName) { //Method implementation goes here } If the path variable does not match the regular expression set of the resource class or method, the system reports the status back to the caller with an appropriate HTTP status code, such as 404 Not Found, which tells the caller that the requested resource could not be found at this moment. Annotations for specifying request-response media types The Content-Type header field in HTTP describes the body's content type present in the request and response messages. The content types are represented using the standard Internet media types. A RESTful web service makes use of this header field to indicate the type of content in the request or response message body. JAX-RS allows you to specify which Internet media types of representations a resource can produce or consume by using the @javax.ws.rs.Produces and @javax.ws.rs.Consumes annotations, respectively. @Produces The @javax.ws.rs.Produces annotation is used for defining the Internet media type(s) that a REST resource class method can return to the client. You can define this either at the class level (which will get defaulted for all methods) or the method level. The method-level annotations override the class-level annotations. The possible Internet media types that a REST API can produce are as follows: application/atom+xml application/json application/octet-stream application/svg+xml application/xhtml+xml application/xml text/html text/plain text/xml The following example uses the @Produces annotation at the class level in order to set the default response media type as JSON for all resource methods in this class. At runtime, the binding provider will convert the Java representation of the return value to the JSON format. import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.core.MediaType; @Path("departments") @Produces(MediaType.APPLICATION_JSON) public class DepartmentService{ //Class implementation goes here... } @Consumes The @javax.ws.rs.Consumes annotation defines the Internet media type(s) that the resource class methods can accept. 
You can define the @Consumes annotation either at the class level (which will get defaulted for all methods) or the method level. The method-level annotations override the class-level annotations. The possible Internet media types that a REST API can consume are as follows: application/atom+xml application/json application/octet-stream application/svg+xml application/xhtml+xml application/xml text/html text/plain text/xml multipart/form-data application/x-www-form-urlencoded The following example illustrates how you can use the @Consumes attribute to designate a method in a class to consume a payload presented in the JSON media type. The binding provider will copy the JSON representation of an input message to the Department parameter of the createDepartment() method. import javax.ws.rs.Consumes; import javax.ws.rs.core.MediaType; import javax.ws.rs.POST; @POST @Consumes(MediaType.APPLICATION_JSON) public void createDepartment(Department entity) { //Method implementation goes here… } The javax.ws.rs.core.MediaType class defines constants for all media types supported in JAX-RS. To learn more about the MediaType class, visit the API documentation available at http://docs.oracle.com/javaee/7/api/javax/ws/rs/core/MediaType.html. Annotations for processing HTTP request methods In general, RESTful web services communicate over HTTP with the standard HTTP verbs (also known as method types) such as GET, PUT, POST, DELETE, HEAD, and OPTIONS. @GET A RESTful system uses the HTTP GET method type for retrieving the resources referenced in the URI path. The @javax.ws.rs.GET annotation designates a method of a resource class to respond to the HTTP GET requests. The following code snippet illustrates the use of the @GET annotation to make a method respond to the HTTP GET request type. In this example, the REST URI for accessing the findAllDepartments() method may look like /departments. The complete URI path may take the following URI pattern: http://host:port/<context-root>/<application-path>/departments. //imports removed for brevity @Path("departments") public class DepartmentService { @GET @Produces(MediaType.APPLICATION_JSON) public List<Department> findAllDepartments() { //Find all departments from the data store List<Department> departments = findAllDepartmentsFromDB(); return departments; } //Other methods removed for brevity } @PUT The HTTP PUT method is used for updating or creating the resource pointed by the URI. The @javax.ws.rs.PUT annotation designates a method of a resource class to respond to the HTTP PUT requests. The PUT request generally has a message body carrying the payload. The value of the payload could be any valid Internet media type such as the JSON object, XML structure, plain text, HTML content, or binary stream. When a request reaches a server, the framework intercepts the request and directs it to the appropriate method that matches the URI path and the HTTP method type. The request payload will be mapped to the method parameter as appropriate by the framework. The following code snippet shows how you can use the @PUT annotation to designate the editDepartment() method to respond to the HTTP PUT request. 
The payload present in the message body will be converted and copied to the department parameter by the framework: @PUT @Path("{id}") @Consumes(MediaType.APPLICATION_JSON) public void editDepartment(@PathParam("id") Short id, Department department) { //Updates department entity to data store updateDepartmentEntity(id, department); } @POST The HTTP POST method posts data to the server. Typically, this method type is used for creating a resource. The @javax.ws.rs.POST annotation designates a method of a resource class to respond to the HTTP POST requests. The following code snippet shows how you can use the @POST annotation to designate the createDepartment() method to respond to the HTTP POST request. The payload present in the message body will be converted and copied to the department parameter by the framework: @POST public void createDepartment(Department department) { //Create department entity in data store createDepartmentEntity(department); } @DELETE The HTTP DELETE method deletes the resource pointed by the URI. The @javax.ws.rs.DELETE annotation designates a method of a resource class to respond to the HTTP DELETE requests. The following code snippet shows how you can use the @DELETE annotation to designate the removeDepartment() method to respond to the HTTP DELETE request. The department ID is passed as the path variable in this example. @DELETE @Path("{id}") public void removeDepartment(@PathParam("id") Short id) { //remove department entity from data store removeDepartmentEntity(id); } @HEAD The @javax.ws.rs.HEAD annotation designates a method to respond to the HTTP HEAD requests. This method is useful for retrieving the metadata present in the response headers, without having to retrieve the message body from the server. You can use this method to check whether a URI pointing to a resource is active or to check the content size by using the Content-Length response header field, and so on. The JAX-RS runtime will offer the default implementations for the HEAD method type if the REST resource is missing explicit implementation. The default implementation provided by runtime for the HEAD method will call the method designated for the GET request type, ignoring the response entity retuned by the method. @OPTIONS The @javax.ws.rs.OPTIONS annotation designates a method to respond to the HTTP OPTIONS requests. This method is useful for obtaining a list of HTTP methods allowed on a resource. The JAX-RS runtime will offer a default implementation for the OPTIONS method type, if the REST resource is missing an explicit implementation. The default implementation offered by the runtime sets the Allow response header to all the HTTP method types supported by the resource. Annotations for accessing request parameters You can use this offering to extract the following parameters from a request: a query, URI path, form, cookie, header, and matrix. Mostly, these parameters are used in conjunction with the GET, POST, PUT, and DELETE methods. @PathParam A URI path template, in general, has a URI part pointing to the resource. It can also take the path variables embedded in the syntax; this facility is used by clients to pass parameters to the REST APIs as appropriate. The @javax.ws.rs.PathParam annotation injects (or binds) the value of the matching path parameter present in the URI path template into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. 
Typically, this annotation is used in conjunction with the HTTP method type annotations such as @GET, @POST, @PUT, and @DELETE. The following example illustrates the use of the @PathParam annotation to read the value of the path parameter, id, into the deptId method parameter. The URI path template for this example looks like /departments/{id}: //Other imports removed for brevity javax.ws.rs.PathParam @Path("departments") public class DepartmentService { @DELETE @Path("{id}") public void removeDepartment(@PathParam("id") Short deptId) { removeDepartmentEntity(deptId); } //Other methods removed for brevity } The REST API call to remove the department resource identified by id=10 looks like DELETE /departments/10 HTTP/1.1. We can also use multiple variables in a URI path template. For example, we can have the URI path template embedding the path variables to query a list of departments from a specific city and country, which may look like /departments/{country}/{city}. The following code snippet illustrates the use of @PathParam to extract variable values from the preceding URI path template: @Produces(MediaType.APPLICATION_JSON) @Path("{country}/{city} ") public List<Department> findAllDepartments( @PathParam("country") String countyCode, @PathParam("city") String cityCode) { //Find all departments from the data store for a country //and city List<Department> departments = findAllMatchingDepartmentEntities(countyCode, cityCode ); return departments; } @QueryParam The @javax.ws.rs.QueryParam annotation injects the value(s) of a HTTP query parameter into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following example illustrates the use of @QueryParam to extract the value of the desired query parameter present in the URI. This example extracts the value of the query parameter, name, from the request URI and copies the value into the deptName method parameter. The URI that accesses the IT department resource looks like /departments?name=IT: @GET @Produces(MediaType.APPLICATION_JSON) public List<Department> findAllDepartmentsByName(@QueryParam("name") String deptName) { List<Department> depts= findAllMatchingDepartmentEntities (deptName); return depts; } @MatrixParam Matrix parameters are another way of defining parameters in the URI path template. The matrix parameters take the form of name-value pairs in the URI path, where each pair is preceded by semicolon (;). For instance, the URI path that uses a matrix parameter to list all departments in Bangalore city looks like /departments;city=Bangalore. The @javax.ws.rs.MatrixParam annotation injects the matrix parameter value into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following code snippet demonstrates the use of the @MatrixParam annotation to extract the matrix parameters present in the request. The URI path used in this example looks like /departments;name=IT;city=Bangalore. @GET @Produces(MediaType.APPLICATION_JSON) @Path("matrix") public List<Department> findAllDepartmentsByNameWithMatrix(@MatrixParam("name") String deptName, @MatrixParam("city") String locationCode) { List<Department> depts=findAllDepartmentsFromDB(deptName, city); return depts; } You can use PathParam, QueryParam, and MatrixParam to pass the desired search parameters to the REST APIs. Now, you may ask when to use what? 
Although there are no strict rules here, a very common practice followed by many is to use PathParam to drill down to the entity class hierarchy. For example, you may use the URI of the following form to identify an employee working in a specific department: /departments/{dept}/employees/{id}. QueryParam can be used for specifying attributes to locate the instance of a class. For example, you may use URI with QueryParam to identify employees who have joined on January 1, 2015, which may look like /employees?doj=2015-01-01. The MatrixParam annotation is not used frequently. This is useful when you need to make a complex REST style query to multiple levels of resources and subresources. MatrixParam is applicable to a particular path element, while the query parameter is applicable to the entire request. @HeaderParam The HTTP header fields provide necessary information about the request and response contents in HTTP. For example, the header field, Content-Length: 348, for an HTTP request says that the size of the request body content is 348 octets (8-bit bytes). The @javax.ws.rs.HeaderParam annotation injects the header values present in the request into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following example extracts the referrer header parameter and logs it for audit purposes. The referrer header field in HTTP contains the address of the previous web page from which a request to the currently processed page originated: @POST public void createDepartment(@HeaderParam("Referer") String referer, Department entity) { logSource(referer); createDepartmentInDB(department); } Remember that HTTP provides a very wide selection of headers that cover most of the header parameters that you are looking for. Although you can use custom HTTP headers to pass some application-specific data to the server, try using standard headers whenever possible. Further, avoid using a custom header for holding properties specific to a resource, or the state of the resource, or parameters directly affecting the resource. @CookieParam The @javax.ws.rs.CookieParam annotation injects the matching cookie parameters present in the HTTP headers into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following code snippet uses the Default-Dept cookie parameter present in the request to return the default department details: @GET @Path("cook") @Produces(MediaType.APPLICATION_JSON) public Department getDefaultDepartment(@CookieParam("Default-Dept") short departmentId) { Department dept=findDepartmentById(departmentId); return dept; } @FormParam The @javax.ws.rs.FormParam annotation injects the matching HTML form parameters present in the request body into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The request body carrying the form elements must have the content type specified as application/x-www-form-urlencoded. Consider the following HTML form that contains the data capture form for a department entity. 
This form allows the user to enter the department entity details: <!DOCTYPE html> <html> <head> <title>Create Department</title> </head> <body> <form method="POST" action="/resources/departments"> Department Id: <input type="text" name="departmentId"> <br> Department Name: <input type="text" name="departmentName"> <br> <input type="submit" value="Add Department" /> </form> </body> </html> Upon clicking on the submit button on the HTML form, the department details that you entered will be posted to the REST URI, /resources/departments. The following code snippet shows the use of the @FormParam annotation for extracting the HTML form fields and copying them to the resource class method parameter: @Path("departments") public class DepartmentService { @POST //Specifies content type as //"application/x-www-form-urlencoded" @Consumes(MediaType.APPLICATION_FORM_URLENCODED) public void createDepartment(@FormParam("departmentId") short departmentId, @FormParam("departmentName") String departmentName) { createDepartmentEntity(departmentId, departmentName); } } @DefaultValue The @javax.ws.rs.DefaultValue annotation specifies a default value for the request parameters accessed using one of the following annotations: PathParam, QueryParam, MatrixParam, CookieParam, FormParam, or HeaderParam. The default value is used if no matching parameter value is found for the variables annotated using one of the preceding annotations. The following REST resource method will make use of the default value set for the from and to method parameters if the corresponding query parameters are found missing in the URI path: @GET @Produces(MediaType.APPLICATION_JSON) public List<Department> findAllDepartmentsInRange (@DefaultValue("0") @QueryParam("from") Integer from, @DefaultValue("100") @QueryParam("to") Integer to) { findAllDepartmentEntitiesInRange(from, to); } @Context The JAX-RS runtime offers different context objects, which can be used for accessing information associated with the resource class, operating environment, and so on. You may find various context objects that hold information associated with the URI path, request, HTTP header, security, and so on. Some of these context objects also provide the utility methods for dealing with the request and response content. JAX-RS allows you to reference the desired context objects in the code via dependency injection. JAX-RS provides the @javax.ws.rs.Context annotation that injects the matching context object into the target field. You can specify the @Context annotation on a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following example illustrates the use of the @Context annotation to inject the javax.ws.rs.core.UriInfo context object into a method variable. The UriInfo instance provides access to the application and request URI information. 
This example uses UriInfo to read the query parameter present in the request URI path template, /departments/IT: @GET @Produces(MediaType.APPLICATION_JSON) public List<Department> findAllDepartmentsByName( @Context UriInfo uriInfo){ String deptName = uriInfo.getPathParameters().getFirst("name"); List<Department> depts= findAllMatchingDepartmentEntities (deptName); return depts; } Here is a list of the commonly used classes and interfaces, which can be injected using the @Context annotation: javax.ws.rs.core.Application: This class defines the components of a JAX-RS application and supplies additional metadata javax.ws.rs.core.UriInfo: This interface provides access to the application and request URI information javax.ws.rs.core.Request: This interface provides a method for request processing such as reading the method type and precondition evaluation. javax.ws.rs.core.HttpHeaders: This interface provides access to the HTTP header information javax.ws.rs.core.SecurityContext: This interface provides access to security-related information javax.ws.rs.ext.Providers: This interface offers the runtime lookup of a provider instance such as MessageBodyReader, MessageBodyWriter, ExceptionMapper, and ContextResolver javax.ws.rs.ext.ContextResolver<T>: This interface supplies the requested context to the resource classes and other providers javax.servlet.http.HttpServletRequest: This interface provides the client request information for a servlet javax.servlet.http.HttpServletResponse: This interface is used for sending a response to a client javax.servlet.ServletContext: This interface provides methods for a servlet to communicate with its servlet container javax.servlet.ServletConfig: This interface carries the servlet configuration parameters @BeanParam The @javax.ws.rs.BeanParam annotation allows you to inject all matching request parameters into a single bean object. The @BeanParam annotation can be set on a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The bean class can have fields or properties annotated with one of the request parameter annotations, namely @PathParam, @QueryParam, @MatrixParam, @HeaderParam, @CookieParam, or @FormParam. Apart from the request parameter annotations, the bean can have the @Context annotation if there is a need. Consider the example that we discussed for @FormParam. The createDepartment() method that we used in that example has two parameters annotated with @FormParam: public void createDepartment( @FormParam("departmentId") short departmentId, @FormParam("departmentName") String departmentName) Let's see how we can use @BeanParam for the preceding method to give a more logical, meaningful signature by grouping all the related fields into an aggregator class, thereby avoiding too many parameters in the method signature. 
The DepartmentBean class that we use for this example is as follows: public class DepartmentBean { @FormParam("departmentId") private short departmentId; @FormParam("departmentName") private String departmentName; //getter and setter for the above fields //are not shown here to save space } The following code snippet demonstrates the use of the @BeanParam annotation to inject the DepartmentBean instance that contains all the FormParam values extracted from the request message body: @POST public void createDepartment(@BeanParam DepartmentBean deptBean) { createDepartmentEntity(deptBean.getDepartmentId(), deptBean.getDepartmentName()); } @Encoded By default, the JAX-RS runtime decodes all request parameters before injecting the extracted values into the target variables annotated with one of the following annotations: @FormParam, @PathParam, @MatrixParam, or @QueryParam. You can use @javax.ws.rs.Encoded to disable the automatic decoding of the parameter values. With the @Encoded annotation, the value of parameters will be provided in the encoded form itself. This annotation can be used on a class, method, or parameters. If you set this annotation on a method, it will disable decoding for all parameters defined for this method. You can use this annotation on a class to disable decoding for all parameters of all methods. In the following example, the value of the path parameter called name is injected into the method parameter in the URL encoded form (without decoding). The method implementation should take care of the decoding of the values in such cases: @GET @Produces(MediaType.APPLICATION_JSON) public List<Department> findAllDepartmentsByName(@QueryParam("name") String deptName) { //Method body is removed for brevity } URL encoding converts a string into a valid URL format, which may contain alphabetic characters, numerals, and some special characters supported in the URL string. To learn about the URL specification, visit http://www.w3.org/Addressing/URL/url-spec.html. Summary With the use of annotations, the JAX-RS API provides a simple development model for RESTful web service programming. In case you are interested in knowing other Java RESTful Web Services books that Packt has in store for you, here is the link: RESTful Java Web Services, Jose Sandoval RESTful Java Web Services Security, René Enríquez, Andrés Salazar C Resources for Article: Further resources on this subject: The Importance of Securing Web Services[article] Understanding WebSockets and Server-sent Events in Detail[article] Adding health checks [article]
CRUD Operations in REST

Packt
16 Sep 2015
11 min read
In this article by Ludovic Dewailly, the author of Building a RESTful Web Service with Spring, we will learn how requests to retrieve data from a RESTful endpoint, created to access the rooms in a sample property management system, are typically mapped to the HTTP GET method in RESTful web services. We will expand on this by implementing some of the endpoints to support all the CRUD (Create, Read, Update, Delete) operations. In this article, we will cover the following topics:

- Mapping the CRUD operations to the HTTP methods
- Creating resources
- Updating resources
- Deleting resources
- Testing the RESTful operations
- Emulating the PUT and DELETE methods

(For more resources related to this topic, see here.)

Mapping the CRUD operations to HTTP methods

The HTTP 1.1 specification defines the following methods:

- OPTIONS: This method represents a request for information about the communication options available for the requested URI. It is typically not leveraged directly with REST. However, it can be used as part of the underlying communication; for example, this method may be used when consuming web services from a web page (as part of the cross-origin resource sharing mechanism).
- GET: This method retrieves the information identified by the request URI. In the context of RESTful web services, it is used to retrieve resources. This is the method used for read operations (the R in CRUD).
- HEAD: HEAD requests are semantically identical to GET requests except that the body of the response is not transmitted. This method is useful for obtaining meta-information about resources. Similar to the OPTIONS method, it is not typically used directly in REST web services.
- POST: This method is used to instruct the server to accept the entity enclosed in the request as a new resource. Create operations are typically mapped to this HTTP method.
- PUT: This method requests that the server store the enclosed entity under the request URI. It can be leveraged to support the updating of REST resources. As per the HTTP specification, the server can create the resource if the entity does not exist. It is up to the web service designer to decide whether this behavior should be implemented or whether resource creation should only be handled by POST requests.
- DELETE: The last operation not yet mapped is the deletion of resources. The HTTP specification defines a DELETE method that is semantically aligned with the deletion of RESTful resources.
- TRACE: This method is used to perform actions on web servers, often to aid the development and testing of HTTP applications. TRACE requests aren't usually mapped to any particular RESTful operation.
- CONNECT: This HTTP method is defined to support HTTP tunneling through a proxy server. Since it deals with transport-layer concerns, it has no natural semantic mapping to RESTful operations.

The RESTful architecture does not mandate the use of HTTP as a communication protocol. Furthermore, even if HTTP is selected as the underlying transport, no provisions are made regarding the mapping of the RESTful operations to the HTTP methods. Developers could feasibly support all operations through POST requests. This being said, the following CRUD to HTTP method mapping is commonly used in REST web services:

Operation   HTTP method
Create      POST
Read        GET
Update      PUT
Delete      DELETE

Our sample web service will use these HTTP methods to support CRUD operations.
The rest of this article will illustrate how to build such operations.

Creating resources

The inventory component of our sample property management system deals with rooms. We have already built an endpoint to access the rooms. Let's take a look at how to define an endpoint to create new resources:

@RestController
@RequestMapping("/rooms")
public class RoomsResource {
    @RequestMapping(method = RequestMethod.POST)
    public ApiResponse addRoom(@RequestBody RoomDTO room) {
        Room newRoom = createRoom(room);
        return new ApiResponse(Status.OK, new RoomDTO(newRoom));
    }
}

We've added a new method to our RoomsResource class to handle the creation of new rooms. @RequestMapping is used to map requests to the Java method; here, we map POST requests to addRoom(). Not specifying a value (that is, a path) in @RequestMapping is equivalent to using "/". We pass the new room as @RequestBody. This annotation instructs Spring to map the body of the incoming web request to the method parameter. Jackson is used here to convert the JSON request body to a Java object. With this new method, POSTing a request to http://localhost:8080/rooms with the following JSON body will result in the creation of a new room:

{
  name: "Cool Room",
  description: "A room that is very cool indeed",
  room_category_id: 1
}

Our new method will return the newly created room:

{
  "status": "OK",
  "data": {
    "id": 2,
    "name": "Cool Room",
    "room_category_id": 1,
    "description": "A room that is very cool indeed"
  }
}

We can decide to return only the ID of the new resource in response to the resource creation. However, since we may sanitize or otherwise manipulate the data that was sent over, it is good practice to return the full resource.

Quickly testing endpoints

For the purpose of quickly testing our newly created endpoint, let's look at testing the new rooms created using Postman. Postman (https://www.getpostman.com) is a Google Chrome extension that provides tools to build and test web APIs. The following screenshot illustrates how Postman can be used to test this endpoint. In Postman, we specify the URL to send the POST request to, http://localhost:8080/rooms, with the "application/json" content type header and the body of the request. Sending this request will result in a new room being created and returned. We have successfully added a room to our inventory service using Postman. It is equally easy to create incomplete requests to ensure our endpoint performs any necessary sanity checks before persisting data into the database.

JSON versus form data

Posting forms is the traditional way of creating new entities on the web and could easily be used to create new RESTful resources. We can change our method to the following:

@RequestMapping(method = RequestMethod.POST,
    consumes = MediaType.APPLICATION_FORM_URLENCODED_VALUE)
public ApiResponse addRoom(String name, String description, long roomCategoryId) {
    Room room = createRoom(name, description, roomCategoryId);
    return new ApiResponse(Status.OK, new RoomDTO(room));
}

The main difference from the previous method is that we tell Spring to map form requests (that is, requests with the application/x-www-form-urlencoded content type) instead of JSON requests. In addition, rather than expecting an object as a parameter, we receive each field individually. By default, Spring will use the Java method parameter names to map incoming form inputs. Developers can change this behavior by annotating a parameter with @RequestParam("…") to specify the input name.
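For instance, the following sketch (the form input names room_name, room_description, and room_category_id are hypothetical, and @RequestParam comes from org.springframework.web.bind.annotation) shows how the same endpoint could accept form inputs whose names differ from the Java parameter names:

@RequestMapping(method = RequestMethod.POST,
    consumes = MediaType.APPLICATION_FORM_URLENCODED_VALUE)
public ApiResponse addRoom(@RequestParam("room_name") String name,
                           @RequestParam("room_description") String description,
                           @RequestParam("room_category_id") long roomCategoryId) {
    // Same logic as before; only the mapping of form inputs to parameters changes
    Room room = createRoom(name, description, roomCategoryId);
    return new ApiResponse(Status.OK, new RoomDTO(room));
}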
In situations where the main web service consumer is a web application, using form requests may be more applicable. In most cases, however, the former approach is more in line with RESTful principles and should be favored. Besides, when complex resources are handled, form requests will prove cumbersome to use. From a developer's standpoint, it is easier to delegate object mapping to a third-party library such as Jackson. Now that we have created a new resource, let's see how we can update it.

Updating resources

Choosing URI formats is an important part of designing RESTful APIs. As seen previously, rooms are accessed using the /rooms/{roomId} path and created under /rooms. You may recall that, as per the HTTP specification, PUT requests can result in the creation of entities if they do not exist. The decision to create new resources on update requests is up to the service designer. It does, however, affect the choice of path to be used for such requests. Semantically, PUT requests update entities stored under the supplied request URI. This means that update requests should use the same URI as the GET requests: /rooms/{roomId}. However, this approach hinders the ability to support resource creation on update, since no room identifier will be available. The alternative path we can use is /rooms, with the room identifier passed in the body of the request. With this approach, PUT requests can be treated as POST requests when the resource does not contain an identifier. Given that the first approach is semantically more accurate, we will choose not to support resource creation on update, and we will use the following path for the PUT requests: /rooms/{roomId}

Update endpoint

The following method provides the necessary endpoint to modify rooms:

@RequestMapping(value = "/{roomId}", method = RequestMethod.PUT)
public ApiResponse updateRoom(@PathVariable long roomId, @RequestBody RoomDTO updatedRoom) {
    try {
        Room room = updateRoom(updatedRoom);
        return new ApiResponse(Status.OK, new RoomDTO(room));
    } catch (RecordNotFoundException e) {
        return new ApiResponse(Status.ERROR, null,
            new ApiError(999, "No room with ID " + roomId));
    }
}

As discussed at the beginning of this article, we map update requests to the HTTP PUT verb. Annotating this method with @RequestMapping(value = "/{roomId}", method = RequestMethod.PUT) instructs Spring to direct PUT requests here. The room identifier is part of the path and is mapped to the first method parameter. In a fashion similar to the resource creation requests, we map the body to our second parameter with the use of @RequestBody.

Testing update requests

With Postman, we can quickly create a test case to update the room we created. To do so, we send a PUT request with the following body:

{
  id: 2,
  name: "Cool Room",
  description: "A room that is really very cool indeed",
  room_category_id: 1
}

The resulting response will be the updated room, as shown here:

{
  "status": "OK",
  "data": {
    "id": 2,
    "name": "Cool Room",
    "room_category_id": 1,
    "description": "A room that is really very cool indeed."
  }
}

Should we attempt to update a nonexistent room, the server will generate the following response:

{
  "status": "ERROR",
  "error": {
    "error_code": 999,
    "description": "No room with ID 3"
  }
}

Since we do not support resource creation on update, the server returns an error indicating that the resource cannot be found.

Deleting resources

It will come as no surprise that we will use the DELETE verb to delete REST resources.
Similarly, the reader will have already figured out that the path for delete requests will be /rooms/{roomId}. The Java method that deals with room deletion is as follows:

@RequestMapping(value = "/{roomId}", method = RequestMethod.DELETE)
public ApiResponse deleteRoom(@PathVariable long roomId) {
    try {
        Room room = inventoryService.getRoom(roomId);
        inventoryService.deleteRoom(room.getId());
        return new ApiResponse(Status.OK, null);
    } catch (RecordNotFoundException e) {
        return new ApiResponse(Status.ERROR, null,
            new ApiError(999, "No room with ID " + roomId));
    }
}

By declaring the request mapping method to be RequestMethod.DELETE, Spring will make this method handle DELETE requests. Since the resource is deleted, returning it in the response would not make a lot of sense. Service designers may choose to return a boolean flag to indicate that the resource was successfully deleted. In our case, we leverage the status element of our response to carry this information back to the consumer. The response to deleting a room will be as follows:

{
  "status": "OK"
}

With this operation, we now have a full-fledged CRUD API for our Inventory Service. Before we conclude this article, let's discuss how REST developers can deal with situations where not all HTTP verbs can be utilized.

HTTP method override

In certain situations (for example, when the service or its consumers are behind an overzealous corporate firewall, or if the main consumer is a web page), only the GET and POST HTTP methods might be available. In such cases, it is possible to emulate the missing verbs by passing a custom header in the requests. For example, resource updates can be handled using POST requests by setting a custom header (for example, X-HTTP-Method-Override) to PUT to indicate that we are emulating a PUT request via a POST request. The following method will handle this scenario:

@RequestMapping(value = "/{roomId}", method = RequestMethod.POST,
    headers = {"X-HTTP-Method-Override=PUT"})
public ApiResponse updateRoomAsPost(@PathVariable("roomId") long id, @RequestBody RoomDTO updatedRoom) {
    return updateRoom(id, updatedRoom);
}

By setting the headers attribute on the mapping annotation, Spring request routing will intercept POST requests carrying our custom header and invoke this method. Normal POST requests will still map to the Java method we had put together to create new rooms.

Summary

In this article, we completed the implementation of our sample RESTful web service by adding all the CRUD operations necessary to manage the room resources. We discussed how to organize URIs to best embody the REST principles and looked at how to quickly test endpoints using Postman. Now that we have a fully working component of our system, we can take some time to discuss performance.

Resources for Article:

Further resources on this subject:
Introduction to Spring Web Application in No Time [article]
Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging [article]
Time Travelling with Spring [article]
Working on the User Interface

Packt
16 Sep 2015
17 min read
In this article by Fabrizio Caldarelli, author of the book Yii By Example, will cover the following topics related to the user interface in this article: Customizing JavaScript and CSS Using AJAX Using the Bootstrap widget Viewing multiple models in the same view Saving linked models in the same view It is now time for you to learn what Yii2 supports in order to customize the JavaScript and CSS parts of web pages. A recurrent use of JavaScript is to handle AJAX calls, that is, to manage widgets and compound controls (such as a dependent drop-down list) from jQuery and Bootstrap. Finally, we will employ jQuery to dynamically create more models from the same class in the form, which will be passed to the controller in order to be validated and saved. (For more resources related to this topic, see here.) Customize JavaScript and CSS Using JavaScript and CSS is fundamental to customize frontend output. Differently from Yii1, where calling JavaScript and CSS scripts and files was done using the Yii::app() singleton, in the new framework version, Yii2, this task is part of the yiiwebView class. There are two ways to call JavaScript or CSS: either directly passing the code to be executed or passing the path file. The registerJs() function allows us to execute the JavaScript code with three parameters: The first parameter is the JavaScript code block to be registered The second parameter is the position where the JavaScript tag should be inserted (the header, the beginning of the body section, the end of the body section, enclosed within the jQuery load() method, or enclosed within the jQuery document.ready() method, which is the default) The third and last parameter is a key that identifies the JavaScript code block (if it is not provided, the content of the first parameter will be used as the key) On the other hand, the registerJsFile() function allows us to execute a JavaScript file with three parameters: The first parameter is the path file of the JavaScript file The second parameter is the HTML attribute for the script tag, with particular attention given to the depends and position values, which are not treated as tag attributes The third parameter is a key that identifies the JavaScript code block (if it's not provided, the content of the first parameter will be used as the key) CSS, similar to JavaScript, can be executed using the code or by passing the path file. Generally, JavaScript or CSS files are published in the basic/web folder, which is accessible without restrictions. So, when we have to use custom JavaScript or CSS files, it is recommended to put them in a subfolder of the basic/web folder, which can be named as css or js. In some circumstances, we might be required to add a new CSS or JavaScript file for all web application pages. The most appropriate place to put these entries is AppAsset.php, a file located in basic/assets/AppAsset.php. In it, we can add CSS and JavaScript entries required in web applications, even using dependencies if we need to. Using AJAX Yii2 provides appropriate attributes for some widgets to make AJAX calls; sometimes, however, writing a JavaScript code in these attributes will make code hard to read, especially if we are dealing with complex codes. Consequently, to make an AJAX call, we will use external JavaScript code executed by registerJs(). 
This is a template of the AJAX class using the GET or POST method: <?php $this->registerJs( <<< EOT_JS // using GET method $.get({ url: url, data: data, success: success, dataType: dataType }); // using POST method $.post({ url: url, data: data, success: success, dataType: dataType }); EOT_JS ); ?> An AJAX call is usually the effect of a user interface event (such as a click on a button, a link, and so on). So, it is directly connected to jQuery .on() event on element. For this reason, it is important to remember how Yii2 renders the name and id attributes of input fields. When we call Html::activeTextInput($model, $attribute) or in the same way use <?= $form->field($model, $attribute)->textInput() ?>. The name and id attributes of the input text field will be rendered as follows: id : The model class name separated with a dash by the attribute name in lowercase; for example, if the model class name is Room and the attribute is floor, the id attribute will be room-floor name: The model class name that encloses the attribute name, for example, if the model class name is Reservation and the attribute is price_per_day, the name attribute will be Reservation[price_per_day]; so every field owned by the Reservation model will be enclosed all in a single array In this example, there are two drop-down lists and a detail box. The two drop-down lists refer to customers and reservations; when user clicks on a customer list item, the second drop-down list of reservations will be filled out according to their choice. Finally, when a user clicks on a reservation list item, a details box will be filled out with data about the selected reservation. In an action named actionDetailDependentDropdown():   public function actionDetailDependentDropdown() { $showDetail = false; $model = new Reservation(); if(isset($_POST['Reservation'])) { $model->load( Yii::$app->request->post() ); if(isset($_POST['Reservation']['id'])&& ($_POST['Reservation']['id']!=null)) { $model = Reservation::findOne($_POST['Reservation']['id']); $showDetail = true; } } return $this->render('detailDependentDropdown', [ 'model' => $model, 'showDetail' => $showDetail ]); } In this action, we will get the customer_id and id parameters from a form based on the Reservation model data and if it are filled out, the data will be used to search for the correct reservation model to be passed to the view. There is a flag called $showDetail that displays the reservation details content if the id attribute of the model is received. In Controller, there is also an action that will be called using AJAX when the user changes the customer selection in the drop-down list:   public function actionAjaxDropDownListByCustomerId($customer_id) { $output = ''; $items = Reservation::findAll(['customer_id' => $customer_id]); foreach($items as $item) { $content = sprintf('reservation #%s at %s', $item->id, date('Y-m-d H:i:s', strtotime($item- >reservation_date))); $output .= yiihelpersHtml::tag('option', $content, ['value' => $item->id]); } return $output; } This action will return the <option> HTML tags filled out with reservations data filtered by the customer ID passed as a parameter. If the customer drop-down list is changed, the detail div will be hidden, an AJAX call will get all the reservations filtered by customer_id, and the result will be passed as content to the reservations drop-down list. If the reservations drop-down list is changed, a form will be submitted. 
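The JavaScript that wires the two drop-down lists together is not shown in the extract above; the following is a rough sketch of how it could be registered from the view with registerJs(). The element IDs follow the naming convention just described (#reservation-customer_id and #reservation-id), while the #detail selector and the relative action URL are assumptions based on the behavior described:

<?php
$this->registerJs(<<<EOT_JS
    // When the customer changes: hide the detail box and reload the reservations list via AJAX
    $('#reservation-customer_id').on('change', function() {
        $('#detail').hide();
        $.get(
            'ajax-drop-down-list-by-customer-id',
            { customer_id: $(this).val() },
            function(data) {
                $('#reservation-id').html(data);
            }
        );
    });
    // When the reservation changes: submit the form so the controller can render the detail box
    $('#reservation-id').on('change', function() {
        $(this).closest('form').submit();
    });
EOT_JS
);
?>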
Next in the form declaration, we can find the first of all the customer drop-down list and then the reservations list, which use a closure to get the value from the ArrayHelper::map() methods. We could add a new property in the Reservation model by creating a function starting with the prefix get, such as getDescription(), and put in it the content of the closure, or rather: public function getDescription() { $content = sprintf('reservation #%s at %s', $this>id, date('Y-m-d H:i:s', strtotime($this>reservation_date))); return $content; } Or we could use a short syntax to get data from ArrayHelper::map() in this way: <?= $form->field($model, 'id')->dropDownList(ArrayHelper::map( $reservations, 'id', 'description'), [ 'prompt' => '--- choose' ]); ?> Finally, if $showDetail is flagged, a simple details box with only the price per day of the reservation will be displayed. Using the Bootstrap widget Yii2 supports Bootstrap as a core feature. Bootstrap framework CSS and JavaScript files are injected by default in all pages, and we could use this feature even to only apply CSS classes or call our own JavaScript function provided by Bootstrap. However, Yii2 embeds Bootstrap as a widget, and we can access this framework's capabilities like any other widget. The most used are: Class name Description yiibootstrapAlert This class renders an alert Bootstrap component yiibootstrapButton This class renders a Bootstrap button yiibootstrapDropdown This class renders a Bootstrap drop-down menu component yiibootstrapNav This class renders a nav HTML component yiibootstrapNavBar This class renders a navbar HTML component For example, yiibootstrapNav and yiibootstrapNavBar are used in the default main template.   <?php NavBar::begin([ 'brandLabel' => 'My Company', 'brandUrl' => Yii::$app->homeUrl, 'options' => [ 'class' => 'navbar-inverse navbar-fixed-top', ], ]); echo Nav::widget([ 'options' => ['class' => 'navbar-nav navbar- right'], 'items' => [ ['label' => 'Home', 'url' => ['/site/index']], ['label' => 'About', 'url' => ['/site/about']], ['label' => 'Contact', 'url' => ['/site/contact']], Yii::$app->user->isGuest ? ['label' => 'Login', 'url' => ['/site/login']] : ['label' => 'Logout (' . Yii::$app->user- >identity->username . ')', 'url' => ['/site/logout'], 'linkOptions' => ['data-method' => 'post']], ], ]); NavBar::end(); ?> Yii2 also supports, by itself, many jQuery UI widgets through the JUI extension for Yii 2, yii2-jui. If we do not have the yii2-jui extension in the vendor folder, we can get it from Composer using this command: php composer.phar require --prefer-dist yiisoft/yii2-jui In this example, we will discuss the two most used widgets: datepicker and autocomplete. Let's have a look at the datepicker widget. This widget can be initialized using a model attribute or by filling out a value property. The following is an example made using a model instance and one of its attributes: echo DatePicker::widget([ 'model' => $model, 'attribute' => 'from_date', //'language' => 'it', //'dateFormat' => 'yyyy-MM-dd', ]); And, here is a sample of the value property's use: echo DatePicker::widget([ 'name' => 'from_date', 'value' => $value, //'language' => 'it', //'dateFormat' => 'yyyy-MM-dd', ]); When data is sent via POST, the date_from and date_to fields will be converted from the d/m/y to the y-m-d format to make it possible for the database to save data. Then, the model object is updated through the save() method. 
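The controller action that performs this conversion and save is not included in the extract; a minimal sketch (the action name and the fixed record ID are assumptions) could look like this:

public function actionDatePicker()
{
    $reservationUpdated = false;
    $reservation = Reservation::findOne(1); // assumption: the reservation being edited

    if ($reservation->load(Yii::$app->request->post())) {
        // Convert from the DatePicker display format (dd/MM/yyyy) to the database format (Y-m-d)
        $reservation->date_from = \DateTime::createFromFormat('d/m/Y', $reservation->date_from)->format('Y-m-d');
        $reservation->date_to   = \DateTime::createFromFormat('d/m/Y', $reservation->date_to)->format('Y-m-d');
        $reservationUpdated = $reservation->save();
    }

    return $this->render('datePicker', [
        'reservation' => $reservation,
        'reservationUpdated' => $reservationUpdated,
    ]);
}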
Using the Bootstrap widget, an alert box will be displayed in the view after updating the model. Create the datePicker view:

<?php
use yii\helpers\Html;
use yii\widgets\ActiveForm;
use yii\jui\DatePicker;
?>

<div class="row">
    <div class="col-lg-6">
        <h3>Date Picker from Value<br />(using MM/dd/yyyy format and English language)</h3>
        <?php
            $value = date('Y-m-d');
            echo DatePicker::widget([
                'name' => 'from_date',
                'value' => $value,
                'language' => 'en',
                'dateFormat' => 'MM/dd/yyyy',
            ]);
        ?>
    </div>
    <div class="col-lg-6">
        <?php if($reservationUpdated) { ?>
            <?php echo yii\bootstrap\Alert::widget([
                'options' => [
                    'class' => 'alert-success',
                ],
                'body' => 'Reservation successfully updated',
            ]); ?>
        <?php } ?>
        <?php $form = ActiveForm::begin(); ?>
        <h3>Date Picker from Model<br />(using dd/MM/yyyy format and italian language)</h3>
        <br />
        <label>Date from</label>
        <?php
            // First implementation of DatePicker Widget
            echo DatePicker::widget([
                'model' => $reservation,
                'attribute' => 'date_from',
                'language' => 'it',
                'dateFormat' => 'dd/MM/yyyy',
            ]);
        ?>
        <br />
        <br />
        <?php
            // Second implementation of DatePicker Widget
            echo $form->field($reservation, 'date_to')->widget(\yii\jui\DatePicker::classname(), [
                'language' => 'it',
                'dateFormat' => 'dd/MM/yyyy',
            ]);
        ?>
        <?php echo Html::submitButton('Send', ['class' => 'btn btn-primary']) ?>
        <?php $form = ActiveForm::end(); ?>
    </div>
</div>

The view is split into two columns, left and right. The left column simply displays a DatePicker example from the value (fixed to the current date). The right column displays an alert box if the $reservation model has been updated, and then two kinds of widget declaration, the first one without using $form and the second one using $form, both outputting the same HTML code. In either case, the DatePicker date output format is set to dd/MM/yyyy through the dateFormat property and the language is set to Italian through the language property.

Multiple models in the same view

Often, we can find many models of the same or different classes in a single view. First of all, remember that Yii2 encapsulates all the views' form attributes in the same container, named the same as the model class name. Therefore, when the controller receives the data, they will all be organized in a key of the $_POST array named the same as the model class name. If the model class name is Customer, every form input name attribute will be:

Customer[attributeA_of_model]

This is built with:

$form->field($model, 'attributeA_of_model')->textInput();

In the case of multiple models of the same class, the container will again be named as the model class name, but every attribute of each model will be inserted in an array, such as:

Customer[0][attributeA_of_model_0]
Customer[0][attributeB_of_model_0]
… … …
Customer[n][attributeA_of_model_n]
Customer[n][attributeB_of_model_n]

These are built with:

$form->field($model, '[0]attributeA_of_model')->textInput();
$form->field($model, '[0]attributeB_of_model')->textInput();
… … …
$form->field($model, '[n]attributeA_of_model')->textInput();
$form->field($model, '[n]attributeB_of_model')->textInput();

Notice that the array key information is inserted in the attribute name! So when data is passed to the controller, $_POST['Customer'] will be an array composed of the Customer models, and every key of this array, for example, $_POST['Customer'][0], holds the data of one Customer model. Let's see now how to save three customers at once. We will create three containers, one for each model, that will contain some fields of the Customer model. A controller action that prepares and saves the models is sketched below, followed by the view.
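The original text moves straight to the view, so the following controller action is only an illustrative sketch; the action name, the view name, and the fixed count of three models are assumptions. It relies on the loadMultiple() and validateMultiple() helpers of Yii2's base Model class, which process all models of the same class in one call:

public function actionCreateMultipleCustomers()
{
    $models = [];
    for ($k = 0; $k < 3; $k++) {
        $models[] = new Customer();
    }

    // load every model from $_POST['Customer'][0..2] and validate each one separately
    if (Customer::loadMultiple($models, Yii::$app->request->post())
        && Customer::validateMultiple($models)) {
        foreach ($models as $model) {
            $model->save(false); // skip validation here, it has already been performed
        }
    }

    return $this->render('createMultipleCustomers', [
        'models' => $models,
    ]);
}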
Create a view containing a block of input fields repeated for every model passed from the controller:

<?php
use yii\helpers\Html;
use yii\widgets\ActiveForm;

/* @var $this yii\web\View */
/* @var $model app\models\Room */
/* @var $form yii\widgets\ActiveForm */
?>

<div class="room-form">
    <?php $form = ActiveForm::begin(); ?>
    <div class="model">
        <?php for($k=0; $k<sizeof($models); $k++) { ?>
            <?php $model = $models[$k]; ?>
            <hr />
            <label>Model #<?php echo $k+1 ?></label>
            <?= $form->field($model, "[$k]name")->textInput() ?>
            <?= $form->field($model, "[$k]surname")->textInput() ?>
            <?= $form->field($model, "[$k]phone_number")->textInput() ?>
        <?php } ?>
    </div>
    <hr />
    <div class="form-group">
        <?= Html::submitButton('Save', ['class' => 'btn btn-primary']) ?>
    </div>
    <?php ActiveForm::end(); ?>
</div>

For each model, all the fields will have the same validator rules of the Customer class, and every single model object will be validated separately.

Saving linked models in the same view

It can be convenient to save different kinds of models in the same view. This approach allows us to save time and to navigate from every single detail until a final item that merges all data is created. Handling different kinds of models linked to each other is not so different from what we have seen so far. The only point to take care of is the link (foreign keys) between models, which we must ensure is valid. Therefore, the controller action will receive the $_POST data encapsulated in the model's class name container; if we are thinking, for example, of the customer and reservation models, we will have two arrays in the $_POST variable, $_POST['Customer'] and $_POST['Reservation'], containing all the fields of the customer and reservation models. Then, all the data must be saved together. It is advisable to use a database transaction while saving data because the action can be considered as ended only when all the data has been saved. Using database transactions in Yii2 is incredibly simple! A database transaction starts with calling beginTransaction() on the database connection object and finishes with calling the commit() or rollback() method on the database transaction object created by beginTransaction().

To start a transaction:

$dbTransaction = Yii::$app->db->beginTransaction();

To commit the transaction, saving all the database activities:

$dbTransaction->commit();

To roll back the transaction, clearing all the database activities:

$dbTransaction->rollback();

So, if a customer was saved and the reservation was not (for any possible reason), our data would be partial and incomplete. Using a database transaction, we will avoid this danger. We now want to create both the customer and reservation models in the same view in a single step. In this way, we will have a box containing the customer model fields and a box with the reservation model fields in the view.
Create a view containing the fields from the customer and reservation models:

<?php
use yii\helpers\Html;
use yii\widgets\ActiveForm;
use yii\helpers\ArrayHelper;
use app\models\Room;
?>

<div class="room-form">
    <?php $form = ActiveForm::begin(); ?>
    <div class="model">
        <?php echo $form->errorSummary([$customer, $reservation]); ?>
        <h2>Customer</h2>
        <?= $form->field($customer, "name")->textInput() ?>
        <?= $form->field($customer, "surname")->textInput() ?>
        <?= $form->field($customer, "phone_number")->textInput() ?>
        <h2>Reservation</h2>
        <?= $form->field($reservation, "room_id")->dropDownList(ArrayHelper::map(Room::find()->all(), 'id', function($room, $defaultValue) {
            return sprintf('Room n.%d at floor %d', $room->room_number, $room->floor);
        })); ?>
        <?= $form->field($reservation, "price_per_day")->textInput() ?>
        <?= $form->field($reservation, "date_from")->textInput() ?>
        <?= $form->field($reservation, "date_to")->textInput() ?>
    </div>
    <div class="form-group">
        <?= Html::submitButton('Save customer and room', ['class' => 'btn btn-primary']) ?>
    </div>
    <?php ActiveForm::end(); ?>
</div>

We have created two blocks in the form to fill out the fields for the customer and the reservation. Now, create a new action named actionCreateCustomerAndReservation in ReservationsController in basic/controllers/ReservationsController.php:

public function actionCreateCustomerAndReservation()
{
    $customer = new \app\models\Customer();
    $reservation = new \app\models\Reservation();
    // It is useful to set a fake customer_id in the reservation model to avoid a validation error (because customer_id is mandatory)
    $reservation->customer_id = 0;
    if(
        $customer->load(Yii::$app->request->post())
        && $reservation->load(Yii::$app->request->post())
        && $customer->validate()
        && $reservation->validate()
    )
    {
        $dbTrans = Yii::$app->db->beginTransaction();
        $customerSaved = $customer->save();
        if($customerSaved)
        {
            $reservation->customer_id = $customer->id;
            $reservationSaved = $reservation->save();
            if($reservationSaved)
            {
                $dbTrans->commit();
            }
            else
            {
                $dbTrans->rollback();
            }
        }
        else
        {
            $dbTrans->rollback();
        }
    }
    return $this->render('createCustomerAndReservation', [
        'customer' => $customer,
        'reservation' => $reservation
    ]);
}

Summary

In this article, we saw how to embed JavaScript and CSS in a layout and views, with file content or an inline block. This was applied to an example that showed you how to change the number of columns displayed based on the browser's available width; this is a typical task for websites or web apps that display advertising columns. Again on the subject of JavaScript, you learned how to implement direct AJAX calls, taking an example where the reservation detail was dynamically loaded from the customers drop-down list. Next, we looked at Yii's core user interface library, which is built on Bootstrap, and we illustrated how to use the main Bootstrap widgets natively, together with DatePicker, probably the most commonly used jQuery UI widget. Finally, the last topics covered were multiple models of the same and different classes. We looked at two examples on these topics: the first one to save multiple customers at the same time and the second to create a customer and reservation in the same view.

Resources for Article:

Further resources on this subject:
Yii: Adding Users and User Management to Your Site [article]
Meet Yii [article]
Creating an Extension in Yii 2 [article]

Writing SOLID JavaScript code with TypeScript
Packt
15 Sep 2015
12 min read
In this article by Remo H. Jansen, author of the book Learning TypeScript, we will see that in the early days of software development, developers used to write code with procedural programming languages. In procedural programming languages, the programs follow a top-to-bottom approach and the logic is wrapped in functions. New styles of computer programming, like modular programming or structured programming, emerged when developers realized that procedural computer programs could not provide them with the desired level of abstraction, maintainability, and reusability. The development community created a series of recommended practices and design patterns to improve the level of abstraction and reusability of procedural programming languages, but some of these guidelines required a certain level of expertise. In order to facilitate the adherence to these guidelines, a new style of computer programming known as object-oriented programming (OOP) was created. (For more resources related to this topic, see here.)

Developers quickly noticed some common OOP mistakes and came up with five rules that every OOP developer should follow to create a system that is easy to maintain and extend over time. These five rules are known as the SOLID principles. SOLID is an acronym introduced by Michael Feathers, which stands for each of the following principles:

Single responsibility principle (SRP): This principle states that a software component (function, class, or module) should focus on one unique task (have only one responsibility).
Open/closed principle (OCP): This principle states that software entities should be designed with application growth (new code) in mind (be open to extension), but the application growth should require as few changes to the existing code as possible (be closed for modification).
Liskov substitution principle (LSP): This principle states that we should be able to replace a class in a program with another class as long as both classes implement the same interface. After replacing the class, no other changes should be required and the program should continue to work as it did originally.
Interface segregation principle (ISP): This principle states that we should split interfaces that are very large (general-purpose interfaces) into smaller and more specific ones (many client-specific interfaces) so that clients will only have to know about the methods that are of interest to them.
Dependency inversion principle (DIP): This principle states that entities should depend on abstractions (interfaces) as opposed to depending on concretions (classes).

JavaScript does not support interfaces, and most developers find its class support (prototypes) not intuitive. This may lead us to think that writing JavaScript code that adheres to the SOLID principles is not possible. However, with TypeScript we can write truly SOLID JavaScript. In this article we will learn how to write TypeScript code that adheres to the SOLID principles so our applications are easy to maintain and extend over time. Let's start by taking a look at interfaces and classes in TypeScript.

Interfaces

The feature that we will miss the most when developing large-scale web applications with JavaScript is probably interfaces. Following the SOLID principles can help us to improve the quality of our code, and writing good code is a must when working on a large project.
The problem is that if we attempt to follow the SOLID principles with JavaScript, we will soon realize that without interfaces we will never be able to write truly OOP code that adheres to the SOLID principles. Fortunately for us, TypeScript features interfaces. Wikipedia's definition of interfaces in OOP is:

In object-oriented languages, the term interface is often used to define an abstract type that contains no data or code, but defines behaviors as method signatures.

Implementing an interface can be understood as signing a contract. The interface is a contract, and when we sign it (implement it) we must follow its rules. The interface rules are the signatures of the methods and properties, and we must implement them. Usually in OOP languages, a class can extend another class and implement one or more interfaces. On the other hand, an interface can implement one or more interfaces and cannot extend another class or interfaces. In TypeScript, interfaces don't strictly follow this behavior. The main two differences are that in TypeScript:

An interface can extend another interface or class.
An interface can define data and behavior as opposed to only behavior.

An interface in TypeScript can be declared using the interface keyword:

interface IPerson {
    greet(): void;
}

Classes

The support of classes is another essential feature needed to write code that adheres to the SOLID principles. We can create classes in JavaScript using prototypes, but it is not as trivial as it is in other OOP languages like Java or C#. The ECMAScript 6 (ES6) specification of JavaScript introduces native support for the class keyword, but unfortunately ES6 is not compatible with many old browsers that are still around. However, TypeScript features classes and allows us to use them today, because we can indicate to the compiler which version of JavaScript we would like to use (including ES3, ES5, and ES6). Let's start by declaring a simple class:

class Person implements IPerson {
    public name : string;
    public surname : string;
    public email : string;
    constructor(name : string, surname : string, email : string){
        this.email = email;
        this.name = name;
        this.surname = surname;
    }
    greet() {
        alert("Hi!");
    }
}

var me : Person = new Person("Remo", "Jansen", "remo.jansen@wolksoftware.com");

We use classes to represent the type of an object or entity. A class is composed of a name, attributes, and methods. The class above is named Person and contains three attributes or properties (name, surname, and email) and two methods (constructor and greet). The class attributes are used to describe the object's characteristics, while the class methods are used to describe its behavior. The class above uses the implements keyword to implement the IPerson interface. All the methods (greet) declared by the IPerson interface must be implemented by the Person class. A constructor is a special method used by the new keyword to create instances (also known as objects) of our class. We have declared a variable named me, which holds an instance of the class Person. The new keyword uses the Person class's constructor to return an object whose type is Person.

Single Responsibility Principle

This principle states that a software component (usually a class) should have only one responsibility. The Person class above represents a person, including all its characteristics (attributes) and behaviors (methods).
Now, let's add some email validation logic to showcase the advantages of the SRP:

class Person {
    public name : string;
    public surname : string;
    public email : string;
    constructor(name : string, surname : string, email : string) {
        this.surname = surname;
        this.name = name;
        if(this.validateEmail(email)) {
            this.email = email;
        }
        else {
            throw new Error("Invalid email!");
        }
    }
    validateEmail(email : string) {
        var re = /\S+@\S+\.\S+/;
        return re.test(email);
    }
    greet() {
        alert("Hi! I'm " + this.name + ". You can reach me at " + this.email);
    }
}

When an object doesn't follow the SRP and it knows too much (has too many properties) or does too much (has too many methods), we say that the object is a God object. The preceding class Person is a God object because we have added a method named validateEmail that is not really related to the Person class's behavior. Deciding which attributes and methods should or should not be part of a class is a relatively subjective decision. If we spend some time analyzing our options, we should be able to find a way to improve the design of our classes. We can refactor the Person class by declaring an Email class, which is responsible for the e-mail validation, and use it as an attribute in the Person class:

class Email {
    public email : string;
    constructor(email : string){
        if(this.validateEmail(email)) {
            this.email = email;
        }
        else {
            throw new Error("Invalid email!");
        }
    }
    validateEmail(email : string) {
        var re = /\S+@\S+\.\S+/;
        return re.test(email);
    }
}

Now that we have an Email class, we can remove the responsibility of validating the e-mails from the Person class and update its email attribute to use the type Email instead of string.

class Person {
    public name : string;
    public surname : string;
    public email : Email;
    constructor(name : string, surname : string, email : Email){
        this.email = email;
        this.name = name;
        this.surname = surname;
    }
    greet() {
        alert("Hi!");
    }
}

Making sure that a class has a single responsibility makes it easier to see what it does and how we can extend/improve it. We can further improve our Person and Email classes by increasing the level of abstraction of our classes. For example, when we use the Email class we don't really need to be aware of the existence of the validateEmail method, so this method could be private or internal (invisible from outside the Email class). As a result, the Email class would be much simpler to understand. When we increase the level of abstraction of an object, we can say that we are encapsulating that object. Encapsulation is also known as information hiding. For example, the Email class allows us to use e-mails without having to worry about the e-mail validation because the class will deal with it for us. We can make this clearer by using access modifiers (public or private) to flag as private all the class attributes and methods that we want to abstract from the usage of the Email class:

class Email {
    private email : string;
    constructor(email : string){
        if(this.validateEmail(email)) {
            this.email = email;
        }
        else {
            throw new Error("Invalid email!");
        }
    }
    private validateEmail(email : string) {
        var re = /\S+@\S+\.\S+/;
        return re.test(email);
    }
    get():string {
        return this.email;
    }
}

We can then simply use the Email class without explicitly performing any kind of validation:

var email = new Email("remo.jansen@wolksoftware.com");

Liskov Substitution Principle

The Liskov Substitution Principle (LSP) states: Subtypes must be substitutable for their base types. Let's take a look at an example to understand what this means.
We are going to declare a class whose responsibility is to persist some objects into some kind of storage. We will start by declaring the following interface:

interface IPersistanceService {
    save(entity : any) : number;
}

After declaring the IPersistanceService interface, we can implement it. We will use cookies as the storage for the application's data:

class CookiePersitanceService implements IPersistanceService {
    save(entity : any) : number {
        var id = Math.floor((Math.random() * 100) + 1);
        // Cookie persistance logic...
        return id;
    }
}

We will continue by declaring a class named FavouritesController, which has a dependency on the IPersistanceService interface:

class FavouritesController {
    private _persistanceService : IPersistanceService;
    constructor(persistanceService : IPersistanceService) {
        this._persistanceService = persistanceService;
    }
    public saveAsFavourite(articleId : number) {
        return this._persistanceService.save(articleId);
    }
}

We can finally create an instance of FavouritesController and pass an instance of CookiePersitanceService via its constructor.

var favController = new FavouritesController(new CookiePersitanceService());

The LSP allows us to replace a dependency with another implementation as long as both implementations are based on the same base type. For example, say we decide to stop using cookies as storage and use the HTML5 local storage API instead, without having to worry about the FavouritesController code being affected by this change:

class LocalStoragePersitanceService implements IPersistanceService {
    save(entity : any) : number {
        var id = Math.floor((Math.random() * 100) + 1);
        // Local storage persistance logic...
        return id;
    }
}

We can then replace it without having to add any changes to the FavouritesController controller class:

var favController = new FavouritesController(new LocalStoragePersitanceService());

Interface Segregation Principle

In the previous example, our interface was IPersistanceService and it was implemented by the classes LocalStoragePersitanceService and CookiePersitanceService. The interface was consumed by the class FavouritesController, so we say that this class is a client of the IPersistanceService API. The Interface Segregation Principle (ISP) states that no client should be forced to depend on methods it does not use. To adhere to the ISP, we need to keep in mind that when we declare the API (how two or more software components cooperate and exchange information with each other) of our application's components, declaring many client-specific interfaces is better than declaring one general-purpose interface. Let's take a look at an example. If we are designing an API to control all the elements in a vehicle (engine, radio, heating, navigation, lights, and so on), we could have one general-purpose interface, which allows controlling every single element of the vehicle:

interface IVehicle {
    getSpeed() : number;
    getVehicleType() : string;
    isTaxPayed() : boolean;
    isLightsOn() : boolean;
    isLightsOff() : boolean;
    startEngine() : void;
    acelerate() : number;
    stopEngine() : void;
    startRadio() : void;
    playCd() : void;
    stopRadio() : void;
}

If a class has a dependency (client) on the IVehicle interface but it only wants to use the radio methods, we would be facing a violation of the ISP because, as we have already learned, no client should be forced to depend on methods it does not use.
The solution is to split the IVehicle interface into many client-specific interfaces so that our class can adhere to the ISP by depending only on IRadio:

interface IVehicle {
    getSpeed() : number;
    getVehicleType() : string;
    isTaxPayed() : boolean;
    isLightsOn() : boolean;
}

interface ILights {
    isLightsOn() : boolean;
    isLightsOff() : boolean;
}

interface IRadio {
    startRadio() : void;
    playCd() : void;
    stopRadio() : void;
}

interface IEngine {
    startEngine() : void;
    acelerate() : number;
    stopEngine() : void;
}

Dependency Inversion Principle

The Dependency Inversion (DI) principle states that we should:

Depend upon abstractions. Do not depend upon concretions.

In the previous section, we implemented FavouritesController and we were able to replace an implementation of IPersistanceService with another without having to perform any additional change to FavouritesController. This was possible because we followed the DI principle, as FavouritesController has a dependency on the IPersistanceService interface (an abstraction) rather than the LocalStoragePersitanceService class or the CookiePersitanceService class (concretions). The DI principle also allows us to use an inversion of control (IoC) container. An IoC container is a tool used to reduce the coupling between the components of an application. Refer to Inversion of Control Containers and the Dependency Injection pattern by Martin Fowler at http://martinfowler.com/articles/injection.html if you want to learn more about IoC.

Summary

In this article, we looked at classes, interfaces, and the SOLID principles.

Resources for Article:

Further resources on this subject:
Welcome to JavaScript in the full stack [article]
Introduction to Spring Web Application in No Time [article]
Introduction to TypeScript [article]

Welcome to JavaScript in the full stack
Packt
15 Sep 2015
12 min read
In this article by Mithun Satheesh, the author of the book Web Development with MongoDB and NodeJS, you will not only learn how to use JavaScript to develop a complete single-page web application such as Gmail, but you will also know how to achieve the following projects with JavaScript throughout the remaining part of the book: Completely power the backend using Node.js and Express.js Persist data with a powerful document oriented database such as MongoDB Write dynamic HTML pages using Handlebars.js Deploy your entire project to the cloud using services such as Heroku and AWS With the introduction of Node.js, JavaScript has officially gone in a direction that was never even possible before. Now, you can use JavaScript on the server, and you can also use it to develop full-scale enterprise-level applications. When you combine this with the power of MongoDB and its JSON-powered data, you can work with JavaScript in every layer of your application. (For more resources related to this topic, see here.) A short introduction to Node.js One of the most important things that people get confused about while getting acquainted with Node.js is understanding what exactly it is. Is it a different language altogether or is it just a framework or is it something else? Node.js is definitely not a new language, and it is not just a framework on JavaScript too. It can be considered as a runtime environment for JavaScript built on top of Google's V8 engine. So, it provides us with a context where we can write JS code on any platform and where Node.js can be installed. That is anywhere! Now a bit of history; back in 2009, Ryan Dahl gave a presentation at JSConf that changed JavaScript forever. During his presentation, he introduced Node.js to the JavaScript community, and after a, roughly, 45-minute talk, he concluded it, receiving a standing ovation from the audience in the process. He was inspired to write Node.js after he saw a simple file upload progress bar on Flickr, the image-sharing site. Realizing that the site was going about the whole process the wrong way, he decided that there had to be a better solution. Now let's go through the features of Node.js that make it unique from other server side programming languages. The advantage that the V8 engine brings in The V8 engine was developed by Google and was made open source in 2008. As we all know, JavaScript is an interpreted language and it will not be as efficient as a compiled language as each line of code gets interpreted one by one while the code is executed. The V8 engine brings in an efficient model here where the JavaScript code will be compiled into machine level code and the executions will happen on the compiled code instead of interpreting the JavaScript. But even though Node.js is using V8 engine, Joyent, which is the company that is maintaining Node.js development, does not always update the V8 engine to the latest versions that Google actively releases. Node is single threaded! You might be asking how does a single threaded model help? Typical PHP, ASP.NET, Ruby, or Java based servers follow a model where each client request results in instantiation of a new thread or even a process. When it comes to Node.js, requests are run on the same thread with even shared resources. A common question that we might be asking will be the advantage of using such a model. To understand this, we should understand the problem that Node.js tries to resolve. 
It tries to do an asynchronous processing on a single thread to provide more performance and scalability for applications, which are supposed to handle too much web traffic. Imagine web applications that handle millions of concurrent requests; the server makes a new thread for handling each request that comes in, it will consume many resources. We would end up trying to add more and more servers to add the scalability of the application. The single threaded asynchronous processing model has its advantage in the previous context and you can get to process much more concurrent requests with a less number of server side resources. But there is a downside to this approach, that Node.js, by default, will not utilize the number of CPU cores available on the server it is running on without using an extra module such as pm2. The point that Node.js is single threaded doesn't mean that Node doesn't use threads internally. It is that the developer and the execution context that his code has exposure to have no control over the threading model internally used by Node.js. If you are new to the concept of threads and processes, we would suggest you to go through some preliminary articles regarding this. There are plenty of YouTube videos as well on the same topic. The following reference could be used as a starting point: http://www.cs.ucsb.edu/~rich/class/cs170/notes/IntroThreads/ Nonblocking asynchronous execution One of the most powerful features of Node is that it is event-driven and asynchronous. So how does an asynchronous model work? Imagine you have a block of code and at some nth line you have an operation, which is time consuming. So what happens to the lines that follow the nth line while this code gets executed? In normal synchronous programming models, the lines, which follow the nth line, will have to wait until the operation at the nth line completes. An asynchronous model handles this case differently. To handle this scenario in an asynchronous approach, we need to segment the code that follows the nth line into two sections. The first section is dependent on the result from the operation at the nth line and the second is independent of the result. We wrap the dependent code in a function with the result of the operation as its parameter and register it as a callback to the operation on its success. So once the operation completes, the callback function will be triggered with its result. And meanwhile, we can continue executing the result-independent lines without waiting for the result. So, in this scenario, the execution is never blocked for a process to complete. It just goes on with callback functions registered on each ones completion. Simply put, you assign a callback function to an operation, and when the Node determines that the completion event has been fired, it will execute your callback function at that moment. We can look at the following example to understand the asynchronous nature in detail: console.log('One'); console.log('Two'); setTimeout(function() { console.log('Three'); }, 2000); console.log('Four'); console.log('Five'); In a typical synchronous programming language, executing the preceding code will yield the following output: One Two ... (2 second delay) ... Three Four Five However, in an asynchronous approach, the following output is seen: One Two Four Five ... (approx. 2 second delay) ... Three The function that actually logs three is known as a callback to the setTimeout function. 
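The setTimeout example above is artificial; the same callback pattern applies to real nonblocking operations such as reading a file. The following is a small illustrative sketch using Node's built-in fs module (the file name is just a placeholder):

var fs = require('fs');

console.log('Start reading the file...');

// the callback runs only when the read completes; execution is not blocked in the meantime
fs.readFile('notes.txt', 'utf8', function(err, data) {
    if (err) {
        return console.error('Could not read the file:', err);
    }
    console.log('File contents:', data);
});

console.log('This line runs before the file contents are printed.');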
If you are still interested in learning more about asynchronous models and the callback concept in JavaScript, Mozilla Developer Network (MDN) has many articles, which explain these concepts in detail. Node Package Manager Writing applications with Node is really enjoyable when you realize the sheer wealth of information and tools at your disposal! Using Node's built-in Package Manager (npm), you can literally find tens of thousands of modules that can be installed and used within your application with just a few keystrokes! One of the reasons for the biggest success of Node.js is npm, which is one of the best package managers out there with a very minute learning curve. If this is the first ever package manager that you are being exposed to, then you should consider yourself lucky! On a regular monthly basis, npm handles more than a billion downloads and it has around 1,50,000 packages currently available for you to download. You can view the library of available modules by visiting www.npmjs.com. Downloading and installing any module within your application is as simple as executing the npm install package command. Have you written a module that you want to share with the world? You can package it using npm, and upload it to the public www.npmjs.org registry just as easily! If you are not sure how a module you installed works, the source code is right there in your projects' node_modules folder waiting to be explored! Sharing and reusing JavaScript While you develop web applications, you will always end up doing the validations for your UI both as client and server as the client side validations are required for a better UI experience and server side validations for better security of app. Think about two different languages in action, you will have the same logic implemented in both server and client side. With Node.js, you can think of sharing the common function between server and client reducing the code duplication to a bigger extent. Ever worked on optimizing the load time for client side components of your Single Page Application (SPA) loaded from template engines like underscore? That would end up in you thinking about a way we could share the rendering of templates in both server and client at the same time—some call it hybrid templating. Node.js resolves the context of duplication of client templates better than any other server side technologies just because we can use the same JS templating framework and the templates both at server and client. If you are taking this point lightly, the problem it resolves is not just the issue of reusing validations or templates on server and client. Think about a single page application being built, you will need to implement the subsets of server-side models in the client-side MV* framework also. Now think about the templates, models, and controller subsets being shared on both client and server. We are solving a higher scenario of code redundancy. Isn't it? Not just for building web servers! Node.js is not just to write JavaScript in server side. Yes, we have discussed this point earlier. Node.js sets up the environment for the JavaScript code to work anywhere it can be installed. It can be a powerful solution to create command-line tools as well as full-featured locally run applications that have nothing to do with the Web or a browser. 
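To illustrate the point about command-line tools, here is a minimal sketch of a Node script that runs entirely outside any web server; the file name greet.js and its behavior are assumptions made purely for illustration:

// greet.js - run with: node greet.js Alice
var name = process.argv[2] || 'world';
console.log('Hello, ' + name + '!');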
Grunt.js is a great example of a Node-powered command-line tool that many web developers use daily to automate everyday tasks such as build processes, compiling Coffee Script, launching Node servers, running tests, and more. In addition to command-line tools, Node is increasingly popular among the hardware crowd with the Node bots movement. Johnny-Five and Cylon.js are two popular Node libraries that exist to provide a framework to work with robotics. Search YouTube for Node robots and you will see a lot of examples. Also, there is a chance that you might be using a text editor developed on Node.js. Github's open source editor named Atom is one such kind, which is hugely popular. Real-time web with Socket.io One of the important reasons behind the origin of Node.js was to support real time web applications. Node.js has a couple of frameworks built for real-time web applications, which are hugely popular namely socket.io and sock.js. These frameworks make it quite simple to build instant collaboration based applications such as Google Drive and Mozilla's together.js. Before the introduction of WebSockets in the modern browsers, this was achieved via long polling, which was not a great solution for real-time experience. While WebSockets is a feature that is only supported in modern browsers, Socket.io acts as a framework, which also features seamless fallback implementations for legacy browsers. If you need to understand more on the use of web sockets in applictions, here is a good resource on MDN that you can explore: https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_client_applications Networking and file IO In addition to the powerful nonblocking asynchronous nature of Node, it also has very robust networking and filesystem tools available via its core modules. With Node's networking modules, you can create server and client applications that accept network connections and communicate via streams and pipes. The origin of io.js io.js is nothing but a fork of Node.js that was created to stay updated with the latest development on both V8 and other developments in JS community. Joyent was taking care of the releases in Node.js and the process, which was followed in taking care of the release management of Node.js, lacked an open governance model. It leads to scenarios where the newer developments in V8 as well as the JS community were not incorporated into its releases. For example, if you want to write JavaScript using the latest EcmaScript6 (ES6) features, you will have to run it in the harmony mode. Joyent is surely not to be blamed on this as they were more concerned about stability of Node.js releases than frequent updates in the stack. This led to the io.js fork, which is kept up to date with the latest JavaScript and V8 updates. So it's better to keep your eyes on the releases on both Node and io.js to keep updated with the Node.js world. Summary We discussed the amazing current state of JavaScript and how it can be used to power the full stack of a web application. Not that you needed any convincing in the first place, but I hope you're excited and ready to get started writing web applications using Node.js and MongoDB! Resources for Article: Further resources on this subject: Introduction and Composition [article] Deployment and Maintenance [article] Node.js Fundamentals [article]

Smart Features to Improve Your Efficiency
Packt
15 Sep 2015
11 min read
In this article by Denis Patin and Stefan Rosca, authors of the book WebStorm Essentials, we are going to deal with a number of really smart features that will enable you to fundamentally change your approach to web development and learn how to gain maximum benefit from WebStorm. We are going to study the following in this article:

On-the-fly code analysis
Smart code features
Multiselect feature
Refactoring facility

(For more resources related to this topic, see here.)

On-the-fly code analysis

WebStorm will perform static code analysis on your code on the fly. The editor will check the code based on the language used and the rules you specify, and highlight warnings and errors as you type. This is a very powerful feature: it means you don't need an external linter, and it will catch most errors quickly, thus making a dynamic and complex language like JavaScript more predictable and easy to use. Runtime errors and other errors, such as syntax or performance issues, are two different things. To investigate the first kind, you need tests or a debugger, and it is obvious that they have almost nothing in common with the IDE itself (although, when these facilities are integrated into the IDE, such a synergy is better, but that is not the point here). You could also examine the second kind of errors the same way, but is it convenient? Just imagine that you need to run tests after writing each new line of code. That is a no-go! Won't it be more efficient and helpful to use something that keeps an eye on and analyzes each word being typed, in order to notify you about probable performance issues and bugs, code style and workflow issues, and various validation issues, and to warn of dead code and other likely execution issues before executing the code, to say nothing of reporting inadvertent misprints? WebStorm is the best fit for this. It performs a deep-level analysis of each line, each word in the code. Moreover, you needn't break off your development process when WebStorm scans your code; it is performed on the fly, hence the name:

WebStorm also enables you to get a full inspection report on demand. To get it, go to the menu: Code | Inspect Code. It pops up the Specify Inspection Scope dialog where you can define what exactly you would like to inspect; then click OK. Depending on what is selected and of what size, you need to wait a little for the process to finish, and you will see the detailed results where the Terminal window is located:

You can expand all the items, if needed. To the right of this inspection result list you can see an explanation window. To jump to the erroneous code lines, you can simply click on the necessary item, and you will flip into the corresponding line. Besides simply indicating where an issue is located, WebStorm also unequivocally suggests ways to eliminate the issue. And you needn't even make the changes yourself: WebStorm already has quick solutions, which you just need to click on, and they will be instantly inserted into the code:

Smart code features

Being an Integrated Development Environment (IDE) that aims to be intelligent, WebStorm provides a really powerful pack of features with which you can strongly improve your efficiency and save a lot of time. One of the most useful and popular features is code completion. WebStorm continually analyzes and processes the code of the whole project and smartly suggests the pieces of code appropriate in the current context; even more, alongside the method names you can find the usage of these methods.
Of course, code completion itself is not a fresh innovation; however WebStorm performs it in a much smarter way than other IDEs do. WebStorm can auto-complete a lot things: Class and function names, keywords and parameters, types and properties, punctuation, and even file paths. By default, the code completion facility is on. To invoke it, simply start typing some code. For example, in the following image you can see how WebStorm suggests object methods: You can navigate through the list of suggestions using your mouse or the Up and Down arrow keys. However, the list can be very long, which makes it not very convenient to browse. To reduce it and retain only the things appropriate in the current context, keep on typing the next letters. Besides typing only initial consecutive letter of the method, you can either type something from the middle of the method name, or even use the CamelCase style, which is usually the quickest way of typing really long method names: It may turn out for some reason that the code completion isn't working automatically. To manually invoke it, press Control + Space on Mac or Ctrl + Space on Windows. To insert the suggested method, press Enter; to replace the string next to the current cursor position with the suggested method, press Tab. If you want the facility to also arrange correct syntactic surroundings for the method, press Shift + ⌘ + Enter on Mac or Ctrl + Shift + Enter on Windows, and missing brackets or/and new lines will be inserted, up to the styling standards of the current language of the code. Multiselect feature With the multiple selection (or simply multiselect) feature, you can place the cursor in several locations simultaneously, and when you will type the code it will be applied at all these positions. For example, you need to add different background colors for each table cell, and then make them of twenty-pixel width. In this case, what you need to not perform these identical tasks repeatedly and save a lot of time, is to place the cursor after the <td> tag, press Alt, and put the cursor in each <td> tag, which you are going to apply styling to: Now you can start typing the necessary attribute—it is bgcolor. Note that WebStorm performs smart code completion here too, independently of you typing something on a single line or not. You get empty values for bgcolor attributes, and you fill them out individually a bit later. You need also to change the width so you can continue typing. As cell widths are arranged to be fixed-sized, simply add the value for width attributes as well. What you get in the following image: Moreover, the multiselect feature can select identical values or just words independently, that is, you needn't place the cursor in multiple locations. Let us watch this feature by another example. Say, you changed your mind and decided to colorize not backgrounds but borders of several consecutive cells. You may instantly think of using a simple replace feature but you needn't replace all attribute occurrences, only several consecutive ones. For doing this, you can place the cursor on the first attribute, which you are going to perform changes from, and click Ctrl + G on Mac or Alt + J on Windows as many times as you need. One by one the same attributes will be selected, and you can replace the bgcolor attribute for the bordercolor one: You can also select all occurrences of any word by clicking Ctrl + command + G on Mac or Ctrl + Alt + Shift + J. 
To get out of the multiselect mode you have to click in a different position or use the Esc key.

Refactoring facility

Throughout the development process, it is almost unavoidable that you will have to use refactoring. Also, the bigger the code base you have, the more difficult it becomes to control the code, and when you need to refactor some code, you will most likely be up against issues such as missed renames or function usages that were not taken into consideration. You learned that WebStorm performs a thorough code analysis, so it understands what is connected with what, and if some changes occur it collates them and decides what is acceptable and what is not to perform in the rest of the code. Let us try a simple example. In a big HTML file you have the following line:

<input id="search" type="search" placeholder="search" />

And in a big JavaScript file you have another one:

var search = document.getElementById('search');

You decided to rename the id attribute's value of the input element to search_field because it is less confusing. You could simply rename it here, but after that you would have to manually find all the occurrences of the word search in the code. It is evident that the word is rather frequent, so you would spend a lot of time deciding whether each usage is appropriate in the current context or not. And there is a high probability that you will forget something important, and even more time will be spent on investigating the resulting issue. Instead, you can entrust WebStorm with this task. Select the code unit to refactor (in our case, it is the search value of the id attribute), and click Shift + T on Mac or Ctrl + Alt + Shift + T on Windows (or simply click the Refactor menu item) to call the Refactor This dialog. There, choose the Rename… item and enter the new name for the selected code unit (search_field in our case). To get only a preview of what will happen during the refactoring process, click the Preview button, and all the changes to apply will be displayed at the bottom. You can walk through the hierarchical tree and either apply the change by clicking the Do Refactor button, or not. If you do not need a preview, you can simply click the Refactor button. What you will see is that the id attribute got the search_field value, not the type or placeholder values, even though they have the same value, and in the JavaScript file you got getElementById('search_field'). Note that even though WebStorm can perform various smart tasks, it still remains a program, and some issues can occur due to the imperfection of so-called artificial intelligence, so you should always be careful when performing refactoring. In particular, manually check the var declarations, because WebStorm can sometimes apply the changes to them as well, although it is not always necessary because of the scope. Of course, this is just a small part of what you can do with refactoring. The elements in the preceding screenshot, which show the basic things the refactoring facility allows you to do, are explained as follows:

Rename…: You have already become familiar with this refactoring. Once again, with it you can rename code units, and WebStorm will automatically fix all references to them in the code. The shortcut is Shift + F6.
Change Signature…: This feature is used basically for changing function names, and adding/removing, reordering, or renaming function parameters, that is, changing the function signature. The shortcut is ⌘ + F6 for Mac and Ctrl + F6 for Windows.
Move…: This feature enables you to move files or directories within a project, and it simultaneously repairs all references to these project elements in the code so you needn't manually repair them. The shortcut is F6. Copy…: With this feature, you can copy a file or directory or even a class, with its structure, from one place to another. The shortcut is F5. Safe Delete…: This feature is really helpful. It allows you to safely delete any code or entire files from the project. When performing this refactoring, you will be asked about whether it is needed to inspect comments and strings or all text files for the occurrence of the required piece of code or not. The shortcut is ⌘ + delete for Mac and Alt + Delete for Windows. Variable…: This refactoring feature declares a new variable whereto the result of the selected statement or expression is put. It can be useful when you realize there are too many occurrences of a certain expression so it can be turned into a variable, and the expression can just initialize it. The shortcut is Alt +⌘ + V for Mac and Ctrl + Alt + V for Windows. Parameter…: When you need to add a new parameter to some method and appropriately update its calls, use this feature. The shortcut is Alt + ⌘ + P for Mac and Ctrl + Alt + P for Windows. Method…: During this refactoring, the code block you selected undergoes analysis, through which the input and output variables get detected, and the extracted function receives the output variable as a return value. The shortcut is Alt + ⌘ + M for Mac and Ctrl + Alt + M for Windows. Inline…: The inline refactoring is working contrariwise to the extract method refactoring—it replaces surplus variables with their initializers making the code more compact and concise. The shortcut is Alt + ⌘ + N for Mac and Ctrl + Alt + N for Windows. Summary In this article, you have learned about the most distinctive features of WebStorm, which are the core constituents of improving your efficiency in building web applications. Resources for Article: Further resources on this subject: Introduction to Spring Web Application in No Time [article] Applications of WebRTC [article] Creating Java EE Applications [article]

Deploy Node.js Apps with AWS Code Deploy
Ankit Patial
14 Sep 2015
7 min read
As an application developer, you must be familiar with the complexity of deploying apps to a fleet of servers with minimum downtime. AWS introduced a new service called AWS Code Deploy to ease the deployment of applications to EC2 instances on the AWS cloud. Before explaining the full process, I will assume that you are using AWS VPC, that all of your EC2 instances are inside the VPC, and that each instance has an IAM role. Let's see how we can deploy a Node.js application to AWS.

Install AWS Code Deploy Agent

The first thing you need to do is to install aws-codedeploy-agent on each machine that you want your code deployed on. Before installing the agent, please make sure that you have a trust relationship for codedeploy.us-west-2.amazonaws.com and codedeploy.us-east-1.amazonaws.com added in the IAM role that the EC2 instance is using. Not sure what that is? Then click on the top-left dropdown with your account name in the AWS console and select the Security Credentials option; you will be redirected to a new page. Select Roles from the left menu, look for the IAM role that the EC2 instance is using, click it, and scroll to the bottom; you will see the Edit Trust Relationship button. Click this button to edit the trust relationship, and make sure it looks like the following.

...
"Principal": {
    "Service": [
        "ec2.amazonaws.com",
        "codedeploy.us-west-2.amazonaws.com",
        "codedeploy.us-east-1.amazonaws.com"
    ]
}
...

Ok, we are good to install the AWS Code Deploy Agent, so make sure ruby2.0 is installed. Use the following script to install the code deploy agent.

aws s3 cp s3://aws-codedeploy-us-east-1/latest/install ./install-aws-codedeploy-agent --region us-east-1
chmod +x ./install-aws-codedeploy-agent
sudo ./install-aws-codedeploy-agent auto
rm install-aws-codedeploy-agent

Ok, hopefully nothing will go wrong and the agent will be installed and running. To check whether it's running or not, try the following command:

sudo service codedeploy-agent status

Let's move to the next step.

Create Code Deploy Application

Log in to your AWS account. Under Deployment & Management, click on the Code Deploy link, on the next screen click on the Get Started Now button, and complete the following things:

Choose Custom Deployment and click the Skip Walkthrough button.
Create New Application; the following are the steps to create an application.
– Application Name: the display name for the application you want to deploy.
– Deployment Group Name: this is something similar to environments like LIVE, STAGING, and QA.
– Add Instances: you can choose Amazon EC2 instances by name, group name, and so on. In case you are using the autoscaling feature, you can add that auto scaling group too.
– Deployment Config: it's a way to specify how we want to deploy the application, whether we want to deploy one server at a time, half of the servers at a time, or all at once.
– Service Role: choose the IAM role that has access to the S3 bucket that we will use to hold code revisions.
– Hit the Create Application button.

Ok, we just created a Code Deploy application. Let's hold it here and move to our Node.js app to get it ready for deployment.

Code Revision

Ok, you have written your app and you are ready to deploy it. The most important thing your app needs is appspec.yml. This file will be used by the code deploy agent to perform various steps during the deployment life cycle. In simple words, the deployment process includes the following steps:

Stop the previous application if it is already deployed; if it's the first time, this step will not exist.
Update the latest code, such as copying files to the application directory.
Install new packages or run DB migrations.
Start the application.
Check if the application is working.
Roll back if something went wrong.

All the above steps seem easy, but they are time consuming and painful to perform each time. Let's see how we can perform these steps easily with AWS Code Deploy. Let's say we have the following appspec.yml file in our code and also a bin folder in the app that contains executable sh scripts to perform certain things that I will explain next. First of all, take an example of appspec.yml:

version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/my-app
permissions:
  - object: /
    pattern: "**"
    owner: ec2-user
    group: ec2-user
hooks:
  ApplicationStop:
    - location: bin/app-stop
      timeout: 10
      runas: ec2-user
  AfterInstall:
    - location: bin/install-pkgs
      timeout: 1200
      runas: ec2-user
  ApplicationStart:
    - location: bin/app-start
      timeout: 60
      runas: ec2-user
  ValidateService:
    - location: bin/app-validate
      timeout: 10
      runas: ec2-user

The files section is a way to tell Code Deploy to copy those files and provides a destination for them:

files:
  - source: /
    destination: /home/ec2-user/my-app

We can specify the permissions to be set for the source files on copy:

permissions:
  - object: /
    pattern: "**"
    owner: ec2-user
    group: ec2-user

Hooks are executed in order during the Code Deploy life cycle. We have ApplicationStop, DownloadBundle, BeforeInstall, Install, AfterInstall, ApplicationStart, and ValidateService hooks, which all have the same syntax:

hooks:
  deployment-lifecycle-event-name:
    - location: script-location
      timeout: timeout-in-seconds
      runas: user-name

location is the relative path from the code root to the script file that you want to execute. timeout is the maximum time a script can run. runas is the OS user to run the script as; sometimes you may want to run a script with different user privileges.

Let's bundle your app, exclude the unwanted files such as the node_modules folder, and zip it. I use the AWS CLI to deploy my code revisions; you can install awscli with pip from PyPI (the Python Package Index):

sudo pip install awscli

I am using an awscli profile that has access to the S3 code revision bucket in my account. Here is a code sample that can help:

aws --profile simsaw-baas deploy push --no-ignore-hidden-files --application-name MY_CODE_DEPLOY_APP_NAME --s3-location s3://MY_CODE_REVISONS/MY_CODE_DEPLOY_APP_NAME/APP_BUILD --source MY_APP_BUILD.zip

Now the code revision is published to S3, and the same revision is registered with the Code Deploy application with the name MY_CODE_DEPLOY_APP_NAME (it will be the name of the application you created earlier in the second step).

Now go back to the AWS console, Code Deploy.

Deploy Code Revision

Select your Code Deploy application from the application list shown on the Code Deploy Dashboard. It will take you to the next window where you can see the published revision(s); expand the revision and click on Deploy This Revision. You will be redirected to a new window with options like application and deployment group. Choose them carefully and hit deploy. Wait for the magic to happen. Code Deploy has another option to deploy your app from GitHub. The process for it will be almost the same, except you need not push code revisions to S3.

About the author

Ankit Patial has a Masters in Computer Applications, and nine years of experience with custom APIs, web and desktop applications using .NET technologies, ROR and NodeJs.
As a CTO at SimSaw Inc and Pink Hand Technologies, his job is to learn and to help his team implement the best practices of cloud computing and JavaScript technologies.

Introduction to Spring Web Application in No Time

Packt
10 Sep 2015
8 min read
Many official Spring tutorials have both a Gradle build and a Maven build, so you will find examples easily if you decide to stick with Maven. Spring 4 is fully compatible with Java 8, so it would be a shame not to take advantage of lambdas to simplify our code base. In this article by Geoffroy Warin, author of the book Mastering Spring MVC 4, we will also see some Git commands. It's a good idea to keep track of your progress and commit whenever you are in a stable state. (For more resources related to this topic, see here.)

Getting started with Spring Tool Suite

One of the best ways to get started with Spring and discover the numerous tutorials and starter projects that the Spring community offers is to download Spring Tool Suite (STS). STS is a custom version of Eclipse designed to work with various Spring projects, as well as Groovy and Gradle. Even if, like me, you have another IDE that you would rather work with, we recommend that you give STS a shot because it gives you the opportunity to explore Spring's vast ecosystem in a matter of minutes with the "Getting Started" projects. So, let's visit https://spring.io/tools/sts/all and download the latest release of STS. Before we generate our first Spring Boot project, we will need to install the Gradle support for STS. You can find a Manage IDE Extensions button on the dashboard. You will then need to download the Gradle Support software in the Language and framework tooling section. We recommend installing the Groovy Eclipse plugin along with the Groovy 2.4 compiler, as shown in the following screenshot. These will be needed later in this article when we set up acceptance tests with Geb:

We now have two main options to get started. The first option is to navigate to File | New | Spring Starter Project, as shown in the following screenshot. This will give you the same options as http://start.spring.io, embedded in your IDE:

The second way is to navigate to File | New | Import Getting Started Content. This will give you access to all the tutorials available on Spring.io. You will have the choice of working with either Gradle or Maven, as shown in the following screenshot:

You can also check out the starter code to follow along with the tutorial, or get the complete code directly. There is a lot of very interesting content available in the Getting Started Content. It demonstrates the integration of Spring with various technologies that you might be interested in. For the moment, we will generate a web project as shown in the preceding image. It will be a Gradle application, producing a JAR file and using Java 8. Here is the configuration we want to use:

Property        Value
Name            masterSpringMvc
Type            Gradle project
Packaging       Jar
Java version    1.8
Language        Java
Group           masterSpringMvc
Artifact        masterSpringMvc
Version         0.0.1-SNAPSHOT
Description     Be creative!
Package         masterSpringMvc

On the second screen, you will be asked for the Spring Boot version you want to use and the dependencies that should be added to the project. At the time of writing this, the latest version of Spring Boot was 1.2.5. Ensure that you always check out the latest release. The latest snapshot version of Spring Boot will also be available by the time you read this. If Spring Boot 1.3 isn't released by then, you can probably give it a shot. One of its big features is the awesome dev tools. Refer to https://spring.io/blog/2015/06/17/devtools-in-spring-boot-1-3 for more details.
At the bottom of the configuration window, you will see a number of checkboxes representing the various Boot starter libraries. These are dependencies that can be appended to your build file. They provide autoconfiguration for various Spring projects. We are only interested in Spring MVC for the moment, so we will check only the Web checkbox.

A JAR for a web application? Some of you might find it odd to package your web application as a JAR file. While it is still possible to use WAR files for packaging, it is not always the recommended practice. By default, Spring Boot will create a fat JAR, which will include all the application's dependencies and provide a convenient way to start a web server using java -jar. Our application will be packaged as a JAR file. If you want to create a WAR file, refer to http://spring.io/guides/gs/convert-jar-to-war/.

Have you clicked on Finish yet? If you have, you should get the following project structure:

We can see our main class, MasterSpringMvcApplication, and its test suite, MasterSpringMvcApplicationTests. There are also two empty folders, static and templates, where we will put our static web assets (images, styles, and so on) and, obviously, our templates (JSP, FreeMarker, Thymeleaf). Next comes an empty application.properties file, which is the default Spring Boot configuration file. It's a very handy file, and we'll see how Spring Boot uses it throughout this article. The last file is build.gradle, the build file that we will detail in a moment. If you feel ready to go, run the main method of the application. This will launch a web server for us. To do this, go to the main method of the application and navigate to Run as | Spring Application, either by right-clicking on the class or by clicking on the green play button in the toolbar. Doing so and navigating to http://localhost:8080 will produce an error. Don't worry, and read on. Now we will show you how to generate the same project without STS, and we will come back to all these files.

Getting started with IntelliJ

IntelliJ IDEA is a very popular tool among Java developers. For the past few years, I've been very pleased to pay JetBrains a yearly fee for this awesome editor. IntelliJ also has a way of creating Spring Boot projects very quickly. Go to the new project menu and select the Spring Initializr project type:

This will give us exactly the same options as STS. You will need to import the Gradle project into IntelliJ. We recommend generating the Gradle wrapper first (refer to the following Gradle build section). If needed, you can reimport the project by opening its build.gradle file again.

Getting started with start.spring.io

Go to http://start.spring.io to get started with start.spring.io. The system behind this remarkable Bootstrap-like website should be familiar to you! You will see the following screen when you go to the previously mentioned link:

Indeed, the same options available with STS can be found here. Clicking on Generate Project will download a ZIP file containing our starter project.

Getting started with the command line

For those of you who are addicted to the console, it is possible to curl http://start.spring.io. Doing so will display instructions on how to structure your curl request.
For instance, to generate the same project as earlier, you can issue the following command:

$ curl http://start.spring.io/starter.tgz -d name=masterSpringMvc -d dependencies=web -d language=java -d javaVersion=1.8 -d type=gradle-project -d packageName=masterSpringMvc -d packaging=jar -d baseDir=app | tar -xzvf -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1255  100  1119  100   136   1014    123  0:00:01  0:00:01 --:--:--  1015
x app/
x app/src/
x app/src/main/
x app/src/main/java/
x app/src/main/java/com/
x app/src/main/java/com/geowarin/
x app/src/main/resources/
x app/src/main/resources/static/
x app/src/main/resources/templates/
x app/src/test/
x app/src/test/java/
x app/src/test/java/com/
x app/src/test/java/com/geowarin/
x app/build.gradle
x app/src/main/java/com/geowarin/AppApplication.java
x app/src/main/resources/application.properties
x app/src/test/java/com/geowarin/AppApplicationTests.java

And voilà! You are now ready to get started with Spring without leaving the console, a dream come true. You might consider creating an alias for the previous command; it will help you prototype Spring applications very quickly.

Summary

In this article, we leveraged Spring Boot's autoconfiguration capabilities to build an application with zero boilerplate or configuration files. We also saw how to generate the same starter project with Spring Tool Suite, IntelliJ, start.spring.io, and the command line.

Resources for Article:

Further resources on this subject:

Welcome to the Spring Framework [article]
Mailing with Spring Mail [article]
Creating a Spring Application [article]

Testing Exceptional Flow

Packt
03 Sep 2015
22 min read
 In this article by Frank Appel, author of the book Testing with JUnit, we will learn that special care has to be taken when testing a component's functionality under exception-raising conditions. You'll also learn how to use the various capture and verification possibilities and discuss their pros and cons. As robust software design is one of the declared goals of the test-first approach, we're going to see how tests intertwine with the fail fast strategy on selected boundary conditions. Finally, we're going to conclude with an in-depth explanation of working with collaborators under exceptional flow and see how stubbing of exceptional behavior can be achieved. The topics covered in this article are as follows: Testing patterns Treating collaborators (For more resources related to this topic, see here.) Testing patterns Testing exceptional flow is a bit trickier than verifying the outcome of normal execution paths. The following section will explain why and introduce the different techniques available to get this job done. Using the fail statement "Always expect the unexpected"                                                                                  – Adage based on Heraclitus Testing corner cases often results in the necessity to verify that a functionality throws a particular exception. Think, for example, of a java.util.List implementation. It quits the retrieval attempt of a list's element by means of a non-existing index number with java.lang.ArrayIndexOutOfBoundsException. Working with exceptional flow is somewhat special as without any precautions, the exercise phase would terminate immediately. But this is not what we want since it eventuates in a test failure. Indeed, the exception itself is the expected outcome of the behavior we want to check. From this, it follows that we have to capture the exception before we can verify anything. As we all know, we do this in Java with a try-catch construct. The try block contains the actual invocation of the functionality we are about to test. The catch block again allows us to get a grip on the expected outcome—the exception thrown during the exercise. Note that we usually keep our hands off Error, so we confine the angle of view in this article to exceptions. So far so good, but we have to bring up to our minds that in case no exception is thrown, this has to be classified as misbehavior. Consequently, the test has to fail. JUnit's built-in assertion capabilities provide the org.junit.Assert.fail method, which can be used to achieve this. The method unconditionally throws an instance of java.lang.AssertionError if called. The classical approach of testing exceptional flow with JUnit adds a fail statement straight after the functionality invocation within the try block. The idea behind is that this statement should never be reached if the SUT behaves correctly. But if not, the assertion error marks the test as failed. It is self-evident that capturing should narrow down the expected exception as much as possible. Do not catch IOException if you expect FileNotFoundException, for example. Unintentionally thrown exceptions must pass the catch block unaffected, lead to a test failure and, therefore, give you a good hint for troubleshooting with their stack trace. We insinuated that the fetch-count range check of our timeline example would probably be better off throwing IllegalArgumentException on boundary violations. 
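Before we adapt the timeline test, here is the bare skeleton of this try-fail-catch pattern; the component, method, and message are placeholders, so you can recognize the same shape in the concrete listing that follows (the snippet assumes the usual static imports of org.junit.Assert.fail and assertEquals):

@Test
public void operationWithInvalidArgumentThrowsException() {
  SomeComponent sut = new SomeComponent(); // placeholder for the unit under test
  try {
    sut.operation( "invalid argument" ); // exercise: this call is expected to throw
    fail(); // reached only if no exception was thrown
  } catch( IllegalArgumentException actual ) {
    // verify the state of the caught exception
    assertEquals( "expected message", actual.getMessage() );
  }
}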
Let's have a look at how we can change the setFetchCountExceedsLowerBound test to verify different behaviors with the try-catch exception testing pattern (see the following listing): @Test public void setFetchCountExceedsLowerBound() { int tooSmall = Timeline.FETCH_COUNT_LOWER_BOUND - 1; try { timeline.setFetchCount( tooSmall ); fail(); } catch( IllegalArgumentException actual ) { String message = actual.getMessage(); String expected = format( Timeline.ERROR_EXCEEDS_LOWER_BOUND, tooSmall ); assertEquals( expected, message ); assertTrue( message.contains( valueOf( tooSmall ) ) ); } } It can be clearly seen how setFetchCount, the functionality under test, is called within the try block, directly followed by a fail statement. The caught exception is narrowed down to the expected type. The test avails of the inline fixture setup to initialize the exceeds-lower-bound value in the tooSmall local variable because it is used more than once. The verification checks that the thrown message matches an expected one. Our test calculates the expectation with the aid of java.lang.String.format (static import) based on the same pattern, which is also used internally by the timeline to produce the text. Once again, we loosen encapsulation a bit to ensure that the malicious value gets mentioned correctly. Purists may prefer only the String.contains variant, which, on the other hand would be less accurate. Although this works fine, it looks pretty ugly and is not very readable. Besides, it blurs a bit the separation of the exercise and verification phases, and so it is no wonder that there have been other techniques invented for exception testing. Annotated expectations After the arrival of annotations in the Java language, JUnit got a thorough overhauling. We already mentioned the @Test type used to mark a particular method as an executable test. To simplify exception testing, it has been given the expected attribute. This defines that the anticipated outcome of a unit test should be an exception and it accepts a subclass of Throwable to specify its type. Running a test of this kind captures exceptions automatically and checks whether the caught type matches the specified one. The following snippet shows how this can be used to validate that our timeline constructor doesn't accept null as the injection parameter: @Test( expected = IllegalArgumentException.class ) public void constructWithNullAsItemProvider() { new Timeline( null, mock( SessionStorage.class ) ); } Here, we've got a test, the body statements of which merge setup and exercise in one line for compactness. Although the verification result is specified ahead of the method's signature definition, of course, it gets evaluated at last. This means that the runtime test structure isn't twisted. But it is a bit of a downside from the readability point of view as it breaks the usual test format. However, the approach bears a real risk when using it in more complex scenarios. The next listing shows an alternative of setFetchCountExceedsLowerBound using the expected attribute: @Test( expected = IllegalArgumentException.class ) public void setFetchCountExceedsLowerBound() { Timeline timeline = new Timeline( null, null ); timeline.setFetchCount( Timeline.FETCH_COUNT_LOWER_BOUND - 1 ); } On the face of it, this might look fine because the test run would succeed apparently with a green bar. But given that the timeline constructor already throws IllegalArgumentException due to the initialization with null, the virtual point of interest is never reached. 
So any setFetchCount implementation will pass this test. This renders it not only useless, but it even lulls you into a false sense of security! Certainly, the approach is most hazardous when checking for runtime exceptions because they can be thrown undeclared. Thus, they can emerge practically everywhere and overshadow the original test intention unnoticed. Not being able to validate the state of the thrown exception narrows down the reasonable operational area of this concept to simple use cases, such as the constructor parameter verification mentioned previously. Finally, here are two more remarks on the initial example. First, it might be debatable whether IllegalArgumentException is appropriate for an argument-not-null-check from a design point of view. But as this discussion is as old as the hills and probably will never be settled, we won't argue about that. IllegalArgumentException was favored over NullPointerException basically because it seemed to be an evident way to build up a comprehensible example. To specify a different behavior of the tested use case, one simply has to define another Throwable type as the expected value. Second, as a side effect, the test shows how a generated test double can make our life much easier. You've probably already noticed that the session storage stand-in created on the fly serves as a dummy. This is quite nice as we don't have to implement one manually and as it decouples the test from storage-related signatures, which may break the test in future when changing. But keep in mind that such a created-on-the-fly dummy lacks the implicit no-operation-check. Hence, this approach might be too fragile under some circumstances. With annotations being too brittle for most usage scenarios and the try-fail-catch pattern being too crabbed, JUnit provides a special test helper called ExpectedException, which we'll take a look at now. Verification with the ExpectedException rule The third possibility offered to verify exceptions is the ExpectedException class. This type belongs to a special category of test utilities. For the moment, it is sufficient to know that rules allow us to embed a test method into custom pre- and post-operations at runtime. In doing so, the expected exception helper can catch the thrown instance and perform the appropriate verifications. A rule has to be defined as a nonstatic public field, annotated with @Rule, as shown in the following TimelineTest excerpt. See how the rule object gets set up implicitly here with a factory method: public class TimelineTest { @Rule public ExpectedException thrown = ExpectedException.none(); [...] @Test public void setFetchCountExceedsUpperBound() { int tooLargeValue = FETCH_COUNT_UPPER_BOUND + 1; thrown.expect( IllegalArgumentException.class ); thrown.expectMessage( valueOf( tooLargeValue ) ); timeline.setFetchCount( tooLargeValue ); } [...] } Compared to the try-fail-catch approach, the code is easier to read and write. The helper instance supports several methods to specify the anticipated outcome. Apart from the static imports of constants used for compactness, this specification reproduces pretty much the same validations as the original test. ExpectedException#expectedMessage expects a substring of the actual message in case you wonder, and we omitted the exact formatting here for brevity. In case the exercise phase of setFetchCountExceedsUpperBound does not throw an exception, the rule ensures that the test fails. 
In this context, it is about time we mentioned the utility's factory method none. Its name indicates that as long as no expectations are configured, the helper assumes that a test run should terminate normally. This means that no artificial fail has to be issued. This way, a mix of standard and exceptional flow tests can coexist in one and the same test case. Even so, the test helper has to be configured prior to the exercise phase, which still leaves room for improvement with respect to canonizing the test structure. As we'll see next, the possibility of Java 8 to compact closures into lambda expressions enables us to write even leaner and cleaner structured exceptional flow tests. Capturing exceptions with closures When writing tests, we strive to end up with a clear representation of separated test phases in the correct order. All of the previous approaches for testing exceptional flow did more or less a poor job in this regard. Looking once more at the classical try-fail-catch pattern, we recognize, however, that it comes closest. It strikes us that if we put some work into it, we can extract exception capturing into a reusable utility method. This method would accept a functional interface—the representation of the exception-throwing functionality under test—and return the caught exception. The ThrowableCaptor test helper puts the idea into practice: public class ThrowableCaptor { @FunctionalInterface public interface Actor { void act() throws Throwable; } public static Throwable thrownBy( Actor actor ) { try { actor.act(); } catch( Throwable throwable ) { return throwable; } return null; } } We see the Actor interface that serves as a functional callback. It gets executed within a try block of the thrownBy method. If an exception is thrown, which should be the normal path of execution, it gets caught and returned as the result. Bear in mind that we have omitted the fail statement of the original try-fail-catch pattern. We consider the capturer as a helper for the exercise phase. Thus, we merely return null if no exception is thrown and leave it to the afterworld to deal correctly with the situation. How capturing using this helper in combination with a lambda expression works is shown by the next variant of setFetchCountExceedsUpperBound, and this time, we've achieved the clear phase separation we're in search of: @Test public void setFetchCountExceedsUpperBound() { int tooLarge = FETCH_COUNT_UPPER_BOUND + 1; Throwable actual = thrownBy( ()-> timeline.setFetchCount( tooLarge ) ); String message = actual.getMessage(); assertNotNull( actual ); assertTrue( actual instanceof IllegalArgumentException ); assertTrue( message.contains( valueOf( tooLarge ) ) ); assertEquals( format( ERROR_EXCEEDS_UPPER_BOUND, tooLarge ), message ); } Please note that we've added an additional not-null-check compared to the verifications of the previous version. We do this as a replacement for the non-existing failure enforcement. Indeed, the following instanceof check would fail implicitly if actual was null. But this would also be misleading since it overshadows the true failure reason. Stating that actual must not be null points out clearly the expected post condition that has not been met. One of the libraries presented there will be AssertJ. The latter is mainly intended to improve validation expressions. But it also provides a test helper, which supports the closure pattern you've just learned to make use of. Another choice to avoid writing your own helper could be the library Fishbowl, [FISBOW]. 
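To give you an impression of how such a library-based helper reads, the following sketch rewrites the upper-bound check with AssertJ's assertThatThrownBy. This is not part of the original example; it assumes AssertJ is on the class path, so check the fluent API of the version you actually use:

// assumes: import static org.assertj.core.api.Assertions.assertThatThrownBy;
@Test
public void setFetchCountExceedsUpperBound() {
  int tooLarge = FETCH_COUNT_UPPER_BOUND + 1;

  assertThatThrownBy( () -> timeline.setFetchCount( tooLarge ) )
    .isInstanceOf( IllegalArgumentException.class )
    .hasMessageContaining( valueOf( tooLarge ) );
}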
Now that we understand the available testing patterns, let's discuss a few system-spanning aspects when dealing with exceptional flow in practice. Treating collaborators Considerations we've made about how a software system can be built upon collaborating components, foreshadows that we have to take good care when modelling our overall strategy for exceptional flow. Because of this, we'll start this section with an introduction of the fail fast strategy, which is a perfect match to the test-first approach. The second part of the section will show you how to deal with checked exceptions thrown by collaborators. Fail fast Until now, we've learned that exceptions can serve in corner cases as an expected outcome, which we need to verify with tests. As an example, we've changed the behavior of our timeline fetch-count setter. The new version throws IllegalArgumentException if the given parameter is out of range. While we've explained how to test this, you may have wondered whether throwing an exception is actually an improvement. On the contrary, you might think, doesn't the exception make our program more fragile as it bears the risk of an ugly error popping up or even of crashing the entire application? Aren't those things we want to prevent by all means? So, wouldn't it be better to stick with the old version and silently ignore arguments that are out of range? At first sight, this may sound reasonable, but doing so is ostrich-like head-in-the-sand behavior. According to the motto: if we can't see them, they aren't there, and so, they can't hurt us. Ignoring an input that is obviously wrong can lead to misbehavior of our software system later on. The reason for the problem is probably much harder to track down compared to an immediate failure. Generally speaking, this practice disconnects the effects of a problem from its cause. As a consequence, you often have to deal with stack traces leading to dead ends or worse. Consider, for example, that we'd initialize the timeline fetch-count as an invariant employed by a constructor argument. Moreover, the value we use would be negative and silently ignored by the component. In addition, our application would make some item position calculations based on this value. Sure enough, the calculation results would be faulty. If we're lucky, an exception would be thrown, when, for example, trying to access a particular item based on these calculations. However, the given stack trace would reveal nothing about the reason that originally led to the situation. However, if we're unlucky, the misbehavior will not be detected until the software has been released to end users. On the other hand, with the new version of setFetchCount, this kind of translocated problem can never occur. A failure trace would point directly to the initial programming mistake, hence avoiding follow-up issues. This means failing immediately and visibly increases robustness due to short feedback cycles and pertinent exceptions. Jim Shore has given this design strategy the name fail fast, [SHOR04]. Shore points out that the heart of fail fast are assertions. Similar to the JUnit assert statements, an assertion fails on a condition that isn't met. Typical assertions might be not-null-checks, in-range-checks, and so on. But how do we decide if it's necessary to fail fast? While assertions of input arguments are apparently a potential use case scenario, checking of return values or invariants may also be so. 
Sometimes, such conditions are described in code comments, such as // foo should never be null because..., which is a clear indication that suggests to replace the note with an appropriate assertion. See the next snippet demonstrating the principle: public void doWithAssert() { [...] boolean condition = ...; // check some invariant if( !condition ) { throw new IllegalStateException( "Condition not met." ) } [...] } But be careful not to overdo things because in most cases, code will fail fast by default. So, you don't have to include a not-null-check after each and every variable assignment for example. Such paranoid programming styles decrease readability for no value-add at all. A last point to consider is your overall exception-handling strategy. The intention of assertions is to reveal programming or configuration mistakes as early as possible. Because of this, we strictly make use of runtime exception types only. Catching exceptions at random somewhere up the call stack of course thwarts the whole purpose of this approach. So, beware of the absurd try-catch-log pattern that you often see scattered all over the code of scrubs, and which is demonstrated in the next listing as a deterrent only: private Data readSomeData() { try { return source.readData(); } catch( Exception hardLuck ) { // NEVER DO THIS! hardLuck.printStackTrace(); } return null; } The sample code projects exceptional flow to null return values and disguises the fact that something seriously went wrong. It surely does not get better using a logging framework or even worse, by swallowing the exception completely. Analysis of an error by means of stack trace logs is cumbersome and often fault-prone. In particular, this approach usually leads to logs jammed with ignored traces, where one more or less does not attract attention. In such an environment, it's like looking for a needle in a haystack when trying to find out why a follow-up problem occurs. Instead, use the central exception handling mechanism at reasonable boundaries. You can create a bottom level exception handler around a GUI's message loop. Ensure that background threads report problems appropriately or secure event notification mechanisms for example. Otherwise, you shouldn't bother with exception handling in your code. As outlined in the next paragraph, securing resource management with try-finally should most of the time be sufficient. The stubbing of exceptional behavior Every now and then, we come across collaborators, which declare checked exceptions in some or all of their method signatures. There is a debate going on for years now whether or not checked exceptions are evil, [HEVEEC]. However, in our daily work, we simply can't elude them as they pop up in adapters around third-party code or get burnt in legacy code we aren't able to change. So, what are the options we have in these situations? "It is funny how people think that the important thing about exceptions is handling them. That's not the important thing about exceptions. In a well-written application there's a ratio of ten to one, in my opinion, of try finally to try catch."                                                                            – Anders Hejlsberg, [HEVEEC] Cool. This means that we also declare the exception type in question on our own method signature and let someone else up on the call stack solve the tricky things, right? Although it makes life easier for us for at the moment, acting like this is probably not the brightest idea. 
If everybody follows that strategy, the higher we get on the stack, the more exception types will occur. This doesn't scale well and even worse, it exposes details from the depths of the call hierarchy. Because of this, people sometimes simplify things by declaring java.lang.Exception as thrown type. Indeed, this gets them rid of the throws declaration tail. But it's also a pauper's oath as it reduces the Java type concept to absurdity. Fair enough. So, we're presumably better off when dealing with checked exceptions as soon as they occur. But hey, wouldn't this contradict Hejlsberg's statement? And what shall we do with the gatecrasher, meaning is there always a reasonable handling approach? Fortunately there is, and it absolutely conforms with the quote and the preceding fail fast discussion. We envelope the caught checked exception into an appropriate runtime exception, which we afterwards throw instead. This way, every caller of our component's functionality can use it without worrying about exception handling. If necessary, it is sufficient to use a try-finally block to ensure the disposal or closure of open resources for example. As described previously, we leave exception handling to bottom line handlers around the message loop or the like. Now that we know what we have to do, the next question is how can we achieve this with tests? Luckily, with the knowledge about stubs, you're almost there. Normally handling a checked exception represents a boundary condition. We can regard the thrown exception as an indirect input to our SUT. All we have to do is let the stub throw an expected exception (precondition) and check if the envelope gets delivered properly (postcondition). For better understanding, let's comprehend the steps in our timeline example. We consider for this section that our SessionStorage collaborator declares IOException on its methods for any reason whatsoever. The storage interface is shown in the next listing. public interface SessionStorage { void storeTop( Item top ) throws IOException; Item readTop() throws IOException; } Next, we'll have to write a test that reflects our thoughts. At first, we create an IOException instance that will serve as an indirect input. Looking at the next snippet, you can see how we configure our storage stub to throw this instance on a call to storeTop. As the method does not return anything, the Mockito stubbing pattern looks a bit different than earlier. This time, it starts with the expectation definition. In addition, we use Mockito's any matcher, which defines the exception that should be thrown for those calls to storeTop, where the given argument is assignment-compatible with the specified type token. After this, we're ready to exercise the fetchItems method and capture the actual outcome. We expect it to be an instance of IllegalStateException just to keep things simple. See how we verify that the caught exception wraps the original cause and that the message matches a predefined constant on our component class: @Test public void fetchItemWithExceptionOnStoreTop() throws IOException { IOException cause = new IOException(); doThrow( cause ).when( storage ).storeTop( any( Item.class ) ); Throwable actual = thrownBy( () -> timeline.fetchItems() ); assertNotNull( actual ); assertTrue( actual instanceof IllegalStateException ); assertSame( cause, actual.getCause() ); assertEquals( Timeline.ERROR_STORE_TOP, actual.getMessage() ); } With the test in place, the implementation is pretty easy. 
Let's assume that we have the item storage extracted to a private timeline method named storeTopItem. It gets called somewhere down the road of fetchItem and again calls a private method, getTopItem. Fixing the compile errors, we end up with a try-catch block because we have to deal with IOException thrown by storeTop. Our first error handling should be empty to ensure that our test case actually fails. The following snippet shows the ultimate version, which will make the test finally pass: static final String ERROR_STORE_TOP = "Unable to save top item"; [...] private void storeTopItem() { try { sessionStorage.storeTop( getTopItem() ); } catch( IOException cause ) { throw new IllegalStateException( ERROR_STORE_TOP, cause ); } } Of course, real-world situations can sometimes be more challenging, for example, when the collaborator throws a mix of checked and runtime exceptions. At times, this results in tedious work. But if the same type of wrapping exception can always be used, the implementation can often be simplified. First, re-throw all runtime exceptions; second, catch exceptions by their common super type and re-throw them embedded within a wrapping runtime exception (the following listing shows the principle): private void storeTopItem() { try { sessionStorage.storeTop( getTopItem() ); } catch( RuntimeException rte ) { throw rte; } catch( Exception cause ) { throw new IllegalStateException( ERROR_STORE_TOP, cause ); } } Summary In this article, you learned how to validate the proper behavior of an SUT with respect to exceptional flow. You experienced how to apply the various capture and verification options, and we discussed their strengths and weaknesses. Supplementary to the test-first approach, you were taught the concepts of the fail fast design strategy and recognized how adapting it increases the overall robustness of applications. Last but not least, we explained how to handle collaborators that throw checked exceptions and how to stub their exceptional bearing. Resources for Article: Further resources on this subject: Progressive Mockito[article] Using Mock Objects to Test Interactions[article] Ensuring Five-star Rating in the MarketPlace [article]

Understanding TDD

Packt
03 Sep 2015
31 min read
 In this article by Viktor Farcic and Alex Garcia, the authors of the book Test-Driven Java Development, we will go through TDD in a simple procedure of writing tests before the actual implementation. It's an inversion of a traditional approach where testing is performed after the code is written. (For more resources related to this topic, see here.) Red-green-refactor Test-driven development is a process that relies on the repetition of a very short development cycle. It is based on the test-first concept of extreme programming (XP) that encourages simple design with a high level of confidence. The procedure that drives this cycle is called red-green-refactor. The procedure itself is simple and it consists of a few steps that are repeated over and over again: Write a test. Run all tests. Write the implementation code. Run all tests. Refactor. Run all tests. Since a test is written before the actual implementation, it is supposed to fail. If it doesn't, the test is wrong. It describes something that already exists or it was written incorrectly. Being in the green state while writing tests is a sign of a false positive. Tests like these should be removed or refactored. While writing tests, we are in the red state. When the implementation of a test is finished, all tests should pass and then we will be in the green state. If the last test failed, implementation is wrong and should be corrected. Either the test we just finished is incorrect or the implementation of that test did not meet the specification we had set. If any but the last test failed, we broke something and changes should be reverted. When this happens, the natural reaction is to spend as much time as needed to fix the code so that all tests are passing. However, this is wrong. If a fix is not done in a matter of minutes, the best thing to do is to revert the changes. After all, everything worked not long ago. Implementation that broke something is obviously wrong, so why not go back to where we started and think again about the correct way to implement the test? That way, we wasted minutes on a wrong implementation instead of wasting much more time to correct something that was not done right in the first place. Existing test coverage (excluding the implementation of the last test) should be sacred. We change the existing code through intentional refactoring, not as a way to fix recently written code. Do not make the implementation of the last test final, but provide just enough code for this test to pass. Write the code in any way you want, but do it fast. Once everything is green, we have confidence that there is a safety net in the form of tests. From this moment on, we can proceed to refactor the code. This means that we are making the code better and more optimum without introducing new features. While refactoring is in place, all tests should be passing all the time. If, while refactoring, one of the tests failed, refactor broke an existing functionality and, as before, changes should be reverted. Not only that at this stage we are not changing any features, but we are also not introducing any new tests. All we're doing is making the code better while continuously running all tests to make sure that nothing got broken. At the same time, we're proving code correctness and cutting down on future maintenance costs. Once refactoring is finished, the process is repeated. It's an endless loop of a very short cycle. Speed is the key Imagine a game of ping pong (or table tennis). 
The game is very fast; sometimes it is hard to even follow the ball when professionals play the game. TDD is very similar. TDD veterans tend not to spend more than a minute on either side of the table (test and implementation). Write a short test and run all tests (ping), write the implementation and run all tests (pong), write another test (ping), write implementation of that test (pong), refactor and confirm that all tests are passing (score), and then repeat—ping, pong, ping, pong, ping, pong, score, serve again. Do not try to make the perfect code. Instead, try to keep the ball rolling until you think that the time is right to score (refactor). Time between switching from tests to implementation (and vice versa) should be measured in minutes (if not seconds). It's not about testing T in TDD is often misunderstood. Test-driven development is the way we approach the design. It is the way to force us to think about the implementation and to what the code needs to do before writing it. It is the way to focus on requirements and implementation of just one thing at a time—organize your thoughts and better structure the code. This does not mean that tests resulting from TDD are useless—it is far from that. They are very useful and they allow us to develop with great speed without being afraid that something will be broken. This is especially true when refactoring takes place. Being able to reorganize the code while having the confidence that no functionality is broken is a huge boost to the quality. The main objective of test-driven development is testable code design with tests as a very useful side product. Testing Even though the main objective of test-driven development is the approach to code design, tests are still a very important aspect of TDD and we should have a clear understanding of two major groups of techniques as follows: Black-box testing White-box testing The black-box testing Black-box testing (also known as functional testing) treats software under test as a black-box without knowing its internals. Tests use software interfaces and try to ensure that they work as expected. As long as functionality of interfaces remains unchanged, tests should pass even if internals are changed. Tester is aware of what the program should do, but does not have the knowledge of how it does it. Black-box testing is most commonly used type of testing in traditional organizations that have testers as a separate department, especially when they are not proficient in coding and have difficulties understanding it. This technique provides an external perspective on the software under test. Some of the advantages of black-box testing are as follows: Efficient for large segments of code Code access, understanding the code, and ability to code are not required Separation between user's and developer's perspectives Some of the disadvantages of black-box testing are as follows: Limited coverage, since only a fraction of test scenarios is performed Inefficient testing due to tester's lack of knowledge about software internals Blind coverage, since tester has limited knowledge about the application If tests are driving the development, they are often done in the form of acceptance criteria that is later used as a definition of what should be developed. Automated black-box testing relies on some form of automation such as behavior-driven development (BDD). 
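To make the distinction a bit more tangible, here is a small, hypothetical black-box style unit test: it drives the component exclusively through its public interface and asserts only on the observable result, without any assumptions about how that result is computed. The PriceCalculator type and its methods are invented for this illustration:

public class PriceCalculatorTest {

  @Test
  public void grossPriceIncludesTax() {
    // only the published contract is known: net price plus the configured tax rate
    PriceCalculator calculator = new PriceCalculator( 0.20 );

    double gross = calculator.grossPrice( 100.0 );

    // the assertion checks externally visible behavior only
    Assert.assertEquals( 120.0, gross, 0.001 );
  }
}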
The white-box testing White-box testing (also known as clear-box testing, glass-box testing, transparent-box testing, and structural testing) looks inside the software that is being tested and uses that knowledge as part of the testing process. If, for example, an exception should be thrown under certain conditions, a test might want to reproduce those conditions. White-box testing requires internal knowledge of the system and programming skills. It provides an internal perspective on the software under test. Some of the advantages of white-box testing are as follows: Efficient in finding errors and problems Required knowledge of internals of the software under test is beneficial for thorough testing Allows finding hidden errors Programmers introspection Helps optimizing the code Due to the required internal knowledge of the software, maximum coverage is obtained Some of the disadvantages of white-box testing are as follows: It might not find unimplemented or missing features Requires high-level knowledge of internals of the software under test Requires code access Tests are often tightly coupled to the implementation details of the production code, causing unwanted test failures when the code is refactored. White-box testing is almost always automated and, in most cases, has the form of unit tests. When white-box testing is done before the implementation, it takes the form of TDD. The difference between quality checking and quality assurance The approach to testing can also be distinguished by looking at the objectives they are trying to accomplish. Those objectives are often split between quality checking (QC) and quality assurance (QA). While quality checking is focused on defects identification, quality assurance tries to prevent them. QC is product-oriented and intends to make sure that results are as expected. On the other hand, QA is more focused on processes that assure that quality is built-in. It tries to make sure that correct things are done in the correct way. While quality checking had a more important role in the past, with the emergence of TDD, acceptance test-driven development (ATDD), and later on behavior-driven development (BDD), focus has been shifting towards quality assurance. Better tests No matter whether one is using black-box, white-box, or both types of testing, the order in which they are written is very important. Requirements (specifications and user stories) are written before the code that implements them. They come first so they define the code, not the other way around. The same can be said for tests. If they are written after the code is done, in a certain way, that code (and the functionalities it implements) is defining tests. Tests that are defined by an already existing application are biased. They have a tendency to confirm what code does, and not to test whether client's expectations are met, or that the code is behaving as expected. With manual testing, that is less the case since it is often done by a siloed QC department (even though it's often called QA). They tend to work on tests' definition in isolation from developers. That in itself leads to bigger problems caused by inevitably poor communication and the police syndrome where testers are not trying to help the team to write applications with quality built-in, but to find faults at the end of the process. The sooner we find problems, the cheaper it is to fix them. 
Tests written in the TDD fashion (including its flavors such as ATDD and BDD) are an attempt to develop applications with quality built-in from the very start. It's an attempt to avoid having problems in the first place. Mocking In order for tests to run fast and provide constant feedback, code needs to be organized in such a way that the methods, functions, and classes can be easily replaced with mocks and stubs. A common word for this type of replacements of the actual code is test double. Speed of the execution can be severely affected with external dependencies; for example, our code might need to communicate with the database. By mocking external dependencies, we are able to increase that speed drastically. Whole unit tests suite execution should be measured in minutes, if not seconds. Designing the code in a way that it can be easily mocked and stubbed, forces us to better structure that code by applying separation of concerns. More important than speed is the benefit of removal of external factors. Setting up databases, web servers, external APIs, and other dependencies that our code might need, is both time consuming and unreliable. In many cases, those dependencies might not even be available. For example, we might need to create a code that communicates with a database and have someone else create a schema. Without mocks, we would need to wait until that schema is set. With or without mocks, the code should be written in a way that we can easily replace one dependency with another. Executable documentation Another very useful aspect of TDD (and well-structured tests in general) is documentation. In most cases, it is much easier to find out what the code does by looking at tests than the implementation itself. What is the purpose of some methods? Look at the tests associated with it. What is the desired functionality of some part of the application UI? Look at the tests associated with it. Documentation written in the form of tests is one of the pillars of TDD and deserves further explanation. The main problem with (traditional) software documentation is that it is not up to date most of the time. As soon as some part of the code changes, the documentation stops reflecting the actual situation. This statement applies to almost any type of documentation, with requirements and test cases being the most affected. The necessity to document code is often a sign that the code itself is not well written.Moreover, no matter how hard we try, documentation inevitably gets outdated. Developers shouldn't rely on system documentation because it is almost never up to date. Besides, no documentation can provide as detailed and up-to-date description of the code as the code itself. Using code as documentation, does not exclude other types of documents. The key is to avoid duplication. If details of the system can be obtained by reading the code, other types of documentation can provide quick guidelines and a high-level overview. Non-code documentation should answer questions such as what the general purpose of the system is and what technologies are used by the system. In many cases, a simple README is enough to provide the quick start that developers need. Sections such as project description, environment setup, installation, and build and packaging instructions are very helpful for newcomers. From there on, code is the bible. Implementation code provides all needed details while test code acts as the description of the intent behind the production code. 
Tests are executable documentation with TDD being the most common way to create and maintain it. Assuming that some form of Continuous Integration (CI) is in use, if some part of test-documentation is incorrect, it will fail and be fixed soon afterwards. CI solves the problem of incorrect test-documentation, but it does not ensure that all functionality is documented. For this reason (among many others), test-documentation should be created in the TDD fashion. If all functionality is defined as tests before the implementation code is written and execution of all tests is successful, then tests act as a complete and up-to-date information that can be used by developers. What should we do with the rest of the team? Testers, customers, managers, and other non coders might not be able to obtain the necessary information from the production and test code. As we saw earlier, two most common types of testing are black-box and white-box testing. This division is important since it also divides testers into those who do know how to write or at least read code (white-box testing) and those who don't (black-box testing). In some cases, testers can do both types. However, more often than not, they do not know how to code so the documentation that is usable for developers is not usable for them. If documentation needs to be decoupled from the code, unit tests are not a good match. That is one of the reasons why BDD came in to being. BDD can provide documentation necessary for non-coders, while still maintaining the advantages of TDD and automation. Customers need to be able to define new functionality of the system, as well as to be able to get information about all the important aspects of the current system. That documentation should not be too technical (code is not an option), but it still must be always up to date. BDD narratives and scenarios are one of the best ways to provide this type of documentation. Ability to act as acceptance criteria (written before the code), be executed frequently (preferably on every commit), and be written in natural language makes BDD stories not only always up to date, but usable by those who do not want to inspect the code. Documentation is an integral part of the software. As with any other part of the code, it needs to be tested often so that we're sure that it is accurate and up to date. The only cost-effective way to have accurate and up-to-date information is to have executable documentation that can be integrated into your continuous integration system. TDD as a methodology is a good way to move towards this direction. On a low level, unit tests are a best fit. On the other hand, BDD provides a good way to work on a functional level while maintaining understanding accomplished using natural language. No debugging We (authors of this article) almost never debug applications we're working on! This statement might sound pompous, but it's true. We almost never debug because there is rarely a reason to debug an application. When tests are written before the code and the code coverage is high, we can have high confidence that the application works as expected. This does not mean that applications written using TDD do not have bugs—they do. All applications do. However, when that happens, it is easy to isolate them by simply looking for the code that is not covered with tests. Tests themselves might not include some cases. In that situation, the action is to write additional tests. 
With high code coverage, finding the cause of some bug is much faster through tests than spending time debugging line by line until the culprit is found. With all this in mind, let's go through the TDD best practices. Best practices Coding best practices are a set of informal rules that the software development community has learned over time, which can help improve the quality of software. While each application needs a level of creativity and originality (after all, we're trying to build something new or better), coding practices help us avoid some of the problems others faced before us. If you're just starting with TDD, it is a good idea to apply some (if not all) of the best practices generated by others. For easier classification of test-driven development best practices, we divided them into four categories: Naming conventions Processes Development practices Tools As you'll see, not all of them are exclusive to TDD. Since a big part of test-driven development consists of writing tests, many of the best practices presented in the following sections apply to testing in general, while others are related to general coding best practices. No matter the origin, all of them are useful when practicing TDD. Take the advice with a certain dose of skepticism. Being a great programmer is not only about knowing how to code, but also about being able to decide which practice, framework or style best suits the project and the team. Being agile is not about following someone else's rules, but about knowing how to adapt to circumstances and choose the best tools and practices that suit the team and the project. Naming conventions Naming conventions help to organize tests better, so that it is easier for developers to find what they're looking for. Another benefit is that many tools expect that those conventions are followed. There are many naming conventions in use, and those presented here are just a drop in the ocean. The logic is that any naming convention is better than none. Most important is that everyone on the team knows what conventions are being used and are comfortable with them. Choosing more popular conventions has the advantage that newcomers to the team can get up to speed fast since they can leverage existing knowledge to find their way around. Separate the implementation from the test code Benefits: It avoids accidentally packaging tests together with production binaries; many build tools expect tests to be in a certain source directory. Common practice is to have at least two source directories. Implementation code should be located in src/main/java and test code in src/test/java. In bigger projects, the number of source directories can increase but the separation between implementation and tests should remain as is. Build tools such as Gradle and Maven expect source directories separation as well as naming conventions. You might have noticed that the build.gradle files that we used throughout this article did not have explicitly specified what to test nor what classes to use to create a .jar file. Gradle assumes that tests are in src/test/java and that the implementation code that should be packaged into a jar file is in src/main/java. Place test classes in the same package as implementation Benefits: Knowing that tests are in the same package as the code helps finding code faster. As stated in the previous practice, even though packages are the same, classes are in the separate source directories. All exercises throughout this article followed this convention. 
Name test classes in a similar fashion to the classes they test Benefits: Knowing that tests have a similar name to the classes they are testing helps in finding the classes faster. One commonly used practice is to name tests the same as the implementation classes, with the suffix Test. If, for example, the implementation class is TickTackToe, the test class should be TickTackToeTest. However, in all cases, with the exception of those we used throughout the refactoring exercises, we prefer the suffix Spec. It helps to make a clear distinction that test methods are primarily created as a way to specify what will be developed. Testing is a great subproduct of those specifications. Use descriptive names for test methods Benefits: It helps in understanding the objective of tests. Using method names that describe tests is beneficial when trying to figure out why some tests failed or when the coverage should be increased with more tests. It should be clear what conditions are set before the test, what actions are performed and what is the expected outcome. There are many different ways to name test methods and our preferred method is to name them using the Given/When/Then syntax used in the BDD scenarios. Given describes (pre)conditions, When describes actions, and Then describes the expected outcome. If some test does not have preconditions (usually set using @Before and @BeforeClass annotations), Given can be skipped. Let's take a look at one of the specifications we created for our TickTackToe application:   @Test public void whenPlayAndWholeHorizontalLineThenWinner() { ticTacToe.play(1, 1); // X ticTacToe.play(1, 2); // O ticTacToe.play(2, 1); // X ticTacToe.play(2, 2); // O String actual = ticTacToe.play(3, 1); // X assertEquals("X is the winner", actual); } Just by reading the name of the method, we can understand what it is about. When we play and the whole horizontal line is populated, then we have a winner. Do not rely only on comments to provide information about the test objective. Comments do not appear when tests are executed from your favorite IDE nor do they appear in reports generated by CI or build tools. Processes TDD processes are the core set of practices. Successful implementation of TDD depends on practices described in this section. Write a test before writing the implementation code Benefits: It ensures that testable code is written; ensures that every line of code gets tests written for it. By writing or modifying the test first, the developer is focused on requirements before starting to work on the implementation code. This is the main difference compared to writing tests after the implementation is done. The additional benefit is that with the tests written first, we are avoiding the danger that the tests work as quality checking instead of quality assurance. We're trying to ensure that quality is built in as opposed to checking later whether we met quality objectives. Only write new code when the test is failing Benefits: It confirms that the test does not work without the implementation. If tests are passing without the need to write or modify the implementation code, then either the functionality is already implemented or the test is defective. If new functionality is indeed missing, then the test always passes and is therefore useless. Tests should fail for the expected reason. Even though there are no guarantees that the test is verifying the right thing, with fail first and for the expected reason, confidence that verification is correct should be high. 
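To see the fail-first rhythm in code, consider the StringCalculator used in the later examples. A first iteration could look like the following sketch: the test is written first and fails because the production code does not exist yet, and the class below it is just enough code to turn the test green (the implementation is deliberately naive, not the final one):

@Test
public final void whenEmptyStringIsUsedThenReturnValueIsZero() {
  Assert.assertEquals(0, StringCalculator.add(""));
}

// simplest possible implementation that makes the test above pass
public class StringCalculator {
  public static int add(String numbers) {
    return 0;
  }
}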
Rerun all tests every time the implementation code changes Benefits: It ensures that there is no unexpected side effect caused by code changes. Every time any part of the implementation code changes, all tests should be run. Ideally, tests are fast to execute and can be run by the developer locally. Once code is submitted to version control, all tests should be run again to ensure that there was no problem due to code merges. This is specially important when more than one developer is working on the code. Continuous integration tools such as Jenkins (http://jenkins-ci.org/), Hudson (http://hudson-ci.org/), Travis (https://travis-ci.org/), and Bamboo (https://www.atlassian.com/software/bamboo) should be used to pull the code from the repository, compile it, and run tests. All tests should pass before a new test is written Benefits: The focus is maintained on a small unit of work; implementation code is (almost) always in working condition. It is sometimes tempting to write multiple tests before the actual implementation. In other cases, developers ignore problems detected by existing tests and move towards new features. This should be avoided whenever possible. In most cases, breaking this rule will only introduce technical debt that will need to be paid with interest. One of the goals of TDD is that the implementation code is (almost) always working as expected. Some projects, due to pressures to reach the delivery date or maintain the budget, break this rule and dedicate time to new features, leaving the task of fixing the code associated with failed tests for later. These projects usually end up postponing the inevitable. Refactor only after all tests are passing Benefits: This type of refactoring is safe. If all implementation code that can be affected has tests and they are all passing, it is relatively safe to refactor. In most cases, there is no need for new tests. Small modifications to existing tests should be enough. The expected outcome of refactoring is to have all tests passing both before and after the code is modified. Development practices Practices listed in this section are focused on the best way to write tests. Write the simplest code to pass the test Benefits: It ensures cleaner and clearer design; avoids unnecessary features. The idea is that the simpler the implementation, the better and easier it is to maintain the product. The idea adheres to the keep it simple stupid (KISS) principle. This states that most systems work best if they are kept simple rather than made complex; therefore, simplicity should be a key goal in design, and unnecessary complexity should be avoided. Write assertions first, act later Benefits: This clarifies the purpose of the requirements and tests early. Once the assertion is written, the purpose of the test is clear and the developer can concentrate on the code that will accomplish that assertion and, later on, on the actual implementation. Minimize assertions in each test Benefits: This avoids assertion roulette; allows execution of more asserts. If multiple assertions are used within one test method, it might be hard to tell which of them caused a test failure. This is especially common when tests are executed as part of the continuous integration process. If the problem cannot be reproduced on a developer's machine (as may be the case if the problem is caused by environmental issues), fixing the problem may be difficult and time consuming. When one assert fails, execution of that test method stops. 
If there are other asserts in that method, they will not be run and information that can be used in debugging is lost. Last but not least, having multiple asserts creates confusion about the objective of the test. This practice does not mean that there should always be only one assert per test method. If there are other asserts that test the same logical condition or unit of functionality, they can be used within the same method. Let's go through few examples: @Test public final void whenOneNumberIsUsedThenReturnValueIsThatSameNumber() { Assert.assertEquals(3, StringCalculator.add("3")); } @Test public final void whenTwoNumbersAreUsedThenReturnValueIsTheirSum() { Assert.assertEquals(3+6, StringCalculator.add("3,6")); } The preceding code contains two specifications that clearly define what the objective of those tests is. By reading the method names and looking at the assert, there should be clarity on what is being tested. Consider the following for example: @Test public final void whenNegativeNumbersAreUsedThenRuntimeExceptionIsThrown() { RuntimeException exception = null; try { StringCalculator.add("3,-6,15,-18,46,33"); } catch (RuntimeException e) { exception = e; } Assert.assertNotNull("Exception was not thrown", exception); Assert.assertEquals("Negatives not allowed: [-6, -18]", exception.getMessage()); } This specification has more than one assert, but they are testing the same logical unit of functionality. The first assert is confirming that the exception exists, and the second that its message is correct. When multiple asserts are used in one test method, they should all contain messages that explain the failure. This way debugging the failed assert is easier. In the case of one assert per test method, messages are welcome, but not necessary since it should be clear from the method name what the objective of the test is. @Test public final void whenAddIsUsedThenItWorks() { Assert.assertEquals(0, StringCalculator.add("")); Assert.assertEquals(3, StringCalculator.add("3")); Assert.assertEquals(3+6, StringCalculator.add("3,6")); Assert.assertEquals(3+6+15+18+46+33, StringCalculator.add("3,6,15,18,46,33")); Assert.assertEquals(3+6+15, StringCalculator.add("3,6n15")); Assert.assertEquals(3+6+15, StringCalculator.add("//;n3;6;15")); Assert.assertEquals(3+1000+6, StringCalculator.add("3,1000,1001,6,1234")); } This test has many asserts. It is unclear what the functionality is, and if one of them fails, it is unknown whether the rest would work or not. It might be hard to understand the failure when this test is executed through some of the CI tools. Do not introduce dependencies between tests Benefits: The tests work in any order independently, whether all or only a subset is run Each test should be independent from the others. Developers should be able to execute any individual test, a set of tests, or all of them. Often, due to the test runner's design, there is no guarantee that tests will be executed in any particular order. If there are dependencies between tests, they might easily be broken with the introduction of new ones. Tests should run fast Benefits: These tests are used often. If it takes a lot of time to run tests, developers will stop using them or run only a small subset related to the changes they are making. The benefit of fast tests, besides fostering their usage, is quick feedback. The sooner the problem is detected, the easier it is to fix it. Knowledge about the code that produced the problem is still fresh. 
If the developer already started working on the next feature while waiting for the completion of the execution of the tests, he might decide to postpone fixing the problem until that new feature is developed. On the other hand, if he drops his current work to fix the bug, time is lost in context switching. Tests should be so quick that developers can run all of them after each change without getting bored or frustrated. Use test doubles Benefits: This reduces code dependency and test execution will be faster. Mocks are prerequisites for fast execution of tests and ability to concentrate on a single unit of functionality. By mocking dependencies external to the method that is being tested, the developer is able to focus on the task at hand without spending time in setting them up. In the case of bigger teams, those dependencies might not even be developed. Also, the execution of tests without mocks tends to be slow. Good candidates for mocks are databases, other products, services, and so on. Use set-up and tear-down methods Benefits: This allows set-up and tear-down code to be executed before and after the class or each method. In many cases, some code needs to be executed before the test class or before each method in a class. For this purpose, JUnit has @BeforeClass and @Before annotations that should be used as the setup phase. @BeforeClass executes the associated method before the class is loaded (before the first test method is run). @Before executes the associated method before each test is run. Both should be used when there are certain preconditions required by tests. The most common example is setting up test data in the (hopefully in-memory) database. At the opposite end are @After and @AfterClass annotations, which should be used as the tear-down phase. Their main purpose is to destroy data or a state created during the setup phase or by the tests themselves. As stated in one of the previous practices, each test should be independent from the others. Moreover, no test should be affected by the others. Tear-down phase helps to maintain the system as if no test was previously executed. Do not use base classes in tests Benefits: It provides test clarity. Developers often approach test code in the same way as implementation. One of the common mistakes is to create base classes that are extended by tests. This practice avoids code duplication at the expense of tests clarity. When possible, base classes used for testing should be avoided or limited. Having to navigate from the test class to its parent, parent of the parent, and so on in order to understand the logic behind tests introduces often unnecessary confusion. Clarity in tests should be more important than avoiding code duplication. Tools TDD, coding and testing in general, are heavily dependent on other tools and processes. Some of the most important ones are as follows. Each of them is too big a topic to be explored in this article, so they will be described only briefly. Code coverage and Continuous integration (CI) Benefits: It gives assurance that everything is tested Code coverage practice and tools are very valuable in determining that all code, branches, and complexity is tested. Some of the tools are JaCoCo (http://www.eclemma.org/jacoco/), Clover (https://www.atlassian.com/software/clover/overview), and Cobertura (http://cobertura.github.io/cobertura/). Continuous Integration (CI) tools are a must for all except the most trivial projects. 
Some of the most used tools are Jenkins (http://jenkins-ci.org/), Hudson (http://hudson-ci.org/), Travis (https://travis-ci.org/), and Bamboo (https://www.atlassian.com/software/bamboo). Use TDD together with BDD Benefits: Both developer unit tests and functional customer facing tests are covered. While TDD with unit tests is a great practice, in many cases, it does not provide all the testing that projects need. TDD is fast to develop, helps the design process, and gives confidence through fast feedback. On the other hand, BDD is more suitable for integration and functional testing, provides better process for requirement gathering through narratives, and is a better way of communicating with clients through scenarios. Both should be used, and together they provide a full process that involves all stakeholders and team members. TDD (based on unit tests) and BDD should be driving the development process. Our recommendation is to use TDD for high code coverage and fast feedback, and BDD as automated acceptance tests. While TDD is mostly oriented towards white-box, BDD often aims at black-box testing. Both TDD and BDD are trying to focus on quality assurance instead of quality checking. Summary You learned that it is a way to design the code through short and repeatable cycle called red-green-refactor. Failure is an expected state that should not only be embraced, but enforced throughout the TDD process. This cycle is so short that we move from one phase to another with great speed. While code design is the main objective, tests created throughout the TDD process are a valuable asset that should be utilized and severely impact on our view of traditional testing practices. We went through the most common of those practices such as white-box and black-box testing, tried to put them into the TDD perspective, and showed benefits that they can bring to each other. You discovered that mocks are a very important tool that is often a must when writing tests. Finally, we discussed how tests can and should be utilized as executable documentation and how TDD can make debugging much less necessary. Now that we are armed with theoretical knowledge, it is time to set up the development environment and get an overview and comparison of different testing frameworks and tools. Now we will walk you through all the TDD best practices in detail and refresh the knowledge and experience you gained throughout this article. Resources for Article: Further resources on this subject: RESTful Services JAX-RS 2.0[article] Java Refactoring in NetBeans[article] Developing a JavaFX Application for iOS [article]

Phalcon's ORM

Packt
25 Aug 2015
9 min read
In this article by Calin Rada, the author of the book Learning Phalcon PHP, we will go through a few of the ORM CRUD operations (update, and delete) and database transactions (For more resources related to this topic, see here.) By using the ORM, there is virtually no need to write any SQL in your code. Everything is OOP, and it is using the models to perform operations. The first, and the most basic, operation is retrieving data. In the old days, you would do this: $result = mysql_query("SELECT * FROM article"); The class that our models are extending is PhalconMvcModel. This  class has some very useful methods built in, such as find(), findFirst(), count(), sum(), maximum(), minimum(), average(), save(), create(), update(), and delete(). CRUD – updating data Updating data is as easy as creating it. The only thing that we need to do is find the record that we want to update. Open the article manager and add the following code: public function update($id, $data) { $article = Article::findFirstById($id); if (!$article) { throw new Exception('Article not found', 404); } $article->setArticleShortTitle($data[ 'article_short_title']); $article->setUpdatedAt(new PhalconDbRawValue('NOW()')); if (false === $article->update()) { foreach ($article->getMessages() as $message) { $error[] = (string) $message; } throw new Exception(json_encode($error)); } return $article; } As you can see, we are passing a new variable, $id, to the update method and searching for an article that has its ID equal to the value of the $id variable. For the sake of an example, this method will update only the article title and the updated_at field for now. Next, we will create a new dummy method as we did for the article, create. Open modules/Backoffice/Controllers/ArticleController.php and add the following code: public function updateAction($id) { $this->view->disable(); $article_manager = $this->getDI()->get( 'core_article_manager'); try { $article = $article_manager->update($id, [ 'article_short_title' => 'Modified article 1' ]); echo $article->getId(), " was updated."; } catch (Exception $e) { echo $e->getMessage(); } } If you access http://www.learning-phalcon.localhost/backoffice/article/update/1 now, you should be able to see the 1 was updated. response. Going back to the article list, you will see the new title, and the Updated column will have a new value. CRUD – deleting data Deleting data is easier, since we don't need to do more than calling the built-in delete() method. Open the article manager, and add the following code: public function delete($id) { $article = Article::findFirstById($id); if (!$article) { throw new Exception('Article not found', 404); } if (false === $article->delete()) { foreach ($article->getMessages() as $message) { $error[] = (string) $message; } throw new Exception(json_encode($error)); } return true; } We will once again create a dummy method to delete records. Open modules/Backoffice/Controllers/ArticleControllers.php, and add the following code: public function deleteAction($id) { $this->view->disable(); $article_manager = $this->getDI()->get('core_article_manager'); try { $article_manager->delete($id); echo "Article was deleted."; } catch (Exception $e) { echo $e->getMessage(); } } To test this, simply access http://www.learning-phalcon.localhost/backoffice/article/delete/1. If everything went well, you should see the Article was deleted. message. Going back to, article list, you won't be able to see the article with ID 1 anymore. 
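Reading data back out of the manager follows the same pattern. The following sketch shows a hypothetical finder method (it is not part of the article manager built in this article) that simply delegates to the model's built-in find() method; the is_published column is the same one used later in the raw SQL example:

public function findPublished($limit = 10)
{
    /* assumption: the article table has an is_published flag, as in the raw SQL example below */
    return Article::find([
        'conditions' => 'is_published = :published:',
        'bind'       => ['published' => 1],
        'order'      => 'id DESC',
        'limit'      => $limit,
    ]);
}

Calling such a method from a controller would return a resultset that can be iterated over, just like the result of the built-in find().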
These are the four basic methods: create, read, update, and delete. Later in this book, we will use these methods a lot. If you need/want to, you can use the Phalcon Developer Tools to generate CRUD automatically. Check out https://github.com/phalcon/phalcon-devtools for more information. Using PHQL Personally, I am not a fan of PHQL. I prefer using the ORM or raw queries. But if you feel comfortable with it, feel free to use it. PHQL is quite similar to writing raw SQL queries. The main difference is that you will need to pass a model instead of a table name, and use a models manager service or directly call the PhalconMvcModelQuery class. Here is a method similar to the built-in find() method: public function find() { $query = new PhalconMvcModelQuery("SELECT * FROM AppCoreModelsArticle", $this->getDI()); $articles = $query->execute(); return $articles; } To use the models manager, we need to inject this new service. Open the global services file, config/service.php, and add the following code: $di['modelsManager'] = function () { return new PhalconMvcModelManager(); }; Now let's rewrite the find() method by making use of the modelsManager service: public function find() { $query = $this->modelsManager->createQuery( "SELECT * FROM AppCoreModelsArticle"); $articles = $query->execute(); return $articles; } If we need to bind parameters, the method can look like this one: public function find() { $query = $this->modelsManager->createQuery( "SELECT * FROM AppCoreModelsArticle WHERE id = :id:"); $articles = $query->execute(array( 'id' => 2 )); return $articles; } We are not going to use PHQL at all in our project. If you are interested in it, you can find more information in the official documentation at http://docs.phalconphp.com/en/latest/reference/phql.html. Using raw SQL Sometimes, using raw SQL is the only way of performing complex queries. Let's see what raw SQL looks like for a custom find() method and a custom update() method: <?php use PhalconMvcModelResultsetSimple as Resultset; class Article extends Base { public static function rawFind() { $sql = "SELECT * FROM article WHERE id > 0"; $article = new self(); return new Resultset(null, $article, $article->getReadConnection()->query($sql)); } public static function rawUpdate() { $sql = "UPDATE article SET is_published = 1"; $article = new self(); /* static context: use a model instance, and send the write through the write connection */ return $article->getWriteConnection()->execute($sql); } } As you can see, the rawFind() method returns an instance of PhalconMvcModelResultsetSimple. The rawUpdate() method just executes the query (in this example, we will mark all the articles as published). You might have noticed the getReadConnection() and getWriteConnection() methods. These methods are very useful when you need to iterate over a large amount of data or if, for example, you use a master-slave connection. As an example, consider the following code snippet: <?php class Article extends Base { public function initialize() { $this->setReadConnectionService('a_slave_db_connection_service'); // By default is 'db'
$this->setWriteConnectionService('db'); } } Working with models can become complex. We cannot cover everything in this book, but we will work with many common techniques to achieve this part of our project. Please spare a little time and read more about working with models at http://docs.phalconphp.com/en/latest/reference/models.html. Database transactions If you need to perform multiple database operations, then in most cases you need to ensure that every operation is successful, for the sake of data integrity.
A good database architecture is not always enough to solve potential integrity issues. This is the case where you should use transactions. Let's take as an example a virtual wallet that can be represented as shown in the next few tables.

The User table looks like the following:

ID   NAME
1    John Doe

The Wallet table looks like this:

ID   USER_ID   BALANCE
1    1         5000

The Wallet transactions table looks like the following:

ID   WALLET_ID   AMOUNT   DESCRIPTION
1    1           5000     Bonus credit
2    1           -1800    Apple store

How can we create a new user, credit their wallet, and then debit it as the result of a purchase action? This can be achieved in three ways using transactions: manual transactions, implicit transactions, and isolated transactions.

A manual transactions example Manual transactions are useful when we are using only one connection and the transactions are not very complex. For example, if any error occurs during an update operation, we can roll back the changes without affecting the data integrity: <?php class UserController extends PhalconMvcController { public function saveAction() { $this->db->begin(); $user = new User(); $user->name = "John Doe"; if (false === $user->save()) { $this->db->rollback(); return; } $wallet = new Wallet(); $wallet->user_id = $user->id; $wallet->balance = 0; if (false === $wallet->save()) { $this->db->rollback(); return; } $walletTransaction = new WalletTransaction(); $walletTransaction->wallet_id = $wallet->id; $walletTransaction->amount = 5000; $walletTransaction->description = 'Bonus credit'; if (false === $walletTransaction->save()) { $this->db->rollback(); return; } $walletTransaction1 = new WalletTransaction(); $walletTransaction1->wallet_id = $wallet->id; $walletTransaction1->amount = -1800; $walletTransaction1->description = 'Apple store'; if (false === $walletTransaction1->save()) { $this->db->rollback(); return; } $this->db->commit(); } }

An implicit transactions example Implicit transactions are very useful when we need to perform operations on related tables / existing relationships. The related records are assigned to the parent model and saved in a single step; the foreign keys are filled in by the relationships: <?php class UserController extends PhalconMvcController { public function saveAction() { $walletTransactions[0] = new WalletTransaction(); $walletTransactions[0]->amount = 5000; $walletTransactions[0]->description = 'Bonus credit'; $walletTransactions[1] = new WalletTransaction(); $walletTransactions[1]->amount = -1800; $walletTransactions[1]->description = 'Apple store'; $wallet = new Wallet(); $wallet->balance = 0; $wallet->transactions = $walletTransactions; $user = new User(); $user->name = "John Doe"; $user->wallet = $wallet; /* saving the root record persists the related wallet and its transactions in one implicit transaction */ $user->save(); } }

An isolated transactions example Isolated transactions are always executed in a separate connection, and they require a transaction manager: <?php use PhalconMvcModelTransactionManager as TxManager, PhalconMvcModelTransactionFailed as TxFailed; class UserController extends PhalconMvcController { public function saveAction() { try { $manager = new TxManager(); $transaction = $manager->get(); $user = new User(); $user->setTransaction($transaction); $user->name = "John Doe"; if ($user->save() == false) { $transaction->rollback("Cannot save user"); } $wallet = new Wallet(); $wallet->setTransaction($transaction); $wallet->user_id = $user->id; $wallet->balance = 0; if ($wallet->save() == false) { $transaction->rollback("Cannot save wallet"); } $walletTransaction = new WalletTransaction(); $walletTransaction->setTransaction($transaction); $walletTransaction->wallet_id = $wallet->id; $walletTransaction->amount = 5000; $walletTransaction->description = 'Bonus credit'; if ($walletTransaction->save() == false) { $transaction->rollback("Cannot create transaction"); } $walletTransaction1 = new WalletTransaction(); $walletTransaction1->setTransaction($transaction); $walletTransaction1->wallet_id = $wallet->id; $walletTransaction1->amount = -1800; $walletTransaction1->description = 'Apple store'; if ($walletTransaction1->save() == false) { $transaction->rollback("Cannot create transaction"); } $transaction->commit(); } catch (TxFailed $e) { echo "Error: ", $e->getMessage(); } } }

Summary In this article, you learned something about ORM in general and how to use some of the main built-in methods to perform CRUD operations. You also learned about database transactions and how to use PHQL or raw SQL queries. Resources for Article: Further resources on this subject: Using Phalcon Models, Views, and Controllers[article] Your first FuelPHP application in 7 easy steps[article] PHP Magic Features [article]

Using the Form Builder

Packt
11 Aug 2015
18 min read
In this article by Christopher John Pecoraro, author of the book, Mastering Laravel, you learn the fundamentals of using the form builder. (For more resources related to this topic, see here.) Building web pages with Laravel Laravel's approach to building web content is flexible. As much or as little of Laravel can be used to create HTML. Laravel uses the filename.blade.php convention to state that the file should be parsed by the blade parser, which actually converts the file into plain PHP. The name blade was inspired by the .NET's razor templating engine, so this may be familiar to someone who has used it. Laravel 5 provides a working demonstration of a form in the /resources/views/ directory. This view is shown when the /home route is requested and the user is not currently logged in. This form is obviously not created using the Laravel form methods. The route is defined in the routes file as follows: Route::get('home', 'HomeController@index'); The master template This is the following app (or master) template: <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Laravel</title> <link href="/css/app.css" rel="stylesheet"> <!-- Fonts --> <link href='//fonts.googleapis.com/css?family=Roboto:400,300' rel='stylesheet' type='text/css'> <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries --> <!-- WARNING: Respond.js doesn't work if you view the page via file:// --> <!--[if lt IE 9]> <script src="https://oss.maxcdn.com/html5shiv/3.7.2/ html5shiv.min.js"></script> <script src="https://oss.maxcdn.com/respond/1.4.2/ respond.min.js"></script> <![endif]--> </head> <body> <nav class="navbarnavbar-default"> <div class="container-fluid"> <div class="navbar-header"> <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" datatarget="# bs-example-navbar-collapse-1"> <span class="sr-only">Toggle Navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class="navbar-brand" href="#">Laravel</a> </div> <div class="collapse navbar-collapse" id=" bs-example-navbar-collapse-1"> <ul class="navnavbar-nav"> <li><a href="/">Home</a></li> </ul> <ul class="navnavbar-navnavbar-right"> @if (Auth::guest()) <li><a href="{{ route('auth.login') }}">Login</a></li> <li><a href="/auth/register"> Register</a></li> @else <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">{{ Auth::user()->name }} <span class="caret"></span></a> <ul class="dropdown-menu" role="menu"> <li><a href="/auth/ logout">Logout</a></li> </ul> </li> @endif </ul> </div> </div> </nav> @yield('content') <!-- Scripts --> <script src="//cdnjs.cloudflare.com/ajax/libs/jquery/ 2.1.3/jquery.min.js"></script> <script src="//cdnjs.cloudflare.com/ajax/libs/twitterbootstrap/ 3.3.1/js/bootstrap.min.js"></script> </body> </html> The Laravel 5 master template is a standard HTML5 template with the following features: If the browser is older than Internet Explorer 9: Uses the HTML5 Shim from the CDN Uses the Respond.js JavaScript code from the CDN to retrofit media queries and CSS3 features Using @if (Auth::guest()), if the user is not authenticated, the login form is displayed; otherwise, the logout option is displayed Twitter bootstrap 3.x is included in the CDN The jQuery2.x is included in the CDN Any template that extends this template can override 
the content section An example page The source code for the login page is as follows: @extends('app') @section('content') <div class="container-fluid"> <div class="row"> <div class="col-md-8 col-md-offset-2"> <div class="panel panel-default"> <div class="panel-heading">Login</div> <div class="panel-body"> @if (count($errors) > 0) <div class="alert alert-danger"> <strong>Whoops!</strong> There were some problems with your input.<br><br> <ul> @foreach ($errors->all() as $error) <li>{{ $error }}</li> @endforeach </ul> </div> @endif <form class="form-horizontal" role="form" method="POST" action="/auth/login"> <input type="hidden" name="_token" value="{{ csrf_token() }}"> <div class="form-group"> <label class="col-md-4 controllabel"> E-Mail Address</label> <div class="col-md-6"> <input type="email" class="formcontrol" name="email" value="{{ old('email') }}"> </div> </div> <div class="form-group"> <label class="col-md-4 controllabel"> Password</label> <div class="col-md-6"> <input type="password" class="form-control" name="password"> </div> </div> <div class="form-group"> <div class="col-md-6 col-md-offset-4"> <div class="checkbox"> <label> <input type="checkbox" name="remember"> Remember Me </label> </div> </div> </div> <div class="form-group"> <div class="col-md-6 col-md-offset-4"> <button type="submit" lass="btn btn-primary" style="margin-right: 15px;"> Login </button> <a href="/password/email">Forgot Your Password?</a> </div> </div> </form> </div> </div> </div> </div> </div> @endsection From static HTML to static methods This login page begins with the following: @extends('app') It obviously uses the object-oriented paradigm to state that the app.blade.php template will be rendered. The following line overrides the content: @section('content') For this exercise, the form builder will be used instead of the static HTML. The form tag We will convert a static form tag to a FormBuilder method. The HTML is as follows: <form class="form-horizontal" role="form" method="POST"   action="/auth/login"> The method facade that we will use is as follows: Form::open(); In the FormBuilder.php class, the $reserved attribute is defined as follows: protected $reserved = ['method', 'url', 'route',   'action', 'files']; The attributes that we need to pass to an array to the open() method are class, role, method, and action. Since method and action are reserved words, it is necessary to pass the parameters in the following manner: Laravel form facade method array element HTML Form tag attribute method method url action role role class class Thus, the method call looks like this: {!! Form::open(['class'=>'form-horizontal', 'role =>'form', 'method'=>'POST', 'url'=>'/auth/login']) !!} The {!! !!} tags are used to start and end parsing of the form builder methods. The form method, POST, is placed first in the list of attributes in the HTML form tag. The action attribute actually needs to be a url. If the action parameter is used, then it refers to the controller action. In this case, the url parameter produces the action attribute of the form tag. Other attributes will be passed to the array and added to the list of attributes. The resultant HTML will be produced as follows: <form method="POST" action="http://laravel.example/auth/login" accept-charset="UTF-8" class="form-horizontal" role="form"> <input name="_token" type="hidden" value="wUY2hFSEWCzKHFfhywHvFbq9TXymUDiRUFreJD4h"> The CRSF token is automatically added, as the form method is POST. The text input field To convert the input fields, a facade is used. 
The input field's HTML is as follows: <input type="email" class="form-control" name="email" value="{{ old('email') }}"> Converting the preceding input field using a façade looks like this: {!! Form::input('email','email',old('email'), ['class'=>'form-control' ]) !!} Similarly, the text field becomes: {!! Form::input('password','password',null, ['class'=>'form-control']) !!} The input fields have the same signature. Of course, this can be refactored as follows: <?php $inputAttributes = ['class'=>'form-control'] ?> {!! Form::input('email','email',old('email'), $inputAttributes ) !!} ... {!! Form::input('password','password',null,$inputAttributes ) !!} The label tag The label tags are as follows: <label class="col-md-4 control-label">E-Mail Address</label> <label class="col-md-4 control-label">Password</label> To convert the label tags (E-Mail Address and Password), we will first create an array to hold the attributes, and then pass this array to the labels, as follows: $labelAttributes = ['class'=>'col-md-4 control-label']; Here is the form label code: {!! Form::label('email', 'E-Mail Address', $labelAttributes) !!} {!! Form::label('password', 'Password', $labelAttributes) !!} Checkbox To convert the checkbox to a facade, we will convert this: <input type="checkbox" name="remember"> Remember Me The preceding code is converted to the following code: {!! Form::checkbox('remember','') !!} Remember Me Remember that the PHP parameters should be sent in single quotation marks if there are no variables or other special characters, such as line breaks, inside the string to parse, while the HTML produced will have double quotes. The submit button Lastly, the submit button will be converted as follows: <button type="submit" class="btn btn-primary" style="margin-right: 15px;"> Login </button> The preceding code after conversion is as follows:   {!! Form::submit('Login', ['class'=>'btn btn-primary', 'style'=>'margin-right: 15px;']) !!} Note that the array parameter provides an easy way to provide any desired attributes, even those that are not among the list of standard HTML form elements. The anchor tag with links To convert the links, a helper method is used. Consider the following line of code: <a href="/password/email">Forgot Your Password?</a> The preceding line of code after conversion becomes: {!! link_to('/password/email', $title = 'Forgot Your Password?', $attributes = array(), $secure = null) !!} The link_to_route() method may be used to link to a route. For similar helper functions, visit http://laravelcollective.com/docs/5.0/html. Closing the form To end the form, we'll convert the traditional HTML form tag </form> to a Laravel {!! Form::close() !!} form method. The resultant form By putting everything together, the page now looks like this: @extends('app') @section('content') <div class="container-fluid"> <div class="row"> <div class="col-md-8 col-md-offset-2"> <div class="panel panel-default"> <div class="panel-heading">Login</div> <div class="panel-body"> @if (count($errors) > 0) <div class="alert alert-danger"> <strong>Whoops!</strong> There were some problems with your input.<br><br> <ul> @foreach ($errors->all() as $error) <li>{{ $error }}</li> @endforeach </ul> </div> @endif <?php $inputAttributes = ['class'=>'form-control']; $labelAttributes = ['class'=>'col-md-4 control-label']; ?> {!! Form::open(['class'=>'form-horizontal','role'=> 'form','method'=>'POST','url'=>'/auth/login']) !!} <div class="form-group"> {!! 
Form::label('email', 'E-Mail Address',$labelAttributes) !!} <div class="col-md-6"> {!! Form::input('email','email',old('email'), $inputAttributes) !!} </div> </div> <div class="form-group"> {!! Form::label('password', 'Password',$labelAttributes) !!} <div class="col-md-6"> {!! Form::input('password', 'password',null,$inputAttributes) !!} </div> </div> <div class="form-group"> <div class="col-md-6 col-md-offset-4"> <div class="checkbox"> <label> {!! Form::checkbox('remember','') !!} Remember Me </label> </div> </div> </div> <div class="form-group"> <div class="col-md-6 col-md-offset-4"> {!! Form::submit('Login',['class'=> 'btn btn-primary', 'style'=> 'margin-right: 15px;']) !!} {!! link_to('/password/email', $title = 'Forgot Your Password?', $attributes = array(), $secure = null); !!} </div> </div> {!! Form::close() !!} </div> </div> </div> </div> </div> @endsection Our example If we want to create a form to reserve a room in our accommodation, we can easily call a route from our controller: /** * Show the form for creating a new resource. * * @return Response */ public function create() { return view('auth/reserve'); } Now we need to create a new view that is located at resources/views/auth/reserve.blade.php. In this view, we can create a form to reserve a room in an accommodation where the user can select the start date, which comprises of the start day of the month and year, and the end date, which also comprises of the start day of the month and year. The form would begin as before, with a POST to reserve-room. Then, the form label would be placed next to the select input fields. Finally, the day, the month, and the year select form elements would be created as follows: {!! Form::open(['class'=>'form-horizontal', 'role'=>'form', 'method'=>'POST', 'url'=>'reserve-room']) !!} {!! Form::label(null, 'Start Date',$labelAttributes) !!} {!! Form::selectMonth('month',date('m')) !!} {!! Form::selectRange('date',1,31,date('d')) !!} {!! Form::selectRange('year',date('Y'),date('Y')+3) !!} {!! Form::label(null, 'End Date',$labelAttributes) !!} {!! Form::selectMonth('month',date('m')) !!} {!! Form::selectRange('date',1,31,date('d')) !!} {!! Form::selectRange('year',date('Y'), date('Y')+3,date('Y')) !!} {!! Form::submit('Reserve', ['class'=>'btn btn-primary', 'style'=>'margin-right: 15px;']) !!} {!! Form::close() !!} Month select Firstly, in the selectMonth method, the first parameter is the name of the input attribute, while the second attribute is the default value. Here, the PHP date method is used to extract the numeric portion of the current month—March in this case: {!! Form::selectMonth('month',date('m')) !!} The output, shown here formatted, is as follows: <select name="month"> <option value="1">January</option> <option value="2">February</option> <option value="3" selected="selected">March</option> <option value="4">April</option> <option value="5">May</option> <option value="6">June</option> <option value="7">July</option> <option value="8">August</option> <option value="9">September</option> <option value="10">October</option> <option value="11">November</option> <option value="12">December</option> </select> Date select A similar technique is applied for the selection of the date, but using the selectRange method, the range of the days in the month are passed to the method. Similarly, the PHP date function is used to send the current date to the method as the fourth parameter: {!! 
Form::selectRange('date',1,31,date('d')) !!} Here is the formatted output: <select name="date"> <option value="1">1</option> <option value="2">2</option> <option value="3">3</option> <option value="4">4</option> ... <option value="28">28</option> <option value="29">29</option> <option value="30" selected="selected">30</option> <option value="31">31</option> </select> The date that should be selected is 30, since today is March 30, 2015. For the months that do not have 31 days, usually a JavaScript method would be used to modify the number of days based on the month and/or the year. Year select The same technique that is used for the date range is applied for the selection of the year; once again, using the selectRange method. The range of the years is passed to the method. The PHP date function is used to send the current year to the method as the fourth parameter: {!! Form::selectRange('year',date('Y'),date('Y')+3,date('Y')) !!} Here is the formatted output: <select name="year"> <option value="2015" selected="selected">2015</option> <option value="2016">2016</option> <option value="2017">2017</option> <option value="2018">2018</option> </select> Here, the current year that is selected is 2015. Form macros We have the same code that generates our month, date, and year selection form block two times: once for the start date and once for the end date. To refactor the code, we can apply the DRY (don't repeat yourself) principle and create a form macro. This will allow us to avoid calling the form element creation method twice, as follows: <?php Form::macro('monthDayYear',function($suffix='') { echo Form::selectMonth(($suffix!=='')?'month- '.$suffix:'month',date('m')); echo Form::selectRange(($suffix!=='')?'date- '.$suffix:'date',1,31,date('d')); echo Form::selectRange(($suffix!=='')?'year- '.$suffix:'year',date('Y'),date('Y')+3,date('Y')); }); ?> Here, the month, date, and year generation code is placed into a macro, which is inside the PHP tags, and it is necessary to add echo to print out the result. The monthDayYear name is given to this macro method. Calling our macro two times: once after each label; each time adding a different suffix via the $suffix variable. Now, our form code looks like this: <?php Form::macro('monthDayYear',function($suffix='') { echo Form::selectMonth(($suffix!=='')?'month- '.$suffix:'month',date('m')); echo Form::selectRange(($suffix!=='')?'date- '.$suffix:'date',1,31,date('d')); echo Form::selectRange(($suffix!=='')?'year- '.$suffix:'year',date('Y'),date('Y')+3,date('Y')); }); ?> {!! Form::open(['class'=>'form-horizontal', 'role'=>'form', 'method'=>'POST', 'url'=>'/reserve-room']) !!} {!! Form::label(null, 'Start Date',$labelAttributes) !!} {!! Form::monthDayYear('-start') !!} {!! Form::label(null, 'End Date',$labelAttributes) !!} {!! Form::monthDayYear('-end') !!} {!! Form::submit('Reserve',['class'=>'btn btn-primary', 'style'=>'margin-right: 15px;']) !!} {!! Form::close() !!} Conclusion The choice to include the HTML form generation package in Laravel 5 can ease the burden of having to create numerous HTML forms. This approach allows developers to use methods, create reusable macros, and use a familiar Laravel approach to build the frontend. Once the basic methods are learned, it is very easy to simply copy and paste the previously created form elements, and then change their element's name and/or the array that is sent to them. Depending on the size of the project, this approach may or may not be the right choice. 
For a very small application, the difference in the amount of code that needs to be written is not very evident, although, as is the case with the selectMonth and selectRange methods, the amount of code necessary is drastic. This technique, combined with the use of macros, makes it easy to reduce the occurrence of copy duplication. Also, one of the major problems with the frontend design is that the contents of the class of the various elements may need to change throughout the entire application. This would mean performing a large find and replace operation, where changes are required to be made to HTML, such as changing class attributes. By creating an array of attributes, including class, for similar elements, changes made to the entire form can be performed simply by modifying the array that those elements use. In a larger project, however, where parts of forms may be repeated throughout the application, the wise use of macros can easily reduce the amount of code necessary to be written. Not only this, but macros can isolate the code inside from changes that would require more than one block of code to be changed throughout multiple files. In the example, where the month, date, and year is to be selected, it is possible that this could be used up to 20 times in a large application. Any changes made to the desired block of HTML can be simply done to the macro and the result would be reflected in all of the elements that use it. Ultimately, the choice of whether or not to use this package will reside with the developer and the designer. Since a designer who wants to use an alternative frontend design tool may not prefer, nor be competent, to work with the methods in the package, he or she may want to not use it. Summary The construction of the master template was explained and then the form components, such as the various form input types, were shown through examples. Finally, the construction of a form for the room reservation, was explained, as well as a "do not repeat yourself" form macro creation technique. Resources for Article: Further resources on this subject: Eloquent… without Laravel! [Article] Laravel 4 - Creating a Simple CRUD Application in Hours [Article] Exploring and Interacting with Materials using Blueprints [Article]

Working with Gradle

Packt
11 Aug 2015
18 min read
In this article by Mainak Mitra, author of the book Mastering Gradle, we cover some plugins such as War and Scala, which will be helpful in building web applications and Scala applications. Additionally, we will discuss diverse topics such as Property Management, Multi-Project build, and logging aspects. In the Multi-project build section, we will discuss how Gradle supports multi-project build through the root project's build file. It also provides the flexibility of treating each module as a separate project, plus all the modules together like a single project. (For more resources related to this topic, see here.) The War plugin The War plugin is used to build web projects, and like any other plugin, it can be added to the build file by adding the following line: apply plugin: 'war' War plugin extends the Java plugin and helps to create the war archives. The war plugin automatically applies the Java plugin to the build file. During the build process, the plugin creates a war file instead of a jar file. The war plugin disables the jar task of the Java plugin and adds a default war archive task. By default, the content of the war file will be compiled classes from src/main/java; content from src/main/webapp and all the runtime dependencies. The content can be customized using the war closure as well. In our example, we have created a simple servlet file to display the current date and time, a web.xml file and a build.gradle file. The project structure is displayed in the following screenshot: Figure 6.1 The SimpleWebApp/build.gradle file has the following content: apply plugin: 'war'   repositories { mavenCentral() }   dependencies { providedCompile "javax.servlet:servlet-api:2.5" compile("commons-io:commons-io:2.4") compile 'javax.inject:javax.inject:1' } The war plugin adds the providedCompile and providedRuntime dependency configurations on top of the Java plugin. The providedCompile and providedRuntime configurations have the same scope as compile and runtime respectively, but the only difference is that the libraries defined in these configurations will not be a part of the war archive. In our example, we have defined servlet-api as the providedCompile time dependency. So, this library is not included in the WEB-INF/lib/ folder of the war file. This is because this library is provided by the servlet container such as Tomcat. So, when we deploy the application in a container, it is added by the container. You can confirm this by expanding the war file as follows: SimpleWebApp$ jar -tvf build/libs/SimpleWebApp.war    0 Mon Mar 16 17:56:04 IST 2015 META-INF/    25 Mon Mar 16 17:56:04 IST 2015 META-INF/MANIFEST.MF    0 Mon Mar 16 17:56:04 IST 2015 WEB-INF/    0 Mon Mar 16 17:56:04 IST 2015 WEB-INF/classes/    0 Mon Mar 16 17:56:04 IST 2015 WEB-INF/classes/ch6/ 1148 Mon Mar 16 17:56:04 IST 2015 WEB-INF/classes/ch6/DateTimeServlet.class    0 Mon Mar 16 17:56:04 IST 2015 WEB-INF/lib/ 185140 Mon Mar 16 12:32:50 IST 2015 WEB-INF/lib/commons-io-2.4.jar 2497 Mon Mar 16 13:49:32 IST 2015 WEB-INF/lib/javax.inject-1.jar 578 Mon Mar 16 16:45:16 IST 2015 WEB-INF/web.xml Sometimes, we might need to customize the project's structure as well. For example, the webapp folder could be under the root project folder, not in the src folder. The webapp folder can also contain new folders such as conf and resource to store the properties files, Java scripts, images, and other assets. We might want to rename the webapp folder to WebContent. 
The proposed directory structure might look like this: Figure 6.2 We might also be interested in creating a war file with a custom name and version. Additionally, we might not want to copy any empty folder such as images or js to the war file. To implement these new changes, add the additional properties to the build.gradle file as described here. The webAppDirName property sets the new webapp folder location to the WebContent folder. The war closure defines properties such as version and name, and sets the includeEmptyDirs option as false. By default, includeEmptyDirs is set to true. This means any empty folder in the webapp directory will be copied to the war file. By setting it to false, the empty folders such as images and js will not be copied to the war file. The following would be the contents of CustomWebApp/build.gradle: apply plugin: 'war'   repositories { mavenCentral() } dependencies { providedCompile "javax.servlet:servlet-api:2.5" compile("commons-io:commons-io:2.4") compile 'javax.inject:javax.inject:1' } webAppDirName="WebContent"   war{ baseName = "simpleapp" version = "1.0" extension = "war" includeEmptyDirs = false } After the build is successful, the war file will be created as simpleapp-1.0.war. Execute the jar -tvf build/libs/simpleapp-1.0.war command and verify the content of the war file. You will find the conf folder is added to the war file, whereas images and js folders are not included. You might also find the Jetty plugin interesting for web application deployment, which enables you to deploy the web application in an embedded container. This plugin automatically applies the War plugin to the project. The Jetty plugin defines three tasks; jettyRun, jettyRunWar, and jettyStop. Task jettyRun runs the web application in an embedded Jetty web container, whereas the jettyRunWar task helps to build the war file and then run it in the embedded web container. Task jettyStopstops the container instance. For more information please refer to the Gradle API documentation. Here is the link: https://docs.gradle.org/current/userguide/war_plugin.html. The Scala plugin The Scala plugin helps you to build the Scala application. Like any other plugin, the Scala plugin can be applied to the build file by adding the following line: apply plugin: 'scala' The Scala plugin also extends the Java plugin and adds a few more tasks such as compileScala, compileTestScala, and scaladoc to work with Scala files. The task names are pretty much all named after their Java equivalent, simply replacing the java part with scala. The Scala project's directory structure is also similar to a Java project structure where production code is typically written under src/main/scala directory and test code is kept under the src/test/scala directory. Figure 6.3 shows the directory structure of a Scala project. You can also observe from the directory structure that a Scala project can contain a mix of Java and Scala source files. The HelloScala.scala file has the following content. The output is Hello, Scala... on the console. This is a very basic code and we will not be able to discuss much detail on the Scala programming language. We request readers to refer to the Scala language documentation available at http://www.scala-lang.org/. 
package ch6   object HelloScala {    def main(args: Array[String]) {      println("Hello, Scala...")    } } To support the compilation of Scala source code, Scala libraries should be added in the dependency configuration: dependencies { compile('org.scala-lang:scala-library:2.11.6') } Figure 6.3 As mentioned, the Scala plugin extends the Java plugin and adds a few new tasks. For example, the compileScala task depends on the compileJava task and the compileTestScala task depends on the compileTestJava task. This can be understood easily, by executing classes and testClasses tasks and looking at the output. $ gradle classes :compileJava :compileScala :processResources UP-TO-DATE :classes   BUILD SUCCESSFUL $ gradle testClasses :compileJava UP-TO-DATE :compileScala UP-TO-DATE :processResources UP-TO-DATE :classes UP-TO-DATE :compileTestJava UP-TO-DATE :compileTestScala UP-TO-DATE :processTestResources UP-TO-DATE :testClasses UP-TO-DATE   BUILD SUCCESSFUL Scala projects are also packaged as jar files. The jar task or assemble task creates a jar file in the build/libs directory. $ jar -tvf build/libs/ScalaApplication-1.0.jar 0 Thu Mar 26 23:49:04 IST 2015 META-INF/ 94 Thu Mar 26 23:49:04 IST 2015 META-INF/MANIFEST.MF 0 Thu Mar 26 23:49:04 IST 2015 ch6/ 1194 Thu Mar 26 23:48:58 IST 2015 ch6/Customer.class 609 Thu Mar 26 23:49:04 IST 2015 ch6/HelloScala$.class 594 Thu Mar 26 23:49:04 IST 2015 ch6/HelloScala.class 1375 Thu Mar 26 23:48:58 IST 2015 ch6/Order.class The Scala plugin does not add any extra convention to the Java plugin. Therefore, the conventions defined in the Java plugin, such as lib directory and report directory can be reused in the Scala plugin. The Scala plugin only adds few sourceSet properties such as allScala, scala.srcDirs, and scala to work with source set. The following task example displays different properties available to the Scala plugin. 
The following is a code snippet from ScalaApplication/build.gradle: apply plugin: 'java' apply plugin: 'scala' apply plugin: 'eclipse'   version = '1.0'   jar { manifest { attributes 'Implementation-Title': 'ScalaApplication',     'Implementation-Version': version } }   repositories { mavenCentral() }   dependencies { compile('org.scala-lang:scala-library:2.11.6') runtime('org.scala-lang:scala-compiler:2.11.6') compile('org.scala-lang:jline:2.9.0-1') }   task displayScalaPluginConvention << { println "Lib Directory: $libsDir" println "Lib Directory Name: $libsDirName" println "Reports Directory: $reportsDir" println "Test Result Directory: $testResultsDir"   println "Source Code in two sourcesets: $sourceSets" println "Production Code: ${sourceSets.main.java.srcDirs},     ${sourceSets.main.scala.srcDirs}" println "Test Code: ${sourceSets.test.java.srcDirs},     ${sourceSets.test.scala.srcDirs}" println "Production code output:     ${sourceSets.main.output.classesDir} &        ${sourceSets.main.output.resourcesDir}" println "Test code output: ${sourceSets.test.output.classesDir}      & ${sourceSets.test.output.resourcesDir}" } The output of the task displayScalaPluginConvention is shown in the following code: $ gradle displayScalaPluginConvention … :displayScalaPluginConvention Lib Directory: <path>/ build/libs Lib Directory Name: libs Reports Directory: <path>/build/reports Test Result Directory: <path>/build/test-results Source Code in two sourcesets: [source set 'main', source set 'test'] Production Code: [<path>/src/main/java], [<path>/src/main/scala] Test Code: [<path>/src/test/java], [<path>/src/test/scala] Production code output: <path>/build/classes/main & <path>/build/resources/main Test code output: <path>/build/classes/test & <path>/build/resources/test   BUILD SUCCESSFUL Finally, we will conclude this section by discussing how to execute Scala application from Gradle; we can create a simple task in the build file as follows. task runMain(type: JavaExec){ main = 'ch6.HelloScala' classpath = configurations.runtime + sourceSets.main.output +     sourceSets.test.output } The HelloScala source file has a main method which prints Hello, Scala... in the console. The runMain task executes the main method and displays the output in the console: $ gradle runMain .... :runMain Hello, Scala...   BUILD SUCCESSFUL Logging Until now we have used println everywhere in the build script to display the messages to the user. If you are coming from a Java background you know a println statement is not the right way to give information to the user. You need logging. Logging helps the user to classify the categories of messages to show at different levels. These different levels help users to print a correct message based on the situation. For example, when a user wants complete detailed tracking of your software, they can use debug level. Similarly, whenever a user wants very limited useful information while executing a task, they can use quiet or info level. Gradle provides the following different types of logging: Log Level Description ERROR This is used to show error messages QUIET This is used to show limited useful information WARNING This is used to show warning messages LIFECYCLE This is used to show the progress (default level) INFO This is used to show information messages DEBUG This is used to show debug messages (all logs) By default, the Gradle log level is LIFECYCLE. 
The following is the code snippet from LogExample/build.gradle:

task showLogging << {
  println "This is println example"
  logger.error "This is error message"
  logger.quiet "This is quiet message"
  logger.warn "This is WARNING message"
  logger.lifecycle "This is LIFECYCLE message"
  logger.info "This is INFO message"
  logger.debug "This is DEBUG message"
}

Now, execute the following command:

$ gradle showLogging

:showLogging
This is println example
This is error message
This is quiet message
This is WARNING message
This is LIFECYCLE message

BUILD SUCCESSFUL

Here, Gradle has printed all the logger statements up to the lifecycle level (including lifecycle), which is Gradle's default log level. You can also control the log level from the command line:

-q: This shows logs up to the quiet level. It will include error and quiet messages.
-i: This shows logs up to the info level. It will include error, quiet, warning, lifecycle, and info messages.
-s: This prints out the stacktrace for all exceptions.
-d: This prints out all logs and debug information. It is the most expressive log level and will also print all the minor details.

Now, execute gradle showLogging -q:

This is println example
This is error message
This is quiet message

Apart from the log levels, Gradle provides an additional option to print the stack trace in case of any exception. A stack trace is different from debug output: in case of a failure, it allows tracking of all the nested functions called in sequence, up to the point where the stack trace is generated. To verify this, add an assert statement to the preceding task and execute the following:

task showLogging << {
  println "This is println example"
  ..
  assert 1==2
}

$ gradle showLogging -s
……
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':showLogging'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69)
at ….
org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:53)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.AbstractTask.executeWithoutThrowingTaskFailure(AbstractTask.java:305)
...

For stack traces, Gradle provides two options:

-s or --stacktrace: This prints a truncated stacktrace
-S or --full-stacktrace: This prints the full stacktrace

File management

One of the key features of any build tool is I/O operations and how easily you can perform them: reading files, writing files, and directory-related operations. Developers with Ant or Maven backgrounds know how painful and complex it was to handle file and directory operations in older build tools; sometimes you had to write custom tasks and plugins to perform these kinds of operations due to the XML limitations of Ant and Maven. Since Gradle uses Groovy, it makes your life much easier when dealing with files and directory-related operations.

Reading files

Gradle provides simple ways to read files. You just need to use the File API (application programming interface), which provides everything needed to deal with files. The following is a code snippet from FileExample/build.gradle:

task showFile << {
  File file1 = file("readme.txt")
  println file1   // will print the name of the file
  file1.eachLine {
    println it    // will print the contents line by line
  }
}

To read the file, we have used file(<file name>).
This is the default way to reference files in Gradle: the file() method resolves the supplied path against the project directory ($PROJECT_PATH/<filename>), so both relative and absolute references behave predictably. Here, the first println statement prints the name of the file, which is readme.txt. To read the file, Groovy adds the eachLine method to the File API, which reads all the lines of the file one by one. To access a directory, you can use the same File API:

def dir1 = new File("src")
println "Checking directory "+dir1.isFile()      // will return false for a directory
println "Checking directory "+dir1.isDirectory() // will return true for a directory

Writing files

To write to files, you can use either the append method to add content to the end of the file, or overwrite the file using the setText or write methods:

task fileWrite << {
  File file1 = file("readme.txt")

  // will append data at the end
  file1.append("\nAdding new line. \n")

  // will overwrite contents
  file1.setText("Overwriting existing contents")

  // will overwrite contents
  file1.write("Using write method")
}

Creating files/directories

You can create a new file by just writing some text to it:

task createFile << {
  File file1 = new File("newFile.txt")
  file1.write("Using write method")
}

By writing some data to the file, Groovy will automatically create the file if it does not exist. To write content to a file, you can also use the left-shift operator (<<); it will append data at the end of the file:

file1 << "New content"

If you want to create an empty file, you can use the createNewFile() method:

task createNewFile << {
  File file1 = new File("createNewFileMethod.txt")
  file1.createNewFile()
}

A new directory can be created using the mkdir() method. Gradle also allows you to create nested directories in a single call using mkdirs():

task createDir << {
  def dir1 = new File("folder1")
  dir1.mkdir()

  def dir2 = new File("folder2")
  dir2.createTempDir()

  def dir3 = new File("folder3/subfolder31")
  dir3.mkdirs() // to create sub directories in one call
}

In the preceding example, we create directories using mkdir(), createTempDir(), and mkdirs(). The difference is that when we create a directory using createTempDir(), that directory gets deleted automatically once the build script execution is completed.

File operations

We will now look at examples of some frequently used methods for dealing with files, which will help you in build automation:

task fileOperations << {
  File file1 = new File("readme.txt")
  println "File size is "+file1.size()
  println "Checking existence "+file1.exists()
  println "Reading contents "+file1.getText()
  println "Checking directory "+file1.isDirectory()
  println "File length "+file1.length()
  println "Hidden file "+file1.isHidden()

  // File paths
  println "File path is "+file1.path
  println "File absolute path is "+file1.absolutePath
  println "File canonical path is "+file1.canonicalPath

  // Rename file
  file1.renameTo("writeme.txt")

  // File Permissions
  file1.setReadOnly()
  println "Checking read permission "+file1.canRead()+" write permission "+file1.canWrite()
  file1.setWritable(true)
  println "Checking read permission "+file1.canRead()+" write permission "+file1.canWrite()
}

Most of the preceding methods are self-explanatory. Try executing the task and observe the output. If you try to execute the fileOperations task twice, you will get the exception readme.txt (No such file or directory), since the file has been renamed to writeme.txt.
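Besides the raw File API, Gradle itself ships with higher-level file operations, such as the Copy task type. The following is a small sketch, reusing the writeme.txt file from the previous example and an assumed backup directory, of how a copy could be wired into the build:

task backupNotes(type: Copy) {
  from 'writeme.txt'          // the file renamed by the fileOperations task
  into "$buildDir/backup"     // the target directory is created if it does not exist
}

Running gradle backupNotes copies the file into build/backup; the same from/into pattern also accepts directories and include/exclude filters.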
Filter files

Certain file methods allow you to pass a regular expression as an argument. Regular expressions can be used to filter only the required data, rather than fetching all of it. The following is an example of the eachFileMatch() method, which will list only the Groovy files in a directory:

task filterFiles << {
  def dir1 = new File("dir1")
  dir1.eachFileMatch(~/.*\.groovy/) {
    println it
  }
  dir1.eachFileRecurse { dir ->
    if(dir.isDirectory()) {
      dir.eachFileMatch(~/.*\.groovy/) {
        println it
      }
    }
  }
}

The output is as follows:

$ gradle filterFiles

:filterFiles
dir1/groovySample.groovy
dir1/subdir1/groovySample1.groovy
dir1/subdir2/groovySample2.groovy
dir1/subdir2/subDir3/groovySample3.groovy

BUILD SUCCESSFUL

Delete files and directories

Gradle provides the delete() and deleteDir() APIs to delete files and directories, respectively:

task deleteFile << {
  def dir2 = new File("dir2")
  def file1 = new File("abc.txt")
  file1.createNewFile()
  dir2.mkdir()
  println "File path is "+file1.absolutePath
  println "Dir path is "+dir2.absolutePath
  file1.delete()
  dir2.deleteDir()
  println "Checking file(abc.txt) existence: "+file1.exists()+" and Directory(dir2) existence: "+dir2.exists()
}

The output is as follows:

$ gradle deleteFile
:deleteFile
File path is Chapter6/FileExample/abc.txt
Dir path is Chapter6/FileExample/dir2
Checking file(abc.txt) existence: false and Directory(dir2) existence: false

BUILD SUCCESSFUL

The preceding task creates a directory, dir2, and a file, abc.txt. It then prints their absolute paths and finally deletes them. You can verify that they were deleted properly by calling the exists() function.

FileTree

Until now, we have dealt with single-file operations. Gradle also provides plenty of user-friendly APIs to deal with file collections. One such API is FileTree. A FileTree represents a hierarchy of files or directories. It extends the FileCollection interface, and several objects in Gradle, such as sourceSets, implement the FileTree interface. You can initialize a FileTree with the fileTree() method. The following are the different ways you can initialize fileTree:

task fileTreeSample << {
  FileTree fTree = fileTree('dir1')
  fTree.each {
    println it.name
  }
  FileTree fTree1 = fileTree('dir1') {
    include '**/*.groovy'
  }
  println ""
  fTree1.each {
    println it.name
  }
  println ""
  FileTree fTree2 = fileTree(dir:'dir1', excludes:['**/*.groovy'])
  fTree2.each {
    println it.absolutePath
  }
}

Execute the gradle fileTreeSample command and observe the output. The first iteration prints all the files in dir1. The second iteration only includes Groovy files (with the .groovy extension). The third iteration excludes Groovy files and prints the remaining files with their absolute paths. You can also use FileTree to read contents from archive files, such as ZIP, JAR, or TAR files:

FileTree jarFile = zipTree('SampleProject-1.0.jar')
jarFile.each {
  println it.name
}

The preceding code snippet lists all the files contained in the jar file.

Summary

In this article, we explored different Gradle topics such as I/O operations, logging, multi-project builds, and testing with Gradle. We also learned how easy it is to generate assets for web applications and Scala projects with Gradle. In the Testing with Gradle section, we learned the basics of executing tests with JUnit and TestNG. In the next article, we will look at the code quality aspects of a Java project and analyze a few Gradle plugins, such as Checkstyle and Sonar.
Apart from learning these plugins, we will discuss another topic, Continuous Integration. These two topics will be combined and presented through an exploration of two different continuous integration servers, namely Jenkins and TeamCity.

Resources for Article:

Further resources on this subject:
Speeding up Gradle builds for Android [article]
Defining Dependencies [article]
Testing with the Android SDK [article]

Scaling influencers

Packt
11 Aug 2015
27 min read
In this article, written by Adam Boduch, author of the book JavaScript at Scale, we will see that we don't scale our software systems just because we can. While it's common to tout scalability, these claims need to be put into practice. In order to do so, there has to be a reason for scalable software. If there's no need to scale, then it's much easier, not to mention cost-effective, to simply build a system that doesn't scale. Putting something that was built to handle a wide variety of scaling issues into a context where scale isn't warranted just feels clunky. Especially to the end user. So we, as JavaScript developers and architects, need to acknowledge and understand the influences that necessitate scalability. While it's true that not all JavaScript applications need to scale today, that may not always remain the case. For example, it's difficult to say with confidence that a system isn't going to need to scale in any meaningful way, and therefore decide not to invest the time and effort to make it scalable. Unless we're developing a throw-away system, there are always going to be expectations of growth and success. At the opposite end of the spectrum, JavaScript applications aren't born as mature scalable systems. They grow up, accumulating scalable properties along the way. Scaling influencers are an effective tool for those of us working on JavaScript projects. We don't want to over-engineer something straight from inception, and we don't want to build something that's tied down by early decisions, limiting its ability to scale.

(For more resources related to this topic, see here.)

The need for scale

Scaling software is a reactive event. Thinking about scaling influencers helps us proactively prepare for these scaling events. In other systems, such as web application backends, these scaling events may be brief spikes, and are generally handled automatically. For example, there's an increased load due to more users issuing more requests. The load balancer kicks in and distributes the load evenly across backend servers. In the extreme case, the system may automatically provision new backend resources when needed, and destroy them when they're no longer of use. Scaling events in the frontend aren't like that. Rather, the scaling events that take place generally happen over longer periods of time, and are more complex. The unique aspect of JavaScript applications is that the only hardware resources available to them are those available to the browser in which they run. They get their data from the backend, and this may scale up perfectly fine, but that's not what we're concerned with. As our software grows, which is a necessary side effect of doing something successfully, we need to pay attention to the influencers of scale. The preceding figure shows us a top-down flow chart of scaling influencers, starting with users, who require that our software implements features. Depending on various aspects of the features, such as their size and how they relate to other features, this influences the team of developers working on features. As we move down through the scaling influencers, this grows.

Growing user base

We're not building an application for just one user. If we were, there would be no need to scale our efforts. While what we build might be based on the requirements of one user representative, our software serves the needs of many users. We need to anticipate a growing user base as our application evolves.
There's no exact target user count, although, depending on the nature of our application, we may set goals for the number of active users, possibly by benchmarking similar applications using a tool such as http://www.alexa.com/. For example, if our application is exposed on the public internet, we want lots of registered users. On the other hand, we might target private installations, and there, the number of users joining the system is a little slower. But even in the latter case, we still want the number of deployments to go up, increasing the total number of people using our software. The number of users interacting with our frontend is the largest influencer of scale. With each user added, along with the various architectural perspectives, growth happens exponentially. If you look at it from a top-down point of view, users call the shots. At the end of the day, our application exists to serve them. The better we're able to scale our JavaScript code, the more users we'll please. Building new features Perhaps the most obvious side-effect of successful software with a strong user base is the features necessary to keep those users happy. The feature set grows along with the users of the system. This is often overlooked by projects, despite the obviousness of new features. We know they're coming, yet, little thought goes into how the endless stream of features going into our code impedes our ability to scale up our efforts. This is especially tricky when the software is in its infancy. The organization developing the software will bend over backwards to reel in new users. And there's little consequence of doing so in the beginning because the side-effects are limited. There's not a lot of mature features, there's not a huge development team, and there's less chance of annoying existing users by breaking something that they've come to rely on. When these factors aren't there, it's easier for us to nimbly crank out the features and dazzle existing/prospective users. But how do we force ourselves to be mindful of these early design decisions? How do we make sure that we don't unnecessarily limit our ability to scale the software up, in terms of supporting more features? New feature development, as well as enhancing existing features, is an ongoing issue with scalable JavaScript architecture. It's not just the number of features listed in the marketing literature of our software that we need to be concerned about . There's also the complexity of a given feature, how common our features are with one another, and how many moving parts each of these features has. If the user is the first level when looking at JavaScript architecture from a top-down perspective, each feature is the next level, and from there, it expands out into enormous complexity. It's not just the individual users who make a given feature complex. Instead, it's a group of users that all need the same feature in order to use our software effectively. And from there, we have to start thinking about personas, or roles, and which features are available for which roles. The need for this type of organizational structure isn't made apparent till much later on in the game; after we've made decisions that make it difficult to introduce role-based feature delivery. And depending on how our software is deployed, we may have to support a variety of unique use cases. 
For example, if we have several large organizations as our customers, each with their own deployments, they'll likely have their own unique constraints on how users are structured. This is challenging, and our architecture needs to support the disparate needs of many organizations, if we're going to scale. Hiring more developers Making these features a reality requires solid JavaScript developers who know what they're doing, and if we're lucky, we'll be able to hire a team of them. The team part doesn't happen automatically. There's a level of trust and respect that needs to be established before the team members begin to actively rely on one another to crank out some awesome code. Once that starts happening, we're in good shape. Turning once again to the top-down perspective of our scaling influencers, the features we deliver can directly impact the health of our team. There's a balance that's essentially impossible to maintain, but we can at least get close. Too many features and not enough developers lead to a sense of perpetual inadequacy among team members. When there's no chance of delivering what's expected, there's not much sense in trying. On the other hand, if you have too many developers, and there's too much communication overhead due to a limited number of features, it's tough to define responsibilities. When there's no shared understanding of responsibilities, things start to break down. It's actually easier to deal with not enough developers for the features we're trying to develop, than having too many developers. When there's a large burden of feature development, it's a good opportunity to step back and think—"what would we do differently if we had more developers?" This question usually gets skipped. We go hire more developers, and when they arrive, it's to everyone's surprise that there's no immediate improvement in feature throughput. This is why it's best to have an open development culture where there are no stupid questions, and where responsibilities are defined. There's no one correct team structure or development methodology. The development team needs to apply itself to the issues faced by the software we're trying to deliver. The biggest hurdle is for sure the number, size, and complexity of features. So that's something we need to consider when forming our team initially, as well as when growing the team. This latter point is especially true because the team structure we used way back when the software was new isn't going to fit what we face when the features scale up. Architectural perspectives The preceding section was a sampling of the factors that influence scale in JavaScript applications. Starting from the top, each of these influencers affects the influencer below it. The number and nature of our users is the first and foremost influencer, and this has a direct impact on the number and nature of the features we develop. Further more, the size of the development team, and the structure of that team, are influenced by these features. Our job is to take these influencers of scale, and translate them into factors to consider from an architectural perspective: Scaling influences the perspectives of our architecture. Our architecture, in turn, determines responses to scaling influencers. The process is iterative and never-ending throughout the lifetime of our software. The browser is a unique environment Scaling up in the traditional sense doesn't really work in a browser environment. 
When backend services are overwhelmed by demand, it's common to "throw more hardware" at the problem. Easier said than done of course, but it's a lot easier to scale up our data services these days, compared to 20 years ago. Today's software systems are designed with scalability in mind. It's helpful to our frontend application if the backend services are always available and always responsive, but that's just a small portion of the issues we face. We can't throw more hardware at the web browsers running our code; given that; the time and space complexities of our algorithms are important. Desktop applications generally have a set of system requirements for running the software, such as OS version, minimum memory, minimum CPU, and so on. If we were to advertise requirements such as these in our JavaScript applications, our user base would shrink dramatically, and possibly generate some hate mail. The expectation that browser-based web applications be lean and fast is an emergent phenomenon. Perhaps, that's due in part to the competition we face. There are a lot of bloated applications out there, and whether they're used in the browser or natively on the desktop, users know what bloat feels like, and generally run the other way: JavaScript applications require many resources, all of different types; these are all fetched by the browser, on the application's behalf. Adding to our trouble is the fact that we're using a platform that was designed as a means to download and display hypertext, to click on a link, and repeat. Now we're doing the same thing, except with full-sized applications. Multi-page applications are slowly being set aside in favor of single-page applications. That being said, the application is still treated as though it were a web page. Despite all that, we're in the midst of big changes. The browser is a fully viable web platform, the JavaScript language is maturing, and there are numerous W3C specifications in progress; they assist with treating our JavaScript more like an application and less like a document. Take a look at the following diagram: A sampling of the technologies found in the growing web platform We use architectural perspectives to assess any architectural design we come up with. It's a powerful technique to examine our design through a different lens. JavaScript architecture is no different, especially for those that scale. The difference between JavaScript architecture and architecture for other environments is that ours have unique perspectives. The browser environment requires that we think differently about how we design, build, and deploy applications. Anything that runs in the browser is transient by nature, and this changes software design practices that we've taken for granted over the years. Additionally, we spend more time coding our architectures than diagramming them. By the time we sketch anything out, it's been superseded by another specification or another tool. Component design At an architectural level, components are the main building blocks we work with. These may be very high-level components with several levels of abstraction. Or, they could be something exposed by a framework we're using, as many of these tools provide their own idea of "components". When we first set out to build a JavaScript application with scale in mind, the composition of our components began to take shape. How our components are composed is a huge limiting factor in how we scale, because they set the standard. 
Components implement patterns for the sake of consistency, and it's important to get those patterns right: Components have an internal structure. The complexity of this composition depends on the type of component under consideration As we'll see, the design of our various components is closely-tied to the trade-offs we make in other perspectives. And that's a good thing, because it means that if we're paying attention to the scalable qualities we're after, we can go back and adjust the design of our components in order to meet those qualities. Component communication Components don't sit in the browser on their own. Components communicate with one another all the time. There's a wide variety of communication techniques at our disposal here. Component communication could be as simple as method invocation, or as complex as an asynchronous publish-subscribe event system. The approach we take with our architecture depends on our more specific goals. The challenge with components is that we often don't know what the ideal communication mechanism will be, till after we've started implementing our application. We have to make sure that we can adjust the chosen communication path: The component communication mechanism decouples components, enabling scalable structures Seldom will we implement our own communication mechanism for our components. Not when so many tools exist, that solve at least part of the problem for us. Most likely, we'll end up with a concoction of an existing tool for communication and our own implementation specifics. What's important is that the component communication mechanism is its own perspective, which can be designed independently of the components themselves. Load time JavaScript applications are always loading something. The biggest challenge is the application itself, loading all the static resources it needs to run, before the user is allowed to do anything. Then there's the application data. This needs to be loaded at some point, often on demand, and contributes to the overall latency experienced by the user. Load time is an important perspective, because it hugely contributes to the overall perception of our product quality. The initial load is the user's first impression and this is where most components are initialized; it's tough to get the initial load to be fast without sacrificing performance in other areas There's lots we can do here to offset the negative user experience of waiting for things to load. This includes utilizing web specifications that allow us to treat applications and the services they use as installable components in the web browser platform. Of course, these are all nascent ideas, but worth considering as they mature alongside our application. Responsiveness The second part of the performance perspective of our architecture is concerned with responsiveness. That is, after everything has loaded, how long does it take for us to respond to user input? Although this is a separate problem from that of loading resources from the backend, they're still closely-related. Often, user actions trigger API requests, and the techniques we employ to handle these workflows impact user-perceived responsiveness. User-perceived responsiveness is affected by the time taken by our components to respond to DOM events; a lot can happen in between the initial DOM event and when we finally notify the user by updating the DOM. Because of this necessary API interaction, user-perceived responsiveness is important. 
While we can't make the API go any faster, we can take steps to ensure that the user always has feedback from the UI and that feedback is immediate. Then, there's the responsiveness of simply navigating around the UI, using cached data that's already been loaded, for example. Every other architectural perspective is closely-tied to the performance of our JavaScript code, and ultimately, to the user-perceived responsiveness. This perspective is a subtle sanity-check for the design of our components and their chosen communication paths. Addressability Just because we're building a single-page application doesn't mean we no longer care about addressable URIs. This is perhaps the crowning achievement of the web— unique identifiers that point to the resource we want. We paste them in to our browser address bar and watch the magic happen. Our application most certainly has addressable resources, we just point to them differently. Instead of a URI that's parsed by the backend web server, where the page is constructed and sent back to the browser, it's our local JavaScript code that understands the URI: Components listen to routers for route events and respond accordingly. A changing browser URI triggers these events. Typically, these URIs will map to an API resource. When the user hits one of these URIs in our application, we'll translate the URI into another URI that's used to request backend data. The component we use to manage these application URIs is called a router, and there's lots of frameworks and libraries with a base implementation of a router. We'll likely use one of these. The addressability perspective plays a major role in our architecture, because ensuring that the various aspects of our application have an addressable URI complicates our design. However, it can also make things easier if we're clever about it. We can have our components utilize the URIs in the same way a user utilizes links. Configurability Rarely does software do what you need it to straight out of the box. Highly-configurable software systems are touted as being good software systems. Configuration in the frontend is a challenge because there's several dimensions of configuration, not to mention the issue of where we store these configuration options. Default values for configurable components are problematic too—where do they come from? For example, is there a default language setting that's set until the user changes it? As is often the case, different deployments of our frontend will require different default values for these settings: Component configuration values can come from the backend server, or from the web browser. Defaults must reside somewhere Every configurable aspect of our software complicates its design. Not to mention the performance overhead and potential bugs. So, configurability is a large issue, and it's worth the time spent up-front discussing with various stakeholders what they value in terms of configurability. Depending on the nature of our deployment, users may value portability with their configuration. This means that their values need to be stored in the backend, under their account settings. Obviously decisions like these have backend design implications, and sometimes it's better to get away with approaches that don't require a modified backend service. Making architectural trade-offs There's a lot to consider from the various perspectives of our architecture, if we're going to build something that scales. 
We'll never get everything that we need out of every perspective simultaneously. This is why we make architectural trade-offs—we trade one aspect of our design for another more desirable aspect. Defining your constants Before we start making trade-offs, it's important to state explicitly what cannot be traded. What aspects of our design are so crucial to achieving scale that they must remain constant? For instance, a constant might be the number of entities rendered on a given page, or a maximum level of function call indirection. There shouldn't be a ton of these architectural constants, but they do exist. It's best if we keep them narrow in scope and limited in number. If we have too many strict design principles that cannot be violated or otherwise changed to fit our needs, we won't be able to easily adapt to changing influencers of scale. Does it make sense to have constant design principles that never change, given the unpredictability of scaling influencers? It does, but only once they emerge and are obvious. So this may not be an up-front principle, though we'll often have at least one or two up-front principles to follow. The discovery of these principles may result from the early refactoring of code or the later success of our software. In any case, the constants we use going forward must be made explicit and be agreed upon by all those involved. Performance for ease of development Performance bottlenecks need to be fixed, or avoided in the first place where possible. Some performance bottlenecks are obvious and have an observable impact on the user experience. These need to be fixed immediately, because it means our code isn't scaling for some reason, and might even point to a larger design issue. Other performance issues are relatively small. These are generally noticed by developers running benchmarks against code, trying by all means necessary to improve the performance. This doesn't scale well, because these smaller performance bottlenecks that aren't observable by the end user are time-consuming to fix. If our application is of a reasonable size, with more than a few developers working on it, we're not going to be able to keep up with feature development if everyone's fixing minor performance problems. These micro-optimizations introduce specialized solutions into our code, and they're not exactly easy reading for other developers. On the other hand, if we let these minor inefficiencies go, we will manage to keep our code cleaner and thus easier to work with. Where possible, trade off optimized performance for better code quality. This improves our ability to scale from a number of perspectives. Configurability for performance It's nice to have generic components where nearly every aspect is configurable. However, this approach to component design comes at a performance cost. It's not noticeable at first, when there are few components, but as our software scales in feature count, the number of components grows, and so does the number of configuration options. Depending on the size of each component (its complexity, number of configuration options, and so forth) the potential for performance degradation increases exponentially. Take a look at the following diagram: The component on the left has twice as many configuration options as the component on the right. It's also twice as difficult to use and maintain. We can keep our configuration options around as long as there're no performance issues affecting our users. 
Just keep in mind that we may have to remove certain options in an effort to remove performance bottlenecks. It's unlikely that configurability is going to be our main source of performance issues. It's also easy to get carried away as we scale and add features. We'll find, retrospectively, that we created configuration options at design time that we thought would be helpful, but turned out to be nothing but overhead. Trade off configurability for performance when there's no tangible benefit to having the configuration option.

Performance for substitutability

A problem related to that of configurability is substitutability. Our user interface performs well, but as our user base grows and more features are added, we discover that certain components cannot easily be substituted with another. This can be a developmental problem, where we want to design a new component to replace something pre-existing. Or perhaps we need to substitute components at runtime. Our ability to substitute components lies mostly with the component communication model. If the new component is able to send and receive messages and events the same way as the existing component, then it's a fairly straightforward substitution. However, not all aspects of our software are substitutable. In the interest of performance, there may not even be a component to replace. As we scale, we may need to refactor larger components into smaller components that are replaceable. By doing so, we're introducing a new level of indirection, and a performance hit. Trade off minor performance penalties to gain substitutability that aids in other aspects of scaling our architecture.

Ease of development for addressability

Assigning addressable URIs to resources in our application certainly makes implementing features more difficult. Do we actually need URIs for every resource exposed by our application? Probably not. For the sake of consistency though, it would make sense to have URIs for almost every resource. If we don't have a router and URI generation scheme that's consistent and easy to follow, we're more likely to skip implementing URIs for certain resources. It's almost always better to have the added burden of assigning URIs to every resource in our application than to skip out on URIs. Or worse still, not supporting addressable resources at all. URIs make our application behave like the rest of the Web; the training ground for all our users. For example, perhaps URI generation and routes are a constant for anything in our application—a trade-off that cannot happen. Trade off ease of development for addressability in almost every case. The ease of development problem with regard to URIs can be tackled in more depth as the software matures.

Maintainability for performance

The ease with which features are developed in our software boils down to the development team and its scaling influencers. For example, we could face pressure to hire entry-level developers for budgetary reasons. How well this approach scales depends on our code. When we're concerned with performance, we're likely to introduce all kinds of intimidating code that relatively inexperienced developers will have trouble swallowing. Obviously, this impedes the ease of developing new features, and if it's difficult, it takes longer. This obviously does not scale with respect to customer demand. Developers don't always have to struggle with understanding the unorthodox approaches we've taken to tackle performance bottlenecks in specific areas of the code.
We can certainly help the situation by writing quality code that's understandable. Maybe even documentation. But we won't get all of this for free; if we're to support the team as a whole as it scales, we need to pay the productivity penalty in the short term for having to coach and mentor. Trade off ease of development for performance in critical code paths that are heavily utilized and not modified often. We can't always escape the ugliness required for performance purposes, but if it's well hidden, we'll gain the benefit of the more common code being comprehensible and self-explanatory. For example, low-level JavaScript libraries perform well and have a cohesive API that's easy to use. But if you look at some of the underlying code, it isn't pretty. That's our gain—having someone else maintain code that's ugly for performance reasons. Our components on the left follow coding styles that are consistent and easy to read; they all utilize the high-performance library on the right, giving our application performance while isolating optimized code that's difficult to read and understand.

Less features for maintainability

When all else fails, we need to take a step back and look holistically at the feature set of our application. Can our architecture support all of these features? Is there a better alternative? Scrapping an architecture that we've sunk many hours into almost never makes sense—but it does happen. The majority of the time, however, we'll be asked to introduce a challenging set of features that violate one or more of our architectural constants. When that happens, we're disrupting stable features that already exist, or we're introducing something of poor quality into the application. Neither case is good, and it's worth the time, the headache, and the cursing to work with the stakeholders to figure out what has to go. If we've taken the time to figure out our architecture by making trade-offs, we should have a sound argument for why our software can't support hundreds of features. When an architecture is full, we can't continue to scale. The key is understanding where that breaking threshold lies, so we can better understand and communicate it to stakeholders.

Leveraging frameworks

Frameworks exist to help us implement our architecture using a cohesive set of patterns. There's a lot of variety out there, and choosing a framework is a combination of personal taste and fitness for our design. For example, one JavaScript application framework will do a lot for us out of the box, while another has even more features, many of which we don't need. JavaScript application frameworks vary in size and sophistication. Some come with batteries included, and some tend toward mechanism over policy. None of these frameworks were specifically designed for our application. Any purported ability of a framework needs to be taken with a grain of salt. The features advertised by frameworks are applied to a general case, and a simple one at that. Applying them in the context of our architecture is something else entirely. That being said, we can certainly use a given framework of our liking as input to the design process. If we really like the tool, and our team has experience using it, we can let it influence our design decisions. Just as long as we understand that the framework does not automatically respond to scaling influencers—that part is up to us. It's worth the time investigating the framework to use for our project because choosing the wrong framework is a costly mistake.
The realization that we should have gone with something else usually comes after we've implemented lots of functionality. The end result is lots of rewriting, replanning, retraining, and re-documenting. Not to mention the time lost on the first implementation. Choose your frameworks wisely, and be cautious about framework coupling.

Summary

Scaling a JavaScript application isn't the same as scaling other types of applications. Although we can use JavaScript to create large-scale backend services, our concern is with scaling the applications our users interact with in the browser. There are a number of influencers that guide our decision-making process when producing an architecture that scales. We reviewed some of these influencers, and how they flow in a top-down fashion, creating challenges unique to frontend JavaScript development. We examined the effect of more users, more features, and more developers; we can see that there's a lot to think about. While the browser is becoming a powerful platform, onto which we're delivering our applications, it still has constraints not found on other platforms. Designing and implementing a scalable JavaScript application requires having an architecture. What the software must ultimately do is just one input to that design. The scaling influencers are key as well. From there, we address different perspectives of the architecture under consideration. Things such as component composition and responsiveness come into play when we talk about scale. These are observable aspects of our architecture that are impacted by influencers of scale. As these scaling factors change over time, we use architectural perspectives as tools to modify our design, or the product, to align with scaling challenges.

Resources for Article:

Further resources on this subject:
Developing a JavaFX Application for iOS [article]
Deploying a Play application on CoreOS and Docker [article]
Developing Location-based Services with Neo4j [article]

A Typical JavaScript Project

Packt
11 Aug 2015
29 min read
In this article by Phillip Fehre, author of the book JavaScript Domain-Driven Design, we will explore a practical approach to developing software with advanced business logic. There are many strategies to keep development flowing and the code and thoughts organized: there are frameworks building on conventions, different software paradigms such as object orientation and functional programming, and methodologies such as test-driven development. All these pieces solve problems and are like tools in a toolbox to help manage growing complexity in software, but they also mean that today, when starting something new, there are loads of decisions to make even before we get started at all. Do we want to develop a single-page application? Do we want to follow the standards of a framework closely, or do we want to set our own? These kinds of decisions are important, but they also largely depend on the context of the application, and in most cases the best answer to the questions is: it depends.

(For more resources related to this topic, see here.)

So, how do we really start? Do we really even know what our problem is, and, if we understand it, does this understanding match that of others? Developers are very seldom the domain experts on a given topic. Therefore, the development process needs input from outside, through experts of the business domain, when it comes to specifying the behavior a system should have. Of course, this is not only true for a completely new project developed from the ground up, but can also be applied to any new feature added to an application or product during development. So, even if your project is well on its way already, there will come a time when a new feature just seems to bog the whole thing down and, at this stage, you may want to think about alternative ways to approach this new piece of functionality. Domain-driven design gives us another useful piece to play with, especially to address the need to interact with other developers, business experts, and product owners. In the modern era, JavaScript has become a more and more persuasive choice to build projects in, and in many cases, such as browser-based web applications, it is actually the only viable choice. Today, the need to design software with JavaScript is more pressing than ever. In the past, the issues of more involved software design were focused on either backend or client application development; with the rise of JavaScript as a language to develop complete systems in, this has changed. The development of a JavaScript client in the browser is a complex part of developing the application as a whole, and so is the development of server-side JavaScript applications with the rise of Node.js. In modern development, JavaScript plays a major role and therefore needs to receive the same amount of attention in development practices and processes as other languages and frameworks have in the past. A browser-based client-side application often holds the same amount of logic as the backend, or even more. With this change, a lot of new problems and solutions have arisen, the first being the movement toward better encapsulation and modularization of JavaScript projects. New frameworks have arisen and established themselves as the bases for many projects. Last but not least, JavaScript has made the jump from being the language of the browser to being used more and more on the server side, by means of Node.js, or as the query language of choice in some NoSQL databases.
Let me take you on a tour of developing a piece of software, taking you through the stages of creating an application from start to finish using the concepts that domain-driven design introduces and how they can be interpreted and applied. In this article, you will cover:

The core idea of domain-driven design
Our business scenario—managing an orc dungeon
Tracking the business logic
Understanding the core problem and selecting the right solution
Learning what domain-driven design is

The core idea of domain-driven design

There are many software development methodologies around, all with pros and cons, but all of them have a core idea that must be understood and applied to get the methodology right. For domain-driven design, the core lies in the realization that since we are not the experts in the domain the software is placed in, we need to gather input from other people who are experts. This realization means that we need to optimize our development process to gather and incorporate this input. So, what does this mean for JavaScript? When thinking about a browser application that exposes a certain functionality to a consumer, we need to think about many things, for example:

How does the user expect the application to behave in the browser?
How does the business workflow work?
What does the user know about the workflow?

These three questions already involve three different types of experts: a person skilled in user experience can help with the first query, a business domain expert can address the second query, and a third person can research the target audience and provide input on the last query. Bringing all of this together is the goal we are trying to achieve. While the different types of people matter, the core idea is that the process of getting them involved is always the same. We provide a common way to talk about the process and establish a quick feedback loop for them to review. In JavaScript, this can be easier than in most other languages due to the nature of it being run in a browser, readily available to be modified and prototyped with; an advantage Java Enterprise Applications can only dream of. We can work closely with the user experience designer, adjusting the expected interface while at the same time changing the workflow dynamically to suit our business needs, first on the frontend in the browser and later moving the knowledge out of the prototype to the backend, if necessary.

Managing an orc dungeon

Domain-driven design is often talked about in the context of having complex business logic to deal with. In fact, most software development practices are not really useful when dealing with a very small, cut-out problem. Like with every tool, you need to be clear about when it is the right time to use it. So, what really falls into the realm of complex business logic? It means that the software has to describe a real-world scenario, which normally involves human thinking and interaction. Writing software that deals with decisions that go a certain way 90 per cent of the time and some other way the remaining ten per cent is notoriously hard, especially when explaining it to people not familiar with software. These kinds of decisions are the core of many business problems, but even though this is an interesting problem to solve, following how the next accounting software is developed does not make an interesting read. With this in mind, I would like to introduce you to the problem we are trying to solve, that is, managing a dungeon.
Inside the dungeon

Running an orc dungeon seems pretty simple from the outside, but managing it without getting killed is actually rather complicated. For this reason, we are contacted by an orc master who struggles with keeping his dungeon running smoothly. When we arrive at the dungeon, he explains to us how it actually works and what factors come into play. Even greenfield projects often have some status quo that works. This is important to keep in mind, since it means that we don't have to come up with the feature set, but match the feature set of the current reality. Many outside factors play a role, and the dungeon is not as independent as it would like to be. After all, it is part of the orc kingdom, and the king demands that his dungeons make him money. However, money is just part of the deal. How does it actually make money? The prisoners need to mine gold, and to do that, a certain number of prisoners need to be kept in the dungeon. The way an orc kingdom is run also results in the constant arrival of new prisoners, new captures from war, those who couldn't afford their taxes, and so on. There always needs to be room for new prisoners. The good thing is that every dungeon is interconnected, and to achieve its goals it can rely on others by requesting a prisoner transfer to either fill up free cells or get rid of overflowing prisoners in its cells. These options allow the dungeon masters to keep a close watch on the prisoners being kept and the amount of cell space available. Sending off prisoners to other dungeons as needed, and requesting new ones from other dungeons in case there is too much free cell space available, keeps the mining workforce at an optimal level for maximizing the profit, while at the same time being ready to accommodate the eventual arrival of a high-value inmate sent directly to the dungeon. So far, the explanation is sound, but let's dig a little deeper and see what is going on.

Managing incoming prisoners

Prisoners can arrive for a couple of reasons. For example, a dungeon that is overflowing may decide to transfer some of its inmates to a dungeon with free cells, and unless they flee on the way, they will eventually arrive at our dungeon. Another source of prisoners is the ever expanding orc kingdom itself. The orcs will constantly enslave new folk, and telling our king, "Sorry, we don't have room", is not a valid option; it might actually result in us becoming one of the new prisoners. Looking at this, our dungeon will fill up eventually, and we need to make sure this doesn't happen. The way to handle this is by transferring inmates early enough to make room. This is obviously going to be the most complicated thing; we need to weigh several factors to decide when and how many prisoners to transfer. The reason we can't simply solve this via thresholds is that, looking at the dungeon structure, this is not the only way we can lose inmates. After all, people are not always happy with being gold-mining slaves and may decide that the risk of dying in prison is as high as the risk of dying while fleeing, and therefore attempt to escape. The same is true, and not unlikely, while prisoners are on the move between dungeons. So even though we have a hard limit of physical cells, we need to deal with the soft number of incoming and outgoing prisoners. This is a classical problem in business software. Matching these numbers against each other and optimizing for a certain outcome is basically what computer data analysis is all about.
The current state of the art

With all this in mind, it becomes clear that the orc master's current system of keeping track via a badly written note on a napkin is not perfect. In fact, it has almost got him killed multiple times already. To give you an example of what can happen, he tells the story of how one time the king captured four clan leaders and wanted to make them miners just to humiliate them. However, when arriving at the dungeon, he realized that there was no room and had to travel to the next dungeon to drop them off, all while having them laugh at him because he obviously didn't know how to run a kingdom. This was due to our orc master having forgotten about the arrival of eight transfers just the day before. Another time, the orc master was not able to deliver any gold when the king's sheriff arrived, because he didn't know he only had one-third of the prisoners required to actually mine anything. This time it was due to having multiple people count the inmates, and instead of recording them cell by cell, they actually tried to do it in their heads. For an orc, this is a setup for failure. All this comes down to bad organization, and having your system for managing dungeon inmates drawn on the back of a napkin certainly qualifies as such.

Digital dungeon management

Guided by the recent failures, the orc master has finally realized it is time to move to modern times, and he wants to revolutionize the way he manages his dungeon by making everything digital. He strives to have a system that basically takes the busywork out of managing by automatically calculating the necessary transfers according to the current number of cells filled. He would like to just sit back, relax, and let the computer do all the work for him. A common pattern when talking with a business expert about software is that they are not aware of what can be done. Always remember that we, as developers, are the software experts and therefore are the only ones who are able to manage these expectations. It is now time for us to think about what we need to know about the details and how to deal with the different scenarios. The orc master is not really familiar with the concepts of software development, so we need to make sure we talk in a language he can follow and understand, while making sure we get all the answers we need. We are hired for our expertise in software development, so we need to make sure to manage the expectations as well as the feature set and development flow. The development itself is of course going to be an iterative process, since we can't expect to get a list of everything needed right in one go. It also means that we will need to keep possible changes in mind. This is an essential part of structuring complex business software. Software containing more complex business logic is prone to changing rapidly as the business adapts and the users leverage the functionality the software provides. Therefore, it is essential to keep a common language between the people who understand the business and the developers who understand the software. Incorporate the business terms wherever possible; it will ease communication between the business domain experts and you as a developer, and therefore prevent misunderstandings early on.

Specification

The best way to create a good understanding of what a piece of software needs to do, at least to be useful, is to get an understanding of what the future users were doing before your software existed.
Therefore, we sit down with the orc master as he is managing his incoming and outgoing prisoners, and let him walk us through what he is doing on a day-to-day basis.

The dungeon consists of 100 cells that are, at any moment, either occupied by a prisoner or empty. When managing these cells, we can identify distinct tasks by watching the orc do his job. There are a couple of organizationally important events and states to be tracked:

Currently available or empty cells
Outgoing transfer states
Incoming transfer states

Each transfer can be in multiple states that the master has to know about to decide what to do next. Keeping a view of the world like this is not easy, especially when accounting for the number of concurrent updates happening. Tracking the state of everything results in further tasks for our master:

Update the tracking
Start outgoing transfers when too many cells are occupied
Respond to incoming transfers by starting to track them
Ask for incoming transfers if too few cells are occupied

So, what does each of these involve?

Tracking available cells

The current state of the dungeon is reflected by the state of its cells, so the first task is to get this knowledge. In its basic form, this is easily achievable by simply counting every occupied and every empty cell and writing down the values. Right now, our orc master tours the dungeon in the morning, noting each free cell and assuming that every other cell must be occupied. To make sure he does not get into trouble, he no longer trusts his subordinates to do this! The problem is that there is only one central sheet to keep track of everything, so his keepers may accidentally overwrite each other's information if more than one person counts and writes down cells. This is a good start and is sufficient for now, although it misses some information that would be interesting to have, for example, the number of inmates fleeing the dungeon and, based on this rate, a projection of the expected free cells.

For us, this means that we need to be able to track this information inside the application, since ultimately we want to project the expected number of free cells so that we can effectively create recommendations or warnings based on the dungeon state.

Starting outgoing transfers

The second part is to actually handle getting rid of prisoners in case the dungeon fills up. In this concrete case, it means that if the number of free cells drops below 10, it is time to move prisoners out, since new prisoners may be coming at any time. This strategy works pretty reliably since, from experience, there are hardly any larger transports, so the recommendation is to stick with it in the beginning. However, we can already see some optimizations which are currently too complex to implement.

Drawing from the experience of the business is important, as encoding such knowledge reduces mistakes, but be mindful: encoding detailed experience is probably one of the most complex things to do.

In the future, we want to optimize this based on the rate of inmates fleeing the dungeon and of new prisoners arriving after being captured, as well as the projection of new arrivals from transfers.
All this is impossible right now, since it would just overwhelm the current tracking system, but it ultimately comes down to capturing as much data as possible and analyzing it, which is something modern computer systems are good at. After all, it could save the orc master's head!

Tracking the state of incoming transfers

On some days, a raven will arrive bringing news that some prisoners have been sent on their way to be transferred to our dungeon. There really is nothing we can do about it, but the protocol is to send the raven out five days prior to the prisoners actually arriving, to give the dungeon a chance to prepare. Should prisoners flee along the way, another raven will be sent informing the dungeon of this embarrassing situation. These messages have to be sifted through every day to make sure there actually is room available for those arriving. This is a big part of projecting the number of filled cells and, we are told, also the most variable part. It is important to note that every message should be processed exactly once, but it can arrive at any time during the day. Right now, the messages are all dealt with by one orc, who throws them out immediately after noting what their content amounts to. One problem with the current system is that, since other dungeons are managed the same way ours currently is, they react with quick and large transfers when they get into trouble, which makes this quite unpredictable.

Initiating incoming transfers

Besides keeping the prisoners where they belong, mining gold is the second major goal of the dungeon. To do this, a certain number of prisoners needs to be available to man the machines, otherwise production essentially halts. This means that whenever too many cells become abandoned, it is time to fill them, so the orc master sends a raven to request new prisoners. This again takes five days and, unless the prisoners flee along the way, works reliably. Even so, the long delay has been a major problem for the dungeon in the past. If the number of filled cells drops below 50, the dungeon will no longer produce any gold, and not making money is a reason to replace the current dungeon master. If all the orc master does is react to the situation, there will probably be about five days in which no gold is mined. This is one of the major pain points in the current system, because projecting the number of filled cells five days out seems rather impossible, so all the orcs can do right now is react.

All in all, this gives us a rough idea of what the dungeon master is looking for and which tasks need to be accomplished to replace the current system. Of course, this does not have to happen in one go, but can be done gradually so everybody adjusts. Right now, it is time for us to identify where to start.
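Before we do, it helps to pin down the two hard rules the master described in a form we can later test against. The following is a minimal sketch; the function and constant names are our own invention and not part of any existing system:

// Thresholds from the master's description: transfer prisoners out when
// fewer than 10 cells are free, request new prisoners when fewer than 50
// cells are filled. A transfer takes roughly five days to arrive.
var MIN_FREE_CELLS = 10
var MIN_FILLED_CELLS = 50

function recommendAction(totalCells, filledCells) {
  var freeCells = totalCells - filledCells
  if (freeCells < MIN_FREE_CELLS) {
    return { action: 'transfer out', amount: MIN_FREE_CELLS - freeCells }
  }
  if (filledCells < MIN_FILLED_CELLS) {
    return { action: 'request prisoners', amount: MIN_FILLED_CELLS - filledCells }
  }
  return { action: 'none', amount: 0 }
}

// 95 of 100 cells filled leaves 5 free cells, so 5 prisoners should go
recommendAction(100, 95) // => { action: 'transfer out', amount: 5 }

Encoding the rules like this keeps the business language visible in the code and gives us something concrete to refine once we start collecting data about fleeing inmates and transfers that are already on their way.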
From greenfield to application

We are JavaScript developers, so it seems obvious for us to build a web application. As the problem is described, starting out simply and growing the application as we further analyze the situation is clearly the way to go. Right now, we don't have a clear understanding of how some parts should be handled, since the business process has not evolved to this level yet. It is also possible that new features will arise, or that things start being handled differently, as our software begins to be used. The steps described leave room for optimization based on collected data, so we first need the data to see how predictions can work.

This means that we need to start by tracking as many events as possible in the dungeon. Running down the list, the first step is always to get a view of the state we are in; this means tracking the available cells and providing an interface for this. To start out, this can be done via a counter, but it can't be our final solution. We then need to grow toward tracking events and summing them up to be able to make predictions for the future.

The first route and model

Of course there are many other ways to get started, but what it boils down to in most cases is that it is now time to choose the base to build on. By this I mean deciding on a framework or set of libraries to build upon. This happens alongside the decision on which database backs our application and many other small decisions that are influenced by the choice of framework and libraries. A clear understanding of how the frontend should be built is important as well, since a single-page application, which implements a large amount of logic in the frontend and is backed by an API layer, differs a lot from an application that implements most of its logic on the server side.

Don't worry if you are unfamiliar with express or any other technology used in the following. You don't need to understand every single detail, but you will get an idea of how developing an application with a framework works.

Since we don't yet have a clear understanding of which way the application will ultimately take, we try to push out as many decisions as possible and only decide on the things we immediately need. As we are developing in JavaScript, the application is going to be built on Node.js, and express is going to be our framework of choice. To make our lives easier, we decide to implement the frontend in plain HTML using EJS embedded JavaScript templates, since this keeps the logic in one place. This seems sensible because spreading the logic of a complex application across multiple layers complicates things even further, and avoiding the errors that can occur during transport eases our way toward a solid application in the beginning. We can push the database decision out and work with simple objects stored in RAM for our first prototype; this is, of course, no long-term solution, but we can at least validate some structure before we need to decide on another major piece of software, which brings a lot of expectations along with it.

With all this in mind, we set up the application.

In the following section and throughout the book, we are using Node.js to build a small backend. At the time of writing, the active version was Node.js 0.10.33. Node.js can be obtained from http://nodejs.org/ and is available for Windows, Mac OS X, and Linux.

The foundation for our web application is provided by express, available via the Node Package Manager (NPM), at the time of writing in version 3.0.3:

$ npm install -g express
$ express --ejs inmatr

For the sake of brevity, the glue code in the following is omitted, but like all other code presented in the book, it is available in the GitHub repository at https://github.com/sideshowcoder/ddd-js-sample-code.

Creating the model

The most basic parts of the application are set up now.
We can move on to creating our dungeon model in models/dungeon.js, adding the following code to hold the model as well as its loading and saving logic:

var Dungeon = function(cells) {
  this.cells = cells
  this.bookedCells = 0
}

Keeping in mind that this will eventually be stored in a database, we also need to be able to find a dungeon in some way, so a find method seems reasonable. This method should already adhere to the Node.js callback style to make our lives easier when switching to a real database. Even though we pushed this decision out, the assumption is clear: even if we decide against a database, the dungeon reference will be stored and requested from outside the process in the future. The following shows an example with the find method:

var dungeons = {}

Dungeon.find = function(id, callback) {
  if(!dungeons[id]) {
    dungeons[id] = new Dungeon(100)
  }
  callback(null, dungeons[id])
}

// export the constructor so routes and middleware can require the model
module.exports = Dungeon

The first route and loading the dungeon

Now that we have this in place, we can move on to actually reacting to requests. In express, this is done by defining the needed routes. Since we need to make sure our current dungeon is available, we use the methods we just created to add a middleware to the express stack that loads the dungeon whenever a request comes in.

A middleware is a piece of code that gets executed whenever a request reaches its level of the stack. For example, the router used to dispatch requests to the defined functions is implemented as a middleware, as is logging and so on. This is a common pattern for many other kinds of interactions as well, such as user login.

Assuming for now that we only manage one dungeon, we can create our dungeon loading middleware by adding a file middleware/load_context.js with the following code:

var Dungeon = require('../models/dungeon')

// load the main dungeon into the request context before the routes run
module.exports = function(req, res, next) {
  req.context = req.context || {}
  Dungeon.find('main', function(err, dungeon) {
    req.context.dungeon = dungeon
    next()
  })
}

Displaying the page

With this, we are now able to display information about the dungeon and track any changes made to it during a request. Creating a view to render the state, as well as a form to modify it, are the essential parts of our GUI. Since we decided to implement the logic server-side, both are rather barebones. Creating a view under views/index.ejs allows us to render everything to the browser via express later. The following is the HTML code for the frontend:

<h1>Inmatr</h1>
<p>You currently have <%= dungeon.free %> of <%= dungeon.cells %> cells available.</p>

<form action="/cells/book" method="post">
  <select name="cells">
    <% for(var i = 1; i < 11; i++) { %>
    <option value="<%= i %>"><%= i %></option>
    <% } %>
  </select>
  <button type="submit" name="book" value="book">Book cells</button>
  <button type="submit" name="free" value="free">Free cells</button>
</form>
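The view reads dungeon.free and dungeon.cells, and the routes we are about to define call dungeon.book and dungeon.unbook. These members are not part of the model snippet shown earlier, so, purely as an assumption on our part, a minimal sketch of how models/dungeon.js might provide them could look like this (the actual implementation is in the sample repository):

// book marks cells as occupied, unbook releases them again, and free
// derives the remaining capacity from the total number of cells
Dungeon.prototype.book = function(cells) {
  this.bookedCells = Math.min(this.cells, this.bookedCells + cells)
}

Dungeon.prototype.unbook = function(cells) {
  this.bookedCells = Math.max(0, this.bookedCells - cells)
}

Object.defineProperty(Dungeon.prototype, 'free', {
  get: function() { return this.cells - this.bookedCells }
})

Clamping the values keeps a stray form submission from booking more cells than physically exist, which matches the hard limit of 100 cells the master described.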
Gluing the application together via express

We are almost done: we have a display for the state, a model to track what is changing, and a middleware to load this model as needed. To glue it all together, we use express to register our routes and call the necessary functions. We mainly need two routes: one to display the page and one to accept and process the form input. Displaying the page happens when a user hits the index page, so we need to bind to the root path. The target for the form input is already declared in the form itself as /cells/book, so we can just create a route for it.

In express, we define routes in relation to the main app object and according to the HTTP verbs, as follows:

app.get('/', routes.index)
app.post('/cells/book', routes.cells.book)

Adding this to the main app.js file allows express to wire things up; the routes themselves are implemented in the routes/index.js file as follows:

var routes = {
  index: function(req, res) {
    res.render('index', req.context)
  },
  cells: {
    book: function(req, res) {
      var dungeon = req.context.dungeon
      var cells = parseInt(req.body.cells)
      if (req.body.book) {
        dungeon.book(cells)
      } else {
        dungeon.unbook(cells)
      }
      res.redirect('/')
    }
  }
}

// export the route handlers so app.js can require them
module.exports = routes

With this done, we have a working application to track free and used cells.
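To see how all the pieces fit together, the wiring in app.js might look roughly like the following. This is only a sketch of an express 3 style setup; the require paths and the loadContext name are our assumptions, and the actual glue code lives in the sample repository:

var express = require('express')
var routes = require('./routes')
var loadContext = require('./middleware/load_context')

var app = express()

// render EJS templates from the views directory
app.set('views', __dirname + '/views')
app.set('view engine', 'ejs')

// bodyParser fills req.body from the posted form, and loadContext has to
// run before the router so the dungeon is available in req.context
app.use(express.bodyParser())
app.use(loadContext)
app.use(app.router)

app.get('/', routes.index)
app.post('/cells/book', routes.cells.book)

app.listen(3000)

With this in place, starting the server with node app.js and opening http://localhost:3000 should show the booking form, and every submission updates the in-memory dungeon state.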
Moving the application forward

This is only the first step toward the application that will hopefully automate what is currently done by hand. With this first version in place, it is now time to make sure we can keep moving the application along. We have to think about what this application is supposed to do and identify the next steps. After presenting the current state back to the business, the next request is most likely going to be to integrate some kind of login, since it should not be possible to modify the state of the dungeon unless you are authorized to do so. Since this is a web application, most people are familiar with it having a login. This moves us into a complicated space in which we need to start specifying the roles in the application along with their access patterns, so it is not clear whether this is the way to go.

Another route to take is to start moving the application toward tracking events instead of plain numbers of free cells. From a developer's point of view, this is probably the most interesting route, but the immediate business value might be hard to justify, since without the login it seems unusable. We would need to create an endpoint to record events, such as a fleeing prisoner, and then modify the state of the dungeon according to those tracked events. This is based on the assumption that the highest value of the application will lie in the prediction of prisoner movement.

When we want to track free cells in such a way, we will need to modify the way the first version of our application works. The logic for which events need to be created will have to move somewhere, most logically the frontend, and the dungeon will no longer be the single source of truth for the dungeon state. Rather, it will be an aggregator of the state, which is modified by the generation of events.

Thinking about the application in this way makes some things clear. We are not completely sure what the value proposition of the application will ultimately be. This leads us down a dangerous path, since the design decisions that we make now will impact how we build new features into the application. This is also a problem in case our assumption about the main value proposition turns out to be wrong. In that case, we may have built quite a complex event tracking system which does not really solve the problem but complicates things. Every state modification needs to be transformed into a series of events where a simple state update on an object may have been enough. Not only does this design not solve the real problem, explaining it to the orc master is also tough. There are certain abstractions missing, and the communication does not follow a pattern established as the business language.

We need an alternative approach that keeps the business more involved. We also need to keep development simple by building abstractions around the business logic and not around the technologies provided by the frameworks in use.

Summary

In this article, you were introduced to a typical business application and how it is developed. It showed how domain-driven design can help steer clear of common issues during development, in order to create an application that is tailored to the actual problem.