
How-To Tutorials - Web Development

1802 Articles

How to publish Docker and integrate with Maven

Pravin Dhandre
11 Apr 2018
6 min read
We have learned how to create Dockers, and how to run them, but these Dockers are stored in our system. Now we need to publish them so that they are accessible anywhere. In this post, we will learn how to publish our Docker images, and how to finally integrate Maven with Docker to easily do the same steps for our microservices. Understanding repositories In our previous example, when we built a Docker image, we published it into our local system repository so we can execute Docker run. Docker will be able to find them; this local repository exists only on our system, and most likely we need to have this access to wherever we like to run our Docker. For example, we may create our Docker in a pipeline that runs on a machine that creates our builds, but the application itself may run in our pre production or production environments, so the Docker image should be available on any system that we need. One of the great advantages of Docker is that any developer building an image can run it from their own system exactly as they would on any server. This will minimize the risk of having something different in each environment, or not being able to reproduce production when you try to find the source of a problem. Docker provides a public repository, Docker Hub, that we can use to publish and pull images, but of course, you can use private Docker repositories such as Sonatype Nexus, VMware Harbor, or JFrog Artifactory. To learn how to configure additional repositories refer to the repositories documentation. Docker Hub registration After registering, we need to log into our account, so we can publish our Dockers using the Docker tool from the command line using Docker login: docker login Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.Docker.com to create one. Username: mydockerhubuser Password: Login Succeeded When we need to publish a Docker, we must always be logged into the registry that we are working with; remember to log into Docker. Publishing a Docker Now we'd like to publish our Docker image to Docker Hub; but before we can, we need to build our images for our repository. When we create an account in Docker Hub, a repository with our username will be created; in this example, it will be mydockerhubuser. In order to build the Docker for our repository, we can use this command from our microservice directory: docker build . -t mydockerhubuser/chapter07 This should be quite a fast process since all the different layers are cached: Sending build context to Docker daemon 21.58MB Step 1/3 : FROM openjdk:8-jdk-alpine ---> a2a00e606b82 Step 2/3 : ADD target/*.jar microservice.jar ---> Using cache ---> 4ae1b12e61aa Step 3/3 : ENTRYPOINT java -jar microservice.jar ---> Using cache ---> 70d76cbf7fb2 Successfully built 70d76cbf7fb2 Successfully tagged mydockerhubuser/chapter07:latest Now that our Docker is built, we can push it to Docker Hub with the following command: docker push mydockerhubuser/chapter07 This command will take several minutes since the whole image needs to be uploaded. With our Docker published, we can now run it from any Docker system with the following command: docker run mydockerhubuser/chapter07 Or else, we can run it as a daemon, with: docker run -d mydockerhubuser/chapter07 Integrating Docker with Maven Now that we know most of the Docker concepts, we can integrate Docker with Maven using the Docker-Maven-plugin created by fabric8, so we can create Docker as part of our Maven builds. 
First, we will move our Dockerfile to a different folder. In the IntelliJ Project window, right-click on the src folder and choose New | Directory. We will name it Docker. Now, drag and drop the existing Dockerfile into this new directory, and change it to the following:

FROM openjdk:8-jdk-alpine
ADD maven/*.jar microservice.jar
ENTRYPOINT ["java","-jar", "microservice.jar"]

We move the Dockerfile simply to keep our project folders better organized. When our Docker is built using the plugin, the contents of our application will be placed in a folder named maven, so we change the Dockerfile to reference that folder. Now, we will modify our Maven pom.xml and add the docker-maven-plugin in the build | plugins section:

<build>
....
<plugins>
....
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>0.23.0</version>
<configuration>
<verbose>true</verbose>
<images>
<image>
<name>mydockerhubuser/chapter07</name>
<build>
<dockerFileDir>${project.basedir}/src/Docker</dockerFileDir>
<assembly>
<descriptorRef>artifact</descriptorRef>
</assembly>
<tags>
<tag>latest</tag>
<tag>${project.version}</tag>
</tags>
</build>
<run>
<ports>
<port>8080:8080</port>
</ports>
</run>
</image>
</images>
</configuration>
</plugin>
</plugins>
</build>

Here, we are specifying how to create our Docker image, where the Dockerfile is, and even which version of the image we are building. Additionally, we specify some parameters for when our Docker runs, such as the port that it exposes. If we need IntelliJ to reload the Maven changes, we may need to click on the Reimport all maven projects button in the Maven Project window.

For building our Docker image using Maven, we can use the Maven Project window to run the docker:build task, or run the following command:

mvnw docker:build

This will build the Docker image, but it requires the application to be packaged first, so we can run the following command:

mvnw package docker:build

We can also publish our Docker image using Maven, either from the Maven Project window by running the docker:push task, or by running the following command:

mvnw docker:push

This will push our Docker image to Docker Hub, but if we'd like to do everything in just one command, we can use the following:

mvnw package docker:build docker:push

Finally, the plugin provides other tasks such as docker:run, docker:start, and docker:stop, which mirror the commands that we've already learned on the command line.

With this, we learned how to publish Docker images manually and how to integrate Docker into the Maven lifecycle. Do check out the book Hands-On Microservices with Kotlin to start simplifying development of microservices and building a high-quality service environment.

Check out other posts:
The key differences between Kubernetes and Docker Swarm
How to publish Microservice as a service onto a Docker
Building Docker images using Dockerfiles


5 things to consider when developing an eCommerce website

Johti Vashisht
11 Apr 2018
7 min read
Online businesses are booming and rightly so – this year it is expected that 18% of all UK retail purchases will occur online. That's partly because eCommerce website development has become easy – almost anyone can do it. But hubris might be your downfall; there are a number of important things to consider before you start building your eCommerce website. This is especially true if you want customers to keep coming back to your site. We've compiled a list of things to keep in mind for when you are ready to build an eCommerce store.

eCommerce website development begins with the right platform and brilliant design

Platform

Before creating your eCommerce website, you need to decide which platform to create the website on. There are a variety of content management systems, including WordPress, Joomla, and Magento. WordPress is a versatile and easy-to-use platform which also supports a large number of plugins, so it may be suitable if you are offering services or only a few products. Platforms such as Magento have been created specifically for eCommerce use. If you are thinking of opening up an online store with many products, then Magento is the best option as it makes it easier to manage your products.

Design

When designing your website, use a clean, simple design rather than one with too many graphics, and incorporate clear calls to action. Another thing to take into account is whether you want to create your own custom theme or choose a preselected theme and build upon it. Although it can be pricier, a custom theme allows you to add custom functionality to your website that a standard pre-made theme may not have. In contrast, pre-made themes will be much cheaper or in most cases free. If you are choosing a pre-made theme, be sure to check that it is regularly updated and that it has support contact details in case of any queries. Your website design should also be responsive so that your website can be viewed correctly across multiple platforms and operating systems.

Your eCommerce website needs to be secure

A secure website is beneficial for both you and your customers. With a growing number of websites being hacked and data being stolen, security is the one part of a website you cannot skip out on. An SSL (Secure Sockets Layer) certificate is essential for your website: not only does it allow for a secure connection over which personal data can be transmitted, it also provides authentication so that customers know it's safe to make purchases on your website. SSL certificates are mandatory if you collect private information from customers via forms. HTTPS (Hyper Text Transfer Protocol Secure) is an encrypted standard for website–client communications. In order for HTTP to become HTTPS, data is wrapped into secure SSL packets before being sent and after being received. As well as securing data, HTTPS may also be used for search ranking purposes: if you utilise HTTPS, you will have a slight boost in ranking over competitor websites that do not.

eCommerce plugins make adding features to your site easier

If you have decided to use WordPress to create your eCommerce website, then there are a number of eCommerce plugins available to help you create your online store. Top eCommerce plugins include WooCommerce, Shopify, Shopp, and Easy Digital Downloads.
SEO attracts organic traffic to your eCommerce site If you want potential customers to see your products before that of competitors then optimising your website pages will aid in trying to be on the first page of search results. Undertake a keyword research to get the words that potential customers are most commonly using to find the products you offer. Google’s keyword planner is quite helpful in managing your keyword research. You can then add relevant words to your product names and descriptions. Revisit these keywords occasionally to update them and experiment with which keywords work better. You can improve your rankings with good page titles that include relevant keywords. Although meta descriptions do not improve ranking, it’s good to add useful meta descriptions as a better description may draw more clicks. Also ensure that the product URLs mirror what the product is and isn’t unnecessarily long. Other things to consider when building an eCommerce website You may wish to consider additional features in order to increase your chance of returning visitors: Site speed If your website is slow then it’s likely that customers may not return for a repeat purchase if it takes too long for a product to load. They’ll simply visit a competitor website that loads much faster. There are a few things you can do to speed up your website including caching and using in memory technology for certain things rather than constantly accessing the database. You could also use fast hosting servers to meet traffic requirements. Site speed is also an important SEO consideration. Guest checkout 23% of shoppers will abandon their shopping basket if they are forced to register an account. Make it easier for customers to purchase items with guest checkout. Some customers may not wish to create an account as they may be limited for time. Create a smooth, quick transaction process by adding the option of a guest checkout. Once they have completed their checkout, you can ask them if they would like to create an account. Site search Utilise search functionality to allow users to search for products with the ability to filter products through a variety of options (if applicable). Pain points Address potential concerns customers may have before purchasing your products by displaying information they may have concerns or queries about. This can include delivery options and whether free returns are offered. Mobile optimization In 2017 almost 59% of ecommerce sales occurred via mobile. There is an increasing number of users who now shop online using their smart phones and this trend will most likely grow. That’s why optimising your website for mobile is a must. User-generated reviews and testimonials Use social proof on your website with user reviews and testimonials. If a potential customer reads customer reviews then they are more likely to purchase a product. However, user-generated reviews can go both ways – a user may also post a negative review which may not be good for your website/online store. Related items Showing related items under a product is useful for customers who are looking for an item but may not have decided what type of that particular product they want. This is also useful for when the main product is out of stock. FAQs section Creating an FAQ section with common questions is very useful and saves both the customer and company time as basic queries can be answered by looking at the FAQ page. If you're starting out, good luck! 
Yes, in 2018 eCommerce website development is pretty easy thanks to the likes of Shopify, WooCommerce, and Magento, among others. But as you can see, there's plenty you need to consider. By incorporating most of these points, you will be able to create an eCommerce website that users can navigate easily to find the products or services they are looking for.


Applying Spring Security using JSON Web Token (JWT)

Vijin Boricha
10 Apr 2018
9 min read
Today, we will learn about Spring Security and how it can be applied in various forms using powerful libraries like JSON Web Token (JWT). Spring Security is a powerful authentication and authorization framework, which will help us provide a secure application. By using Spring Security, we can keep all of our REST APIs secured and accessible only by authenticated and authorized calls.

Authentication and authorization

Let's look at an example to explain this. Assume you have a library with many books. Authentication will provide a key to enter the library; however, authorization will give you permission to take a book. Without a key, you can't even enter the library. Even though you have a key to the library, you will be allowed to take only a few books.

JSON Web Token (JWT)

Spring Security can be applied in many forms, including XML configurations using powerful libraries such as JSON Web Token. As most companies use JWT in their security, we will focus more on JWT-based security than simple Spring Security, which can be configured in XML. JWTs are URL-safe and web browser-compatible, especially for Single Sign-On (SSO) contexts. JWT has three parts: header, payload, and signature. The header part decides which algorithm should be used to generate the token. While authenticating, the client has to save the JWT, which is returned by the server. Unlike traditional session creation approaches, this process doesn't need to store any cookies on the client side. JWT authentication is stateless, as the client state is never saved on the server.

JWT dependency

To use JWT in our application, we need to add a Maven dependency to the pom.xml file. You can get the dependency from https://mvnrepository.com/artifact/javax.xml.bind. We have used version 2.3.0 in our application:

<dependency>
<groupId>javax.xml.bind</groupId>
<artifactId>jaxb-api</artifactId>
<version>2.3.0</version>
</dependency>

Note: As Java 9 doesn't include DatatypeConverter in its bundle, we need to add the preceding dependency to work with DatatypeConverter. We will cover DatatypeConverter in the following section.

Creating a JSON Web Token

To create a token, we have added an abstract method called createToken in our SecurityService interface. This interface will tell the implementing class that it has to provide a complete implementation of createToken. In the createToken method, we will use only the subject and expiry time, as these two options are important when creating a token. At first, we will create the abstract method in the SecurityService interface. The concrete class (whoever implements the SecurityService interface) has to implement the method in their class:

public interface SecurityService {
String createToken(String subject, long ttlMillis);
// other methods
}

In the preceding code, we defined the method for token creation in the interface. SecurityServiceImpl is the concrete class that implements the abstract method of the SecurityService interface by applying the business logic.
The following code will explain how JWT will be created by using the subject and expiry time: private static final String secretKey= "4C8kum4LxyKWYLM78sKdXrzbBjDCFyfX"; @Override public String createToken(String subject, long ttlMillis) { if (ttlMillis <= 0) { throw new RuntimeException("Expiry time must be greater than Zero :["+ttlMillis+"] "); } // The JWT signature algorithm we will be using to sign the token SignatureAlgorithm signatureAlgorithm = SignatureAlgorithm.HS256; byte[] apiKeySecretBytes = DatatypeConverter.parseBase64Binary(secretKey); Key signingKey = new SecretKeySpec(apiKeySecretBytes, signatureAlgorithm.getJcaName()); JwtBuilder builder = Jwts.builder() .setSubject(subject) .signWith(signatureAlgorithm, signingKey); long nowMillis = System.currentTimeMillis(); builder.setExpiration(new Date(nowMillis + ttlMillis)); return builder.compact(); } The preceding code creates the token for the subject. Here, we have hardcoded the secret key "4C8kum4LxyKWYLM78sKdXrzbBjDCFyfX " to simplify the token creation process. If needed, we can keep the secret key inside the properties file to avoid hard code in the Java code. At first, we verify whether the time is greater than zero. If not, we throw the exception right away. We are using the SHA-256 algorithm as it is used in most applications. Note: Secure Hash Algorithm (SHA) is a cryptographic hash function. The cryptographic hash is in the text form of a data file. The SHA-256 algorithm generates an almost-unique, fixed-size 256-bit hash. SHA-256 is one of the more reliable hash functions. We have hardcoded the secret key in this class. We can also store the key in the application.properties file. However to simplify the process, we have hardcoded it: private static final String secretKey= "4C8kum4LxyKWYLM78sKdXrzbBjDCFyfX"; We are converting the string key to a byte array and then passing it to a Java class, SecretKeySpec, to get a signingKey. This key will be used in the token builder. Also, while creating a signing key, we use JCA, the name of our signature algorithm. Note: Java Cryptography Architecture (JCA) was introduced by Java to support modern cryptography techniques. We use the JwtBuilder class to create the token and set the expiration time for it. The following code defines the token creation and expiry time setting option: JwtBuilder builder = Jwts.builder() .setSubject(subject) .signWith(signatureAlgorithm, signingKey); long nowMillis = System.currentTimeMillis(); builder.setExpiration(new Date(nowMillis + ttlMillis)); We will have to pass time in milliseconds while calling this method as the setExpiration takes only milliseconds. Finally, we have to call the createToken method in our HomeController. Before calling the method, we will have to autowire the SecurityService as follows: @Autowired SecurityService securityService; The createToken call is coded as follows. We take the subject as the parameter. To simplify the process, we have hardcoded the expiry time as 2 * 1000 * 60 (two minutes). HomeController.java: @Autowired SecurityService securityService; @ResponseBody @RequestMapping("/security/generate/token") public Map<String, Object> generateToken(@RequestParam(value="subject") String subject){ String token = securityService.createToken(subject, (2 * 1000 * 60)); Map<String, Object> map = new LinkedHashMap<>(); map.put("result", token); return map; } Generating a token We can test the token by calling the API in a browser or any REST client. By calling this API, we can create a token. 
This token will be used for purposes such as user authentication. A sample API call for creating a token is as follows:

http://localhost:8080/security/generate/token?subject=one

Here we have used one as the subject. We can see the token in the following result. This is how the token will be generated for all the subjects we pass to the API:

{ result: "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJvbmUiLCJleHAiOjE1MDk5MzY2ODF9.GknKcywiIG4-R2bRmBOsjomujP0MxZqdawrB8TO3P4" }

Note: JWT is a string that has three parts, each separated by a dot (.). Each section is base-64 encoded. The first section is the header, which gives a clue about the algorithm used to sign the JWT. The second section is the body, and the final section is the signature.

Getting a subject from a JSON Web Token

So far, we have created a JWT. Here, we are going to decode the token and get the subject from it. As usual, we have to define the method to get the subject. We will define the getSubject method in SecurityService: we will create an abstract method called getSubject in the SecurityService interface and then implement it in our concrete class:

String getSubject(String token);

In our concrete class, we will implement the getSubject method and add our code in the SecurityServiceImpl class. We can use the following code to get the subject from the token:

@Override
public String getSubject(String token) {
Claims claims = Jwts.parser()
.setSigningKey(DatatypeConverter.parseBase64Binary(secretKey))
.parseClaimsJws(token).getBody();
return claims.getSubject();
}

In the preceding method, we use Jwts.parser to get the claims. We set a signing key by converting the secret key to binary and then passing it to the parser. Once we get the Claims, we can simply get the subject by calling getSubject. Finally, we can call the method in our controller and pass the generated token to get the subject. You can check the following code, where the controller calls the getSubject method and returns the subject, in the HomeController.java file:

@ResponseBody
@RequestMapping("/security/get/subject")
public Map<String, Object> getSubject(@RequestParam(value="token") String token){
String subject = securityService.getSubject(token);
Map<String, Object> map = new LinkedHashMap<>();
map.put("result", subject);
return map;
}

Getting a subject from a token

Previously, we created the code to generate the token. Here we will test the getSubject method we created by calling the get subject API. By calling the REST API, we will get back the subject that we passed earlier. Sample API:

http://localhost:8080/security/get/subject?token=eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJvbmUiLCJleHAiOjE1MDk5MzY2ODF9.GknKcywiI-G4-R2bRmBOsjomujP0MxZqdawrB8TO3P4

Since we used one as the subject when creating the token by calling the generateToken method, we will get "one" in the getSubject method:

{ result: "one" }

Note: Usually, we attach the token in the headers; however, to avoid complexity, we have provided the result directly and passed the token as a parameter to get the subject. You may not need to do it the same way in a real application. This is only for demo purposes.

This article is an excerpt from the book Building RESTful Web Services with Spring 5 - Second Edition, written by Raja CSP Raman. This book involves techniques to deal with security in Spring and shows how to implement unit test and integration test strategies.
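To make the note about the token's three dot-separated, base64url-encoded sections more concrete, here is a small sketch, written in Node.js and not taken from the book, that splits a token and decodes its header and payload. It does not verify the signature; only the server that holds the secret key can do that, as the Jwts.parser code above does. Run it as node decode-jwt.js <token>, passing any token returned by the generate token API.

// decode-jwt.js — inspect the sections of a JWT without verifying it
const token = process.argv[2]; // pass the JWT as a command-line argument

const [header, payload, signature] = token.split('.');

// JWT sections are base64url-encoded; convert to standard base64 before decoding
const decode = (part) =>
  JSON.parse(Buffer.from(part.replace(/-/g, '+').replace(/_/g, '/'), 'base64').toString('utf8'));

console.log('header :', decode(header));   // e.g. { alg: 'HS256' }
console.log('payload:', decode(payload));  // e.g. { sub: 'one', exp: 1509936681 }
console.log('signature (opaque):', signature);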
You may also like How to develop RESTful web services in Spring, another tutorial from this book. Check out other posts on Spring Security: Spring Security 3: Tips and Tricks Opening up to OpenID with Spring Security Migration to Spring Security 3  


Testing RESTful Web Services with Postman

Vijin Boricha
10 Apr 2018
3 min read
In today's tutorial, we are going to leverage Postman to successfully test RESTful web services. We will also discuss a simple JUnit test case, which calls the getAllUsers method in userService. We can check the following code:

@RunWith(SpringRunner.class)
@SpringBootTest
public class UserTests {
@Autowired
UserService userService;
@Test
public void testAllUsers(){
List<User> users = userService.getAllUsers();
assertEquals(3, users.size());
}
}

In the preceding code, we have called getAllUsers and verified the total count. Let's test the single-user method in another test case:

// other methods
@Test
public void testSingleUser(){
User user = userService.getUser(100);
assertTrue(user.getUsername().contains("David"));
}

In the preceding code snippets, we just tested our service layer and verified the business logic. However, we can directly test the controller by using mocking methods.

Postman

First, we shall start with a simple API for getting all the users:

http://localhost:8080/user

This call will get all the users. The Postman screenshot for getting all the users is as follows: In the preceding screenshot, we can see that we get all the users that we added before. We have used the GET method to call this API.

Adding a user – Postman

Let's try to use the POST method on user to add a new user:

http://localhost:8080/user

Add the user, as shown in the following screenshot. In the result, we can see the JSON output:

{ "result" : "added" }

Generating a JWT – Postman

Let's try generating the token (JWT) by calling the generate token API in Postman using the following URL:

http://localhost:8080/security/generate/token

We can clearly see that we use subject in the Body to generate the token. Once we call the API, we will get the token. We can check the token in the following screenshot:

Getting the subject from the token

By using the existing token that we created before, we will get the subject by calling the get subject API:

http://localhost:8080/security/get/subject

The result will be as shown in the following screenshot. In the preceding API call, we sent the token in the API to get the subject. We can see the subject in the resulting JSON.

You read an excerpt from Building RESTful Web Services with Spring 5 - Second Edition, written by Raja CSP Raman. From this book, you will learn to build resilient software in Java with the help of the Spring 5.0 framework. Check out the other tutorials from this book: How to develop RESTful web services in Spring, and Applying Spring Security using JSON Web Token (JWT). More Spring 5 tutorials: Introduction to Spring Framework, and Preparing the Spring Web Development Environment.
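If you'd rather script these checks than click through Postman each time, the same endpoints can be exercised from a small Node.js script. This is only a sketch and not part of the book's code: it assumes Node 18+ (for the built-in fetch) and that the service from this tutorial is running locally on port 8080.

// smoke-test.js — script the same checks we did manually in Postman
const BASE = 'http://localhost:8080';

async function main() {
  // GET all users, mirroring the GET /user call above
  const users = await (await fetch(`${BASE}/user`)).json();
  console.log('users:', users);

  // generate a token for subject "one", as in the generate token API
  const tokenRes = await (await fetch(`${BASE}/security/generate/token?subject=one`)).json();
  console.log('token:', tokenRes.result);

  // feed the token back to the get subject API and check the round trip
  const subjectRes = await (
    await fetch(`${BASE}/security/get/subject?token=${encodeURIComponent(tokenRes.result)}`)
  ).json();
  console.log('subject:', subjectRes.result); // expected: "one"
}

main().catch(console.error);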


How to build Dockers with microservices

Pravin Dhandre
06 Apr 2018
9 min read
Today, we will demonstrate in detail how to create and build Dockers with microservices. We will also explore commands used to manage the building process with microservices. First, we will create a simple microservice that we will use for this tutorial. Then we will get familiar with the Docker building process, and finally, we will create and run our microservice within a Docker.

Creating an example microservice

In order to create our microservice, we will use Spring Initializr. We can start by visiting https://start.spring.io/. We have chosen to create a Maven Project using Kotlin and Spring Boot 2.0.0 M7, and we've chosen the Group to be com.microservices and the Artifact to be chapter07. For Dependencies, we have set Web. Now we can click on Generate Project to download it as a ZIP file. After we unzip it, we can open it with IntelliJ IDEA to start working on our project. After some minutes, our project will be ready and we can open the Maven window to see the different lifecycle phases, Maven plugins, and their goals.

Now we will modify our application to create a simple microservice. Open the Chapter07Application.kt file from the Project window, and modify it by adding a @RestController:

package com.microservices.chapter07
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController
@SpringBootApplication
class Chapter07Application
@RestController
class GreetingsController {
@GetMapping("/greetings")
fun greetings() = "hello from a Docker"
}
fun main(args: Array<String>) {
runApplication<Chapter07Application>(*args)
}

Let's run it to see our microservice start. In the Maven window, just double-click on the spring-boot plugin, or run its goal from the command line in the microservice folder:

mvnw spring-boot:run

After some seconds, we will see several log lines, including something like the following:

INFO 11960 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http)
INFO 11960 --- [ main] c.m.chapter07.Chapter07ApplicationKt : Started Chapter07ApplicationKt in 1.997 seconds (JVM running for 8.154)

Our service is ready, and we can just navigate to the http://localhost:8080/greetings URL, but it's still not running in a Docker; let's stop it with Ctrl + C, and continue.

Creating a Dockerfile

In order to create a Docker image, we need to first create a Dockerfile, a file that will include the instructions that we will give to Docker in order to build our image. To create this file, at the top of the Project window, right-click on chapter07, select New | File from the drop-down menu, and type Dockerfile. In the next window, click OK, and the file will be created. IntelliJ will recognize the file and offer a plugin to handle it. At the top of the editing window, a message will appear saying Plugins supporting Dockerfile files found. On the right of this message, we will see Install Plugins and Ignore extension. Let's click on Install Plugins to allow IntelliJ to handle this file. This will require the IDE to restart, and after some seconds it should start again. Now we can add this to our Dockerfile:

FROM openjdk:8-jdk-alpine
ENTRYPOINT ["java","-version"]

Here, we are telling Docker that our image will be based on Java OpenJDK 8 in Alpine Linux.
Then, we configure the entry point of our Docker image, the command that will be executed when our Docker runs, to be just the java command with the -version parameter. Each line in the Dockerfile is a step, one of those layers that our Docker image is built from.

Now, we should open a command line in our chapter07 directory and run this command to build our image:

docker build . -t chapter07

This will create output that will look something like this:

Sending build context to Docker daemon 2.302MB
Step 1/2 : FROM openjdk:8-jdk-alpine
8-jdk-alpine: Pulling from library/openjdk
b56ae66c2937: Pull complete
81cebc5bcaf8: Pull complete
9f7678525069: Pull complete
Digest: sha256:219d9c2e4c27b8d1cfc6daeaf339e3eb7ceb82e67ce85857bdc55254822802bc
Status: Downloaded newer image for openjdk:8-jdk-alpine
---> a2a00e606b82
Step 2/2 : ENTRYPOINT java -version
---> Running in 661d47cd0bbd
---> 3a1d8bea31e7
Removing intermediate container 661d47cd0bbd
Successfully built 3a1d8bea31e7
Successfully tagged chapter07:latest

What has happened now is that Docker has built an image for us, and the image has been tagged as chapter07, since we used the -t option. Let's now run it with:

docker run chapter07

The output should look something like this:

openjdk version "1.8.0_131"
OpenJDK Runtime Environment (IcedTea 3.4.0) (Alpine 8.131.11-r2)
OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)

This has run our Docker image, which simply displays the Java version, but we need to add our microservice to it. Before that, let's understand clearly what a Docker is. A Dockerfile produces a binary image from a set of commands, creating a layer for each of them. Those commands are executed at build time to output the desired image. An image will have an entry point, a command that will be executed when we run the image itself. A Docker is a containerized instance of a particular image. We usually refer to them as containers. When we run them, a copy of the original image is containerized and run through the defined entry point, outputting the results of their execution.

We have just briefly discussed creating Dockerfiles, but it is a technique that we should eventually master. We strongly recommend reviewing the Dockerfile reference on the Docker page https://docs.docker.com/engine/reference/builder/, and also the Dockerfile best practices at https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/.

Dockerize our microservice

In order to create a Docker with our microservice, we first need to package it into a JAR. So let's use Maven to do it, using the package lifecycle:

mvnw package

With the package created, now we need to modify our Dockerfile to actually use it:

FROM openjdk:8-jdk-alpine
ADD target/*.jar microservice.jar
ENTRYPOINT ["java","-jar", "microservice.jar"]

We use the ADD command to include our microservice JAR from the target folder. We get it from our target directory, and we add it to the Docker image as microservice.jar. Then, we change our entry point to actually execute our JAR. Now we can build our image again, repeating the build command:

docker build . -t chapter07

This should now give the following output:

Sending build context to Docker daemon 21.58MB
Step 1/3 : FROM openjdk:8-jdk-alpine
---> a2a00e606b82
Step 2/3 : ADD target/*.jar microservice.jar
---> 5c385fee6516
Step 3/3 : ENTRYPOINT java -jar microservice.jar
---> Running in 11071fdd0eb2
---> a43186cc4ea0
Removing intermediate container 11071fdd0eb2
Successfully built a43186cc4ea0
Successfully tagged chapter07:latest

This build is quicker than before, since Docker caches intelligently; the steps that have not changed since our FROM command are cached and will not be built again. Now we can run our microservice again by using:

docker run chapter07

We can now see our Spring Boot application running; however, if we try to navigate to it in our browser, we will not be able to reach it, so let's stop it with Ctrl + C.

Sometimes, pressing Ctrl + C just returns us to the terminal without actually stopping our Docker. If we really want to completely stop it, we could follow these steps. First, we should list our Dockers with:

docker ps

This should list our Docker status and, in fact, tell us that the Docker is still up:

CONTAINER ID IMAGE COMMAND STATUS
d6bd15780353 chapter07 "java -jar microse..." Up About a minute

We can just stop it with the kill command:

docker kill d6bd15780353

Now, if we repeat our docker ps command again, the Docker should not be shown, but it will appear if we do a docker ps -a:

CONTAINER ID IMAGE COMMAND STATUS
d6bd15780353 chapter07 "java -jar microse..." Exited (137) 2 minutes ago

The status of our Docker has changed from Up to Exited, as we'd expect.

Running the microservice

The reason we can't access the microservice when we run our previous example is that we need to expose the port that is running on the container outside of it. So, we need to modify our docker run command to:

docker run -p8080:8080 chapter07

Now we can just navigate to the URL http://localhost:8080/greetings, and we should get the following output:

hello from a Docker

We have just exposed our Docker's internal port 8080, but the -p option allows us to expose a different port too. So inside, the Docker can run on port 8080, but we can externally run on another port. When we run our microservice via the command line, we actually wait until we press Ctrl + C to terminate it. We can instead just run it as a daemon. A daemon is a process that runs in the background of our system, so we could continue executing other commands while our process keeps running behind the scenes. To run a Docker as a daemon, we could use the following command:

docker run -d -p8080:8080 chapter07

This will run the Docker as a daemon in the background, but it is still accessible. It should be listed when we do the following:

docker ps

Here, we can get the CONTAINER ID from our running Docker:

CONTAINER ID IMAGE COMMAND STATUS
741bf50a0bfc chapter07 "java -jar microse..." Up About a minute

To see the logs, we can now run the following command:

docker logs 741bf50a0bfc

This will display the log of a running Docker; however, it will just exit after displaying the current logs. If we want to keep waiting for more output, as the Unix command tail -f does, we can instead do the following:

docker logs 741bf50a0bfc -f

With this, we quickly learned the Docker building process, along with various commands for working with our microservices. Do check out the book Hands-On Microservices with Kotlin to start creating Docker containers for your microservices and scale them in your production environment.


6 JavaScript micro optimizations you need to know

Savia Lobo
05 Apr 2018
18 min read
JavaScript micro optimizations can improve the performance of your JavaScript code. This means you can get it to do more – something that is essential at the scale of modern web applications, as greater efficiency in code leads to much stronger overall performance. Let us have a look at micro optimizations in detail.

Truthy/falsy comparisons

We have all, at some point, written if conditions or assigned default values by relying on the truthy or falsy nature of JavaScript variables. As helpful as it is most of the time, we need to consider the impact that such an operation has on our application. However, before we jump into the details, let's discuss how any condition is evaluated in JavaScript, specifically an if condition in this case. As a developer, we tend to do the following:

if(objOrNumber) {
// do something
}

This works for most of the cases, unless the number is 0, in which case it gets evaluated to false. That is a very common edge case, and most of us catch it anyway. However, what does the JavaScript engine have to do to evaluate this condition? How does it know whether objOrNumber evaluates to true or false? Let's return to the ECMA-262 spec and pull out the if statement section (https://www.ecma-international.org/ecma-262/5.1/#sec-12.5). The following is an excerpt of the same:

Semantics
The production IfStatement : if ( Expression ) Statement else Statement is evaluated as follows:
1. Let exprRef be the result of evaluating Expression.
2. If ToBoolean(GetValue(exprRef)) is true, then return the result of evaluating the first Statement.
3. Else, return the result of evaluating the second Statement.

Now, we note that whatever expression we pass goes through the following three steps:
1. Getting the exprRef from Expression.
2. GetValue is called on exprRef.
3. ToBoolean is called on the result of step 2.

Step 1 does not concern us much at this stage; think of it this way—an expression can be something like a == b or something like the shouldIEvaluateTheIFCondition() method call, that is, something that evaluates your condition. Step 2 extracts the value of the exprRef, that is, 10, true, undefined. In this step, we differentiate how the value is extracted based on the type of the exprRef. You can refer to the details of GetValue here. Step 3 then converts the value extracted from Step 2 into a Boolean value based on the ToBoolean table in the spec (https://www.ecma-international.org/ecma-262/5.1/#sec-9.2). At each step, you can see that it is always beneficial if we are able to provide a direct boolean value instead of a truthy or falsy value.

Looping optimizations

We can do a deep dive into the for loop, similar to what we did with the if condition earlier (https://www.ecma-international.org/ecma-262/5.1/#sec-12.6.3), but there are easier and more obvious optimizations that can be applied when it comes to loops. Simple changes can drastically affect the quality and performance of the code; consider this for example:

for(var i = 0; i < arr.length; i++) {
// logic
}

The preceding code can be changed as follows, so that the array length is not re-read on every iteration:

var len = arr.length;
for(var i = 0; i < len; i++) {
// logic
}

What is even better is to run the loop in reverse, which is even faster than what we have seen previously (note that we start at len - 1 so that we do not read past the end of the array):

var len = arr.length;
for(var i = len - 1; i >= 0; i--) {
// logic
}

The conditional function call

Some of the features that we have within our applications are conditional. For example, logging or analytics fall into this category.
Some applications may have logging turned off for some time and then turned back on. The most obvious way of achieving this is to wrap the method for logging within an if condition. However, since the method could be triggered a lot of times, there is another way in which we can make this optimization:

function someUserAction() {
// logic
if (analyticsEnabled) {
trackUserAnalytics();
}
}
// in some other class
function trackUserAnalytics() {
// save analytics
}

Instead of the preceding approach, we can try to do something which is only slightly different but allows V8-based engines to optimize the way the code is executed:

function someUserAction() {
// logic
trackUserAnalytics();
}
// in some other class
function toggleUserAnalytics() {
if(enabled) {
trackUserAnalytics = userAnalyticsMethod;
} else {
trackUserAnalytics = noOp;
}
}
function userAnalyticsMethod() {
// save analytics
}
// empty function
function noOp() {}

Now, the preceding implementation is a double-edged sword. The reason for that is very simple. JavaScript engines employ a technique called inline caching (IC), which means that any previous lookup for a certain method performed by the JS engine will be cached and reused when triggered the next time; for example, if we have an object that has a nested method, a.b.c, the method a.b.c will be looked up only once and stored in the cache (IC); if a.b.c is called the next time, it will be picked up from the IC, and the JS engine will not parse the whole chain again. If there are any changes to the a.b.c chain, then the IC gets invalidated and a new dynamic lookup is performed the next time instead of being retrieved from the IC.

So, from our previous example, when we have noOp assigned to the trackUserAnalytics() method, the method path gets tracked and saved within the IC, but the engine internally removes this function call as it is a call to an empty method. However, when it is applied to an actual function with some logic in it, the IC points directly to this new method. So, if we keep calling our toggleUserAnalytics() method multiple times, it keeps invalidating our IC, and our dynamic method lookup has to happen every time until the application state stabilizes (that is, toggleUserAnalytics() is no longer called).

Image and font optimizations

When it comes to image and font optimizations, there are no limits to the types and the scale of optimization that we can perform. However, we need to keep in mind our target audience, and we need to tailor our approach based on the problem at hand. With both images and fonts, the first and foremost important thing is that we do not overserve, that is, we request and send only the data that is necessary by determining the dimensions of the device that our application is running on. The simplest way to do this is by adding a cookie for your device size and sending it to the server along with each request. Once the server receives a request for an image, it can then retrieve the image based on the dimensions that were sent in the cookie. Most of the time these images are something like a user avatar or a list of people who commented on a certain post. We can agree that the thumbnail images do not need to be of the same size as those on the profile page, and we can save some bandwidth by transmitting a smaller image based on those dimensions. Since screens these days have very high Dots Per Inch (DPI), the media that we serve to them needs to be worthy of it.
Otherwise, the application looks bad and the images look all pixelated. This can be avoided using Vector images or SVGs, which can be GZipped over the wire, thus reducing the payload size. Another not so obvious optimization is changing the image compression type. Have you ever loaded a page in which the image loads from the top to bottom in small, incremental rectangles? By default, the images are compressed using a baseline technique, which is a default method of compressing the image from top to bottom. We can change this to be progressive compression using libraries such as imagemin. This would load the entire image first as blurred, then semi blurred, and so on until the entire image is uncompressed and displayed on the screen. Uncompressing a progressive JPEG might take a little longer than that of the baseline, so it is important to measure before making such optimizations. Another extension based on this concept is a Chrome-only format of an image called WebP. This is a highly effective way of serving images, which serves a lot of companies in production and saved almost 30% on bandwidth. Using WebP is almost as simple as the progressive compression as discussed previously. We can use the imagemin-webp node module, which can convert a JPEG image into a webp image, thus reducing the image size to a great extent. Web fonts are a little different than that of images. Images get downloaded and rendered onto the UI on demand, that is, when the browser encounters the image either from the HTML 0r CSS files. However, the fonts, on the other hand, are a little different. The font files are only requested when the Render Tree is completely constructed. That means that the CSSOM and DOM have to be ready by the time request is dispatched for the fonts. Also, if the fonts files are being served from the server and not locally, then there are chances that we may see the text without the font applied first (or no text at all) and then we see the font applied, which may cause a flashing effect of the text. There are multiple simple techniques to avoid this problem: Download, serve, and preload the font files locally: <link rel="preload" href="fonts/my-font.woff2" as="font"> Specify the unicode-range in the font-face so that browsers can adapt and improvise on the character set and glyphs that are actually expected by the browser: @font-face( ... unicode-range: U+000-5FF; // latin ... ) So far, we have seen that we can get the unstyled text to be loaded on to the UI and the get styled as we expected it to be; this can be changed using the font loading API, which allows us to load and render the font using JavaScript: var font = new FontFace("myFont", "url(/my-fonts/my-font.woff2)", { unicodeRange: 'U+000-5FF' }); // initiate a fetch without Render Tree font.load().then(function() { // apply the font document.fonts.add(font); document.body.style.fontFamily = "myFont"; }); Garbage collection in JavaScript Let's take a quick look at what garbage collection (GC) is and how we can handle it in JavaScript. A lot of low-level languages provide explicit capabilities to developers to allocate and free memory in their code. However, unlike those languages, JavaScript automatically handles the memory management, which is both a good and bad thing. Good because we no longer have to worry about how much memory we need to allocate, when we need to do so, and how to free the assigned memory. 
The bad part about the whole process is that, to an uninformed developer, this can be a recipe for disaster and they can end up with an application that might hang and crash. Luckily for us, understanding the process of GC is quite easy and can be very easily incorporated into our coding style to make sure that we are writing optimal code when it comes to memory management. Memory management has three very obvious steps:    Assign the memory to variables: var a = 10; // we assign a number to a memory location referenced by variable a    Use the variables to read or write from the memory: a += 3; // we read the memory location referenced by a and write a new value to it    Free the memory when it's no longer needed. Now, this is the part that is not explicit. How does the browser know when we are done with the variable a and it is ready to be garbage collected? Let's wrap this inside a function before we continue this discussion: function test() { var a = 10; a += 3; return a; } We have a very simple function, which just adds to our variable a and returns the result and finishes the execution. However, there is actually one more step, which will happen after the execution of this method called mark and sweep (not immediately after, sometimes this can also happen after a batch of operations is completed on the main thread). When the browser performs mark and sweep, it's dependent on the total memory the application consumes and the speed at which the memory is being consumed. Mark and sweep algorithm Since there is no accurate way to determine whether the data at a particular memory location is going to be used or not in the future, we will need to depend on alternatives which can help us make this decision. In JavaScript, we use the concept of a reference to determine whether a variable is still being used or not—if not, it can be garbage collected. The concept of mark and sweep is very straightforward: what all memory locations are reachable from all the known active memory locations? If something is not reachable, collect it, that is, free the memory. That's it, but what are the known active memory locations? It still needs a starting point, right? In most of the browsers, the GC algorithm keeps a list of the roots from which the mark and sweep process can be started. All the roots and their children are marked as active, and any variable that can be reached from these roots are also marked as active. Anything that cannot be reached can be marked as unreachable and thus collected. In most of the cases, the roots consist of the window object. So, we will go back to our previous example: function test() { var a = 10; a += 3; return a; } Our variable a is local to the test() method. As soon as the method is executed, there is no way to access that variable anymore, that is, no one holds any reference to that variable, and that is when it can be marked for garbage collection so that the next time GC runs, the var  a will be swept and the memory allocated to it can be freed. Garbage collection and V8 When it comes to V8, the process of garbage collection is extremely complex (as it should be). So, let's briefly discuss how V8 handles it. In V8, the memory (heap) is divided into two main generations, which are the new-space and old-space. Both new-space and old-space are assigned some memory (between 1 MB and 20 MB). Most of the programs and their variables when created are assigned within the new-space. 
As and when we create a new variable or perform an operation, which consumes memory, it is by default assigned from the new-space, which is optimized for memory allocation. Once the total memory allocated to the new-space is almost completely consumed, the browser triggers a Minor GC, which basically removes the variables that are no longer being referenced and marks the variables that are still being referenced and cannot be removed yet. Once a variable survives two or more Minor GCs, then it becomes a candidate for old-space where the GC cycle is not run as frequently as that of the new- space. A Major GC is triggered when the old-space is of a certain size, all of this is driven by the heuristics of the application, which is very important to the whole process. So, well- written programs move fewer objects into the old-space and thus have less Major GC events being triggered. Needless to say that this is a very high-level overview of what V8 does for garbage collection, and since this process keeps changing over time, we will switch gears and move on to the next topic. Avoiding memory leaks Well, now that we know on a high level what garbage collection is in JavaScript and how it works, let's take a look at some common pitfalls which prevent us from getting our variables marked for GC by the browser. Assigning variables to global scope This should be pretty obvious by now; we discussed how the GC mechanism determines a root (which is the window object) and treats everything on the root and its children as active and never marks them for garbage collection. So, the next time you forget to add a var to your variable declarations, remember that the global variable that you are creating will live forever and never get garbage collected: function test() { a = 10; // created on window object a += 3; return a; } Removing DOM elements and references It's imperative that we keep our DOM references to a minimum, so a well-known step that we like to perform is caching the DOM elements in our JavaScript so that we do not have to query any of the DOM elements over and over. However, once the DOM elements are removed, we will need to make sure that these methods are removed from our cache as well, otherwise, they will never get GC'd: var cache = { row: document.getElementById('row') }; function removeTable() { document.body.removeChild(document.getElementById('row')); } The code shown previously removes the row from the DOM but the variable cache still refers to the DOM element, hence preventing it from being garbage collected. Another interesting thing to note here is that even when we remove the table that was containing the row, the entire table would remain in the memory and not get GC'd because the row, which is in cache internally refers to the table. Closures edge case Closures are amazing; they help us deal with a lot of problematic scenarios and also provide us with ways in which we can simulate the concept of private variables. Well, all that is good, but sometimes we tend to overlook the potential downsides that are associated with the closures. 
Here is what we know and use:

function myGoodFunc() {
var a = new Array(10000000).join('*');
// something big enough to cause a spike in memory usage
function myGoodClosure() {
return a + ' added from closure';
}
myGoodClosure();
}
setInterval(myGoodFunc, 1000);

When we run this script in the browser and then profile it, we see, as expected, that the method consumes a constant amount of memory and is then GC'd and restored to the baseline memory consumed by the script. Now, let's zoom into one of these spikes and take a look at the call tree to determine which events are triggered around the time of the spikes. We can see that everything happens as per our expectation here; first, our setInterval() is triggered, which calls myGoodFunc(), and once the execution is done, there is a GC, which collects the data and hence the spike, as we can see from the preceding screenshots.

Now, this was the expected flow, or the happy path, when dealing with closures. However, sometimes our code is not as simple and we end up performing multiple things within one closure, and sometimes even end up nesting closures:

function myComplexFunc() {
var a = new Array(1000000).join('*');
// something big enough to cause a spike in memory usage
function closure1() {
return a + ' added from closure';
}
closure1();
function closure2() {
console.log('closure2 called')
}
setInterval(closure2, 100);
}
setInterval(myComplexFunc, 1000);

We can note in the preceding code that we extended our method to contain two closures now: closure1 and closure2. Although closure1 still performs the same operation as before, closure2 will run forever because we have it running at 10 times the frequency of the parent function. Also, since both closure methods share the parent closure scope, in this case the variable a, it will never get GC'd and thus causes a huge memory leak, which can be seen from the profile as follows. On a closer look, we can see that the GC is being triggered, but because of the frequency at which the methods are being called, the memory is slowly leaking (less memory is collected than is being created).

Well, that was an extreme edge case, right? It's way more theoretical than practical—why would anyone have two nested setInterval() methods with closures? Let's take a look at another example in which we no longer nest multiple setInterval() calls, but which is driven by the same logic. Let's assume that we have a method that creates closures:

var something = null;
function replaceValue () {
var previousValue = something;
// `unused` method loads the `previousValue` into closure scope
function unused() {
if (previousValue) console.log("hi");
}
// update something
something = {
str: new Array(1000000).join('*'),
// all closures within replaceValue share the same closure scope,
// hence someMethod would have access to previousValue, which is
// nothing but its parent object (`something`)
// since `someMethod` has access to its parent object, even when it
// is replaced by a new (identical) object in the next setInterval
// iteration, the previous value does not get garbage collected,
// because the someMethod on the previous value still maintains a
// reference to previousValue, and so on
someMethod: function () {}
};
}
setInterval(replaceValue, 1000);

A simple fix to solve this problem is obvious, as we have said ourselves that the previous value of the object something doesn't get garbage collected as it refers to the previousValue from the previous iteration.
So, the solution is to clear out previousValue at the end of each iteration, as the sketch above shows, leaving nothing for the previous something object to be referenced through once it is replaced; with that change, the memory profile no longer shows the slow leak seen earlier. To summarize, we introduced JavaScript micro-optimizations and memory optimizations that ultimately lead to high-performance JavaScript. If you have found this post useful, do check out the book Hands-On Data Structures and Algorithms with JavaScript for solutions to implement complex data structures and algorithms in a practical way.

Creating a reference generator for a job portal using Breadth First Search (BFS) algorithm

Savia Lobo
05 Apr 2018
11 min read
In this tutorial, we will create a reference generator for a job portal with the help of Breadth First Search (BFS) algorithm. For instance, we have few users who are friends with each other, we will create nodes for each of the user and associate each of the nodes with data, such as their name and the company they work in. Once we create all the nodes, we will join them based on some predefined relationships between the nodes. Then, we will use these predefined relationships to determine who a user would have to talk to, in order to get referred for a job interview at a company of their choice. For example, A who works at company X and B who works at company Y are friends, B and C who works at company Z are friends. So, if A wants to get referred to company Z, then A talks to B, who can introduce them to C for a referral to company Z. In most production-level apps, you will not be creating graphs in such a fashion. You can simply use a graph database, which can perform a lot of features out of the box. Returning to our example, in more technical terms, we have an undirected graph (think of users as nodes and friendship as edges between them), and we want to determine the shortest path from one node to another. To perform what we have described so far, we will be using a technique known as Breadth First Search (BFS). BFS is a graph traversal mechanism in which the neighboring nodes are examined or evaluated first before moving on to the next level. This helps to ensure that the number of links found in the resulting chain is always minimum, hence we always get the shortest possible path from node A to node B. Although there are other algorithms, such as Dijkstra, to achieve similar results, we will go with BFS because Dijkstra is a more complex algorithm that is well suited when each edge has an associated cost with it. For example, in our case, we would go with Dijkstra if our user's friendships have a weight associated with it such as acquaintance, friend, and close friend, which would help us associate weights with each of those paths. A good use case to consider Dijkstra would be for something such as a Maps application, which would give you directions from point A to B based on the traffic (that is, the weight or cost associated with each edge) in between. Creating a bidirectional graph We can start with logic for our graph by creating a new file under utils/graph.js, which will hold the edges and then provide a simple shortestPath method to access the Graph and apply the BFS algorithm on the graph that is generated, as shown in the following code: var _ = require('lodash'); class Graph { constructor(users) { // initialize edges this.edges = {}; // save users for later access this.users = users; // add users and edges of each _.forEach(users, (user) => { this.edges[user.id] = user.friends; }); } } module.exports = Graph; Once we add the edges to our graph, it has nodes (user IDs), and edges are defined as the relationship between each user ID and friend in the friends array, which is available for each user. Forming the graph was an easy task, thanks to the way our data is structured. 
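As a quick, hedged illustration (not part of the original excerpt), constructing the graph from the sample dataset listed next is a one-liner once the class is in place:

var Graph = require('./utils/graph'); // path assumed from the project layout above
var users = [ /* the sample dataset shown below */ ];

var graph = new Graph(users);
console.log(graph.edges[1]); // for the sample data: [2, 3, 4, 5, 7]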
In our example dataset, each user has a set of friends list, which is listed in the following code: [ { id: 1, name: 'Adam', company: 'Facebook', friends: [2, 3, 4, 5, 7] }, { id: 2, name: 'John', company: 'Google', friends: [1, 6, 8] }, { id: 3, name: 'Bill', company: 'Twitter', friends: [1, 4, 5, 8] }, { id: 4, name: 'Jose', company: 'Apple', friends: [1, 3, 6, 8] }, { id: 5, name: 'Jack', company: 'Samsung', friends: [1, 3, 7] }, { id: 6, name: 'Rita', company: 'Toyota', friends: [2, 4, 7, 8] }, { id: 7, name: 'Smith', company: 'Matlab', friends: [1, 5, 6, 8] }, { id: 8, name: 'Jane', company: 'Ford', friends: [2, 3, 4, 6, 7] } ] As you can note in the preceding code, we did not really have to establish a bidirectional edge exclusively here because if user 1 is a friend of user 2 then user 2 is also a friend of user 1. Generating a pseudocode  for the shortest path generation Before its implementation, let's quickly jot down what we are about to do so that the actual implementation becomes a lot easier: INITIALIZE tail to 0 for subsequent iterations MARK source node as visited WHILE result not found GET neighbors of latest visited node (extracted using tail) FOR each of the node IF node already visited RETURN Mark node as visited IF node is our expected result INITIALIZE result with current neighbor node WHILE not source node BACKTRACK steps by popping users from previously visited path until the source user ADD source user to the result CREATE and format result variable IF result found return control NO result found, add user to previously visited path ADD friend to queue for BFS in next iteration INCREMENT tail for next loop RETURN NO_RESULT Implementing the shortest path generation Let's now create our customized BFS algorithm to parse the graph and generate the shortest possible path for our user to get referred to company A: var _ = require('lodash'); class Graph { constructor(users) { // initialize edges this.edges = {}; // save users for later access this.users = users; // add users and edges of each _.forEach(users, (user) => { this.edges[user.id] = user.friends; }); } shortestPath(sourceUser, targetCompany) { // final shortestPath var shortestPath; // for iterating along the breadth var tail = 0; // queue of users being visited var queue = [ sourceUser ]; // mark visited users var visitedNodes = []; // previous path to backtrack steps when shortestPath is found var prevPath = {}; // request is same as response if (_.isEqual(sourceUser.company, targetCompany)) { return; } // mark source user as visited so // next time we skip the processing visitedNodes.push(sourceUser.id); // loop queue until match is found // OR until the end of queue i.e no match while (!shortestPath && tail < queue.length) { // take user breadth first var user = queue[tail]; // take nodes forming edges with user var friendsIds = this.edges[user.id]; // loop over each node _.forEach(friendsIds, (friendId) => { // result found in previous iteration, so we can stop if (shortestPath) return; // get all details of node var friend = _.find(this.users, ['id', friendId]); // if visited already, // nothing to recheck so return if (_.includes(visitedNodes, friendId)) { return; } // mark as visited visitedNodes.push(friendId); // if company matched if (_.isEqual(friend.company, targetCompany)) { // create result path with the matched node var path = [ friend ]; // keep backtracking until source user and add to path while (user.id !== sourceUser.id) { // add user to shortest path path.unshift(user); // prepare for next 
iteration user = prevPath[user.id]; } // add source user to the path path.unshift(user); // format and return shortestPath shortestPath = _.map(path, 'name').join(' -> '); } // break loop if shortestPath found if (shortestPath) return; // no match found at current user, // add it to previous path to help backtracking later prevPath[friend.id] = user; // add to queue in the order of visit // i.e. breadth wise for next iteration queue.push(friend); }); // increment counter tail++; } return shortestPath || `No path between ${sourceUser.name} & ${targetCompany}`; } } module.exports = Graph; The most important part of the code is when the match is found, as shown in the following code block from the preceding code: // if company matched if (_.isEqual(friend.company, targetCompany)) { // create result path with the matched node var path = [ friend ]; // keep backtracking until source user and add to path while (user.id !== sourceUser.id) { // add user to shortest path path.unshift(user); // prepare for next iteration user = prevPath[user.id]; } // add source user to the path path.unshift(user); // format and return shortestPath shortestPath = _.map(path, 'name').join(' -> '); } Here, we are employing a technique called backtracking, which helps us retrace our steps when the result is found. The idea here is that we add the current state of the iteration to a map whenever the result is not found—the key as the node being visited currently, and the value as the node from which we are visiting. So, for example, if we visited node 1 from node 3, then the map would contain { 1: 3 } until we visit node 1 from some other node, and when that happens, our map will update to point to the new node from which we got to node 1, such as { 1: newNode }. Once we set up these previous paths, we can easily trace our steps back by looking at this map. By adding some log statements (available only in the GitHub code to avoid confusion), we can easily take a look at the long but simple flow of the data. 
Let us take an example of the data set that we defined earlier, so when Bill tries to look for friends who can refer him to Toyota, we see the following log statements: starting the shortest path determination added 3 to the queue marked 3 as visited shortest path not found, moving on to next node in queue: 3 extracting neighbor nodes of node 3 (1,4,5,8) accessing neighbor 1 mark 1 as visited result not found, mark our path from 3 to 1 result not found, add 1 to queue for next iteration current queue content : 3,1 accessing neighbor 4 mark 4 as visited result not found, mark our path from 3 to 4 result not found, add 4 to queue for next iteration current queue content : 3,1,4 accessing neighbor 5 mark 5 as visited result not found, mark our path from 3 to 5 result not found, add 5 to queue for next iteration current queue content : 3,1,4,5 accessing neighbor 8 mark 8 as visited result not found, mark our path from 3 to 8 result not found, add 8 to queue for next iteration current queue content : 3,1,4,5,8 increment tail to 1 shortest path not found, moving on to next node in queue: 1 extracting neighbor nodes of node 1 (2,3,4,5,7) accessing neighbor 2 mark 2 as visited result not found, mark our path from 1 to 2 result not found, add 2 to queue for next iteration current queue content : 3,1,4,5,8,2 accessing neighbor 3 neighbor 3 already visited, return control to top accessing neighbor 4 neighbor 4 already visited, return control to top accessing neighbor 5 neighbor 5 already visited, return control to top accessing neighbor 7 mark 7 as visited result not found, mark our path from 1 to 7 result not found, add 7 to queue for next iteration current queue content : 3,1,4,5,8,2,7 increment tail to 2 shortest path not found, moving on to next node in queue: 4 extracting neighbor nodes of node 4 (1,3,6,8) accessing neighbor 1 neighbor 1 already visited, return control to top accessing neighbor 3 neighbor 3 already visited, return control to top accessing neighbor 6 mark 6 as visited result found at 6, add it to result path ([6]) backtracking steps to 3 we got to 6 from 4 update path accordingly: ([4,6]) add source user 3 to result form result [3,4,6] return result increment tail to 3 return result Bill -> Jose -> Rita What we basically have here is an iterative process using BFS to traverse the tree and backtracking the result. This forms the core of our functionality. Creating a web server We can now add a route to access this graph and its corresponding shortestPath method. 
Let's first create the route under routes/references and add it as a middleware to the web server: var express = require('express'); var app = express(); var bodyParser = require('body-parser'); // register endpoints var references = require('./routes/references'); // middleware to parse the body of input requests app.use(bodyParser.json()); // route middleware app.use('/references', references); // start server app.listen(3000, function () { console.log('Application listening on port 3000!'); }); Then, create the route as shown in the following code: var express = require('express'); var router = express.Router(); var Graph = require('../utils/graph'); var _ = require('lodash'); var userGraph; // sample set of users with friends // same as list shown earlier var users = [...]; // middleware to create the users graph router.use(function(req) { // form graph userGraph = new Graph(users); // continue to next step req.next(); }); // create the route for generating reference path // this can also be a get request with params based // on developer preference router.route('/') .post(function(req, res) { // take user Id const userId = req.body.userId; // target company name const companyName = req.body.companyName; // extract current user info const user = _.find(users, ['id', userId]); // get shortest path const path = userGraph.shortestPath(user, companyName); // return res.send(path); }); module.exports = router; Running the reference generator To test this, simply start the web server by running the npm start command from the root of the project as shown earlier. Once the server is up and running, you can use any tool you wish to post the request to your web server, as shown in the following screenshot: As you can see in the preceding screenshot, we get the response back as expected. This can, of course, be changed in a way to return all the user objects instead of just the names. That could be a fun extension of the example for you to try on your own. We learned to create a reference generator for a job portal using the Breadth First Search (BFS) algorithm in JavaScript. If you have found this post interesting, do check out this book, Hands-On Data Structures and Algorithms with JavaScript to create and employ various data structures in a way that is demanded by your project or use case.  
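For reference, the same request shown in the screenshot can also be composed from the command line; this is a hedged example that simply follows the port, route, and body fields used in the server code above:

curl -X POST http://localhost:3000/references \
  -H "Content-Type: application/json" \
  -d '{"userId": 3, "companyName": "Toyota"}'

For the sample dataset, this should return the path traced in the log walkthrough: Bill -> Jose -> Rita.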

How to work with the Selenium IntelliJ IDEA plugin

Amey Varangaonkar
03 Apr 2018
3 min read
Most of the framework components you design and build will be customized to your application under test. However, there are many third-party tools and plugins available, which you can use to provide better results processing, reporting, performance, and services to engineers using the framework. In this article, we cover one of the most popular plugins used with Selenium - the Selenium IntelliJ IDEA plugin. IntelliJ IDEA Selenium plugin When we covered building page object classes earlier, we discussed how to define the locators on a page for each WebElement or MobileElement using the @findBy annotations. That required the user to use one of the Inspectors or plugins to view the DOM structure and hand-code a robust locator that is cross-platform safe. Now, when using CSS and XPath locators, the hierarchy of the element can get complex, and there is a greater chance of building invalid locators. So, Perfect Test has come up with a Selenium plugin for the IntelliJ IDEA that will find and create locators on the fly. Before discussing some of the features of the plugin, let's review where this is located. Sample project files There are instructions on the www.perfect-test.com site for installing the plugin and once that is done, users can create a new project using a sample template, which will auto- generate a series of template files. These files are generic "getting started" files, but you should still follow the structure and design of the framework as outlined in this book. Here is a quick screenshot of the autogenerated file structure of the sample project: Once the plugin is enabled by simply clicking on the Selenium icon in the toolbar, users can use the Code Generate menu features to create code samples, Java methods, getter/setter methods, WebElements, copyrights for files, locators, and so on. Generating element locators The plugin has a nice feature for creating WebElement definitions, adding locators of choice, and validating them in the class. It provides a set of tooltips to tell the user what is incorrect in the syntax of the locator, which is helpful when creating CSS and XPath strings. Here is a screenshot of the locator strategy feature: Once the WebElement structure is built into the page object class, you can capture and verify the locator, and it will indicate an error with a red underline. When moving over the invalid syntax, it provides a tooltip and a lightbulb icon to the left of it, where users can use features for Check Element Existence on page and Fix Locator Popup. These are very useful for quickly finding syntax errors and defining locators. Here is a screenshot of the Check Element Existence on page feature: Here is a screenshot of the Fix Locator Popup feature: The Selenium IntelliJ plugin deals mostly with creating locators and the differences between CSS and XPath syntax. The tool also provides drop-down lists of examples where users can pick and choose how to build the queries. It's a great way to get started using Selenium to build real page object classes, and it provides a tool to validate complex CSS and XPath structures in locators! Apart from the Selenium IntelliJ plugin, there are other third-party APIs such as HTML Publisher Plugin, BrowserMob Proxy Plugin, ExtentReports Reporter API and also Sauce Labs Test Cloud services.  This article is an excerpt taken from the book Selenium Framework Design in Data-Driven Testing by Carl Cocchiaro. 
It presents a step-by-step approach to design and build a data-driven test framework using Selenium WebDriver, Java, and TestNG.  
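To tie this back to the locator discussion, here is a minimal, hedged sketch of how generated CSS and XPath locators typically land in a page object class (the class name and locator strings are hypothetical, not taken from the book):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class LoginPO {

    // locator strings like these are what the plugin helps build and validate
    @FindBy(css = "input#username")
    protected WebElement username;

    @FindBy(xpath = "//button[@type='submit']")
    protected WebElement submit;

    public LoginPO(WebDriver driver) {
        // initialize the annotated elements for this page object
        PageFactory.initElements(driver, this);
    }
}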

How to handle exceptions and synchronization methods with Selenium WebDriver API

Amey Varangaonkar
02 Apr 2018
11 min read
One of the areas often misunderstood, but is important in framework design is exception handling. Users must program into their tests and methods on how to handle exceptions that might occur in tests, including those that are thrown by applications themselves, and those that occur using the Selenium WebDriver API. In this article, we will see how to do that effectively. Let us look at different kinds of exceptions that users must account for: Implicit exceptions: Implicit exceptions are internal exceptions raised by the API method when a certain condition is not met, such as an illegal index of an array, null pointer, file not found, or something unexpected occurring at runtime. Explicit exceptions: Explicit exceptions are thrown by the user to transfer control out of the current method, and to another event handler when certain conditions are not met, such as an object is not found on the page, a test verification fails, or something expected as a known state is not met. In other words, the user is predicting that something will occur, and explicitly throws an exception if it does not. WebDriver exceptions: The Selenium WebDriver API has its own set of exceptions that can implicitly occur when elements are not found, elements are not visible, elements are not enabled or clickable, and so on. They are thrown by the WebDriver API method, but users can catch those exceptions and explicitly handle them in a predictable way. Try...catch blocks: In Java, exception handling can be completely controlled using a try...catch block of statements to transfer control to another method, so that the exit out of the current routine doesn't transfer control to the call handler up the chain, but rather, is handled in a predictable way before the exception is thrown. Let us examine the different ways of handling exceptions during automated testing. Implicit exception handling A simple example of Selenium WebDriver implicit exception handling can be described as follows: Define an element on a page Create a method to retrieve the text from the element on the page In the signature of the method, add throws Exception Do not handle a specific exception like ElementNotFoundException: // create a method to retrieve the text from an element on a page @FindBy(id="submit") protected M submit; public String getText(WebElement element) throws Exception { return element.getText(); } // use the method LoginPO.getText(submit); Now, when using an assertion method, TestNG will implicitly throw an exception if the condition is not met: Define an element on a page Create a method to verify the text of the element on a page Cast the expected and actual text to the TestNG's assertEquals method TestNG will throw an AssertionError TestNG engages the difference viewer to compare the result if it fails: // create a method to verify the text from an element on a page @FindBy(id="submit") protected M submit; public void verifyText(WebElement element, String expText) throws AssertionError { assertEquals(element.getText(), expText, "Verify Submit Button Text"); } // use the method LoginPO.verifyText(submit, "Sign Inx"); // throws AssertionError java.lang.AssertionError: Verify Text Label expected [ Sign Inx] but found [ Sign In] Expected : Sign Inx Actual : Sign In <Click to see difference> TestNG difference viewer When using the TestNG's assertEquals methods, a difference viewer will be engaged if the comparison fails. There will be a link in the stacktrace in the console to open it. 
Since it is an overloaded method, it can take a number of data types, such as String, Integer, Boolean, Arrays, Objects, and so on. The following screenshot displays the TestNG difference viewer: Explicit exception handling In cases where the user can predict when an error might occur in the application, they can check for that error and explicitly raise an exception if it is found. Take the login function of a browser or mobile application as an example. If the user credentials are incorrect, the app will throw an exception saying something like "username invalid, try again" or "password incorrect, please re-enter". The exception can be explicitly handled in a way that the actual error message can be thrown in the exception. Here is an example of the login method we wrote earlier with exception handling added to it: @FindBy(id="myApp_exception") protected M error; /** * login - method to login to app with error handling * * @param username * @param password * @throws Exception */ public void login(String username, String password) throws Exception { if ( !this.username.getAttribute("value").equals("") ) { this.username.clear(); } this.username.sendKeys(username); if ( !this.password.getAttribute( "value" ).equals( "" ) ) { this.password.clear(); } this.password.sendKeys(password); submit.click(); // exception handling if ( BrowserUtils.elementExists(error, Global_VARS.TIMEOUT_SECOND) ) { String getError = error.getText(); throw new Exception("Login Failed with error = " + getError); } } Try...catch exception handling Now, sometimes the user will want to trap an exception instead of throwing it, and perform some other action such as retry, reload page, cleanup dialogs, and so on. In cases like that, the user can use try...catch in Java to trap the exception. The action would be included in the try clause, and the user can decide what to do in the catch condition. Here is a simple example that uses the ExpectedConditions method to look for an element on a page, and only return true or false if it is found. No exception will be raised:  /** * elementExists - wrapper around the WebDriverWait method to * return true or false * * @param element * @param timer * @throws Exception */ public static boolean elementExists(WebElement element, int timer) { try { WebDriver driver = CreateDriver.getInstance().getCurrentDriver(); WebDriverWait exists = new WebDriverWait(driver, timer); exists.until(ExpectedConditions.refreshed( ExpectedConditions.visibilityOf(element))); return true; } catch (StaleElementReferenceException | TimeoutException | NoSuchElementException e) { return false; } } In cases where the element is not found on the page, the Selenium WebDriver will return a specific exception such as ElementNotFoundException. If the element is not visible on the page, it will return ElementNotVisibleException, and so on. Users can catch those specific exceptions in a try...catch...finally block, and do something specific for each type (reload page, re-cache element, and so on): try { .... } catch(ElementNotFoundException e) { // do something } catch(ElementNotVisibleException f) { // do something else } finally { // cleanup } Synchronizing methods Earlier, the login method was introduced, and in that method, we will now call one of the synchronization methods waitFor(title, timer) that we created in the utility classes. This method will wait for the login page to appear with the title element as defined. 
So, in essence, after the URL is loaded, the login method is called, and it synchronizes against a predefined page title. If the waitFor method doesn't find it, it will throw an exception, and the login will not be attempted. It's important to predict and synchronize the page object methods so that they do not get out of "sync" with the application and continue executing when a state has not been reached during the test. This becomes a tedious process during the development of the page object methods, but pays big dividends in the long run when making those methods "robust". Also, users do not have to synchronize before accessing each element. Usually, you would synchronize against the last control rendered on a page when navigating between them. In the same login method, it's not enough to just check and wait for the login page title to appear before logging in; users must also wait for the next page to render, that being the home page of the application. So, finally, in the login method we just built, another waitFor will be added: public void login(String username, String password) throws Exception { BrowserUtils.waitFor(getPageTitle(), getElementWait()); if ( !this.username.getAttribute("value").equals("") ) { this.username.clear(); } this.username.sendKeys(username); if ( !this.password.getAttribute( "value" ).equals( "" ) ) { this.password.clear(); } this.password.sendKeys(password); submit.click(); // exception handling if ( BrowserUtils.elementExists(error, Global_VARS.TIMEOUT_SECOND) ) { String getError = error.getText(); throw new Exception("Login Failed with error = " + getError); } // wait for the home page to appear BrowserUtils.waitFor(new MyAppHomePO<WebElement>().getPageTitle(), getElementWait()); } Table classes When building the page object classes, there will frequently be components on a page that are common to multiple pages, but not all pages, and rather than including the similar locators and methods in each class, users can build a common class for just that portion of the page. HTML tables are a typical example of a common component that can be classed. So, what users can do is create a generic class for the common table rows and columns, extend the subclasses that have a table with this new class, and pass in the dynamic ID or locator to the constructor when extending the subclass with that table class. 
Let's take a look at how this is done: Create a new page object class for the table component in the application, but do not derive it from the base class in the framework In the constructor of the new class, add a parameter of the type WebElement, requiring users to pass in the static element defined in each subclass for that specific table Create generic methods to get the row count, column count, row data, and cell data for the table In each subclass that inherits these methods, implement them for each page, varying the starting row number and/or column header rows if <th> is used rather than <tr> When the methods are called on each table, it will identify them using the WebElement passed into the constructor: /** * WebTable Page Object Class * * @author Name */ public class WebTablePO { private WebElement table; /** constructor * * @param table * @throws Exception */ public WebTablePO(WebElement table) throws Exception { setTable(table); } /** * setTable - method to set the table on the page * * @param table * @throws Exception */ public void setTable(WebElement table) throws Exception { this.table = table; } /** * getTable - method to get the table on the page * * @return WebElement * @throws Exception */ public WebElement getTable() throws Exception { return this.table; } .... Now, the structure of the class is simple so far, so let's add in some common "generic" methods that can be inherited and extended by each subclass that extends the class: // Note: JavaDoc will be eliminated in these examples for simplicity sake public int getRowCount() { List<WebElement> tableRows = table.findElements(By.tagName("tr")); return tableRows.size(); } public int getColumnCount() { List<WebElement> tableRows = table.findElements(By.tagName("tr")); WebElement headerRow = tableRows.get(1); List<WebElement> tableCols = headerRow.findElements(By.tagName("td")); return tableCols.size(); } public int getColumnCount(int index) { List<WebElement> tableRows = table.findElements(By.tagName("tr")); WebElement headerRow = tableRows.get(index); List<WebElement> tableCols = headerRow.findElements(By.tagName("td")); return tableCols.size(); } public String getRowData(int rowIndex) { List<WebElement> tableRows = table.findElements(By.tagName("tr")); WebElement currentRow = tableRows.get(rowIndex); return currentRow.getText(); } public String getCellData(int rowIndex, int colIndex) { List<WebElement> tableRows = table.findElements(By.tagName("tr")); WebElement currentRow = tableRows.get(rowIndex); List<WebElement> tableCols = currentRow.findElements(By.tagName("td")); WebElement cell = tableCols.get(colIndex - 1); return cell.getText(); } Finally, let's extend a subclass with the new WebTablePO class, and implement some of the methods: /** * Homepage Page Object Class * * @author Name */ public class MyHomepagePO<M extends WebElement> extends WebTablePO<M> { public MyHomepagePO(M table) throws Exception { super(table); } @FindBy(id = "my_table") protected M myTable; // table methods public int getTableRowCount() throws Exception { WebTablePO table = new WebTablePO(getTable()); return table.getRowCount(); } public int getTableColumnCount() throws Exception { WebTablePO table = new WebTablePO(getTable()); return table.getColumnCount(); } public int getTableColumnCount(int index) throws Exception { WebTablePO table = new WebTablePO(getTable()); return table.getColumnCount(index); } public String getTableCellData(int row, int column) throws Exception { WebTablePO table = new WebTablePO(getTable()); return 
table.getCellData(row, column); } public String getTableRowData(int row) throws Exception { WebTablePO table = new WebTablePO(getTable()); return table.getRowData(row).replace("\n", " "); } public void verifyTableRowData(String expRowText) { String actRowText = ""; int totalNumRows = getTableRowCount(); // parse each row until row data found for ( int i = 0; i < totalNumRows; i++ ) { if ( this.getTableRowData(i).contains(expRowText) ) { actRowText = this.getTableRowData(i); break; } } // verify the row data try { assertEquals(actRowText, expRowText, "Verify Row Data"); } catch (AssertionError e) { String error = "Row data '" + expRowText + "' Not found!"; throw new Exception(error); } } } We saw, how fairly effective it is to handle object class methods, especially when it comes to handling synchronization and exceptions. You read an excerpt from the book Selenium Framework Design in Data-Driven Testing by Carl Cocchiaro. The book will show you how to design your own automation testing framework without any hassle.
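One helper referenced repeatedly above, BrowserUtils.waitFor(title, timer), is not shown in this excerpt. A minimal sketch of what such a method might look like, assuming the page title is passed as a string and building on the same WebDriverWait API used in elementExists, is:

public static void waitFor(String title, int timer) throws Exception {
    WebDriver driver = CreateDriver.getInstance().getCurrentDriver();
    WebDriverWait wait = new WebDriverWait(driver, timer);
    // block until the page title contains the expected text,
    // or throw TimeoutException after the given number of seconds
    wait.until(ExpectedConditions.titleContains(title));
}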

3 best practices to develop effective test automation with Selenium

Amey Varangaonkar
30 Mar 2018
5 min read
In this article, we will look at some of the industry best practices and standards to use in order to develop and maintain effective test automation strategies with Selenium. 1. Naming Convention When developing the framework, it is important to establish some naming convention standards for each type of file created. In general, this is completely subjective. But it is important to establish them upfront so users can use the same file naming conventions for the same file types to avoid confusion later on, when there are many users building them. Here are a few suggestions: Utility classes: Utility classes don't use any prefix or suffix in their names, but do follow Java standards such as having the first letter of each word capitalized, and ending with .java extensions. (Acronyms used can be all caps). Examples include CreateDriver.java, Global_VARS.java, BrowserUtils.java, DataProvider_JSON.java, and so on. Page object classes: It is useful to be able to differentiate the page object classes from the utility classes. A good way to name them is FeaturePO.java, where PO stands for page object and is capitalized, along with the first letter of each word. End the name with a .java extension. Test classes: It is useful to be able to differentiate the test classes from the PO and utility classes. A good way to name them is FeatureTest.java, where Test stands for test class, and the first letter of each word is capitalized. End the name with a .java extension. Data files: Data files are obviously named with an extension for the type of file, such as .json, .csv, .xls, and so on. But, in the case of this framework, the files can be named the same as the corresponding test class, but without the word Test. For example, LoginCredsTest.java would have the data file LoginCreds.json. Setup classes: Usually, there is a common setup class for setup and teardown for all test classes, that can be named AUTSetup.java. So, as an example, GmailSetup.java would be the setup class for all test classes derived from it, and contains only TestNG annotated methods. Test methods: Most test methods in each test class are named using sequential numbering, followed by a feature and action. For example: tc001_gmailLoginCreds, tc002_gmailLoginPassword, and so on. Setup/teardown methods: The setup and teardown methods can be named according to the setup or teardown action they perform. The following naming conventions can be used in conjunction with the TestNG annotations: @BeforeSuite: The suiteSetup method @AfterSuite: The suiteTeardown method @BeforeClass: The classSetup method @AfterClass: The classTeardown method @BeforeMethod: The methodSetup method @AfterMethod: The methodTeardown method 2. Comments Although obvious and somewhat subjective, it is good practice to comment on code when it is not obvious why something is done, there is a complex routine, or there is a "kluge" added to work around a problem. In Java, there are two types of comments used, as well as a set of standards for JavaDoc. We will look at a couple of examples here: [box type="info" align="" class="" width=""]There is an Oracle article on using comments in Java located at http://www. oracle.com/ technetwork/java/codeconventions-141999.html#385[/box] Block comment: /* single line block comment */ code goes here… /* * multi-line block * comment */ code goes here... 
End-of-line comment: code goes here // end of line comment JavaDoc comments: /** * Description of the method * * @param arg1 to the method * @param arg2 to the method * return value returned from the method */ [box type="info" align="" class="" width=""]The Oracle documentation on using the JavaDoc tool is located at http://www.oracle.com/technetwork/java/javase/documentation/index-137868.html. [/box] 3. Folder names and structures As the framework starts to evolve, there needs to be some organization around the folder structure in the IDE, along with a naming convention. The IntelliJ IDE uses modules to organize the repo, and under those modules, users can create the folder structures. It is common to also separate the page object and utility classes from the test classes. So, as an example, under the top-level folder src, create main/java/com/yourCo/page objects and test/java/com/yourCo/tests folders. From there, under each structure, users can create feature-based folders. Also, to retain a completely independent set of libraries for the Selenium driver and utility classes, create a separate module called something like Selenium3 with the same folder structures. This will allow users to use the same driver class and utilities for any additional modules that are added to the repo/framework. It is common to automate testing for more than one application, and this will allow the inclusion of the module in those additional modules. Here are a few suggestions regarding folder naming conventions: Name all the folders using lowercase names, so there won't be a mix-and-match of different standards. Name the page object class folders after the features they pertain to; for instance, login for the LoginPO.java, email for the GmailPO.java, and so on. Name the test class folders after the same features as the PO classes, but under the test folder. Then there can be a one-to-one correlation between the PO and test class folders. Store the common base classes under a common folder under main. Store the common setup classes under a common folder under test. Store all the utility classes for the AUT under a utils folder under main. Store all the suite files for the tests under a suites folder under test. Here is an example of a folder structure for the Selenium3 module. Of course, there are no test folders under this one: Here is an example of a folder structure for an AUT module showing the PO and test class Folders: You read an excerpt from the book Selenium Framework Design in Data-Driven Testing  written by Carl Cocchiaro. This book presents effective techniques for building data-driven test frameworks using Selenium WebDriver.
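As a hedged illustration of the naming and annotation conventions described above (the package, class, and method names are made up for the example), a test class might be laid out like this:

package com.yourco.tests.login; // lowercase, feature-based test folder

import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginTest {

    @BeforeClass
    public void classSetup() {
        // class-level setup
    }

    @Test
    public void tc001_gmailLoginCreds() {
        // sequentially numbered test: feature + action
    }

    @AfterClass
    public void classTeardown() {
        // class-level teardown
    }
}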

Getting started with Django and Django REST frameworks to build a RESTful app

Sugandha Lahoti
29 Mar 2018
12 min read
In this article, we will learn how to install Django and Django REST framework in an isolated environment. We will also look at the Django folders, files, and configurations, and how to create an app with Django. We will also introduce various command-line and GUI tools that are use to interact with the RESTful Web Services. Installing Django and Django REST frameworks in an isolated environment First, run the following command to install the Django web framework: pip install django==1.11.5 The last lines of the output will indicate that the django package has been successfully installed. The process will also install the pytz package that provides world time zone definitions. Take into account that you may also see a notice to upgrade pip. The next lines show a sample of the four last lines of the output generated by a successful pip installation: Collecting django Collecting pytz (from django) Installing collected packages: pytz, django Successfully installed django-1.11.5 pytz-2017.2 Now that we have installed the Django web framework, we can install Django REST framework. Django REST framework works on top of Django and provides us with a powerful and flexible toolkit to build RESTful Web Services. We just need to run the following command to install this package: pip install djangorestframework==3.6.4 The last lines for the output will indicate that the djangorestframework package has been successfully installed, as shown here: Collecting djangorestframework Installing collected packages: djangorestframework Successfully installed djangorestframework-3.6.4 After following the previous steps, we will have Django REST framework 3.6.4 and Django 1.11.5 installed in our virtual environment. Creating an app with Django Now, we will create our first app with Django and we will analyze the directory structure that Django creates. First, go to the root folder for the virtual environment: 01. In Linux or macOS, enter the following command: cd ~/HillarDjangoREST/01 If you prefer Command Prompt, run the following command in the Windows command line: cd /d %USERPROFILE%\HillarDjangoREST\01 If you prefer Windows PowerShell, run the following command in Windows PowerShell: cd /d $env:USERPROFILE\HillarDjangoREST\01 In Linux or macOS, run the following command to create a new Django project named restful01. The command won't produce any output: python bin/django-admin.py startproject restful01 In Windows, in either Command Prompt or PowerShell, run the following command to create a new Django project named restful01. The command won't produce any output: python Scripts\django-admin.py startproject restful01 The previous command creates a restful01 folder with other subfolders and Python files. Now, go to the recently created restful01 folder. Just execute the following command on any platform: cd restful01 Then, run the following command to create a new Django app named toys within the restful01 Django project. The command won't produce any output: python manage.py startapp toys The previous command creates a new restful01/toys subfolder, with the following files: views.py tests.py models.py apps.py admin.py   init  .py In addition, the restful01/toys folder will have a migrations subfolder with an init  .py Python script. 
The following diagram shows the folders and files in the directory tree, starting at the restful01 folder with two subfolders - toys and restful01: Understanding Django folders, files, and configurations After we create our first Django project and then a Django app, there are many new folders and files. First, use your favorite editor or IDE to check the Python code in the apps.py file within the restful01/toys folder (restful01\toys in Windows). The following lines show the code for this file: from django.apps import AppConfig class ToysConfig(AppConfig): name = 'toys' The code declares the ToysConfig class as a subclass of the django.apps.AppConfig class that represents a Django application and its configuration. The ToysConfig class just defines the name class attribute and sets its value to 'toys'. Now, we have to add toys.apps.ToysConfig as one of the installed apps in the restful01/settings.py file that configures settings for the restful01 Django project. I built the previous string by concatenating many values as follows: app name + .apps. + class name, which is, toys + .apps. + ToysConfig. In addition, we have to add the rest_framework app to make it possible for us to use Django REST framework. The restful01/settings.py file is a Python module with module-level variables that define the configuration of Django for the restful01 project. We will make some changes to this Django settings file. Open the restful01/settings.py file and locate the highlighted lines that specify the strings list that declares the installed apps. The following code shows the first lines for the settings.py file. Note that the file has more code: """ Django settings for restful01 project. Generated by 'django-admin startproject' using Django 1.11.5. For more information on this file, see https://docs.djangoproject.com/en/1.11/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/1.11/ref/settings/ """ import os # Build paths inside the project like this: os.path.join(BASE_DIR, ...) BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath( file ))) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = '+uyg(tmn%eo+fpg+fcwmm&x(2x0gml8)=cs@$nijab%)y$a*xe' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ]         Add the following two strings to the INSTALLED_APPS strings list and save the changes to the restful01/settings.py file: 'rest_framework' 'toys.apps.ToysConfig' The following lines show the new code that declares the INSTALLED_APPS string list with the added lines highlighted and with comments to understand what each added string means. The code file for the sample is included in the hillar_django_restful_01 folder: INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', # Django REST framework 'rest_framework', # Toys application 'toys.apps.ToysConfig', ] This way, we have added Django REST framework and the toys application to our initial Django project named restful01. 
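At this point it is easy to sanity-check the configuration. As a hedged suggestion (not a step from the original text), Django's built-in check command will flag a typo in the INSTALLED_APPS entries:

python manage.py check

If everything is wired correctly, the command reports that no issues were identified.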
Installing tools Now, we will leave Django for a while and we will install many tools that we will use to interact with the RESTful Web Services that we will develop throughout this book. We will use the following different kinds of tools to compose and send HTTP requests and visualize the responses throughout our book: Command-line tools GUI tools Python code Web browser JavaScript code You can use any other application that allows you to compose and send HTTP requests. There are many apps that run on tablets and smartphones that allow you to accomplish this task. However, we will focus our attention on the most useful tools when building RESTful Web Services with Django. Installing Curl We will start installing command-line tools. One of the key advantages of command-line tools is that you can easily run again the HTTP requests again after we have built them for the first time, and we don't need to use the mouse or tap the screen to run requests. We can also easily build a script with batch requests and run them. As happens with any command-line tool, it can take more time to perform the first requests compared with GUI tools, but it becomes easier once we have performed many requests and we can easily reuse the commands we have written in the past to compose new requests. Curl, also known as cURL, is a very popular open source command-line tool and library that allows us to easily transfer data. We can use the curl command-line tool to easily compose and send HTTP requests and check their responses. In Linux or macOS, you can open a Terminal and start using curl from the command line. In Windows, you have two options. You can work with curl in Command Prompt or you can decide to install curl as part of the Cygwin package installation option and execute it from the Cygwin terminal. You can read more about the Cygwin terminal and its installation procedure at: http://cygwin.com/install.html. Windows Powershell includes a curl alias that calls the Invoke-WebRequest command, and therefore, if you want to work with Windows Powershell with curl, it is necessary to remove the curl alias. If you want to use the curl command within Command Prompt, you just need to download and unzip the latest version of the curl download page: https://curl.haxx.se/download.html. Make sure you download the version that includes SSL and SSH. The following screenshot shows the available downloads for Windows. The Win64 - Generic section includes the versions that we can run in Command Prompt or Windows Powershell. After you unzip the .7zip or .zip file you have downloaded, you can include the folder in which curl.exe is included in your path. For example, if you unzip the Win64 x86_64.7zip file, you will find curl.exe in the bin folder. The following screenshot shows the results of executing curl --version on Command Prompt in Windows 10. The --version option makes curl display its version and all the libraries, protocols, and features it supports: Installing HTTPie Now, we will install HTTPie, a command-line HTTP client written in Python that makes it easy to send HTTP requests and uses a syntax that is easier than curl. By default, HTTPie displays colorized output and uses multiple lines to display the response details. In some cases, HTTPie makes it easier to understand the responses than the curl utility. 
However, one of the great disadvantages of HTTPie as a command-line utility is that it takes more time to load than curl, and therefore, if you want to code scripts with too many commands, you have to evaluate whether it makes sense to use HTTPie. We just need to make sure we run the following command in the virtual environment we have just created and activated. This way, we will install HTTPie only for our virtual environment. Run the following command in the terminal, Command Prompt, or Windows PowerShell to install the httpie package: pip install --upgrade httpie The last lines of the output will indicate that the httpie package has been successfully installed: Collecting httpie Collecting colorama>=0.2.4 (from httpie) Collecting requests>=2.11.0 (from httpie) Collecting Pygments>=2.1.3 (from httpie) Collecting idna<2.7,>=2.5 (from requests>=2.11.0->httpie) Collecting urllib3<1.23,>=1.21.1 (from requests>=2.11.0->httpie) Collecting chardet<3.1.0,>=3.0.2 (from requests>=2.11.0->httpie) Collecting certifi>=2017.4.17 (from requests>=2.11.0->httpie) Installing collected packages: colorama, idna, urllib3, chardet, certifi, requests, Pygments, httpie Successfully installed Pygments-2.2.0 certifi-2017.7.27.1 chardet-3.0.4 colorama-0.3.9 httpie-0.9.9 idna-2.6 requests-2.18.4 urllib3-1.22 Now, we will be able to use the http command to easily compose and send HTTP requests to our future RESTful Web Services build with Django. The following screenshot shows the results of executing http on Command Prompt in Windows 10. HTTPie displays the valid options and indicates that a URL is required: Installing the Postman REST client So far, we have installed two terminal-based or command-line tools to compose and send HTTP requests to our Django development server: cURL and HTTPie. Now, we will start installing Graphical User Interface (GUI) tools. Postman is a very popular API testing suite GUI tool that allows us to easily compose and send HTTP requests, among other features. Postman is available as a standalone app in Linux, macOS, and Windows. You can download the versions of the Postman app from the following URL: https://www.getpostman.com. The following screenshot shows the HTTP GET request builder in Postman: Installing Stoplight Stoplight is a very useful GUI tool that focuses on helping architects and developers to model complex APIs. If we need to consume our RESTful Web Service in many different programming languages, we will find Stoplight extremely helpful. Stoplight provides an HTTP request maker that allows us to compose and send requests and generate the necessary code to make them in different programming languages, such as JavaScript, Swift, C#, PHP, Node, and Go, among others. Stoplight provides a web version and is also available as a standalone app in Linux, macOS, and Windows. You can download the versions of Stoplight from the following URL: http://stoplight.io/. The following screenshot shows the HTTP GET request builder in Stoplight with the code generation at the bottom: Installing iCurlHTTP We can also use apps that can compose and send HTTP requests from mobile devices to work with our RESTful Web Services. For example, we can work with the iCurlHTTP app on iOS devices such as iPad and iPhone: https://itunes.apple.com/us/app/icurlhttp/id611943891. On Android devices, we can work with the HTTP Request app: https://play.google.com/store/apps/details?id=air.http.request&hl=en. 
The following screenshot shows the UI for the iCurlHTTP app running on an iPad Pro: At the time of writing, the mobile apps that allow you to compose and send HTTP requests do not provide all the features you can find in Postman or command-line utilities. We learnt to set up a virtual environment with Django and Django REST framework and created an app with Django. We looked at Django folders, files, and configurations and installed command-line and GUI tools to interact with the RESTful Web Services. This article is an excerpt from the book, Django RESTful Web Services, written by Gaston C. Hillar. This book serves as an easy guide to build Python RESTful APIs and web services with Django. The code bundle for the article is hosted on GitHub.
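As a quick, hedged example of the command-line tools installed above (the endpoint shown is hypothetical until the toys API is actually built), an HTTPie request and its curl equivalent against the local Django development server would look like this:

http GET http://localhost:8000/toys/
curl -iX GET http://localhost:8000/toys/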

How to build and deploy Microservices using Payara Micro

Gebin George
28 Mar 2018
9 min read
Payara Micro offers a new way to run Java EE or microservice applications. It is based on the Web profile of Glassfish and bundles few additional APIs. The distribution is designed keeping modern containerized environment in mind. Payara Micro is available to download as a standalone executable JAR, as well as a Docker image. It's an open source MicroProfile compatible runtime. Today, we will learn to use payara micro to build and deploy microservices. Here’s a list of APIs that are supported in Payara Micro: Servlets, JSTL, EL, and JSPs WebSockets JSF JAX-RS Chapter 4 [ 91 ] EJB lite JTA JPA Bean Validation CDI Interceptors JBatch Concurrency JCache We will be exploring how to build our services using Payara Micro in the next section. Building services with Payara Micro Let's start building parts of our Issue Management System (IMS), which is going to be a one-stop-destination for collaboration among teams. As the name implies, this system will be used for managing issues that are raised as tickets and get assigned to users for resolution. To begin the project, we will identify our microservice candidates based on the business model of IMS. Here, let's define three functional services, which will be hosted in their own independent Git repositories: ims-micro-users ims-micro-tasks ims-micro-notify You might wonder, why these three and why separate repositories? We could create much more fine-grained services and perhaps it wouldn't be wrong to do so. The answer lies in understanding the following points: Isolating what varies: We need to be able to independently develop and deploy each unit. Changes to one business capability or domain shouldn't require changes in other services more often than desired. Organisation or Team structure: If you define teams by business capability, then they can work independent of others and release features with greater agility. The tasks team should be able to evolve independent of the teams that are handling users or notifications. The functional boundaries should allow independent version and release cycle management. Transactional boundaries for consistency: Distributed transactions are not easy, thus creating services for related features that are too fine grained, and lead to more complexity than desired. You would need to become familiar with concepts like eventual consistency, but these are not easy to achieve in practice. Source repository per service: Setting up a single repository that hosts all the services is ideal when it's the same team that works on these services and the project is relatively small. But we are building our fictional IMS, which is a large complex system with many moving parts. Separate teams would get tightly coupled by sharing a repository. Moreover, versioning and tagging of releases will be yet another problem to solve. The projects are created as standard Java EE projects, which are Skinny WARs, that will be deployed using the Payara Micro server. Payara Micro allows us to delay the decision of using a Fat JAR or Skinny WAR. This gives us flexibility in picking the deployment choice at a later stage. 
As Maven is a widely adopted build tool among developers, we will use it to create our example projects, using the following commands:

mvn archetype:generate -DgroupId=org.jee8ng -DartifactId=ims-micro-users -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

mvn archetype:generate -DgroupId=org.jee8ng -DartifactId=ims-micro-tasks -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

mvn archetype:generate -DgroupId=org.jee8ng -DartifactId=ims-micro-notify -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

Once the structure is generated, update the properties and dependencies sections of pom.xml with the following contents, for all three projects:

<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
  <failOnMissingWebXml>false</failOnMissingWebXml>
</properties>

<dependencies>
  <dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
  </dependency>
</dependencies>

Next, create a beans.xml file under the WEB-INF folder for all three projects:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
       http://xmlns.jcp.org/xml/ns/javaee/beans_2_0.xsd"
       bean-discovery-mode="all">
</beans>

You can delete the index.jsp and web.xml files, as we won't be needing them. The project structure of ims-micro-users (shown as a screenshot in the original article) is also used for ims-micro-tasks and ims-micro-notify. The package names for the users, tasks, and notify services will be as follows:

org.jee8ng.ims.users (inside ims-micro-users)
org.jee8ng.ims.tasks (inside ims-micro-tasks)
org.jee8ng.ims.notify (inside ims-micro-notify)

Each of the above will in turn have sub-packages called boundary, control, and entity. The structure follows the Boundary-Control-Entity (BCE)/Entity-Control-Boundary (ECB) pattern.

The JaxrsActivator shown as follows is required to enable the JAX-RS API and thus needs to be placed in each of the projects:

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("resources")
public class JaxrsActivator extends Application {}

All three projects will have REST endpoints that we can invoke over HTTP. When doing RESTful API design, a popular convention is to use plural names for resources, especially if the resource could represent a collection. For example:

/users
/tasks

The resource class names in the projects use the plural form, as it's consistent with the resource URL naming used. This avoids confusion such as a resource URL being called a users resource while the class is named UserResource. Given that this is an opinionated approach, feel free to use singular class names if desired. Here's the relevant code for the ims-micro-users, ims-micro-tasks, and ims-micro-notify projects respectively.
Under ims-micro-users, define the UsersResource endpoint:

package org.jee8ng.ims.users.boundary;

import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("users")
public class UsersResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response get() {
        return Response.ok("user works").build();
    }
}

Under ims-micro-tasks, define the TasksResource endpoint:

package org.jee8ng.ims.tasks.boundary;

import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("tasks")
public class TasksResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response get() {
        return Response.ok("task works").build();
    }
}

Under ims-micro-notify, define the NotificationsResource endpoint:

package org.jee8ng.ims.notify.boundary;

import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("notifications")
public class NotificationsResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response get() {
        return Response.ok("notification works").build();
    }
}

Once you build all three projects using mvn clean install, the Skinny WAR files are generated in each project's target directory, ready to be deployed on the Payara Micro server.

Running services with Payara Micro

Download the Payara Micro server, if you haven't already, from this link: https://www.payara.fish/downloads. The micro server will have the name payara-micro-xxx.jar, where xxx is the version number, which might be different when you download the file.

Here's how you can start Payara Micro with our services deployed locally. When doing so, we need to ensure that the instances start on different ports to avoid any port conflicts:

java -jar payara-micro-xxx.jar --deploy ims-micro-users/target/ims-micro-users.war --port 8081
java -jar payara-micro-xxx.jar --deploy ims-micro-tasks/target/ims-micro-tasks.war --port 8082
java -jar payara-micro-xxx.jar --deploy ims-micro-notify/target/ims-micro-notify.war --port 8083

This will start three instances of Payara Micro running on the specified ports, making our applications available under these URLs:

http://localhost:8081/ims-micro-users/resources/users/
http://localhost:8082/ims-micro-tasks/resources/tasks/
http://localhost:8083/ims-micro-notify/resources/notifications/

Payara Micro can be started on a non-default port by using the --port parameter, as we did earlier. This is useful when running multiple instances on the same machine. Another option is to use the --autoBindHttp parameter, which will attempt to bind on 8080 as the default port, and if that port is unavailable, it will try to bind on the next port up, repeating until it finds an available port.

Uber JAR option: There's one more feature that Payara Micro provides. We can generate an Uber JAR as well, which is the Fat JAR approach that we learnt in the Fat JAR section. To package our ims-micro-users project as an Uber JAR, we can run the following command:

java -jar payara-micro-xxx.jar --deploy ims-micro-users/target/ims-micro-users.war --outputUberJar users.jar

This will generate the users.jar file in the directory where you run this command. The size of this JAR will naturally be larger than our WAR file, since it also bundles the Payara Micro runtime in it. Here's how you can start the application using the generated JAR:

java -jar users.jar

The server parameters that we used earlier can be passed to this runnable JAR file too.
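Whichever way you start the instances (separate WAR deployments or the Uber JAR), you can quickly verify that each service responds. Based on the resource classes shown earlier, which simply return plain confirmation strings, the calls below should produce output similar to the following (response headers omitted):

curl -H 'Accept: application/json' http://localhost:8081/ims-micro-users/resources/users/
user works

curl -H 'Accept: application/json' http://localhost:8082/ims-micro-tasks/resources/tasks/
task works

curl -H 'Accept: application/json' http://localhost:8083/ims-micro-notify/resources/notifications/
notification works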
Apart from the two choices we saw for running our microservice projects, there's a third option as well: Payara Micro provides an API-based approach, which can be used to programmatically start the embedded server. A minimal sketch of this approach follows.
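The following is a hedged sketch of that programmatic option using the fish.payara.micro bootstrap API. The class name, port, and WAR path are illustrative assumptions, and the exact API surface may vary between Payara Micro versions:

import java.io.File;

import fish.payara.micro.BootstrapException;
import fish.payara.micro.PayaraMicro;

public class ImsUsersLauncher {

    public static void main(String[] args) throws BootstrapException {
        // Configure the embedded runtime: choose the HTTP port and register the WAR to deploy
        PayaraMicro.getInstance()
                .setHttpPort(8081)
                .addDeploymentFile(new File("ims-micro-users/target/ims-micro-users.war"))
                .bootStrap(); // boots the embedded Payara Micro instance with the deployment
    }
}

Running this class with the Payara Micro JAR on the classpath starts the users service much like the command-line --deploy option did.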
We will expand upon these three services as we progress further into the realm of cloud-based Java EE. We saw how to leverage the power of Payara Micro to run Java EE or microservice applications.

You read an excerpt from the book Java EE 8 and Angular, written by Prashant Padmanabhan. The book helps you build high-performing enterprise applications using Java EE, powered by Angular at the frontend.


How to build Microservices using REST framework

Gebin George
28 Mar 2018
7 min read
Today, we will learn to build microservices using the REST framework. Our microservices are Java EE 8 web projects, built using Maven and published as separate Payara Micro instances, running within Docker containers. The separation allows them to scale individually, as well as have independent operational activities. Given the BCE pattern used, we have the business component split into boundary, control, and entity, where the boundary comprises the web resource (REST endpoint) and the business service (EJB). The web resource publishes the CRUD operations, and the EJB in turn provides the transactional support for each of them, along with making external calls to other resources. The logical view of the boundary thus consists of the web resource and the business service.

The microservices have REST endpoints published for the projects shown, along with the boundary classes XXXResource and XXXService. Similar to Chapter 6, Power Your APIs with JAXRS and CDI, for Server-Sent Events, in IMS we publish task/issue updates to the browser using an SSE endpoint. The code observes the events using the CDI event notification model and triggers the broadcast.

The ims-users and ims-issues endpoints are similar in API format and behavior. While one deals with creating, reading, updating, and deleting a User, the other does the same for an Issue. Let's look at this in action. After you have the containers running, we can start firing requests to the /users web resource.

The following curl command maps the URI /users to the @GET resource method named getAll() and returns a collection (JSON array) of users. The Java code will simply return a Set<User>, which gets converted to a JsonArray due to the JSON binding support of JSON-B. The method invoked is as follows:

@GET
public Response getAll() { ... }

curl -v -H 'Accept: application/json' http://localhost:8081/ims-users/resources/users
...
HTTP/1.1 200 OK
...
[{
  "id":1,"name":"Marcus","email":"marcus_jee8@testem.com",
  "credential":{"password":"1234","username":"marcus"}
},
{
  "id":2,"name":"Bob","email":"bob@testem.com",
  "credential":{"password":"1234","username":"bob"}
}]

Next, for selecting one of the users, such as Marcus, we will issue the following curl command, which uses the /users/xxx path. This will map the URI to the @GET method which has the additional @Path("{id}") annotation as well. The value of the id is captured using the @PathParam("id") annotation placed before the field. The response is a User entity wrapped in the Response object returned. The method invoked is as follows:

@GET
@Path("{id}")
public Response get(@PathParam("id") Long id) { ... }

curl -v -H 'Accept: application/json' http://localhost:8081/ims-users/resources/users/1
...
HTTP/1.1 200 OK
...
{
  "id":1,"name":"Marcus","email":"marcus_jee8@testem.com",
  "credential":{"password":"1234","username":"marcus"}
}

In both the preceding methods, we saw the response returned as 200 OK. This is achieved by using a Response builder. Here's the snippet for the method:

return Response.ok( /* entity here */ ).build();

Next, for submitting data to the resource method, we use the @POST annotation. You might have noticed earlier that the signature of the method also makes use of a UriInfo object. This is injected at runtime for us via the @Context annotation. A curl command can be used to submit the JSON data of a user entity. The method invoked is as follows:

@POST
public Response add(User newUser, @Context UriInfo uriInfo)

We make use of the -d flag to send the JSON body in the request.
Since the -d flag is present, curl implies a POST request:

curl -v -H 'Content-Type: application/json' http://localhost:8081/ims-users/resources/users -d '{"name": "james", "email":"james@testem.io", "credential": {"username":"james","password":"test123"}}'
...
HTTP/1.1 201 Created
...
Location: http://localhost:8081/ims-users/resources/users/3

The 201 status code is sent by the API to signal that an entity has been created, and it also returns the location for the newly created entity. Here's the relevant snippet to do this:

// uriInfo is injected via the @Context parameter to this method
URI location = uriInfo.getAbsolutePathBuilder()
        .path(newUserId) // This is the new entity ID
        .build();
// To send the 201 status with the new Location
return Response.created(location).build();

Similarly, we can also send an update request using the PUT method. The method invoked is as follows:

@PUT
@Path("{id}")
public Response update(@PathParam("id") Long id, User existingUser)

curl -v -X PUT -H 'Content-Type: application/json' http://localhost:8081/ims-users/resources/users/3 -d '{"name": "jameson", "email":"james@testem.io"}'
...
HTTP/1.1 200 OK

The last method we need to map is the DELETE method, which is similar to the GET operation, with the only difference being the HTTP method used. The method invoked is as follows:

@DELETE
@Path("{id}")
public Response delete(@PathParam("id") Long id)

curl -v -X DELETE http://localhost:8081/ims-users/resources/users/3
...
HTTP/1.1 200 OK

You can try out the Issues endpoint in a similar manner.

For the GET requests of /users or /issues, the code simply fetches and returns a set of entity objects. But when requesting an item within this collection, the resource method has to look up the entity by the passed-in id value, captured by @PathParam("id"), and if found, return the entity, or else a 404 Not Found is returned. Here's a snippet showing just that:

final Optional<Issue> issueFound = service.get(id); // id obtained
if (issueFound.isPresent()) {
    return Response.ok(issueFound.get()).build();
}
return Response.status(Response.Status.NOT_FOUND).build();

The issue instance can be fetched from a database of issues, which the service object interacts with. The persistence layer can return a JPA entity object which gets converted to JSON for the calling code. We will look at persistence using JPA in a later section.

For the update request, which is sent as an HTTP PUT, the code captures the identifier ID using @PathParam("id"), similar to the previous GET operation, and then uses that to update the entity. The entity itself is submitted as JSON input and gets converted to the entity instance along with the passed-in message body of the payload. Here's the code snippet for that:

@PUT
@Path("{id}")
public Response update(@PathParam("id") Long id, Issue updated) {
    updated.setId(id);
    boolean done = service.update(updated);
    return done
            ? Response.ok(updated).build()
            : Response.status(Response.Status.NOT_FOUND).build();
}

The code is simple to read and does one thing: it updates the identified entity and returns the response containing the updated entity, or a 404 for a non-existing entity.

The service references that we have looked at so far are @Stateless beans which are injected into the resource class as fields:

// Project: ims-comments
@Stateless
public class CommentsService { ... }

// Project: ims-issues
@Stateless
public class IssuesService { ... }

// Project: ims-users
@Stateless
public class UsersService { ... }

These will in turn have the EntityManager injected via @PersistenceContext.
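To make the boundary picture concrete, here is a minimal, hypothetical sketch of what UsersService could look like when backed by JPA. The entity package, persistence unit, and method names are assumptions inferred from the snippets above (for example, the Optional-returning get used by the resource), not the book's exact implementation:

package org.jee8ng.ims.users.control;

import java.util.Optional;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.jee8ng.ims.users.entity.User; // hypothetical entity package following the BCE split

@Stateless
public class UsersService {

    // The default persistence unit configured in persistence.xml is assumed here
    @PersistenceContext
    private EntityManager em;

    // Look up a single user; Optional lets the resource map absence to 404 Not Found
    public Optional<User> get(Long id) {
        return Optional.ofNullable(em.find(User.class, id));
    }

    // Persist a new user; the generated ID becomes part of the 201 Location header
    public User add(User newUser) {
        em.persist(newUser);
        return newUser;
    }

    // Merge changes into an existing user; false signals an unknown ID
    public boolean update(User existingUser) {
        if (em.find(User.class, existingUser.getId()) == null) {
            return false;
        }
        em.merge(existingUser);
        return true;
    }
}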
Combined with the resource and service, our components have made the boundary ready for clients to use. Similar to the WebSockets section in Chapter 6, Power Your APIs with JAXRS and CDI, in IMS we use a @ServerEndpoint which maintains the list of active sessions and then uses that to broadcast a message to all users who are connected. A ChatThread keeps track of the messages being exchanged through the @ServerEndpoint class. For the message to be sent, we take the stream of sessions, filter it down to the open ones, and then send the message to each of them:

chatSessions.getSessions().stream().filter(Session::isOpen)
    .forEach(s -> {
        try {
            s.getBasicRemote().sendObject(chatMessage);
        } catch (Exception e) { ... }
    });
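The following is a minimal, hypothetical sketch of how such a WebSocket endpoint could track sessions for broadcasting; the endpoint path, class name, and static session set are illustrative assumptions rather than the book's ChatThread implementation:

package org.jee8ng.ims.issues.boundary; // hypothetical location

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/updates")
public class UpdatesEndpoint {

    // Thread-safe set of active sessions shared by all endpoint instances
    private static final Set<Session> SESSIONS = ConcurrentHashMap.newKeySet();

    @OnOpen
    public void onOpen(Session session) {
        SESSIONS.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        SESSIONS.remove(session);
    }

    // Broadcast a text message to every connected, open session
    public static void broadcast(String message) {
        SESSIONS.stream().filter(Session::isOpen).forEach(s -> {
            try {
                s.getBasicRemote().sendText(message);
            } catch (Exception e) {
                // Log and continue with the remaining sessions
            }
        });
    }
}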
To summarize, we practically saw how to leverage the REST framework to build microservices.

This article is an excerpt from the book Java EE 8 and Angular, written by Prashant Padmanabhan. The book covers building modern, user-friendly web apps with Java EE.

Getting started with Django RESTful Web Services

Sugandha Lahoti
27 Mar 2018
19 min read
In this article, we will kick off by learning about RESTful Web Services and subsequently learn how to create models and perform migration, serialization, and deserialization in Django. Here's a quick glance at what to expect in this article:

Defining the requirements for the RESTful Web Service
Analyzing and understanding Django tables and the database
Controlling serialization and deserialization in Django

Defining the requirements for the RESTful Web Service

Imagine a team of developers working on a mobile app for iOS and Android that requires a RESTful Web Service to perform CRUD operations with toys. We definitely don't want to use a mock web service, and we don't want to spend time choosing and configuring an ORM (short for Object-Relational Mapping). We want to quickly build a RESTful Web Service and have it ready as soon as possible to start interacting with it in the mobile app.

We really want the toys to persist in a database, but we don't need it to be production-ready. Therefore, we can use the simplest possible relational database, as long as we don't have to spend time performing complex installations or configurations. Django REST framework, also known as DRF, will allow us to easily accomplish this task and start making HTTP requests to the first version of our RESTful Web Service. In this case, we will work with a very simple SQLite database, the default database for a new Django REST framework project.

First, we must specify the requirements for our main resource: a toy. We need the following attributes or fields for a toy entity:

An integer identifier
A name
An optional description
A toy category description, such as action figures, dolls, or playsets
A release date
A bool value indicating whether the toy has been on the online store's homepage at least once

In addition, we want to have a timestamp with the date and time of the toy's addition to the database table, which will be generated to persist toys.

In a RESTful Web Service, each resource has its own unique URL. In our web service, each toy will have its own unique URL. The following table shows the HTTP verbs, the scope, and the semantics of the methods that our first version of the web service must support. Each method is composed of an HTTP verb and a scope, and all the methods have a well-defined meaning for toys and collections:

HTTP verb | Scope              | Semantics
GET       | Toy                | Retrieve a single toy
GET       | Collection of toys | Retrieve all the stored toys in the collection, sorted by their name in ascending order
POST      | Collection of toys | Create a new toy in the collection
PUT       | Toy                | Update an existing toy
DELETE    | Toy                | Delete an existing toy

In the previous table, the GET HTTP verb appears twice but with two different scopes: toys and collection of toys. The first row shows a GET HTTP verb applied to a toy, that is, to a single resource. The second row shows a GET HTTP verb applied to a collection of toys, that is, to a collection of resources.

We want our web service to be able to differentiate collections from a single resource of the collection in the URLs. When we refer to a collection, we will use a slash (/) as the last character for the URL, as in http://localhost:8000/toys/. When we refer to a single resource of the collection, we won't use a slash (/) as the last character for the URL, as in http://localhost:8000/toys/5.

Let's consider that http://localhost:8000/toys/ is the URL for the collection of toys. If we add a number to the previous URL, we identify a specific toy with an ID or primary key equal to the specified numeric value.
For example, http://localhost:8000/toys/42 identifies the toy with an ID equal to 42.

We have to compose and send an HTTP request with the POST HTTP verb and the http://localhost:8000/toys/ request URL to create a new toy and add it to the toys collection. In this example, our RESTful Web Service will work with JSON (short for JavaScript Object Notation), and therefore we have to provide the JSON key-value pairs with the field names and the values to create the new toy. As a result of the request, the server will validate the provided values for the fields, make sure that it is a valid toy, and persist it in the database. The server will insert a new row with the new toy in the appropriate table, and it will return a 201 Created status code and a JSON body with the recently added toy serialized to JSON, including the assigned ID that was automatically generated by the database and assigned to the toy object:

POST http://localhost:8000/toys/

We have to compose and send an HTTP request with the GET HTTP verb and the http://localhost:8000/toys/{id} request URL to retrieve the toy whose ID matches the specified numeric value in {id}. For example, if we use the request URL http://localhost:8000/toys/25, the server will retrieve the toy whose ID matches 25. As a result of the request, the server will retrieve a toy with the specified ID from the database and create the appropriate toy object in Python. If a toy is found, the server will serialize the toy object into JSON, return a 200 OK status code, and return a JSON body with the serialized toy object. If no toy matches the specified ID, the server will return only a 404 Not Found status:

GET http://localhost:8000/toys/{id}

We have to compose and send an HTTP request with the PUT HTTP verb and the request URL http://localhost:8000/toys/{id} to retrieve the toy whose ID matches the value in {id} and replace it with a toy created with the provided data. In addition, we have to provide the JSON key-value pairs with the field names and the values to create the new toy that will replace the existing one. As a result of the request, the server will validate the provided values for the fields, make sure that it is a valid toy, and replace the one that matches the specified ID with the new one in the database. The ID for the toy will be the same after the update operation. The server will update the existing row in the appropriate table, and it will return a 200 OK status code and a JSON body with the recently updated toy serialized to JSON. If we don't provide all the necessary data for the new toy, the server will return a 400 Bad Request status code. If the server doesn't find a toy with the specified ID, the server will only return a 404 Not Found status:

PUT http://localhost:8000/toys/{id}

We have to compose and send an HTTP request with the DELETE HTTP verb and the request URL http://localhost:8000/toys/{id} to remove the toy whose ID matches the specified numeric value in {id}. For example, if we use the request URL http://localhost:8000/toys/34, the server will delete the toy whose ID matches 34. As a result of the request, the server will retrieve a toy with the specified ID from the database and create the appropriate toy object in Python. If a toy is found, the server will request that the ORM delete the toy row associated with this toy object, and the server will return a 204 No Content status code.
If no toy matches the specified ID, the server will return only a 404 Not Found status:

DELETE http://localhost:8000/toys/{id}

Creating first Django model

Now, we will create a simple Toy model in Django, which we will use to represent and persist toys. Open the toys/models.py file. The following lines show the initial code for this file, with just one import statement and a comment that indicates we should create the models:

from django.db import models

# Create your models here.

The following lines show the new code that creates a Toy class, specifically, a Toy model, in the toys/models.py file. The code file for the sample is included in the hillar_django_restful_02_01 folder, in the restful01/toys/models.py file:

from django.db import models


class Toy(models.Model):
    created = models.DateTimeField(auto_now_add=True)
    name = models.CharField(max_length=150, blank=False, default='')
    description = models.CharField(max_length=250, blank=True, default='')
    toy_category = models.CharField(max_length=200, blank=False, default='')
    release_date = models.DateTimeField()
    was_included_in_home = models.BooleanField(default=False)

    class Meta:
        ordering = ('name',)

The Toy class is a subclass of the django.db.models.Model class and defines the following attributes: created, name, description, toy_category, release_date, and was_included_in_home. Each of these attributes represents a database column or field. We specified the field types, maximum lengths, and defaults for many attributes. The class declares a Meta inner class that declares an ordering attribute and sets its value to a tuple of strings whose first value is the 'name' string. This way, the inner class indicates to Django that, by default, we want the results ordered by the name attribute in ascending order.

Running initial migration

Now, it is necessary to create the initial migration for the new Toy model we recently coded. We will also synchronize the SQLite database for the first time. By default, Django uses the popular self-contained and embedded SQLite database, and therefore we don't need to make changes in the initial ORM configuration. In this example, we will be working with this default configuration. Of course, we will upgrade to another database after we have a sample web service built with Django; we will only use SQLite for this example.

We just need to run the following Python script in the virtual environment that we activated in the previous chapter. Make sure you are in the restful01 folder within the main folder for the virtual environment when you run the following command:

python manage.py makemigrations toys

The following lines show the output generated after running the previous command:

Migrations for 'toys':
  toys/migrations/0001_initial.py:
    - Create model Toy

The output indicates that the restful01/toys/migrations/0001_initial.py file includes the code to create the Toy model. The following lines show the code for this file that was automatically generated by Django.
The code file for the sample is included in the hillar_django_restful_02_01 folder, in the restful01/toys/migrations/0001_initial.py file:

# -*- coding: utf-8 -*-
# Generated by Django 1.11.5 on 2017-10-08 05:19
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Toy',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('name', models.CharField(default='', max_length=150)),
                ('description', models.CharField(blank=True, default='', max_length=250)),
                ('toy_category', models.CharField(default='', max_length=200)),
                ('release_date', models.DateTimeField()),
                ('was_included_in_home', models.BooleanField(default=False)),
            ],
            options={
                'ordering': ('name',),
            },
        ),
    ]

Understanding migrations

The automatically generated code defines a subclass of the django.db.migrations.Migration class named Migration, which defines an operation that creates the Toy model's table and includes it in the operations attribute. The call to the migrations.CreateModel method specifies the model's name, the fields, and the options to instruct the ORM to create a table that will allow the underlying database to persist the model. The fields argument is a list of tuples that includes information about the field name, the field type, and additional attributes, based on the data we provided in our model, that is, in the Toy class.

Now, run the following Python script to apply all the generated migrations. Make sure you are in the restful01 folder within the main folder for the virtual environment when you run the following command:

python manage.py migrate

The following lines show the output generated after running the previous command:

Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions, toys
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying sessions.0001_initial... OK
  Applying toys.0001_initial... OK

After we run the previous command, we will notice that the root folder for our restful01 project now has a db.sqlite3 file that contains the SQLite database. We can use the SQLite command line, or any other application that allows us to easily check the contents of the SQLite database, to check the tables that Django generated.

The first migration will generate many tables required by Django and its installed apps before running the code that creates the table for the Toy model. These tables provide support for user authentication, permissions, groups, logs, and migration management. We will work with the models related to these tables after we add more features and security to our web services. After the migration process creates all these Django tables in the underlying database, the first migration runs the Python code that creates the table required to persist our model.
Thus, the last line of the Running migrations section displays Applying toys.0001_initial.

Analyzing the database

In most modern Linux distributions and macOS, SQLite is already installed, and therefore you can run the sqlite3 command-line utility. In Windows, if you want to work with the sqlite3.exe command-line utility, you have to download the bundle of command-line tools for managing SQLite database files from the downloads section of the SQLite webpage at http://www.sqlite.org/download.html. For example, the ZIP file that includes the command-line tools for version 3.20.1 is sqlite-tools-win32-x86-3200100.zip. The name of the file changes with the SQLite version. You just need to make sure that you download the bundle of command-line tools and not the ZIP file that provides the SQLite DLLs. After you unzip the file, you can include the folder that contains the command-line tools in the PATH environment variable, or you can access the sqlite3.exe command-line utility by specifying the full path to it.

Run the following command to list the generated tables. The first argument, db.sqlite3, specifies the file that contains the SQLite database, and the second argument indicates the command that we want the sqlite3 command-line utility to run against the specified database:

sqlite3 db.sqlite3 ".tables"

The following lines show the output for the previous command, with the list of tables that Django generated in the SQLite database:

auth_group
auth_group_permissions
auth_permission
auth_user
auth_user_groups
auth_user_user_permissions
django_admin_log
django_content_type
django_migrations
django_session
toys_toy

The following command will allow you to check the contents of the toys_toy table after we compose and send HTTP requests to the RESTful Web Service and the web service performs CRUD operations on the toys_toy table:

sqlite3 db.sqlite3 "SELECT * FROM toys_toy ORDER BY name;"

Instead of working with the SQLite command-line utility, you can use a GUI tool to check the contents of the SQLite database. DB Browser for SQLite is a useful, free, multiplatform GUI tool that allows us to easily check the database contents of an SQLite database in Linux, macOS, and Windows. You can read more information about this tool and download its different versions from http://sqlitebrowser.org. Once you have installed the tool, you just need to open the db.sqlite3 file and you can check the database structure and browse the data for the different tables. After we start working with the first version of our web service, you can check the contents of the toys_toy table with this tool.

The SQLite database engine and the database file name are specified in the restful01/settings.py Python file. The following lines show the declaration of the DATABASES dictionary, which contains the settings for all the databases that Django uses. The nested dictionary maps the database named default with the django.db.backends.sqlite3 database engine and the db.sqlite3 database file located in the BASE_DIR folder (restful01):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

After we execute the migrations, the SQLite database will have the following tables. Django uses prefixes to identify the modules and applications that each table belongs to. The tables that start with the auth_ prefix belong to the Django authentication module.
The table that starts with the toys_ prefix belongs to our toys application. If we add more models to our toys application, Django will create new tables with the toys_ prefix:

auth_group: Stores authentication groups
auth_group_permissions: Stores permissions for authentication groups
auth_permission: Stores permissions for authentication
auth_user: Stores authentication users
auth_user_groups: Stores authentication user groups
auth_user_groups_permissions: Stores permissions for authentication user groups
django_admin_log: Stores the Django administrator log
django_content_type: Stores Django content types
django_migrations: Stores the scripts generated by Django migrations and the date and time at which they were applied
django_session: Stores Django sessions
toys_toy: Persists the Toy model
sqlite_sequence: Stores sequences for SQLite primary keys with autoincrement fields

Understanding the table generated by Django

The toys_toy table persists in the database the Toy class we recently created, specifically, the Toy model. Django's integrated ORM generated the toys_toy table based on our Toy model. Run the following command to retrieve the SQL used to create the toys_toy table:

sqlite3 db.sqlite3 ".schema toys_toy"

The following lines show the output for the previous command together with the SQL that the migrations process executed to create the toys_toy table that persists the Toy model. The next lines are formatted to make it easier to understand the SQL code; notice that the output from the command is formatted in a different way:

CREATE TABLE IF NOT EXISTS "toys_toy" (
    "id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
    "created" datetime NOT NULL,
    "name" varchar(150) NOT NULL,
    "description" varchar(250) NOT NULL,
    "toy_category" varchar(200) NOT NULL,
    "release_date" datetime NOT NULL,
    "was_included_in_home" bool NOT NULL
);

The toys_toy table has the following columns (also known as fields) with their SQLite types, all of them not nullable:

id: The integer primary key, an autoincrement row
created: datetime
name: varchar(150)
description: varchar(250)
toy_category: varchar(200)
release_date: datetime
was_included_in_home: bool

Controlling serialization and deserialization

Our RESTful Web Service has to be able to serialize and deserialize the Toy instances into JSON representations. In Django REST framework, we just need to create a serializer class for the Toy instances to manage serialization to JSON and deserialization from JSON. Now, we will dive deep into the serialization and deserialization process in Django REST framework. It is very important to understand how it works because it is one of the most important components for all the RESTful Web Services we will build.

Django REST framework uses a two-phase process for serialization. The serializers are mediators between the model instances and Python primitives. Parsers and renderers act as mediators between Python primitives and HTTP requests and responses. We will configure our mediator between the Toy model instances and Python primitives by creating a subclass of the rest_framework.serializers.Serializer class to declare the fields and the necessary methods to manage serialization and deserialization.

We will repeat some of the information about the fields that we have included in the Toy model so that we understand all the things that we can configure in a subclass of the Serializer class. However, we will work with shortcuts, which will reduce boilerplate code, later in the following examples.
We will write less code in the following examples by using the ModelSerializer class.

Now, go to the restful01/toys folder and create a new Python code file named serializers.py. The following lines show the code that declares the new ToySerializer class. The code file for the sample is included in the hillar_django_restful_02_01 folder, in the restful01/toys/serializers.py file:

from rest_framework import serializers
from toys.models import Toy


class ToySerializer(serializers.Serializer):
    pk = serializers.IntegerField(read_only=True)
    name = serializers.CharField(max_length=150)
    description = serializers.CharField(max_length=250)
    release_date = serializers.DateTimeField()
    toy_category = serializers.CharField(max_length=200)
    was_included_in_home = serializers.BooleanField(required=False)

    def create(self, validated_data):
        return Toy.objects.create(**validated_data)

    def update(self, instance, validated_data):
        instance.name = validated_data.get('name', instance.name)
        instance.description = validated_data.get('description', instance.description)
        instance.release_date = validated_data.get('release_date', instance.release_date)
        instance.toy_category = validated_data.get('toy_category', instance.toy_category)
        instance.was_included_in_home = validated_data.get('was_included_in_home', instance.was_included_in_home)
        instance.save()
        return instance

The ToySerializer class declares the attributes that represent the fields that we want to be serialized. Notice that we have omitted the created attribute that was present in the Toy model. When there is a call to the save method that ToySerializer inherits from the serializers.Serializer superclass, the overridden create and update methods define how to create a new instance or update an existing instance. In fact, these methods must be implemented in our class because they only raise a NotImplementedError exception in their base declaration in the serializers.Serializer superclass.

The create method receives the validated data in the validated_data argument. The code creates and returns a new Toy instance based on the received validated data. The update method receives an existing Toy instance that is being updated and the new validated data in the instance and validated_data arguments. The code updates the values for the attributes of the instance with the updated attribute values retrieved from the validated data. Finally, the code calls the save method for the updated Toy instance and returns the updated and saved instance.

We designed a RESTful Web Service to interact with a simple SQLite database and perform CRUD operations with toys. We defined the requirements for our web service and understood the tasks performed by each HTTP method and different scope.

You read an excerpt from the book Django RESTful Web Services, written by Gaston C. Hillar. This book helps developers build complex RESTful APIs from scratch with Django and the Django REST Framework. The code bundle for the article is hosted on GitHub.


Introduction to ASP.NET Core Web API

Packt
07 Mar 2018
13 min read
In this article by Mithun Pattankar and Malendra Hurbuns, the authors of the book Mastering ASP.NET Web API, we will start with a quick recap of MVC. We will be looking at the following topics:

Quick recap of the MVC framework
Why Web APIs were incepted and their evolution
Introduction to .NET Core
Overview of the ASP.NET Core architecture

Quick recap of the MVC framework

Model-View-Controller (MVC) is a powerful and elegant way of separating concerns within an application and applies itself extremely well to web applications. With ASP.NET MVC, it's translated roughly as follows:

Models (M): These are the classes that represent the domain you are interested in. These domain objects often encapsulate data stored in a database as well as code that manipulates the data and enforces domain-specific business logic. With ASP.NET MVC, this is most likely a Data Access Layer of some kind, using a tool like Entity Framework or NHibernate, or classic ADO.NET.

View (V): This is a template to dynamically generate HTML.

Controller (C): This is a special class that manages the relationship between the View and the Model. It responds to user input, talks to the Model, and decides which view to render (if any). In ASP.NET MVC, this class is conventionally denoted by the suffix Controller.

Why Web APIs were incepted and their evolution

Looking back to the days when ASP.NET ASMX-based XML web services were widely used for building service-oriented applications, they were the easiest way to create SOAP-based services that could be used by both .NET and non-.NET applications, but they were available only over HTTP. Around 2006, Microsoft released Windows Communication Foundation (WCF). WCF was, and still is, a powerful technology for building SOA-based applications, and it was a giant leap in the Microsoft .NET world. WCF was flexible enough to be configured as an HTTP service, a Remoting service, a TCP service, and so on. Using WCF Contracts, we could keep the entire business logic code base the same and expose the service as HTTP-based or non-HTTP-based, via SOAP or non-SOAP.

Until 2010, ASMX-based XML web services and WCF services were widely used in client-server applications; in fact, everything was running smoothly. But developers in both the .NET and non-.NET communities started to feel the need for a completely new SOA technology for client-server applications. Some of the reasons behind this were as follows:

With applications in production, the amount of data exchanged during communication started to explode, and transferring it over the network was bandwidth consuming. SOAP, although relatively lightweight, started to show signs of payload growth; a few KB of SOAP packets were becoming a few MB of data transfer.

Consuming SOAP services in applications led to huge application sizes because of WSDL and proxy generation. This was even worse when they were used in web applications.

Any change to a SOAP service meant repeating the proxy generation to consume it. This wasn't an easy task for developers.

JavaScript-based web frameworks were being released and gaining ground for a much simpler way of web development, and consuming SOAP-based services was not an optimal fit.

Hand-held devices such as tablets and smartphones were becoming popular. They had more focused applications and needed a very lightweight service-oriented approach.

Browser-based Single Page Applications (SPA) were gaining ground very rapidly, and SOAP-based services were quite heavy for these SPAs.
Microsoft released REST-based WCF components, which could be configured to respond in JSON or XML, but even then it was WCF, a heavyweight technology to use.

Applications were no longer just large enterprise services; there was a need for more focused, lightweight services that could be up and running in a few days and were much easier to use.

Any developer who had seen the evolving nature of SOA-based technologies like ASMX, WCF, or anything SOAP-based felt the need for much lighter, HTTP-based services. HTTP-only, JSON-compatible, POCO-based lightweight services were the need of the hour, and the concept of the Web API started gaining momentum.

What is a Web API?

A Web API is a programmatic interface to a system that is accessed via standard HTTP methods and headers. A Web API can be accessed by a variety of HTTP clients, including browsers and mobile devices. For a Web API to be a successful HTTP-based service, it needed a strong web infrastructure for hosting, caching, concurrency, logging, security, and so on. One of the best web infrastructures was none other than ASP.NET. ASP.NET, either in the form of Web Forms or MVC, was widely adopted, so this solid, mature web infrastructure could be extended as Web API.

Microsoft responded to community needs by creating ASP.NET Web API: a super-simple yet very powerful framework for building HTTP-only, JSON-by-default web services without all the fuss of WCF. ASP.NET Web API can be used to build REST-based services in a matter of minutes and can easily be consumed by any frontend technology. It used IIS (mostly) for hosting, caching, concurrency, and similar features, and it became quite popular. It was launched in 2012 with the most basic needs for HTTP-based services, like convention-based routing and HTTP request and response messages. Later, Microsoft released the much bigger and better ASP.NET Web API 2 along with ASP.NET MVC 5 in Visual Studio 2013. ASP.NET Web API 2 evolved at a much faster pace with the following features.

Installed via NuGet

Installing Web API 2 was made simpler by using NuGet: either create an empty ASP.NET or MVC project and then run this command in the NuGet Package Manager Console:

Install-Package Microsoft.AspNet.WebApi

Attribute Routing

The initial release of Web API was based on convention-based routing, meaning we define one or more route templates and work around them. It's simple, without much fuss, as the routing logic lives in a single place and is applied across all controllers. But real-world applications are more complicated, with resources (controllers/actions) having child resources, such as customers having orders, books having authors, and so on. In such cases, convention-based routing is not scalable. Web API 2 introduced a new concept of Attribute Routing, which uses attributes in programming languages to define routes. One straightforward advantage is that the developer has full control over how the URIs for the Web API are formed. Here is a quick snippet of Attribute Routing:

[Route("customers/{customerId}/orders")]
public IEnumerable<Order> GetOrdersByCustomer(int customerId) { ... }

For more understanding on this, read Attribute Routing in ASP.NET Web API 2 (https://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2).

OWIN self-host

ASP.NET Web API lives on the ASP.NET framework, which may lead you to think that it can be hosted on IIS only. Web API 2 came with a new hosting package:

Microsoft.AspNet.WebApi.OwinSelfHost

With this package, it can be self-hosted outside IIS using OWIN/Katana.
CORS (Cross-Origin Resource Sharing)

For any Web API, developed with either .NET or non-.NET technologies and meant to be used across different web frameworks, enabling CORS is a must. A must-read on CORS and ASP.NET Web API 2: https://www.asp.net/web-api/overview/security/enabling-cross-origin-requests-in-web-api.

IHttpActionResult and Web API OData improvements are a few other notable features which helped Web API 2 evolve into a strong technology for developing HTTP-based services. ASP.NET Web API 2 has become more powerful over the years with C# language improvements like asynchronous programming using async/await, LINQ, Entity Framework integration, dependency injection with DI frameworks, and so on.

ASP.NET into the open source world

Every technology has to evolve with growing needs and advancements in the hardware, network, and software industries, and ASP.NET Web API is no exception. Some of the evolution that ASP.NET Web API needed to undergo, from the perspectives of the developer community, enterprises, and end users, is as follows:

ASP.NET MVC and Web API are both part of the ASP.NET stack, but their implementations and code bases are different. A unified code base reduces the burden of maintaining them.

It's known that Web APIs are consumed by various clients, like web applications, native apps, hybrid apps, and desktop applications, using different technologies (.NET or non-.NET). But how about developing Web APIs in a cross-platform way, without always relying on the Windows OS or the Visual Studio IDE?

Open sourcing the ASP.NET stack allows it to be adopted on a much bigger scale, and end users benefit from open source innovations.

We saw why Web APIs were incepted, how they evolved into a powerful HTTP-based service technology, and some evolution that was still required. With these thoughts, Microsoft made an entry into the world of open source by launching .NET Core and ASP.NET Core 1.0.

What is .NET Core?

.NET Core is a cross-platform, free and open-source managed software framework similar to the .NET Framework. It consists of CoreCLR, a complete cross-platform runtime implementation of the CLR. .NET Core 1.0 was released on 27 June 2016 along with Visual Studio 2015 Update 3, which enables .NET Core development. In much simpler terms, .NET Core applications can be developed, tested, and deployed across platforms such as Windows, Linux flavours, and macOS systems. With the help of .NET Core, we don't really need Windows OS, and in particular the Visual Studio IDE, to develop ASP.NET web applications, command-line apps, libraries, and UWP apps.

In short, let's understand the .NET Core components:

CoreCLR: A virtual machine that manages the execution of .NET programs. CoreCLR means Core Common Language Runtime; it includes the garbage collector, the JIT compiler, the base .NET data types, and many low-level classes.

CoreFX: The .NET Core foundational libraries, such as classes for collections, file systems, console, XML, async, and many others.

CoreRT: The .NET Core runtime optimized for AOT (ahead-of-time compilation) scenarios, with the accompanying .NET Native compiler toolchain. Its main responsibility is to do native compilation of code written in any of our favorite .NET programming languages.

.NET Core shares a subset of the original .NET framework, plus it comes with its own set of APIs that are not part of the .NET framework. This results in some shared APIs that can be used by both .NET Core and the .NET framework. A .NET Core application can easily work on the existing .NET Framework, but not vice versa.
.NET Core provides a CLI (command-line interface) as an execution entry point for operating systems and provides developer services like compilation and package management.

The following are interesting points to know about .NET Core:

.NET Core can be installed cross-platform on Windows, Linux, and macOS. It can be used in device, cloud, and embedded/IoT scenarios.

The Visual Studio IDE is not mandatory to work with .NET Core, but when working on Windows OS we can leverage existing IDE knowledge.

.NET Core is modular, meaning that instead of assemblies, developers deal with NuGet packages.

.NET Core relies on its package manager to receive updates, because a cross-platform technology can't rely on Windows Update.

To learn .NET Core, we just need a shell, a text editor, and its runtime installed.

.NET Core comes with flexible deployment. It can be included in your app or installed side-by-side, user-wide or machine-wide.

.NET Core apps can also be self-hosted/run as standalone apps.

.NET Core supports four cross-platform scenarios: ASP.NET Core web apps, command-line apps, libraries, and Universal Windows Platform apps. It does not implement Windows Forms or WPF, which render the standard GUI for desktop software on Windows. At present, only the C# programming language can be used to write .NET Core apps; F# and VB support are on the way. We will primarily focus on ASP.NET Core web apps, which include MVC and Web API. CLI apps and libraries will be covered briefly.

What is ASP.NET Core?

ASP.NET Core is a new open-source and cross-platform framework for building modern cloud-based web applications using .NET. ASP.NET Core is completely open source; you can download it from GitHub. It's cross-platform, meaning you can develop ASP.NET Core apps on Linux/macOS and, of course, on Windows OS.

ASP.NET was first released almost 15 years back with the .NET framework. Since then, it has been adopted by millions of developers for large and small applications, and it has evolved with many capabilities. With .NET Core being cross-platform, ASP.NET took a huge leap beyond the boundaries of the Windows OS environment for the development and deployment of web applications.

ASP.NET Core overview

The ASP.NET Core architecture overview (shown as a diagram in the original article) provides the following insights:

ASP.NET Core runs both on the full .NET framework and on .NET Core.

ASP.NET Core applications using the full .NET framework can only be developed and deployed on Windows OS/Server.

When using .NET Core, applications can be developed and deployed on the platform of choice. The logos of Windows, Linux, and macOS indicate that you can work with ASP.NET Core on any of them.

ASP.NET Core on a non-Windows machine uses the .NET Core libraries to run the applications. It's obvious you won't have all the full .NET libraries, but most of them are available.

Developers working on ASP.NET Core can easily switch to working on any machine, not confined to the Visual Studio 2015 IDE.

ASP.NET Core can run with different versions of .NET Core.

ASP.NET Core has many more foundational improvements apart from being cross-platform; we gain the following advantages by using ASP.NET Core:

Totally modular: ASP.NET Core takes a totally modular approach to application development; every component needed to build an application is well factored into NuGet packages. Only add the required packages through NuGet to keep the overall application lightweight. ASP.NET Core is no longer based on System.Web.dll.
Choose your editors and tools: The Visual Studio IDE was used to develop ASP.NET applications on a Windows OS box; now that we have moved beyond the Windows world, we need IDEs, editors, and tools for developing ASP.NET applications on Linux/macOS. Microsoft developed a powerful, lightweight code editor for almost any type of web application, called Visual Studio Code. ASP.NET Core is such a framework that we don't need Visual Studio IDE or Visual Studio Code to develop applications; we can also use code editors like Sublime or Vim. To work with C# code in these editors, install and use the OmniSharp plugin. OmniSharp is a set of tooling, editor integrations, and libraries that together create an ecosystem that allows you to have a great programming experience no matter what your editor and operating system of choice may be.

Integration with modern web frameworks: ASP.NET Core has powerful, seamless integration with modern web frameworks like Angular, Ember, NodeJS, and Bootstrap. Using Bower and npm, we can work with these modern web frameworks.

Cloud ready: ASP.NET Core apps are cloud ready thanks to the configuration system; an app transitions seamlessly from on-premises to the cloud.

Built-in dependency injection.

Can be hosted on IIS, self-hosted in your own process, or hosted on nginx.

New lightweight and modular HTTP request pipeline.

Unified code base for Web UI and Web APIs. We will see more on this when we explore the anatomy of an ASP.NET Core application.

Summary

In this article, we covered the MVC framework and introduced .NET Core and its architecture.