
How-To Tutorials - Server-Side Web Development


Making a simple Web based SSH client using Node.js and Socket.io

Jakub Mandula
28 Oct 2015
7 min read
If you are reading this post, you probably know what SSH stands for. But just for the sake of formality, here we go: SSH stands for Secure Shell. It is a network protocol for secure access to the shell on a remote computer. You can do much more over SSH besides commanding your computer. Here you can find further information: http://en.wikipedia.org/wiki/Secure_Shell.

In this post, we are going to create a very simple web terminal. And when I say simple, I mean it! However much you like colors, it will not support them, because the parsing is just beyond the scope of this post. If you want a good client-side terminal library, use term.js. It is made by the same guy who wrote pty.js, which we will be using. It is able to handle pretty much all key events and COLORS!!!!

Installation

I am going to assume you already have your node and npm installed. First we will install all of the npm packages we will be using:

```
npm install express pty.js socket.io
```

Express is a super cool web framework for Node. We are going to use it to serve our static files. I know it is a bit overkill, but I like Express. pty.js is where the magic will be happening. It forks processes into virtual pseudo terminals and provides bindings for communication. Socket.io is what we will use to transmit the data from the web browser to the server and back. It uses modern WebSockets, but provides fallbacks for backward compatibility. Anytime you want to create a real-time application, Socket.io is the way to go.

Planning

First things first, we need to think about what we want the program to do. We want the program to create an instance of a shell on the server (remote machine) and send all of the text to the browser. Back in the browser, we want to capture any user events and send them back to the server shell.

The WebSSH server

This is the code that will power the terminal forwarding. Open a new file named server.js and start by importing all of the libraries:

```javascript
var express = require('express');
var https = require('https');
var http = require('http');
var fs = require('fs');
var pty = require('pty.js');
```

Set up express:

```javascript
// Setup the express app
var app = express();
// Static file serving
app.use("/", express.static("./"));
```

Next we are going to create the server.

```javascript
// Creating an HTTP server
var server = http.createServer(app).listen(8080);
```

If you want to use HTTPS, which you probably will, you need to generate a key and certificate and import them as shown.

```javascript
var options = {
  key: fs.readFileSync('keys/key.pem'),
  cert: fs.readFileSync('keys/cert.pem')
};
```

Then use the options object to create the actual server. Notice that this time we are using the https package.

```javascript
// Create an HTTPS server
var server = https.createServer(options, app).listen(8080);
```

CAUTION: Even if you use HTTPS, do not use this example program on the Internet. You are not authenticating the client in any way, and are thus providing a free open gate to your computer. Please make sure you only use this on your private network, protected by a firewall!

Now bind the socket.io instance to the server:

```javascript
var io = require('socket.io')(server);
```

After this, we can set up the place where the magic happens.
```javascript
// When a new socket connects
io.on('connection', function(socket){
  // Create terminal
  var term = pty.spawn('sh', [], {
    name: 'xterm-color',
    cols: 80,
    rows: 30,
    cwd: process.env.HOME,
    env: process.env
  });

  // Listen on the terminal for output and send it to the client
  term.on('data', function(data){
    socket.emit('output', data);
  });

  // Listen on the client and send any input to the terminal
  socket.on('input', function(data){
    term.write(data);
  });

  // When the socket disconnects, destroy the terminal
  socket.on("disconnect", function(){
    term.destroy();
    console.log("bye");
  });
});
```

In this block, all we do is wait for new connections. When we get one, we spawn a new virtual terminal and start to pump the data from the terminal to the socket and vice versa. After the socket disconnects, we make sure to destroy the terminal.

If you have noticed, I am using the simple sh shell. I did this mainly because I don't have a fancy prompt on it. Because we are not adding any parsing logic, my bash prompt would show up like this:

]0;piman@mothership: ~ _[01;32m✓ [33mpiman_[0m ↣ _[1;34m[~]_[37m$[0m

Eww! But you may use any shell you like. This is all that we need on the server side. Save the file and close it.

Client side

The client side is going to be just a very simple HTML file. Start with a very simple HTML markup:

```html
<!doctype html>
<html>
<head>
  <title>SSH Client</title>
  <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/socket.io/1.3.5/socket.io.min.js"></script>
  <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
  <style>
    body {
      margin: 0;
      padding: 0;
    }
    .terminal {
      font-family: monospace;
      color: white;
      background: black;
    }
  </style>
</head>
<body>
  <h1>SSH</h1>
  <div class="terminal">
  </div>
  <script>
  </script>
</body>
</html>
```

I am downloading the client-side libraries jQuery and socket.io from cdnjs. All of the client code will be written in the script tag below the terminal div. Surprisingly, the code is very simple:

```javascript
// Connect to the socket.io server
var socket = io.connect('http://localhost:8080');

// Wait for data from the server
socket.on('output', function (data) {
  // Insert some line breaks where they belong
  data = data.replace("\n", "<br>");
  data = data.replace("\r", "<br>");

  // Append the data to our terminal
  $('.terminal').append(data);
});

// Listen for user input and pass it to the server
$(document).on("keypress", function(e){
  var char = String.fromCharCode(e.which);
  socket.emit("input", char);
});
```

Notice that we do not have to explicitly append the text the client types to the terminal, mainly because the server echoes it back anyway.

Now we are done! Run the server and open up the URL in your browser.

```
node server.js
```

You should see a small prompt and be able to start typing commands. You can now explore your machine from the browser! Remember that our web terminal does not support Tab, Ctrl, Backspace or Esc characters. Implementing this is your homework; a possible starting point is sketched below.
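As a hint for that homework, here is a minimal sketch of one way the missing keys could be forwarded. This handler is an assumption about one possible approach, not code from the tutorial: keypress does not fire for most control keys, so a separate keydown handler can translate them into the raw control characters the shell expects.

```javascript
// Hypothetical homework sketch: forward special keys that 'keypress' misses.
// The key-code-to-control-character mappings are standard ASCII; the handler
// itself is an illustrative assumption.
$(document).on("keydown", function(e){
  var codes = {
    8: "\x7f",   // Backspace -> DEL
    9: "\t",     // Tab
    27: "\x1b"   // Esc
  };

  if (e.ctrlKey && e.which >= 65 && e.which <= 90) {
    // Ctrl+A..Ctrl+Z map to the control characters 0x01..0x1a
    socket.emit("input", String.fromCharCode(e.which - 64));
    e.preventDefault();
  } else if (codes[e.which]) {
    socket.emit("input", codes[e.which]);
    e.preventDefault();
  }
});
```

With something like this in place, the server-side shell receives the raw control characters and pty.js handles them just as a real terminal would.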
Conclusion

I hope you found this tutorial useful. You can apply the knowledge in any real-time application where communication with the server is critical. All the code is available here. Please note that if you'd like to use a browser terminal, I strongly recommend term.js. It supports colors and styles and all the basic keys, including Tab, Backspace and so on. I use it in my PiDashboard project. It is much cleaner and less tedious than the example I have here. I can't wait to see what amazing apps you will invent based on this.

About the author

Jakub Mandula is a student interested in anything to do with technology, computers, mathematics or science.


Securing and Authenticating Web API

Packt
21 Oct 2015
9 min read
In this article by Rajesh Gunasundaram, author of ASP.NET Web API Security Essentials, we will cover how to secure a Web API using forms authentication and Windows authentication. You will also get to learn the advantages and disadvantages of using forms and Windows authentication in the Web API. In this article, we will cover the following topics:

- The working of forms authentication
- Implementing forms authentication in the Web API
- Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism
- Configuring Windows authentication
- Enabling Windows authentication in Katana
- Discussing Hawk authentication

(For more resources related to this topic, see here.)

The working of forms authentication

In forms authentication, the user credentials are submitted to the server using HTML forms. This can be used in the ASP.NET Web API only if it is consumed from a web application. Forms authentication is built on ASP.NET and uses the ASP.NET membership provider to manage user accounts. Forms authentication requires a browser client to pass the user credentials to the server. It sends the user credentials in the request and uses HTTP cookies for the authentication.

Let's list out the process of forms authentication step by step:

1. The browser tries to access a restricted action that requires an authenticated request.
2. If the browser sends an unauthenticated request, the server responds with an HTTP status 302 Found and triggers the URL redirection to the login page.
3. To send the authenticated request, the user enters the username and password and submits the form.
4. If the credentials are valid, the server responds with an HTTP 302 status code that initiates the browser to redirect the page to the originally requested URI, with the authentication cookie in the response.
5. Any request from the browser will now include the authentication cookie, and the server will grant access to any restricted resource.

The following image illustrates the workflow of forms authentication:

Fig 1 – Illustrates the workflow of forms authentication

Implementing forms authentication in the Web API

To send the credentials to the server, we need an HTML form to submit. Let's use the HTML form or view of an ASP.NET MVC application. The steps to implement forms authentication in an ASP.NET MVC application are as follows:

1. Create New Project from the Start page in Visual Studio.
2. Select the Visual C# installed template named Web.
3. Choose ASP.NET Web Application from the middle panel.
4. Name the project Chapter06.FormsAuthentication and click OK.

Fig 2 – We have named the ASP.NET Web Application as Chapter06.FormsAuthentication

5. Select the MVC template in the New ASP.NET Project dialog.
6. Tick Web API under Add folders and core references and press OK, leaving Authentication set to Individual User Accounts.
Fig 3 – Select MVC template and check Web API in add folders and core references

7. In the Models folder, add a class named Contact.cs with the following code:

```csharp
namespace Chapter06.FormsAuthentication.Models
{
    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Mobile { get; set; }
    }
}
```

8. Add a Web API controller named ContactsController with the following code snippet:

```csharp
namespace Chapter06.FormsAuthentication.Api
{
    public class ContactsController : ApiController
    {
        IEnumerable<Contact> contacts = new List<Contact>
        {
            new Contact { Id = 1, Name = "Steve", Email = "steve@gmail.com", Mobile = "+1(234)35434" },
            new Contact { Id = 2, Name = "Matt", Email = "matt@gmail.com", Mobile = "+1(234)5654" },
            new Contact { Id = 3, Name = "Mark", Email = "mark@gmail.com", Mobile = "+1(234)56789" }
        };

        [Authorize]
        // GET: api/Contacts
        public IEnumerable<Contact> Get()
        {
            return contacts;
        }
    }
}
```

As you can see in the preceding code, we decorated the Get() action in ContactsController with the [Authorize] attribute. So, this Web API action can only be accessed by an authenticated request. An unauthenticated request to this action will make the browser redirect to the login page and enable the user to either register or log in. Once logged in, any request that tries to access this action will be allowed, as it is authenticated. This is because the browser automatically sends the session cookie along with the request, and forms authentication uses this cookie to authenticate the request. It is very important to secure the website using SSL, as forms authentication sends unencrypted credentials.
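To make the cookie behaviour concrete, here is a minimal sketch of how a page in the same web application might call the protected action from JavaScript. The endpoint comes from the controller above; the use of fetch and the response handling are illustrative assumptions rather than code from the book:

```javascript
// Hypothetical client-side call to the [Authorize]-protected action.
// 'same-origin' makes the browser attach the forms authentication cookie,
// so the request counts as authenticated once the user has logged in.
fetch('/api/Contacts', { credentials: 'same-origin' })
  .then(function (response) {
    if (response.status === 401 || response.redirected) {
      // Without a valid cookie, forms authentication redirects to the login page
      console.log('Not authenticated');
      return null;
    }
    return response.json();
  })
  .then(function (contacts) {
    if (contacts) {
      console.log(contacts); // the three contacts returned by Get()
    }
  });
```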
Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism

First, let's see the advantages of Windows authentication. Windows authentication is built into Internet Information Services (IIS). It doesn't send the user credentials along with the request. This authentication mechanism is best suited for intranet applications and doesn't need the user to enter their credentials.

However, with all these advantages, there are a few disadvantages in the Windows authentication mechanism. It requires Kerberos (which works based on tickets) or NTLM, a Microsoft security protocol, to be supported by the client. The client's PC must be in an Active Directory domain. Windows authentication is not suitable for Internet applications, as the client may not necessarily be on the same domain.

Configuring Windows authentication

Let's implement Windows authentication in an ASP.NET MVC application, as follows:

1. Create New Project from the Start page in Visual Studio.
2. Select the Visual C# installed template named Web.
3. Choose ASP.NET Web Application from the middle panel.
4. Name the project Chapter06.WindowsAuthentication and click OK.

Fig 4 – We have named the ASP.NET Web Application as Chapter06.WindowsAuthentication

5. Change the Authentication mode to Windows Authentication.

Fig 5 – Select Windows Authentication in Change Authentication window

6. Select the MVC template in the New ASP.NET Project dialog.
7. Tick Web API under Add folders and core references and click OK.

Fig 6 – Select MVC template and check Web API in add folders and core references

8. Under the Models folder, add a class named Contact.cs with the following code:

```csharp
namespace Chapter06.WindowsAuthentication.Models
{
    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Mobile { get; set; }
    }
}
```

9. Add a Web API controller named ContactsController with the following code:

```csharp
namespace Chapter06.WindowsAuthentication.Api
{
    public class ContactsController : ApiController
    {
        IEnumerable<Contact> contacts = new List<Contact>
        {
            new Contact { Id = 1, Name = "Steve", Email = "steve@gmail.com", Mobile = "+1(234)35434" },
            new Contact { Id = 2, Name = "Matt", Email = "matt@gmail.com", Mobile = "+1(234)5654" },
            new Contact { Id = 3, Name = "Mark", Email = "mark@gmail.com", Mobile = "+1(234)56789" }
        };

        [Authorize]
        // GET: api/Contacts
        public IEnumerable<Contact> Get()
        {
            return contacts;
        }
    }
}
```

The Get() action in ContactsController is decorated with the [Authorize] attribute. However, in Windows authentication, any request is considered an authenticated request if the client is on the same domain. So no explicit login process is required to send an authenticated request that calls the Get() action. Note that the Windows authentication is configured in the Web.config file:

```xml
<system.web>
  <authentication mode="Windows" />
</system.web>
```

Enabling Windows authentication in Katana

The following steps will create a console application and enable Windows authentication in Katana:

1. Create New Project from the Start page in Visual Studio.
2. Select the Visual C# installed template named Windows Desktop.
3. Select Console Application from the middle panel.
4. Name the project Chapter06.WindowsAuthenticationKatana and click OK.

Fig 7 – We have named the Console Application as Chapter06.WindowsAuthenticationKatana

5. Install the NuGet package named Microsoft.Owin.SelfHost from the NuGet Package Manager.

Fig 8 – Install NuGet Package named Microsoft.Owin.SelfHost

6. Add a Startup class with the following code snippet:

```csharp
namespace Chapter06.WindowsAuthenticationKatana
{
    class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var listener = (HttpListener)app.Properties["System.Net.HttpListener"];
            listener.AuthenticationSchemes = AuthenticationSchemes.IntegratedWindowsAuthentication;
            app.Run(context =>
            {
                context.Response.ContentType = "text/plain";
                return context.Response.WriteAsync("Hello Packt Readers!");
            });
        }
    }
}
```

7. Add the following code to the Main function in Program.cs:

```csharp
using (WebApp.Start<Startup>("http://localhost:8001"))
{
    Console.WriteLine("Press any Key to quit Web App.");
    Console.ReadKey();
}
```

8. Now run the application and open http://localhost:8001/ in the browser:

Fig 9 – Open the Web App in a browser

If you capture the request using Fiddler, you will notice an Authorization Negotiate entry in the header of the request. Try calling http://localhost:8001/ in Fiddler and you will get a 401 Unauthorized response with WWW-Authenticate headers that indicate that the server attaches a Negotiate protocol consuming either Kerberos or NTLM, as follows:

```
HTTP/1.1 401 Unauthorized
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/8.0
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
Date: Tue, 01 Sep 2015 19:35:51 IST
Content-Length: 6062
Proxy-Support: Session-Based-Authentication
```

Discussing Hawk authentication

Hawk authentication is a message authentication code-based HTTP authentication scheme that facilitates the partial cryptographic verification of HTTP messages. Hawk authentication requires a symmetric key to be shared between the client and server. Instead of sending the username and password to the server in order to authenticate the request, Hawk authentication uses these credentials to generate a message authentication code that is passed to the server in the request for authentication. Hawk authentication is mainly implemented in scenarios where you need to pass the username and password via an unsecured layer and no SSL is implemented on the server. In such cases, Hawk authentication protects the username and password and passes the message authentication code instead. For example, if you are building a small product that has control over both the server and client, and implementing SSL is too expensive for such a small project, then Hawk is the best option to secure the communication between your server and client.
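To illustrate the core idea (not the actual Hawk specification, which also covers timestamps, nonces, and a normalized request string), here is a minimal sketch of MAC-based request signing in Node.js. All names in it are hypothetical:

```javascript
var crypto = require('crypto');

// Hypothetical shared symmetric key, exchanged out of band between client and server.
var sharedKey = 'a-secret-shared-out-of-band';

// Compute a message authentication code over the parts of the request we want
// to protect. Real Hawk also includes a timestamp and a nonce to prevent
// replay attacks; this sketch only demonstrates the MAC principle.
function sign(method, path, body) {
  return crypto.createHmac('sha256', sharedKey)
               .update(method + '\n' + path + '\n' + body)
               .digest('base64');
}

var mac = sign('GET', '/api/Contacts', '');
// The client sends the MAC instead of the password, for example in a header:
//   Authorization: Hawk id="user1", mac="<mac>"
// The server recomputes the MAC with the same key and compares the results.
console.log(mac);
```

The password itself never travels over the wire; an eavesdropper only sees the MAC, which is useless without the shared key.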
Summary

Voila! We just secured our Web API using forms- and Windows-based authentication. In this article, you learned how forms authentication works and how it is implemented in the Web API. You also learned about configuring Windows authentication and got to know the advantages and disadvantages of using Windows authentication. Then you learned about implementing the Windows authentication mechanism in Katana. Finally, we had an introduction to Hawk authentication and the scenarios in which it is used.

Resources for Article:

Further resources on this subject:
- Working with ASP.NET Web API [article]
- Creating an Application using ASP.NET MVC, AngularJS and ServiceStack [article]
- Enhancements to ASP.NET [article]


Gamification with Moodle LMS

Packt
19 Oct 2015
11 min read
This article by Natalie Denmeade, author of the book Gamification with Moodle, describes how teachers can use gamification design in their course development within the Moodle Learning Management System (LMS) to increase the motivation and engagement of learners.

(For more resources related to this topic, see here.)

Gamification is a design process that re-frames goals to be more appealing and achievable by using game design principles. The goal of this process is to keep learners engaged and motivated in a way that is not always present in traditional courses. When implemented in elegant solutions, learners may be unaware of the subtle game elements being used. A gamification strategy can be considered successful if learners are more engaged and feel challenged and confident to keep progressing, which has implications for the way teachers consider their course evaluation processes. It is important to note that gamification in education is more about how the person feels at certain points in their learning journey than about the end product, which may or may not look like a game.

Gamification and Moodle

After following the tutorials in this book, teachers will gain the basic skills to get started applying gamification design techniques in their Moodle courses. They can take learners on a journey of risk, choice, surprise, delight, and transformation. Taking an activity and reframing it to be more appealing and achievable sounds like the job description of any teacher or coach! Therefore, many teachers are already doing this. Understanding games and play better can help teachers be more effective in using a wider range of game elements to aid retention and completions in their courses.

In this book you will find hints and tips on how to apply proven strategies to online course development, including the research into a growth mindset from Carol Dweck in her book Mindset. You will see how the game elements used by Foursquare (badges), Twitter (likes), and LinkedIn (progress bar) can also be applied to Moodle course design. In addition, you will use the core features available in Moodle that were designed to encourage learner participation as they collaborate, tag, share, vote, network, and generate learning content for each other. Finally, you will explore new features and plugins that offer dozens of ways teachers can use game elements in Moodle, such as badges, labels, rubrics, group assignments, custom grading scales, forums, and conditional activities.

A benefit of using Moodle as a gamification LMS is that it was developed on social constructivist principles. As these are learner-centric principles, it is easy to use common Moodle features to apply gamification through the implementation of game components, mechanics, and dynamics. These have been described by Kevin Werbach (in the Coursera MOOC on Gamification) as:

- Game dynamics are the grammar (the hidden elements): constraints, emotions, narrative, progression, relationships
- Game mechanics are the verbs (the action is driven forward by): challenges, chance, competition/cooperation, feedback, resource acquisition, rewards, transactions, turns, win states
- Game components are the nouns: achievements, avatars, badges, boss fights, collections, combat, content unlocking, gifting, leaderboards, levels, points, quests, teams, virtual goods

Most of these game elements are not new ideas to teachers. It could be argued that school is already gamified through the use of grades and feedback.
In fact, it would be impossible to find a classroom that is not using some game elements. This book will help you identify which elements will be most effective in your current context. Teachers are encouraged to start with a few and gradually expand their repertoire. As with professional game design, just using game elements will not ensure learners are motivated and engaged. The measure of success of a gamification strategy is that learners continue to build resilience and autonomy in their own learning.

When implemented well, the potential benefits of using a gamification design process in Moodle are to:

- Provide a manageable set of subtasks and tasks by hiding and revealing content
- Make assessment criteria visible, predictable, and in plain English using marking guidelines and rubrics
- Increase ownership of learning paths through choice and activity restrictions
- Build individual and group identity through workplace simulations and role play
- Offer freedom to fail and try again without negative repercussions
- Increase enjoyment of both teacher and learners

When teachers follow the step-by-step guide provided in this book, they will create a basic Moodle course that acts as a flexible framework ready for learning content. This approach is ideal for busy teachers who want to respond to the changing needs and situations in the classroom. The dynamic approach keeps teachers in control of adding and changing content without involving a technology support team.

Onboarding tips

By using focussed examples, the book describes how to use Moodle to implement an activity loop that identifies a desired behaviour and wraps motivations and feedback around that action. For example, a desired action may be for each learner to update their Moodle profile information with their interests and an avatar. Various motivational strategies could be put in place to prompt (or force) the learners to complete this task, including:

- Ask learners to share their avatars, with a link to their profile, in a forum with ratings. Everyone else is doing it, and they will feel left out if they don't get a like or a comment (creating a social norm). They might get rated as having the best avatar.
- Update the forum type so that learners can't see other avatars until they make a post.
- Add a theme (for example, Lego-inspired avatars) so that creating an avatar is a chance to be creative and play. Choosing how they represent themselves in an online space is an opportunity for autonomy.
- Set the conditional release so learners cannot see the next activity until this activity is marked as complete (for example, post at least 3 comments on other avatars).

The value in this process is that learners have started building connections between new classmates. This activity loop is designed to appeal to diverse motivations and achieve multiple goals:

- Encourage learners to create an online persona and choose their level of anonymity
- Invite learners to look at each other's profiles and speed up the process of getting to know each other
- Introduce learners to the idea of forum posting and rating in a low-risk (non-assessable) way
- Take the workload off the teacher to assess each activity directly
- Enforce compliance through software options, which saves admin time and creates an expectation of work standards for learners

Feedback options

Games celebrate small and large successes, and so should Moodle courses. There are a number of ways to do this in Moodle, including simply automating feedback with a label that is revealed once a milestone is reached.
These milestones could be an activity completion, a topic completion, or a level reached in the course total. Feedback can be provided through symbols of the achievement. Learners of all ages are highly motivated by this. Nearly all human cultures use symbols, icons, medals, and badges to indicate status and achievements, such as a black belt in Karate, the Victoria Cross and Order of Australia Medals, OBEs, sporting trophies, Gold Logies, feathers, and tattoos.

Symbols of achievement can be provided through the use of open badges. Moodle offers a simple way to issue badges in line with the Open Badges Infrastructure (OBI) standard. The learner can take full ownership of a badge when they export it to their online backpack. Higher education institutes are finding evidence that open badges are a highly effective way to increase motivation for mature learners. Kaplan University found that the implementation of badges resulted in a 17 percent increase in student engagement. As well as improving learners' willingness to complete harder tasks, grades increased by up to 9 percent. Class attendance and discussion board posts increased over the non-badged counterparts.

Using open badges as a motivation strategy enables feedback to be regularly provided along the way from peers, automated reporting, and the teacher. For advanced Moodlers, the book describes how rubrics can be used for "levelling up" and how the Moodle gradebook can be configured as an exponential point-scoring system to indicate progress.

Social game elements

Implementing social game elements is a powerful way to increase motivation and participation. A gamification experiment with thousands of MOOC participants measured the participation of learners in three groups: "plain", "game", and "social". Students in the game condition had a 22.5 percent higher test score in the final test compared to students in the plain condition. Students in the social condition showed an even stronger increase of almost 40 percent compared to students in the plain condition (see A Playful Game Changer: Fostering Student Retention in Online Education with Social Gamification, Krause et al., 2014).

Moodle has a number of components that can be used to encourage collaborative learning. The online gaming world has created spaces where players communicate outside of the game in forums, wikis, and YouTube channels, and where people write cheat guides about the games and are happy to share their knowledge with beginners. In Moodle, we can imitate these collaborative spaces gamers use to teach each other, and make the most of the natural leaders and influencers in the class. Moodle activities can be used to encourage communication between learners and allow delegation and skill-sharing. For example, the teacher may quickly explain a task and train the most experienced in the group to perform it, and then showcase their work to others as an example. A learner could create blog posts that become an online version of an exercise book. The learner chooses the sharing level, so that classmates only, or the whole world, can view what is shared and leave comments. The process of delegating instruction, by connecting leader-learners to lagger-learners in a particular area, allows finish lines to be at different points. Rather than spending the last few weeks marking every learner's individual work, the teacher can now focus their attention on the few people who have lagged behind and need support to meet the deadlines.
It's worth taking the time to learn how to configure a Moodle course. This provides the ability to set up a system that is scalable and adaptable to each learner. The options in Moodle can be used to allow learners to create their own paths within the boundaries set by a teacher. Therefore, rather than creating personalised learning paths for every student, set up a suite of tools for learners to create their own learning paths. Learning how to configure Moodle activities will reduce administration tasks through automatic reports, assessments, and conditional release of activities. The Moodle activities will automatically create data on learner participation and competence to assist in identifying struggling learners. The inbuilt reports available in the Moodle LMS help teachers get to know their learners faster. In addition, the reports create evidence for formative assessment, which saves hours of marking time. Through the release from repetitive tasks, teachers can spend more time on the creative and rewarding aspects of teaching.

Rather than wait for a game design company to create an awesome educational game for a subject area, get started by using the same techniques in your classroom. This creative process is rewarding for both teachers and learners because it can be constantly adapted for their unique needs.

Summary

Moodle provides a flexible gamification platform because teachers are directly in control of modifying and adding a sequence of activities, without having to go through an administrator. Although it may not look as good as a video game (made with an extensive budget), learners will appreciate the effort and personalisation. The gamification framework does require some preparation. However, once implemented, it picks up a momentum of its own, and the teacher has a reduced workload in the long run. Purchase the book and enjoy a journey into gamification in education with Moodle!

Resources for Article:

Further resources on this subject:
- Virtually Everything for Everyone [article]
- Moodle for Online Communities [article]
- State of Play of BuddyPress Themes [article]


How to build a cross-platform desktop application with Node.js and Electron

Mika Turunen
14 Oct 2015
9 min read
Do you want to make a desktop application, but you have only mastered web development so far? Or maybe you feel overwhelmed by all of the different APIs that different desktop platforms have to offer? Or maybe you want to write a beautiful application in HTML5 and JavaScript and have it working on the desktop? Maybe you want to port an existing web application to the desktop? Well, luckily for us, there are a number of alternatives, and we are going to look into Node.js and Electron to help us get our HTML5 and JavaScript running on the desktop side with no hiccups.

What are the different parts in an Electron application

Commonly, all of the different components in Electron are either running in the main process (backend) or the rendering process (frontend). The main process can communicate with different parts of the operating system if there's a need for that, and the rendering process mainly just focuses on showing the content, pretty much like in any HTML5 application you find on the Internet. The processes communicate with each other through IPC (inter-process communication), which in Node.js terms is just a super simple event emitter and nothing else. You can send events and listen for events.

You can get the complete source code for this post from here.

Let's start working on it

You need to have Node.js installed, and you can install it from https://nodejs.org/. Now that you have Node.js installed, you can start focusing on creating the application. First of all, create an empty directory where you will be placing your code.

```
# Open up your favourite terminal, command-line tool or any other alternative, as we'll be running quite a few commands
# Create the directory
mkdir /some/location/that/works/in/your/system
# Go into the directory
cd /some/location/that/works/in/your/system
# Now we need to initialize it for our Electron and Node work
npm init
```

NPM will start asking you questions about the application we are about to make. You can just hit Enter and not answer any of them if you feel like it. We can fill them in manually once we know a bit more about our application. Now we should have a directory structure with the following files in it:

- package.json

And that's it, nothing else. We'll start by creating two new files in your favorite text editor or IDE. The files are (leave the files empty):

- main.js
- index.html

Drop all of the files into the same directory as the package.json for easier handling of everything for now. main.js will be our main process file, which is the connecting layer to the underlying desktop operating system for our Electron application.

At this point we need to install Electron as a dependency for our application, which is really easy. Just write:

```
npm install --save electron-prebuilt
```

Alternatively, if you cloned/downloaded the associated GitHub repository, you can just go into the directory and write:

```
npm install
```

This will install all dependencies from package.json, including electron-prebuilt. Now we have Electron's prebuilt binaries installed as a direct dependency for our application, and we can run our application on our platform. It's wise to manually update the package.json file that the npm init command generated for us. Open up the package.json file and modify the scripts block to look like this (or if it's missing, create it):

```json
"main": "main.js",
"scripts": {
  "start": "electron ."
},
```
The whole package.json file should be roughly something like this (taken from the tutorial repo I linked earlier):

```json
{
  "name": "",
  "version": "1.0.0",
  "description": "",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  },
  "repository": {
  },
  "keywords": [
  ],
  "author": "",
  "license": "MIT",
  "bugs": {
  },
  "homepage": "",
  "dependencies": {
    "electron-prebuilt": "^0.25.3"
  }
}
```

The main property in the file points to main.js, and the scripts section's start property tells it to run the command "electron .", which essentially tells Electron to digest the current directory as an application. Electron hardwires the property main as the main process for the application. This means that main.js is now our main process, just like we wanted.

Main and rendering process

We need to write the main process JavaScript and the rendering process HTML to get our application to start. Let's start with the main process, main.js. You can also find all of the code below in the tutorial repository. The code has been peppered with a good amount of comments to give a deeper understanding of what is going on in the code and what the different parts do in the context of Electron.

```javascript
// Loads Electron specific app that is not commonly available for node or io.js
var app = require("app");

// Inter process communication -- used to communicate from the main process (this)
// to the actual rendering process (index.html) -- not really used in this example
var ipc = require("ipc");

// Loads the Electron specific module for browser handling
var BrowserWindow = require("browser-window");

// Report crashes to our server.
var crashReporter = require("crash-reporter");

// Keep a global reference to the window object; if you don't, the window will
// be closed automatically when the JavaScript object is garbage collected
var mainWindow = null;

// Quit when all windows are closed.
app.on("window-all-closed", function() {
  // OS X specific check
  if (process.platform != "darwin") {
    app.quit();
  }
});

// This event will be called when Electron has finished initialization and is ready for creating browser windows.
app.on("ready", function() {
  crashReporter.start();

  // Create the browser window (where the application's visual parts will be)
  mainWindow = new BrowserWindow({ width: 800, height: 600 });

  // Building the file path to the index.html
  mainWindow.loadUrl("file://" + __dirname + "/index.html");

  // Emitted when the window is closed.
  // The function just dereferences the mainWindow so garbage collection can
  // pick it up
  mainWindow.on("closed", function() {
    mainWindow = null;
  });
});
```

You can now start the application, but it'll just show an empty window, since we have nothing to render in the rendering process. Let's fix that by populating our index.html with some content.

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Hello Tutorial!</title>
  </head>
  <body>
    <h2>Tutorial</h2>
    We are using node.js <script>document.write(process.version)</script>
    and Electron <script>document.write(process.versions["electron"])</script>.
  </body>
</html>
```

Because this is an Electron application, we have access to the Node.js/io.js process and other content relating to the actual Node.js/io.js setup we have going. The line document.write(process.version) is actually a call to the Node.js process. This is one of the great things about Electron: we are essentially bridging the gap between desktop applications and HTML5 applications. Now to run the application:

```
npm start
```
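Before moving on, here is a small sketch of the IPC mechanism mentioned at the top of main.js but left unused in this example. The channel names and payloads are made up for illustration, and the snippet assumes the pre-1.0 API shipped with this tutorial's Electron version (0.25.x), where both processes require the same ipc module:

```javascript
// In main.js (main process): listen for a message from the renderer and answer it.
var ipc = require("ipc");
ipc.on("ping", function(event, message) {
  // 'message' is whatever the renderer sent, here the hypothetical string "hello"
  event.sender.send("pong", "Main process got: " + message);
});
```

```javascript
// In index.html (rendering process): send a message and wait for the reply.
var ipc = require("ipc");
ipc.on("pong", function(reply) {
  console.log(reply);
});
ipc.send("ping", "hello");
```

In later Electron versions the same pattern lives in the ipcMain and ipcRenderer modules, but the event flow is identical.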
There is a huge list of different desktop environment integration possibilities with Electron, and you can read more about them in the Electron documentation at http://electron.atom.io/. Obviously this is still far from a complete application, but this should give you an understanding of how to work with Electron, how it behaves, and what you can do with it. You can start using your favorite JavaScript/CSS frontend framework in the index.html to build a great-looking GUI for your new desktop application, and you can also use all Node.js-specific NPM modules in the backend along with the desktop environment integration. Maybe we'll look into writing a great-looking GUI for our application with some additional desktop environment integration in another post.

Packaging and distributing Electron applications

Applications can be packaged into distributable operating-system-specific containers, for example .exe files, which allow them to run on different hardware. The packaging process is fairly simple and well documented in Electron's documentation; it is out of scope for this post, but worth a look if you want to package your application. To understand more of the application distribution and packaging process, read Electron's official documentation on it here.

Electron and its current use

Electron is still really fresh and right out of GitHub's knowing hands, but it's already been adopted by quite a few companies, and a number of applications have already been built on top of it.

Companies using Electron:

- Slack
- Microsoft
- GitHub

Applications built with Electron or using Electron:

- Visual Studio Code - Microsoft's Visual Studio Code
- Hearthdash - a card tracking application for Hearthstone
- Monu - a process monitoring app
- Kart - a frontend for RetroArch
- Friends - P2P chat powered by the web

Final words on Electron

It's obvious that Electron is still taking its first baby steps, but it's hard to deny the fact that more and more user interfaces will be written in different web technologies with HTML5, and this is one of the great starts for it. It'll be interesting to see how the gap between desktop and web applications develops as time goes on, and people like you and me will be playing a key role in the development of future applications. With the help of technologies like Electron, desktop application development just got that much easier.

For more Node.js content, look no further than our dedicated page!

About the author

Mika Turunen is a software professional hailing from the frozen cold Finland. He spends a good part of his day playing with emerging web and cloud related technologies, but he also has a big knack for games and game development. His hobbies include game collecting, game development and games in general. When he's not playing with technology, he is spending time with his two cats and growing his beard.


Templates for Web Pages

Packt
13 Oct 2015
13 min read
In this article by Kai Nacke, author of the book D Web Development, we will learn that every website has some recurring elements, often called a theme. Templates are an easy way to define these elements only once and then reuse them. A template engine is included in vibe.d with the so-called Diet templates. The template syntax is based on the Jade templates (http://jade-lang.com/), which you might already know about. In this article, you will learn the following:

- Why templates are useful
- Key concepts of Diet templates: inheritance, include, and blocks
- How to use filters and how to create your own filter

(For more resources related to this topic, see here.)

Using templates

Let's take a look at the simple HTML5 page with a header, footer, navigation bar and some content in the following:

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Demo site</title>
    <link rel="stylesheet" type="text/css" href="demo.css" />
  </head>
  <body>
    <header>
      Header
    </header>
    <nav>
      <ul>
        <li><a href="link1">Link 1</a></li>
        <li><a href="link2">Link 2</a></li>
        <li><a href="link3">Link 3</a></li>
      </ul>
    </nav>
    <article>
      <h1>Title</h1>
      <p>Some content here.</p>
    </article>
    <footer>
      Footer
    </footer>
  </body>
</html>
```

The formatting is done with a CSS file, as shown in the following:

```css
body {
  font-size: 1em;
  color: black;
  background-color: white;
  font-family: Arial;
}

header {
  display: block;
  font-size: 200%;
  font-weight: bolder;
  text-align: center;
}

footer {
  clear: both;
  display: block;
  text-align: center;
}

nav {
  display: block;
  float: left;
  width: 25%;
}

article {
  display: block;
  float: left;
}
```

Despite being simple, this page has elements that you often find on websites. If you create a website with more than one page, you will use this structure on every page in order to provide a consistent user interface. From the 2nd page on, you will violate the Don't Repeat Yourself (DRY) principle: the header and footer are the elements with fixed content. The content of the navigation bar is also fixed, but not every item is always displayed. Only the real content of the page (in the article block) changes with every page. Templates solve this problem. A common approach is to define a base template with the structure. For each page, you will define a template that inherits from the base template and adds the new content.

Creating your first template

In the following sections, you will create a Diet template from the HTML page using different techniques.

Turning the HTML page into a Diet template

Let's start with a one-to-one translation of the HTML page into a Diet template. The syntax is based on the Jade templates. It looks similar to the following:

```
doctype html
html
    head
        meta(charset='utf-8')
        title Demo site
        link(rel='stylesheet', type='text/css', href='demo.css')
    body
        header
            | Header
        nav
            ul
                li
                    a(href='link1') Link 1
                li
                    a(href='link2') Link 2
                li
                    a(href='link3') Link 3
        article
            h1 Title
            p Some content here.
        footer
            | Footer
```

The template resembles the HTML page. Here are the basic syntax rules for a template:

- The first word on a line is an HTML tag
- Attributes of an HTML tag are written as a comma-separated list surrounded by parentheses
- A tag may be followed by plain text, which may contain HTML code
- Plain text on a new line starts with the pipe symbol
- Nesting of elements is done by increasing the indentation

If you want to see the result of this template, save the code as index.dt and put it together with the demo.css CSS file in the views folder. The Jade templates have a special syntax for nested elements.
The list item/anchor pair from the preceding code could be written in one line, as follows:

```
li: a(href='link1') Link1
```

This syntax is currently not supported by vibe.d.

Now you need to create a small application to see the result of the template, by following the given steps:

1. Create a new project with dub, using the following command:

```
$ dub init template vibe.d
```

2. Save the template as the views/index.dt file.
3. Copy the demo.css CSS file into the public folder.
4. Change the generated source/app.d application to the following:

```d
import vibe.d;

shared static this()
{
    auto router = new URLRouter;
    router.get("/", staticTemplate!"index.dt");
    router.get("*", serveStaticFiles("public/"));

    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    settings.bindAddresses = ["::1", "127.0.0.1"];
    listenHTTP(settings, router);

    logInfo("Please open http://127.0.0.1:8080/ in your browser.");
}
```

5. Run dub inside the project folder to start the application, and then browse to http://127.0.0.1:8080/ to see the resulting page.

The application uses a new URLRouter class. This class is used to map a URL to a web page. With the router.get("/", staticTemplate!"index.dt"); statement, every request for the base URL is answered by rendering the index.dt template. The router.get("*", serveStaticFiles("public/")); statement uses a wildcard to serve all other requests as static files stored in the public folder.

Adding inheritance

Up to now, the template is only a one-to-one translation of the HTML page. The next step is to split the file into two: layout.dt and index.dt. The layout.dt file defines the general structure of a page, while index.dt inherits from this file and adds new content. The key to template inheritance is the definition of a block. A block has a name and contains some template code. A child template may replace the block, or append or prepend content to it. In the following layout.dt file, four blocks are defined: header, navigation, content and footer. For all the blocks except content, a default text is defined, as follows:

```
doctype html
html
    head
        meta(charset='utf-8')
        title Demo site
        link(rel='stylesheet', type='text/css', href='demo.css')
    body
        block header
            header Header
        block navigation
            nav
                ul
                    li <a href="link1">Link 1</a>
                    li <a href="link2">Link 2</a>
                    li <a href="link3">Link 3</a>
        block content
        block footer
            footer Footer
```

The template in the index.dt file inherits this layout and replaces the block content, as shown here:

```
extends layout

block content
    article
        h1 Title
        p Some content here.
```

You can put both files into the views folder and run dub again. The rendered page in your browser still looks the same. You can now add more pages and reuse the layout. It is also possible to change the common elements that you defined in the header, footer and navigation blocks. There is no restriction on the level of inheritance. This allows you to construct very sophisticated template systems.

Using include

Inheritance is not the only way to avoid repetition of template code. With the include keyword, you insert the content of another file. This allows you to put the reusable template code in separate files.
As an example, just put the navigation in a separate navigation.dt file:

```
nav
    ul
        li <a href="link1">Link 1</a>
        li <a href="link2">Link 2</a>
        li <a href="link3">Link 3</a>
```

The index.dt file uses the include keyword to insert the navigation.dt file, as follows:

```
doctype html
html
    head
        meta(charset='utf-8')
        title Demo site
        link(rel='stylesheet', type='text/css', href='demo.css')
    body
        header Header
        include navigation
        article
            h1 Title
            p Some content here.
        footer Footer
```

Just as with the inheritance example, you can put both files into the views folder and run dub again. The rendered page in your browser still looks the same. The Jade templates allow you to apply a filter to the included content. This is not yet implemented in vibe.d.

Integrating other languages with blocks and filters

So far, the templates have only used HTML content. However, a web application usually builds on a bunch of languages, most often integrated into a single document, as follows:

- CSS styles inside the style element
- JavaScript code inside the script element
- Content in a simplified markup language such as Markdown

Diet templates have two mechanisms for the integration of other languages. If a tag is followed by a dot, then the block is treated as plain text. For example, the following template code:

```
p.
    Some text
    And some more text
```

translates into the following:

```html
<p>
    Some text
    And some more text
</p>
```

The same can also be used for scripts and styles. For example, you can use the following script tag with JavaScript code in it:

```
script(type='text/javascript').
    console.log('D is awesome')
```

It translates to the following:

```html
<script type="text/javascript">
console.log('D is awesome')
</script>
```

An alternative is to use a filter. You specify a filter with a colon followed by the filter name. The script example can be written with a filter, as shown in the following:

```
:javascript
    console.log('D is awesome')
```

This is translated to the following:

```html
<script type="text/javascript">
    //<![CDATA[
    console.log('D is awesome')
    //]]>
</script>
```

The following filters are provided by vibe.d:

- javascript for JavaScript code
- css for CSS styles
- markdown for content written in Markdown syntax
- htmlescape to escape HTML symbols

The css filter works in the same way as the javascript filter. The markdown filter accepts text written in Markdown syntax and translates it into HTML. Markdown is a simplified markup language for web authors. The syntax is available on the Internet at http://daringfireball.net/projects/markdown/syntax. Here is our template, this time using the markdown filter for the navigation and the article content:

```
doctype html
html
    head
        meta(charset='utf-8')
        title Demo site
        link(rel='stylesheet', type='text/css', href='demo.css')
    body
        header Header
        nav
            :markdown
                - [Link 1](link1)
                - [Link 2](link2)
                - [Link 3](link3)
        article
            :markdown
                Title
                =====
                Some content here.
        footer Footer
```

The rendered HTML page is still the same. The advantage is that you have less to type, which is good if you produce a lot of content. The disadvantage is that you have to remember yet another syntax.
A normal plain text block can contain HTML tags, as follows:

```
p.
    Click this <a href="link">link</a>
```

This is rendered as the following:

```html
<p>
    Click this <a href="link">link</a>
</p>
```

There are situations where you want to treat even the HTML tags as plain text, for example, if you want to explain HTML syntax. In this case, you use the htmlescape filter, as follows:

```
p
    :htmlescape
        Link syntax: <a href="url target">text to display</a>
```

This is rendered as the following:

```html
<p>
    Link syntax: &lt;a href="url target"&gt;text to display&lt;/a&gt;
</p>
```

You can also add your own filters. The registerDietTextFilter() function is provided by vibe.d to register the new filters. This function takes the name of the filter and a pointer to the filter function. The filter function is called with the text to filter and the indentation level. It returns the filtered text. For example, you can use this functionality for pretty printing of D code, as follows:

1. Create a new project with dub, using the following command:

```
$ dub init filter vibe.d
```

2. Create the index.dt template file in the views folder. Use the new dcode filter to format the D code, as shown in the following:

```
doctype html
head
    title Code filter example
    :css
        .keyword { color: #0000ff; font-weight: bold; }
body
    p You can create your own functions.
    :dcode
        T max(T)(T a, T b)
        {
            if (a > b)
                return a;
            return b;
        }
```

3. Implement the filter function in the app.d file in the source folder. The filter function outputs the text inside a <pre> tag. Identified keywords are put inside a <span class="keyword"> element to allow custom formatting. The whole application is as follows:

```d
import vibe.d;

string filterDCode(string text, size_t indent)
{
    import std.regex;
    import std.array;

    auto dst = appender!string;
    filterHTMLEscape(dst, text, HTMLEscapeFlags.escapeQuotes);
    auto re = regex(r"(^|\s)(if|return)(;|\s)");
    text = replaceAll(dst.data, re,
                      "$1<span class=\"keyword\">$2</span>$3");
    auto lines = splitLines(text);

    string indent_string = "\n";
    while (indent-- > 0)
        indent_string ~= "\t";

    string ret = indent_string ~ "<pre>";
    foreach (ln; lines)
        ret ~= indent_string ~ ln;
    ret ~= indent_string ~ "</pre>";

    return ret;
}

shared static this()
{
    registerDietTextFilter("dcode", &filterDCode);

    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    settings.bindAddresses = ["::1", "127.0.0.1"];
    listenHTTP(settings, staticTemplate!"index.dt");

    logInfo("Please open http://127.0.0.1:8080/ in your browser.");
}
```

4. Compile and run this application to see that the keywords are bold and blue.

Summary

In this article, we have seen how to create a Diet template using different techniques, such as translating the HTML page into a Diet template, adding inheritance, using include, and integrating other languages with blocks and filters.

Resources for Article:

Further resources on this subject:
- MODx Web Development: Creating Lists [Article]
- MODx 2.0: Web Development Basics [Article]
- Ruby with MongoDB for Web Development [Article]


Asynchronous Communication between Components

Packt
09 Oct 2015
12 min read
In this article by Andreas Niedermair, the author of the book Mastering ServiceStack, we will look at asynchronous communication between components. The recent releases of .NET have added several new ways to further embrace asynchronous and parallel processing, introducing the Task Parallel Library (TPL) and async and await.

(For more resources related to this topic, see here.)

The need for asynchronous processing has been there since the early days of programming. Its main concept is to offload the processing to another thread or process, releasing the calling thread from waiting, and it has become a standard model since the rise of GUIs. In such interfaces, only one thread is responsible for drawing the GUI. It must not be blocked, in order to remain available and to avoid putting the application into a non-responding state.

This paradigm is a core point in distributed systems: at some point, long-running operations are offloaded to a separate component, either to overcome blocking or to avoid resource bottlenecks by using dedicated machines, which also makes the processing more robust against unexpected application pool recycling and other such issues. A synonym for "fire-and-forget" is "one-way", which is also reflected in the design of the static routes of ServiceStack endpoints, where the default is /{format}/oneway/{service}.

Asynchronism adds a whole new level of complexity to our processing chain, as some callers might depend on a return value. This problem can be overcome by adding a callback or another event to your design. Messaging, or in general a producer-consumer chain, is a fundamental design pattern that can be applied within the same process or inter-process, on the same machine or across machines, to decouple components. Consider the following architecture:

The client issues a request to the service, which processes the message and returns a response. The server is known and directly bound to the client, which makes an on-the-fly addition of servers practically impossible. You'd need to reconfigure the clients to reflect the collection of servers on every change, and implement a distribution logic for requests. Therefore, a new component is introduced, which acts as a broker (without any processing of the message, except delivery) between the client and the service, to decouple the service from the client. This gives us the opportunity to introduce more services for heavy-load scenarios by simply registering a new instance with the broker, as shown in the following figure:

I have left out the clustering (scaling) of brokers and also the routing of messages on purpose at this stage of the introduction. In many cross-process scenarios, a database is introduced as a broker, which is constantly polled by services (and clients, if there's a response involved) to check whether there's a message to be processed or not. Adding a database as a broker and implementing your own logic can be absolutely fine for basic systems, but for more advanced scenarios it lacks some essential features that Message Queues come shipped with:

- Scalability: Decoupling is the biggest step towards a robust design, as it introduces the possibility to add more processing nodes to your data flow.
- Resilience: Messages are guaranteed to be delivered and processed, as automatic retrying is available for non-acknowledged (processed) messages. If the retry count is exceeded, failed messages are stored in a Dead Letter Queue (DLQ) to be inspected later and are requeued after fixing the issue that caused the failure.
  In case of a partial failure of your infrastructure, clients can still produce messages, which are delivered and processed as soon as even a single consumer is back online.
- Pushing instead of polling: This is where asynchronism comes into play. Clients do not constantly poll for messages; instead, the broker pushes new messages to the subscribers of a queue. This eliminates the spinning and wait time that polling on a timer (say, one tick every 10 seconds) would introduce.
- Guaranteed order: Most Message Queues guarantee the order of processing under defined conditions (mostly FIFO).
- Load balancing: With multiple services registered for messages, there is inherent load balancing, so heavy-load scenarios can be handled better. In addition to this round-robin routing, other routing strategies exist, such as smallest-mailbox, tail-chopping, or random routing.
- Message persistence: Message Queues can be configured to persist their data to disk and even survive restarts of the host they are running on. To overcome downtime of the Message Queue itself, you can set up a cluster that offloads demand to other brokers while a single node restarts.
- Built-in priority: Message Queues usually keep separate queues for different messages and even provide a separate in-queue for prioritized messages.

There are many more features, such as time-to-live, security, and batching modes, which we will not cover as they are outside the scope of ServiceStack.

In the following examples we will refer to two basic DTOs:

public class Hello : ServiceStack.IReturn<HelloResponse>
{
  public string Name { get; set; }
}

public class HelloResponse
{
  public string Result { get; set; }
}

The Hello class is used to send a Name to a consumer, which generates a response message that is enqueued in the Message Queue as well.

RabbitMQ

RabbitMQ is a mature broker built on top of the Advanced Message Queuing Protocol (AMQP), which makes it possible to solve even more complex scenarios. Messages survive restarts of the RabbitMQ service, and an additional delivery guarantee is accomplished by requiring an acknowledgement of the receipt (and processing) of each message; for typical scenarios this is handled by ServiceStack by default.

The client for this Message Queue is located in the ServiceStack.RabbitMq NuGet package (it uses the official client from the RabbitMQ.Client package under the hood). You can add additional protocols to RabbitMQ, such as Message Queue Telemetry Transport (MQTT) and Streaming Text Oriented Messaging Protocol (STOMP), with plugins to ease interop scenarios.

Due to its complexity, we will focus on an abstracted interaction with the broker. There are many books and articles available for a deeper understanding of RabbitMQ; a quick overview of the covered scenarios is available at https://www.rabbitmq.com/getstarted.html.

Publishing a message with RabbitMQ does not differ much from RedisMQ:

using ServiceStack;
using ServiceStack.RabbitMq;

using (var rabbitMqServer = new RabbitMqServer())
{
  using (var messageProducer = rabbitMqServer.CreateMessageProducer())
  {
    var hello = new Hello
    {
      Name = "Demo"
    };
    messageProducer.Publish(hello);
  }
}

This will create a Hello object and publish it to the corresponding queue in RabbitMQ.
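At this point the message is waiting in the DTO's in-queue for a consumer. If you want to check which queue name ServiceStack derived for the DTO (for example, when browsing the RabbitMQ management UI), the QueueNames helper exposes it; a minimal sketch, assuming the default naming conventions:

using ServiceStack.Messaging;

// by convention the in-queue for the Hello DTO is "mq:Hello.inq"
var inQueueName = QueueNames<Hello>.In;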
To retrieve this message, we need to register a handler, as shown here:

using System;
using ServiceStack;
using ServiceStack.RabbitMq;
using ServiceStack.Text;

var rabbitMqServer = new RabbitMqServer();
rabbitMqServer.RegisterHandler<Hello>(message =>
{
  var hello = message.GetBody();
  var name = hello.Name;
  var result = "Hello {0}".Fmt(name);
  result.Print();
  return null;
});
rabbitMqServer.Start();

"Listening for hello messages".Print();
Console.ReadLine();

rabbitMqServer.Dispose();

This registers a handler for Hello objects and prints a message to the console. In favor of a straightforward example, we omit all the parameters of the RabbitMqServer constructor that have default values, which connects us to the local instance on port 5672. To change this, you can either provide a connectionString parameter (and optional credentials) or use a RabbitMqMessageFactory object to customize the connection.

Setup

Setting up RabbitMQ involves a bit of effort. First you need to install Erlang from http://www.erlang.org/download.html, which is the runtime for RabbitMQ due to its functional and concurrent nature. Then you can grab the installer from https://www.rabbitmq.com/download.html, which will set up RabbitMQ as a running service with a default configuration.

Processing chain

Due to its complexity, the processing chain of any mature Message Queue is different from what you know from RedisMQ. Exchanges are introduced in front of queues to route messages to their respective queues according to their routing keys.

The default exchange name is mx.servicestack (defined in ServiceStack.Messaging.QueueNames.Exchange) and is used by any Publish call on an IMessageProducer or IMessageQueueClient object. With IMessageQueueClient.Publish you can inject a routing key (the queueName parameter) to customize the routing of a message. Failed messages are published to ServiceStack.Messaging.QueueNames.ExchangeDlq (mx.servicestack.dlq) and routed to queues with the name mq:{type}.dlq. Successful messages are published to ServiceStack.Messaging.QueueNames.ExchangeTopic (mx.servicestack.topic) and routed to the queue mq:{type}.outq. Additionally, there is a priority queue alongside the in-queue, with the name mq:{type}.priority.

If you interact with RabbitMQ on a lower level, you can publish directly to queues and leave the routing via an exchange out of the picture. Each queue has settings that define whether the queue is durable, whether it deletes itself after the last consumer disconnects, and which exchange and routing key are used to publish dead messages. More information on the concepts, the different exchange types, queues, and acknowledging messages can be found at https://www.rabbitmq.com/tutorials/amqp-concepts.html.
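To make the routing part concrete, here is a minimal sketch of publishing with an explicit routing key through IMessageQueueClient. It assumes a local broker and the default naming conventions, and the exact overloads may vary slightly between ServiceStack versions:

using ServiceStack.Messaging;
using ServiceStack.RabbitMq;

using (var rabbitMqServer = new RabbitMqServer())
using (var messageQueueClient = rabbitMqServer.CreateMessageQueueClient())
{
  var hello = new Hello { Name = "Routed" };
  // publishes via the mx.servicestack exchange, using the DTO's
  // in-queue name ("mq:Hello.inq") as the routing key
  messageQueueClient.Publish(QueueNames<Hello>.In, new Message<Hello>(hello));
}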
Replying directly back to the producer

Messages published to a queue are dequeued in FIFO order, hence there is no guarantee that a response is delivered to the issuer of the initial message. To force a response to the originator, you can make use of the ReplyTo property of a message:

using System;
using ServiceStack;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;
using ServiceStack.Text;

var rabbitMqServer = new RabbitMqServer();
var messageQueueClient = rabbitMqServer.CreateMessageQueueClient();
var queueName = messageQueueClient.GetTempQueueName();

var hello = new Hello
{
  Name = "reply to originator"
};
messageQueueClient.Publish(new Message<Hello>(hello)
{
  ReplyTo = queueName
});

var message = messageQueueClient.Get<HelloResponse>(queueName);
var helloResponse = message.GetBody();

This code is more or less identical to the RedisMQ approach, but it does something different under the hood. The messageQueueClient.GetTempQueueName() call creates a temporary queue, whose name is generated by ServiceStack.Messaging.QueueNames.GetTempQueueName(). This temporary queue does not survive a restart of RabbitMQ and gets deleted as soon as the consumer disconnects. As each queue is a separate Erlang process, you may run into Erlang's process limits as well as the maximum number of file descriptors of your OS.

Broadcasting a message

In many scenarios a broadcast to multiple consumers is required, for example when you need to attach multiple loggers to a system; this calls for a lower-level implementation. The solution to this requirement is to create a fan-out exchange, which forwards a message to all bound queues rather than to a single connected queue (where one queue is consumed exclusively by one consumer):

using ServiceStack;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;

var fanoutExchangeName = string.Concat(QueueNames.Exchange, ".", ExchangeType.Fanout);
var rabbitMqServer = new RabbitMqServer();
var messageProducer = (RabbitMqProducer)rabbitMqServer.CreateMessageProducer();
var channel = messageProducer.Channel;
channel.ExchangeDeclare(exchange: fanoutExchangeName, type: ExchangeType.Fanout, durable: true, autoDelete: false, arguments: null);

With the cast to RabbitMqProducer we gain access to lower-level operations, which we use to declare an exchange with the name mx.servicestack.fanout that is durable and does not get deleted. Now, we need to bind a temporary and exclusive queue to the exchange:

var messageQueueClient = (RabbitMqQueueClient)rabbitMqServer.CreateMessageQueueClient();
var queueName = messageQueueClient.GetTempQueueName();
channel.QueueBind(queue: queueName, exchange: fanoutExchangeName, routingKey: QueueNames<Hello>.In);

The call to messageQueueClient.GetTempQueueName() creates a temporary queue, which lives only as long as a consumer is connected. This queue is bound to the fan-out exchange with the routing key mq:Hello.inq.

To publish messages we need to use the RabbitMqProducer object (messageProducer):

var hello = new Hello
{
  Name = "Broadcast"
};
var message = new Message<Hello>(hello);
messageProducer.Publish(queueName: QueueNames<Hello>.In, message: message, exchange: fanoutExchangeName);

This publishes the message on the newly created exchange with mq:Hello.inq as the routing key. Note that even though the first parameter of Publish is named queueName, it is propagated as the routingKey to the underlying PublishMessage method call.
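To see the fan-out behavior in action, you can bind a second temporary queue in exactly the same way; every queue bound to the exchange receives its own copy of each published message. A quick sketch, reusing the channel and exchange names from above:

// each additional bound queue gets an independent copy of every broadcast
var secondQueueClient = (RabbitMqQueueClient)rabbitMqServer.CreateMessageQueueClient();
var secondQueueName = secondQueueClient.GetTempQueueName();
channel.QueueBind(queue: secondQueueName, exchange: fanoutExchangeName, routingKey: QueueNames<Hello>.In);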
Now, we need to encapsulate the handling of the message:

var messageHandler = new MessageHandler<Hello>(rabbitMqServer, message =>
{
  var hello = message.GetBody();
  var name = hello.Name;
  name.Print();
  return null;
});

The MessageHandler<T> class is used internally in all the messaging solutions and takes care of retries and replies. Now, we need to connect the message handler to the queue:

using System;
using System.IO;
using System.Threading.Tasks;
using RabbitMQ.Client;
using RabbitMQ.Client.Exceptions;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;

var consumer = new RabbitMqBasicConsumer(channel);
channel.BasicConsume(queue: queueName, noAck: false, consumer: consumer);

Task.Run(() =>
{
  while (true)
  {
    BasicGetResult basicGetResult;
    try
    {
      basicGetResult = consumer.Queue.Dequeue();
    }
    catch (EndOfStreamException)
    {
      // this is ok
      return;
    }
    catch (OperationInterruptedException)
    {
      // this is ok
      return;
    }
    var message = basicGetResult.ToMessage<Hello>();
    messageHandler.ProcessMessage(messageQueueClient, message);
  }
});

This creates a RabbitMqBasicConsumer object, which is used to consume the temporary queue. To process messages, we dequeue from the consumer's Queue property in a separate task. This example does not handle disconnects and reconnects from the server, and does not integrate with the services (however, both can be achieved).

Integrate RabbitMQ in your service

The integration of RabbitMQ in a ServiceStack service does not differ much from RedisMQ. All you have to do is adapt the Configure method of your host:

using Funq;
using ServiceStack;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;

public override void Configure(Container container)
{
  container.Register<IMessageService>(arg => new RabbitMqServer());
  container.Register<IMessageFactory>(arg => new RabbitMqMessageFactory());

  var messageService = container.Resolve<IMessageService>();
  messageService.RegisterHandler<Hello>(this.ServiceController.ExecuteMessage);
  messageService.Start();
}

The registration of an IMessageService is needed to reroute the handlers to your service; the registration of an IMessageFactory is relevant if you want to publish a message from within your service with PublishMessage.

Summary

In this article the messaging pattern was introduced, along with the available clients for existing Message Queues.

Using Underscore.js with Collections

Packt
01 Oct 2015
21 min read
In this article by Alex Pop, the author of the book Learning Underscore.js, we will explore Underscore functionality for collections using more in-depth examples. Some of the more advanced concepts related to Underscore functions, such as scope resolution and execution context, will be explained. The topics of the article are as follows:

- Key Underscore functions revisited
- Searching and filtering

This article assumes that you are familiar with JavaScript fundamentals such as prototypical inheritance and the built-in data types. The source code for the examples from this article is hosted online at https://github.com/popalexandruvasile/underscorejs-examples/tree/master/collections, and you can execute the examples using the Cloud9 IDE at https://ide.c9.io/alexpop/underscorejs-examples from the collections folder.

Key Underscore functions – each, map, and reduce

This flexible approach means that some Underscore functions can operate over collections: an Underscore-specific term for arrays, array-like objects, and objects (where the collection is represented by the object's properties). We will refer to the elements within these collections as collection items. By providing functions that operate over object properties, Underscore expands JavaScript's reflection-like capabilities. Reflection is a programming feature for examining the structure of a computer program, especially during program execution.

JavaScript is a dynamic language without static type system support (as of ES6). This makes it convenient to use a technique named duck typing when working with objects that share similar behaviors. Duck typing is a programming technique used in dynamic languages where objects are identified through their structure, represented by properties and methods, rather than through their type (the name is derived from the phrase "if it walks like a duck, swims like a duck, and quacks like a duck, then it is a duck"). Underscore itself uses duck typing to assert that an object is an array by checking for a property called length of type Number.

Applying reflection techniques

We will build an example that demonstrates duck typing and reflection techniques through a function that extracts object properties so that they can be persisted to a relational database. Relational databases usually store objects as data rows, with column types that map to regular SQL data types. We will use the _.each() function to iterate over object properties and extract those of type boolean, number, string, and Date, as they can easily be mapped to SQL data types, and ignore everything else:

var propertyExtractor = (function() {
  "use strict";
  return {
    extractStorableProperties: function(source) {
      var storableProperties = {};
      if (!source || source.id !== +source.id) {
        return storableProperties;
      }
      _.each(source, function(value, key) {
        var isDate = typeof value === 'object' && value instanceof Date;
        if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') {
          storableProperties[key] = value;
        }
      });
      return storableProperties;
    }
  };
}());

You can find the example in the propertyExtractor.js file within the each-with-properties-and-context folder from the source code for this article. The guard clause at the top of the function checks whether the object passed to the extractStorableProperties() function has a property called id that is a number.
The + sign converts the id property to a number, and the non-identity operator !== compares the result of this conversion with the unconverted original value. The non-identity operator returns true only if the compared operands have different types, or have the same type but different values. This is a duck typing technique that Underscore itself used up until version 1.7 to assert whether it was dealing with an array-like instance or an object instance in its collection-related functions. Underscore's collection functions operate over array-like objects, as they do not strictly check for the built-in Array object; they can also work with the arguments object or HTML DOM NodeList objects.

The _.each() call then operates over the object properties, using an iteration function that receives the property value as its first argument and the property name as the optional second argument. If a property has a null or undefined value, it will not appear in the returned object. The extractStorableProperties() function returns a new object with all the storable properties. The return value is used in the test specifications to assert that, given a sample object, the function behaves as expected:

describe("Given propertyExtractor", function() {
  describe("when calling extractStorableProperties()", function() {
    var storableProperties;
    beforeEach(function() {
      var source = {
        id: 2,
        name: "Blue lamp",
        description: null,
        ui: undefined,
        price: 10,
        purchaseDate: new Date(2014, 10, 1),
        isInUse: true,
      };
      storableProperties = propertyExtractor.extractStorableProperties(source);
    });
    it("then the property count should be correct", function() {
      expect(Object.keys(storableProperties).length).toEqual(5);
    });
    it("then the 'price' property should be correct", function() {
      expect(storableProperties.price).toEqual(10);
    });
    it("then the 'description' property should not be defined", function() {
      expect(storableProperties.description).toEqual(undefined);
    });
  });
});

Notice how we used the propertyExtractor global instance to access the function under test, and then used the ES5 function Object.keys to assert that the number of returned properties is correct. In a production-ready application, we would need to ensure that global object names do not clash, among other best practices. You can find the test specifications in the spec/propertyExtractorSpec.js file and execute them by browsing the SpecRunner.html file from the example source code folder. There is also an index.html file that displays the results of the example rendered in the browser using the index.js file.

Manipulating the this variable

Many Underscore functions have a signature similar to _.each(list, iteratee, [context]), where the optional context parameter is used to set the this value for the iteratee function when it is called for each collection item. In JavaScript, the built-in this variable is different depending on the context where it is used. When this is used in the global scope context, in a browser environment, it returns the native window object instance. If this is used in a function scope, the variable will have different values:

If the function is an object method or an object constructor, then this will return the current object instance.
Here is a short example for this scenario:

var item1 = {
  id: 1,
  name: "Item1",
  getInfo: function() {
    return "Object: " + this.id + "-" + this.name;
  }
};
console.log(item1.getInfo()); // -> "Object: 1-Item1"

If the function does not belong to an object, then this will be undefined in JavaScript strict mode. In non-strict mode, this will return its global scope value.

With a library such as Underscore that favors a functional style, we need to ensure that the functions used as parameters are using the this variable correctly. Let's assume that you have a function that references this (maybe it was used as an object method) and you want to use it with one of the Underscore functions, such as _.each(). You can still use the function as-is and provide the desired this value as the context parameter when calling _.each(). I have rewritten the previous example function to showcase the use of the context parameter:

var propertyExtractor = (function() {
  "use strict";
  return {
    extractStorablePropertiesWithThis: function(source) {
      var storableProperties = {};
      if (!source || source.id !== +source.id) {
        return storableProperties;
      }
      _.each(source, function(value, key) {
        var isDate = typeof value === 'object' && value instanceof Date;
        if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') {
          this[key] = value;
        }
      }, storableProperties);
      return storableProperties;
    }
  };
}());

The this[key] = value assignment shows a use of this that is typical for an object method, while the final argument of the _.each() call is the context that this was set to. The storableProperties value is passed as this for each iteratee function call. The test specifications for this example are identical to the previous example, and you can find them in the same each-with-properties-and-context folder from the source code for this article. You can use the optional context parameter in many of the Underscore functions where applicable; it is a useful technique when working with functions that rely on a specific this value.

Using map and reduce with object properties

In the previous example, we had some user interface-specific code in the index.js file that was tasked with displaying the results of the propertyExtractor.extractStorableProperties() call in the browser. Let's pull this functionality into another example and imagine that we need a new function that, given an object, transforms its properties into a format suitable for display in a browser by returning an array of formatted text for each property. To achieve this, we will use the Underscore _.map() function over object properties, as demonstrated in the next example:

var propertyFormatter = (function() {
  "use strict";
  return {
    extractPropertiesForDisplayAsArray: function(source) {
      if (!source || source.id !== +source.id) {
        return [];
      }
      return _.map(source, function(value, key) {
        var isDate = typeof value === 'object' && value instanceof Date;
        if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') {
          return "Property: " + key + " of type: " + typeof value + " has value: " + value;
        }
        return "Property: " + key + " cannot be displayed.";
      });
    }
  };
}());

With Underscore, we can write compact and expressive code that manipulates these properties with little effort.
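In isolation, _.map()'s behavior over a plain object is easy to see in a couple of lines; a quick sketch you can run in a browser console with Underscore loaded:

// _.map over a plain object: the iteratee receives (value, key)
var formatted = _.map({ a: 1, b: 2 }, function(value, key) {
  return key + "=" + value;
});
// formatted -> ["a=1", "b=2"]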
The test specifications for the extractPropertiesForDisplayAsArray() function use Jasmine regular expression matchers to assert the test conditions:

describe("Given propertyFormatter", function() {
  describe("when calling extractPropertiesForDisplayAsArray()", function() {
    var propertiesForDisplayAsArray;
    beforeEach(function() {
      var source = {
        id: 2,
        name: "Blue lamp",
        description: null,
        ui: undefined,
        price: 10,
        purchaseDate: new Date(2014, 10, 1),
        isInUse: true,
      };
      propertiesForDisplayAsArray = propertyFormatter.extractPropertiesForDisplayAsArray(source);
    });
    it("then the returned property count should be correct", function() {
      expect(propertiesForDisplayAsArray.length).toEqual(7);
    });
    it("then the 'price' property should be displayed", function() {
      expect(propertiesForDisplayAsArray[4]).toMatch("price.+10");
    });
    it("then the 'description' property should not be displayed", function() {
      expect(propertiesForDisplayAsArray[2]).toMatch("cannot be displayed");
    });
  });
});

The following example shows how _.reduce() is used to manipulate object properties. It transforms the properties of an object into a format suitable for browser display by returning a single string value that contains all the properties in a convenient format:

extractPropertiesForDisplayAsString: function(source) {
  if (!source || source.id !== +source.id) {
    return [];
  }
  return _.reduce(source, function(memo, value, key) {
    if (memo && memo !== "") {
      memo += "<br/>";
    }
    var isDate = typeof value === 'object' && value instanceof Date;
    if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') {
      return memo + "Property: " + key + " of type: " + typeof value + " has value: " + value;
    }
    return memo + "Property: " + key + " cannot be displayed.";
  }, "");
}

The example is almost identical to the previous one, with the exception of the memo accumulator used to build the returned string value. The test specifications for the extractPropertiesForDisplayAsString() function use a regular expression matcher and can be found in the spec/propertyFormatterSpec.js file:

describe("when calling extractPropertiesForDisplayAsString()", function() {
  var propertiesForDisplayAsString;
  beforeEach(function() {
    var source = {
      id: 2,
      name: "Blue lamp",
      description: null,
      ui: undefined,
      price: 10,
      purchaseDate: new Date(2014, 10, 1),
      isInUse: true,
    };
    propertiesForDisplayAsString = propertyFormatter.extractPropertiesForDisplayAsString(source);
  });
  it("then the returned string has expected length", function() {
    expect(propertiesForDisplayAsString.length).toBeGreaterThan(0);
  });
  it("then the 'price' property should be displayed", function() {
    expect(propertiesForDisplayAsString).toMatch("<br/>Property: price of type: number has value: 10<br/>");
  });
});

The examples from this subsection can be found within the map.reduce-with-properties folder from the source code for this article.

Searching and filtering

The _.find(list, predicate, [context]) function is part of Underscore's comprehensive functionality for searching and filtering collections represented by object properties and array-like objects. We will make a distinction between search and filter functions, with the former tasked with finding one item in a collection and the latter tasked with retrieving a subset of the collection (although sometimes you will find the distinction between these functions thin and blurry).
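To make that distinction concrete before diving into the examples, here is a quick, self-contained sketch you can run in a console with Underscore loaded:

var isEven = function(n) { return n % 2 === 0; };
_.find([1, 2, 3, 4], isEven);   // -> 2 (the first match only)
_.filter([1, 2, 3, 4], isEven); // -> [2, 4] (every match)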
We will revisit the find function and the other search- and filtering-related functions using an example with slightly more diverse data that is suitable for database persistence. We will use the problem domain of a bicycle rental shop and build an array of bicycle objects with the following structure:

var getBicycles = function() {
  return [{
    id: 1,
    name: "A fast bike",
    type: "Road Bike",
    quantity: 10,
    rentPrice: 20,
    dateAdded: new Date(2015, 1, 2)
  }, {
    ...
  }, {
    id: 12,
    name: "A clown bike",
    type: "Children Bike",
    quantity: 2,
    rentPrice: 12,
    dateAdded: new Date(2014, 11, 1)
  }];
};

Each bicycle object has an id property, and we will use the propertyFormatter object built in the previous section to display the example results in the browser for your convenience. The code has been shortened here for brevity (you can find its full version alongside the other examples from this section within the searching and filtering folders from the source code for this article). All the examples are covered by tests, and these are the recommended starting points if you want to explore them in detail.

Searching

For the first example of this section, we will define a bicycle-related requirement where we need to search for a bicycle of a specific type with a rental price under a maximum value. Compared to the previous _.find() example, we will start by writing the test specifications first, for functionality that is yet to be implemented. This is a test-driven development approach, where we define the acceptance criteria for the function under test first, followed by the actual implementation. Writing the tests first forces us to think about what the code should do, rather than how it should do it, and this helps eliminate waste by writing only the code required to make the tests pass.

Underscore find

The test specifications for our initial requirement are as follows:

describe("Given bicycleFinder", function() {
  describe("when calling findBicycle()", function() {
    var bicycle;
    beforeEach(function() {
      bicycle = bicycleFinder.findBicycle("Urban Bike", 16);
    });
    it("then it should return an object", function() {
      expect(bicycle).toBeDefined();
    });
    it("then the 'type' property should be correct", function() {
      expect(bicycle.type).toEqual("Urban Bike");
    });
    it("then the 'rentPrice' property should be correct", function() {
      expect(bicycle.rentPrice).toEqual(15);
    });
  });
});

The bicycleFinder.findBicycle() call should return one bicycle object of the expected type and price, as asserted by the tests. Here is the implementation that satisfies the test specifications:

var bicycleFinder = (function() {
  "use strict";
  var getBicycles = function() {
    return [{
      id: 1,
      name: "A fast bike",
      type: "Road Bike",
      quantity: 10,
      rentPrice: 20,
      dateAdded: new Date(2015, 1, 2)
    }, {
      ...
    }, {
      id: 12,
      name: "A clown bike",
      type: "Children Bike",
      quantity: 2,
      rentPrice: 12,
      dateAdded: new Date(2014, 11, 1)
    }];
  };
  return {
    findBicycle: function(type, maxRentPrice) {
      var bicycles = getBicycles();
      return _.find(bicycles, function(bicycle) {
        return bicycle.type === type && bicycle.rentPrice <= maxRentPrice;
      });
    }
  };
}());

The code returns the first bicycle that satisfies the search criteria, ignoring the rest of the bicycles that might meet the same criteria. You can browse the index.html file from the searching folder within the source code for this article to see the result of calling the bicycleFinder.findBicycle() function displayed in the browser via the propertyFormatter object.
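One detail worth remembering is that _.find() returns undefined when no item satisfies the predicate, so callers should guard against a missing result. A small sketch, assuming no bicycle in the sample data rents for 1 or less:

var bicycle = bicycleFinder.findBicycle("Road Bike", 1);
if (!bicycle) {
  console.log("No bicycle matched the search criteria.");
}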
Underscore some

There is a function closely related to _.find(), with the signature _.some(list, [predicate], [context]). This function returns true if at least one item of the list collection satisfies the predicate function. The predicate parameter is optional, and if it is not specified, _.some() returns true if at least one item of the collection is truthy. This makes the function a good candidate for implementing guard clauses. A guard clause is a check that ensures that a variable (usually a parameter) satisfies a specific condition before it is used any further. The next example shows how _.some() is used to perform checks that are typical for a guard clause:

var list1 = [];
var list2 = [null, , undefined, {}];
var object1 = {};
var object2 = {
  property1: null,
  property3: true
};

if (!_.some(list1) && !_.some(object1)) {
  alert("Collections list1 and object1 are not valid when calling _.some() over them.");
}
if (_.some(list2) && _.some(object2)) {
  alert("Collections list2 and object2 have at least one valid item and they are valid when calling _.some() over them.");
}

If you execute this code in a browser, you will see both alerts being displayed. The first alert gets triggered because an empty array and an object without any properties are found. The second alert appears because the array has at least one element that is not null and not undefined, and the object has at least one property that evaluates as true.

Going back to our bicycle data, we will define a new requirement to showcase the use of _.some() in this context: we will implement a function that ensures that we can find at least one bicycle of a specific type and with a maximum rent price. The code is very similar to the bicycleFinder.findBicycle() implementation, with the difference that the new function returns true if a matching bicycle is found (rather than the actual object):

hasBicycle: function(type, maxRentPrice) {
  var bicycles = getBicycles();
  return _.some(bicycles, function(bicycle) {
    return bicycle.type === type && bicycle.rentPrice <= maxRentPrice;
  });
}

You can find the test specifications for this function in the spec/bicycleFinderSpec.js file from the searching example folder.

Underscore findWhere

Another function similar to _.find() has the signature _.findWhere(list, properties). It compares the property key-value pairs of each collection item from list with the property key-value pairs found on the properties object parameter, and returns the first item that matches all of them. Usually, the properties parameter is an object literal that contains a subset of the properties of a collection item. The _.findWhere() function is useful when we need to extract a collection item matching an exact value, compared to _.find(), which can extract a collection item that matches a range of values or more complex criteria. To showcase the function, we will implement a requirement to search for a bicycle that has a specific id value.
This is how the test specifications look:

describe("when calling findBicycleById()", function() {
  var bicycle;
  beforeEach(function() {
    bicycle = bicycleFinder.findBicycleById(6);
  });
  it("then it should return an object", function() {
    expect(bicycle).toBeDefined();
  });
  it("then the 'id' property should be correct", function() {
    expect(bicycle.id).toEqual(6);
  });
});

And the next code snippet from the bicycleFinder.js file contains the actual implementation:

findBicycleById: function(id) {
  var bicycles = getBicycles();
  return _.findWhere(bicycles, { id: id });
}

Underscore contains

In a similar vein to the _.some() function, there is a _.contains(list, value) function that returns true if at least one item from the list collection is equal to the value parameter. The equality check is based on the strict comparison operator ===, where the operands are checked for both type and value equality. We will implement a function that checks whether a bicycle with a specific id value exists in our collection:

hasBicycleWithId: function(id) {
  var bicycles = getBicycles();
  var bicycleIds = _.pluck(bicycles, "id");
  return _.contains(bicycleIds, id);
}

Notice how the _.pluck(list, propertyName) function was used to create an array that stores the id property value of each collection item. In its implementation, _.pluck() actually uses _.map(), acting as a shortcut function for it.

Filtering

As we mentioned at the beginning of this section, Underscore provides powerful filtering functions, which are usually tasked with working on a subsection of a collection. We will reuse the same example data as before and build some new functions to explore this functionality.

Underscore filter

We will start by defining a new requirement for our data, where we need to build a function that retrieves all bicycles of a specific type and with a maximum rent price. This is how the test specifications look for the yet-to-be-implemented function bicycleFinder.filterBicycles(type, maxRentPrice):

describe("when calling filterBicycles()", function() {
  var bicycles;
  beforeEach(function() {
    bicycles = bicycleFinder.filterBicycles("Urban Bike", 16);
  });
  it("then it should return two objects", function() {
    expect(bicycles).toBeDefined();
    expect(bicycles.length).toEqual(2);
  });
  it("then the 'type' property should be correct", function() {
    expect(bicycles[0].type).toEqual("Urban Bike");
    expect(bicycles[1].type).toEqual("Urban Bike");
  });
  it("then the 'rentPrice' property should be correct", function() {
    expect(bicycles[0].rentPrice).toEqual(15);
    expect(bicycles[1].rentPrice).toEqual(14);
  });
});

The test expectations assume that the function under test, filterBicycles(), returns an array, and they assert against each element of this array. To implement the new function, we will use the _.filter(list, predicate, [context]) function, which returns an array with all the items from the list collection that satisfy the predicate function. Here is our example implementation:

filterBicycles: function(type, maxRentPrice) {
  var bicycles = getBicycles();
  return _.filter(bicycles, function(bicycle) {
    return bicycle.type === type && bicycle.rentPrice <= maxRentPrice;
  });
}

The usage of the _.filter() function is very similar to that of _.find(), the only difference being the return type. You can find this example, together with the rest of the examples from this subsection, within the filtering folder from the source code for this article.
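Filter results also compose naturally with the other collection functions we have seen; for instance, this sketch lists just the names of the matching bicycles (the exact names depend on the sample data, which is abbreviated above):

var names = _.pluck(bicycleFinder.filterBicycles("Urban Bike", 16), "name");
// -> an array with the 'name' of each matching bicycle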
Underscore where

Underscore defines a shortcut function for _.filter(), which is _.where(list, properties). This function is similar to the _.findWhere() function: it uses the properties object parameter to compare and retrieve all the items from the list collection with matching properties. To showcase the function, we defined a new requirement for our example data, where we need to retrieve all bicycles of a specific type. This is the code that implements the requirement:

filterBicyclesByType: function(type) {
  var bicycles = getBicycles();
  return _.where(bicycles, { type: type });
}

By using _.where(), we are in fact using a more compact and expressive version of _.filter() in scenarios where we need to perform exact value matches.

Underscore reject and partition

Underscore provides a useful function that is the opposite of _.filter() and has a similar signature: _.reject(list, predicate, [context]). Calling it returns an array of the values from the list collection that do not satisfy the predicate function. To show its usage, we will implement a function that retrieves all bicycles with a rental price less than or equal to a given value. Here is the function implementation:

getAllBicyclesForSetRentPrice: function(setRentPrice) {
  var bicycles = getBicycles();
  return _.reject(bicycles, function(bicycle) {
    return bicycle.rentPrice > setRentPrice;
  });
}

Using the _.filter() function alongside the _.reject() function with the same list collection and predicate function allows us to partition the collection into two arrays: one holds items that satisfy the predicate function, while the other holds items that do not. Underscore has a more convenient function that achieves the same result, and this is _.partition(list, predicate). It returns an array with two array elements: the first has the values that would be returned by calling _.filter() with the same input parameters, and the second has the values that would be returned by calling _.reject().

Underscore every

We mentioned _.some() as a great function for implementing guard clauses. It is also worth mentioning another closely related function, _.every(list, [predicate], [context]). The function checks every item of the list collection and returns true if every item satisfies the predicate function, or if list is null, undefined, or empty. If the predicate function is not specified, the value of each item is evaluated instead.
If we use the same data from the guard clause example for _.some(), we will get the opposite results, as shown in the next example:

var list1 = [];
var list2 = [null, , undefined, {}];
var object1 = {};
var object2 = {
  property1: null,
  property3: true
};

if (_.every(list1) && _.every(object1)) {
  alert("Collections list1 and object1 are valid when calling _.every() over them.");
}
if (!_.every(list2) && !_.every(object2)) {
  alert("Collections list2 and object2 do not have all items valid so they are not valid when calling _.every() over them.");
}

To ensure that a collection is not null, undefined, or empty, and that each item is also not null or undefined, we should use both _.some() and _.every() as part of the same check, as shown in the next example:

var list1 = [{}];
var object1 = {
  property1: {}
};

if (_.every(list1) && _.every(object1) && _.some(list1) && _.some(object1)) {
  alert("Collections list1 and object1 are valid when calling both _.some() and _.every() over them.");
}

If the list1 object is an empty array or an empty object literal, calling _.every() on it returns true while calling _.some() returns false, hence the need to use both functions when validating a collection. These code examples demonstrate how you can build your own guard clauses or data validation rules using simple Underscore functions.

Summary

In this article, we explored many of the collection-specific functions provided by Underscore and demonstrated additional functionality, continuing with the searching and filtering functions.

Deploying on your own server

Packt
30 Sep 2015
16 min read
In this article by Jack Stouffer, the author of the book Mastering Flask, you will learn how to deploy and host your application using the different options available, along with the advantages and disadvantages of each.

The most common way to deploy any web app is to run it on a server that you have control over. Control in this case means access to the terminal on the server with an administrator account. This type of deployment gives you the most freedom of all the choices, as it allows you to install any program or tool you wish. This is in contrast to other hosting solutions, where the web server and database are chosen for you. This type of deployment also happens to be the least expensive option.

The downside to this freedom is that you take on the responsibility of keeping the server up, backing up user data, keeping the software on the server up to date to avoid security issues, and so on. Entire books have been written on good server management, so if this is not a responsibility that you believe you or your company can handle, it would be best to choose one of the other deployment options.

This section is based on a Debian Linux server, as Linux is far and away the most popular OS for running web servers, and Debian is the most popular Linux distro (a particular combination of software and the Linux kernel released as a package). Any OS with Bash and a program called SSH (introduced in the next section) will work for this article; the only differences will be the command-line programs used to install software on the server.

Each of these web servers will use a protocol named Web Server Gateway Interface (WSGI), which is a standard designed to allow Python web applications to easily communicate with web servers. We will never work with WSGI directly. However, most of the web server interfaces we will be using have WSGI in their name, and it can be confusing if you don't know what the name refers to.

Pushing code to your server with fabric

To automate the process of setting up and pushing our application code to the server, we will use a Python tool called fabric. Fabric is a command-line program that reads and executes Python scripts on remote servers using a tool called SSH. SSH is a protocol that allows a user of one computer to remotely log in to another computer and execute commands on the command line, provided that the user has an account on the remote machine.

To install fabric, we will use pip:

$ pip install fabric

Fabric commands are collections of command-line programs to be run on the remote machine's shell, in this case, Bash. We are going to make three different commands: one to run our unit tests, one to set up a brand new server to our specifications, and one to have the server update its copy of the application code with git. We will store these commands in a new file at the root of our project directory called fabfile.py.

As it's the easiest to create, let's make the test command first:

from fabric.api import local

def test():
    local('python -m unittest discover')

To run this function from the command line, we can use fabric's command-line interface by passing the name of the command to run:

$ fab test
[localhost] local: python -m unittest discover
.....
---------------------------------------------------------------------
Ran 5 tests in 6.028s

OK

Fabric has three main commands: local, run, and sudo.
The local function, as seen in the preceding command, runs commands on the local computer. The run and sudo functions run commands on a remote machine, with sudo running them as an administrator. All of these functions notify fabric whether the command ran successfully or not. If a command didn't run successfully, meaning in this case that our tests failed, any other commands in the function will not be run. This is useful for our commands because it forces us not to push any code to the server that does not pass our tests.

Now we need to create the command to set up a new server from scratch. What this command does is install the software our production environment needs, as well as download the code from our centralized git repository. It also creates a new user that will act as the runner of the web server and the owner of the code repository.

Do not run your web server or have your code deployed by the root user. This opens your application to a whole host of security vulnerabilities.

This command will differ based on your operating system, and we will be adding to it in the rest of the article based on which server you choose:

from fabric.api import env, local, run, sudo, cd

env.hosts = ['deploy@[your IP]']

def upgrade_libs():
    sudo("apt-get update")
    sudo("apt-get upgrade")

def setup():
    test()
    upgrade_libs()

    # necessary to install many Python libraries
    sudo("apt-get install -y build-essential")
    sudo("apt-get install -y git")
    sudo("apt-get install -y python")
    sudo("apt-get install -y python-pip")
    # necessary to install many Python libraries
    sudo("apt-get install -y python-all-dev")

    run("useradd -d /home/deploy/ deploy")
    run("gpasswd -a deploy sudo")

    # allows Python packages to be installed by the deploy user
    sudo("chown -R deploy /usr/local/")
    sudo("chown -R deploy /usr/lib/python2.7/")

    run("git config --global credential.helper store")
    with cd("/home/deploy/"):
        run("git clone [your repo URL]")

    with cd('/home/deploy/webapp'):
        run("pip install -r requirements.txt")
        run("python manage.py createdb")

There are two new fabric features in this script. One is the env.hosts assignment, which tells fabric the user and IP address of the machine it should log in to. Second, there is the cd function used in conjunction with the with keyword, which executes any functions in the context of that directory instead of the home directory of the deploy user. The line that modifies the git configuration tells git to remember your repository's username and password, so you do not have to enter them every time you wish to push code to the server. Also, before the server is set up, we make sure to update the server's software to keep it up to date.

Finally, we have the function to push our new code to the server. In time, this command will also restart the web server and reload any configuration files that come from our code. But this depends on the server you choose, so it is filled out in the subsequent sections:

def deploy():
    test()
    upgrade_libs()
    with cd('/home/deploy/webapp'):
        run("git pull")
        run("pip install -r requirements.txt")

So, if we were to begin working on a new server, all we would need to do to set it up is run the following commands:

$ fab setup
$ fab deploy
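As a quick sanity check that the tasks are wired up, fabric can also list the commands it discovered in fabfile.py (the output shown here is indicative):

$ fab --list
Available commands:

    deploy
    setup
    test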
Running your web server with supervisor

Now that we have automated our updating process, we need a program on the server to make sure that our web server (and database, if you aren't using SQLite) is running. To do this, we will use a simple program called supervisor. All supervisor does is automatically run command-line programs in background processes and let you see the status of the running programs. Supervisor also monitors all of the processes it is running, and if a process dies, it tries to restart it.

To install supervisor, we need to add it to the setup command in our fabfile.py:

def setup():
    …
    sudo("apt-get install -y supervisor")

To tell supervisor what to do, we need to create a configuration file and then copy it to the /etc/supervisor/conf.d/ directory of our server during the deploy fabric command. Supervisor will load all of the files in this directory when it starts and attempt to run them. In a new file in the root of our project directory named supervisor.conf, add the following:

[program:webapp]
command=
directory=/home/deploy/webapp
user=deploy

[program:rabbitmq]
command=rabbitmq-server
user=deploy

[program:celery]
command=celery worker -A celery_runner
directory=/home/deploy/webapp
user=deploy

This is the bare minimum configuration needed to get a web server up and running, but supervisor has a lot more configuration options. To view all of the customizations, go to the supervisor documentation at http://supervisord.org/.

This configuration tells supervisor to run a command in the context of /home/deploy/webapp under the deploy user. The right-hand side of the command value is empty because it depends on which server you are running, and it will be filled in for each section.

Now we need to add a sudo call in the deploy command to copy this configuration file to the /etc/supervisor/conf.d/ directory:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp supervisor.conf /etc/supervisor/conf.d/webapp.conf")
    sudo('service supervisor restart')

A lot of projects just create the files on the server and forget about them, but keeping the configuration file in our git repository and copying it on every deployment gives several advantages. First, it means that it is easy to revert changes using git if something goes wrong. Second, it means that we don't have to log in to our server in order to make changes to the files.

Don't use the Flask development server in production. Not only does it fail to handle concurrent connections, but it also allows arbitrary Python code to be run on your server.

Gevent

The simplest option to get a web server up and running is to use a Python library called gevent to host your application. Gevent is a Python library that adds an alternative way of doing concurrent programming outside of the Python threading library, called coroutines. Gevent has an interface for running WSGI applications that is both simple and performant. A simple gevent server can easily handle hundreds of concurrent users, which is more than 99 percent of websites on the Internet will ever have. The downside to this option is that its simplicity means a lack of configuration options. There is no way, for example, to add rate limiting to the server or to add HTTPS traffic. This deployment option is purely for sites that you don't expect to receive a huge amount of traffic. Remember YAGNI (short for You Aren't Gonna Need It); only upgrade to a different web server if you really need to.

Coroutines are a bit outside of the scope of this book, but a good explanation can be found at https://en.wikipedia.org/wiki/Coroutine.
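As a small taste of the coroutine model, here is a tiny, self-contained sketch (not part of the book's application) showing how gevent interleaves greenlets cooperatively; gevent.sleep yields control instead of blocking the whole process:

import gevent

def worker(n):
    # gevent.sleep yields to the other greenlets instead of blocking
    gevent.sleep(1)
    print('worker', n, 'done')

# all three greenlets finish after roughly 1 second total, not 3
gevent.joinall([gevent.spawn(worker, i) for i in range(3)])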
To install gevent, we will use pip:

$ pip install gevent

In a new file in the root of the project directory named gserver.py, add the following:

from gevent.wsgi import WSGIServer
from webapp import create_app

app = create_app('webapp.config.ProdConfig')

server = WSGIServer(('', 80), app)
server.serve_forever()

To run the server with supervisor, just change the command value to the following:

[program:webapp]
command=python gserver.py
directory=/home/deploy/webapp
user=deploy

Now when you deploy, gevent will be automatically installed for you by running your requirements.txt on every deployment; that is, if you are properly pip freeze-ing after every new dependency is added.

Tornado

Tornado is another very simple way to deploy WSGI apps purely with Python. Tornado is a web server that is designed to handle thousands of simultaneous connections. If your application needs real-time data, Tornado also supports websockets for continuous, long-lived connections to the server.

Do not use Tornado in production on a Windows server. The Windows version of Tornado is not only much slower, but it is considered beta quality software.

To use Tornado with our application, we will use Tornado's WSGIContainer to wrap the application object and make it Tornado compatible. Then, Tornado will listen on port 80 for requests until the process is terminated. In a new file named tserver.py, add the following:

from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from webapp import create_app

app = WSGIContainer(create_app("webapp.config.ProdConfig"))
http_server = HTTPServer(app)
http_server.listen(80)
IOLoop.instance().start()

To run Tornado with supervisor, just change the command value to the following:

[program:webapp]
command=python tserver.py
directory=/home/deploy/webapp
user=deploy

Nginx and uWSGI

If you need more performance or customization, the most popular way to deploy a Python web application is to use the web server Nginx as a frontend for the WSGI server uWSGI, by using a reverse proxy. A reverse proxy is a program in a network that retrieves contents from a server for a client, as if the contents had been returned from the proxy itself.

Nginx and uWSGI are used in this way because we get the power of the Nginx frontend while having the customization of uWSGI. Nginx is a very powerful web server that became popular by providing the best combination of speed and customization. Nginx is consistently faster than other web servers, such as Apache httpd, and has native support for WSGI applications. It achieves this speed thanks to several good architecture decisions, as well as an early decision not to try to cover a large number of use cases the way Apache does. Having a smaller feature set makes it much easier to maintain and optimize the code. From a programmer's perspective, it is also much easier to configure Nginx, as there is no giant default configuration file (httpd.conf) that needs to be overridden with .htaccess files in each of your project directories. One downside is that Nginx has a much smaller community than Apache, so if you have an obscure problem, you are less likely to find answers online. Also, it's possible that a feature most programmers are used to in Apache isn't supported in Nginx.

uWSGI is a web server that supports several different types of server interfaces, including WSGI.
uWSGI handles serving the application content, as well as things such as load balancing traffic across several different processes and threads. To install uWSGI, we will use pip in the following way:

$ pip install uwsgi

In order to run our application, uWSGI needs a file with an accessible WSGI application. In a new file named wsgi.py in the top level of the project directory, add the following:

from webapp import create_app

app = create_app("webapp.config.ProdConfig")

To test uWSGI, we can run it from the command line with the following:

$ uwsgi --socket 127.0.0.1:8080 --wsgi-file wsgi.py --callable app --processes 4 --threads 2

If you are running this on your server, you should be able to access port 8080 and see your app (if you don't have a firewall, that is).

What this command does is load the app object from the wsgi.py file and make it accessible from localhost on port 8080. It also spawns four different processes with two threads each, which are automatically load balanced by a master process. This number of processes is overkill for the vast, vast majority of websites. To start off, use a single process with two threads and scale up from there.

Instead of adding all of the configuration options on the command line, we can create a text file to hold our configuration, which brings the same benefits that were listed in the section on supervisor. In a new file in the root of the project directory named uwsgi.ini, add the following:

[uwsgi]
socket = 127.0.0.1:8080
wsgi-file = wsgi.py
callable = app
processes = 4
threads = 2

uWSGI supports hundreds of configuration options, as well as several official and unofficial plugins. To leverage the full power of uWSGI, you can explore the documentation at http://uwsgi-docs.readthedocs.org/.

Let's run the server now from supervisor:

[program:webapp]
command=uwsgi uwsgi.ini
directory=/home/deploy/webapp
user=deploy

We also need to install Nginx during the setup function:

def setup():
    …
    sudo("apt-get install -y nginx")

Because we are installing Nginx from the OS's package manager, the OS will handle running Nginx for us.

At the time of writing, the Nginx version in the official Debian package manager is several years old. To install the most recent version, follow the instructions at http://wiki.nginx.org/Install.

Next, we need to create an Nginx configuration file and then copy it to the /etc/nginx/sites-available/ directory when we push the code. In a new file in the root of the project directory named nginx.conf, add the following:

server {
    listen 80;
    server_name your_domain_name;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8080;
    }

    location /static {
        alias /home/deploy/webapp/webapp/static;
    }
}

What this configuration file does is tell Nginx to listen for incoming requests on port 80 and forward all requests to the WSGI application that is listening on port 8080. Also, it makes an exception for any requests for static files and instead sends those requests directly to the file system. Bypassing uWSGI for static files gives a great performance boost, as Nginx is really good at serving static files quickly.

Finally, in the fabfile.py file:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp nginx.conf "
             "/etc/nginx/sites-available/[your_domain]")
        sudo("ln -sf /etc/nginx/sites-available/[your_domain] "
             "/etc/nginx/sites-enabled/[your_domain]")
        sudo("service nginx restart")
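Before restarting Nginx, it can be worth validating the new configuration; nginx ships with a built-in syntax check, which you could add to the deploy task right before the restart (a small optional sketch):

# aborts the deploy if the copied configuration contains a syntax error
sudo("nginx -t")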
First off, we need an Apache configuration file in a new file in the root of our project directory named apache.conf:

<VirtualHost *:80>
    <Location />
        ProxyPass uwsgi://127.0.0.1:8080/
    </Location>
</VirtualHost>

This file just tells Apache to pass all requests on port 80 to the uWSGI web server listening on port 8080. Note that, inside a Location block, ProxyPass takes only the target URL. This functionality requires an extra Apache plugin from uWSGI called mod_proxy_uwsgi. We can install this, as well as Apache itself, in the setup command:

def setup():
    …
    sudo("apt-get install -y apache2")
    sudo("apt-get install -y libapache2-mod-proxy-uwsgi")

Finally, in the deploy command, we need to copy our Apache configuration file into Apache's configuration directory:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp apache.conf "
             "/etc/apache2/sites-available/[your_domain]")
        sudo("ln -sf /etc/apache2/sites-available/[your_domain] "
             "/etc/apache2/sites-enabled/[your_domain]")
        sudo("service apache2 restart")

Summary

In this article, you learned that there are many different options for hosting your application, each having its own pros and cons. Deciding on one depends on the amount of time and money you are willing to spend, as well as the total number of users you expect.

Resources for Article: Further resources on this subject: Handling sessions and users[article] Snap – The Code Snippet Sharing Application[article] Man, Do I Like Templates! [article]
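One caveat worth adding here: on Debian-based systems, Apache proxy modules usually have to be enabled explicitly before the ProxyPass directive will work. Below is a hedged sketch of the extra fabric step; the a2enmod module names are assumptions for Apache 2.4 and are not from the original text:

from fabric.api import sudo

def setup():
    # ... existing setup steps ...
    sudo("apt-get install -y apache2 libapache2-mod-proxy-uwsgi")
    # Enable the generic proxy module and the uWSGI proxy module.
    # Module names are assumed; verify the loaded modules with `apache2ctl -M`.
    sudo("a2enmod proxy proxy_uwsgi")
    sudo("service apache2 restart")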
Oracle API Management Implementation 12c

Packt
29 Sep 2015
5 min read
This article by Luis Augusto Weir, the author of the book Oracle API Management 12c Implementation, gives you a gist of what is covered in the book. At present, digital transformation is essential to any business strategy, regardless of the industry an organization belongs to.

(For more resources related to this topic, see here.)

Companies that embark on a journey of digital transformation become able to create innovative and disruptive solutions in order to deliver a much richer, unified, and personalized user experience at a lower cost. These organizations are able to address customers dynamically and across a wide variety of channels, such as mobile applications, highly responsive websites, and social networks. Ultimately, companies that develop business models aligned with digital innovation acquire a considerable competitive advantage over those that do not.

The main trigger for this transformation is the ability to expose and make available business information and key technological capabilities, which are often buried in the enterprise information systems (EIS) of the organization, or in integration components that are only visible internally. In the digital economy, it is highly desirable to realize those assets in a standardized way through APIs, and to do so in a controlled, scalable, and secure environment. The lightweight nature and ease of finding and using these APIs greatly facilitate their adoption as the essential mechanism for exposing and consuming features in a multichannel environment.

API Management is the discipline that governs the development cycle of APIs, defining the tools and processes needed to build, publish, and operate them, including the management of the developer communities around them. Our recent book, Oracle API Management 12c Implementation (Luis Weir, Andrew Bell, Rolando Carrasco, Arturo Viveros), is a very comprehensive and detailed guide to implementing API Management in an organization. The book explains, in great detail, the relationship between this discipline and concepts such as SOA Governance and DevOps. The convergence of API Management with SOA and the governance of such services is addressed in particular to explain and shape the concept of Application Services Governance (ASG). The book also features case studies based on real scenarios, with multiple examples, to demonstrate the correct definition and implementation of a robust API Management strategy supported by the Oracle solution.

The book begins by describing a number of key concepts of API Management and contextualizing the complementary disciplines, such as SOA Governance, DevOps, and Enterprise Architecture (EA), in order to clear up any confusion about how these topics relate. Then, all these concepts are put into practice by defining the case study of an organization, referred to by name, that had previously succeeded in implementing a service-oriented architecture together with its governance, and that now has the need, and the opportunity, to extend its technology platform by implementing an API Management strategy.
Throughout the narrative of the case study, the following are also described:

- The business requirements justifying the adoption of API Management
- The potential impact of the proposed solution on the organization
- The steps required to design and implement the strategy
- The definition and implementation of the maturity assessment (API Readiness) and a gap analysis in terms of people, tools, and technology
- The product evaluation and selection exercise, explaining the choice of Oracle as the most appropriate solution
- The API Management implementation roadmap

In later chapters, the various steps needed to solve the case are addressed one by one, by implementing the reference architecture for API Management based on the components of the Oracle solution: API Catalog, API Manager, and API Gateway.

In short, the book will enable the reader to acquire advanced knowledge on the following topics:

- API Management: its definition, concepts, and objectives
- The differences and similarities between API Management and SOA Governance; where and how these two disciplines converge in the concept of Application Services Governance (ASG), and how to define a framework aimed at ASG
- The definition and implementation of the maturity assessment for API Management
- Criteria for the selection and evaluation of tools: why Oracle API Management Suite?
- Implementation of Oracle API Catalog (OAC), including OAC harvesting by bootstrapping, ANT scripts, and JDev; the OAC console; user creation and management; the metadata API; API discovery; and how to extend the functionality of OAC via its REX API
- The challenges of managing APIs in general
- Implementation of Oracle API Manager (OAPIM), including the creation, publishing, monitoring, subscription, and life cycle management of APIs with the OAPIM portal
- Common scenarios for the adoption and implementation of API Management, and how to solve them
- Implementation of Oracle API Gateway (OAG), including the creation of policies with different filters, OAuth authentication, integration with LDAP, SOAP/REST API conversions, and testing
- Defining the deployment topology for Oracle API Management Suite
- Installing and configuring OAC, OAPIM, and OAG 12c

Oracle API Management 12c Implementation is designed for the following audience: enterprise architects, solution architects, technical leads, and SOA and API professionals seeking to understand thoroughly, and successfully implement, the Oracle API Management solution.

Summary

In this article, we looked at Oracle API Management Implementation 12c in brief. More information on this is provided in the book.

Resources for Article: Further resources on this subject: Oracle 12c SQL and PL/SQL New Features[article] Securing Data at Rest in Oracle 11g[article] Getting Started with Oracle Primavera P6 [article]
Building JSF Forms

Packt
25 Sep 2015
16 min read
In this article by Peter Pilgrim, author of the book Java EE 7 Web Application Development, we will learn about JavaServer Faces (JSF) as an example of a component-oriented web application framework, as opposed to Java EE 8 MVC, WebWork, or Apache Struts, which are known as request-oriented web application frameworks. A request-oriented framework is one where the information flow is the web request and response. Such frameworks provide ability and structure above the javax.servlet.http.HttpServletRequest and javax.servlet.http.HttpServletResponse objects, but there are no special user interface components; the developer, with additional help, must program the mapping of parameters and attributes to the data entity models, and therefore has to write the parsing logic.

It is important to understand that component-oriented frameworks like JSF have their detractors. A quick inspection of the code resembles components in a standalone client such as Java Swing or even JavaFX, but behind the scenes lurk the very same HttpServletRequest and HttpServletResponse. Hence, a competent JSF developer still has to be aware of the Servlet API and the underlying servlet scopes. This was a valid criticism in 2004; in the digital marketing age, a developer has to know more than just the Servlet API, and we can presume they would be open to learning other technologies such as JavaScript.

(For more resources related to this topic, see here.)

Create, Retrieve, Update, and Delete

In this article, we are going to solve an everyday problem with JSF. The Java EE framework and enterprise applications are about solving data entry issues. Unlike social networking software, which is built with a different architecture and non-functional requirements (scalability, performance, statelessness, and eventual consistency), Java EE applications are designed for stateful workflows.

Following is the screenshot of the page view for creating contact details:

The preceding screenshot is of the JSF application jsf-crud, which shows the contact details form. Typically, an enterprise application captures information from a web user, stores it in a data store, and allows that information to be retrieved and edited. There is usually an option to delete the user's information. In software engineering, we call this idiom Create, Retrieve, Update, and Delete (CRUD). What constitutes actual deletion of user and customer data is ultimately a matter that affects business owners, who are under pressure to conform to the local and international laws that define privacy and data protection.

Basic create entity JSF form

Let's create a basic form that captures the user's name, e-mail address, and date of birth. We shall write this code using HTML5 and take advantage of Bootstrap for modern-day CSS and JavaScript. See http://getbootstrap.com/getting-started/. Here is the JSF Facelet view createContact.xhtml:

<!DOCTYPE html>
<html>
<h:head>
  <meta charset="utf-8"/>
  <title>Demonstration Application </title>
  <link href="#{request.contextPath}/resources/styles/bootstrap.css" rel="stylesheet"/>
  <link href="#{request.contextPath}/resources/styles/main.css" rel="stylesheet"/>
</h:head>
<h:body>
  <div class="main-container">
    <div class="header-content">
      <div class="navbar navbar-inverse" role="navigation">
      </div>
    </div><!-- headerContent -->
    <div class="mainContent">
      <h1> Enter New Contact Details </h1>
      <h:form id="createContactDetail" styleClass="form-horizontal" p:role="form">
      ...
      </h:form>
    </div><!-- main-content -->
    <div class="footer-content">
    </div> <!-- footer-content -->
  </div> <!-- main-container -->
</h:body>
<script src="#{request.contextPath}/resources/javascripts/jquery-1.11.0.js"></script>
<script src="#{request.contextPath}/resources/javascripts/bootstrap.js"></script>
<script src="#{request.contextPath}/resources/app/main.js"></script>
</html>

You already recognize the <h:head> and <h:body> JSF custom tags. Because the file is a Facelet view (*.xhtml), the document must be well formed like an XML document. You should have noticed that certain HTML5 element tags like <meta> are closed and completed: the XHTML document must be well formed in JSF.

Always close XHTML elements. The typical e-commerce application has web pages with standard HTML with <meta>, <link>, and <br> tags. In XHTML and Facelet views these tags, which web designers normally leave open and hanging, must be closed. Extensible Markup Language (XML) is less forgiving, and XHTML, which is derived from XML, must be well formed.

The new tag <h:form> is a JSF custom tag that corresponds to the HTML form element. A JSF form element shares many of the attributes of its HTML counterpart. You can see the id attribute is just the same. However, instead of the class attribute, in JSF we have the styleClass attribute, because in Java the method java.lang.Object.getClass() is reserved and therefore cannot be overridden.

What is the JSF request context path expression? The curious markup around the links to the style sheets, JavaScript, and other resources is the expression language #{request.contextPath}. The expression reference ensures that the web application path is added to the URL of JSF resources. Bootstrap CSS itself relies on font glyph icons in a particular folder. JSF images, JavaScript module files, and CSS files should be placed in the resources folder of the web root.

The p:role attribute is an example of a JSF pass-through attribute, which informs the JSF render kit to send the key and value through to the rendered output. Pass-through attributes are a key addition in JSF 2.2, which is part of Java EE 7. They allow JSF to play well with recent HTML5 frameworks such as Bootstrap and Foundation (http://foundation.zurb.com/).

Here is an extract of the rendered HTML source output:

<h1> Enter New Contact Details </h1>
<form id="createContactDetail" name="createContactDetail" method="post" action="/jsf-crud-1.0-SNAPSHOT/createContactDetail.xhtml" class="form-horizontal" enctype="application/x-www-form-urlencoded" role="form">
<input type="hidden" name="createContactDetail" value="createContactDetail" />

JSF was implemented before Bootstrap was created at Twitter. How could the JSF designers retrofit the framework to be compatible with recent HTML5, CSS3, and JavaScript innovations? This is where pass-through attributes help. By declaring the pass-through XML namespace (the URI http://xmlns.jcp.org/jsf/passthrough) in the XHTML, any attribute using that prefix is simply passed through to the output. Pass-through attributes allow JSF to easily handle HTML5 features such as placeholders in text input fields, as we will exploit from now onwards.

If you are brand new to web development, you might be scared of markup that appears overcomplicated. There are lots and lots of DIV HTML elements, which are often created by page designers and interface developers. This is a historical effect and just the way HTML and the Web have evolved over time. The practices of 2002 have no bearing on 2016.
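For completeness, the namespace declarations that the view relies on sit on the root element of the Facelet, which is not fully shown in this excerpt. The following is a sketch of a typical JSF 2.2 root element; the h, f, and p prefixes match the tags used in this article, and without these declarations the prefixed tags and attributes would not resolve:

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://xmlns.jcp.org/jsf/html"
      xmlns:f="http://xmlns.jcp.org/jsf/core"
      xmlns:p="http://xmlns.jcp.org/jsf/passthrough">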
Let's take a deeper look at the <h:form> and fill in the missing details. Here is the extracted code:

<h:form id="createContactDetail" styleClass="form-horizontal" p:role="form">
  <div class="form-group">
    <h:outputLabel for="title" class="col-sm-3 control-label"> Title</h:outputLabel>
    <div class="col-sm-9">
      <h:selectOneMenu class="form-control" id="title" value="#{contactDetailController.contactDetail.title}">
        <f:selectItem itemLabel="--" itemValue="" />
        <f:selectItem itemValue="Mr" />
        <f:selectItem itemValue="Mrs" />
        <f:selectItem itemValue="Miss" />
        <f:selectItem itemValue="Ms" />
        <f:selectItem itemValue="Dr" />
      </h:selectOneMenu>
    </div>
  </div>
  <div class="form-group">
    <h:outputLabel for="firstName" class="col-sm-3 control-label"> First name</h:outputLabel>
    <div class="col-sm-9">
      <h:inputText class="form-control" value="#{contactDetailController.contactDetail.firstName}" id="firstName" placeholder="First name"/>
    </div>
  </div>
  ... Rinse and repeat for middleName and lastName ...
  <div class="form-group">
    <h:outputLabel for="email" class="col-sm-3 control-label"> Email address </h:outputLabel>
    <div class="col-sm-9">
      <h:inputText type="email" class="form-control" id="email" value="#{contactDetailController.contactDetail.email}" placeholder="Enter email"/>
    </div>
  </div>
  <div class="form-group">
    <h:outputLabel class="col-sm-3 control-label"> Newsletter </h:outputLabel>
    <div class="col-sm-9 checkbox">
      <h:selectBooleanCheckbox id="allowEmails" value="#{contactDetailController.contactDetail.allowEmails}">
        Send me email promotions
      </h:selectBooleanCheckbox>
    </div>
  </div>
  <h:commandButton styleClass="btn btn-primary" action="#{contactDetailController.createContact()}" value="Submit" />
</h:form>

This form is built using the Bootstrap CSS styles, but we shall ignore the extraneous details and concentrate purely on the JSF custom tags. The <h:selectOneMenu> is a JSF custom tag that corresponds to the HTML Form Select element. The <f:selectItem> tag corresponds to the HTML Form Select Option element. The <h:inputText> tag corresponds to the HTML Form Input element. The <h:selectBooleanCheckbox> tag is a special custom tag that represents an HTML Select with only one checkbox element. Finally, <h:commandButton> represents an HTML Form Submit element.

JSF HTML Output Label

The <h:outputLabel> tag renders the HTML Form Label element:

<h:outputLabel for="firstName" class="col-sm-3 control-label"> First name</h:outputLabel>

Developers should prefer this tag in conjunction with the other associated JSF form input tags, because the special for attribute targets the correct sugared identifier for the element. Here is the rendered output:

<label for="createContactDetail:firstName" class="col-sm-3 control-label"> First name</label>

We could have written the tag using the value attribute, so that it looks like this:

<h:outputLabel for="firstName" class="col-sm-3 control-label" value="firstName" />

It is also possible to take advantage of internationalization at this point, so, just for illustration, we could rewrite the page content as:

<h:outputLabel for="firstName" class="col-sm-3 control-label" value="${myapplication.contactForm.firstName}" />

JSF HTML Input Text

The <h:inputText> tag allows data, such as text, to be entered in the form:

<h:inputText class="form-control" value="#{contactDetailController.contactDetail.firstName}" id="firstName" placeholder="First name"/>

The value attribute holds a JSF expression language string; the clue is that the evaluation string starts with a hash character.
The expression references a scoped backing bean, ContactDetailController.java, with the name contactDetailController. In JSF 2.2, there are now convenience attributes to support HTML5, so the standard id, class, and placeholder attributes work as expected. The rendered output is like this:

<input id="createContactDetail:firstName" type="text" name="createContactDetail:firstName" class="form-control" />

Notice that the sugared identifier createContactDetail:firstName matches the output of the <h:outputLabel> tag.

JSF HTML Select One Menu

The <h:selectOneMenu> tag generates a single-select drop-down list. In fact, it is part of a family of selection-type custom tags; see the <h:selectBooleanCheckbox> in the next section. In the code, we have the following:

<h:selectOneMenu class="form-control" id="title" value="#{contactDetailController.contactDetail.title}">
  <f:selectItem itemLabel="--" itemValue="" />
  <f:selectItem itemValue="Mr" />
  <f:selectItem itemValue="Mrs" />
  <f:selectItem itemValue="Miss" />
  <f:selectItem itemValue="Ms" />
  <f:selectItem itemValue="Dr" />
</h:selectOneMenu>

The <h:selectOneMenu> tag corresponds to an HTML Form Select tag. The value attribute is again a JSF expression language string. In JSF, we can use another custom tag, <f:selectItem>, to define an option item in place. The <f:selectItem> tag accepts an itemLabel and an itemValue attribute. If you set the itemValue and do not specify the itemLabel, then the value becomes the label. So, for the first item, the option label is set to "--", but the value submitted to the form is a blank string, because we want to hint to the user that there is a value that ought to be chosen. The rendered HTML output is instructive:

<select id="createContactDetail:title" size="1" name="createContactDetail:title" class="form-control">
  <option value="" selected="selected">--</option>
  <option value="Mr">Mr</option>
  <option value="Mrs">Mrs</option>
  <option value="Miss">Miss</option>
  <option value="Ms">Ms</option>
  <option value="Dr">Dr</option>
</select>

JSF HTML Select Boolean Checkbox

The <h:selectBooleanCheckbox> custom tag is a special case of selection where there is only one item that the user can choose. Typically, in a web application, you will find such an element in the final terms and conditions form, or in the marketing e-mail section of an e-commerce application. In the targeted managed bean, the value must be a Boolean type:

<h:selectBooleanCheckbox id="allowEmails" value="#{contactDetailController.contactDetail.allowEmails}">
  Send me email promotions
</h:selectBooleanCheckbox>

The rendered output for this custom tag looks like:

<input id="createContactDetail:allowEmails" type="checkbox" name="createContactDetail:allowEmails" />

JSF HTML Command Button

The <h:commandButton> custom tag corresponds to the HTML Form Submit element. It accepts an action attribute in JSF that refers to a method in a backing bean. The syntax is again in the JSF expression language:

<h:commandButton styleClass="btn btn-primary" action="#{contactDetailController.createContact()}" value="Submit" />

When the user presses this submit button, the JSF framework will find the named managed bean corresponding to contactDetailController and then invoke the no-arguments method createContact(). In the expression language, it is important to note that the parentheses are not required, because the interpreter (or Facelets) automatically introspects whether the meaning is an action (MethodExpression) or a value definition (ValueExpression).
Be aware that most examples in the real world do not add the parentheses, as a shorthand. The value attribute denotes the text for the form submit button. We could have written the tag in an alternative way and achieved the same result:

<h:commandButton styleClass="btn btn-primary" action="#{contactDetailController.createContact()}" >
  Submit
</h:commandButton>

Here, the value is taken from the body content of the custom tag. The rendered output of the tag looks something like this:

<input type="submit" name="createContactDetail:j_idt45" value="Submit" class="btn btn-primary" />
<input type="hidden" name="javax.faces.ViewState" id="j_id1:javax.faces.ViewState:0" value="-3512045671223885154:3950316419280637340" autocomplete="off" />

The above code illustrates the output from the JSF renderer in the Mojarra implementation (https://javaserverfaces.java.net/), which is the reference implementation. You can clearly see that the renderer writes an HTML submit and a hidden element in the output. The hidden element captures information about the view state that is posted back to the JSF framework (a postback), which allows it to restore the view.

Finally, here is a screenshot of this contact details form:

The contact details input JSF form with additional DOB fields

Now let's examine the backing bean, also known as the controller.

Backing bean controller

For our simple POJO form, we need a backing bean or, in modern-day JSF developer parlance, a managed bean controller. This is the entire code for the ContactDetailController:

package uk.co.xenonique.digital;

import javax.ejb.EJB;
import javax.inject.Named;
import javax.faces.view.ViewScoped;
import java.util.List;

@Named("contactDetailController")
@ViewScoped
public class ContactDetailController {
    @EJB ContactDetailService contactDetailService;

    private ContactDetail contactDetail = new ContactDetail();

    public ContactDetail getContactDetail() {
        return contactDetail;
    }

    public void setContactDetail(ContactDetail contactDetail) {
        this.contactDetail = contactDetail;
    }

    public String createContact() {
        contactDetailService.add(contactDetail);
        contactDetail = new ContactDetail();
        return "index.xhtml";
    }

    public List<ContactDetail> retrieveAllContacts() {
        return contactDetailService.findAll();
    }
}

For this managed bean, let's introduce a couple of new annotations. The first annotation is called @javax.inject.Named, and it declares this POJO to be a CDI managed bean, which simultaneously also declares a JSF controller. Here, we explicitly declare the name of the managed bean as contactDetailController. This is actually the default name of the managed bean, so we could have left it out. We can also write an alternative name like this:

@Named("wizard")
@ViewScoped
public class ContactDetailController { /* .. . */ }

Then JSF would give us the bean with the name wizard. The name of the managed bean helps in the expression language syntax. When we are talking JSF, we can interchange the term backing bean with managed bean freely; many professional Java web developers understand that both terms mean the same thing!

The @javax.faces.view.ViewScoped annotation denotes that the controller has a view-scoped life cycle. The view scope is designed for the situation where the application data is preserved just for one page until the user navigates to another page. As soon as the user navigates to another page, JSF destroys the bean: JSF removes the reference to the view-scoped bean from its internal data structures, and the object is left for the garbage collector.
The @ViewScoped annotation is new in Java EE 7 and JSF 2.2, and fixes a mismatch between the Faces and CDI specifications. This is because CDI and JSF were developed independently. By looking at the Javadoc, you will find an older annotation, @javax.faces.bean.ViewScoped, which comes from JSF 2.0 and was not part of the CDI specification. For now, if you choose to write controllers annotated with that older @ViewScoped, you should probably use @ManagedBean with it. We will explain this further on.

The ContactDetailController also has a dependency on an EJB service endpoint, ContactDetailService, and, most importantly, it has a bean property, contactDetail. Note the getter and setter methods; we also ensure that the property is instantiated at construction time.

We now turn our attention to the methods:

public String createContact() {
    contactDetailService.add(contactDetail);
    contactDetail = new ContactDetail();
    return "index.xhtml";
}

public List<ContactDetail> retrieveAllContacts() {
    return contactDetailService.findAll();
}

The createContact() method uses the EJB to create a new contact detail. It returns a String, which is the next Facelet view, index.xhtml. This method was referenced by the <h:commandButton>. The retrieveAllContacts() method invokes the data service to fetch the list collection of entities. This method will be referenced by another page.

Summary

In this article, we learned about JSF forms; we explored HTML and core JSF custom tags in building the answer to one of the most sought-after questions on the Internet. It is surprising that this simple idea is considered difficult to program. We built a digital JSF form that initially creates a contact detail, and we saw the Facelet view, the managed bean controller, the stateful session EJB, and the entity.

Resources for Article: Further resources on this subject: WebSockets in Wildfly[article] Prerequisites[article] Contexts and Dependency Injection in NetBeans [article]
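The ContactDetailService EJB that the controller injects is not listed in this excerpt. As a rough sketch only, a minimal stateful session bean consistent with the calls made above (add and findAll) might look like the following; the persistence unit name and the JPQL query are assumptions for illustration, not the book's actual code:

package uk.co.xenonique.digital;

import javax.ejb.Stateful;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import java.util.List;

@Stateful
public class ContactDetailService {
    // Persistence unit name is an assumption for this sketch
    @PersistenceContext(unitName = "applicationDB")
    private EntityManager entityManager;

    public void add(ContactDetail contactDetail) {
        // Persist the new entity into the data store
        entityManager.persist(contactDetail);
    }

    public List<ContactDetail> findAll() {
        // JPQL query; the entity name ContactDetail is assumed
        return entityManager.createQuery(
            "select c from ContactDetail c", ContactDetail.class)
            .getResultList();
    }
}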
Using Node.js and Hadoop to store distributed data

Harri Siirak
25 Sep 2015
5 min read
Hadoop is a well-known open-source software framework for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware. It's designed with the fundamental assumption that hardware failures can (and will) happen and thus should be automatically handled in software by the framework. Under the hood, it uses HDFS (Hadoop Distributed File System) for data storage. HDFS can store large files across multiple machines, and it achieves reliability by replicating the data across multiple hosts (the default replication factor is 3 and can be configured to be higher when needed). HDFS is designed for mostly immutable files, though, and may not be suitable for systems requiring concurrent write operations. Its target usage is not restricted to MapReduce jobs; it can also be used for cost-effective and reliable data storage.

In the following examples, I am going to give you an overview of how to establish connections to HDFS storage (the namenode) and how to perform basic operations on the data. As you can probably guess, I'm using Node.js to build these examples. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. So it's really ideal for what I want to show you next.

Two popular libraries for accessing HDFS in Node.js are node-hdfs and webhdfs. The first one uses Hadoop's native libhdfs library and protocol to communicate with the Hadoop namenode, although it seems to be unmaintained and doesn't support the Stream API. The other one uses WebHDFS, which defines a public HTTP REST API, built directly into Hadoop's core (both namenodes and datanodes), which permits clients to access Hadoop from multiple languages without installing Hadoop, and supports all HDFS user operations, including reading files, writing to files, making directories, changing permissions, and renaming. More details about the WebHDFS REST API, its implementation details, and its response codes/types can be found here.

At this point, I'm assuming that you have a Hadoop cluster up and running. There are plenty of good tutorials out there showing how to set up and run a Hadoop cluster (single and multi-node).

Installing and using the webhdfs library

webhdfs implements most of the REST API calls, although it does not yet support Hadoop delegation tokens. It's also Stream API compatible, which makes its usage pretty straightforward and easy. Detailed examples and use cases for the other supported calls can be found here.

Install webhdfs from npm:

npm install webhdfs

Create a new script named webhdfs-client.js:

// Include webhdfs module
var WebHDFS = require('webhdfs');

// Create a new WebHDFS client
var hdfs = WebHDFS.createClient({
  user: 'hduser', // Hadoop user
  host: 'localhost', // Namenode host
  port: 50070 // Namenode port
});

module.exports = hdfs;

Here we initialized a new webhdfs client with options including the namenode's host and port that we are connecting to. Let's proceed with a more detailed example.

Storing file data in HDFS

Create a new script named webhdfs-write-test.js and add the code below.
// Include created client
var hdfs = require('./webhdfs-client');

// Include fs module for local file system operations
var fs = require('fs');

// Initialize readable stream from local file
// Change this to a real path in your file system
var localFileStream = fs.createReadStream('/path/to/local/file');

// Initialize writable stream to HDFS target
var remoteFileStream = hdfs.createWriteStream('/path/to/remote/file');

// Pipe data to HDFS
localFileStream.pipe(remoteFileStream);

// Handle errors
remoteFileStream.on('error', function onError (err) {
  // Do something with the error
});

// Handle finish event
remoteFileStream.on('finish', function onFinish () {
  // Upload is done
});

Basically, what we are doing here is initializing a readable file stream from the local filesystem and piping its contents seamlessly into a remote HDFS target. The webhdfs stream exposes error and finish events, as shown.

Reading file data from HDFS

Let's retrieve the data that we just stored in HDFS storage. Create a new script named webhdfs-read-test.js and add the code below.

var hdfs = require('./webhdfs-client');
var fs = require('fs');

// Initialize readable stream from HDFS source
var remoteFileStream = hdfs.createReadStream('/path/to/remote/file');

// Variable for storing data; start with an empty buffer
var data = new Buffer(0);

remoteFileStream.on('error', function onError (err) {
  // Do something with the error
});

remoteFileStream.on('data', function onChunk (chunk) {
  // Concat received data chunk
  data = Buffer.concat([ data, chunk ]);
});

remoteFileStream.on('finish', function onFinish () {
  // Download is done
  // Print received data
  console.log(data.toString());
});

What's next?

Now that we have data in the Hadoop cluster, we can start processing it by spawning some MapReduce jobs, and when it's processed we can retrieve the output data. In the second part of this article, I'm going to give you an overview of how Node.js can be used as part of MapReduce jobs.

About the author

Harri is a senior Node.js/Javascript developer among a talented team of full-stack developers who specialize in building scalable and secure Node.js based solutions. He can be found on Github at harrisiirak.
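Addendum: beyond streaming reads and writes, the same client can be used for directory and metadata operations. The following sketch assumes the mkdir and readdir methods and the FileStatus fields (pathSuffix, type) as documented for the webhdfs module and the WebHDFS REST API; consult the module's README if the signatures differ in your version:

var hdfs = require('./webhdfs-client');

// Create a remote directory
hdfs.mkdir('/path/to/remote/dir', function onMkdir (err) {
  if (err) {
    return console.error(err);
  }
  // List the directory contents we just created
  hdfs.readdir('/path/to/remote/dir', function onReaddir (err, files) {
    if (err) {
      return console.error(err);
    }
    files.forEach(function (file) {
      // Each entry is a WebHDFS FileStatus object
      console.log(file.pathSuffix, file.type);
    });
  });
});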
Snap – The Code Snippet Sharing Application

Packt
24 Sep 2015
8 min read
In this article by Joel Perras, author of the book Flask Blueprints, we will build our first fully functional, database-backed application. This application, codenamed Snap, will allow users to create an account with a username and password. Users will be allowed to add, update, and delete so-called semiprivate snaps of text (with a focus on lines of code) that can be shared with others.

For this, you should be familiar with at least one of the following relational database systems: PostgreSQL, MySQL, or SQLite. Additionally, some knowledge of the SQLAlchemy Python library, which acts as an abstraction layer and object-relational mapper for these (and several other) databases, will be an asset. If you are not well versed in the usage of SQLAlchemy, fear not. We will have a gentle introduction to the library that will bring new developers up to speed and serve as a refresher for the more experienced folks.

The SQLite database will be our relational database of choice due to its very simple installation and operation. The other database systems that we listed are all client/server-based, with a multitude of configuration options that may need adjustment depending on the system they are installed in, while SQLite's default mode of operation is self-contained, serverless, and zero-configuration. Any major relational database supported by SQLAlchemy as a first-class citizen will do.

(For more resources related to this topic, see here.)

Diving In

To make sure things start correctly, let's create a folder where this project will exist and a virtual environment to encapsulate any dependencies that we will require:

$ mkdir -p ~/src/snap && cd ~/src/snap
$ mkvirtualenv snap -i flask

This will create a folder called snap at the given path and take us to this newly created folder. It will then create the snap virtual environment and install Flask in this environment.

Remember that the mkvirtualenv tool will create the virtual environment, which will be the default location for packages installed with pip, but the mkvirtualenv command does not create the project folder for you. This is why we run a command to create the project folder first and then create the virtual environment. Virtual environments, by virtue of the $PATH manipulation performed once they are activated, are completely independent of where your project files exist in your file system.

We will then create our basic blueprint-based project layout with an empty users blueprint:

application
├── __init__.py
├── run.py
└── users
    ├── __init__.py
    ├── models.py
    └── views.py

Flask-SQLAlchemy

Once this has been established, we need to install the next important set of dependencies: SQLAlchemy, and the Flask extension that makes interacting with this library a bit more Flask-like, Flask-SQLAlchemy:

$ pip install flask-sqlalchemy

This will install the Flask extension to SQLAlchemy, along with the base distribution of the latter and several other necessary dependencies, in case they are not already present.

Now, if we were using a relational database system other than SQLite, this is the point where we would create the database entity in, say, PostgreSQL, along with the proper users and permissions, so that our application can create tables and modify the contents of these tables. SQLite, however, does not require any of that. Instead, it assumes that any user that has access to the filesystem location where the database is stored should also have permission to modify the contents of this database.
For the sake of completeness, however, here is how one would create an empty database in the current folder of your filesystem:

$ sqlite3 snap.db # hit control-D to escape out of the interactive SQL console if necessary.

As mentioned previously, we will be using SQLite as the database for our example applications, and the directions given will assume that SQLite is being used; the exact name of the binary may differ on your system. You can substitute the equivalent commands to create and administer the database of your choice if anything other than SQLite is being used.

Now, we can begin the basic configuration of the Flask-SQLAlchemy extension.

Configuring Flask-SQLAlchemy

First, we must register the Flask-SQLAlchemy extension with the application object in application/__init__.py:

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///../snap.db'
db = SQLAlchemy(app)

The value of app.config['SQLALCHEMY_DATABASE_URI'] is the escaped relative path to the snap.db SQLite database that we created previously. Once this simple configuration is in place, we will be able to create the SQLite database automatically via the db.create_all() method, which can be invoked in an interactive Python shell:

$ python
>>> from application import db
>>> db.create_all()

This should be an idempotent operation, which means that nothing would change even if the database already exists. If the local database file did not exist, however, it would be created. This also applies to adding new data models: running db.create_all() will add their definitions to the database, ensuring that the relevant tables have been created and are accessible. It does not, however, take into account the modification of an existing model/table definition that already exists in the database. For this, you will need to use the relevant tools (for example, the sqlite CLI) to modify the corresponding table definitions to match those that have been updated in your models, or use a more general schema tracking and updating tool such as Alembic to do the majority of the heavy lifting for you.

SQLAlchemy basics

SQLAlchemy is, first and foremost, a toolkit for interacting with relational databases in Python. While it provides an incredible number of features, including SQL connection handling and pooling for various database engines, the ability to handle custom datatypes, and a comprehensive SQL expression API, the one feature that most developers are familiar with is the Object Relational Mapper. This mapper allows a developer to connect a Python object definition to a SQL table in the database of their choice, thus allowing them the flexibility to control the domain models in their own application while requiring only minimal coupling to the database product and the engine-specific SQLisms that each of them exposes.

While debating the usefulness (or lack thereof) of an object relational mapper is outside the scope of this article, for those who are unfamiliar with SQLAlchemy we will provide a list of benefits that using this tool brings to the table, as follows:

- Your domain models are written to interface with one of the most well-respected, tested, and deployed Python packages ever created: SQLAlchemy.
- Onboarding new developers to a project becomes an order of magnitude easier due to the extensive documentation, tutorials, books, and articles that have been written about using SQLAlchemy.
- Import-time validation of queries written using the SQLAlchemy expression language, instead of having to execute each query string against the database to determine whether there is a syntax error present. The expression language is in Python and can thus be validated with your usual set of tools and IDE.
- Thanks to the implementation of design patterns such as the Unit of Work, the Identity Map, and various lazy loading features, the developer can often be saved from performing more database/network roundtrips than necessary. Considering that the majority of a request/response cycle in a typical web application can easily be attributed to network latency of one form or another, minimizing the number of database queries in a typical response is a net performance win on many fronts.
- While many successful, performant applications can be built entirely on the ORM, SQLAlchemy does not force it upon you. If, for some reason, it is preferable to write raw SQL query strings or to use the SQLAlchemy expression language directly, then you can do that and still benefit from the connection pooling and the Python DBAPI abstraction functionality that is the core of SQLAlchemy itself.

Now that we've given you several reasons why you should be using this database query and domain data abstraction layer, let's look at how we would go about defining a basic data model; a brief sketch follows at the end of this article.

Summary

After having gone through this article, we have seen several facets of how Flask may be augmented with the use of extensions. While Flask itself is relatively spartan, the ecology of available extensions makes it such that building a fully fledged user-authenticated application may be done quickly and relatively painlessly.

Resources for Article: Further resources on this subject: Creating Controllers with Blueprints[article] Deployment and Post Deployment [article] Man, Do I Like Templates! [article]
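As the promised sketch of a basic data model: the following is a minimal illustration of what a Flask-SQLAlchemy model for this application could look like. The field choices and methods are assumptions for illustration, not the chapter's actual User model:

from application import db

class User(db.Model):
    # Flask-SQLAlchemy derives the table name from the class name
    id = db.Column(db.Integer(), primary_key=True)
    username = db.Column(db.String(255), unique=True)
    password = db.Column(db.String(255))

    def __init__(self, username):
        self.username = username

    def __repr__(self):
        return '<User {}>'.format(self.username)

After defining such a model, running db.create_all() again would create the corresponding table in snap.db.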
Enhancing Your Blog with Advanced Features

Packt
22 Sep 2015
8 min read
In this article by Antonio Melé, the author of the book Django by Example, we see how to use Django forms and ModelForms. You will let your users share posts by e-mail, and you will be able to extend your blog application with a comment system. You will also learn how to integrate third-party applications into your project and build complex QuerySets to get useful information from your models. In this article, you will learn how to add tagging functionality using a third-party application.

(For more resources related to this topic, see here.)

Adding tagging functionality

After implementing our comment system, we are going to create a system for adding tags to our posts. We are going to do this by integrating a third-party Django tagging application into our project. django-taggit is a reusable application that primarily offers you a Tag model and a manager for easily adding tags to any model. You can take a look at its source code at https://github.com/alex/django-taggit.

First, you need to install django-taggit via pip by running the pip install django-taggit command. Then, open the settings.py file of the project and add taggit to your INSTALLED_APPS setting, as follows:

INSTALLED_APPS = (
    # ...
    'mysite.blog',
    'taggit',
)

Then, open the models.py file of your blog application and add the TaggableManager manager, provided by django-taggit, to the Post model, as follows:

from taggit.managers import TaggableManager

# ...

class Post(models.Model):
    # ...
    tags = TaggableManager()

You just added tags for this model. The tags manager will allow you to add, retrieve, and remove tags from Post objects. Run the python manage.py makemigrations blog command to create a migration for your model changes. You will get the following output:

Migrations for 'blog':
  0003_post_tags.py:
    Add field tags to post

Now, run the python manage.py migrate command to create the required database tables for the django-taggit models and synchronize your model changes. You will see an output indicating that the migrations have been applied:

Operations to perform:
  Apply all migrations: taggit, admin, blog, contenttypes, sessions, auth
Running migrations:
  Applying taggit.0001_initial... OK
  Applying blog.0003_post_tags... OK

Your database is now ready to use the django-taggit models. Open a shell with the python manage.py shell command to learn how to use the tags manager. First, we retrieve one of our posts (the one with ID 1):

>>> from mysite.blog.models import Post
>>> post = Post.objects.get(id=1)

Then, add some tags to it and retrieve its tags back to check that they were successfully added:

>>> post.tags.add('music', 'jazz', 'django')
>>> post.tags.all()
[<Tag: jazz>, <Tag: django>, <Tag: music>]

Finally, remove a tag and check the list of tags again:

>>> post.tags.remove('django')
>>> post.tags.all()
[<Tag: jazz>, <Tag: music>]

This was easy, right? Run the python manage.py runserver command to start the development server again and open http://127.0.0.1:8000/admin/taggit/tag/ in your browser. You will see the admin page with the list of Tag objects of the taggit application. Navigate to http://127.0.0.1:8000/admin/blog/post/ and click on a post to edit it. You will see that posts now include a new Tags field, like the following, where you can easily edit tags.

Now, we are going to edit our blog posts to display the tags.
Open the blog/post/list.html template and add the following HTML code below the post title:

<p class="tags">Tags: {{ post.tags.all|join:", " }}</p>

The join template filter works like the Python string join method, concatenating elements with the given string. Open http://127.0.0.1:8000/blog/ in your browser. You will see the list of tags under each post title.

Now, we are going to edit our post_list view to let users see all posts tagged with a given tag. Open the views.py file of your blog application, import the Tag model from django-taggit, and change the post_list view to optionally filter posts by tag, as follows:

from taggit.models import Tag

def post_list(request, tag_slug=None):
    post_list = Post.published.all()
    if tag_slug:
        tag = get_object_or_404(Tag, slug=tag_slug)
        post_list = post_list.filter(tags__in=[tag])
    # ...

The view now takes an optional tag_slug parameter that has a None default value. This parameter will come in the URL. Inside the view, we build the initial QuerySet, retrieving all the published posts. If there is a given tag slug, we get the Tag object with the given slug using the get_object_or_404 shortcut. Then, we filter the list of posts to the ones whose tags are contained in a given list composed only of the tag we are interested in. Remember that QuerySets are lazy: the QuerySet for retrieving posts will only be evaluated when we loop over the post list to render the template.

Now, change the render function at the bottom of the view to pass all the local variables to the template using locals(). The view will finally look as follows:

def post_list(request, tag_slug=None):
    post_list = Post.published.all()
    if tag_slug:
        tag = get_object_or_404(Tag, slug=tag_slug)
        post_list = post_list.filter(tags__in=[tag])
    paginator = Paginator(post_list, 3) # 3 posts in each page
    page = request.GET.get('page')
    try:
        posts = paginator.page(page)
    except PageNotAnInteger:
        # If page is not an integer deliver the first page
        posts = paginator.page(1)
    except EmptyPage:
        # If page is out of range deliver last page of results
        posts = paginator.page(paginator.num_pages)
    return render(request, 'blog/post/list.html', locals())

Now, open the urls.py file of your blog application and make sure you are using the following URL pattern for the post_list view:

url(r'^$', post_list, name='post_list'),

Now, add another URL pattern, as follows, for listing posts by tag:

url(r'^tag/(?P<tag_slug>[-\w]+)/$', post_list, name='post_list_by_tag'),

As you can see, both patterns point to the same view, but we name them differently. The first pattern will call the post_list view without any optional parameters, whereas the second pattern will call the view with the tag_slug parameter.

Let's change our post list template to display posts tagged with a specific tag, and also link the tags to the list of posts filtered by that tag. Open blog/post/list.html and add the following lines before the for loop of posts:

{% if tag %}
  <h2>Posts tagged with "{{ tag.name }}"</h2>
{% endif %}

If the user is accessing the blog, he will see the list of all posts. If he is filtering by posts tagged with a specific tag, he will see this information.
Now, change the way the tags are displayed to the following:

<p class="tags">
  Tags:
  {% for tag in post.tags.all %}
    <a href="{% url "blog:post_list_by_tag" tag.slug %}">{{ tag.name }}</a>
    {% if not forloop.last %}, {% endif %}
  {% endfor %}
</p>

Notice that now we are looping through all the tags of a post and displaying a custom link to the URL for listing posts tagged with that tag. We build the link with {% url "blog:post_list_by_tag" tag.slug %}, using the name that we gave to the URL and the tag slug as a parameter. We separate the tags with commas. The complete code of your template will look like the following:

{% extends "blog/base.html" %}
{% block title %}My Blog{% endblock %}
{% block content %}
  <h1>My Blog</h1>
  {% if tag %}
    <h2>Posts tagged with "{{ tag.name }}"</h2>
  {% endif %}
  {% for post in posts %}
    <h2><a href="{{ post.get_absolute_url }}">{{ post.title }}</a></h2>
    <p class="tags">
      Tags:
      {% for tag in post.tags.all %}
        <a href="{% url "blog:post_list_by_tag" tag.slug %}">{{ tag.name }}</a>
        {% if not forloop.last %}, {% endif %}
      {% endfor %}
    </p>
    <p class="date">Published {{ post.publish }} by {{ post.author }}</p>
    {{ post.body|truncatewords:30|linebreaks }}
  {% endfor %}
  {% include "pagination.html" with page=posts %}
{% endblock %}

Open http://127.0.0.1:8000/blog/ in your browser and click on any tag link. You will see the list of posts filtered by that tag.

Summary

In this article, you added tagging to your blog posts by integrating a reusable application. The book Django By Example, a hands-on guide, will also show you how to integrate other popular technologies with Django in a fun and practical way.

Resources for Article: Further resources on this subject: Code Style in Django[article] So, what is Django? [article] Share and Share Alike [article]
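A natural extension of this feature, not covered in this excerpt, is suggesting similar posts based on shared tags. Below is a sketch of how that query could look with django-taggit in the Django shell; the annotation-based ordering is an illustration, not code from this article:

from django.db.models import Count
from mysite.blog.models import Post

post = Post.objects.get(id=1)

# IDs of the tags attached to this post
post_tag_ids = post.tags.values_list('id', flat=True)

# Other published posts sharing at least one tag, the most similar first
similar_posts = Post.published.filter(tags__in=post_tag_ids) \
                              .exclude(id=post.id) \
                              .annotate(same_tags=Count('tags')) \
                              .order_by('-same_tags', '-publish')[:4]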
Building Games with HTML5 and Dart

Packt
21 Sep 2015
19 min read
In this article written by Ivo Balbaert, author of the book Learning Dart - Second Edition, you will learn to create a well-known memory game. You will design a model first and work your way up from a modest beginning to a completely functional game, step by step. You will also learn how to enhance the attractiveness of web games with audio and video techniques. The following topics will be covered in this article:

- The model for the memory game
- Spiral 1 – drawing the board
- Spiral 2 – drawing cells
- Spiral 3 – coloring the cells
- Spiral 4 – implementing the rules
- Spiral 5 – game logic (bringing in the time element)
- Spiral 6 – some finishing touches
- Spiral 7 – using images

(For more resources related to this topic, see here.)

The model for the memory game

When started, the game presents a board with square cells. Every cell hides an image that can be seen by clicking on the cell, but the image disappears quickly. You must remember where the images are, because they come in pairs. If you quickly click on two cells that hide the same picture, the cells will "flip over" and the pictures will stay visible. The objective of the game is to turn over all the pairs of matching images in a very short time.

After some thinking, we came up with the following model, which describes the data handled by the application. In our game, we have a number of pictures, which could belong to a Catalog. For example, a travel catalog with a collection of photos from our trips or something similar. Furthermore, we have a collection of cells, and each cell hides a picture. Also, we have a structure that we will call Memory, and this contains the cells in a grid of rows and columns. We could draw it up as shown in the following figure. You can import the model from the game_memory_json.txt file that contains its JSON representation:

A conceptual model of the memory game

The Catalog ID is its name, which is mandatory, but the description is optional. The Picture ID consists of the sequence number within the Catalog. The imageUri field stores the location of the image file. width and height are optional properties, since they may be derived from the image file. The size may be small, medium, or large to help select an image. The ID of a Memory is its name within the Catalog; the collection of cells is determined by the memory length, for example, 4 cells per side. Each cell is of the same length, cellLength, which is a property of the memory. A memory is recalled when all the image pairs are discovered. Some statistics must be kept, such as the recall count, the best recall time in seconds, and the number of cell clicks to recover the whole image (minTryCount). The Cell has the row and column coordinates, and also the coordinates of its twin with the same image. Once the model is discussed and improved, model views may be created: a Board would be a view of the Memory concept, and a Box would be a view of the Cell concept. The application would be based on the Catalog concept. If there is no need to browse the photos of a catalog and display them within a page, there would not be a corresponding view.

Now, we can start developing this game from scratch.

Spiral 1 – drawing the board

The app starts with main() in educ_memory_game.dart:

library memory;

import 'dart:html';

part 'board.dart';

void main() {
  // Get a reference to the canvas.
  CanvasElement canvas = querySelector('#canvas');     (1)
  new Board(canvas);                                   (2)
}

We'll draw a board on a canvas element, so we need the reference that is obtained in line (1).
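The host page itself is not listed in this excerpt. For the querySelector('#canvas') call in line (1) to succeed, the HTML must contain a canvas with that id; a minimal sketch for the Dart 1 era might look like the following, where the file names and the canvas size are assumptions:

<!DOCTYPE html>
<html>
  <head>
    <title>Memory</title>
  </head>
  <body>
    <!-- The canvas that the Board draws on -->
    <canvas id="canvas" width="400" height="400"></canvas>
    <script type="application/dart" src="educ_memory_game.dart"></script>
    <!-- Bootstrap script for browsers without a Dart VM -->
    <script src="packages/browser/dart.js"></script>
  </body>
</html>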
The Board view is represented in code as its own Board class in the board.dart file. Since everything happens on this board, we construct its object with canvas as an argument (line (2)). Our game board will be periodically drawn as a rectangle in line (4), by using the animationFrame method from the Window class in line (3):

part of memory;

class Board {
  CanvasElement canvas;
  CanvasRenderingContext2D context;
  num width, height;

  Board(this.canvas) {
    context = canvas.getContext('2d');
    width = canvas.width;
    height = canvas.height;
    window.animationFrame.then(gameLoop);     (3)
  }

  void gameLoop(num delta) {
    draw();
    window.animationFrame.then(gameLoop);
  }

  void draw() {
    clear();
    border();
  }

  void clear() {
    context.clearRect(0, 0, width, height);
  }

  void border() {
    context..rect(0, 0, width, height)..stroke();     (4)
  }
}

This is our first result:

The game board

Spiral 2 – drawing cells

In this spiral, we will give our app code some structure: Board is a view, so board.dart is moved to the view folder. We will also introduce the Memory class from our model in its own memory.dart file in the model folder. So, we will have to change the part statements to the following:

part 'model/memory.dart';
part 'view/board.dart';

The Board view needs to know about Memory. So, we will include it in the Board class and make its object in the Board constructor:

new Board(canvas, new Memory(4));

The Memory class is still very rudimentary, with only its length property:

class Memory {
  num length;
  Memory(this.length);
}

Our Board class now also needs a method to draw the lines, which we decided to make private because it is specific to Board, as are the clear() and border() methods:

void draw() {
  _clear();
  _border();
  _lines();
}

The lines method is quite straightforward; first draw it on a piece of paper and translate it to code using moveTo and lineTo. Remember that x goes from top-left to right and y goes from top-left to bottom:

void _lines() {
  var gap = height / memory.length;
  var x, y;
  for (var i = 1; i < memory.length; i++) {
    x = gap * i;
    y = x;
    context
      ..moveTo(x, 0)
      ..lineTo(x, height)
      ..moveTo(0, y)
      ..lineTo(width, y);
  }
}

The result is a nice grid:

Board with cells

Spiral 3 – coloring the cells

To simplify, we will start by showing colors instead of pictures in the grid. Up until now, we didn't implement the cell from the model. Let's do that in model/cell.dart. We start simply by saying that the Cell class has row, column, and color properties, and that it belongs to a Memory object passed in its constructor:

class Cell {
  int row, column;
  String color;
  Memory memory;
  Cell(this.memory, this.row, this.column);
}

Because we need a collection of cells, it is a good idea to make a Cells class, which contains a List. We give it an add method and also an iterator so that we are able to use a for…in statement to loop over the collection:

class Cells {
  List _list;
  Cells() {
    _list = new List();
  }
  void add(Cell cell) {
    _list.add(cell);
  }
  Iterator get iterator => _list.iterator;
}

We will need colors that are randomly assigned to the cells. We will also need some utility variables and methods that do not specifically belong to the model and don't need a class. Hence, we will code them in a folder called util. To specify the colors for the cells, we will use two utility variables: a colorList variable holding the color names, and a colorMap variable that maps the names to their RGB values.
Refer to util/color.dart; later on, we can choose some fancier colors:

var colorList = ['black', 'blue', //other colors
];

var colorMap = {'black': '#000000', 'blue': '#0000ff', //...
};

To generate (pseudo) random values (ints, doubles, or Booleans), Dart has the Random class from dart:math. We will use the nextInt method, which takes an integer (the maximum value) and returns a non-negative random integer in the range from 0 (inclusive) to max (exclusive). We will build upon this in util/random.dart to make methods that give us a random color (note that randomInt(list.length) yields an index from 0 to list.length - 1, so every element, including the last one, can be chosen):

int randomInt(int max) => new Random().nextInt(max);
randomListElement(List list) => list[randomInt(list.length)];
String randomColor() => randomListElement(colorList);
String randomColorCode() => colorMap[randomColor()];

Our Memory class now contains an instance of the Cells class:

Cells cells;

We build this in the Memory constructor in a nested for loop, where each cell is successively instantiated with a row and column, given a random color, and added to cells:

Memory(this.length) {
  cells = new Cells();
  var cell;
  for (var x = 0; x < length; x++) {
    for (var y = 0; y < length; y++) {
      cell = new Cell(this, x, y);
      cell.color = randomColor();
      cells.add(cell);
    }
  }
}

We can draw a rectangle and fill it with a color at the same time. So, we realize that we don't need to draw lines as we did in the previous spiral! The _boxes method is called from the draw animation: with a for…in statement, we loop over the collection of cells and call the _colorBox method that will draw and color each cell (note that the column coordinate determines x and the row coordinate determines y, which matches the click handling we will add in the next spiral):

void _boxes() {
  for (Cell cell in memory.cells) {
    _colorBox(cell);
  }
}

void _colorBox(Cell cell) {
  var gap = height / memory.length;
  var x = cell.column * gap;
  var y = cell.row * gap;
  context
    ..beginPath()
    ..fillStyle = colorMap[cell.color]
    ..rect(x, y, gap, gap)
    ..fill()
    ..stroke()
    ..closePath();
}

Spiral 4 – implementing the rules

However, wait! Our game can only work if the same color appears in only two cells: a cell and its twin cell. Moreover, a cell can be hidden or not, that is, its color can be seen or not. To take care of this, the Cell class gets two new attributes:

Cell twin;
bool hidden = true;

The _colorBox method in the Board class can now show the color of the cell when hidden is false (line (2)); when hidden = true (the default state), a neutral gray color will be used for the cell (line (1)):

static const String COLOR_CODE = '#f0f0f0';

We also gave the gap variable a better name, boxSize:

void _colorBox(Cell cell) {
  var x = cell.column * boxSize;
  var y = cell.row * boxSize;
  context.beginPath();
  if (cell.hidden) {
    context.fillStyle = COLOR_CODE;             (1)
  } else {
    context.fillStyle = colorMap[cell.color];   (2)
  }
  // same code as in Spiral 3
}

The lines (1) and (2) can also be stated more succinctly with the ? ternary operator, as sketched below.
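Here is what that ternary form could look like (a one-line sketch; the spirals below keep the if/else form):

context.fillStyle = cell.hidden ? COLOR_CODE : colorMap[cell.color];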
Remember that the drawing changes because the _colorBox method is called via draw at 60 frames per second, and the board can react to a mouse click. In this spiral, we will show a cell when it is clicked, together with its twin cell, and then they will stay visible. Attaching an event handler for this is easy. We add the following line to the Board constructor:

querySelector('#canvas').onMouseDown.listen(onMouseDown);

The onMouseDown event handler has to know on which cell the click occurred. The mouse event e contains the coordinates of the click in its e.offset.x and e.offset.y properties (lines (3) and (4)). We will obtain the cell's row and column by using the truncating division operator ~/: dividing y by boxSize gives the row, and dividing x by boxSize gives the column:

void onMouseDown(MouseEvent e) {
  int row = e.offset.y ~/ boxSize;           (3)
  int column = e.offset.x ~/ boxSize;        (4)
  Cell cell = memory.getCell(row, column);   (5)
  cell.hidden = false;        (6)
  cell.twin.hidden = false;   (7)
}

Memory has a collection of cells. To get the cell with a specified row and column value, we will add a getCell method to memory and call it in line (5). When we have the cell, we will set its hidden property and that of its twin cell to false (lines (6) and (7)). The getCell method must return the cell at the given row and column. It loops through all the cells in line (8) and checks whether each cell is positioned at that row and column (line (9)). If yes, it will return that cell:

Cell getCell(int row, int column) {
  for (Cell cell in cells) {              (8)
    if (cell.intersects(row, column)) {   (9)
      return cell;
    }
  }
}

For this purpose, we will add an intersects method to the Cell class. This checks whether the cell's row and column match the given row and column (see line (10)):

bool intersects(int row, int column) {
  if (this.row == row && this.column == column) {   (10)
    return true;
  }
  return false;
}

Now, we have already added a lot of functionality, but the drawing of the board will need some more thinking: How to give a cell (and its twin cell) a random color that is not yet used? How to attach a cell randomly to a twin cell that is not yet used? To this end, we will have to make the constructor of Memory a lot more intelligent:

Memory(this.length) {
  if (length.isOdd) {   (1)
    throw new Exception(
        'Memory length must be an even integer: $length.');
  }
  cells = new Cells();
  var cell, twinCell;
  for (var x = 0; x < length; x++) {
    for (var y = 0; y < length; y++) {
      cell = getCell(y, x);   (2)
      if (cell == null) {     (3)
        cell = new Cell(this, y, x);
        cell.color = _getFreeRandomColor();   (4)
        cells.add(cell);
        twinCell = _getFreeRandomCell();      (5)
        cell.twin = twinCell;                 (6)
        twinCell.twin = cell;
        twinCell.color = cell.color;
        cells.add(twinCell);
      }
    }
  }
}

The total number of cells (length * length) must be even so that every cell can be paired with a twin; this is only true if the length parameter of Memory itself is even, so we check it in line (1). Again, we coded a nested loop and got the cell at that row and column. If the cell at that position has not yet been made (line (3)), we construct it and assign its color and twin. In line (4), we called _getFreeRandomColor to get a color that is not yet used:

String _getFreeRandomColor() {
  var color;
  do {
    color = randomColor();
  } while (usedColors.any((c) => c == color));   (7)
  usedColors.add(color);   (8)
  return color;
}

The do…while loop continues as long as the color is already in the list of usedColors. On exiting from the loop, we have found an unused color, which is added to usedColors in line (8) and also returned. We then have to set everything for the twin cell. We search for a free one with the _getFreeRandomCell method in line (5). Here, the do…while loop continues until a (row, column) position is found where cell == null, meaning that we haven't yet created a cell there (line (9)).
We will promptly do this in line (10):

Cell _getFreeRandomCell() {
  var row, column;
  Cell cell;
  do {
    row = randomInt(length);
    column = randomInt(length);
    cell = getCell(row, column);
  } while (cell != null);   (9)
  return new Cell(this, row, column);   (10)
}

From line (6) onwards, the properties of the twin cell are set and it is added to the list. This is all we need to produce the following result:

Paired colored cells

Spiral 5 – game logic (bringing in the time element)

Our app isn't playable yet: When a cell is clicked, its color must only show for a short period of time (say one second). When a cell and its twin cell are clicked within a certain time interval, they must remain visible. All of this is coded in the mouseDown event handler, and we also need a lastCellClicked variable of the Cell type in the Board class. Of course, this is exactly the cell we get in the mouseDown event handler, so we will set it in line (5) in the following code snippet (the Timer class requires an import of dart:async in the library file):

void onMouseDown(MouseEvent e) {
  // same code as in Spiral 4 -
  if (cell.twin == lastCellClicked && lastCellClicked.shown) {   (1)
    lastCellClicked.hidden = false;       (2)
    if (memory.recalled) memory.hide();   (3)
  } else {
    new Timer(const Duration(milliseconds: 1000),
        () => cell.hidden = true);        (4)
  }
  lastCellClicked = cell;   (5)
}

In line (1), we checked whether the last clicked cell was the twin cell and whether it is still shown. Then, we made sure in line (2) that it stays visible. shown is a new getter in the Cell class to make the code more readable: bool get shown => !hidden;. If at that moment all the cells were shown (the memory is recalled), we hide them again in line (3). If the last clicked cell was not the twin cell, we hide the current cell after one second in line (4). recalled is a simple getter (read-only property) in the Memory class, and it makes use of a Boolean variable in Memory that is initialized to false (_recalled = false;):

bool get recalled {
  if (!_recalled) {
    if (cells.every((c) => c.shown)) {   (6)
      _recalled = true;
    }
  }
  return _recalled;
}

In line (6), we test whether every cell is shown; if so, this variable is set to true (the game is over). every is a new method in the Cells class, and a nice functional way to write it is given as follows:

bool every(Function f) => _list.every(f);

The hide method is straightforward: hide every cell and reset the _recalled variable to false:

hide() {
  for (final cell in cells) cell.hidden = true;
  _recalled = false;
}

This is it, our game works!

Spiral 6 – some finishing touches

A working program always gives its developer a sense of joy, and rightfully so. However, this doesn't mean that you can leave the code as it is. On the contrary, carefully review your code for some time to see whether there is room for improvement or optimization. For example, are the names you used clear enough? The color of a hidden cell is now named simply COLOR_CODE in board.dart; renaming it to HIDDEN_CELL_COLOR_CODE makes its meaning explicit. The List object used in the Cells class can indicate that it is a List<Cell>, applying the fact that Dart lists are generic. The parameter of the every method in the Cells class can be made more precise: it is a function that accepts a cell and returns a bool. Our onMouseDown event handler contains our game logic, so it is very important to tune it if possible.
After some thought, we see that the code from the previous spiral can be improved; in the following line, the second condition after && is, in fact, unnecessary:

if (cell.twin == lastCellClicked && lastCellClicked.shown) {...}

When the player has guessed everything correctly, showing the completed screen for a few seconds will be more satisfying (line (2)). So, this portion of our event handler code changes to:

if (cell.twin == lastCellClicked) {   (1)
  lastCellClicked.hidden = false;
  if (memory.recalled) { // game over
    new Timer(const Duration(milliseconds: 5000),
        () => memory.hide());   (2)
  }
} else if (cell.twin.hidden) {
  new Timer(const Duration(milliseconds: 800),
      () => cell.hidden = true);
}

Why don't we show a "YOU HAVE WON!" banner? We will do this by drawing the text on the canvas (line (3)), so we must do it in the draw() method (otherwise, it would disappear after INTERVAL milliseconds):

void draw() {
  _clear();
  _boxes();
  if (memory.recalled) { // game over
    context.font = "bold 25px sans-serif";
    context.fillStyle = "red";
    context.fillText("YOU HAVE WON !", boxSize, boxSize * 2);   (3)
  }
}

Then, the same game with the same configuration can be played again. We could make it more obvious that a cell is hidden by decorating it with a small circle in the _colorBox method (line (4)):

if (cell.hidden) {
  context.fillStyle = HIDDEN_CELL_COLOR_CODE;
  var centerX = cell.column * boxSize + boxSize / 2;
  var centerY = cell.row * boxSize + boxSize / 2;
  var radius = 4;
  context.arc(centerX, centerY, radius, 0, 2 * PI, false);   (4)
}

We do want to give our player a chance to start over by supplying a Play again button. The easiest way is to simply refresh the screen (line (5)) by adding this code to the startup script:

void main() {
  canvas = querySelector('#canvas');
  ButtonElement play = querySelector('#play');
  play.onClick.listen(playAgain);
  new Board(canvas, new Memory(4));
}

playAgain(Event e) {
  window.location.reload();   (5)
}

Spiral 7 – using images

One improvement that certainly comes to mind is the use of pictures instead of colors, as shown in the Using images screenshot. How difficult would that be? It turns out that this is surprisingly easy, because we already have the game logic firmly in place! In the images folder, we supply a number of game pictures. Instead of the color property, we give the cell a String property (image), which will contain the name of the picture file. We then replace util/color.dart with util/images.dart, which contains an imageList variable with the image filenames. In util/random.dart, we will replace the color methods with the following code:

String randomImage() => randomListElement(imageList);

The changes to memory.dart are also straightforward: replace the usedColors list with List usedImages = []; and the _getFreeRandomColor method with _getFreeRandomImage, which will use the new list and the new random method:

List usedImages = [];

String _getFreeRandomImage() {
  var image;
  do {
    image = randomImage();
  } while (usedImages.any((i) => i == image));
  usedImages.add(image);
  return image;
}

In board.dart, we replace _colorBox(cell) with _imageBox(cell). The only new thing is how to draw the image on the canvas. For this, we need ImageElement objects. Here, we have to be careful to create these objects only once, and not over and over again in every draw cycle, because that produces a flickering screen.
We will store the ImageElement objects in a Map:

var imageMap = new Map<String, ImageElement>();

Then, we populate this in the Board constructor with a for…in loop over memory.cells:

for (var cell in memory.cells) {
  ImageElement image = new Element.tag('img');   (1)
  image.src = 'images/${cell.image}';            (2)
  imageMap[cell.image] = image;                  (3)
}

We create a new ImageElement object in line (1), give it the complete file path to the image file as a src property in line (2), and store it in imageMap in line (3). Each image file is then loaded into memory only once and effectively cached, so we avoid unnecessary network access. In the draw cycle, we load the image from imageMap and draw it in the current cell with the drawImage method in line (4):

if (cell.hidden) {
  // see previous code
} else {
  ImageElement image = imageMap[cell.image];
  context.drawImage(image, x, y); // resize to cell size   (4)
}

Perhaps you can think of other improvements. Why not let the player specify the game difficulty by asking for the number of boxes (it is 16 now)? Check whether the input is the square of an even number. Do you have enough colors to choose from? Perhaps dynamically building a list with enough random colors would be a better idea. Calculating and storing the statistics discussed in the model would also make the game more attractive. Another enhancement from the model is to support different catalogs of pictures. Go ahead and exercise your Dart skills!

Summary

By thoroughly investigating this game, applying all of the Dart features we have already covered, your Dart star begins to shine. For other Dart games, visit http://www.builtwithdart.com/projects/games/. You can find more information on building games at http://www.dartgamedevs.org/.

Resources for Article:

Further resources on this subject:
Slideshow Presentations [article]
Dart with JavaScript [article]
Practical Dart [article]
Introducing JAX-RS API

In this article by Jobinesh Purushothaman, author of the book RESTful Java Web Services, Second Edition, we will see that there are many tools and frameworks available in the market today for building RESTful web services. There have been some recent developments with respect to the standardization of the framework APIs, which now provide unified interfaces for a variety of implementations. Let's take a quick look at this effort.

As you may know, Java EE is the industry standard for developing portable, robust, scalable, and secure server-side Java applications. The Java EE 6 release took the first step towards standardizing RESTful web service APIs by introducing the Java API for RESTful web services (JAX-RS). JAX-RS is an integral part of the Java EE platform, which ensures portability of your REST API code across all Java EE-compliant application servers. The first release of JAX-RS was based on JSR 311. The latest version is JAX-RS 2 (based on JSR 339), which was released as part of the Java EE 7 platform. There are multiple JAX-RS implementations available today from various vendors. Some of the popular JAX-RS implementations are as follows:

Jersey RESTful web service framework: This framework is an open source framework for developing RESTful web services in Java. It serves as the JAX-RS reference implementation. You can learn more about this project at https://jersey.java.net.
Apache CXF: This framework is an open source web services framework. CXF supports both JAX-WS and JAX-RS web services. To learn more about CXF, refer to http://cxf.apache.org.
RESTEasy: This framework is an open source project from JBoss, which provides various modules to help you build a RESTful web service. To learn more about RESTEasy, refer to http://resteasy.jboss.org.
Restlet: This framework is a lightweight, open source RESTful web service framework. It has good support for building both scalable RESTful web service APIs and lightweight REST clients, which suits mobile platforms well. You can learn more about Restlet at http://restlet.com.

Remember that you are not locked down to any specific vendor here; the RESTful web service APIs that you build using JAX-RS will run on any JAX-RS implementation as long as you do not use any vendor-specific APIs in the code.

JAX-RS annotations

The main goal of the JAX-RS specification is to make RESTful web service development easier than it has been in the past. As JAX-RS is a part of the Java EE platform, your code becomes portable across all Java EE-compliant servers.

Specifying the dependency of the JAX-RS API

To use the JAX-RS APIs in your project, you need to add the javax.ws.rs-api JAR file to the class path. If the consuming project uses Maven for building the source, the dependency entry for the javax.ws.rs-api JAR file in the Project Object Model (POM) file may look like the following:

<dependency>
  <groupId>javax.ws.rs</groupId>
  <artifactId>javax.ws.rs-api</artifactId>
  <version>2.0.1</version><!-- set the right version -->
  <scope>provided</scope><!-- compile time dependency -->
</dependency>

Using JAX-RS annotations to build RESTful web services

Java annotations provide metadata for your Java class, which can be used during compilation, during deployment, or at runtime in order to perform designated tasks. The use of annotations allows us to create RESTful web services as easily as we develop a POJO class. Here, we leave the interception of the HTTP requests and representation negotiations to the framework and concentrate on the business rules necessary to solve the problem at hand.
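Before looking at the individual annotations, it helps to see how a JAX-RS application is typically activated in a Java EE container: by subclassing javax.ws.rs.core.Application. Here is a minimal sketch (the class name and the resources path are illustrative, not taken from this article's example):

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// Mounts all annotated resource classes under
// http://host:port/<context-root>/resources
@ApplicationPath("resources")
public class HRApplication extends Application {
    // An empty subclass is enough; the runtime discovers
    // the annotated resource classes automatically.
}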
If you are not familiar with Java annotations, go through the tutorial available at http://docs.oracle.com/javase/tutorial/java/annotations/.

Annotations for defining a RESTful resource

REST resources are the fundamental elements of any RESTful web service. A REST resource can be defined as an object that is of a specific type with the associated data and is optionally associated to other resources. It also exposes a set of standard operations corresponding to the HTTP method types such as the HEAD, GET, POST, PUT, and DELETE methods.

@Path

The @javax.ws.rs.Path annotation indicates the URI path to which a resource class or a class method will respond. The value that you specify for the @Path annotation is relative to the URI of the server where the REST resource is hosted. This annotation can be applied at both the class and the method levels. A @Path annotation value is not required to have leading or trailing slashes (/), as you may see in some examples. The JAX-RS runtime will parse the URI path templates in the same way even if they have leading or trailing slashes.

Specifying the @Path annotation on a resource class

The following code snippet illustrates how you can make a POJO class respond to a URI path template containing the /departments path fragment:

import javax.ws.rs.Path;

@Path("departments")
public class DepartmentService {
  //Rest of the code goes here
}

The /departments path fragment that you see in this example is relative to the base path in the URI. The base path typically takes the following URI pattern: http://host:port/<context-root>/<application-path>.

Specifying the @Path annotation on a resource class method

The following code snippet shows how you can specify @Path on a method in a REST resource class. Note that for an annotated method, the base URI is the effective URI of the containing class. For instance, you will use the URI of the following form to invoke the getTotalDepartments() method defined in the DepartmentService class: /departments/count, where departments is the @Path annotation set on the class.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("departments")
public class DepartmentService {
  @GET
  @Path("count")
  @Produces("text/plain")
  public Integer getTotalDepartments() {
    return findTotalRecordCount();
  }
  //Rest of the code goes here
}

Specifying variables in the URI path template

It is very common that a client wants to retrieve data for a specific object by passing the desired parameter to the server. JAX-RS allows you to do this via URI path variables, as discussed here. The URI path template allows you to define variables that appear as placeholders in the URI. These variables are replaced at runtime with the values set by the client. The following example illustrates the use of a path variable to request a specific department resource. The URI path template looks like /departments/{id}. At runtime, the client can pass an appropriate value for the id parameter to get the desired resource from the server. For instance, a URI path of the /departments/10 form returns the IT department details to the caller. The following code snippet illustrates how you can pass the department ID as a path variable for deleting a specific department record. The path URI looks like /departments/10.
import javax.ws.rs.Path;
import javax.ws.rs.DELETE;

@Path("departments")
public class DepartmentService {
  @DELETE
  @Path("{id}")
  public void removeDepartment(@PathParam("id") short id) {
    removeDepartmentEntity(id);
  }
  //Other methods removed for brevity
}

In the preceding code snippet, the @PathParam annotation is used for copying the value of the path variable to the method parameter.

Restricting values for path variables with regular expressions

JAX-RS lets you use regular expressions in the URI path template for restricting the values set for the path variables at runtime by the client. By default, the JAX-RS runtime ensures that all the URI variables match the following regular expression: [^/]+?. The default regular expression allows the path variable to take any character except the forward slash (/). What if you want to override this default regular expression imposed on the path variable values? The good news is that JAX-RS lets you specify your own regular expression for the path variables. For example, you can set the regular expression as given in the following code snippet in order to ensure that the department name variable present in the URI path consists only of alphanumeric characters and underscores, starting with a letter (note the trailing *, which allows names of any length):

@DELETE
@Path("{name: [a-zA-Z][a-zA-Z_0-9]*}")
public void removeDepartmentByName(@PathParam("name") String deptName) {
  //Method implementation goes here
}

If the path variable does not match the regular expression set on the resource class or method, the system reports the status back to the caller with an appropriate HTTP status code, such as 404 Not Found, which tells the caller that the requested resource could not be found at this moment.

Annotations for specifying request-response media types

The Content-Type header field in HTTP describes the body's content type present in the request and response messages. The content types are represented using the standard Internet media types. A RESTful web service makes use of this header field to indicate the type of content in the request or response message body. JAX-RS allows you to specify which Internet media types of representations a resource can produce or consume by using the @javax.ws.rs.Produces and @javax.ws.rs.Consumes annotations, respectively.

@Produces

The @javax.ws.rs.Produces annotation is used for defining the Internet media type(s) that a REST resource class method can return to the client. You can define this either at the class level (which will get defaulted for all methods) or the method level. The method-level annotations override the class-level annotations. The possible Internet media types that a REST API can produce are as follows:

application/atom+xml
application/json
application/octet-stream
application/svg+xml
application/xhtml+xml
application/xml
text/html
text/plain
text/xml

The following example uses the @Produces annotation at the class level in order to set the default response media type as JSON for all resource methods in this class. At runtime, the binding provider will convert the Java representation of the return value to the JSON format.

import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("departments")
@Produces(MediaType.APPLICATION_JSON)
public class DepartmentService{
  //Class implementation goes here...
}
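Because method-level annotations override the class-level default, an individual method in such a class can still produce a different media type. Here is a small sketch (the summary method is illustrative and reuses the findTotalRecordCount() helper from the earlier example):

@GET
@Path("summary")
@Produces(MediaType.TEXT_PLAIN) // overrides the class-level JSON default
public String getDepartmentSummary() {
  return "Total departments: " + findTotalRecordCount();
}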
@Consumes

The @javax.ws.rs.Consumes annotation defines the Internet media type(s) that the resource class methods can accept. You can define the @Consumes annotation either at the class level (which will get defaulted for all methods) or the method level. The method-level annotations override the class-level annotations. The possible Internet media types that a REST API can consume are as follows:

application/atom+xml
application/json
application/octet-stream
application/svg+xml
application/xhtml+xml
application/xml
text/html
text/plain
text/xml
multipart/form-data
application/x-www-form-urlencoded

The following example illustrates how you can use the @Consumes annotation to designate a method in a class to consume a payload presented in the JSON media type. The binding provider will copy the JSON representation of an input message to the Department parameter of the createDepartment() method.

import javax.ws.rs.Consumes;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.POST;

@POST
@Consumes(MediaType.APPLICATION_JSON)
public void createDepartment(Department entity) {
  //Method implementation goes here…
}

The javax.ws.rs.core.MediaType class defines constants for all media types supported in JAX-RS. To learn more about the MediaType class, visit the API documentation available at http://docs.oracle.com/javaee/7/api/javax/ws/rs/core/MediaType.html.

Annotations for processing HTTP request methods

In general, RESTful web services communicate over HTTP with the standard HTTP verbs (also known as method types) such as GET, PUT, POST, DELETE, HEAD, and OPTIONS.

@GET

A RESTful system uses the HTTP GET method type for retrieving the resources referenced in the URI path. The @javax.ws.rs.GET annotation designates a method of a resource class to respond to the HTTP GET requests. The following code snippet illustrates the use of the @GET annotation to make a method respond to the HTTP GET request type. In this example, the REST URI for accessing the findAllDepartments() method may look like /departments. The complete URI path may take the following URI pattern: http://host:port/<context-root>/<application-path>/departments.

//imports removed for brevity
@Path("departments")
public class DepartmentService {
  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public List<Department> findAllDepartments() {
    //Find all departments from the data store
    List<Department> departments = findAllDepartmentsFromDB();
    return departments;
  }
  //Other methods removed for brevity
}
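Though this article focuses on the server side, JAX-RS 2 also standardizes a client API. As an aside, invoking the preceding GET resource could look roughly like the following sketch (the base URI is an assumption):

import java.util.List;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.GenericType;
import javax.ws.rs.core.MediaType;

// Build a client and fetch the department list as JSON
Client client = ClientBuilder.newClient();
List<Department> departments = client
    .target("http://localhost:8080/hrapp/resources") // assumed base URI
    .path("departments")
    .request(MediaType.APPLICATION_JSON)
    .get(new GenericType<List<Department>>() {});
client.close();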
@PUT

The HTTP PUT method is used for updating or creating the resource pointed to by the URI. The @javax.ws.rs.PUT annotation designates a method of a resource class to respond to the HTTP PUT requests. The PUT request generally has a message body carrying the payload. The value of the payload could be any valid Internet media type such as a JSON object, an XML structure, plain text, HTML content, or a binary stream. When a request reaches a server, the framework intercepts the request and directs it to the appropriate method that matches the URI path and the HTTP method type. The request payload will be mapped to the method parameter as appropriate by the framework. The following code snippet shows how you can use the @PUT annotation to designate the editDepartment() method to respond to the HTTP PUT request. The payload present in the message body will be converted and copied to the department parameter by the framework:

@PUT
@Path("{id}")
@Consumes(MediaType.APPLICATION_JSON)
public void editDepartment(@PathParam("id") Short id, Department department) {
  //Updates department entity to data store
  updateDepartmentEntity(id, department);
}

@POST

The HTTP POST method posts data to the server. Typically, this method type is used for creating a resource. The @javax.ws.rs.POST annotation designates a method of a resource class to respond to the HTTP POST requests. The following code snippet shows how you can use the @POST annotation to designate the createDepartment() method to respond to the HTTP POST request. The payload present in the message body will be converted and copied to the department parameter by the framework:

@POST
public void createDepartment(Department department) {
  //Create department entity in data store
  createDepartmentEntity(department);
}

@DELETE

The HTTP DELETE method deletes the resource pointed to by the URI. The @javax.ws.rs.DELETE annotation designates a method of a resource class to respond to the HTTP DELETE requests. The following code snippet shows how you can use the @DELETE annotation to designate the removeDepartment() method to respond to the HTTP DELETE request. The department ID is passed as the path variable in this example.

@DELETE
@Path("{id}")
public void removeDepartment(@PathParam("id") Short id) {
  //remove department entity from data store
  removeDepartmentEntity(id);
}

@HEAD

The @javax.ws.rs.HEAD annotation designates a method to respond to the HTTP HEAD requests. This method is useful for retrieving the metadata present in the response headers, without having to retrieve the message body from the server. You can use this method to check whether a URI pointing to a resource is active or to check the content size by using the Content-Length response header field, and so on. The JAX-RS runtime will offer a default implementation for the HEAD method type if the REST resource is missing an explicit implementation. The default implementation provided by the runtime for the HEAD method will call the method designated for the GET request type, ignoring the response entity returned by the method.

@OPTIONS

The @javax.ws.rs.OPTIONS annotation designates a method to respond to the HTTP OPTIONS requests. This method is useful for obtaining a list of HTTP methods allowed on a resource. The JAX-RS runtime will offer a default implementation for the OPTIONS method type if the REST resource is missing an explicit implementation. The default implementation offered by the runtime sets the Allow response header to all the HTTP method types supported by the resource.
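For instance, probing the departments resource with the OPTIONS method might produce an exchange like the following (a hypothetical illustration; the exact header values depend on the resource and the JAX-RS implementation):

OPTIONS /hrapp/resources/departments HTTP/1.1
Host: localhost:8080

HTTP/1.1 200 OK
Allow: GET, POST, DELETE, HEAD, OPTIONS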
Annotations for accessing request parameters

JAX-RS also offers annotations to extract the following parameters from a request: query, URI path, form, cookie, header, and matrix parameters. Mostly, these parameters are used in conjunction with the GET, POST, PUT, and DELETE methods.

@PathParam

A URI path template, in general, has a URI part pointing to the resource. It can also take path variables embedded in its syntax; this facility is used by clients to pass parameters to the REST APIs as appropriate. The @javax.ws.rs.PathParam annotation injects (or binds) the value of the matching path parameter present in the URI path template into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. Typically, this annotation is used in conjunction with the HTTP method type annotations such as @GET, @POST, @PUT, and @DELETE. The following example illustrates the use of the @PathParam annotation to read the value of the path parameter, id, into the deptId method parameter. The URI path template for this example looks like /departments/{id}:

//Other imports removed for brevity
import javax.ws.rs.PathParam;

@Path("departments")
public class DepartmentService {
  @DELETE
  @Path("{id}")
  public void removeDepartment(@PathParam("id") Short deptId) {
    removeDepartmentEntity(deptId);
  }
  //Other methods removed for brevity
}

The REST API call to remove the department resource identified by id=10 looks like DELETE /departments/10 HTTP/1.1. We can also use multiple variables in a URI path template. For example, we can have a URI path template embedding the path variables to query a list of departments from a specific city and country, which may look like /departments/{country}/{city}. The following code snippet illustrates the use of @PathParam to extract the variable values from the preceding URI path template:

@Produces(MediaType.APPLICATION_JSON)
@Path("{country}/{city}")
public List<Department> findAllDepartments(
    @PathParam("country") String countryCode,
    @PathParam("city") String cityCode) {
  //Find all departments from the data store for a country
  //and city
  List<Department> departments =
      findAllMatchingDepartmentEntities(countryCode, cityCode);
  return departments;
}

@QueryParam

The @javax.ws.rs.QueryParam annotation injects the value(s) of an HTTP query parameter into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following example illustrates the use of @QueryParam to extract the value of the desired query parameter present in the URI. This example extracts the value of the query parameter, name, from the request URI and copies the value into the deptName method parameter. The URI that accesses the IT department resource looks like /departments?name=IT:

@GET
@Produces(MediaType.APPLICATION_JSON)
public List<Department> findAllDepartmentsByName(
    @QueryParam("name") String deptName) {
  List<Department> depts = findAllMatchingDepartmentEntities(deptName);
  return depts;
}

@MatrixParam

Matrix parameters are another way of defining parameters in the URI path template. The matrix parameters take the form of name-value pairs in the URI path, where each pair is preceded by a semicolon (;). For instance, the URI path that uses a matrix parameter to list all departments in Bangalore city looks like /departments;city=Bangalore. The @javax.ws.rs.MatrixParam annotation injects the matrix parameter value into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following code snippet demonstrates the use of the @MatrixParam annotation to extract the matrix parameters present in the request. The URI path used in this example looks like /departments;name=IT;city=Bangalore.

@GET
@Produces(MediaType.APPLICATION_JSON)
@Path("matrix")
public List<Department> findAllDepartmentsByNameWithMatrix(
    @MatrixParam("name") String deptName,
    @MatrixParam("city") String locationCode) {
  List<Department> depts = findAllDepartmentsFromDB(deptName, locationCode);
  return depts;
}

You can use PathParam, QueryParam, and MatrixParam to pass the desired search parameters to the REST APIs. Now, you may ask when to use what?
Although there are no strict rules here, a very common practice is to use PathParam to drill down the entity class hierarchy. For example, you may use the URI of the following form to identify an employee working in a specific department: /departments/{dept}/employees/{id}. QueryParam can be used for specifying attributes to locate the instance of a class. For example, you may use a URI with QueryParam to identify employees who have joined on January 1, 2015, which may look like /employees?doj=2015-01-01. The MatrixParam annotation is not used frequently. It is useful when you need to make a complex REST-style query to multiple levels of resources and subresources. MatrixParam is applicable to a particular path element, while the query parameter is applicable to the entire request.

@HeaderParam

The HTTP header fields provide necessary information about the request and response contents in HTTP. For example, the header field Content-Length: 348 for an HTTP request says that the size of the request body content is 348 octets (8-bit bytes). The @javax.ws.rs.HeaderParam annotation injects the header values present in the request into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following example extracts the referrer header parameter and logs it for audit purposes. The referrer header field in HTTP contains the address of the previous web page from which a request to the currently processed page originated:

@POST
public void createDepartment(@HeaderParam("Referer") String referer,
    Department entity) {
  logSource(referer);
  createDepartmentInDB(entity);
}

Remember that HTTP provides a very wide selection of headers that cover most of the header parameters that you are looking for. Although you can use custom HTTP headers to pass some application-specific data to the server, try using standard headers whenever possible. Further, avoid using a custom header for holding properties specific to a resource, the state of the resource, or parameters directly affecting the resource.

@CookieParam

The @javax.ws.rs.CookieParam annotation injects the matching cookie parameters present in the HTTP headers into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following code snippet uses the Default-Dept cookie parameter present in the request to return the default department details:

@GET
@Path("cook")
@Produces(MediaType.APPLICATION_JSON)
public Department getDefaultDepartment(
    @CookieParam("Default-Dept") short departmentId) {
  Department dept = findDepartmentById(departmentId);
  return dept;
}

@FormParam

The @javax.ws.rs.FormParam annotation injects the matching HTML form parameters present in the request body into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The request body carrying the form elements must have the content type specified as application/x-www-form-urlencoded.
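An HTML form is not the only way to produce such a payload; any client that sends a body with this content type will do. As an aside, a command-line client could post the same parameters like this (the host, port, and context root are assumptions):

curl -X POST \
     -d "departmentId=10&departmentName=IT" \
     http://localhost:8080/hrapp/resources/departments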
Consider the following HTML form that contains the data capture form for a department entity. This form allows the user to enter the department entity details:

<!DOCTYPE html>
<html>
<head>
  <title>Create Department</title>
</head>
<body>
  <form method="POST" action="/resources/departments">
    Department Id: <input type="text" name="departmentId"> <br>
    Department Name: <input type="text" name="departmentName"> <br>
    <input type="submit" value="Add Department" />
  </form>
</body>
</html>

Upon clicking on the submit button on the HTML form, the department details that you entered will be posted to the REST URI, /resources/departments. The following code snippet shows the use of the @FormParam annotation for extracting the HTML form fields and copying them to the resource class method parameters:

@Path("departments")
public class DepartmentService {
  @POST
  //Specifies content type as
  //"application/x-www-form-urlencoded"
  @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
  public void createDepartment(@FormParam("departmentId") short departmentId,
      @FormParam("departmentName") String departmentName) {
    createDepartmentEntity(departmentId, departmentName);
  }
}

@DefaultValue

The @javax.ws.rs.DefaultValue annotation specifies a default value for the request parameters accessed using one of the following annotations: PathParam, QueryParam, MatrixParam, CookieParam, FormParam, or HeaderParam. The default value is used if no matching parameter value is found for the variables annotated using one of the preceding annotations. The following REST resource method will make use of the default values set for the from and to method parameters if the corresponding query parameters are missing in the URI path (note the return statement, which the method needs in order to hand the result back to the caller):

@GET
@Produces(MediaType.APPLICATION_JSON)
public List<Department> findAllDepartmentsInRange(
    @DefaultValue("0") @QueryParam("from") Integer from,
    @DefaultValue("100") @QueryParam("to") Integer to) {
  return findAllDepartmentEntitiesInRange(from, to);
}

@Context

The JAX-RS runtime offers different context objects, which can be used for accessing information associated with the resource class, operating environment, and so on. You may find various context objects that hold information associated with the URI path, request, HTTP header, security, and so on. Some of these context objects also provide utility methods for dealing with the request and response content. JAX-RS allows you to reference the desired context objects in the code via dependency injection. JAX-RS provides the @javax.ws.rs.core.Context annotation that injects the matching context object into the target field. You can specify the @Context annotation on a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following example illustrates the use of the @Context annotation to inject the javax.ws.rs.core.UriInfo context object into a method variable. The UriInfo instance provides access to the application and request URI information.
This example uses UriInfo to read the value of the query parameter, name, present in the request URI; the URI that accesses the IT department resource looks like /departments?name=IT:

@GET
@Produces(MediaType.APPLICATION_JSON)
public List<Department> findAllDepartmentsByName(@Context UriInfo uriInfo) {
  String deptName = uriInfo.getQueryParameters().getFirst("name");
  List<Department> depts = findAllMatchingDepartmentEntities(deptName);
  return depts;
}

Here is a list of the commonly used classes and interfaces, which can be injected using the @Context annotation:

javax.ws.rs.core.Application: This class defines the components of a JAX-RS application and supplies additional metadata
javax.ws.rs.core.UriInfo: This interface provides access to the application and request URI information
javax.ws.rs.core.Request: This interface provides methods for request processing, such as reading the method type and evaluating preconditions
javax.ws.rs.core.HttpHeaders: This interface provides access to the HTTP header information
javax.ws.rs.core.SecurityContext: This interface provides access to security-related information
javax.ws.rs.ext.Providers: This interface offers the runtime lookup of a provider instance such as MessageBodyReader, MessageBodyWriter, ExceptionMapper, and ContextResolver
javax.ws.rs.ext.ContextResolver<T>: This interface supplies the requested context to the resource classes and other providers
javax.servlet.http.HttpServletRequest: This interface provides the client request information for a servlet
javax.servlet.http.HttpServletResponse: This interface is used for sending a response to a client
javax.servlet.ServletContext: This interface provides methods for a servlet to communicate with its servlet container
javax.servlet.ServletConfig: This interface carries the servlet configuration parameters

@BeanParam

The @javax.ws.rs.BeanParam annotation allows you to inject all matching request parameters into a single bean object. The @BeanParam annotation can be set on a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The bean class can have fields or properties annotated with one of the request parameter annotations, namely @PathParam, @QueryParam, @MatrixParam, @HeaderParam, @CookieParam, or @FormParam. Apart from the request parameter annotations, the bean can have the @Context annotation if there is a need. Consider the example that we discussed for @FormParam. The createDepartment() method that we used in that example has two parameters annotated with @FormParam:

public void createDepartment(
    @FormParam("departmentId") short departmentId,
    @FormParam("departmentName") String departmentName)

Let's see how we can use @BeanParam for the preceding method to give it a more logical, meaningful signature by grouping all the related fields into an aggregator class, thereby avoiding too many parameters in the method signature.
The DepartmentBean class that we use for this example is as follows:

public class DepartmentBean {
  @FormParam("departmentId")
  private short departmentId;

  @FormParam("departmentName")
  private String departmentName;

  //getter and setter for the above fields
  //are not shown here to save space
}

The following code snippet demonstrates the use of the @BeanParam annotation to inject the DepartmentBean instance that contains all the FormParam values extracted from the request message body:

@POST
public void createDepartment(@BeanParam DepartmentBean deptBean) {
  createDepartmentEntity(deptBean.getDepartmentId(),
      deptBean.getDepartmentName());
}

@Encoded

By default, the JAX-RS runtime decodes all request parameters before injecting the extracted values into the target variables annotated with one of the following annotations: @FormParam, @PathParam, @MatrixParam, or @QueryParam. You can use @javax.ws.rs.Encoded to disable the automatic decoding of the parameter values. With the @Encoded annotation, the values of the parameters will be provided in the encoded form itself. This annotation can be used on a class, method, or parameter. If you set this annotation on a method, it will disable decoding for all parameters defined for this method. You can use this annotation on a class to disable decoding for all parameters of all methods. In the following example, the value of the query parameter called name is injected into the method parameter in the URL-encoded form (without decoding); note the @Encoded annotation on the parameter, which is what switches the decoding off. The method implementation should take care of decoding the value in such cases:

@GET
@Produces(MediaType.APPLICATION_JSON)
public List<Department> findAllDepartmentsByName(
    @Encoded @QueryParam("name") String deptName) {
  //Method body is removed for brevity
}

URL encoding converts a string into a valid URL format, which may contain alphabetic characters, numerals, and some special characters supported in the URL string. To learn about the URL specification, visit http://www.w3.org/Addressing/URL/url-spec.html.

Summary

With the use of annotations, the JAX-RS API provides a simple development model for RESTful web service programming. In case you are interested in other Java RESTful web services books that Packt has in store for you, here are the titles:

RESTful Java Web Services, Jose Sandoval
RESTful Java Web Services Security, René Enríquez, Andrés Salazar C

Resources for Article:

Further resources on this subject:
The Importance of Securing Web Services [article]
Understanding WebSockets and Server-sent Events in Detail [article]
Adding health checks [article]