
How-To Tutorials - Front-End Web Development

341 Articles

NSB and Security

Packt
06 Feb 2015
14 min read
This article by Rich Helton, the author of Learning NServiceBus Sagas, delves into the details of NSB and its security. In this article, we will cover the following:

- Introducing web security
- Cloud vendors
- Using .NET 4
- Adding NServiceBus
- Benefits of NSB

Introducing web security

According to the 2013 Top 10 list of the Open Web Application Security Project (OWASP), found at https://www.owasp.org/index.php/Top10#OWASP_Top_10_for_2013, injection flaws still sit at the top of the ways to penetrate a website. An injection flaw is a means of accessing information, or the site itself, by injecting data into input fields, normally to bypass proper authentication and authorization. Typically, this is data that the website has not seen in testing efforts or considered during development. For reference, see the slides at http://www.slideshare.net/rhelton_1/cweb-sec-oct27-2010-final.

One instance of an injection flaw is putting SQL commands into form fields, and even URL fields, to try to provoke SQL errors and responses that carry further information. If the error is not generic and a SQL exception occurs, the response will sometimes include table names. For example, it may deny authorization for sa on the password table in SQL Server 2008; knowing this gives a person the SQL Server version, the fact that the sa user is being used, and the existence of a password table.

There are many tools and websites where people on the Internet can practice their web security testing skills, whether or not they literally work in IT security as a professional or amateur. Many of these websites are well known and posted at places such as https://www.owasp.org/index.php/Phoenix/Tools.

General disclaimer: I do not endorse or encourage others to practice on websites without written permission from the website owner.

Some of the live sites are as follows, and most are used to test web scanners:

- http://zero.webappsecurity.com/: Developed by SPI Dynamics (now HP Security) for WebInspect. It is an ASP site.
- http://crackme.cenzic.com/Kelev/view/home.php: A PHP site from Cenzic.
- http://demo.testfire.net/: Developed by Watchfire (now IBM Rational AppScan). It is an ASP site.
- http://testaspnet.vulnweb.com/: Developed by Acunetix. It is an ASP.NET site.
- http://webscantest.com/: Developed by NT OBJECTives for NTOSpider. It is a PHP site.

There are many more sites and tools, which you will have to research yourself. Some tools look only for SQL injection; very gifted hacking professionals who spend their days looking only for SQL injection find these useful. We will start with SQL injection, as it is one of the most popular ways to enter a website. But before we start an analysis report on a website hack, we will document the website. Our target site will be http://zero.webappsecurity.com/.
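To make the flaw concrete in code, here is a minimal C# sketch of my own (not from the article) contrasting string-concatenated SQL with a parameterized query; the connection string and the Users table are hypothetical:

    using System;
    using System.Data.SqlClient;

    class LoginCheck
    {
        // Hypothetical connection string and Users table, for illustration only.
        const string ConnString = "Server=.;Database=Demo;Integrated Security=true";

        // Vulnerable pattern: user input is concatenated straight into SQL.
        // Input such as  ' OR '1'='1' --  rewrites the query itself.
        static string BuildUnsafeSql(string user, string password)
        {
            return "SELECT COUNT(*) FROM Users WHERE Name = '" + user +
                   "' AND Password = '" + password + "'";
        }

        // Safer pattern: values travel as parameters and are never parsed as SQL.
        static bool LoginSafe(string user, string password)
        {
            using (var conn = new SqlConnection(ConnString))
            using (var cmd = new SqlCommand(
                "SELECT COUNT(*) FROM Users WHERE Name = @name AND Password = @pwd",
                conn))
            {
                cmd.Parameters.AddWithValue("@name", user);
                cmd.Parameters.AddWithValue("@pwd", password);
                conn.Open();
                return (int)cmd.ExecuteScalar() > 0;
            }
        }
    }

Parameterization does not replace input validation, but it keeps attacker-supplied text from being executed as SQL.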
We will start with the EC-Council's Certified Ethical Hacker program, which divides footprinting and scanning into seven basic steps:

1. Information gathering
2. Determining the network range
3. Identifying active machines
4. Finding open ports and access points
5. OS fingerprinting
6. Fingerprinting services
7. Mapping the network

We could also follow the OWASP web testing checklist, which includes:

- Information gathering
- Configuration testing
- Identity management testing
- Authentication testing
- Session management testing
- Data validation testing
- Error handling
- Cryptography
- Business logic testing
- Client-side testing

The idea is to gather as much information about the website as possible before launching an attack, since no information has been gathered so far. To gather information about the website, you don't actually have to scan it yourself at the start. Many scanners will have scanned the website before you: Google bots gather search information about the site, the Netcraft search engine gathers statistics about it, and many domain search engines hold contact information. If another person has hacked the site, there are sites and blogs where hackers talk about hacking a specific site, including which tools they used; they may even post security scans on the Internet, which can be found by googling. There is even a site, https://archive.org/, called the WayBack Machine, which keeps archived copies of previous versions of the websites it scans. These are just some basic pieces, and anyone who has studied for their Certified Ethical Hacker exam should have all of this at their fingertips. We will also discuss some of the measures that Microsoft and Particular.net have taken into consideration to assist those who develop solutions in C#.

Searching the WayBack Machine at http://web.archive.org/web/ for http://zero.webappsecurity.com/, we can look at what the screens looked like in 2003 and walk through various changes up to the present, 2014. Actually, there were errors when archive.org copied the site in 2003, so the machine directs us to the first good copy, from May 11, 2006. Looking with Netcraft, we can see that the site was first started in 2004, was last rebooted in 2014, and is running Ubuntu.

Next, we can try to see what Google tells us. There are many Google Hacking Databases that keep track of keywords for the Google Search Engine API. These keywords are expressions such as file:passwd, which searches for password files on Ubuntu, and many more. This is not a hacking book, and this site is well known, so we will just search for webappsecurity.com file:passwd. This gives more information than needed: the first result is a sample web scan report, from 2008, of the vulnerabilities available on the site. We can also see which links Google has already found for http://zero.webappsecurity.com/.

In these few steps, I have enough information to bring a targeted attack against the website to check whether these vulnerabilities are still active, and I know the operating system of the website and the details of its history. This is before I have even considered running tools against the website. To scan the website, for which permission is always needed ahead of time, there are multiple web scanners available.
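Before moving on to scanners, note that the WayBack Machine lookup described above can itself be scripted. The sketch below is my own addition (not from the article) and assumes archive.org's public snapshot-availability endpoint; it prints the raw JSON describing the closest archived snapshot, if one exists:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class WaybackCheck
    {
        static async Task Main()
        {
            // Ask the Wayback Machine for the closest archived snapshot
            // of the target site.
            var endpoint =
                "https://archive.org/wayback/available?url=zero.webappsecurity.com";
            using (var http = new HttpClient())
            {
                string json = await http.GetStringAsync(endpoint);
                Console.WriteLine(json); // snapshot URL and timestamp, if any
            }
        }
    }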
For a list of web scanners, one website is http://sectools.org/tag/web-scanners/. One of the favorites, built by the famed Googler Michal Zalewski, is called skipfish. Skipfish is an open source tool written in C, and it can be used on Windows by compiling it against the Cygwin libraries, which provide Linux virtual libraries and tools for Windows. Skipfish has its own man pages at http://dev.man-online.org/man1/skipfish/, and it can be downloaded from https://code.google.com/p/skipfish/.

Skipfish performs web crawling and fuzzing, and it tests for many issues such as XSS and SQL injection. Skipfish's fuzzing uses dictionaries to try additional website paths, extensions, and keywords that, through the experience of hackers, are normally found to be attack vectors on the website being scanned. For instance, it may not be apparent from the pages being scanned that an admin/index.html page is available, but the dictionary will try to check whether the page is there. The issue with Skipfish is that it is noisy because of its fuzzer: it will try many scans and check for links that might not exist, which takes some time and can be a little noisy out of the box. There are many configuration options, including throttling of the scanning to reduce the noise. An associated scan in HP's WebInspect scanner produces comparable results.

These are just automated means of inspecting a website. These steps are common, and much of this material is well known in web security. After an initial inspection of a website, a person may start making decisions on how to check the information further.

Manually checking websites

After taking an initial look at the website, an experienced web security person may proceed with more manual checks and less automated checking: for instance, typing Admin as the user ID and password, then Guest instead of Admin, then the Admin and password combination, then Admin and password123, and so on; the list progresses based on experience. A person inspecting a website might have a lot of time for penetration testing and might try hundreds of scenarios, and there are many tools and scripts to automate the process. As security analysts, we find many sites that grant admin access just by using Admin and Admin as the user ID and password, respectively.

To enhance personal skills, there are many tutorials to walk through. One thing to do is to pull down a website that you can set up for practice, such as WebGoat, and go through the steps outlined in tutorials from sites such as http://webappsecmovies.sourceforge.net/webgoat/, which show how to perform SQL injection testing against the WebGoat site. Alongside these tutorials, there are Firefox plugins for testing security scripts, inspecting HTML, debugging pieces of the site, and tampering with the website through the browser.

Using .NET 4 can help

Every page that is deployed to the Internet (and in many cases, the intranet as well) constantly gets probed and prodded by scans, viruses, and network noise; there are so many pokes, probes, and prods on networks these days that most of them are treated as noise. By default, .NET 4 offers some validation and out-of-the-box support for web requests. Using .NET 4, you may discover that some input, such as double quotes, single quotes, and even <, is blocked in some form fields.
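As a hedged illustration of that default behavior in ASP.NET MVC (the CommentModel and controller below are hypothetical, not from the article): request validation rejects markup-like input outright, and an individual field can opt out with [AllowHtml]:

    using System.Web.Mvc;

    public class CommentModel
    {
        // Request validation applies here: posting "<script>" to this field
        // yields "A potentially dangerous Request.Form value was detected".
        public string Author { get; set; }

        // Explicitly opts this single field out of request validation.
        [AllowHtml]
        public string Body { get; set; }
    }

    public class CommentsController : Controller
    {
        [HttpPost]
        [ValidateInput(true)] // the framework default, shown for emphasis
        public ActionResult Create(CommentModel model)
        {
            if (!ModelState.IsValid)
            {
                return View(model);
            }
            // ... persist the comment ...
            return RedirectToAction("Index");
        }
    }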
When you try to pass such values, you will get a validation error from the framework. This is very basic validation, and it resides in the .NET 4 framework's pooling pieces of Internet Information Services (IIS) for Windows.

To further offer security following Microsoft's best enterprise practices, we may also consider using Model-View-Controller (MVC) and Entity Framework (EF). For this, we can review the Microsoft Application Architecture Guide at http://msdn.microsoft.com/en-us/library/ff650706.aspx. The MVC design pattern is the most commonly used pattern in software. It is a very common design pattern, so why is it important for security? What is helpful is that we can validate data requests and responses through the controllers, as well as provide data annotations on each data element for further validation. A common attack that has appeared through viruses over the years is the buffer overflow, which sends an excessive amount of data to the data elements; validation can check the length of the data to counteract the buffer overflow.

EF is a Microsoft framework that provides an object-relational mapper. Not only can it easily generate objects to and from SQL Server through Visual Studio, it can also use objects instead of SQL scripting. Since application code does not issue raw SQL, SQL injection, an attack that injects SQL commands through input fields, can be counteracted.

Even though some of these techniques will help mitigate many attack vectors, the gateway to backend processes is usually the website, and there are many more injection attack vectors. If stored procedures are used with SQL Server, a scan can be tried against any stored procedures the website may be calling, as well as against any default stored procedures that may be lingering from a default SQL Server installation. So how do we add further validation and decouple an organization's backend processes from the website?

NServiceBus to the rescue

NServiceBus is the most popular C# platform framework for implementing an Enterprise Service Bus (ESB) for service-oriented architecture (SOA). Basically, NSB hosts Windows services through its NServiceBus.Host.exe program and interfaces these services through different message queuing components. A C# MVC-EF program can call web services directly, but when the web service returns an error, the website receives the error directly in the MVC program. This creates a coupling of the web service and the website, where changes in the website can affect the web services and actions in the web services can affect the website. Because of this coupling, websites may carry a "Please do not refresh the page until the process is finished" warning. Normally, it is wise to step away from the phone, tablet, or computer until the page has loaded, but even if you do not touch the website, another process running on the machine may: a virus scanner, an update, or any of several other processes running on the device could cause a glitch in the refreshing of anything on it. With all the scans that could be hitting a website from across the Internet, it seems quite odd for a page to say, "Please don't touch me, I am busy." In order to decouple the website from the web services, a service needs to be deployed between the website and the web service.
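As a rough sketch of what that intermediary can look like, here is a minimal example assuming the classic (v4/v5-era) NServiceBus API; the SubmitPayment message and its handler are hypothetical names of my own, not from the article:

    using System;
    using NServiceBus;

    // The website fires this message and carries on with its processing;
    // it never calls the payment web service directly.
    public class SubmitPayment : ICommand
    {
        public Guid OrderId { get; set; }
        public decimal Amount { get; set; }
    }

    // Hosted by NServiceBus.Host.exe as a separate Windows service; this is
    // where the call to the bank's web service actually happens.
    public class SubmitPaymentHandler : IHandleMessages<SubmitPayment>
    {
        public void Handle(SubmitPayment message)
        {
            // Call the bank web service here. Failures are retried from the
            // queue instead of surfacing as errors on the website.
        }
    }

    // From the MVC controller, an injected IBus sends the message and
    // returns immediately:
    //   bus.Send(new SubmitPayment { OrderId = id, Amount = total });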
It helps if that intermediary service also has plenty of out-of-the-box security features to protect the interaction between the website and the web service. For this reason, a product such as NServiceBus is most helpful: others have already laid the groundwork, and its advanced security features have been tested across the industry through use. Being the most common C# ESB platform has its advantages, as developers and architects have verified the integrity of the framework well before a new design starts using it.

Benefits of NSB

NSB provides many components needed for automation that are only found in ESBs. ESBs provide the following:

- Separation of duties: The frontend is separated from the backend, allowing the frontend to fire a message at a service and continue its processing, not worrying about the results until it needs an update. Workflow responsibility can also be separated across NSB services: one service could send payments to a bank, while another provides feedback on the current payment status to the MVC-EF database so that a user can see their payment status.
- Message durability: Messages are saved in queues between services, so if services are stopped, they can resume from the messages in the queues when they restart, and the messages persist until told otherwise.
- Workflow retries: Messages, or endpoints, can be told to retry a number of times before they completely fail and send an error; the error is automatically routed to an error queue. For instance, a web service message to a bank can be set to retry every 5 minutes for 20 minutes before giving up completely. This is useful during any network or server issues.
- Monitoring: NSB's ServicePulse can keep a heartbeat on its services, and other monitoring can easily be done on the NSB queues to report on the number of messages.
- Encryption: Messages between services and endpoints can easily be encrypted.
- High availability: Multiple services or subscribers, living on different servers, can process the same or similar messages. When one server or service goes down, others can be made available to take over.

Summary

Any website on the Internet is being scanned by a multitude of means. It is wise to decouple external websites from backend processes through a means such as NServiceBus. A website that is not decoupled from the backend can be disrupted by the external processes it depends on, such as a web service validating a credit card; these are the websites that say "Do not refresh this page," and conditions beyond your reach can refresh the page and break that interaction. The best solution is to decouple the website from these processes through NServiceBus.

Resources for Article:

Further resources on this subject:
- Mobile Game Design [Article]
- CryENGINE 3: Breaking Ground with Sandbox [Article]
- CryENGINE 3: Fun Physics [Article]


Google App Engine

Packt
05 Feb 2015
11 min read
In this article by Massimiliano Pippi, author of the book Python for Google App Engine, you will learn how to write a web application and see the platform in action. Web applications commonly provide a set of features such as user authentication and data storage; App Engine provides the services and tools needed to implement such features. In this article, we will see:

- Details of the webapp2 framework
- How to authenticate users
- Storing data on Google Cloud Datastore
- Building HTML pages using templates

Experimenting on the Notes application

To better explore App Engine and Cloud Platform capabilities, we need a real-world application to experiment on; something that's not trivial to write, with a reasonable list of requirements. A good candidate is a note-taking application; we will name it Notes.

Notes enables users to add, remove, and modify a list of notes; a note has a title and a body of text. Users can only see their personal notes, so they must authenticate before using the application. The main page of the application will show the list of notes for logged-in users and a form to add new ones. The code from the helloworld example is a good starting point. We can simply change the name of the root folder and the application field in the app.yaml file to match the new name we chose for the application, or we can start a new project from scratch named notes.

Authenticating users

The first requirement for our Notes application is showing the home page only to users who are logged in, and redirecting others to the login form; the users service provided by App Engine is exactly what we need, and adding it to our MainHandler class is quite simple:

    import webapp2

    from google.appengine.api import users


    class MainHandler(webapp2.RequestHandler):
        def get(self):
            user = users.get_current_user()
            if user is not None:
                self.response.write('Hello Notes!')
            else:
                login_url = users.create_login_url(self.request.uri)
                self.redirect(login_url)

    app = webapp2.WSGIApplication([
        ('/', MainHandler)
    ], debug=True)

The users package we import on the second line of the previous code provides access to the users service functionality. Inside the get() method of the MainHandler class, we first check whether the user visiting the page is logged in or not. If they are, the get_current_user() method returns an instance of the user class provided by App Engine, representing an authenticated user; otherwise, it returns None. If the user is valid, we provide the response as we did before; otherwise, we redirect them to the Google login form. The URL of the login form is returned by the create_login_url() method, and we call it, passing as a parameter the URL we want to redirect users to after a successful authentication; in this case, we want to redirect users to the same URL they are visiting, provided by webapp2 in the self.request.uri property. The webapp2 framework also provides handlers with a redirect() method we can use to conveniently set the right status and location properties of the response object so that client browsers are redirected to the login page.

HTML templates with Jinja2

Web applications provide rich and complex HTML user interfaces, and Notes is no exception; so far, however, the response objects in our application have contained just small pieces of text.
We could include HTML tags as strings in our Python modules and write them into the response body, but you can imagine how messy and hard to maintain the code could become. We need to completely separate the Python code from the HTML pages, and that's exactly what a template engine does. A template is a piece of HTML code living in its own file and possibly containing additional, special tags; with the help of a template engine, we can load this file from the Python script, properly parse the special tags, if any, and return valid HTML code in the response body. App Engine includes a well-known template engine in the Python runtime: the Jinja2 library.

To make the Jinja2 library available to our application, we need to add this code to the app.yaml file under the libraries section:

    libraries:
    - name: webapp2
      version: "2.5.2"
    - name: jinja2
      version: latest

We can put the HTML code for the main page in a file called main.html inside the application root. We start with a very simple page:

    <!DOCTYPE html>
    <html>
    <head lang="en">
        <meta charset="UTF-8">
        <title>Notes</title>
    </head>
    <body>
        <div class="container">
            <h1>Welcome to Notes!</h1>
            <p>
                Hello, <b>{{user}}</b> - <a href="{{logout_url}}">Logout</a>
            </p>
        </div>
    </body>
    </html>

Most of the content is static, which means that it will be rendered as the standard HTML we see, but part of it is dynamic, and its content depends on which data is passed to the rendering process at runtime. This data is commonly referred to as the template context. What has to be dynamic is the username of the current user and the link used to log out of the application. The HTML code contains two special elements written in the Jinja2 template syntax, {{user}} and {{logout_url}}, which will be substituted before the final output occurs.

Back in the Python script, we need to add the code that initializes the template engine before the MainHandler class definition:

    import os
    import jinja2

    jinja_env = jinja2.Environment(
        loader=jinja2.FileSystemLoader(os.path.dirname(__file__)))

The environment instance stores the engine configuration and global objects, and it is used to load template instances; in our case, templates are loaded from HTML files on the filesystem in the same directory as the Python script. To load and render our template, we add the following code to the MainHandler.get() method:

    class MainHandler(webapp2.RequestHandler):
        def get(self):
            user = users.get_current_user()
            if user is not None:
                logout_url = users.create_logout_url(self.request.uri)
                template_context = {
                    'user': user.nickname(),
                    'logout_url': logout_url,
                }
                template = jinja_env.get_template('main.html')
                self.response.out.write(
                    template.render(template_context))
            else:
                login_url = users.create_login_url(self.request.uri)
                self.redirect(login_url)

Similar to how we get the login URL, the create_logout_url() method provided by the users service returns the absolute URI of the logout procedure, which we assign to the logout_url variable. We then create the template_context dictionary that contains the context values we want to pass to the template engine for the rendering process: we assign the nickname of the current user to the user key and the logout URL string to the logout_url key. The get_template() method of the jinja_env instance takes the name of the file that contains the HTML code and returns a Jinja2 template object.
To obtain the final output, we call the render() method on the template object, passing in the template_context dictionary, whose values are accessed by specifying their respective keys in the HTML file with the template syntax elements {{user}} and {{logout_url}}.

Handling forms

The main page of the application is supposed to list all the notes that belong to the current user, but there isn't any way to create such notes at the moment. We need to display a web form on the main page so that users can submit details and create a note. To display a form to collect data and create notes, we put the following HTML code right below the username and the logout link in the main.html template file:

    {% if note_title %}
    <p>Title: {{note_title}}</p>
    <p>Content: {{note_content}}</p>
    {% endif %}

    <h4>Add a new note</h4>
    <form action="" method="post">
        <div class="form-group">
            <label for="title">Title:</label>
            <input type="text" id="title" name="title" />
        </div>
        <div class="form-group">
            <label for="content">Content:</label>
            <textarea id="content" name="content"></textarea>
        </div>
        <div class="form-group">
            <button type="submit">Save note</button>
        </div>
    </form>

Before the form, a message is displayed only when the template context contains a variable named note_title. To do this, we use an if statement, executed between the {% if note_title %} and {% endif %} delimiters; similar delimiters are used to perform for loops or assign values inside a template. The action property of the form tag is empty; this means that upon form submission, the browser will perform a POST request to the same URL, which in this case is the home page URL. As our WSGI application maps the home page to the MainHandler class, we need to add a method to this class so that it can handle POST requests:

    class MainHandler(webapp2.RequestHandler):
        def get(self):
            user = users.get_current_user()
            if user is not None:
                logout_url = users.create_logout_url(self.request.uri)
                template_context = {
                    'user': user.nickname(),
                    'logout_url': logout_url,
                }
                template = jinja_env.get_template('main.html')
                self.response.out.write(
                    template.render(template_context))
            else:
                login_url = users.create_login_url(self.request.uri)
                self.redirect(login_url)

        def post(self):
            user = users.get_current_user()
            if user is None:
                self.error(401)

            logout_url = users.create_logout_url(self.request.uri)
            template_context = {
                'user': user.nickname(),
                'logout_url': logout_url,
                'note_title': self.request.get('title'),
                'note_content': self.request.get('content'),
            }
            template = jinja_env.get_template('main.html')
            self.response.out.write(
                template.render(template_context))

When the form is submitted, the handler is invoked and the post() method is called. We first check whether a valid user is logged in; if not, we raise an HTTP 401: Unauthorized error without serving any content in the response body. Since the HTML template is the same one served by the get() method, we still need to add the logout URL and the username to the context. In this case, we also store the data coming from the HTML form in the context; to access the form data, we call the get() method on the self.request object. The last three lines are boilerplate code that loads and renders the home page template.
We can move this code into a separate method to avoid duplication:

    def _render_template(self, template_name, context=None):
        if context is None:
            context = {}
        template = jinja_env.get_template(template_name)
        return template.render(context)

In the handler class, we will then use something like this to output the result of rendering the template:

    self.response.out.write(
        self._render_template('main.html', template_context))

We can try to submit the form and check whether the note title and content are actually displayed above the form.

Summary

Thanks to App Engine, we have already implemented a rich set of features with relatively little effort. We have discovered some more details about the webapp2 framework and its capabilities, implementing a nontrivial request handler. We have learned how to use the App Engine users service to provide user authentication. We have delved into some fundamental details of Datastore, and now we know how to structure data in grouped entities and how to effectively retrieve data with ancestor queries. In addition, we have created an HTML user interface with the help of the Jinja2 template library, learning how to serve static content such as CSS files.

Resources for Article:

Further resources on this subject:
- Machine Learning in IPython with scikit-learn [Article]
- Introspecting Maya, Python, and PyMEL [Article]
- Driving Visual Analyses with Automobile Data (Python) [Article]


Building the next generation Web with Meteor

Packt
05 Feb 2015
9 min read
This article by Fabian Vogelsteller, the author of Building Single-page Web Apps with Meteor, explores the full-stack framework of Meteor. Meteor is not just a JavaScript library such as jQuery or AngularJS. It's a full-stack solution that contains frontend libraries, a Node.js-based server, and a command-line tool. All this together lets us write large-scale web applications in JavaScript, on both the server and the client, using a consistent API.

Even though Meteor is quite young, a few companies such as https://lookback.io, https://respond.ly, and https://madeye.io already use Meteor in their production environments. If you want to see for yourself what's made with Meteor, take a look at http://madewith.meteor.com.

Meteor makes it easy for us to build web applications quickly, and it takes care of boring processes such as the linking, minifying, and concatenating of files. Here are a few highlights of what is possible with Meteor:

- We can build complex web applications amazingly fast, using templates that automatically update themselves when data changes
- We can push new code to all clients on the fly, while they are using our app
- Meteor's core packages come with a complete accounts solution, allowing seamless integration with Facebook, Twitter, and more
- Data is automatically synced across clients, keeping every client in the same state in almost real time
- Latency compensation makes our interface appear super fast while the server response happens in the background

With Meteor, we never have to link files with <script> tags in HTML. Meteor's command-line tool automatically collects the JavaScript and CSS files in our application's folder and links them in the index.html file, which is served to clients on the initial page load. This makes structuring our code in separate files as easy as creating them. The command-line tool also watches all files inside our application's folder for changes and rebuilds them on the fly when they change. Additionally, it starts a Meteor server that serves the app's files to the clients. When a file changes, Meteor reloads the site for every client while preserving its state; this is called a hot code reload. In production, the build process also concatenates and minifies our CSS and JavaScript files. By simply adding the less and coffee core packages, we can even write all our styles in LESS and our code in CoffeeScript with no extra effort. The command-line tool is also the tool for deploying and bundling our app so that we can run it on a remote server. Sounds awesome? Let's take a look at what's needed to use Meteor.

Adding basic packages

Packages in Meteor are libraries that can be added to our projects. The nice thing about Meteor packages is that they are self-contained units that run out of the box. They mostly either add some templating functionality or provide extra objects in the global namespace of our project. Packages can also add features to Meteor's build process, like the stylus package, which lets us write our app's style files in the stylus pre-processor syntax.

Writing templates in Meteor

Normally, when we build websites, we build the complete HTML on the server side. That is quite straightforward: every page is built on the server, then sent to the client, where JavaScript finally adds some animation or dynamic behavior. This is not so in single-page apps, where each page needs to be already in the client's browser so that it can be shown at will.
Meteor solves that problem by providing templates that exist in JavaScript and can be placed in the DOM at some point. These templates can have nested templates, allowing an easy way to reuse and structure an app's HTML layout. Since Meteor is so flexible in terms of folder and file structure, any *.html page can contain a template and will be parsed during Meteor's build process. This allows us to put all templates in the my-meteor-blog/client/templates folder; this folder structure is chosen because it helps us organize templates while our app grows. Meteor's template engine is called Spacebars, a derivative of the handlebars template engine. Spacebars is built on top of Blaze, which is Meteor's reactive DOM update engine.

Meteor and databases

Meteor currently uses MongoDB by default to store data on the server, although drivers are planned for relational databases, too. If you are adventurous, you can try one of the community-built SQL drivers, such as the numtel:mysql package from https://atmospherejs.com/numtel/mysql.

MongoDB is a NoSQL database. This means it is based on a flat document structure instead of a relational table structure. Its document approach makes it ideal for JavaScript, as documents are written in BSON, which is very similar to the JSON format. Meteor has a "database everywhere" approach, which means we have the same API to query the database on the client as on the server; yet, when we query the database on the client, we can only access data that has been published to that client.

MongoDB uses a data structure called a collection, which is the equivalent of a table in an SQL database. Collections contain documents, where each document has its own unique ID. These documents are JSON-like structures and can contain properties with values, even with multiple dimensions:

    {
      "_id": "W7sBzpBbov48rR7jW",
      "myName": "My Document Name",
      "someProperty": 123456,
      "aNestedProperty": {
        "anotherOne": "With another string"
      }
    }

These collections are used to store data in the server's MongoDB as well as in the client-side minimongo collections, an in-memory database mimicking the behavior of the real MongoDB. The MongoDB API lets us use a simple JSON-based query language to get documents from a collection. We can pass additional options to ask only for specific fields or to sort the returned documents. These are very powerful features, especially on the client side, for displaying data in various ways.

Data everywhere

In Meteor, we can use the browser console to update data, which means we can update the database from the client. This works because Meteor automatically syncs these changes to the server and updates the database accordingly. It happens because the autopublish and insecure core packages are added to our project by default. The autopublish package automatically publishes all documents to every client, whereas the insecure package allows every client to update database records by their _id field. Obviously, this works well for prototyping but is infeasible for production, as every client could manipulate our database. If we remove the insecure package, we need to add "allow" and "deny" rules to determine what a client is allowed to update and what it is not; otherwise, all updates will be denied.

Differences between client and server collections

Meteor has a "database everywhere" approach. This means it provides the same API on the client as on the server. The data flow is controlled using a publication/subscription model.
On the server sits the real MongoDB database, which stores data persistently. On the client, Meteor has a package called minimongo, a pure in-memory database mimicking most of MongoDB's query and update functions. Every time a client connects to its Meteor server, Meteor downloads the documents the client has subscribed to and stores them in its local minimongo database. From here, they can be displayed in a template or processed by functions. When the client updates a document, Meteor syncs it back to the server, where it is passed through any allow/deny functions before being persistently stored in the database. This also works the other way around: when a document in the server-side database changes, it is automatically synced to every client subscribed to it, keeping every connected client up to date.

Syncing data – the current Web versus the new Web

In the current Web, most pages are either static files hosted on a server or dynamically generated by a server on request. This is true for most server-side-rendered websites, for example, those written with PHP, Rails, or Django. Both of these techniques require no effort from the browser beyond displaying the pages; browsers used this way are therefore called thin clients.

In modern web applications, the idea of the browser has moved from thin clients to fat clients. This means most of the website's logic resides on the client, and the client asks for the data it needs. Currently, this is mostly done via calls to an API server. The API server returns data, commonly in JSON form, giving the client an easy way to handle and use it appropriately. Most modern websites are a mixture of thin and fat clients: normal pages are server-side rendered, and only some functionality, such as a chat box or news feed, is updated using API calls.

Meteor, however, is built on the idea that it's better to use the computation power of all the clients instead of a single server. A pure fat client, or single-page app, contains the entire logic of the website's frontend, which is sent down on the initial page load. The server then merely acts as a data source, sending only data to the clients. This can happen by connecting to an API and utilizing AJAX calls or, as with Meteor, using a model called publication/subscription. In this model, the server offers a range of publications, and each client decides which datasets it wants to subscribe to. Compared with AJAX calls, the developer doesn't have to take care of any downloading or uploading logic: the Meteor client syncs all of the data automatically in the background as soon as it subscribes to a specific dataset. When data on the server changes, the server sends the updated documents to the clients, and vice versa.

Summary

Meteor comes with more great ways of building pure JavaScript applications, such as simple routing and simple ways to make components, which can be packaged for others to use. Meteor's reactivity model, which allows you to rerun any function and template helper at will, enables consistent interfaces and simple dependency tracking, which is key for large-scale JavaScript applications. If you want to dig deeper, buy the book and learn how to build your own blog as a single-page web application, in a simple step-by-step fashion, using Meteor, the next-generation web framework!

Resources for Article:

Further resources on this subject:
- Quick start - creating your first application [article]
- Meteor.js JavaScript Framework: Why Meteor Rocks! [article]
- Marionette View Types and Their Use [article]


A ride through world's best ETL tool – Informatica PowerCenter

Packt
30 Dec 2014
25 min read
In this article, by Rahul Malewar, author of the book Learning Informatica PowerCenter 9.x, we will go through the basics of Informatica PowerCenter. Informatica Corporation (Informatica), a multi-million dollar company incorporated in February 1993, is an independent provider of enterprise data integration and data quality software and services. The company provides a variety of complex enterprise data integration products, including PowerCenter, PowerExchange, enterprise data integration, data quality, master data management, business-to-business (B2B) data exchange, application information lifecycle management, complex event processing, ultra messaging, and cloud data integration. Informatica PowerCenter is the most widely used Informatica tool across the globe for various data integration processes. It helps integrate data from almost any business system in almost any format; this flexibility to handle almost any data makes it the most widely used tool in the data integration world.

Informatica PowerCenter architecture

PowerCenter has a service-oriented architecture that provides the ability to scale services and share resources across multiple machines. This lets you access a single licensed installation on a remote machine from multiple machines. High-availability functionality helps minimize service downtime due to unexpected failures or scheduled maintenance in the PowerCenter environment. The Informatica architecture is divided into two sections: server and client. The server is the basic administrative unit of Informatica, where we configure all services, create users, and assign authentication. The repository, nodes, Integration Service, and code page are some of the important services we configure while working on the server side of Informatica PowerCenter. The client is the graphical interface provided to users; it includes PowerCenter Designer, PowerCenter Workflow Manager, PowerCenter Workflow Monitor, and PowerCenter Repository Manager.

The best place to download the Informatica software for training purposes is Oracle's E-Delivery website (https://edelivery.oracle.com). Once you download the files, extract the zipped archives; after extraction, install the server part first and then the client part of PowerCenter. The minimum requirement for installing Informatica PowerCenter is a database installed on your machine, because Informatica uses database space to store system-related information and the metadata of the code you develop in the client tools.

Informatica PowerCenter client tools

The Informatica PowerCenter Designer client tool covers working with source files and source tables, and likewise with targets. The Designer tool allows you to import or create flat files and relational database tables. Informatica PowerCenter supports both types of flat files: delimited and fixed-width. In delimited files, the values are separated from each other by a delimiter; any character or number can be used as the delimiter, but usually special characters are chosen for better interpretation. In delimited files, the width of each field is not mandatory, since each value is separated from the next by the delimiter. In fixed-width files, the width of each field is fixed, and the values are separated from each other by the fixed size of the columns defined.
There can be issues in extracting the data if the size of each column is not maintained properly. The PowerCenter Designer tool allows you to create mappings using sources, targets, and transformations; mappings contain sources, targets, and transformations linked to each other. A group of transformations that can be reused is called a mapplet, another important aspect of the Informatica tool. Transformations are the most important aspect of Informatica; they allow you to manipulate the data based on your requirements. There are various types of transformations available in Informatica PowerCenter, and every transformation performs a specific function.

Various transformations in Informatica PowerCenter

The following are the various transformations in Informatica PowerCenter:

Expression transformation is used for row-wise manipulation. For any type of manipulation you wish to do on an individual record, use the Expression transformation. It accepts row-wise data, manipulates it, and passes it to the target; the transformation receives the data on input ports and sends the data out from output ports. Use the Expression transformation for any row-wise calculation: for example, if you want to concatenate names, compute total salary, or convert strings to upper case.

Aggregator transformation is used for calculations using aggregate functions on a column, as opposed to the Expression transformation, which is used for row-wise manipulation. You can use aggregate functions such as SUM, AVG, MAX, and MIN in the Aggregator transformation. When you use the Aggregator transformation, Integration Service stores the data temporarily in cache memory. The cache is created because data flows row-wise in Informatica while the calculations required in the Aggregator transformation are column-wise; unless we store the data temporarily in cache, we cannot perform the aggregate calculations to get the result. Using the Group By option in the Aggregator transformation, you can get the result of an aggregate function per group. It is always recommended to pass sorted input to the Aggregator transformation, as this enhances performance: with sorted input, Integration Service stores less data in cache and the Aggregator passes the result of each group as soon as the data for that group has been received. When you pass unsorted data, the Aggregator transformation stores all of the data in cache, which takes more time. Note that the Aggregator transformation does not itself sort the data; if you have unsorted data, use a Sorter transformation to sort it first and then pass the sorted data to the Aggregator transformation.

Sorter transformation is used to sort the data in ascending or descending order based on single or multiple keys. Apart from ordering the data, you can also use the Sorter transformation to remove duplicates using the distinct option in its properties; it can remove duplicates only if the complete record is duplicated, not just a particular column.

Filter transformation is used to remove unwanted records from the mapping. You define the filter condition in the Filter transformation; based on it, records are either rejected or passed further along the mapping. The default condition in the Filter transformation is TRUE.
Based on the condition defined, if a record returns True, the Filter transformation allows the record to pass; every record that returns False is dropped. It is always recommended to use the Filter transformation as early as possible in the mapping for better performance.

Router transformation is a single-input-group, multiple-output-group transformation. A Router can be used in place of multiple Filter transformations: it accepts the data once through its input group and, based on the output groups you define, each with its own filter condition, sends the data out of multiple output ports. It is always recommended to use a Router in place of multiple Filters in a mapping to enhance performance.

Rank transformation is used to get the top or bottom specific number of records based on a key. When you create a Rank transformation, a default output port called RANKINDEX comes with it; using the RANKINDEX port is not mandatory.

Sequence Generator transformation is used to generate a sequence of unique numbers, based on the properties defined in the transformation: the start value, the increment-by value, and the end value. The Sequence Generator transformation has only two ports, NEXTVAL and CURRVAL, both of which are output ports; it has no input port, and you cannot add or delete any port. It is recommended to always use the NEXTVAL port first and, once it is utilized, use the CURRVAL port. You can define the value of CURRVAL in the properties of the Sequence Generator transformation.

Joiner transformation is used to join two heterogeneous sources, though you can also join data from the same source type. The basic criterion for joining the data is a matching column in both sources. The Joiner transformation has two pipelines: one is called master and the other detail. We do not have left or right joins as in SQL databases. It is always recommended to make the table with fewer records the master and the other the detail. This is because Integration Service picks the data from the master source and scans the corresponding records in the detail table; with fewer records in the master table, fewer scans happen, which enhances performance. The Joiner transformation supports four types of joins: normal join, full outer join, master outer join, and detail outer join.

Union transformation is used to merge data from multiple sources. Union is a multiple-input, single-output transformation, which is the opposite of the Router transformation discussed earlier. The basic criterion for using the Union transformation is that the data coming from the multiple sources must have matching data types; otherwise, the Union transformation will not work. It merges the data coming from multiple sources and does not remove duplicates; that is, it acts like a UNION ALL in SQL statements. Union reads the data concurrently from the multiple sources and processes it; you can use heterogeneous sources to merge the data.

Source Qualifier transformation acts as a virtual source in Informatica. When you drag a relational table or flat file into the Mapping Designer, a Source Qualifier transformation comes along.
The Source Qualifier is the point where Informatica processing actually starts; the extraction process begins from the Source Qualifier.

Lookup transformation is used to look up a source, Source Qualifier, or target to get the relevant data; you can look up flat files or relational tables. The Lookup transformation works along similar lines to the Joiner, with a few differences; for example, Lookup does not require two sources. Lookup transformations can be connected or unconnected. The Lookup transformation extracts data from the lookup table or file based on the lookup condition. When you create a Lookup transformation, you can configure it to cache the data; caching makes the processing faster, since the data is stored internally once the cache is created. The transformation then caches the data from the file or table once and, based on the condition defined, sends the output value. Since the data is stored internally, processing becomes faster, as the lookup condition does not need to be checked against the file or database; Integration Service queries the cache memory instead. The cache is created automatically and is also deleted automatically once the processing is complete.

The Lookup transformation has four different types of ports:

- Input ports (I) receive data from other transformations. These ports are used in the lookup condition; you need at least one input port.
- Output ports (O) pass data out of the Lookup transformation to other transformations.
- Lookup ports (L) are the ports for which you wish to bring data into the mapping. Each column is assigned as a lookup and output port when you create the Lookup transformation. If you delete a lookup port from a flat file lookup source, the session will fail. If you delete a lookup port from a relational lookup table, Integration Service extracts the data only for the remaining lookup ports, which helps reduce the data extracted from the lookup source.
- Return port (R) is used only in an unconnected Lookup transformation. It indicates which data you wish to return from the lookup; you can define only one port as the return port. It is not used in a connected Lookup transformation.

Cache is temporary memory created when you execute a process; it is created automatically when the process starts and deleted automatically once the process is complete. The amount of cache memory is determined by the properties you define at the transformation or session level. You usually leave the property at its default, so the cache can grow as required; if the size required for caching the data exceeds the cache size defined, the process fails with an overflow error. There are different types of caches available for the Lookup transformation.

You can set the session property to create the cache either sequentially or concurrently. When you choose to create the cache sequentially, Integration Service caches the data row by row as records enter the Lookup transformation: when the first record enters, the lookup cache is created and stores the matching record from the lookup table or file. This way, the cache stores only matching data, which saves cache space by not storing unnecessary data.
When you choose to create the cache concurrently, Integration Service does not wait for the data to flow from the source; it caches the complete data first and only then allows data to flow from the source. Concurrent caching performs better than sequential caching, since the scanning happens internally against the data stored in cache.

You can configure the cache to save the data permanently. By default, the cache is created as non-persistent; that is, it is deleted once the session run is complete. If the lookup table or file does not change across session runs, you can reuse the existing persistent cache.

A cache is said to be static if it does not change with the changes happening in the lookup table; the static cache is not synchronized with the lookup table. By default, Integration Service creates a static cache: the lookup cache is created as soon as the first record enters the Lookup transformation, and Integration Service does not update the cache while processing the data.

A cache is said to be dynamic if it changes with the changes happening in the lookup table; the dynamic cache is synchronized with the lookup table. You can make the cache dynamic from the Lookup transformation's properties. The lookup cache is created as soon as the first record enters the Lookup transformation, and Integration Service keeps updating the cache while processing the data: it marks a new row inserted into the dynamic cache as insert, a changed row as update, and an unchanged row as unchanged.

Update Strategy transformation is used to INSERT, UPDATE, DELETE, or REJECT records based on conditions defined in the mapping. It is mostly used when you design mappings for slowly changing dimensions (SCD). When you implement SCD, you decide how you wish to maintain historical data alongside current data: no history, complete history, or partial history. You can achieve this either with the property defined in the Session task or with the Update Strategy transformation. When you use the Session task, you instruct Integration Service to treat all records the same way, that is, either insert, update, or delete. When you use the Update Strategy transformation in the mapping, the control moves from the Session task to the mapping, and records can be inserted, updated, deleted, or rejected individually based on the requirement. You define the following functions to perform the corresponding operations:

- DD_INSERT: Use this when you wish to insert the record; it is also represented by the numeral 0.
- DD_UPDATE: Use this when you wish to update the record; it is also represented by the numeral 1.
- DD_DELETE: Use this when you wish to delete the record; it is also represented by the numeral 2.
- DD_REJECT: Use this when you wish to reject the record; it is also represented by the numeral 3.

Normalizer transformation is used in place of the Source Qualifier transformation when you wish to read data from a COBOL copybook source. The Normalizer transformation is also used to convert column-wise data to row-wise data, similar to the transpose feature of MS Excel; you can use this feature if your source is a COBOL copybook file or relational database tables.
Normalizer transformation is used in place of the Source Qualifier transformation when you wish to read data from a COBOL copybook source. It is also used to convert column-wise data to row-wise data, similar to the transpose feature of MS Excel; you can use this feature if your source is a COBOL copybook file or a relational database table. When the Normalizer transformation converts columns to rows, it also generates an index for each converted row.

A stored procedure is a database component; Informatica uses stored procedures in much the same way as database tables. Stored procedures are sets of SQL instructions that take a set of input values and return output values. Just as you can import or create database tables, you can import or create stored procedures in a mapping; to use a stored procedure in a mapping, it must already exist in the database. Similar to the Lookup transformation, a Stored Procedure transformation can be connected or unconnected in Informatica. When you use a connected stored procedure, you pass values to it through links; when you use an unconnected stored procedure, you pass values to it using the :SP function.

Transaction Control transformation allows you to commit or roll back individual records based on a condition. By default, the Integration Service commits data based on the properties you define at the Session task level: using the commit interval property, it commits or rolls back the data in the target. Suppose you define a commit interval of 10,000; the Integration Service will commit the data after every 10,000 records. When you use a Transaction Control transformation, you gain control over each record. You define the condition in the expression editor of the Transaction Control transformation; when you run the process, data enters the transformation row by row, and the transformation evaluates each row to decide whether to commit or roll back the data.
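For illustration, a Transaction Control condition might look like the following sketch; DEPT_ID and PREV_DEPT_ID are assumed ports (the latter carrying the previous row's value), while TC_COMMIT_BEFORE and TC_CONTINUE_TRANSACTION are built-in transaction control variables:

-- illustrative only: commit whenever the department changes, otherwise continue
IIF(DEPT_ID != PREV_DEPT_ID, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)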
Classification of Transformations

The transformations we have discussed are classified along two axes: active/passive and connected/unconnected. The active/passive classification is based on the number of records at the input and output ports of the transformation. If a transformation does not change the number of records between its input and output ports, it is said to be passive; if it changes the number of records, it is said to be active. A transformation that changes the sequence of records passing through it is also active, as in the case of the Union transformation. A transformation is said to be connected if it is linked to a source, a target, or another transformation by at least one link; if it has no such link, it is unconnected. Only the Lookup and Stored Procedure transformations can be either connected or unconnected; all other transformations are connected.

Advanced features of the Designer screen

Among the advanced features of the PowerCenter Designer tool, the debugger helps you find errors in your mappings. Informatica PowerCenter provides this utility so that you can easily locate issues in a mapping you have created; using the debugger, you can watch every record flow across the transformations. Another feature is the target load plan, which allows you to load data into multiple targets in the same mapping while maintaining their constraints. Reusable transformations allow you to reuse a transformation across multiple mappings; just as sources and targets are reusable components, transformations can be reused as well.

When you work with any technology, it is always advisable to keep your code dynamic, that is, to use as few hardcoded values as possible. It is recommended that you use parameters or variables so you can pass values in without frequently changing the code itself. In Informatica, this functionality is achieved using a parameter file. The value of a variable can change between session runs, whereas the value of a parameter remains constant across session runs. The difference is subtle, so define a parameter or a variable appropriately, as per your requirements.
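As a hedged sketch, a parameter file is a plain text file with a section header identifying the folder, workflow, and session, followed by name-value pairs; every name below (folder, workflow, session, parameter, and connection) is illustrative:

[MyFolder.WF:wf_load_customers.ST:s_m_load_customers]
$$LoadDate=2014-12-22
$DBConnection_Source=Ora_Src_Conn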
Informatica PowerCenter allows you to compare objects present within a repository. You can compare sources, targets, transformations, mapplets, and mappings in PowerCenter Designer under Source Analyzer, Target Designer, Transformation Developer, Mapplet Designer, and Mapping Designer respectively. You can compare objects in the same repository or across multiple repositories.

The tracing level in Informatica defines the amount of data you wish to write to the session log when you execute the workflow. It is a very important aspect of Informatica, as it helps in analyzing errors and finding bugs in the process. You can define the tracing level in every transformation; the option is present in each transformation's properties window. There are four types of tracing level available:

Normal: Informatica stores status information, information about errors, and information about skipped rows. You get detailed information, but not at the individual row level.
Terse: Informatica stores error information and information about rejected records. Terse tracing occupies less space than normal tracing.
Verbose initialization: In addition to everything stored at the normal level, Informatica stores process details related to startup and details about the index and data files created, along with more details of the transformation process. This level takes more space than normal and terse.
Verbose data: This is the most detailed tracing level; it occupies the most space and takes the longest time of the four. It stores row-level data in the session log, writes truncation information whenever it truncates data, and also writes data to the error log if you enable row error logging.

The default tracing level is normal. You can change the tracing level to terse to enhance performance. The tracing level can be defined at the individual transformation level, or you can override it by defining the tracing level at the session level.

Informatica PowerCenter Workflow Manager

The Workflow Manager screen is the second and last phase of our development work. In the Workflow Manager, session tasks and workflows are created and used to execute mappings. The Workflow Manager screen also allows you to work with various connections, such as relational, FTP, and so on. Essentially, the Workflow Manager contains a set of instructions that we define as a workflow, and the basic building blocks of a workflow are tasks. Just as we have multiple transformations on the Designer screen, we have multiple tasks on the Workflow Manager screen. When you create a workflow, you add tasks to it as per your requirements and execute the workflow to see its status in the monitor.

A workflow is a combination of multiple tasks connected with links that trigger in the proper sequence to execute a process. Every workflow contains a Start task along with other tasks. When you execute the workflow, you actually trigger the Start task, which in turn triggers the other tasks connected in the flow. Every task performs a specific function, and you need to choose tasks based on the functionality you wish to achieve.

Various tasks in Workflow Manager

The following are the tasks available in the Workflow Manager:

Session task is used to execute a mapping; each session task can execute a single mapping. You need to define the path/connection of the source and target used in the mapping, so that the session can extract data from the defined path and pass it to the mapping for processing.
Email task is used to send success or failure email notifications. You can configure your Outlook or mailbox with the email task to send notifications directly.
Command task is used to execute Unix scripts/commands or Windows commands.
Timer task is used to add a time gap or delay between two tasks. Timer tasks have properties related to absolute time and relative time.
Assignment task is used to assign a value to a workflow variable.
Control task is used to control the flow of the workflow by stopping or aborting it in case of an error; you can control the flow of the complete workflow using the control task.
Decision task is used to check the status of multiple tasks and thereby control the execution of the workflow. A link, by contrast, can only check the status of the immediately preceding task.
Event wait task is used to wait for a particular event to occur; it is usually used as a file watcher task. Using the event wait task, we can watch for a particular file and then trigger the next task.
Event raise task is used to trigger a particular event defined in the workflow.
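As a hedged example, a Decision task (or a link condition) typically tests the predefined status variable of another task; the session name s_m_load_customers is illustrative:

-- illustrative only: evaluates to true when the named session succeeded
$s_m_load_customers.Status = SUCCEEDED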
Advanced Workflow Manager

The Workflow Manager screen has some very important features, such as scheduling and incremental aggregation, which allow easier and more convenient processing of data. Scheduling lets you run a workflow at a specified time, so you need not trigger it manually every time; the schedule does the needful. Incremental aggregation and partitioning are advanced features that allow you to process data faster.

When you run a workflow, the Integration Service extracts data row by row from the source path/connection you defined in the session task and passes it through the mapping. The data reaches the target through the transformations you defined in the mapping. Data always flows row by row in Informatica, no matter what calculation or manipulation you perform; so if you have 10 records in the source, there will be 10 source-to-target flows while the process executes.

Informatica PowerCenter Workflow Monitor

The Workflow Monitor screen allows you to monitor the workflows executed from the Workflow Manager. It lets you check the status and log files for a workflow; using the generated logs, you can easily find and rectify errors. The Workflow Monitor also shows statistics for the number of records extracted from the source and the number of records loaded into the target, along with statistics on error records and bad records.

Informatica PowerCenter Repository Manager

The Repository Manager screen is the fourth client screen, which is basically used for migration (deployment) purposes. This screen is also used for some administration-related activities, such as configuring the server with the client and creating users.

Performance Tuning in Informatica PowerCenter

Performance tuning covers the optimization of the various components of the Informatica PowerCenter tool, such as sources, targets, mappings, sessions, and systems. At a high level, it involves two stages: finding the issues, called bottlenecks, and resolving them. Informatica PowerCenter has features such as pushdown optimization and partitioning for better performance. With well-defined steps and coding best practices, performance can be enhanced drastically.

Slowly Changing Dimensions

Using your understanding of the different client tools, you can implement the data warehousing concept called slowly changing dimensions (SCDs). Informatica PowerCenter provides wizards that allow you to easily create the different types of SCDs, that is, SCD1, SCD2, and SCD3.

Type 1 Dimension mapping (SCD1): This keeps only current data and does not maintain historical data.
Type 2 Dimension/Version Number mapping (SCD2): This keeps current as well as historical data in the table. It allows you to insert new and changed records using a new column (PM_VERSION_NUMBER) that maintains a version number in the table to track changes. We use a new column, PM_PRIMARYKEY, to maintain the history.
Type 2 Dimension/Flag mapping: This keeps current as well as historical data in the table. It allows you to insert new and changed records using a new column (PM_CURRENT_FLAG) that maintains a flag in the table to track changes. We use a new column, PRIMARY_KEY, to maintain the history.
Type 2 Dimension/Effective Date Range mapping: This keeps current as well as historical data in the table. It allows you to insert new and changed records using two new columns (PM_BEGIN_DATE and PM_END_DATE) that maintain a date range in the table to track changes. We use a new column, PRIMARY_KEY, to maintain the history.
Type 3 Dimension mapping (SCD3): This keeps current data along with partial history, which is maintained by adding a new column.

Summary

With this, we have discussed the complete PowerCenter tool in brief. PowerCenter is a good fit for data of any size and type, and it offers compatibility with a wide range of files and databases for processing. Its transformations allow you to manipulate any type of data in any form you wish, and its advanced features simplify your work by providing convenient options. Learning PowerCenter properly can offer you a great career path, as the tool is in huge demand in the job market and is one of the highly paid technologies in IT. Just grab a book and start walking the path; the end will be a great career. We are always available to help: for any help with installation or any issues related to PowerCenter, you can reach us at info@dw-learnwell.com.

Resources for Article: Further resources on this subject: Building Mobile Apps [article] Adding a Geolocation Trigger to the Salesforce Account Object [article] Introducing SproutCore [article]
API with MongoDB and Node.js

Packt
22 Dec 2014
26 min read
In this article by Fernando Monteiro, author of the book Learning Single-page Web Application Development, we will see how to build a solid foundation for our API. Our main aim is to discuss the techniques used to build rich web applications with the SPA approach. We will be covering the following topics in this article:

The working of an API
Boilerplates and generators
The speakers API concept
Creating the package.json file
The Node server with server.js
The model with the Mongoose schema
Defining the API routes
Using MongoDB in the cloud
Inserting data with the Postman Chrome extension

(For more resources related to this topic, see here.)

The working of an API

An API works through communication between different pieces of code, defining specific behavior for certain objects on an interface. That is, an API connects several functions on one website (such as search, images, news, authentication, and so on) so that they can be used in other applications.

Operating systems also have APIs, and they serve the same purpose. Windows, for example, has APIs such as the Win16 API, Win32 API, and Telephony API in all its versions. When you run a program that involves some process of the operating system, it is likely that a connection is made with one or more Windows APIs.

To clarify the concept of an API, we will go through some examples of how it works. On Windows, an application can use the system clock to display the same function within the program. It then associates a behavior with a given clock time in another application, for example, using the Time/Clock API from Windows to use the clock functionality in your own application. Another example is when you use the Android SDK to build mobile applications. When you use the device's GPS, you are interacting with the API (android.location) to display the user's location on the map through another API, in this case, the Google Maps API.

When it comes to web APIs, the possibilities are even greater. There are many services that expose their code so that it can be used on other websites. Perhaps the best example is the Facebook API; several other websites use this service within their pages, for instance a like button, share, or even authentication.

An API is a set of programming patterns and instructions to access a software application based on the Web. So, when you access the page of a beer store in your town, you can log in with your Facebook account; this is accomplished through the API. Using APIs, software developers and web programmers can create beautiful programs and pages filled with content for their users.
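As a quick, hedged illustration of consuming a web API from the browser, the snippet below uses jQuery (assuming it is loaded on the page) to request JSON from an endpoint; the /api/speakers URL anticipates the API we build later in this article and is used here purely for illustration:

// Illustrative only: fetch JSON from a hypothetical API endpoint
$.getJSON('/api/speakers', function (data) {
  // data is the parsed JSON response from the server
  console.log('Received ' + data.length + ' speakers');
});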
Boilerplates and generators

In a MEAN stack environment, our ecosystem is infinitely diverse, and we can find excellent alternatives to start the construction of our API. At hand, we have everything from simple boilerplates to complex code generators that can be used with other tools in an integrated way, or even alone. Boilerplates are usually a group of tested code files that provide the basic structure for the main goal, that is, the foundation of a web project. Besides saving us from common tasks such as assembling the basic structure of the code and organizing the files, boilerplates often come with a number of scripts that make life easier on the frontend. Let's describe some alternatives that we consider good starting points for developing APIs with the Express framework, the MongoDB database, the Node server, and AngularJS for the frontend.

A somewhat deeper knowledge of JavaScript might be necessary for a complete understanding of the concepts covered here, so we will try to present them as clearly as possible. It is important to note that everything is still very new when we talk about Node and its ecosystem, and scalability, performance, and maintenance remain risk areas. Bear in mind also that technologies such as Ruby on Rails, Scala, and the Play framework have a stronger reputation for building large and maintainable web applications, but without a doubt, Node and JavaScript will conquer their space very soon. That being said, we present some alternatives for an initial kickoff with MEAN, but remember that our main focus is on SPAs and not directly on the MEAN stack.

Hackathon starter

Hackathon starter is highly recommended for a quick start with Node development. This is because the boilerplate has the main characteristics necessary for developing applications with the Express framework and building RESTful APIs, and it ships with no MVC/MVVM frontend framework as standard, just the Bootstrap UI framework. Thus, you are free to choose the frontend framework of your choice, as you will not need to refactor the boilerplate to meet your needs. Other important characteristics are the use of the latest version of the Express framework, heavy use of Jade templates, and some middleware such as Passport, a Node module to manage authentication with various social network sites such as Twitter, Facebook, LinkedIn, GitHub, Last.fm, Foursquare, and many more. It provides the necessary boilerplate code to start your projects very fast, and as we said before, it is very simple to install; just clone the open source Git repository:

git clone --depth=1 https://github.com/sahat/hackathon-starter.git myproject

Run the npm install command inside the project folder:

npm install

Then, start the Node server:

node app.js

Remember, it is very important to have your local database up and running, in this case MongoDB; otherwise, the node app.js command will return the following error:

Error connecting to database: failed to connect to [localhost: 27017]

MEAN.io or MEAN.JS

This is perhaps the most popular boilerplate currently available. MEAN.JS is a fork of the original MEAN.io project; both are open source and share a peculiar similarity: they have the same author. You can check for more details at http://meanjs.org/. However, there are some differences, and we consider MEAN.JS to be a more complete and robust environment. It has a better organized directory structure, subdivided modules, and better scalability thanks to its adoption of vertical module development. To install it, follow the same steps as previously:

Clone the repository to your machine:

git clone https://github.com/meanjs/mean.git

Go to the installation directory and type the following on your terminal:

npm install

Finally, execute the application, this time with the Grunt.js command:

grunt

If you are on Windows, type the following command:

grunt.cmd

Now, you have your app up and running on your localhost. The most common problem when we need to scale a SPA is undoubtedly the structure of directories and how we manage all of the frontend JavaScript files and HTML templates using MVC/MVVM. Later, we will see an alternative way to deal with this in a large-scale application; for now, let's look at the module structure adopted by MEAN.JS:

Note that MEAN.JS leaves more flexibility to the AngularJS framework to deal with the MVC approach for the frontend application, as we can see inside the public folder.
Also, note the modules approach; each module has its own structure, keeping some conventions for controllers, services, views, config, and tests. This is very useful for team development, so keep all the structure well organized. It is a complete solution that makes use of additional modules such as passport, swig, mongoose, karma, among others. The Passport module Some things about the Passport module must be said; it can be defined as a simple, unobtrusive authentication module. It is a powerful middleware to use with Node; it is very flexible and also modular. It can also adapt easily within applications that use the Express. It has more than 140 alternative authentications and support session persistence; it is very lightweight and extremely simple to be implemented. It provides us with all the necessary structure for authentication, redirects, and validations, and hence it is possible to use the username and password of social networks such as Facebook, Twitter, and others. The following is a simple example of how to use local authentication: var passport = require('passport'), LocalStrategy = require('passport-local').Strategy, User = require('mongoose').model('User');   module.exports = function() { // Use local strategy passport.use(new LocalStrategy({ usernameField: 'username', passwordField: 'password' }, function(username, password, done) { User.findOne({    username: username }, function(err, user) { if (err) { return done(err); } if (!user) {    return done(null, false, {    message: 'Unknown user'    }); } if (!user.authenticate(password)) {    return done(null, false, {    message: 'Invalid password'    }); } return done(null, user); }); } )); }; Here's a sample screenshot of the login page using the MEAN.JS boilerplate with the Passport module: Back to the boilerplates topic; most boilerplates and generators already have the Passport module installed and ready to be configured. Moreover, it has a code generator so that it can be used with Yeoman, which is another essential frontend tool to be added to your tool belt. Yeoman is the most popular code generator for scaffold for modern web applications; it's easy to use and it has a lot of generators such as Backbone, Angular, Karma, and Ember to mention a few. More information can be found at http://yeoman.io/. Generators Generators are for the frontend as gem is for Ruby on Rails. We can create the foundation for any type of application, using available generators. Here's a console output from a Yeoman generator: It is important to bear in mind that we can solve almost all our problems using existing generators in our community. However, if you cannot find the generator you need, you can create your own and make it available to the entire community, such as what has been done with RubyGems by the Rails community. RubyGem, or simply gem, is a library of reusable Ruby files, labeled with a name and a version (a file called gemspec). Keep in mind the Don't Repeat Yourself (DRY) concept; always try to reuse an existing block of code. Don't reinvent the wheel. One of the great advantages of using a code generator structure is that many of the generators that we have currently, have plenty of options for the installation process. With them, you can choose whether or not to use many alternatives/frameworks that usually accompany the generator. The Express generator Another good option is the Express generator, which can be found at https://github.com/expressjs/generator. 
In all versions up to Express Version 4, the generator was already pre-installed and served as a scaffold to begin development. However, in the current version, it was removed and now must be installed as a supplement. They provide us with the express command directly in terminal and are quite useful to start the basic settings for utilization of the framework, as we can see in the following commands: create : . create : ./package.json create : ./app.js create : ./public create : ./public/javascripts create : ./public/images create : ./public/stylesheets create : ./public/stylesheets/style.css create : ./routes create : ./routes/index.js create : ./routes/users.js create : ./views create : ./views/index.jade create : ./views/layout.jade create : ./views/error.jade create : ./bin create : ./bin/www   install dependencies:    $ cd . && npm install   run the app:    $ DEBUG=express-generator ./bin/www Very similar to the Rails scaffold, we can observe the creation of the directory and files, including the public, routes, and views folders that are the basis of any application using Express. Note the npm install command; it installs all dependencies provided with the package.json file, created as follows: { "name": "express-generator", "version": "0.0.1", "private": true, "scripts": {    "start": "node ./bin/www" }, "dependencies": {    "express": "~4.2.0",    "static-favicon": "~1.0.0",    "morgan": "~1.0.0",    "cookie-parser": "~1.0.1",    "body-parser": "~1.0.0",    "debug": "~0.7.4",    "jade": "~1.3.0" } } This has a simple and effective package.json file to build web applications with the Express framework. The speakers API concept Let's go directly to build the example API. To be more realistic, let's write a user story similar to a backlog list in agile methodologies. Let's understand what problem we need to solve by the API. The user history We need a web application to manage speakers on a conference event. The main task is to store the following speaker information on an API: Name Company Track title Description A speaker picture Schedule presentation For now, we need to add, edit, and delete speakers. It is a simple CRUD function using exclusively the API with JSON format files. Creating the package.json file Although not necessarily required at this time, we recommend that you install the Webstorm IDE, as we'll use it throughout the article. Note that we are using the Webstorm IDE with an integrated environment with terminal, Github version control, and Grunt to ease our development. However, you are absolutely free to choose your own environment. From now on, when we mention terminal, we are referring to terminal Integrated WebStorm, but you can access it directly by the chosen independent editor, terminal for Mac and Linux and Command Prompt for Windows. Webstorm is very useful when you are using a Windows environment, because Windows Command Prompt does not have the facility to copy and paste like Mac OS X on the terminal window. Initiating the JSON file Follow the steps to initiate the JSON file: Create a blank folder and name it as conference-api, open your terminal, and place the command: npm init This command will walk you through creating a package.json file with the baseline configuration for our application. Also, this file is the heart of our application; we can control all the dependencies' versions and other important things like author, Github repositories, development dependencies, type of license, testing commands, and much more. 
Almost all commands are questions that guide you to the final process, so when we are done, we'll have a package.json file very similar to this: { "name": "conference-api", "version": "0.0.1", "description": "Sample Conference Web Application", "main": "server.js", "scripts": {    "test": "test" }, "keywords": [    "api" ], "author": "your name here", "license": "MIT" } Now, we need to add the necessary dependencies, such as Node modules, which we will use in our process. You can do this in two ways, either directly via terminal as we did here, or by editing the package.json file. Let's see how it works on the terminal first; let's start with the Express framework. Open your terminal in the api folder and type the following command: npm install express@4.0.0 –-save This command installs the Express module, in this case, Express Version 4, and updates the package.json file and also creates dependencies automatically, as we can see: { "name": "conference-api", "version": "0.0.1", "description": "Sample Conference Web Application", "main": "server.js", "scripts": {    "test": "test" }, "keywords": [    "api" ], "author": "your name here", "license": "MIT", "dependencies": {    "express": "^4.0.0" } } Now, let's add more dependencies directly in the package.json file. Open the file in your editor and add the following lines: { "name": "conference-api", "version": "0.0.1", "description": "Sample Conference Web Application", "main": "server.js", "scripts": {    "test": "test" }, "keywords": [    "api" ], "author": "your name here", "license": "MIT", "engines": {        "node": "0.8.4",        "npm": "1.1.49" }, "dependencies": {    "body-parser": "^1.0.1",    "express": "^4.0.0",    "method-override": "^1.0.0",    "mongoose": "^3.6.13",    "morgan": "^1.0.0",    "nodemon": "^1.2.0" }, } It's very important when you deploy your application using some services such as Travis Cl or Heroku hosting company. It's always good to set up the Node environment. Open the terminal again and type the command: npm install You can actually install the dependencies in two different ways, either directly into the directory of your application or globally with the -g command. This way, you will have the modules installed to use them in any application. When using this option, make sure that you are the administrator of the user machine, as this command requires special permissions to write to the root directory of the user. At the end of the process, we'll have all Node modules that we need for this project; we just need one more action. Let's place our code over a version control, in our case Git. More information about the Git can be found at http://git-scm.com however, you can use any version control as subversion or another. We recommend using Git, as we will need it later to deploy our application in the cloud, more specificly, on Heroku cloud hosting. At this time, our project folder must have the same structure as that of the example shown here: We must point out the utilization of an important module called the Nodemon module. Whenever a file changes it restarts the server automatically; otherwise, you will have to restart the server manually every time you make a change to a file, especially in a development environment that is extremely useful, as it constantly updates our files. Node server with server.js With this structure formed, we will start the creation of the server itself, which is the creation of a main JavaScript file. 
The most common name used is server.js, but the name app.js is also very common, especially in older versions. Let's add this file to the root folder of the project and start with the basic server settings. There are many ways to configure our server, and you will probably find the one that suits you best; as we are still in the initial stages, we will keep to the basics. Open your editor and type in the following code:

// Import the Modules installed to our server
var express = require('express');
var bodyParser = require('body-parser');

// Start the Express web framework
var app = express();

// configure app
app.use(bodyParser());

// where the application will run
var port = process.env.PORT || 8080;

// Import Mongoose
var mongoose = require('mongoose');

// connect to our database
// you can use your own MongoDB installation at: mongodb://127.0.0.1/databasename
mongoose.connect('mongodb://username:password@kahana.mongohq.com:10073/node-api');

// Start the Node Server
app.listen(port);
console.log('Magic happens on port ' + port);

Note that the line making the connection to MongoDB on our localhost appears only as a comment, because we are using an instance of MongoDB in the cloud. In our case, we use MongoHQ, a MongoDB hosting service; later on, we will see how to connect to MongoHQ.

Model with the Mongoose schema

Now, let's create our model, using a Mongoose schema to map our speakers on MongoDB.

// Import the Mongoose module.
var mongoose = require('mongoose');
var Schema = mongoose.Schema;

// Set the data types, properties and default values to our Schema.
var SpeakerSchema = new Schema({
  name:        { type: String, default: '' },
  company:     { type: String, default: '' },
  title:       { type: String, default: '' },
  description: { type: String, default: '' },
  picture:     { type: String, default: '' },
  schedule:    { type: String, default: '' },
  createdOn:   { type: Date,   default: Date.now}
});
module.exports = mongoose.model('Speaker', SpeakerSchema);

Note that on the first line, we added the Mongoose module using the require() function. Our schema is pretty simple; on the left-hand side, we have the property names and on the right-hand side, the data types. We also set the default values to empty strings, but if you want, you can set different values. The next step is to save this file to our project folder. For this, let's create a new directory named server; then, inside it, create another folder called models and save the file as speaker.js. Our project folder also contains a README.md file, which is used for GitHub; as we are using the Git version control, we host our files on GitHub.
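Before we wire up the routes, here is a quick, hedged sketch of how this model could be exercised directly from a Node script, assuming the server/models/speaker.js file above and an open Mongoose connection; the speaker data is illustrative:

// Illustrative only: create and save one document with the Speaker model
var Speaker = require('./server/models/speaker');

var speaker = new Speaker({
  name: 'Jane Doe',        // assumed sample values
  company: 'Example Corp',
  title: 'Intro to APIs'
});

speaker.save(function (err) {
  if (err) {
    return console.error(err);
  }
  console.log('speaker saved');
});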
Defining the API routes

One of the most important aspects of our API is the set of routes we use to create, read, update, and delete our speakers. Our routes are based on the HTTP verb used to access our API, as shown in the following examples:

To create a record, use the POST verb
To read a record, use the GET verb
To update a record, use the PUT verb
To delete a record, use the DELETE verb

So, our routes will be as follows:

Route                        Verb and action
/api/speakers                GET retrieves the speakers' records
/api/speakers                POST inserts a speaker's record
/api/speakers/:speaker_id    GET retrieves a single record
/api/speakers/:speaker_id    PUT updates a single record
/api/speakers/:speaker_id    DELETE deletes a single record

Configuring the API routes

Let's start by defining the route and a common message for all requests:

var Speaker = require('./server/models/speaker');

// Defining the Routes for our API

// Start the Router
var router = express.Router();

// A simple middleware to use for all Routes and Requests
router.use(function(req, res, next) {
  // Give some message on the console
  console.log('An action was performed by the server.');
  // It is very important to call the next() function; without it, the Route stops here.
  next();
});

// Default message when accessing the API folder through the browser
router.get('/', function(req, res) {
  // Give some Hello there message
  res.json({ message: 'Hello SPA, the API is working!' });
});

Now, let's add the route to insert a speaker when the HTTP verb is POST:

// When accessing the speakers Routes
router.route('/speakers')

  // create a speaker when the method passed is POST
  .post(function(req, res) {

    // create a new instance of the Speaker model
    var speaker = new Speaker();

    // set the speaker's properties (they come from the request)
    speaker.name = req.body.name;
    speaker.company = req.body.company;
    speaker.title = req.body.title;
    speaker.description = req.body.description;
    speaker.picture = req.body.picture;
    speaker.schedule = req.body.schedule;

    // save the data received
    speaker.save(function(err) {
      if (err)
        res.send(err);

      // give some success message
      res.json({ message: 'speaker successfully created!' });
    });
  })

For the HTTP GET method, we need this:

  // get all the speakers when the method passed is GET
  .get(function(req, res) {
    Speaker.find(function(err, speakers) {
      if (err)
        res.send(err);

      res.json(speakers);
    });
  });

Note that in the res.json() function, we send the whole speakers array as the response.
Now, we will see the use of different routes in the following steps: To retrieve a single record, we need to pass speaker_id, as shown in our previous table, so let's build this function: // on accessing speaker Route by id router.route('/speakers/:speaker_id')   // get the speaker by id .get(function(req, res) { Speaker.findById(req.params.speaker_id, function(err,    speaker) {    if (err)      res.send(err);      res.json(speaker);    }); }) To update a specific record, we use the PUT HTTP verb and then insert the function: // update the speaker by id .put(function(req, res) { Speaker.findById(req.params.speaker_id, function(err,     speaker) {      if (err)      res.send(err);   // set the speakers properties (comes from the request) speaker.name = req.body.name; speaker.company = req.body.company; speaker.title = req.body.title; speaker.description = req.body.description; speaker.picture = req.body.picture; speaker.schedule = req.body.schedule;   // save the data received speaker.save(function(err) {    if (err)      res.send(err);      // give some success message      res.json({ message: 'speaker successfully       updated!'}); });   }); }) To delete a specific record by its id: // delete the speaker by id .delete(function(req, res) { Speaker.remove({    _id: req.params.speaker_id }, function(err, speaker) {    if (err)      res.send(err);   // give some success message res.json({ message: 'speaker successfully deleted!' }); }); }); Finally, register the Routes on our server.js file: // register the route app.use('/api', router); All necessary work to configure the basic CRUD routes has been done, and we are ready to run our server and begin creating and updating our database. Open a small parenthesis here, for a quick step-by-step process to introduce another tool to create a database using MongoDB in the cloud. There are many companies that provide this type of service but we will not go into individual merits here; you can choose your preference. We chose Compose (formerly MongoHQ) that has a free sandbox for development, which is sufficient for our examples. Using MongoDB in the cloud Today, we have many options to work with MongoDB, from in-house services to hosting companies that provide Platform as a Service (PaaS) and Software as a Service (SaaS). We will present a solution called Database as a Service (DbaaS) that provides database services for highly scalable web applications. Here's a simple step-by-step process to start using a MongoDB instance with a cloud service: Go to https://www.compose.io/. Create your free account. On your dashboard panel, click on add Database. On the right-hand side, choose Sandbox Database. Name your database as node-api. Add a user to your database. Go back to your database title, click on admin. Copy the connection string. The string connection looks like this: mongodb://<user>:<password>@kahana.mongohq.com:10073/node-api. Let's edit the server.js file using the following steps: Place your own connection string to the Mongoose.connect() function. Open your terminal and input the command: nodemon server.js Open your browser and place http://localhost:8080/api. You will see a message like this in the browser: { Hello SPA, the API is working! } Remember the api folder was defined on the server.js file when we registered the routes: app.use('/api', router); But, if you try to access http://localhost:8080/api/speakers, you must have something like this: [] This is an empty array, because we haven't input any data into MongoDB. 
We use an extension for the Chrome browser called JSONView. This way, we can view the formatted and readable JSON files. You can install this for free from the Chrome Web Store. Inserting data with Postman To solve our empty database and before we create our frontend interface, let's add some data with the Chrome extension Postman. By the way, it's a very useful browser interface to work with RESTful APIs. As we already know that our database is empty, our first task is to insert a record. To do so, perform the following steps: Open Postman and enter http://localhost:8080/api/speakers. Select the x-www-form-urlencoded option and add the properties of our model: var SpeakerSchema   = new Schema({ name:           { type: String, default: '' }, company:       { type: String, default: '' }, title:         { type: String, default: '' }, description:   { type: String, default: '' }, picture:       { type: String, default: '' }, schedule:       { type: String, default: '' }, createdOn:     { type: Date,   default: Date.now} }); Now, click on the blue button at the end to send the request. With everything going as expected, you should see message: speaker successfully created! at the bottom of the screen, as shown in the following screenshot: Now, let's try http://localhost:8080/api/speakers in the browser again. Now, we have a JSON file like this, instead of an empty array: { "_id": "53a38ffd2cd34a7904000007", "__v": 0, "createdOn": "2014-06-20T02:20:31.384Z", "schedule": "10:20", "picture": "fernando.jpg", "description": "Lorem ipsum dolor sit amet, consectetur     adipisicing elit, sed do eiusmod...", "title": "MongoDB", "company": "Newaeonweb", "name": "Fernando Monteiro" } When performing the same action on Postman, we see the same result, as shown in the following screenshot: Go back to Postman, copy _id from the preceding JSON file and add to the end of the http://localhost:8080/api/speakers/53a38ffd2cd34a7904000005 URL and click on Send. You will see the same object on the screen. Now, let's test the method to update the object. In this case, change the method to PUT on Postman and click on Send. The output is shown in the following screenshot: Note that on the left-hand side, we have three methods under History; now, let's perform the last operation and delete the record. This is very simple to perform; just keep the same URL, change the method on Postman to DELETE, and click on Send. Finally, we have the last method executed successfully, as shown in the following screenshot: Take a look at your terminal, you can see four messages that are the same: An action was performed by the server. We configured this message in the server.js file when we were dealing with all routes of our API. router.use(function(req, res, next) { // Give some message on the console console.log('An action was performed by the server.'); // Is very important using the next() function, without this the Route stops here. next(); }); This way, we can monitor all interactions that take place at our API. Now that we have our API properly tested and working, we can start the development of the interface that will handle all this data. Summary In this article, we have covered almost all modules of the Node ecosystem to develop the RESTful API. Resources for Article: Further resources on this subject: Web Application Testing [article] A look into responsive design frameworks [article] Top Features You Need to Know About – Responsive Web Design [article]
Recursive directives

Packt
22 Dec 2014
13 min read
In this article by Matt Frisbie, the author of AngularJS Web Application Development Cookbook, we will look at recursive directives. The power of directives can also be effectively applied when consuming data in a more unwieldy format. Consider the case in which you have a JavaScript object that exists in some sort of recursive tree structure. The view that you generate for this object will also reflect its recursive nature and will have nested HTML elements that match the underlying data structure.

(For more resources related to this topic, see here.)

Getting ready

Suppose you had a recursive data object in your controller as follows:

(app.js)

angular.module('myApp', [])
.controller('MainCtrl', function($scope) {
  $scope.data = {
    text: 'Primates',
    items: [
      {
        text: 'Anthropoidea',
        items: [
          {
            text: 'New World Anthropoids'
          },
          {
            text: 'Old World Anthropoids',
            items: [
              {
                text: 'Apes',
                items: [
                  {
                    text: 'Lesser Apes'
                  },
                  {
                    text: 'Greater Apes'
                  }
                ]
              },
              {
                text: 'Monkeys'
              }
            ]
          }
        ]
      },
      {
        text: 'Prosimii'
      }
    ]
  };
});

How to do it…

As you might imagine, iteratively constructing a view, or only partially using directives to accomplish this, will become extremely messy very quickly. Instead, it would be better if you were able to create a directive that would seamlessly break apart the data recursively, and define and render the sub-HTML fragments cleanly. By cleverly using directives and the $compile service, exactly this directive functionality is possible. The ideal directive in this scenario will be able to handle the recursive object without any additional parameters or outside assistance in parsing and rendering the object. So, in the main view, your directive will look something like this:

<recursive value="nestedObject"></recursive>

The directive is accepting an isolate scope = binding to the parent scope object, which will remain structurally identical as the directive descends through the recursive object.

The $compile service

You will need to inject the $compile service in order to make the recursive directive work. The reason for this is that each level of the directive can instantiate directives inside it and convert them from an uncompiled template to real DOM material.

The angular.element() method

The angular.element() method can be thought of as the jQuery $() equivalent. It accepts a string template or DOM fragment and returns a jqLite object that can be modified, inserted, or compiled for your purposes. If the jQuery library is present when the application is initialized, AngularJS will use that instead of jqLite.
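As a hedged, standalone sketch of how these two pieces fit together (outside the recursive directive itself), the following shows angular.element() turning a template string into a jqLite element and $compile linking it against a scope; the directive name and the sample scope value are illustrative, and the myApp module is the one created in the previous listing:

// Illustrative only: compile a small template against a scope and insert it
angular.module('myApp').directive('hello', function($compile) {
  return {
    link: function(scope, el) {
      scope.name = 'AngularJS'; // assumed sample value
      var template = angular.element('<span>Hello, {{ name }}!</span>');
      el.append($compile(template)(scope)); // compile, link, insert
    }
  };
});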
If you use the AngularJS template cache, retrieved templates will already exist as if you had called the angular.element() method on the template text. The $templateCache Inside a directive, it's possible to create a template using angular.element() and a string of HTML similar to an underscore.js template. However, it's completely unnecessary and quite unwieldy to use compared to AngularJS templates. When you declare a template and register it with AngularJS, it can be accessed through the injected $templateCache, which acts as a key-value store for your templates. The recursive template is as follows: <script type="text/ng-template" id="recursive.html"> <span>{{ val.text }}</span> <button ng-click="delSubtree()">delete</button> <ul ng-if="isParent" style="margin-left:30px">    <li ng-repeat="item in val.items">      <tree val="item" parent-data="val.items"></tree>    </li> </ul> </script> The <span> and <button> elements are present at each instance of a node, and they present the data at that node as well as an interface to the click event (which we will define in a moment) that will destroy it and all its children. Following these, the conditional <ul> element renders only if the isParent flag is set in the scope, and it repeats through the items array, recursing the child data and creating new instances of the directive. Here, you can see the full template definition of the directive: <tree val="item" parent-data="val.items"></tree> Not only does the directive take a val attribute for the local node data, but you can also see its parent-data attribute, which is the point of scope indirection that allows the tree structure. To make more sense of this, examine the following directive code: (app.js)   .directive('tree', function($compile, $templateCache) { return {    restrict: 'E',    scope: {      val: '=',      parentData: '='    },    link: function(scope, el, attrs) {      scope.isParent = angular.isArray(scope.val.items)      scope.delSubtree = function() {        if(scope.parentData) {            scope.parentData.splice(            scope.parentData.indexOf(scope.val),            1          );        }        scope.val={};      }        el.replaceWith(        $compile(          $templateCache.get('recursive.html')        )(scope)      );      } }; }); With all of this, if you provide the recursive directive with the data object provided at the beginning of this article, it will result in the following (presented here without the auto-added AngularJS comments and directives): (index.html – uncompiled)   <div ng-app="myApp"> <div ng-controller="MainCtrl">    <tree val="data"></tree> </div>    <script type="text/ng-template" id="recursive.html">    <span>{{ val.text }}</span>    <button ng-click="deleteSubtree()">delete</button>    <ul ng-if="isParent" style="margin-left:30px">      <li ng-repeat="item in val.items">        <tree val="item" parent-data="val.items"></tree>      </li>    </ul> </script> </div> The recursive nature of the directive templates enables nesting, and when compiled using the recursive data object located in the wrapping controller, it will compile into the following HTML: (index.html - compiled)   <div ng-controller="MainController"> <span>Primates</span> <button ng-click="delSubtree()">delete</button> <ul ng-if="isParent" style="margin-left:30px">    <li ng-repeat="item in val.items">      <span>Anthropoidea</span>      <button ng-click="delSubtree()">delete</button>      <ul ng-if="isParent" style="margin-left:30px">        <li ng-repeat="item in val.items"> 
         <span>New World Anthropoids</span>          <button ng-click="delSubtree()">delete</button>        </li>        <li ng-repeat="item in val.items">          <span>Old World Anthropoids</span>          <button ng-click="delSubtree()">delete</button>          <ul ng-if="isParent" style="margin-left:30px">            <li ng-repeat="item in val.items">              <span>Apes</span>              <button ng-click="delSubtree()">delete</button>              <ul ng-if="isParent" style="margin-left:30px">                <li ng-repeat="item in val.items">                  <span>Lesser Apes</span>                  <button ng-click="delSubtree()">delete</button>                </li>                <li ng-repeat="item in val.items">                  <span>Greater Apes</span>                  <button ng-click="delSubtree()">delete</button>                </li>              </ul>            </li>            <li ng-repeat="item in val.items">              <span>Monkeys</span>              <button ng-click="delSubtree()">delete</button>            </li>          </ul>         </li>      </ul>    </li>    <li ng-repeat="item in val.items">      <span>Prosimii</span>      <button ng-click="delSubtree()">delete</button>    </li> </ul> </div> JSFiddle: http://jsfiddle.net/msfrisbie/ka46yx4u/ How it works… The definition of the isolate scope through the nested directives described in the previous section allows all or part of the recursive objects to be bound through parentData to the appropriate directive instance, all the while maintaining the nested connectedness afforded by the directive hierarchy. When a parent node is deleted, the lower directives are still bound to the data object and the removal propagates through cleanly. The meatiest and most important part of this directive is, of course, the link function. Here, the link function determines whether the node has any children (which simply checks for the existence of an array in the local data node) and declares the deleting method, which simply removes the relevant portion from the recursive object and cleans up the local node. Up until this point, there haven't been any recursive calls, and there shouldn't need to be. If your directive is constructed correctly, AngularJS data binding and inherent template management will take care of the template cleanup for you. This, of course, leads into the final line of the link function, which is broken up here for readability: el.replaceWith( $compile(    $templateCache.get('recursive.html') )(scope) ); Recall that in a link function, the second parameter is the jqLite-wrapped DOM object that the directive is linking—here, the <tree> element. This exposes to you a subset of jQuery object methods, including replaceWith(), which you will use here. The top-level instance of the directive will be replaced by the recursively-defined template, and this will carry down through the tree. At this point, you should have an idea of how the recursive structure is coming together. The element parameter needs to be replaced with a recursively-compiled template, and for this, you will employ the $compile service. This service accepts a template as a parameter and returns a function that you will invoke with the current scope inside the directive's link function. The template is retrieved from $templateCache by the recursive.html key, and then it's compiled. When the compiler reaches the nested <tree> directive, the recursive directive is realized all the way down through the data in the recursive object. 
Summary This article demonstrates the power of constructing a directive to convert a complex data object into a large DOM object. Relevant portions can be broken into individual templates, handled with distributed directive logic, and combined together in an elegant fashion to maximize modularity and reusability. Resources for Article:  Further resources on this subject: Working with Live Data and AngularJS [article] Angular Zen [article] AngularJS Project [article]
Deep Customization of Bootstrap

Packt
19 Dec 2014
8 min read
This article is written by Aravind Shenoy and Ulrich Sossou, the authors of the book Learning Bootstrap. It will introduce you to the concept of deep customization of Bootstrap. (For more resources related to this topic, see here.)

Adding your own style sheet works when you are trying to do something quick or when the modifications are minimal, but customizing Bootstrap beyond small changes involves using the uncompiled Bootstrap source code. The Bootstrap CSS source code is written in LESS, with variables and mixins that allow easy customization. LESS is an open source CSS preprocessor with features that speed up your development time; it encourages an efficient, modular style of working, making it easier to maintain the CSS styling in your projects.

The advantages of using variables in LESS are profound. You can reuse the same code many times, following the write once, use anywhere paradigm. Variables can be declared globally, which allows you to specify certain values in a single place that needs to be updated only once if changes are required. LESS variables allow you to specify widely used values such as colors, font families, and sizes in a single file. By modifying a single variable, the change is reflected in all the Bootstrap components that use it; for example, to change the background color of the body element to green (#00FF00 is the hexadecimal code for green), all you need to do is change the value of the variable called @body-bg in Bootstrap, as shown in the following code:

@body-bg: #00FF00;

Mixins are similar to variables, but for whole classes. Mixins enable you to embed the properties of one class into another. They allow you to group multiple lines of code together so that they can be used numerous times across the style sheet, and they can be used alongside variables and functions, resulting in multiple inheritance. For example, to add clearfix behavior to an article element, you can use the .clearfix mixin as shown in the first snippet below; it results in all the clearfix declarations being included in the compiled CSS, as shown in the second snippet:

Mixin:

article {
  .clearfix;
}

Compiled CSS:

article:before,
article:after {
  content: " "; // 1
  display: table; // 2
}
article:after {
  clear: both;
}

A clearfix mixin is a way for an element to automatically clear after itself, so that you don't need to add additional markup. It's generally used in float layouts, where elements are floated to be stacked horizontally.

Let's look at a pragmatic example to understand how this kind of customization is used in a real-time scenario:

Download and unzip the Bootstrap files into a folder.
Create an HTML file called bootstrap_example and save it in the same folder where you saved the Bootstrap files.
3. Add the following code to it:

```html
<!DOCTYPE html>
<html>
<head>
  <title>BootStrap with Packt</title>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <!-- Downloaded Bootstrap CSS -->
  <link href="css/bootstrap.css" rel="stylesheet">
  <!-- JavaScript plugins (requires jQuery) -->
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
  <!-- Include all compiled plugins (below), or include individual files as needed -->
  <script src="js/bootstrap.min.js"></script>
</head>
<body>
  <h1>Welcome to Packt</h1>
  <button type="button" class="btn btn-default btn-lg" id="packt">PACKT LESSONS</button>
</body>
</html>
```

The output of this code upon execution will be as follows (see the screenshot). The Bootstrap folder now includes the following folders and file:

- css
- fonts
- js
- bootstrap_example.html

This Bootstrap folder is shown in the following screenshot.

4. Since we are going to use the Bootstrap source code now, download the source ZIP file and keep it at any location. Unzip it, and we can see the contents of the folder as shown in the following screenshot.
5. Create a new folder called bootstrap in the css folder. The contents of our css folder will appear as displayed in the following screenshot.
6. Copy the contents of the less folder from the source code and paste it into the newly created bootstrap folder inside the css folder. Thus, the contents of the bootstrap folder within the css folder will appear as displayed in the following screenshot.
7. In the bootstrap folder, look for the variables.less file and open it using Notepad or Notepad++. Currently, @body-bg is assigned the default value #fff as the color code. Change the background color of the body element to green by assigning it the value #00ff00. Save the file and then look for the bootstrap.less file in the bootstrap folder.
8. In the next step, we are going to use WinLess. Open WinLess and add the contents of the bootstrap folder to it. In the folder pane, you will see all the less files loaded.
9. Uncheck all the files and select only the bootstrap.less file.
10. Click on Compile. This will compile your bootstrap.less file to bootstrap.css.
11. Copy the newly compiled bootstrap.css file from the bootstrap folder and paste it into the css folder, thereby replacing the original bootstrap.css file.
12. Now that we have the updated bootstrap.css file, go back to bootstrap_example.html and execute it.

We can see that the background color of the <body> element turns green, as we altered it globally in the variables.less file that was linked to the bootstrap.less file, which was later compiled to bootstrap.css by WinLess.

We can also use LESS variables and mixins to customize Bootstrap: we can import the Bootstrap files and add our customizations. Let's now create our own LESS file called styles.less in the css folder. We will include the Bootstrap files by adding the following line of code in the styles.less file:

```less
@import "./bootstrap/bootstrap.less";
```

We have given the path ./bootstrap/bootstrap.less as per the location of the bootstrap.less file. Remember to give the appropriate path if you have placed it at any other location.
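As an aside, if you prefer a command line to WinLess, the official LESS compiler can produce the same output. This is a sketch that assumes Node.js is installed and uses the css/bootstrap folder layout from the steps above:

```bash
# Install the LESS compiler globally (one time):
npm install -g less

# Compile the Bootstrap LESS sources into the css folder, replacing
# the stock bootstrap.css, just as WinLess does in the steps above:
lessc css/bootstrap/bootstrap.less css/bootstrap.css
```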
Now, let's try a few customizations and add the following code to styles.less:

```less
@body-bg: #FFA500;
@padding-large-horizontal: 40px;
@font-size-base: 7px;
@line-height-base: 9px;
@border-radius-large: 75px;
```

The next step is to compile the styles.less file to styles.css. We will again use WinLess for this purpose: uncheck all the options and select only styles.less to be compiled. On compilation, the styles.css file will contain all the CSS declarations from Bootstrap. The next step is to add the styles.css stylesheet to the bootstrap_example.html file, so your HTML code will look like this:

```html
<!DOCTYPE html>
<html>
<head>
  <title>BootStrap with Packt</title>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <!-- Downloaded Bootstrap CSS -->
  <link href="css/bootstrap.css" rel="stylesheet">
  <!-- JavaScript plugins (requires jQuery) -->
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
  <!-- Include all compiled plugins (below), or include individual files as needed -->
  <script src="js/bootstrap.min.js"></script>
  <link href="css/styles.css" rel="stylesheet">
</head>
<body>
  <h1>Welcome to Packt</h1>
  <button type="button" class="btn btn-default btn-lg" id="packt">PACKT LESSONS</button>
</body>
</html>
```

The output of the code is as follows: since we changed the background color to orange (#ffa500), created a border radius, and defined font-size-base and line-height-base, the output on execution is as displayed in the preceding screenshot.

The LESS variables should be added to the styles.less file after the Bootstrap import so that they override the variables defined in the Bootstrap files. In short, all the custom code you write should be added after the Bootstrap import.

Summary

In this article, we had a look at the procedure to implement deep customization in Bootstrap. However, we are still at the start of the journey. The learning curve is steep, as there is so much more to learn, and learning is an ongoing process that never ceases. There is still a long way to go, and in a pragmatic sense, the journey is the destination.

Resources for Article:

Further resources on this subject:
- Creating attention-grabbing pricing tables [article]
- Getting Started with Bootstrap [article]
- Bootstrap 3.0 is Mobile First [article]
Components

Packt | 25 Nov 2014
This article by Timothy Moran, author of Mastering KnockoutJS, teaches you how to use the new Knockout components feature. (For more resources related to this topic, see here.)

In Version 3.2, Knockout added components, which combine a template (view) with a viewmodel to create reusable, behavior-driven DOM objects. Knockout components are inspired by web components, a new (and experimental, at the time of writing this) set of standards that allow developers to define custom HTML elements paired with JavaScript that create packaged controls. Like web components, Knockout allows the developer to use custom HTML tags to represent these components in the DOM. Knockout also allows components to be instantiated with a binding handler on standard HTML elements. Knockout binds components by injecting an HTML template, which is bound to its own viewmodel.

This is probably the single largest feature Knockout has ever added to the core library. The reason we started with RequireJS is that components can optionally be loaded and defined with module loaders, including their HTML templates! This means that our entire application (even the HTML) can be defined in independent modules, instead of as a single hierarchy, and loaded asynchronously.

The basic component registration

Unlike extenders and binding handlers, which are created by just adding an object to Knockout, components are created by calling the ko.components.register function:

```js
ko.components.register('contact-list', {
  viewModel: function(params) { },
  template: // template string or object
});
```

This will create a new component named contact-list, which uses the object returned by the viewModel function as a binding context, and the template as its view. It is recommended that you use lowercase, dash-separated names for components so that they can easily be used as custom elements in your HTML.

To use this newly created component, you can use a custom element or the component binding. The following three tags produce equivalent results:

```html
<contact-list params="data: contacts"></contact-list>
<div data-bind="component: { name: 'contact-list', params: { data: contacts } }"></div>
<!-- ko component: { name: 'contact-list', params: { data: contacts } } --><!-- /ko -->
```

Obviously, the custom element syntax is much cleaner and easier to read. It is important to note that custom elements cannot be self-closing tags. This is a restriction of the HTML parser and cannot be controlled by Knockout.

There is one advantage of using the component binding: the name of the component can be an observable. If the name of the component changes, the previous component will be disposed (just like it would be if a control flow binding removed it) and the new component will be initialized.

The params attribute of custom elements works in a manner similar to the data-bind attribute. Comma-separated key/value pairs are parsed to create a property bag, which is given to the component. The values can contain JavaScript literals, observable properties, or expressions. It is also possible to register a component without a viewmodel, in which case the object created by params is directly used as the binding context.
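As a minimal sketch of such a viewmodel-less registration (the component name, template, and params below are illustrative, not from the book's sample):

```js
// No viewModel is supplied, so the template binds directly against
// whatever the params attribute provides (here, a 'status' value):
ko.components.register('status-badge', {
    template: '<span class="label" data-bind="text: status"></span>'
});

// Usage in markup:
// <status-badge params="status: currentStatus"></status-badge>
```

The contact-list conversion that follows is this same idea at a larger scale.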
To see this, we'll convert the list of contacts into a component:

```html
<contact-list params="contacts: displayContacts,
                      edit: editContact,
                      delete: deleteContact">
</contact-list>
```

The HTML code for the list is replaced with a custom element with parameters for the list as well as callbacks for the two buttons, which are edit and delete:

```js
ko.components.register('contact-list', {
  template:
    '<ul class="list-unstyled" data-bind="foreach: contacts">' +
      '<li>' +
        '<h3>' +
          '<span data-bind="text: displayName"></span> ' +
          '<small data-bind="text: phoneNumber"></small> ' +
          '<button class="btn btn-sm btn-default" data-bind="click: $parent.edit">Edit</button> ' +
          '<button class="btn btn-sm btn-danger" data-bind="click: $parent.delete">Delete</button>' +
        '</h3>' +
      '</li>' +
    '</ul>'
});
```

This component registration uses an inline template. Everything still looks and works the same, but the resulting HTML now includes our custom element.

Custom elements in IE 8 and higher

IE 9 and later versions, as well as all other major browsers, have no issue with seeing custom elements in the DOM before they have been registered. However, older versions of IE will remove the element if it hasn't been registered. The registration can be done either with Knockout, using ko.components.register('component-name'), or with the standard document.createElement('component-name') expression statement. One of these must come before the custom element, either by the script containing them being first in the DOM, or by the custom element being added at runtime. When using RequireJS, being first in the DOM won't help, as the loading is asynchronous. If you need to support older IE versions, it is recommended that you include a separate script to register the custom element names at the top of the body tag or in the head tag:

```html
<!DOCTYPE html>
<html>
<body>
  <script>
    document.createElement('my-custom-element');
  </script>
  <script src='require.js' data-main='app/startup'></script>
  <my-custom-element></my-custom-element>
</body>
</html>
```

Once this has been done, components will work in IE 6 and higher, even with custom elements.

Template registration

The template property of the configuration sent to register can take any of the following formats:

```js
ko.components.register('component-name', {
  template: [OPTION]
});
```

The element ID

Consider the following code statement:

```js
template: { element: 'component-template' }
```

If you specify the ID of an element in the DOM, the contents of that element will be used as the template for the component. Although it isn't supported in IE yet, the template element is a good candidate, as browsers do not visually render the contents of template elements.

The element instance

Consider the following code statement:

```js
template: { element: instance }
```

You can pass a real DOM element to the template to be used. This might be useful in a scenario where the template was constructed programmatically.
Like the element ID method, only the contents of the element will be used as the template:

```js
var template = document.getElementById('contact-list-template');
ko.components.register('contact-list', {
  template: { element: template }
});
```

An array of DOM nodes

Consider the following code statement:

```js
template: [nodes]
```

If you pass an array of DOM nodes to the template configuration, then the entire array will be used as a template and not just the descendants:

```js
var template = document.getElementById('contact-list-template'),
    nodes = Array.prototype.slice.call(template.content.childNodes);
ko.components.register('contact-list', {
  template: nodes
});
```

Document fragments

Consider the following code statement:

```js
template: documentFragmentInstance
```

If you pass a document fragment, the entire fragment will be used as a template instead of just the descendants:

```js
var template = document.getElementById('contact-list-template');
ko.components.register('contact-list', {
  template: template.content
});
```

This example works because template elements wrap their contents in a document fragment in order to stop the normal rendering. Using the content is the same method that Knockout uses internally when a template element is supplied.

HTML strings

We already saw an example of an HTML string in the previous section. While using the value inline is probably uncommon, supplying a string would be an easy thing to do if your build system provided it for you.

Registering templates using the AMD module

Consider the following code statement:

```js
template: { require: 'module/path' }
```

If a require property is passed to the configuration object of a template, the default module loader will load the module and use it as the template. The module can return any of the preceding formats. This is especially useful for the RequireJS text plugin:

```js
ko.components.register('contact-list', {
  template: { require: 'text!contact-list.html' }
});
```

Using this method, we can extract the HTML template into its own file, drastically improving its organization. By itself, this is a huge benefit to development.

The viewmodel registration

Like template registration, viewmodels can be registered using several different formats. To demonstrate this, we'll use a simple viewmodel for our contact-list component:

```js
function ListViewmodel(params) {
  this.contacts = params.contacts;
  this.edit = params.edit;
  this.delete = function(contact) {
    console.log('Mock Deleting Contact', ko.toJS(contact));
  };
}
```

To verify that things are getting wired up properly, you'll want something interactive; hence, the fake delete function.

The constructor function

Consider the following code statement:

```js
viewModel: Constructor
```

If you supply a function to the viewModel property, it will be treated as a constructor. When the component is instantiated, new will be called on the function, with the params object as its first parameter:

```js
ko.components.register('contact-list', {
  template: { require: 'text!contact-list.html' },
  viewModel: ListViewmodel // defined above
});
```

A singleton object

Consider the following code statement:

```js
viewModel: { instance: singleton }
```

If you want all your component instances to be backed by a shared object (though this is not recommended), you can pass it as the instance property of a configuration object. Because the object is shared, parameters cannot be passed to the viewmodel using this method.
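For completeness, here is a small sketch of the singleton option (the shared object and component name are illustrative); note how every instance of the component sees the same state:

```js
// One shared object backs every instance of this component:
var sharedCounter = {
    count: ko.observable(0),
    increment: function () {
        this.count(this.count() + 1);
    }
};

ko.components.register('shared-counter', {
    template: '<button data-bind="click: increment, text: count"></button>',
    viewModel: { instance: sharedCounter }
});
// Placing several <shared-counter> elements on the page would show
// the same number on all of them, incrementing together.
```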
The factory function

Consider the following code statement:

```js
viewModel: { createViewModel: function(params, componentInfo) { } }
```

This method is useful because it supplies the container element of the component on the second parameter, as componentInfo.element. It also provides you with the opportunity to perform any other setup, such as modifying or extending the constructor parameters. The createViewModel function should return an instance of a viewmodel component:

```js
ko.components.register('contact-list', {
  template: { require: 'text!contact-list.html' },
  viewModel: {
    createViewModel: function(params, componentInfo) {
      console.log('Initializing component for', componentInfo.element);
      return new ListViewmodel(params);
    }
  }
});
```

Registering viewmodels using an AMD module

Consider the following code statement:

```js
viewModel: { require: 'module-path' }
```

Just like templates, viewmodels can be registered with an AMD module that returns any of the preceding formats.

Registering AMD

In addition to registering the template and the viewmodel as AMD modules individually, you can register the entire component with a require call:

```js
ko.components.register('contact-list', {
  require: 'contact-list'
});
```

The AMD module will return the entire component configuration:

```js
define(['knockout', 'text!contact-list.html'], function(ko, templateString) {

  function ListViewmodel(params) {
    this.contacts = params.contacts;
    this.edit = params.edit;
    this.delete = function(contact) {
      console.log('Mock Deleting Contact', ko.toJS(contact));
    };
  }

  return {
    template: templateString,
    viewModel: ListViewmodel
  };
});
```

As the Knockout documentation points out, this method has several benefits:

- The registration call is just a require path, which is easy to manage.
- The component is composed of two parts: a JavaScript module and an HTML module. This provides both simple organization and clean separation.
- The RequireJS optimizer, r.js, can use the text dependency on the HTML module to bundle the HTML code with the bundled output. This means your entire application, including the HTML templates, can be a single file in production (or a collection of bundles if you want to take advantage of lazy loading).

Observing changes in component parameters

Component parameters are passed via the params object to the component's viewmodel in one of the following three ways:

1. No observable expression evaluation needs to occur, and the value is passed literally:

```html
<component params="name: 'Timothy Moran'"></component>
<component params="name: nonObservableProperty"></component>
<component params="name: observableProperty"></component>
<component params="name: viewModel.observableSubProperty"></component>
```

In all of these cases, the value is passed directly to the component on the params object. This means that changes to these values will change the property on the instantiating viewmodel, except in the first case (literal values). Observable values can be subscribed to normally.

2. An observable expression needs to be evaluated, so it is wrapped in a computed observable:

```html
<component params="name: name() + '!'"></component>
```

In this case, params.name is not the original property. Calling params.name() will evaluate the computed wrapper. Trying to modify the value will fail, as the computed value is not writable. The value can be subscribed to normally.

3. An observable expression evaluates an observable instance, so it is wrapped in an observable that unwraps the result of the expression:
```html
<component params="name: isFormal() ? firstName : lastName"></component>
```

In this example, firstName and lastName are both observable properties. If calling params.name() returned the observable, you would need to call params.name()() to get the actual value, which is rather ugly. Instead, Knockout automatically unwraps the expression so that calling params.name() returns the actual value of either firstName or lastName.

If you need to access the actual observable instances, for example, to write a value to them, trying to write to params.name will fail, as it is a computed observable. To get the unwrapped value, you can use the params.$raw object, which provides the unwrapped values. In this case, you can update the name by calling params.$raw.name('New'). In general, this case should be avoided by removing the logic from the binding expression and placing it in a computed observable in the viewmodel.

The component's life cycle

When a component binding is applied, Knockout takes the following steps:

1. The component loader asynchronously creates the viewmodel factory and template. This result is cached so that it is only performed once per component.
2. The template is cloned and injected into the container (either the custom element or the element with the component binding).
3. If the component has a viewmodel, it is instantiated. This is done synchronously.
4. The component is bound to either the viewmodel or the params object.
5. The component is left active until it is disposed.
6. The component is disposed. If the viewmodel has a dispose method, it is called, and then the template is removed from the DOM.

The component's disposal

If the component is removed from the DOM by Knockout, either because the name of the component binding changed or because a control flow binding (for example, if or foreach) removed it, the component will be disposed. If the component's viewmodel has a dispose function, it will be called. Normal Knockout bindings in the component's view will be automatically disposed, just as they would be in a normal control flow situation. However, anything set up by the viewmodel needs to be manually cleaned up. Some examples of viewmodel cleanup include the following:

- setInterval callbacks can be removed with clearInterval.
- Computed observables can be removed by calling their dispose method. Pure computed observables don't need to be disposed. Computed observables that are only used by bindings or other viewmodel properties also do not need to be disposed, as garbage collection will catch them.
- Observable subscriptions can be disposed by calling their dispose method.
- Event handlers can be created by components that are not part of a normal Knockout binding.

The sketch below shows this kind of manual cleanup.
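This is a hedged sketch of a component viewmodel that owns a timer and a manual subscription, and releases both in dispose; the names, the one-second interval, and the assumption that params.source is an observable are illustrative:

```js
function TickerViewmodel(params) {
    var self = this;
    self.time = ko.observable(new Date());

    // Cleanup target 1: an interval that must be cleared manually.
    self.timerId = setInterval(function () {
        self.time(new Date());
    }, 1000);

    // Cleanup target 2: an explicit subscription to an observable
    // passed in through params (assumed to be observable here).
    self.sourceSubscription = params.source.subscribe(function (value) {
        console.log('source changed to', value);
    });

    // Knockout calls this automatically when the component is torn down.
    self.dispose = function () {
        clearInterval(self.timerId);
        self.sourceSubscription.dispose();
    };
}
```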
Combining components with data bindings

There is only one restriction on data-bind attributes that are used on custom elements with the component binding: the binding handlers cannot use controlsDescendantBindings. This isn't a new restriction; two bindings that control descendants cannot be on a single element, and since components control descendant bindings, they cannot be combined with a binding handler that also controls descendants. It is worth remembering, though, as you might be inclined to place an if or foreach binding on a component; doing this will cause an error. Instead, wrap the component with an element or a containerless binding:

```html
<ul data-bind="foreach: allProducts">
  <product-details params="product: $data"></product-details>
</ul>
```

It's also worth noting that bindings such as text and html will replace the contents of the element they are on. When used with components, this will potentially result in the component being lost, so it's not a good idea.

Summary

In this article, we learned that the Knockout components feature gives you a powerful tool that will help you create reusable, behavior-driven DOM elements.

Resources for Article:

Further resources on this subject:
- Deploying a Vert.x application [article]
- The Dialog Widget [article]
- Top features of KnockoutJS [article]
Web Application Testing

Packt | 14 Nov 2014
This article is written by Roberto Messora, the author of the book Web App Testing Using Knockout.JS. This article will give you an overview of various design patterns used in web application testing. It will also teach you web development using jQuery. (For more resources related to this topic, see here.)

Presentation design patterns in web application testing

The Web has changed a lot since HTML5 made its appearance. We are witnessing a gradual shift from classical full server-side web development to a new architectural asset that moves much of the application logic to the client side. The general objective is to deliver rich internet applications (commonly known as RIAs) with a desktop-like user experience. Think about web applications such as Gmail or Facebook: if you maximize your browser, they look like complete desktop applications in terms of usability, UI effects, responsiveness, and richness.

Once we have established that testing is a pillar of our solutions, we need to understand the best way to proceed in terms of software architecture and development. In this regard, it's very important to determine the basic design principles that allow a proper approach to unit testing. In fact, even though HTML5 is a recent achievement, HTML in general and JavaScript are technologies that have been in use for quite some time. The problem here is that many developers tend to approach modern web development in the same old way. This is a grave mistake because, back then, client-side JavaScript development was much underrated and mostly confined to simple UI graphics management.

Client-side development is historically driven by libraries such as Prototype, jQuery, and Dojo, whose primary feature is DOM (HTML Document Object Model, in other words, HTML markup) management. They can work as-is in small web applications, but as soon as these grow in complexity, the code base starts to become unmanageable and unmaintainable. We can't really think that we can continue to develop JavaScript in the same way we did 10 years ago. In those days, we only had to dynamically apply some UI transformations. Today we have to deliver complete working applications. We need a better design, but most of all we need to reconsider client-side JavaScript development and apply advanced design patterns and principles.

jQuery web application development

JavaScript is the programming language of the web, but its native DOM API is rudimentary. We have to write a lot of code to manage and transform HTML markup to bring the UI to life with some dynamic user interaction. Also, the lack of full standardization means that the same code can work differently (or not work at all) in different browsers.

Over the past years, developers decided to resolve this situation: JavaScript libraries such as Prototype, jQuery, and Dojo came to light. jQuery is one of the most well-known open source JavaScript libraries, first published in 2006. Its huge success is mainly due to:

- A simple and detailed API that allows you to manage HTML DOM elements
- Cross-browser support
- Simple and effective extensibility

Since its appearance, it's been used by thousands of developers as a foundation library. A large amount of JavaScript code all around the world has been built with jQuery in mind. The jQuery ecosystem grew up very quickly, and nowadays there are plenty of jQuery plugins that implement virtually everything related to web development.
Despite its simplicity, a typical jQuery web application is virtually untestable. There are two main reasons:

- User interface items are tightly coupled with the user interface logic
- User interface logic spans event handler callback functions

The real problem is that everything passes through a jQuery reference, which is a jQuery("something") call. This means that we will always need a live reference to the HTML page, otherwise these calls will fail, and this is also true for a unit test case. We can't think about testing a piece of user interface logic by running an entire web application! Large jQuery applications tend to be monolithic because jQuery itself allows callback function nesting too easily, and doesn't really promote any particular design strategy. The result is often spaghetti code.

jQuery is a good option if you want to develop a specific custom plugin, and we will continue to use this library for pure user interface effects and animations, but we need something different to maintain a large web application's logic.

Presentation design patterns

To move a step forward, we need to decide what's the best option in terms of testable code. The main topic here is application design, in other words, how we can build our code base following a general guideline with testability in mind. In software engineering there's nothing better than not reinventing the wheel; we can rely on a safe and reliable resource: design patterns. Wikipedia provides a good definition of the term design pattern (http://en.wikipedia.org/wiki/Software_design_pattern):

"In software engineering, a design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. A design pattern is not a finished design that can be transformed directly into source or machine code. It is a description or template for how to solve a problem that can be used in many different situations. Patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system."

There are tens of specific design patterns, but we also need something that is related to the presentation layer, because this is where a JavaScript web application belongs. The most important aspect in terms of design and maintainability of a JavaScript web application is a clear separation between the user interface (basically, the HTML markup) and the presentation logic (the JavaScript code that turns a web page dynamic and responsive to user interaction). This is what we learned from digging into a typical jQuery web application. At this point, we need to identify an effective implementation of a presentation design pattern and use it in our web applications.

In this regard, I have to admit that the JavaScript community has done an extraordinary job in the last two years: up to the present time, there are literally tens of frameworks and libraries that implement a particular presentation design pattern. We only have to choose the framework that fits our needs; for example, we can start by taking a look at the TodoMVC website (http://todomvc.com/): this is an open source project that shows you how to build the same web application using a different library each time. Most of these libraries implement a so-called MV* design pattern (Knockout.JS does too). MV* means that every design pattern belongs to a broader family with a common root: Model-View-Controller.
The MVC pattern is one of the oldest and most enduring architectural design patterns: originally designed by Trygve Reenskaug working on Smalltalk-80 back in 1979, it has been heavily refactored since then. Basically, the MVC pattern enforces the isolation of business data (Models) from user interfaces (Views), with a third component (Controllers) that manages the logic and user input. It can be described as follows (Addy Osmani, Learning JavaScript Design Patterns, http://addyosmani.com/resources/essentialjsdesignpatterns/book/#detailmvc):

- A Model represented domain-specific data and was ignorant of the user interface (Views and Controllers). When a model changed, it would inform its observers.
- A View represented the current state of a Model. The Observer pattern was used for letting the View know whenever the Model was updated or modified.
- Presentation was taken care of by the View, but there wasn't just a single View and Controller: a View-Controller pair was required for each section or element being displayed on the screen.
- The Controller's role in this pair was handling user interaction (such as key-presses and actions, for example, clicks) and making decisions for the View.

This general definition has slightly changed over the years, not only to adapt its implementation to different technologies and programming languages, but also because changes have been made to the Controller part. Model-View-Presenter and Model-View-ViewModel are the most well-known alternatives to the MVC pattern.

MV* presentation design patterns are a valid answer to our need: an architectural design guideline that promotes separation of concerns and isolation, the two most important factors needed for software testing. In this way, we can separately test models, views, and the third actor, whatever it is (a Controller, Presenter, ViewModel, and so on). On the other hand, adopting a presentation design pattern doesn't mean at all that we cease to use jQuery. jQuery is a great library; we will continue to add its reference to our pages, but we will also integrate its use wisely in a better design context.

Knockout.JS and Model-View-ViewModel

Knockout.JS is one of the most popular JavaScript presentation libraries; it implements the Model-View-ViewModel design pattern. The most important concepts that feature in Knockout.JS are:

- An HTML fragment (or an entire page) is considered as a View. A View is always associated with a JavaScript object called a ViewModel: this is a code representation of the View that contains the data (model) to be shown (in the form of properties) and the commands that handle View events triggered by the user (in the form of methods).
- The association between View and ViewModel is built around the concept of data-binding, a mechanism that provides automatic bidirectional synchronization:
  - In the View, it's declared by placing data-bind attributes into DOM elements; the attributes' value must follow a specific syntax that specifies the nature of the association and the target ViewModel property/method.
  - In the ViewModel, methods are considered commands and properties are defined as special objects called observables: their main feature is the capability to notify every state modification.
- A ViewModel is a pure-code representation of the View: it contains data to show and commands that handle events triggered by the user.
It's important to remember that a ViewModel shouldn't have any knowledge about the View and the UI: pure-code representation means that a ViewModel shouldn't contain any reference to HTML markup elements (buttons, textboxes, and so on), but only pure JavaScript properties and methods.

Model-View-ViewModel's objective is to promote a clear separation between View and ViewModel; this principle is called Separation of Concerns. Why is this so important? The answer is quite easy: because, in this way, a developer can achieve a real separation of responsibilities: the View is only responsible for presenting data to the user and reacting to her/his inputs; the ViewModel is only responsible for holding the data and providing the presentation logic. The following diagram from Microsoft MSDN depicts the existing relationships between the three pattern actors very well (http://msdn.microsoft.com/en-us/library/ff798384.aspx).

Thinking about a web application in these terms leads to ViewModel development without any reference to DOM element IDs or any other markup-related code, as in the classic jQuery style. The two main reasons behind this are:

- As a web application becomes more complex, the number of DOM elements increases, and it is not uncommon to reach a point where it becomes very difficult to manage all those IDs with the typical jQuery fluent interface style: the JavaScript code base turns into a spaghetti code nightmare very soon.
- A clear separation between View and ViewModel allows a new way of working: JavaScript developers can concentrate on the presentation logic, while UX experts can provide HTML markup that focuses on user interaction and how the web application will look. The two groups can work quite independently and agree on the basic contact points using the data-bind tag attributes.

The key feature of a ViewModel is the observable object: a special object that is capable of notifying its state modifications to any subscribers. There are three types of observable objects:

- The basic observable, which is based on JavaScript data types (string, number, and so on)
- The computed observable, which is dependent on other observables or computed observables
- The observable array, which is a standard JavaScript array with a built-in change notification mechanism

On the View side, we talk about declarative data-binding because we need to place data-bind attributes inside HTML tags and specify what kind of binding is associated with a ViewModel property/command.

MVVM and unit testing

Why is a clear separation between the user interface and presentation logic a real benefit? There are several possible answers, but, if we want to remain in the unit testing context, we can assert that we can apply proper unit testing specifications to the presentation logic independently from the concrete user interface. In Model-View-ViewModel, the ViewModel is a pure-code representation of the View. The View itself must remain a thin and simple layer whose job is to present data and receive user interaction. This is a great scenario for unit testing: all the logic in the presentation layer is located in the ViewModel, and this is a JavaScript object. We can definitely test almost everything that takes place in the presentation layer.
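To make this concrete, the following is a minimal sketch of a testable ViewModel; the contact-related names are illustrative (not from the book's sample), and note that it contains no DOM references at all:

```js
function ContactViewModel() {
    var self = this;
    self.firstName = ko.observable('');
    self.lastName = ko.observable('');
    // A computed observable: it re-evaluates whenever either part changes.
    self.displayName = ko.computed(function () {
        return self.firstName() + ' ' + self.lastName();
    });
}

// In a unit test we can exercise the logic without any markup:
var vm = new ContactViewModel();
vm.firstName('Ada');
vm.lastName('Lovelace');
console.log(vm.displayName() === 'Ada Lovelace'); // true
```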
Ensuring a real separation between View and ViewModel means that we need to follow a particular development procedure:

- Think about a web application page as a composition of sub-views: we need to embrace the divide et impera principle when we build our user interface. The more specific and simple the sub-views are, the more easily we can test them. Knockout.JS supports this kind of scenario very well.
- Write a class for every View and a corresponding class for its ViewModel: the View class is the starting point to instantiate the ViewModel and apply bindings; after all, the user interface (the HTML markup) is what the browser loads initially.
- Keep each View class as simple as possible, so simple that it might not even need to be tested. It should be just a container for:
  - Its ViewModel instance
  - Sub-view instances, in the case of a bigger View that is a composition of smaller ones
  - Pure user interface code, in the case of particular UI JavaScript plugins that cannot take place in the ViewModel and simply provide graphical effects/enrichments (in other words, they don't change the logical functioning)

If we look carefully at a typical ViewModel class implementation, we can see that there are no HTML markup references: no tag names, no tag identifiers, nothing. All of these references are present in the View class implementation. In fact, if we were to test a ViewModel that holds a direct reference to a UI item, we would also need a live instance of the UI; otherwise, accessing that item reference would cause a null reference runtime error during the test. This is not what we want, because it is very difficult to test presentation logic while having to deal with a live instance of the user interface: there are many reasons, from the need for a web server that delivers the page, to the need for a separate instance of a web browser to load the page. This is not very different from debugging a live page with Mozilla Firebug or Google Chrome Developer Tools. Our objective is test automation, but we also want to run the tests easily and quickly in isolation: we don't want to run the page in any way!

An important application asset is the event bus: this is a global object that works as an event/message broker for all the actors that are involved in the web page (Views and ViewModels). The event bus is one of the alternative forms of the Event Collaboration design pattern (http://martinfowler.com/eaaDev/EventCollaboration.html):

"Multiple components work together by communicating with each other by sending events when their internal state changes." (Martin Fowler)

The main aspect of an event bus is that:

"The sender is just broadcasting the event, the sender does not need to know who is interested and who will respond, this loose coupling means that the sender does not have to care about responses, allowing us to add behaviour by plugging new components." (Martin Fowler)

In this way, we can keep all the different components of a web page completely separated: every View/ViewModel couple sends and receives events, but they don't know anything about all the other couples. Again, every ViewModel is completely decoupled from its View (remember that the View holds a reference to the ViewModel, but not the other way around), and in this case, it can trigger events in order to communicate something to the View. Concerning unit testing, loose coupling means that we can test our presentation logic a single component at a time, simply ensuring that events are broadcast when they need to be.
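A bare-bones sketch of such an event bus can be built on Knockout's own pub/sub primitive, ko.subscribable; the topic name and payload below are illustrative:

```js
var eventBus = new ko.subscribable();

// A component (or a test double) subscribes to a topic without
// knowing who will publish on it:
eventBus.subscribe(function (payload) {
    console.log('contact deleted:', payload.id);
}, null, 'contact:deleted');

// A ViewModel broadcasts the event without knowing who listens:
eventBus.notifySubscribers({ id: 42 }, 'contact:deleted');
```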
Event buses can also be mocked, so we don't need to rely on a concrete implementation.

In real-world development, the production process is an iterative task. Usually, we need to:

1. Define a View markup skeleton, without any data-bind attributes.
2. Start developing classes for the View and the ViewModel, which are empty at the beginning.
3. Start developing the presentation logic, adding observables to the ViewModel and their respective data bindings in the View.
4. Start writing test specifications.

This process is repetitive and adds more presentation logic at every iteration, until we reach the final result.

Summary

In this article, you learned about web development using jQuery, presentation design patterns, and unit testing using MVVM.

Resources for Article:

Further resources on this subject:
- Big Data Analysis [article]
- Advanced Hadoop MapReduce Administration [article]
- HBase Administration, Performance Tuning [article]
Introduction to TypeScript

Packt | 20 Oct 2014
One of the primary benefits of compiled languages is that they provide a plainer syntax for the developer to work with before the code is eventually converted to machine code. TypeScript is able to bring this advantage to JavaScript development by wrapping several different patterns into language constructs that allow us to write better code. Every explicit type annotation that is provided is simply syntactic sugar that will be removed during compilation, but not before its constraints are analyzed and any errors are caught. In this article by Christopher Nance, the author of TypeScript Essentials, we will explore this type system in depth. We will also discuss the different language structures that TypeScript introduces and look at how these structures are emitted by the compiler into plain JavaScript. This article contains a detailed look at each of these concepts (for more resources related to this topic, see here):

- Types
- Functions
- Interfaces
- Classes

Types

Type annotations put a specific set of constraints on the variables being created. These constraints allow the compiler and development tools to better assist in the proper use of the object. This includes a list of functions, variables, and properties available on the object. If a variable is created and no type is provided for it, TypeScript will attempt to infer the type from the context in which it is used. For instance, in the following code, we do not explicitly declare the variable hello as a string; however, since it is created with an initial value, TypeScript is able to infer that it should always be treated as a string:

```ts
var hello = "Hello There";
```

The ability of TypeScript to do this contextual typing provides development tools with the ability to enhance the development experience in a variety of ways. The type information allows our IDE to warn us of potential errors in our code, or provide intelligent code completion and suggestions. As you can see from the following screenshot, Visual Studio is able to provide a list of methods and properties associated with string objects, as well as their type information.

When an object's type is not given and cannot be inferred from its initialization, it will be treated as an Any type. The Any type is the base type for all other types in TypeScript. It can represent any JavaScript value, and the minimum amount of type checking is performed on objects of type Any. Every other type that exists in TypeScript falls into one of three categories: primitive types, object types, or type parameters.

TypeScript's primitive types closely mirror those of JavaScript. They are as follows:

- Number: var myNum: number = 2;
- Boolean: var myBool: boolean = true;
- String: var myString: string = "Hello";
- Void: function(): void { var x = 2; }
- Null: if (x != null) { alert(x); }
- Undefined: if (x != undefined) { alert(x); }

All of these types correspond directly to JavaScript's primitive types except for Void. The Void type is meant to represent the absence of a value: a function that returns no value has a return type of void.

Object types are the most common types you will see in TypeScript, and they are made up of references to classes, interfaces, and anonymous object types. Object types consist of a complex set of members. These members fall into one of four categories: properties, call signatures, constructor signatures, or index signatures.

Type parameters are used when referencing generic types or calling generic functions.
Type parameters are used to keep code generic enough to be used on a multitude of objects while limiting those objects to a specific set of constraints. An early example of generics that we can cover is arrays. Arrays exist just like they do in JavaScript, with an extra set of type constraints placed upon them. The array object itself has certain type constraints and methods that come from being an object of the Array type; the second piece of information that comes from the array declaration is the type of the objects contained in the array. There are two ways to explicitly type an array; otherwise, the contextual typing system will attempt to infer the type information:

```ts
var array1: string[] = [];
var array2: Array<string> = [];
```

Both of these examples are completely legal ways of declaring an array. They both generate the same JavaScript output and they both provide the same type information. The first example is a shorthand type literal using the [ and ] characters to create arrays. The resulting JavaScript for each of these arrays is shown as follows:

```js
var array1 = [];
var array2 = [];
```

Despite all of the type annotations and compile-time checking, TypeScript compiles to plain JavaScript and therefore adds absolutely no overhead to the runtime speed of your applications. All of the type annotations are removed from the final code, providing us with both a much richer development experience and a clean finished product.

Functions

If you are at all familiar with JavaScript, you will be very familiar with the concept of functions. TypeScript has added type annotations to the parameter list as well as the return type. Due to the new constraints being placed on the parameter list, the concept of function overloads was also included in the language specification. TypeScript also takes advantage of JavaScript's arguments object and provides syntax for rest parameters. Let's take a look at a function declaration in TypeScript:

```ts
function add(x: number, y: number): number {
    return x + y;
}
```

As you can see, we have created a function called add. It takes two parameters that are both of the type number, one of the primitive types, and it returns a number. This function is useful in its current form, but it is a little limited in overall functionality. What if we want to add a third number to the first two? Then we have to call our function multiple times. TypeScript provides a way to add optional parameters to functions, so now we can modify our function to take a third parameter, z, that will get added to the first two numbers, as shown in the following code:

```ts
function add(x: number, y: number, z?: number) {
    if (z !== undefined) {
        return x + y + z;
    }
    return x + y;
}
```

As you can see, we have a third named parameter now, but this one is followed by ?. This tells the compiler that this parameter is not required for the function to be called. Optional parameters tell the compiler not to generate an error if the parameter is not provided when the function is called. In JavaScript, this compile-time checking is not performed, meaning an exception could occur at runtime because each missing parameter will have a value of undefined. It is the responsibility of the developer to write code that verifies a value exists before attempting to use it. So now we can add three numbers together, and we haven't broken any of our previous code that relied on the add method only taking two parameters.
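A quick sanity check of both call shapes (the literal values are arbitrary):

```ts
add(1, 2);    // returns 3; z is undefined and is skipped
add(1, 2, 3); // returns 6
```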
This has added a little bit more functionality, but I think it would be nice to extend this code to operate on multiple types. We know that strings can be added together just the same as numbers can, so why not use the same method? In its current form, though, passing strings to the add function will result in compilation errors. We will modify the function's definition to take not only numbers but strings as well, as shown in the following code:

```ts
function add(x: string, y: string): string;
function add(x: number, y: number): number;
function add(x: any, y: any): any {
    return x + y;
}
```

As you can see, we now have two declarations of the add function: one for strings, one for numbers, and then we have the final implementation using the any type. The signature of the actual function implementation is not included in the function's type definition, though. Attempting to call our add method with anything other than a number or string will fail at compile time; however, the overloads have no effect on the generated JavaScript. All of the type annotations are stripped out, as well as the overloads, and all we are left with is a very simple JavaScript method:

```js
function add(x, y) {
    return x + y;
}
```

Great, so now we have a multipurpose add function that can take two values and combine them together for either strings or numbers. This still feels a little limited in overall functionality, though. What if we wanted to add an indeterminate number of values together? We would have to call our add method over and over again until we eventually had only one value. Thankfully, TypeScript includes rest parameters, which are essentially an unbounded list of optional parameters. The following code shows how to modify our add functions to include a rest parameter:

```ts
function add(arg1: string, ...args: string[]): string;
function add(arg1: number, ...args: number[]): number;
function add(arg1: any, ...args: any[]): any {
    var total = arg1;
    for (var i = 0; i < args.length; i++) {
        total += args[i];
    }
    return total;
}
```

A rest parameter can only be the final parameter in a function's declaration. The TypeScript compiler recognizes the syntax of this final parameter and generates an extra bit of JavaScript to create a shifted array from the JavaScript arguments object that is available to code inside of a function. The resulting JavaScript code shows the loop that the compiler has added to create the array that represents our indeterminate list of parameters:

```js
function add(arg1) {
    var args = [];
    for (var _i = 0; _i < (arguments.length - 1); _i++) {
        args[_i] = arguments[_i + 1];
    }
    var total = arg1;
    for (var i = 0; i < args.length; i++) {
        total += args[i];
    }
    return total;
}
```

Now adding numbers and strings together is very simple and completely type-safe. If you attempt to mix the different parameter types, a compile error will occur. The first two of the following statements are legal calls to our add function; however, the third is not, because the objects being passed in are not of the same type:

```ts
alert(add("Hello ", "World!"));
alert(add(3, 5, 9, 120, 42));
// Error
alert(add(3, "World!"));
```

We are still very early in our exploration of TypeScript, but the benefits are already very apparent. There are still a few features of functions that we haven't covered yet, but we need to learn more about the language first. Next, we will discuss the interface construct and the benefits it provides with absolutely no cost.
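Before moving on, it is worth circling back to the type parameters mentioned earlier with a small sketch of a generic version of add. The function name is illustrative, and the casts to any are needed because the compiler cannot prove that + applies to an arbitrary T:

```ts
function addGeneric<T>(x: T, y: T): T {
    return <any>x + <any>y;
}

var n = addGeneric<number>(3, 5);        // explicit type argument
var s = addGeneric("Hello ", "World!");  // T inferred as string
// Error: the two arguments must share one type T
// var bad = addGeneric(3, "World!");
```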
Interfaces

Interfaces are a key piece of creating large-scale software applications. They are a way of representing complex types for any object. Despite their usefulness, they have absolutely no runtime consequences because JavaScript does not include any sort of runtime type checking. Interfaces are analyzed at compile time and then omitted from the resulting JavaScript. Interfaces create a contract for developers to use when developing new objects or writing methods to interact with existing ones. Interfaces are named types that contain a list of members. Let's look at an example of an interface:

```ts
interface IPoint {
    x: number;
    y: number;
}
```

As you can see, we use the interface keyword to start the interface declaration. Then we give the interface a name that we can easily reference from our code. Interfaces can be named anything, for example, foo or bar; however, a simple naming convention will improve the readability of the code. Interfaces will be given the format I<name> and object types will just use <name>, for example, IFoo and Foo. The interface's declaration body contains just a list of members and functions and their types. Interface members can only be instance members of an object. Using the static keyword in an interface declaration will result in a compile error.

Interfaces have the ability to inherit from base types. This interface inheritance allows us to extend existing interfaces into a more enhanced version, as well as merge separate interfaces together. To create an inheritance chain, interfaces use the extends clause. The extends clause is followed by a comma-separated list of types that the interface will merge with:

```ts
interface IAdder {
    add(arg1: number, ...args: number[]): number;
}
interface ISubtractor {
    subtract(arg1: number, ...args: number[]): number;
}
interface ICalculator extends IAdder, ISubtractor {
    multiply(arg1: number, ...args: number[]): number;
    divide(arg1: number, arg2: number): number;
}
```

Here, we see three interfaces:

- IAdder, which defines a type that must implement the add method that we wrote earlier
- ISubtractor, which defines a new method called subtract that any object typed with ISubtractor must define
- ICalculator, which extends both IAdder and ISubtractor, as well as defining two new methods that perform operations a calculator would be responsible for, which an adder or subtractor wouldn't perform

These interfaces can now be referenced in our code as type parameters or type declarations. Interfaces cannot be directly instantiated, and attempting to reference the members of an interface by using its type name directly will result in an error. In the following function declaration, the ICalculator interface is used to restrict the object type that can be passed to the function. The compiler can now examine the function body, infer all of the type information associated with the calculator parameter, and warn us if the object used does not implement this interface:

```ts
function performCalculations(calculator: ICalculator, num1, num2) {
    calculator.add(num1, num2);
    calculator.subtract(num1, num2);
    calculator.multiply(num1, num2);
    calculator.divide(num1, num2);
    return true;
}
```

The last thing that you need to know about interface definitions is that their declarations are open-ended and will implicitly merge together if they have the same type name. Our ICalculator interface could have been split into two separate declarations, with each one adding its own list of base types and its own list of members.
The resulting type definition from the following declaration is equivalent to the declaration we saw previously:

```ts
interface ICalculator extends IAdder {
    multiply(arg1: number, ...args: number[]): number;
}
interface ICalculator extends ISubtractor {
    divide(arg1: number, arg2: number): number;
}
```

Creating large-scale applications requires code that is flexible and reusable. Interfaces are a key component of keeping TypeScript as flexible as plain JavaScript, while allowing us to take advantage of the type checking provided at compile time. Your code doesn't have to be dependent on existing object types, and it will be ready for any new object types that might be introduced. The TypeScript compiler also implements a duck typing system that allows us to create objects on the fly while keeping type safety. The following example shows how we can pass objects that don't explicitly implement an interface, but contain all of the required members, to a function:

```ts
function addPoints(p1: IPoint, p2: IPoint): IPoint {
    var x = p1.x + p2.x;
    var y = p1.y + p2.y;
    return { x: x, y: y };
}
// Valid
var newPoint = addPoints({ x: 3, y: 4 }, { x: 5, y: 1 });
// Error
var newPoint2 = addPoints({ x: 1 }, { x: 4, y: 3 });
```

Classes

In the next version of JavaScript, ECMAScript 6, a standard has been proposed for the definition of classes. TypeScript brings this concept to the current versions of JavaScript. Classes consist of a variety of different properties and members. These members can be either public or private, and static or instance members.

Definitions

Creating classes in TypeScript is essentially the same as creating interfaces. Let's create a very simple Point class that keeps track of an x and a y position for us:

```ts
class Point {
    public x: number;
    public y: number;
    constructor(x: number, y = 0) {
        this.x = x;
        this.y = y;
    }
}
```

As you can see, defining a class is very simple: use the keyword class, provide a name for the new type, and then create a constructor for the object with any parameters you wish to provide upon creation. Our Point class requires two values that represent a location on a plane. The constructor is completely optional. If a constructor implementation is not provided, the compiler will automatically generate one that takes no parameters and initializes any instance members.

We provided a default value for the property y. This default value tells the compiler to generate an extra JavaScript statement compared to if we had only given it a type. It also allows TypeScript to treat parameters with default values as optional parameters. If the parameter is not provided, then the parameter's value is assigned to the default value you provide. This provides a simple method for ensuring that you are always operating on instantiated objects. The best part is that default values are available for all functions, not just constructors. Now let's examine the JavaScript output for the Point class:

```js
var Point = (function () {
    function Point(x, y) {
        if (typeof y === "undefined") { y = 0; }
        this.x = x;
        this.y = y;
    }
    return Point;
})();
```

As you can see, a new object is created and assigned to an anonymous function that initializes the definition of the Point class. As we will see later, any public methods or static members will be added to the inner Point function's prototype. JavaScript closures are a very important concept in understanding TypeScript. Classes, modules, and enums in TypeScript all compile into JavaScript closures.
Closures are actually a construct of the JavaScript language that provide a way of creating a private state for a specific segment of code. When a closure is created it contains two things: a function, and the state of the environment when the function was created. The function is returned to the caller of the closure and the state is used when the function is called. For more information about JavaScript closures and the module pattern visit http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html. The optional parameter was accounted for by checking its type and initializing it if a value is not available. You can also see that both x and y properties were added to the new instance and assigned to the values that were passed into the constructor. Summary This article has thoroughly discussed the different language constructs in TypeScript. Resources for Article: Further resources on this subject: Setting Up The Rig [Article] Making Your Code Better [Article] Working with Flexible Content Elements in TYPO3 Templates [Article]
Routing

Packt
16 Oct 2014
17 min read
In this article by Mitchel Kelonye, author of Mastering Ember.js, we will learn URL-based state management in Ember.js, which constitutes routing. Routing enables us to translate different states in our applications into URLs and vice-versa. It is a key concept in Ember.js that enables developers to easily separate application logic. It also enables users to link back to content in the application via the usual HTTP URLs. (For more resources related to this topic, see here.) We all know that in traditional web development, every request is linked by a URL that enables the server make a decision on the incoming request. Typical actions include sending back a resource file or JSON payload, redirecting the request to a different resource, or sending back an error response such as in the case of unauthorized access. Ember.js strives to preserve these ideas in the browser environment by enabling association between these URLs and state of the application. The main component that manages these states is the application router. It is responsible for restoring an application to a state matching the given URL. It also enables the user to navigate between the application's history as expected. The router is automatically created on application initialization and can be referenced as MyApplicationNamespace.Router. Before we proceed, we will be using the bundled sample to better understand this extremely convenient component. The sample is a simple implementation of the Contacts OS X application as shown in the following screenshot: It enables users to add new contacts as well as edit and delete existing ones. For simplicity, we won't support avatars but that could be an implementation exercise for the reader. We already mentioned some of the states in which this application can transition into. These states have to be registered in the same way server-side frameworks have URL dispatchers that backend programmers use to map URL patters to views. The article sample already illustrates how these possible states are defined:  // app.jsvar App = Ember.Application.create();App.Router.map(function() {this.resource('contacts', function(){this.route('new');this.resource('contact', {path: '/:contact_id'}, function(){this.route('edit');});});this.route('about');}); Notice that the already instantiated router was referenced as App.Router. Calling its map method gives the application an opportunity to register its possible states. In addition, two other methods are used to classify these states into routes and resources. Mapping URLs to routes When defining routes and resources, we are essentially mapping URLs to possible states in our application. As shown in the first code snippet, the router's map function takes a function as its only argument. Inside this function, we may define a resource using the corresponding method, which takes the following signature: this.resource(resourceName, options, function); The first argument specifies the name of the resource and coincidentally, the path to match the request URL. The next argument is optional and holds configurations that we may need to specify as we shall see later. The last one is a function that is used to define the routes of that particular resource. For example, the first defined resource in the samples says, let the contacts resource handle any requests whose URL start with /contacts. It also specifies one route, new, that is used to handle creation of new contacts. Routes on the other hand accept the same arguments for the function argument. 
You must be asking yourself, "So how are routes different from resources?" The two are essentially the same, other than the former offers a way to categorize states (routes) that perform actions on a specific entity. We can think of an Ember.js application as tree, composed of a trunk (the router), branches (resources), and leaves (routes). For example, the contact state (a resource) caters for a specific contact. This resource can be displayed in two modes: read and write; hence, the index and edit routes respectively, as shown: this.resource('contact', {path: '/:contact_id'}, function(){this.route('index'); // auto definedthis.route('edit');}); Because Ember.js encourages convention, there are two components of routes and resources that are always autodefined: A default application resource: This is the master resource into which all other resources are defined. We therefore did not need to define it in the router. It's not mandatory to define resources on every state. For example, our about state is a route because it only needs to display static content to the user. It can however be thought to be a route of the already autodefined application resource. A default index route on every resource: Again, every resource has a default index route. It's autodefined because an application cannot settle on a resource state. The application therefore uses this route if no other route within this same resource was intended to be used. Nesting resources Resources can be nested depending on the architecture of the application. In our case, we need to load contacts in the sidebar before displaying any of them to the user. Therefore, we need to define the contact resource inside the contacts. On the other hand, in an application such as Twitter, it won't make sense to define a tweet resource embedded inside a tweets resource because an extra overhead will be incurred when a user just wants to view a single tweet linked from an external application. Understanding the state transition cycle A request is handled in the same way water travels from the roots (the application), up the trunk, and is eventually lost off leaves. This request we are referring to is a change in the browser location that can be triggered in a number of ways. Before we proceed into finer details about routes, let's discuss what happened when the application was first loaded. On boot, a few things happened as outlined: The application first transitioned into the application state, then the index state. Next, the application index route redirected the request to the contacts resource. Our application uses the browsers local storage to store the contacts and so for demoing purposes, the contacts resource populated this store with fixtures (located at fixtures.js). The application then transitioned into the corresponding contacts resource index route, contacts.index. Again, here we made a few decisions based on whether our store contained any data in it. Since we indeed have data, we redirected the application into the contact resource, passing the ID of the first contact along. Just as in the two preceding resources, the application transitioned from this last resource into the corresponding index route, contact.index. 
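The redirect in the second step above can be expressed with a tiny route handler. As a sketch consistent with the sample (the sample's actual file may differ slightly), it looks like this:

App.IndexRoute = Ember.Route.extend({
    redirect: function() {
        // send any request for the application index straight to /contacts
        this.transitionTo('contacts');
    }
});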
The following figure gives a good view of the preceding state change: Configuring the router The router can be customized in the following ways: Logging state transitions Specifying the root app URL Changing browser location lookup method During development, it may be necessary to track the states into which the application transitions into. Enabling these logs is as simple as: var App = Ember.Application.create({LOG_TRANSITIONS: true}); As illustrated, we enable the LOG_TRANSITIONS flag when creating the application. If an application is not served at the root of the website domain, then it may be necessary to specify the path name used as in the following example: App.Router.reopen({rootURL: '/contacts/'}); One other modification we may need to make revolves around the techniques Ember.js uses to subscribe to the browser's location changes. This makes it possible for the router to do its job of transitioning the app into the matched URL state. Two of these methods are as follows: Subscribing to the hashchange event Using the history.pushState API The default technique used is provided by the HashLocation class documented at http://emberjs.com/api/classes/Ember.HashLocation.html. This means that URL paths are usually prefixed with the hash symbol, for example, /#/contacts/1/edit. The other one is provided by the HistoryLocation class located at http://emberjs.com/api/classes/Ember.HistoryLocation.html. This does not distinguish URLs from the traditional ones and can be enabled as: App.Router.reopen({location: 'history'}); We can also opt to let Ember.js pick which method is best suited for our app with the following code: App.Router.reopen({location: 'auto'}); If we don't need any of these techniques, we could opt to do so especially when performing tests: App.Router.reopen({location: none}); Specifying a route's path We now know that when defining a route or resource, the resource name used also serves as the path the router uses to match request URLs. Sometimes, it may be necessary to specify a different path to use to match states. There are two common reasons that may lead us to do this, the first of which is good for delegating route handling to another route. Although, we have not yet covered route handlers, we already mentioned that our application transitions from the application index route into the contacts.index state. We may however specify that the contacts route handler should manage this path as: this.resource('contacts', {path: '/'}, function(){}); Therefore, to specify an alternative path for a route, simply pass the desired route in a hash as the second argument during resource definition. This also applies when defining routes. The second reason would be when a resource contains dynamic segments. For example, our contact resource handles contacts who should obviously have different URLs linking back to them. Ember.js uses URL pattern matching techniques used by other open source projects such as Ruby on Rails, Sinatra, and Express.js. Therefore, our contact resource should be defined as: this.resource('contact', {path: '/:contact_id'}, function(){}); In the preceding snippet, /:contact_id is the dynamic segment that will be replaced by the actual contact's ID. One thing to note is that nested resources prefix their paths with those of parent resources. Therefore, the contact resource's full path would be /contacts/:contact_id. It's also worth noting that the name of the dynamic segment is not mandated and so we could have named the dynamic segment as /:id. 
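To tie these options together, here is a hypothetical router (the book and review names are invented for illustration) that combines nesting with a dynamic segment. Because nested resources prefix their paths with those of their parents, the inner route's full URL becomes /books/:book_id/reviews:

App.Router.map(function() {
    this.resource('books', function() {
        this.resource('book', {path: '/:book_id'}, function() {
            this.route('reviews'); // matches /books/:book_id/reviews
        });
    });
});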
Defining route and resource handlers

Now that we have defined all the possible states that our application can transition into, we need to define handlers for these states. From this point onwards, we will use the terms route and resource handlers interchangeably. A route handler performs the following major functions:

- Providing data (the model) to be used by the current state
- Specifying the view and/or template to use to render the provided data to the user
- Redirecting the application away into another state

Before we move into discussing these roles, we need to know that a route handler is defined from the Ember.Route class as:

App.RouteHandlerNameRoute = Ember.Route.extend();

This class is used to define handlers for both resources and routes, so the naming should not be a concern. Just as routes and resources are associated with paths and handlers, they are also associated with controllers, views, and templates using the Ember.js naming conventions. For example, when the application initializes, it enters into the application state and therefore the following objects are sought:

- The application route
- The application controller
- The application view
- The application template

In the spirit of doing more with reduced boilerplate code, Ember.js autogenerates these objects unless they are explicitly defined in order to override the default implementations. As another example, if we examine our application, we notice that the contact.edit route has a corresponding App.ContactEditController controller and contact/edit template. We did not need to define its route handler or view. Having seen this example, when referring to routes, we normally separate the resource name from the route name by a period, as in the following:

resourceName.routeName

In the case of templates, we may use a period or a forward slash:

resourceName/routeName

The other objects are usually camelized and suffixed by the class name:

ResourcenameRoutenameClassname

For example, the following table shows all the objects used. As mentioned earlier, some are autogenerated.

Route name     | Controller              | Route handler        | View               | Template
application    | ApplicationController   | ApplicationRoute     | ApplicationView    | application
index          | IndexController         | IndexRoute           | IndexView          | index
about          | AboutController         | AboutRoute           | AboutView          | about
contacts       | ContactsController      | ContactsRoute        | ContactsView       | contacts
contacts.index | ContactsIndexController | ContactsIndexRoute   | ContactsIndexView  | contacts/index
contacts.new   | ContactsNewController   | ContactsNewRoute     | ContactsNewView    | contacts/new
contact        | ContactController       | ContactRoute         | ContactView        | contact
contact.index  | ContactIndexController  | ContactIndexRoute    | ContactIndexView   | contact/index
contact.edit   | ContactEditController   | ContactEditRoute     | ContactEditView    | contact/edit

One thing to note is that objects associated with the intermediary application state do not need to carry the suffix; hence just index or about.

Specifying a route's model

We mentioned that route handlers provide controllers with the data needed to be displayed by templates. These handlers have a model hook that can be used to provide this data in the following format:

AppNamespace.RouteHandlerName = Ember.Route.extend({
    model: function(){}
});

For instance, the contacts route handler in the sample loads any saved contacts from local storage as:

model: function(){
    return App.Contact.find();
}

We have abstracted this logic into our App.Contact model. Notice how we reopen the class in order to define this static method.
A static method can only be called by the class of that method and not its instances: App.Contact.reopenClass({find: function(id){return (!!id)? App.Contact.findOne(id): App.Contact.findAll();},…}) If no arguments are passed to the method, it goes ahead and calls the findAll method, which uses the local storage helper to retrieve the contacts: findAll: function(){var contacts = store('contacts') || [];return contacts.map(function(contact){return App.Contact.create(contact);});} Because we want to deal with contact objects, we iteratively convert the contents of the loaded contact list. If we examine the corresponding template, contacts, we notice that we were able to populate the sidebar as shown in the following code: <ul class="nav nav-pills nav-stacked">{{#each model}}<li>{{#link-to "contact.index" this}}{{name}}{{/link-to}}</li>{{/each}}</ul> Do not worry about the template syntax at this point if you're new to Ember.js. The important thing to note is that the model was accessed via the model variable. Of course, before that, we check to see if the model has any content in: {{#if model.length}}...{{else}}<h1>Create contact</h1>{{/if}} As we shall see later, if the list was empty, the application would be forced to transition into the contacts.new state, in order for the user to add the first contact as shown in the following screenshot: The contact handler is a different case. Remember we mentioned that its path has a dynamic segment that would be passed to the handler. This information is passed to the model hook in an options hash as: App.ContactRoute = Ember.Route.extend({model: function(params){return App.Contact.find(params.contact_id);},...}); Notice that we are able to access the contact's ID via the contact_id attribute of the hash. This time, the find method calls the findOne static method of the contact's class, which performs a search for the contact matching the provided ID, as shown in the following code: findOne: function(id){var contacts = store('contacts') || [];var contact = contacts.find(function(contact){return contact.id == id;});if (!contact) return;return App.Contact.create(contact);} Serializing resources We've mentioned that Ember.js supports content to be linked back externally. Internally, Ember.js simplifies creating these links in templates. In our sample application, when the user selects a contact, the application transitions into the contact.index state, passing his/her ID along. This is possible through the use of the link-to handlebars expression: {{#link-to "contact.index" this}}{{name}}{{/link-to}} The important thing to note is that this expression enables us to construct a link that points to the said resource by passing the resource name and the affected model. The destination resource or route handler is responsible for yielding this path constituting serialization. To serialize a resource, we need to override the matching serialize hook as in the contact handler case shown in the following code: App.ContactRoute = Ember.Route.extend({...serialize: function(model, params){var data = {}data[params[0]] = Ember.get(model, 'id');return data;}}); Serialization means that the hook is supposed to return the values of all the specified segments. It receives two arguments, the first of which is the affected resource and the second is an array of all the specified segments during the resource definition. 
In our case, we only had one, and so we returned the required hash, which resembled the following code:

{contact_id: 1}

If we, for example, defined a resource with multiple segments like the following code:

this.resource('book', {path: '/name/:name/:publish_year'}, function(){});

The serialization hook would need to return something close to:

{name: 'jon+doe', publish_year: '1990'}

Asynchronous routing

In actual apps, we would often need to load the model data in an asynchronous fashion. There are various approaches that can be used to deliver this kind of data. The most robust way to load asynchronous data is through the use of promises. Promises are objects whose unknown value can be set at a later point in time. It is very easy to create promises in Ember.js. For example, if our contacts were located in a remote resource, we could use jQuery to load them as:

App.ContactsRoute = Ember.Route.extend({
    model: function(params){
        return Ember.$.getJSON('/contacts');
    }
});

jQuery's HTTP utilities also return promises that Ember.js can consume. Incidentally, jQuery can also be referenced as Ember.$ in an Ember.js application. In the preceding snippet, once the data is loaded, Ember.js would set it as the model of the resource. However, one thing is missing. We require that the loaded data be converted to the defined contact model, as shown in the following little modification:

App.ContactsRoute = Ember.Route.extend({
    model: function(params){
        var promise = Ember.Object.createWithMixins(Ember.DeferredMixin);
        // resolve is the success handler, so it must be passed first
        Ember.$.getJSON('/contacts').then(resolve, reject);
        function resolve(contacts){
            contacts = contacts.map(function(contact){
                return App.Contact.create(contact);
            });
            promise.resolve(contacts);
        }
        function reject(res){
            var err = new Error(res.responseText);
            promise.reject(err);
        }
        return promise;
    }
});

We first create the promise, kick off the XHR request, and then return the promise while the request is still being processed. Ember.js will resume routing once this promise is rejected or resolved. The XHR call also creates a promise, so we need to attach to it the then method, which essentially says: invoke the passed resolve or reject function on successful or failed load, respectively. The resolve function converts the loaded data and resolves the promise, passing the data along, thereby resuming routing. If the promise was rejected, the transition fails with an error. We will see how to handle this error in a moment. Note that there are two other flavors we can use to create promises in Ember.js, as shown in the following examples:

var promise = Ember.Deferred.create();
Ember.$.getJSON('/contacts').then(success, fail);
function success(contacts){
    contacts = contacts.map(function(contact){
        return App.Contact.create(contact);
    });
    promise.resolve(contacts);
}
function fail(res){
    var err = new Error(res.responseText);
    promise.reject(err);
}
return promise;

The second example is as follows:

return new Ember.RSVP.Promise(function(resolve, reject){
    Ember.$.getJSON('/contacts').then(success, fail);
    function success(contacts){
        contacts = contacts.map(function(contact){
            return App.Contact.create(contact);
        });
        resolve(contacts);
    }
    function fail(res){
        var err = new Error(res.responseText);
        reject(err);
    }
});

Summary

This article detailed how browser location-based state management is accomplished in Ember.js apps. We covered how to create a router, define resources and routes, define a route's model, and perform a redirect.
Resources for Article: Further resources on this subject: AngularJS Project [Article] Automating performance analysis with YSlow and PhantomJS [Article] AngularJS [Article]

Introduction to Custom Template Filters and Tags

Packt
13 Oct 2014
25 min read
This article is written by Aidas Bendoratis, the author of Web Development with Django Cookbook. In this article, we will cover the following recipes:

- Following conventions for your own template filters and tags
- Creating a template filter to show how many days have passed
- Creating a template filter to extract the first media object
- Creating a template filter to humanize URLs
- Creating a template tag to include a template if it exists
- Creating a template tag to load a QuerySet in a template
- Creating a template tag to parse content as a template
- Creating a template tag to modify request query parameters

As you know, Django has quite an extensive template system, with features such as template inheritance, filters for changing the representation of values, and tags for presentational logic. Moreover, Django allows you to add your own template filters and tags in your apps. Custom filters or tags should be located in a template-tag library file under the templatetags Python package in your app. Your template-tag library can then be loaded in any template with a {% load %} template tag. In this article, we will create several useful filters and tags that give more control to the template editors.

Following conventions for your own template filters and tags

Custom template filters and tags can become a total mess if you don't have persistent guidelines to follow. Template filters and tags should serve template editors as much as possible. They should be both handy and flexible. In this recipe, we will look at some conventions that should be used when enhancing the functionality of the Django template system.

How to do it...

Follow these conventions when extending the Django template system:

- Don't create or use custom template filters or tags when the logic for the page fits better in the view, context processors, or model methods. When your page is context-specific, such as a list of objects or an object-detail view, load the object in the view. If you need to show some content on every page, create a context processor. Use custom methods of the model instead of template filters when you need to get some properties of an object not related to the context of the template.
- Name the template-tag library with the _tags suffix. When your app is named differently than your template-tag library, you can avoid ambiguous package importing problems.
- In the newly created library, separate filters from tags, for example, by using comments such as the following:

# -*- coding: UTF-8 -*-
from django import template

register = template.Library()

### FILTERS ###
# .. your filters go here ..

### TAGS ###
# .. your tags go here ..

- Create template tags that are easy to remember by including the following constructs:
  - for [app_name.model_name]: Include this construct to use a specific model
  - using [template_name]: Include this construct to use a template for the output of the template tag
  - limit [count]: Include this construct to limit the results to a specific amount
  - as [context_variable]: Include this construct to save the results to a context variable that can be reused many times later
- Try to avoid multiple values defined positionally in template tags unless they are self-explanatory. Otherwise, this will likely confuse the template developers.
- Make as many arguments resolvable as possible. Strings without quotes should be treated as context variables that need to be resolved, or as short words that remind you of the structure of the template tag components.
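As a hypothetical illustration of these conventions (the library, tag, app, and model names here are all invented), a tag following the recommended constructs might be invoked like this:

{% load blog_tags %}
{% get_latest_entries for blog.Entry using "blog/includes/entry.html" limit 5 as latest_entries %}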
See also

- The Creating a template filter to show how many days have passed recipe
- The Creating a template filter to extract the first media object recipe
- The Creating a template filter to humanize URLs recipe
- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to parse content as a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template filter to show how many days have passed

Not all people keep track of the date, and when talking about creation or modification dates of cutting-edge information, for many of us it is more convenient to read the time difference, for example: the blog entry was posted three days ago, the news article was published today, and the user last logged in yesterday. In this recipe, we will create a template filter named days_since that converts dates to humanized time differences.

Getting ready

Create the utils app and put it under INSTALLED_APPS in the settings, if you haven't done that yet. Then, create a Python package named templatetags inside this app (Python packages are directories with an empty __init__.py file).

How to do it...

Create a utility_tags.py file with this content:

#utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from datetime import datetime
from django import template
from django.utils.translation import ugettext_lazy as _
from django.utils.timezone import now as tz_now

register = template.Library()

### FILTERS ###

@register.filter
def days_since(value):
    """ Returns number of days between today and value. """
    today = tz_now().date()
    if isinstance(value, datetime):
        value = value.date()
    diff = today - value
    if diff.days > 1:
        return _("%s days ago") % diff.days
    elif diff.days == 1:
        return _("yesterday")
    elif diff.days == 0:
        return _("today")
    else:
        # Date is in the future; return formatted date.
        return value.strftime("%B %d, %Y")

How it works...

If you use this filter in a template like the following, it will render something like yesterday or 5 days ago:

{% load utility_tags %}
{{ object.created|days_since }}

You can apply this filter to values of the date and datetime types. Each template-tag library has a register where filters and tags are collected. Django filters are functions registered by the register.filter decorator. By default, the filter in the template system will be named the same as the function or other callable object. If you want, you can set a different name for the filter by passing name to the decorator, as follows:

@register.filter(name="humanized_days_since")
def days_since(value):
    ...

The filter itself is quite self-explanatory. At first, the current date is read. If the given value of the filter is of the datetime type, the date is extracted. Then, the difference between today and the extracted value is calculated. Depending on the number of days, different string results are returned.

There's more...

This filter is easy to extend to also show the difference in time, such as just now, 7 minutes ago, or 3 hours ago. Just operate on the datetime values instead of the date values.

See also

- The Creating a template filter to extract the first media object recipe
- The Creating a template filter to humanize URLs recipe

Creating a template filter to extract the first media object

Imagine that you are developing a blog overview page, and for each post, you want to show images, music, or videos in that page taken from the content.
In such a case, you need to extract the <img>, <object>, and <embed> tags out of the HTML content of the post. In this recipe, we will see how to do this using regular expressions in the get_first_media filter.

Getting ready

We will start with the utils app that should be set in INSTALLED_APPS in the settings and the templatetags package inside this app.

How to do it...

In the utility_tags.py file, add the following content:

#utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
import re
from django import template
from django.utils.safestring import mark_safe

register = template.Library()

### FILTERS ###

media_file_regex = re.compile(r"<object .+?</object>|"
                              r"<(img|embed) [^>]+>")

@register.filter
def get_first_media(content):
    """ Returns the first image or flash file from the html content """
    m = media_file_regex.search(content)
    media_tag = ""
    if m:
        media_tag = m.group()
    return mark_safe(media_tag)

How it works...

Provided that the HTML content in the database is valid, when you put the following code in the template, it will retrieve the <object>, <img>, or <embed> tags from the content field of the object, or an empty string if no media is found there:

{% load utility_tags %}
{{ object.content|get_first_media }}

At first, we define the compiled regular expression as media_file_regex; then, in the filter, we perform a search for that regular expression pattern. By default, the result would show the <, >, and & symbols escaped as &lt;, &gt;, and &amp; entities. But we use the mark_safe function, which marks the result as safe HTML, ready to be shown in the template without escaping.

There's more...

It is very easy to extend this filter to also extract the <iframe> tags (which are more recently being used by Vimeo and YouTube for embedded videos) or the HTML5 <audio> and <video> tags. Just modify the regular expression like this:

media_file_regex = re.compile(r"<iframe .+?</iframe>|"
                              r"<audio .+?</audio>|<video .+?</video>|"
                              r"<object .+?</object>|<(img|embed) [^>]+>")

See also

- The Creating a template filter to show how many days have passed recipe
- The Creating a template filter to humanize URLs recipe

Creating a template filter to humanize URLs

Usually, common web users enter URLs into address fields without the protocol and trailing slashes. In this recipe, we will create a humanize_url filter used to present URLs to the user in a shorter format, truncating very long addresses, just like what Twitter does with the links in tweets.

Getting ready

As in the previous recipes, we will start with the utils app that should be set in INSTALLED_APPS in the settings, and should contain the templatetags package.

How to do it...

In the FILTERS section of the utility_tags.py template library in the utils app, let's add a filter named humanize_url and register it:

#utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
import re
from django import template

register = template.Library()

### FILTERS ###

@register.filter
def humanize_url(url, letter_count):
    """ Returns a shortened human-readable URL """
    letter_count = int(letter_count)
    re_start = re.compile(r"^https?://")
    re_end = re.compile(r"/$")
    url = re_end.sub("", re_start.sub("", url))
    if len(url) > letter_count:
        url = u"%s…" % url[:letter_count - 1]
    return url

How it works...
We can use the humanize_url filter in any template like this: {% load utility_tags %}<a href="{{ object.website }}" target="_blank">{{ object.website|humanize_url:30 }}</a> The filter uses regular expressions to remove the leading protocol and the trailing slash, and then shortens the URL to the given amount of letters, adding an ellipsis to the end if the URL doesn't fit into the specified letter count. See also The Creating a template filter to show how many days have passed recipe The Creating a template filter to extract the first media object recipe The Creating a template tag to include a template if it exists recipe Creating a template tag to include a template if it exists Django has the {% include %} template tag that renders and includes another template. However, in some particular situations, there is a problem that an error is raised if the template does not exist. In this recipe, we will show you how to create a {% try_to_include %} template tag that includes another template, but fails silently if there is no such template. Getting ready We will start again with the utils app that should be installed and is ready for custom template tags. How to do it... Template tags consist of two things: the function parsing the arguments of the template tag and the node class that is responsible for the logic of the template tag as well as for the output. Perform the following steps: First, let's create the function parsing the template-tag arguments: #utils/templatetags/utility_tags.py# -*- coding: UTF-8 -*-from django import templatefrom django.template.loader import get_templateregister = template.Library()### TAGS ###@register.tagdef try_to_include(parser, token):"""Usage: {% try_to_include "sometemplate.html" %}This will fail silently if the template doesn't exist.If it does, it will be rendered with the current context."""try:tag_name, template_name = token.split_contents()except ValueError:raise template.TemplateSyntaxError, "%r tag requires a single argument" % token.contents.split()[0]return IncludeNode(template_name) Then, we need the node class in the same file, as follows: class IncludeNode(template.Node):def __init__(self, template_name):self.template_name = template_namedef render(self, context):try:# Loading the template and rendering ittemplate_name = template.resolve_variable(self. template_name, context)included_template = get_template(template_name).render(context)except template.TemplateDoesNotExist:included_template = ""return included_template How it works... The {% try_to_include %} template tag expects one argument, that is, template_name. So, in the try_to_include function, we are trying to assign the split contents of the token only to the tag_name variable (which is "try_to_include") and the template_name variable. If this doesn't work, the template syntax error is raised. The function returns the IncludeNode object, which gets the template_name field for later usage. In the render method of IncludeNode, we resolve the template_name variable. If a context variable was passed to the template tag, then its value will be used here for template_name. If a quoted string was passed to the template tag, then the content within quotes will be used for template_name. Lastly, we try to load the template and render it with the current template context. If that doesn't work, an empty string is returned. 
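In its simplest form (the template path here is hypothetical), the tag degrades gracefully when the file is absent, rendering nothing instead of raising TemplateDoesNotExist:

{% load utility_tags %}
{% try_to_include "banners/promo.html" %}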
There are at least two situations where we could use this template tag:

- When including a template whose path is defined in a model, as follows:

{% load utility_tags %}
{% try_to_include object.template_path %}

- When including a template whose path is defined with the {% with %} template tag somewhere high in the template context variable's scope. This is especially useful when you need to create custom layouts for plugins in the placeholder of a template in Django CMS:

#templates/cms/start_page.html
{% with editorial_content_template_path="cms/plugins/editorial_content/start_page.html" %}
{% placeholder "main_content" %}
{% endwith %}

#templates/cms/plugins/editorial_content.html
{% load utility_tags %}
{% if editorial_content_template_path %}
{% try_to_include editorial_content_template_path %}
{% else %}
<div><!-- Some default presentation of editorial content plugin --></div>
{% endif %}

There's more...

You can use the {% try_to_include %} tag as well as the default {% include %} tag to include templates that extend other templates. This has a beneficial use for large-scale portals where you have different kinds of lists in which complex items share the same structure as widgets but have a different source of data. For example, in the artist list template, you can include the artist item template as follows:

{% load utility_tags %}
{% for object in object_list %}
{% try_to_include "artists/includes/artist_item.html" %}
{% endfor %}

This template will extend from the item base as follows:

{# templates/artists/includes/artist_item.html #}
{% extends "utils/includes/item_base.html" %}
{% block item_title %}{{ object.first_name }} {{ object.last_name }}{% endblock %}

The item base defines the markup for any item and also includes a Like widget, as follows:

{# templates/utils/includes/item_base.html #}
{% load likes_tags %}
<h3>{% block item_title %}{% endblock %}</h3>
{% if request.user.is_authenticated %}
{% like_widget for object %}
{% endif %}

See also

- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to parse content as a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template tag to load a QuerySet in a template

Most often, the content that should be shown in a web page will have to be defined in the view. If this is content to show on every page, it is logical to create a context processor. Another situation is when you need to show additional content, such as the latest news or a random quote, on some specific pages, for example, the start page or the details page of an object. In this case, you can load the necessary content with the {% get_objects %} template tag, which we will implement in this recipe.

Getting ready

Once again, we will start with the utils app that should be installed and ready for custom template tags.

How to do it...

Template tags consist of a function parsing the arguments passed to the tag and a node class that renders the output of the tag or modifies the template context.
Perform the following steps: First, let's create the function parsing the template-tag arguments, as follows: #utils/templatetags/utility_tags.py# -*- coding: UTF-8 -*-from django.db import modelsfrom django import templateregister = template.Library()### TAGS ###@register.tagdef get_objects(parser, token):"""Gets a queryset of objects of the model specified by appandmodel namesUsage:{% get_objects [<manager>.]<method> from<app_name>.<model_name> [limit <amount>] as<var_name> %}Example:{% get_objects latest_published from people.Personlimit 3 as people %}{% get_objects site_objects.all from news.Articlelimit 3 as articles %}{% get_objects site_objects.all from news.Articleas articles %}"""amount = Nonetry:tag_name, manager_method, str_from, appmodel,str_limit,amount, str_as, var_name = token.split_contents()except ValueError:try:tag_name, manager_method, str_from, appmodel, str_as,var_name = token.split_contents()except ValueError:raise template.TemplateSyntaxError, "get_objects tag requires a following syntax: ""{% get_objects [<manager>.]<method> from ""<app_ name>.<model_name>"" [limit <amount>] as <var_name> %}"try:app_name, model_name = appmodel.split(".")except ValueError:raise template.TemplateSyntaxError, "get_objects tag requires application name and ""model name separated by a dot"model = models.get_model(app_name, model_name)return ObjectsNode(model, manager_method, amount, var_name) Then, we create the node class in the same file, as follows: class ObjectsNode(template.Node):def __init__(self, model, manager_method, amount, var_name):self.model = modelself.manager_method = manager_methodself.amount = amountself.var_name = var_namedef render(self, context):if "." in self.manager_method:manager, method = self.manager_method.split(".")else:manager = "_default_manager"method = self.manager_methodqs = getattr(getattr(self.model, manager),method,self.model._default_manager.none,)()if self.amount:amount = template.resolve_variable(self.amount,context)context[self.var_name] = qs[:amount]else:context[self.var_name] = qsreturn "" How it works... The {% get_objects %} template tag loads a QuerySet defined by the manager method from a specified app and model, limits the result to the specified amount, and saves the result to a context variable. This is the simplest example of how to use the template tag that we have just created. It will load five news articles in any template using the following snippet: {% load utility_tags %}{% get_objects all from news.Article limit 5 as latest_articles %}{% for article in latest_articles %}<a href="{{ article.get_url_path }}">{{ article.title }}</a>{% endfor %} This is using the all method of the default objects manager of the Article model, and will sort the articles by the ordering attribute defined in the Meta class. A more advanced example would be required to create a custom manager with a custom method to query objects from the database. A manager is an interface that provides database query operations to models. Each model has at least one manager called objects by default. 
As an example, let's create the Artist model, which has a draft or published status, and a new manager, custom_manager, which allows you to select random published artists:

#artists/models.py
# -*- coding: UTF-8 -*-
from django.db import models
from django.utils.translation import ugettext_lazy as _

STATUS_CHOICES = (
    ('draft', _("Draft")),
    ('published', _("Published")),
)

class ArtistManager(models.Manager):
    def random_published(self):
        return self.filter(status="published").order_by('?')

class Artist(models.Model):
    # ...
    status = models.CharField(_("Status"), max_length=20, choices=STATUS_CHOICES)
    custom_manager = ArtistManager()

To load a random published artist, you add the following snippet to any template:

{% load utility_tags %}
{% get_objects custom_manager.random_published from artists.Artist limit 1 as random_artists %}
{% for artist in random_artists %}
{{ artist.first_name }} {{ artist.last_name }}
{% endfor %}

Let's look at the code of the template tag. In the parsing function, one of two formats is expected: with the limit and without it. The string is parsed, the model is recognized, and then the components of the template tag are passed to the ObjectsNode class. In the render method of the node class, we check the manager's name and its method's name. If this is not defined, _default_manager will be used, which is, in most cases, the same as objects. After that, we call the manager method, and fall back to an empty QuerySet if the method doesn't exist. If the limit is defined, we resolve its value and limit the QuerySet. Lastly, we save the QuerySet to the context variable.

See also

- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to parse content as a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template tag to parse content as a template

In this recipe, we will create a template tag named {% parse %}, which allows you to put template snippets into the database. This is valuable when you want to provide different content for authenticated and non-authenticated users, when you want to include a personalized salutation, or when you don't want to hardcode media paths in the database.

Getting ready

No surprise, we will start with the utils app that should be installed and ready for custom template tags.

How to do it...

Template tags consist of two things: the function parsing the arguments of the template tag and the node class that is responsible for the logic of the template tag as well as for the output.
Perform the following steps: First, let's create the function parsing the template-tag arguments, as follows: #utils/templatetags/utility_tags.py# -*- coding: UTF-8 -*-from django import templateregister = template.Library()### TAGS ###@register.tagdef parse(parser, token):"""Parses the value as a template and prints it or saves to avariableUsage:{% parse <template_value> [as <variable>] %}Examples:{% parse object.description %}{% parse header as header %}{% parse "{{ MEDIA_URL }}js/" as js_url %}"""bits = token.split_contents()tag_name = bits.pop(0)try:template_value = bits.pop(0)var_name = Noneif len(bits) == 2:bits.pop(0) # remove the word "as"var_name = bits.pop(0)except ValueError:raise template.TemplateSyntaxError, "parse tag requires a following syntax: ""{% parse <template_value> [as <variable>] %}"return ParseNode(template_value, var_name) Then, we create the node class in the same file, as follows: class ParseNode(template.Node):def __init__(self, template_value, var_name):self.template_value = template_valueself.var_name = var_namedef render(self, context):template_value = template.resolve_variable(self.template_value, context)t = template.Template(template_value)context_vars = {}for d in list(context):for var, val in d.items():context_vars[var] = valresult = t.render(template.RequestContext(context['request'], context_vars))if self.var_name:context[self.var_name] = resultreturn ""return result How it works... The {% parse %} template tag allows you to parse a value as a template and to render it immediately or to save it as a context variable. If we have an object with a description field, which can contain template variables or logic, then we can parse it and render it using the following code: {% load utility_tags %}{% parse object.description %} It is also possible to define a value to parse using a quoted string like this: {% load utility_tags %}{% parse "{{ STATIC_URL }}site/img/" as img_path %}<img src="{{ img_path }}someimage.png" alt="" /> Let's have a look at the code of the template tag. The parsing function checks the arguments of the template tag bit by bit. At first, we expect the name parse, then the template value, then optionally the word as, and lastly the context variable name. The template value and the variable name are passed to the ParseNode class. The render method of that class at first resolves the value of the template variable and creates a template object out of it. Then, it renders the template with all the context variables. If the variable name is defined, the result is saved to it; otherwise, the result is shown immediately. See also The Creating a template tag to include a template if it exists recipe The Creating a template tag to load a QuerySet in a template recipe The Creating a template tag to modify request query parameters recipe Creating a template tag to modify request query parameters Django has a convenient and flexible system to create canonical, clean URLs just by adding regular expression rules in the URL configuration files. But there is a lack of built-in mechanisms to manage query parameters. Views such as search or filterable object lists need to accept query parameters to drill down through filtered results using another parameter or to go to another page. In this recipe, we will create a template tag named {% append_to_query %}, which lets you add, change, or remove parameters of the current query. 
Getting ready Once again, we start with the utils app that should be set in INSTALLED_APPS and should contain the templatetags package. Also, make sure that you have the request context processor set for the TEMPLATE_CONTEXT_PROCESSORS setting, as follows: #settings.pyTEMPLATE_CONTEXT_PROCESSORS = ("django.contrib.auth.context_processors.auth","django.core.context_processors.debug","django.core.context_processors.i18n","django.core.context_processors.media","django.core.context_processors.static","django.core.context_processors.tz","django.contrib.messages.context_processors.messages","django.core.context_processors.request",) How to do it... For this template tag, we will be using the simple_tag decorator that parses the components and requires you to define just the rendering function, as follows: #utils/templatetags/utility_tags.py# -*- coding: UTF-8 -*-import urllibfrom django import templatefrom django.utils.encoding import force_strregister = template.Library()### TAGS ###@register.simple_tag(takes_context=True)def append_to_query(context, **kwargs):""" Renders a link with modified current query parameters """query_params = context['request'].GET.copy()for key, value in kwargs.items():query_params[key] = valuequery_string = u""if len(query_params):query_string += u"?%s" % urllib.urlencode([(key, force_str(value)) for (key, value) inquery_params. iteritems() if value]).replace('&', '&amp;')return query_string How it works... The {% append_to_query %} template tag reads the current query parameters from the request.GET dictionary-like QueryDict object to a new dictionary named query_params, and loops through the keyword parameters passed to the template tag updating the values. Then, the new query string is formed, all spaces and special characters are URL-encoded, and ampersands connecting query parameters are escaped. This new query string is returned to the template. To read more about QueryDict objects, refer to the official Django documentation: https://docs.djangoproject.com/en/1.6/ref/request-response/#querydict-objects Let's have a look at an example of how the {% append_to_query %} template tag can be used. If the current URL is http://127.0.0.1:8000/artists/?category=fine-art&page=1, we can use the following template tag to render a link that goes to the next page: {% load utility_tags %}<a href="{% append_to_query page=2 %}">2</a> The following is the output rendered, using the preceding template tag: <a href="?category=fine-art&amp;page=2">2</a> Or we can use the following template tag to render a link that resets pagination and goes to another category: {% load utility_tags i18n %} <a href="{% append_to_query category="sculpture" page="" %}">{% trans "Sculpture" %}</a> The following is the output rendered, using the preceding template tag: <a href="?category=sculpture">Sculpture</a> See also The Creating a template tag to include a template if it exists recipe The Creating a template tag to load a QuerySet in a template recipe The Creating a template tag to parse content as a template recipe Summary In this article showed you how to create and use your own template filters and tags, as the default Django template system is quite extensive, and there are more things to add for different cases. Resources for Article: Further resources on this subject: Adding a developer with Django forms [Article] So, what is Django? [Article] Django JavaScript Integration: jQuery In-place Editing Using Ajax [Article]

How to Create a Flappy Bird Clone with MelonJS

Ellison Leao
26 Sep 2014
18 min read
How to create a Flappy Bird clone using MelonJS Web game frameworks such as MelonJS are becoming more popular every day. In this post I will show you how easy it is to create a Flappy Bird clone game using the MelonJS bag of tricks. I will assume that you have some experience with JavaScript and that you have visited the melonJS official page. All of the code shown in this post is available on this GitHub repository. Step 1 - Organization A MelonJS game can be divided into three basic objects: Scene objects: Define all of the game scenes (Play, Menus, Game Over, High Score, and so on) Game entities: Add all of the stuff that interacts on the game (Players, enemies, collectables, and so on) Hud entities: All of the HUD objects to be inserted on the scenes (Life, Score, Pause buttons, and so on) For our Flappy Bird game, first create a directory, flappybirdgame, on your machine. Then create the following structure: flabbybirdgame | |--js |--|--entities |--|--screens |--|--game.js |--|--resources.js |--data |--|--img |--|--bgm |--|--sfx |--lib |--index.html Just a quick explanation about the folders: The js contains all of the game source. The entities folder will handle the HUD and the Game entities. In the screen folder, we will create all of the scene files. The game.js is the main game file. It will initialize all of the game resources, which is created in the resources.js file, the input, and the loading of the first scene. The data folder is where all of the assets, sounds, and game themes are inserted. I divided the folders into img for images (backgrounds, player atlas, menus, and so on), bgm for background music files (we need to provide a .ogg and .mp3 file for each sound if we want full compatibility with all browsers) and sfx for sound effects. In the lib folder we will add the current 1.0.2 version of MelonJS. Lastly, an index.html file is used to build the canvas. Step 2 - Implementation First we will build the game.js file: var game = { data: { score : 0, steps: 0, start: false, newHiScore: false, muted: false }, "onload": function() { if (!me.video.init("screen", 900, 600, true, 'auto')) { alert("Your browser does not support HTML5 canvas."); return; } me.audio.init("mp3,ogg"); me.loader.onload = this.loaded.bind(this); me.loader.preload(game.resources); me.state.change(me.state.LOADING); }, "loaded": function() { me.state.set(me.state.MENU, new game.TitleScreen()); me.state.set(me.state.PLAY, new game.PlayScreen()); me.state.set(me.state.GAME_OVER, new game.GameOverScreen()); me.input.bindKey(me.input.KEY.SPACE, "fly", true); me.input.bindKey(me.input.KEY.M, "mute", true); me.input.bindPointer(me.input.KEY.SPACE); me.pool.register("clumsy", BirdEntity); me.pool.register("pipe", PipeEntity, true); me.pool.register("hit", HitEntity, true); // in melonJS 1.0.0, viewport size is set to Infinity by default me.game.viewport.setBounds(0, 0, 900, 600); me.state.change(me.state.MENU); } }; The game.js is divided into: data object: This global object will handle all of the global variables that will be used on the game. For our game we will use score to record the player score, and steps to record how far the bird goes. The other variables are flags that we are using to control some game states. onload method: This method preloads the resources and initializes the canvas screen and then calls the loaded method when it's done. loaded method: This method first creates and puts into the state stack the screens that we will use on the game. 
We will use the implementation for these screens later on. It enables all of the input keys to handle the game. For our game we will be using the space and left mouse keys to control the bird and the M key to mute sound. It also adds the game entities BirdEntity, PipeEntity and the HitEntity in the game poll. I will explain the entities later. Then you need to create the resource.js file: game.resources = [ {name: "bg", type:"image", src: "data/img/bg.png"}, {name: "clumsy", type:"image", src: "data/img/clumsy.png"}, {name: "pipe", type:"image", src: "data/img/pipe.png"}, {name: "logo", type:"image", src: "data/img/logo.png"}, {name: "ground", type:"image", src: "data/img/ground.png"}, {name: "gameover", type:"image", src: "data/img/gameover.png"}, {name: "gameoverbg", type:"image", src: "data/img/gameoverbg.png"}, {name: "hit", type:"image", src: "data/img/hit.png"}, {name: "getready", type:"image", src: "data/img/getready.png"}, {name: "new", type:"image", src: "data/img/new.png"}, {name: "share", type:"image", src: "data/img/share.png"}, {name: "tweet", type:"image", src: "data/img/tweet.png"}, {name: "leader", type:"image", src: "data/img/leader.png"}, {name: "theme", type: "audio", src: "data/bgm/"}, {name: "hit", type: "audio", src: "data/sfx/"}, {name: "lose", type: "audio", src: "data/sfx/"}, {name: "wing", type: "audio", src: "data/sfx/"}, ]; Now let's create the game entities. First the HUD elements: create a HUD.js file in the entities folder. In this file you will create: A score entity A background layer entity The share buttons entities (Facebook, Twitter, and so on) game.HUD = game.HUD || {}; game.HUD.Container = me.ObjectContainer.extend({ init: function() { // call the constructor this.parent(); // persistent across level change this.isPersistent = true; // non collidable this.collidable = false; // make sure our object is always draw first this.z = Infinity; // give a name this.name = "HUD"; // add our child score object at the top left corner this.addChild(new game.HUD.ScoreItem(5, 5)); } }); game.HUD.ScoreItem = me.Renderable.extend({ init: function(x, y) { // call the parent constructor // (size does not matter here) this.parent(new me.Vector2d(x, y), 10, 10); // local copy of the global score this.stepsFont = new me.Font('gamefont', 80, '#000', 'center'); // make sure we use screen coordinates this.floating = true; }, update: function() { return true; }, draw: function (context) { if (game.data.start && me.state.isCurrent(me.state.PLAY)) this.stepsFont.draw(context, game.data.steps, me.video.getWidth()/2, 10); } }); var BackgroundLayer = me.ImageLayer.extend({ init: function(image, z, speed) { name = image; width = 900; height = 600; ratio = 1; // call parent constructor this.parent(name, width, height, image, z, ratio); }, update: function() { if (me.input.isKeyPressed('mute')) { game.data.muted = !game.data.muted; if (game.data.muted){ me.audio.disable(); }else{ me.audio.enable(); } } return true; } }); var Share = me.GUI_Object.extend({ init: function(x, y) { var settings = {}; settings.image = "share"; settings.spritewidth = 150; settings.spriteheight = 75; this.parent(x, y, settings); }, onClick: function(event) { var shareText = 'Just made ' + game.data.steps + ' steps on Clumsy Bird! Can you beat me? 
Try online here!'; var url = 'http://ellisonleao.github.io/clumsy-bird/'; FB.ui( { method: 'feed', name: 'My Clumsy Bird Score!', caption: "Share to your friends", description: ( shareText ), link: url, picture: 'http://ellisonleao.github.io/clumsy-bird/data/img/clumsy.png' } ); return false; } }); var Tweet = me.GUI_Object.extend({ init: function(x, y) { var settings = {}; settings.image = "tweet"; settings.spritewidth = 152; settings.spriteheight = 75; this.parent(x, y, settings); }, onClick: function(event) { var shareText = 'Just made ' + game.data.steps + ' steps on Clumsy Bird! Can you beat me? Try online here!'; var url = 'http://ellisonleao.github.io/clumsy-bird/'; var hashtags = 'clumsybird,melonjs' window.open('https://twitter.com/intent/tweet?text=' + shareText + '&hashtags=' + hashtags + '&count=' + url + '&url=' + url, 'Tweet!', 'height=300,width=400') return false; } }); You should notice that there are different me classes for different types of entities. The ScoreItem is a Renderable object that is created under an ObjectContainer and it will render the game steps on the play screen that we will create later. The share and Tweet buttons are created with the GUI_Object class. This class implements the onClick event that handles click events used to create the share events. The BackgroundLayer is a particular object created using the ImageLayer class. This class controls some generic image layers that can be used in the game. In our particular case we are just using a single fixed image, with fixed ratio and no scrolling. Now to the game entities. For this game we will need: BirdEntity: The bird and its behavior PipeEntity: The pipe object HitEntity: A invisible entity just to get the steps counting PipeGenerator: Will handle the PipeEntity creation Ground: A entity for the ground TheGround: The animated ground Container Add an entities.js file into the entities folder: var BirdEntity = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('clumsy'); settings.width = 85; settings.height = 60; settings.spritewidth = 85; settings.spriteheight= 60; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 0.2; this.gravityForce = 0.01; this.maxAngleRotation = Number.prototype.degToRad(30); this.maxAngleRotationDown = Number.prototype.degToRad(90); this.renderable.addAnimation("flying", [0, 1, 2]); this.renderable.addAnimation("idle", [0]); this.renderable.setCurrentAnimation("flying"); this.animationController = 0; // manually add a rectangular collision shape this.addShape(new me.Rect(new me.Vector2d(5, 5), 70, 50)); // a tween object for the flying physic effect this.flyTween = new me.Tween(this.pos); this.flyTween.easing(me.Tween.Easing.Exponential.InOut); }, update: function(dt) { // mechanics if (game.data.start) { if (me.input.isKeyPressed('fly')) { me.audio.play('wing'); this.gravityForce = 0.01; var currentPos = this.pos.y; // stop the previous one this.flyTween.stop() this.flyTween.to({y: currentPos - 72}, 100); this.flyTween.start(); this.renderable.angle = -this.maxAngleRotation; } else { this.gravityForce += 0.2; this.pos.y += me.timer.tick * this.gravityForce; this.renderable.angle += Number.prototype.degToRad(3) * me.timer.tick; if (this.renderable.angle > this.maxAngleRotationDown) this.renderable.angle = this.maxAngleRotationDown; } } var res = me.game.world.collide(this); if (res) { if (res.obj.type != 'hit') { me.device.vibrate(500); me.state.change(me.state.GAME_OVER); return false; } // remove the 
hit box me.game.world.removeChildNow(res.obj); // the give dt parameter to the update function // give the time in ms since last frame // use it instead ? game.data.steps++; me.audio.play('hit'); } else { var hitGround = me.game.viewport.height - (96 + 60); var hitSky = -80; // bird height + 20px if (this.pos.y >= hitGround || this.pos.y <= hitSky) { me.state.change(me.state.GAME_OVER); return false; } } return this.parent(dt); }, }); var PipeEntity = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('pipe'); settings.width = 148; settings.height= 1664; settings.spritewidth = 148; settings.spriteheight= 1664; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 5; this.updateTime = false; }, update: function(dt) { // mechanics this.pos.add(new me.Vector2d(-this.gravity * me.timer.tick, 0)); if (this.pos.x < -148) { me.game.world.removeChild(this); } return true; }, }); var PipeGenerator = me.Renderable.extend({ init: function() { this.parent(new me.Vector2d(), me.game.viewport.width, me.game.viewport.height); this.alwaysUpdate = true; this.generate = 0; this.pipeFrequency = 92; this.pipeHoleSize = 1240; this.posX = me.game.viewport.width; }, update: function(dt) { if (this.generate++ % this.pipeFrequency == 0) { var posY = Number.prototype.random( me.video.getHeight() - 100, 200 ); var posY2 = posY - me.video.getHeight() - this.pipeHoleSize; var pipe1 = new me.pool.pull("pipe", this.posX, posY); var pipe2 = new me.pool.pull("pipe", this.posX, posY2); var hitPos = posY - 100; var hit = new me.pool.pull("hit", this.posX, hitPos); pipe1.renderable.flipY(); me.game.world.addChild(pipe1, 10); me.game.world.addChild(pipe2, 10); me.game.world.addChild(hit, 11); } return true; }, }); var HitEntity = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('hit'); settings.width = 148; settings.height= 60; settings.spritewidth = 148; settings.spriteheight= 60; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 5; this.updateTime = false; this.type = 'hit'; this.renderable.alpha = 0; this.ac = new me.Vector2d(-this.gravity, 0); }, update: function() { // mechanics this.pos.add(this.ac); if (this.pos.x < -148) { me.game.world.removeChild(this); } return true; }, }); var Ground = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('ground'); settings.width = 900; settings.height= 96; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 0; this.updateTime = false; this.accel = new me.Vector2d(-4, 0); }, update: function() { // mechanics this.pos.add(this.accel); if (this.pos.x < -this.renderable.width) { this.pos.x = me.video.getWidth() - 10; } return true; }, }); var TheGround = Object.extend({ init: function() { this.ground1 = new Ground(0, me.video.getHeight() - 96); this.ground2 = new Ground(me.video.getWidth(), me.video.getHeight() - 96); me.game.world.addChild(this.ground1, 11); me.game.world.addChild(this.ground2, 11); }, update: function () { return true; } }) Note that every game entity inherits from the me.ObjectEntity class. We need to pass the settings of the entity on the init method, telling it which image we will use from the resources along with the image measure. We also implement the update method for each Entity, telling it how it will behave during game time. Now we need to create our scenes. 
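Before building the scenes, here is the entity pattern just described, reduced to a minimal sketch. The entity name (CoinEntity) and the 'coin' image are hypothetical, used only for illustration; they are not part of the Clumsy Bird code:

var CoinEntity = me.ObjectEntity.extend({
    init: function(x, y) {
        // settings tell melonJS which image to use and its measurements
        var settings = {};
        settings.image = me.loader.getImage('coin'); // hypothetical sprite
        settings.width = 32;
        settings.height = 32;
        this.parent(x, y, settings);
        this.alwaysUpdate = true;
    },
    update: function(dt) {
        // per-frame behavior (movement, collision checks, and so on) goes here
        return true;
    }
});

Every entity in this game follows that same shape: pass the settings in init, then describe the per-frame behavior in update.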
The game is divided into: TitleScreen PlayScreen GameOverScreen We will separate the scenes into js files. First create a title.js file in the screens folder: game.TitleScreen = me.ScreenObject.extend({ init: function(){ this.font = null; }, onResetEvent: function() { me.audio.stop("theme"); game.data.newHiScore = false; me.game.world.addChild(new BackgroundLayer('bg', 1)); me.input.bindKey(me.input.KEY.ENTER, "enter", true); me.input.bindKey(me.input.KEY.SPACE, "enter", true); me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.ENTER); this.handler = me.event.subscribe(me.event.KEYDOWN, function (action, keyCode, edge) { if (action === "enter") { me.state.change(me.state.PLAY); } }); //logo var logoImg = me.loader.getImage('logo'); var logo = new me.SpriteObject ( me.game.viewport.width/2 - 170, -logoImg, logoImg ); me.game.world.addChild(logo, 10); var logoTween = new me.Tween(logo.pos).to({y: me.game.viewport.height/2 - 100}, 1000).easing(me.Tween.Easing.Exponential.InOut).start(); this.ground = new TheGround(); me.game.world.addChild(this.ground, 11); me.game.world.addChild(new (me.Renderable.extend ({ // constructor init: function() { // size does not matter, it's just to avoid having a zero size // renderable this.parent(new me.Vector2d(), 100, 100); //this.font = new me.Font('Arial Black', 20, 'black', 'left'); this.text = me.device.touch ? 'Tap to start' : 'PRESS SPACE OR CLICK LEFT MOUSE BUTTON TO START ntttttttttttPRESS "M" TO MUTE SOUND'; this.font = new me.Font('gamefont', 20, '#000'); }, update: function () { return true; }, draw: function (context) { var measure = this.font.measureText(context, this.text); this.font.draw(context, this.text, me.game.viewport.width/2 - measure.width/2, me.game.viewport.height/2 + 50); } })), 12); }, onDestroyEvent: function() { // unregister the event me.event.unsubscribe(this.handler); me.input.unbindKey(me.input.KEY.ENTER); me.input.unbindKey(me.input.KEY.SPACE); me.input.unbindPointer(me.input.mouse.LEFT); me.game.world.removeChild(this.ground); } }); Then, create a play.js file on the same folder: game.PlayScreen = me.ScreenObject.extend({ init: function() { me.audio.play("theme", true); // lower audio volume on firefox browser var vol = me.device.ua.contains("Firefox") ? 
0.3 : 0.5; me.audio.setVolume(vol); this.parent(this); }, onResetEvent: function() { me.audio.stop("theme"); if (!game.data.muted){ me.audio.play("theme", true); } me.input.bindKey(me.input.KEY.SPACE, "fly", true); game.data.score = 0; game.data.steps = 0; game.data.start = false; game.data.newHiscore = false; me.game.world.addChild(new BackgroundLayer('bg', 1)); this.ground = new TheGround(); me.game.world.addChild(this.ground, 11); this.HUD = new game.HUD.Container(); me.game.world.addChild(this.HUD); this.bird = me.pool.pull("clumsy", 60, me.game.viewport.height/2 - 100); me.game.world.addChild(this.bird, 10); //inputs me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.SPACE); this.getReady = new me.SpriteObject( me.video.getWidth()/2 - 200, me.video.getHeight()/2 - 100, me.loader.getImage('getready') ); me.game.world.addChild(this.getReady, 11); var fadeOut = new me.Tween(this.getReady).to({alpha: 0}, 2000) .easing(me.Tween.Easing.Linear.None) .onComplete(function() { game.data.start = true; me.game.world.addChild(new PipeGenerator(), 0); }).start(); }, onDestroyEvent: function() { me.audio.stopTrack('theme'); // free the stored instance this.HUD = null; this.bird = null; me.input.unbindKey(me.input.KEY.SPACE); me.input.unbindPointer(me.input.mouse.LEFT); } }); Finally, the gameover.js screen: game.GameOverScreen = me.ScreenObject.extend({ init: function() { this.savedData = null; this.handler = null; }, onResetEvent: function() { me.audio.play("lose"); //save section this.savedData = { score: game.data.score, steps: game.data.steps }; me.save.add(this.savedData); // clay.io if (game.data.score > 0) { me.plugin.clay.leaderboard('clumsy'); } if (!me.save.topSteps) me.save.add({topSteps: game.data.steps}); if (game.data.steps > me.save.topSteps) { me.save.topSteps = game.data.steps; game.data.newHiScore = true; } me.input.bindKey(me.input.KEY.ENTER, "enter", true); me.input.bindKey(me.input.KEY.SPACE, "enter", false) me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.ENTER); this.handler = me.event.subscribe(me.event.KEYDOWN, function (action, keyCode, edge) { if (action === "enter") { me.state.change(me.state.MENU); } }); var gImage = me.loader.getImage('gameover'); me.game.world.addChild(new me.SpriteObject( me.video.getWidth()/2 - gImage.width/2, me.video.getHeight()/2 - gImage.height/2 - 100, gImage ), 12); var gImageBoard = me.loader.getImage('gameoverbg'); me.game.world.addChild(new me.SpriteObject( me.video.getWidth()/2 - gImageBoard.width/2, me.video.getHeight()/2 - gImageBoard.height/2, gImageBoard ), 10); me.game.world.addChild(new BackgroundLayer('bg', 1)); this.ground = new TheGround(); me.game.world.addChild(this.ground, 11); // share button var buttonsHeight = me.video.getHeight() / 2 + 200; this.share = new Share(me.video.getWidth()/3 - 100, buttonsHeight); me.game.world.addChild(this.share, 12); //tweet button this.tweet = new Tweet(this.share.pos.x + 170, buttonsHeight); me.game.world.addChild(this.tweet, 12); //leaderboard button this.leader = new Leader(this.tweet.pos.x + 170, buttonsHeight); me.game.world.addChild(this.leader, 12); // add the dialog witht he game information if (game.data.newHiScore) { var newRect = new me.SpriteObject( 235, 355, me.loader.getImage('new') ); me.game.world.addChild(newRect, 12); } this.dialog = new (me.Renderable.extend({ // constructor init: function() { // size does not matter, it's just to avoid having a zero size // renderable this.parent(new me.Vector2d(), 100, 100); this.font = new me.Font('gamefont', 40, 'black', 
'left'); this.steps = 'Steps: ' + game.data.steps.toString(); this.topSteps= 'Higher Step: ' + me.save.topSteps.toString(); }, update: function () { return true; }, draw: function (context) { var stepsText = this.font.measureText(context, this.steps); var topStepsText = this.font.measureText(context, this.topSteps); var scoreText = this.font.measureText(context, this.score); //steps this.font.draw( context, this.steps, me.game.viewport.width/2 - stepsText.width/2 - 60, me.game.viewport.height/2 ); //top score this.font.draw( context, this.topSteps, me.game.viewport.width/2 - stepsText.width/2 - 60, me.game.viewport.height/2 + 50 ); } })); me.game.world.addChild(this.dialog, 12); }, onDestroyEvent: function() { // unregister the event me.event.unsubscribe(this.handler); me.input.unbindKey(me.input.KEY.ENTER); me.input.unbindKey(me.input.KEY.SPACE); me.input.unbindPointer(me.input.mouse.LEFT); me.game.world.removeChild(this.ground); this.font = null; me.audio.stop("theme"); } });  Here is how the ScreenObjects works: First it calls the init constructor method for any variable initialization. onResetEvent is called next. This method will be called every time the scene is called. In our case the onResetEvent will add some objects to the game world stack. The onDestroyEvent acts like a garbage collector and unregisters bind events and removes some elements on the draw calls. Now, let's put it all together in the index.html file: <!DOCTYPE HTML> <html lang="en"> <head> <title>Clumsy Bird</title> </head> <body> <!-- the facebook init for the share button --> <div id="fb-root"></div> <script> window.fbAsyncInit = function() { FB.init({ appId : '213642148840283', status : true, xfbml : true }); }; (function(d, s, id){ var js, fjs = d.getElementsByTagName(s)[0]; if (d.getElementById(id)) {return;} js = d.createElement(s); js.id = id; js.src = "//connect.facebook.net/pt_BR/all.js"; fjs.parentNode.insertBefore(js, fjs); }(document, 'script', 'facebook-jssdk')); </script> <!-- Canvas placeholder --> <div id="screen"></div> <!-- melonJS Library --> <script type="text/javascript" src="lib/melonJS-1.0.2.js" ></script> <script type="text/javascript" src="js/entities/HUD.js" ></script> <script type="text/javascript" src="js/entities/entities.js" ></script> <script type="text/javascript" src="js/screens/title.js" ></script> <script type="text/javascript" src="js/screens/play.js" ></script> <script type="text/javascript" src="js/screens/gameover.js" ></script> </body> </html> Step 3 - Flying! To run our game we will need a web server of your choice. If you have Python installed, you can simply type the following in your shell: $python -m SimpleHTTPServer Then you can open your browser at http://localhost:8000. If all went well, you will see the title screen after it loads, like in the following image: I hope you enjoyed this post!  About this author Ellison Leão (@ellisonleao) is a passionate software engineer with more than 6 years of experience in web projects and is a contributor to the MelonJS framework and other open source projects. When he is not writing games, he loves to play drums.
Interfacing React Components with Angular Applications

Patrick Marabeas
26 Sep 2014
10 min read
There's been talk lately of using React as the view within Angular's MVC architecture. Angular, as we all know, uses dirty checking. As I'll touch on later, it accepts the fact of (minor) performance loss to gain the great two-way data binding it has. React, on the other hand, uses a virtual DOM and only renders the difference. This results in very fast performance. So, how do we leverage React's performance from our Angular application? Can we retain two-way data flow? And just how significant is the performance increase? The nrg module and demo code can be found over on my GitHub. The application To demonstrate communication between the two frameworks, let's build a reusable Angular module (nrg[Angular(ng) + React(r) = energy(nrg)!]) which will render (and re-render) a React component when our model changes. The React component will be composed of aninputandpelement that will display our model and will also update the model on change. To show this, we'll add aninputandpto our view bound to the model. In essence, changes to either input should result in all elements being kept in sync. We'll also add a button to our component that will demonstrate component unmounting on scope destruction. ;( ;(function(window, document, angular, undefined) { 'use strict'; angular.module('app', ['nrg']) .controller('MainController', ['$scope', function($scope) { $scope.text = 'This is set in Angular'; $scope.destroy = function() { $scope.$destroy(); } }]); })(window, document, angular); data-component specifies the React component we want to mount.data-ctrl (optional) specifies the controller we want to inject into the directive—this will allow specific components to be accessible onscope itself rather than scope.$parent.data-ng-model is the model we are going to pass between our Angular controller and our React view. <div data-ng-controller="MainController"> <!-- React component --> <div data-component="reactComponent" data-ctrl="" data-ng-model="text"> <!-- <input /> --> <!-- <button></button> --> <!-- <p></p> --> </div> <!-- Angular view --> <input type="text" data-ng-model="text" /> <p>{{text}}</p> </div> As you can see, the view has meaning when using Angular to render React components.<div data-component="reactComponent" data-ctrl="" data-ng-model="text"></div> has meaning when compared to<div id="reactComponent"></div>,which requires referencing a script file to see what component (and settings) will be mounted on that element. The Angular module - nrg.js The main functions of this reusable Angular module will be to: Specify the DOM element that the component should be mounted onto. Render the React component when changes have been made to the model. Pass the scope and element attributes to the component. Unmount the React component when the Angular scope is destroyed. The skeleton of our module looks like this: ;(function(window, document, angular, React, undefined) { 'use strict'; angular.module('nrg', []) To keep our code modular and extensible, we'll create a factory that will house our component functions, which are currently justrender and unmount . .factory('ComponentFactory', [function() { return { render: function() { }, unmount: function() { } } }]) This will be injected into our directive. .directive('component', ['$controller', 'ComponentFactory', function($controller, ComponentFactory) { return { restrict: 'EA', If a controller has been specified on the elements viadata-ctrl , then inject the$controller service. 
As mentioned earlier, this will allow scope variables and functions to be used within the React component to be accessible directly onscope , rather thanscope.$parent (the controller also doesn't need to be declared in the view withng-controller ). controller: function($scope, $element, $attrs){ return ($attrs.ctrl) ? $controller($attrs.ctrl, {$scope:$scope, $element:$element, $attrs:$attrs}) : null; }, Here’s an isolated scope with two-way-binding ondata-ng-model . scope: { ngModel: '=' }, link: function(scope, element, attrs) { // Calling ComponentFactory.render() & watching ng-model } } }]); })(window, document, angular, React); ComponentFactory Fleshing out theComponentFactory , we'll need to know how to render and unmount components. React.renderComponent( ReactComponent component, DOMElement container, [function callback] ) As such, we'll need to pass the component we wish to mount (component), the container we want to mount it in (element) and any properties (attrsandscope) we wish to pass to the component. This render function will be called every time the model is updated, so the updated scope will be pushed through each time. According to the React documentation, "If the React component was previously rendered into container, this (React.renderComponent) will perform an update on it and only mutate the DOM as necessary to reflect the latest React component." .factory('ComponentFactory', [function() { return { render: function(component, element, scope, attrs) { // If you have name-spaced your components, you'll want to specify that here - or pass it in via an attribute etc React.renderComponent(window[component]({ scope: scope, attrs: attrs }), element[0]); }, unmount: function(element) { React.unmountComponentAtNode(element[0]); } } }]) Component directive Back in our directive, we can now set up when we are going to call these two functions. link: function(scope, element, attrs) { // Collect the elements attrs in a nice usable object var attributes = {}; angular.forEach(element[0].attributes, function(a) { attributes[a.name.replace('data-','')] = a.value; }); // Render the component when the directive loads ComponentFactory.render(attrs.component, element, scope, attributes); // Watch the model and re-render the component scope.$watch('ngModel', function() { ComponentFactory.render(attrs.component, element, scope, attributes); }, true); // Unmount the component when the scope is destroyed scope.$on('$destroy', function () { ComponentFactory.unmount(element); }); } This implements dirty checking to see if the model has been updated. I haven't played around too much to see if there's a notable difference in performance between this and using a broadcast/listener. That said, to get a listener working as expected, you will need to wrap the render call in a $timeout to push it to the bottom of the stack to ensure scope is updated. scope.$on('renderMe', function() { $timeout(function() { ComponentFactory.render(attrs.component, element, scope, attributes); }); }); The React component We can now build our React component, which will use the model we defined as well as inform Angular of any updates it performs. /** @jsx React.DOM */ ;(function(window, document, React, undefined) { 'use strict'; window.reactComponent = React.createClass({ This is the content that will be rendered into the container. The properties that we passed to the component ({ scope: scope, attrs: attrs }) when we called React.renderComponent back in our component directive are now accessible via this.props. 
render: function(){ return ( <div> <input type='text' value={this.props.scope.ngModel} onChange={this.handleChange} /> <button onClick={this.deleteScope}>Destroy Scope</button> <p>{this.props.scope.ngModel}</p> </div> ) }, Via the on Change   event, we can call for Angular to run a digest, just as we normally would, but accessing scope via this.props : handleChange: function(event) { var _this = this; this.props.scope.$apply(function() { _this.props.scope.ngModel = event.target.value; }); }, Here we deal with the click event deleteScope  . The controller is accessible via scope.$parent  . If we had injected a controller into the component directive, its contents would be accessible directly on scope  , just as ngModel is.     deleteScope: function() { this.props.scope.$parent.destroy(); } }); })(window, document, React); The result Putting this code together (you can view the completed code on GitHub, or see it in action) we end up with: Two input elements, both of which update the model. Any changes in either our Angular application or our React view will be reflected in both. A React component button that calls a function in our MainController, destroying the scope and also resulting in the unmounting of the component. Pretty cool. But where is my perf increase!? This is obviously too small an application for anything to be gained by throwing your view over to React. To demonstrate just how much faster applications can be (by using React as the view), we'll throw a kitchen sink worth of randomly generated data at it. 5000 bits to be precise. Now, it should be stated that you probably have a pretty questionable UI if you have this much data binding going on. Misko Hevery has a great response regarding Angular's performance on StackOverflow. In summary: Humans are: Slow: Anything faster than 50ms is imperceptible to humans and thus can be considered as "instant". Limited: You can't really show more than about 2000 pieces of information to a human on a single page. Anything more than that is really bad UI, and humans can't process this anyway. Basically, know Angular's limits and your user's limits! That said, the following performance test was certainly accentuated on mobile devices. Though, on the flip side, UI should be simpler on mobile. Brute force performance demonstration ;(function(window, document, angular, undefined) { 'use strict'; angular.module('app') .controller('NumberController', ['$scope', function($scope) { $scope.numbers = []; ($scope.numGen = function(){ for(var i = 0; i < 5000; i++) { $scope.numbers[i] = Math.floor(Math.random() * (999999999999999 - 1)) + 1; } })(); }]); })(window, document, angular); Angular ng-repeat <div data-ng-controller="NumberController"> <button ng-click="numGen()">Refresh Data</button> <table> <tr ng-repeat="number in numbers"> <td>{{number}}</td> </tr> </table> </div> There was definitely lag felt as the numbers were loaded in and refreshed. From start to finish, this took around 1.5 seconds. React component <div data-ng-controller="NumberController"> <button ng-click="numGen()">Refresh Data</button> <div data-component="numberComponent" data-ng-model="numbers"></div> </div> ;(function(window, document, React, undefined) { window.numberComponent = React.createClass({ render: function() { var rows = this.props.scope.ngModel.map(function(number) { return ( <tr> <td>{number}</td> </tr> ); }); return ( <table>{rows}</table> ); } }); })(window, document, React); So that just happened. 270 milliseconds start to finish. Around 80% faster! 
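If you want to reproduce rough timings like these yourself, the browser console is enough. This is only a sketch; the timedRefresh helper is hypothetical and not part of the demo code, and the numbers it produces are approximate:

// hypothetical helper inside NumberController
$scope.timedRefresh = function() {
    console.time('refresh');
    $scope.numGen();
    // requestAnimationFrame fires after the digest and DOM updates,
    // just before the next paint, so this approximates the full refresh
    requestAnimationFrame(function() {
        console.timeEnd('refresh');
    });
};

Bind it to the button with ng-click="timedRefresh()" and compare the logged times for the ng-repeat and React versions.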
Conclusion

So, should you go rewrite all those Angular modules as React components? Probably not. It really comes down to the application you are developing and how dependent you are on OSS. It's definitely possible that a handful of complex modules could put your application in the realm of "feeling a tad sluggish", but it should be remembered that perceived performance is all that matters to the user. Altering the manner in which content is loaded could end up being a better investment of time. Users will feel performance increases sooner on mobile websites, however, which is certainly something to keep in mind. The nrg module and demo code can be found over on my GitHub. Visit our JavaScript page for more JavaScript content and tutorials!

About the author

A guest post by Patrick Marabeas, a freelance frontend developer who loves learning and working with cutting edge web technologies. He spends much of his free time developing Angular modules, such as ng-FitText, ng-Slider, ng-YouTubeAPI, and ng-ScrollSpy. You can follow him on Twitter: @patrickmarabeas.
Using Socket.IO and Express together

Packt
23 Sep 2014
16 min read
In this article by Joshua Johanan, the author of the book Building Scalable Apps with Redis and Node.js, we learn that the Express application is just the foundation. We are going to add features until it is a fully usable app. We can currently serve web pages and respond to HTTP, but now we want to add real-time communication. It's very fortunate that we just spent most of this article learning about Socket.IO; it does just that! Let's see how we are going to integrate Socket.IO with an Express application.

We are going to use Express and Socket.IO side by side. Socket.IO does not use HTTP like a web application. It is event based, not request based. This means that Socket.IO will not interfere with the Express routes that we have set up, and that's a good thing. The bad thing is that we will not have access in Socket.IO to all the middleware that we set up for Express. There are some frameworks that combine the two, but they still have to convert the request from Express into something that Socket.IO can use. I am not trying to knock down these frameworks. They simplify a complex problem and most importantly, they do it well (Sails is a great example of this). Our app, though, is going to keep Socket.IO and Express separated as much as possible with the least number of dependencies. We know that Socket.IO does not need Express, as none of our earlier examples used Express in any way. This has an added benefit in that we can break off our Socket.IO module and run it as its own application at a future point in time. The other great benefit is that we learn how to do it ourselves.

We need to go into the directory where our Express application is. Make sure that our package.json has all the additional packages for this article and run npm install. The first thing we need to do is add our configuration settings.

Adding Socket.IO to the config

We will use the same config file that we created for our Express app. Open up config.js and change the file to what I have done in the following code:

var config = {
    port: 3000,
    secret: 'secret',
    redisPort: 6379,
    redisHost: 'localhost',
    routes: {
        login: '/account/login',
        logout: '/account/logout'
    }
};
module.exports = config;

We are adding two new attributes, redisPort and redisHost. This is because of how the redis package configures its clients. We are also removing the redisUrl attribute. We can configure all our clients with just these two Redis config options.

Next, create a directory under the root of our project named socket.io. Then, create a file called index.js. This will be where we initialize Socket.IO and wire up all our event listeners and emitters. We are just going to use one namespace for our application. If we were to add multiple namespaces, I would just add them as files underneath the socket.io directory.

Open up app.js and change the following lines in it:

//variable declarations at the top
var io = require('./socket.io');

//after all the middleware and routes
var server = app.listen(config.port);
io.startIo(server);

We will define the startIo function shortly, but let's talk about our app.listen change. Previously, we executed app.listen and did not capture the return value in a variable; now we do. Socket.IO listens using Node's http.createServer. It does this automatically if you pass a number into its listen function. When Express executes app.listen, it returns an instance of the HTTP server. We capture that, and now we can pass the HTTP server to Socket.IO's listen function.
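To make that relationship concrete, here is roughly what app.listen does under the hood (a sketch for illustration; our app does not need this code, since app.listen already returns the server):

var http = require('http');
// var server = app.listen(config.port) is roughly equivalent to:
var server = http.createServer(app);
server.listen(config.port);
// either way, server is an http.Server instance,
// which is exactly what we hand to io.startIo(server)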
Let's create that startIo function. Open up index.js in the socket.io directory and add the following lines of code to it:

var io = require('socket.io');
var config = require('../config');

var socketConnection = function socketConnection(socket){
    socket.emit('message', {message: 'Hey!'});
};

exports.startIo = function startIo(server){
    io = io.listen(server);
    var packtchat = io.of('/packtchat');
    packtchat.on('connection', socketConnection);
    return io;
};

We are exporting the startIo function, which expects a server object that goes right into Socket.IO's listen function. This should start Socket.IO serving. Next, we get a reference to our namespace and listen on the connection event, sending a message event back to the client. We are also loading our configuration settings.

Let's add some code to the layout and see whether our application has real-time communication. We will need the Socket.IO client library, so link to it from node_modules like you have been doing, and put it in our static directory under a newly created js directory. Open layout.ejs, present in packtchat/views, and add the following lines to it:

<!-- put these right before the body end tag -->
<script type="text/javascript" src="/js/socket.io.js"></script>
<script>
    var socket = io.connect("http://localhost:3000/packtchat");
    socket.on('message', function(d){
        console.log(d);
    });
</script>

We just listen for a message event and log it to the console. Fire up Node and load your application at http://localhost:3000. Check to see whether you get a message in your console; you should see your message logged there.

Success! Our application now has real-time communication. We are not done though. We still have to wire up all the events for our app.

Who are you?

There is one glaring issue. How do we know who is making the requests? Express has middleware that parses the session to see if someone has logged in. Socket.IO does not even know about a session. Socket.IO lets anyone connect who knows the URL. We do not want anonymous connections that can listen to all our events and send events to the server. We only want authenticated users to be able to create a WebSocket. We need to get Socket.IO access to our sessions.

Authorization in Socket.IO

We haven't discussed it yet, but Socket.IO has middleware. Before the connection event gets fired, we can execute a function and either allow the connection or deny it. This is exactly what we need.

Using the authorization handler

Authorization can happen at two places, on the default namespace or on a named namespace connection. Both authorizations happen through the handshake. The function's signature is the same either way. It will pass in the socket server, which has some stuff we need, such as the connection's headers, for example. For now, we will add a simple authorization function to see how it works with Socket.IO.
Open up index.js, present in packtchat/socket.io, and add a new function that will sit next to the socketConnection function, as seen in the following code:

var io = require('socket.io');

var socketAuth = function socketAuth(socket, next){
    return next(); // allow the connection
    return next(new Error('Nothing Defined')); // deny it (unreachable until the line above is commented out)
};

var socketConnection = function socketConnection(socket){
    socket.emit('message', {message: 'Hey!'});
};

exports.startIo = function startIo(server){
    io = io.listen(server);
    var packtchat = io.of('/packtchat');
    packtchat.use(socketAuth);
    packtchat.on('connection', socketConnection);
    return io;
};

I know that there are two returns in this function. We are going to comment one out, load the site, and then switch which line is commented out. The socket server that is passed in will have a reference to the handshake data that we will use shortly. The next function works just like it does in Express. If we execute it without anything, the middleware chain will continue. If it is executed with an error, it will stop the chain. Let's load up our site and test both by switching which return gets executed. We can allow or deny connections as we please now, but how do we know who is trying to connect?

Cookies and sessions

We will do it the same way Express does. We will look at the cookies that are passed and see if there is a session. If there is a session, then we will load it up and see what is in it. At this point, we should have the same knowledge about the Socket.IO connection that Express has about a request.

The first thing we need to do is get a cookie parser. We will use a very aptly named package called cookie. This should already be installed if you updated your package.json and installed all the packages. Add a reference to it at the top of index.js, present in packtchat/socket.io, with all the other variable declarations:

var cookie = require('cookie');

And now we can parse our cookies. Socket.IO passes in the cookie with the socket object in our middleware. Here is how we parse it. Add the following code in the socketAuth function:

var handshakeData = socket.request;
var parsedCookie = cookie.parse(handshakeData.headers.cookie);

At this point, we will have an object that has our connect.sid in it. Remember that this is a signed value. We cannot use it as it is right now to get the session ID. We will need to parse this signed cookie. This is where cookie-parser comes in. We will now create a reference to it, as follows:

var cookieParser = require('cookie-parser');

We can now parse the signed connect.sid cookie to get our session ID. Add the following code right after our parsing code:

var sid = cookieParser.signedCookie(parsedCookie['connect.sid'], config.secret);

This will take the value from parsedCookie and, using our secret passphrase, return the unsigned value. We will do a quick check to make sure this was a valid signed cookie by comparing the unsigned value to the original. We will do this in the following way:

if (parsedCookie['connect.sid'] === sid)
    return next(new Error('Not Authenticated'));

This check will make sure we are only using valid signed session IDs.

Getting the session

We now have a session ID, so we can query Redis and get the session out. The default session store object of Express is extended by connect-redis. To use connect-redis, we use the same session package as we did with Express, express-session.
The following code is used to create all this in index.js, present in packtchat/socket.io:

//at the top with the other variable declarations
var expressSession = require('express-session');
var ConnectRedis = require('connect-redis')(expressSession);
var redisSession = new ConnectRedis({host: config.redisHost, port: config.redisPort});

The final line creates the object that will connect to Redis and get our session. This is the same command used with Express when setting the store option for the session. We can now get the session from Redis and see what's inside of it. What follows is the entire socketAuth function along with all our variable declarations:

var io = require('socket.io'),
    cookie = require('cookie'),
    cookieParser = require('cookie-parser'),
    expressSession = require('express-session'),
    ConnectRedis = require('connect-redis')(expressSession),
    redis = require('redis'),
    config = require('../config'),
    redisSession = new ConnectRedis({host: config.redisHost, port: config.redisPort});

var socketAuth = function socketAuth(socket, next){
    var handshakeData = socket.request;
    var parsedCookie = cookie.parse(handshakeData.headers.cookie);
    var sid = cookieParser.signedCookie(parsedCookie['connect.sid'], config.secret);
    if (parsedCookie['connect.sid'] === sid)
        return next(new Error('Not Authenticated'));
    redisSession.get(sid, function(err, session){
        if (session.isAuthenticated)
        {
            socket.user = session.user;
            socket.sid = sid;
            return next();
        }
        else
            return next(new Error('Not Authenticated'));
    });
};

We can use redisSession and sid to get the session out of Redis and check its attributes. As far as our packages are concerned, we are just another Express app getting session data. Once we have the session data, we check the isAuthenticated attribute. If it's true, we know the user is logged in. If not, we do not let them connect yet.

We are adding properties to the socket object to store information from the session. Later on, after a connection is made, we can get this information. As an example, we are going to change our socketConnection function to send the user object to the client. The following should be our socketConnection function:

var socketConnection = function socketConnection(socket){
    socket.emit('message', {message: 'Hey!'});
    socket.emit('message', socket.user);
};

Now, let's load up our browser and go to http://localhost:3000. Log in and then check the browser's console. The client should now be receiving the messages.

Adding application-specific events

The next thing to do is to build out all the real-time events that Socket.IO is going to listen for and respond to. We are just going to create the skeleton for each of these listeners. Open up index.js, present in packtchat/socket.io, and change the entire socketConnection function to the following code:

var socketConnection = function socketConnection(socket){
    socket.on('GetMe', function(){});
    socket.on('GetUser', function(room){});
    socket.on('GetChat', function(data){});
    socket.on('AddChat', function(chat){});
    socket.on('GetRoom', function(){});
    socket.on('AddRoom', function(r){});
    socket.on('disconnect', function(){});
};

Most of our emit events will happen in response to a listener.

Using Redis as the store for Socket.IO

The final thing we are going to add is to switch Socket.IO's internal store to Redis. By default, Socket.IO uses a memory store to save any data you attach to a socket. As we know now, we cannot have an application state that is stored only on one server. We need to store it in Redis.
Therefore, we add it to index.js, present in packtchat/socket.io. Add the following line to the variable declarations:

var redisAdapter = require('socket.io-redis');

An application state is a flexible idea. We can store the application state locally. This is done when the state does not need to be shared. A simple example is keeping the path to a local temp file. When the data will be needed by multiple connections, it must be put into a shared space. Anything with a user's session will need to be shared, for example.

The next thing we need to do is add some code to our startIo function. The following code is what our startIo function should look like:

exports.startIo = function startIo(server){
    io = io.listen(server);
    io.adapter(redisAdapter({host: config.redisHost, port: config.redisPort}));
    var packtchat = io.of('/packtchat');
    packtchat.use(socketAuth);
    packtchat.on('connection', socketConnection);
    return io;
};

The first thing is to start the server listening. Next, we call io.adapter, passing it a redisAdapter configured with our Redis host and port. Socket.IO will now publish and subscribe through Redis rather than its default in-memory adapter, so events can reach sockets connected to any of our servers.

Socket.IO inner workings

We are not going to completely dive into everything that Socket.IO does, but we will discuss a few topics.

WebSockets

This is what makes Socket.IO work. All web servers serve HTTP; that is what makes them web servers. This works great when all you want to do is serve pages. These pages are served based on requests. The browser must ask for information before receiving it. If you want to have real-time connections, though, it is difficult and requires some workarounds. HTTP was not designed to have the server initiate the request. This is where WebSockets come in.

WebSockets allow the server and client to create a connection and keep it open. Inside of this connection, either side can send messages back and forth. This is what Socket.IO (technically, Engine.io) leverages to create real-time communication.

Socket.IO even has fallbacks if you are using a browser that does not support WebSockets. The browsers that do support WebSockets at the time of writing include the latest versions of Chrome, Firefox, Safari, Safari on iOS, Opera, and IE 11. This means the browsers that do not support WebSockets are all the older versions of IE. Socket.IO will use different techniques to simulate a WebSocket connection. This involves creating an Ajax request and keeping the connection open for a long time. If data needs to be sent, it will be sent in an Ajax request. Eventually, that request will close and the client will immediately create another request. Socket.IO even has an Adobe Flash implementation if you have to support really old browsers (IE 6, for example). It is not enabled by default.

WebSockets are also a little different when it comes to scaling our application. Because each WebSocket creates a persistent connection, we may need more servers to handle Socket.IO traffic than regular HTTP. For example, when someone connects and chats for an hour, there will have been only one or two HTTP requests. In contrast, a WebSocket will have to be open for the entire hour. The way our code base is written, we can easily scale up more Socket.IO servers by themselves.

Ideas to take away from this article

The first takeaway is that for every emit, there needs to be an on. This is true whether the sender is the server or the client.
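As a minimal illustration of that pairing, take the GetRoom listener from our skeleton. The RoomList reply event and the rooms variable are hypothetical, added here only to show the shape:

// server: an emit in response to a listener
socket.on('GetRoom', function(){
    socket.emit('RoomList', rooms); // hypothetical reply event
});

// client: without a matching on, the reply is silently dropped
socket.on('RoomList', function(rooms){
    console.log(rooms);
});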
It is always best to sit down and map out each event and which direction it is going. The next idea of note is building our app out of loosely coupled modules. Our app.js kicks off everything that deals with Express. Then it fires the startIo function. While it does pass over a server object, we could easily create one and use that. Socket.IO just wants a basic HTTP server. In fact, you can just pass the port, which is what we used in our first couple of Socket.IO applications (Ping-Pong). If we wanted to create an application layer of Socket.IO servers, we could refactor this code out and have all the Socket.IO servers run separately from Express.

Summary

At this point, we should feel comfortable about using real-time events in Socket.IO. We should also know how to namespace our io server and create groups of users. We also learned how to authorize socket connections to only allow logged-in users to connect.