
How-To Tutorials - Front-End Web Development


Working with Charts

Packt
07 Sep 2015
14 min read
In this article by Anand Dayalan, the author of Ext JS 6 By Example, we explore the different types of chart components in Ext JS and end with a sample project called the expense analyzer. The following topics will be covered:

- Chart types
- Bar and column charts
- Area and line charts
- Pie charts
- 3D charts
- The expense analyzer – a sample project

(For more resources related to this topic, see here.)

Charts

Ext JS is almost a one-stop shop for all your JavaScript framework needs. Yes, Ext JS also includes charts along with all the other rich components you have learned about so far.

Chart types

There are three types of charts: cartesian, polar, and spacefilling.

The cartesian chart

Ext.chart.CartesianChart (xtype: cartesian or chart)

A cartesian chart has two directions: X and Y. By default, X is horizontal and Y is vertical. Charts that use cartesian coordinates are column, bar, area, line, and scatter.

The polar chart

Ext.chart.PolarChart (xtype: polar)

These charts have two axes: angular and radial. Charts that plot values using polar coordinates are pie and radar.

The spacefilling chart

Ext.chart.SpaceFillingChart (xtype: spacefilling)

These charts fill the complete area of the chart.

Bar and column charts

For bar and column charts, at a minimum, you need to provide a store, axes, and series.

The basic column chart

Let's start with a simple basic column chart. First, let's create a simple store with inline hardcoded data as follows:

```
Ext.define('MyApp.model.Population', {
    extend: 'Ext.data.Model',
    fields: ['year', 'population']
});

Ext.define('MyApp.store.Population', {
    extend: 'Ext.data.Store',
    storeId: 'population',
    model: 'MyApp.model.Population',
    data: [
        { "year": "1610", "population": 350 },
        { "year": "1650", "population": 50368 },
        { "year": "1700", "population": 250888 },
        { "year": "1750", "population": 1170760 },
        { "year": "1800", "population": 5308483 },
        { "year": "1900", "population": 76212168 },
        { "year": "1950", "population": 151325798 },
        { "year": "2000", "population": 281421906 },
        { "year": "2010", "population": 308745538 }
    ]
});

var store = Ext.create("MyApp.store.Population");
```

Now, let's create the chart using Ext.chart.CartesianChart (xtype: cartesian or chart) and use the store created above:

```
Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 500,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'chart',
        insetPadding: { top: 60, bottom: 20, left: 20, right: 40 },
        store: store,
        axes: [{
            type: 'numeric',
            position: 'left',
            grid: true,
            title: { text: 'Population in Millions', fontSize: 16 }
        }, {
            type: 'category',
            title: { text: 'Year', fontSize: 16 },
            position: 'bottom'
        }],
        series: [{
            type: 'bar',
            xField: 'year',
            yField: ['population']
        }],
        sprites: {
            type: 'text',
            text: 'United States Population',
            font: '25px Helvetica',
            width: 120,
            height: 35,
            x: 100,
            y: 40
        }
    }]
});
```

Important things to note in the preceding code are axes, series, and sprites. Axes can be one of three types: numeric, time, and category. In the series, you can see that the type is set to bar. In Ext JS, to render a column or bar chart you specify the type as bar; if you want a bar chart, you also have to set flipXY to true in the chart config. The sprites config used here is quite straightforward, and sprites are optional. The grid property can be specified for both axes, although we have specified it only for one axis here. The insetPadding is used to specify the padding for the chart to render other information, such as the title.
If we don't specify insetPadding, the title and other information may overlap with the chart. The output of the preceding code is shown here:

The bar chart

As mentioned before, in order to get the bar chart, you can use the same code, but set flipXY to true and change the positions of the axes accordingly, as shown in the following code:

```
Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 500,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'chart',
        flipXY: true,
        insetPadding: { top: 60, bottom: 20, left: 20, right: 40 },
        store: store,
        axes: [{
            type: 'numeric',
            position: 'bottom',
            grid: true,
            title: { text: 'Population in Millions', fontSize: 16 }
        }, {
            type: 'category',
            title: { text: 'Year', fontSize: 16 },
            position: 'left'
        }],
        series: [{
            type: 'bar',
            xField: 'year',
            yField: ['population']
        }],
        sprites: {
            type: 'text',
            text: 'United States Population',
            font: '25px Helvetica',
            width: 120,
            height: 35,
            x: 100,
            y: 40
        }
    }]
});
```

The output of the preceding code is shown in the following screenshot:

The stacked chart

Now, let's say you want to plot two values in each category in the column chart. You can either stack them or have two bar columns for each category. Let's modify our column chart example to render a stacked column chart. For this, we need an additional numeric field in the store, and we need to specify two fields for yField in the series. You can stack more than two fields, but for this example, we will stack only two. Take a look at the following code:

```
Ext.define('MyApp.model.Population', {
    extend: 'Ext.data.Model',
    fields: ['year', 'total', 'slaves']
});

Ext.define('MyApp.store.Population', {
    extend: 'Ext.data.Store',
    storeId: 'population',
    model: 'MyApp.model.Population',
    data: [
        { "year": "1790", "total": 3.9, "slaves": 0.7 },
        { "year": "1800", "total": 5.3, "slaves": 0.9 },
        { "year": "1810", "total": 7.2, "slaves": 1.2 },
        { "year": "1820", "total": 9.6, "slaves": 1.5 },
        { "year": "1830", "total": 12.9, "slaves": 2 },
        { "year": "1840", "total": 17, "slaves": 2.5 },
        { "year": "1850", "total": 23.2, "slaves": 3.2 },
        { "year": "1860", "total": 31.4, "slaves": 4 }
    ]
});

var store = Ext.create("MyApp.store.Population");

Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 500,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'cartesian',
        store: store,
        insetPadding: { top: 60, bottom: 20, left: 20, right: 40 },
        axes: [{
            type: 'numeric',
            position: 'left',
            grid: true,
            title: { text: 'Population in Millions', fontSize: 16 }
        }, {
            type: 'category',
            title: { text: 'Year', fontSize: 16 },
            position: 'bottom'
        }],
        series: [{
            type: 'bar',
            xField: 'year',
            yField: ['total', 'slaves']
        }],
        sprites: {
            type: 'text',
            text: 'United States Slaves Distribution 1790 to 1860',
            font: '20px Helvetica',
            width: 120,
            height: 35,
            x: 60,
            y: 40
        }
    }]
});
```

The output of the stacked column chart is shown here:

If you want to render multiple fields without stacking, you can simply set the stacked property of the series to false to get the following output:

There are so many options available in the chart.
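One option worth highlighting before going through the list is the axis label renderer, which lets you format tick labels without touching the underlying data. The following is a minimal hedged sketch, not code from the book: it reuses the MyApp.store.Population store from the basic column chart example and assumes a numeric axis accepts a renderer config (the article itself uses a renderer on a category axis later on; the exact renderer signature can vary between Ext JS versions).

```
// Hedged sketch: a numeric axis whose labels are formatted as millions.
// Assumes the MyApp.store.Population store defined earlier in this article.
Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 500,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'chart',
        store: Ext.create('MyApp.store.Population'),
        axes: [{
            type: 'numeric',
            position: 'left',
            grid: true,
            title: { text: 'Population', fontSize: 16 },
            // Format raw counts such as 76212168 as "76.2M" on the axis labels.
            renderer: function (label) {
                return (label / 1000000).toFixed(1) + 'M';
            }
        }, {
            type: 'category',
            position: 'bottom',
            title: { text: 'Year', fontSize: 16 }
        }],
        series: [{
            type: 'bar',
            xField: 'year',
            yField: ['population']
        }]
    }]
});
```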
Let's take a look at some of the commonly used options:

- tooltip: This can be added easily by setting a tooltip property in the series.
- legend: This can be rendered on any of the four sides of the chart by specifying the legend config.
- sprites: This can be an array if you want to specify multiple pieces of information, such as a header, a footer, and so on.

Here is the code for the same store configured with some advanced options:

```
Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 500,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'chart',
        legend: { docked: 'bottom' },
        insetPadding: { top: 60, bottom: 20, left: 20, right: 40 },
        store: store,
        axes: [{
            type: 'numeric',
            position: 'left',
            grid: true,
            title: { text: 'Population in Millions', fontSize: 16 },
            minimum: 0
        }, {
            type: 'category',
            title: { text: 'Year', fontSize: 16 },
            position: 'bottom'
        }],
        series: [{
            type: 'bar',
            xField: 'year',
            stacked: false,
            title: ['Total', 'Slaves'],
            yField: ['total', 'slaves'],
            tooltip: {
                trackMouse: true,
                style: 'background: #fff',
                renderer: function (storeItem, item) {
                    this.setHtml('In ' + storeItem.get('year') + ' ' + item.field +
                        ' population was ' + storeItem.get(item.field) + ' m');
                }
            }
        }],
        sprites: [{
            type: 'text',
            text: 'United States Slaves Distribution 1790 to 1860',
            font: '20px Helvetica',
            width: 120,
            height: 35,
            x: 60,
            y: 40
        }, {
            type: 'text',
            text: 'Source: http://www.wikipedia.org',
            fontSize: 10,
            x: 12,
            y: 440
        }]
    }]
});
```

The output with the tooltip, legend, and footer is shown here:

The 3D bar chart

If you simply change the type of the series to bar3d instead of bar, you'll get the 3D column chart, as shown in the following screenshot:

Area and line charts

Area and line charts are also cartesian charts.

The area chart

To render an area chart, simply replace the series in the previous example with the following code:

```
series: [{
    type: 'area',
    xField: 'year',
    stacked: false,
    title: ['Total', 'Slaves'],
    yField: ['total', 'slaves'],
    style: {
        stroke: "#94ae0a",
        fillOpacity: 0.6
    }
}]
```

The output of the preceding code is shown here:

Similar to the stacked column chart, you can have a stacked area chart as well by setting stacked to true in the series. If you set stacked to true in the preceding example, you'll get the following output:

Figure 7.1

The line chart

To get the line chart shown in Figure 7.1, use the following series config in the preceding example instead:

```
series: [{
    type: 'line',
    xField: 'year',
    title: ['Total'],
    yField: ['total']
}, {
    type: 'line',
    xField: 'year',
    title: ['Slaves'],
    yField: ['slaves']
}]
```

The pie chart

This is one of the most frequently used charts in many applications and reporting tools. Ext.chart.PolarChart (xtype: polar) should be used to render a pie chart.
The basic pie chart

Specify the type as pie, and specify the angleField and label to render a basic pie chart, as shown in the following code:

```
Ext.define('MyApp.store.Expense', {
    extend: 'Ext.data.Store',
    alias: 'store.expense',
    fields: ['cat', 'spent'],
    data: [
        { "cat": "Restaurant", "spent": 100 },
        { "cat": "Travel", "spent": 150 },
        { "cat": "Insurance", "spent": 500 },
        { "cat": "Rent", "spent": 1000 },
        { "cat": "Groceries", "spent": 400 },
        { "cat": "Utilities", "spent": 300 }
    ]
});

var store = Ext.create("MyApp.store.Expense");

Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 600,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'polar',
        legend: { docked: 'bottom' },
        insetPadding: { top: 100, bottom: 20, left: 20, right: 40 },
        store: store,
        series: [{
            type: 'pie',
            angleField: 'spent',
            label: { field: 'cat' },
            tooltip: {
                trackMouse: true,
                renderer: function (storeItem, item) {
                    var value = ((parseFloat(storeItem.get('spent') /
                        storeItem.store.sum('spent')) * 100.0).toFixed(2));
                    this.setHtml(storeItem.get('cat') + ': ' + value + '%');
                }
            }
        }]
    }]
});
```

The donut chart

Just by setting the donut property of the series in the preceding example to 40, you'll get the following chart. Here, donut is the percentage of the radius of the hole compared to the entire disk:

The 3D pie chart

In Ext JS 6, some improvements were made to the 3D pie chart. The 3D pie chart in Ext JS 6 now supports labels and configurable 3D aspects, such as thickness, distortion, and so on. Let's use the same model and store that were used in the pie chart example and create a 3D pie chart as follows:

```
Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 600,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'polar',
        legend: { docked: 'bottom' },
        insetPadding: { top: 100, bottom: 20, left: 80, right: 80 },
        store: store,
        series: [{
            type: 'pie3d',
            donut: 50,
            thickness: 70,
            distortion: 0.5,
            angleField: 'spent',
            label: { field: 'cat' },
            tooltip: {
                trackMouse: true,
                renderer: function (storeItem, item) {
                    var value = ((parseFloat(storeItem.get('spent') /
                        storeItem.store.sum('spent')) * 100.0).toFixed(2));
                    this.setHtml(storeItem.get('cat') + ': ' + value + '%');
                }
            }
        }]
    }]
});
```

The following image shows the output of the preceding code:

The expense analyzer – a sample project

Now that you have learned about the different kinds of charts available in Ext JS, let's use them to create a sample project called Expense Analyzer. The following screenshot shows the design of this sample project:

Let's use Sencha Cmd to scaffold our application. Run the following command in the terminal or command window:

```
sencha -sdk <path to SDK>/ext-6.0.0.415/ generate app EA ./expense-analyzer
```

Now, let's remove all the unwanted files and code and add some additional files to create this project. The final folder structure and some of the important files are shown in Figure 7.2. The complete source code is not given in this article; only some of the important files are shown, and in between, some less important code has been truncated. The complete source is available at https://github.com/ananddayalan/extjs-by-example-expense-analyzer.

Figure 7.2

Now, let's create the grid shown in the design. The following code is used to create the grid.
This List view extends from Ext.grid.Panel, uses the expense store for its data, and has three columns:

```
Ext.define('EA.view.main.List', {
    extend: 'Ext.grid.Panel',
    xtype: 'mainlist',
    maxHeight: 400,
    requires: ['EA.store.Expense'],
    title: 'Year to date expense by category',
    store: { type: 'expense' },
    columns: {
        defaults: { flex: 1 },
        items: [{
            text: 'Category',
            dataIndex: 'cat'
        }, {
            formatter: "date('F')",
            text: 'Month',
            dataIndex: 'date'
        }, {
            text: 'Spent',
            dataIndex: 'spent'
        }]
    }
});
```

Here, I have not used pagination. The maxHeight is used to limit the height of the grid, and this also enables the scroll bar, because we have more records than will fit in the given maximum height of the grid.

The following code creates the expense store used in the preceding example. This is a simple store with inline data. Here, we have not created a separate model and have added the fields directly in the store:

```
Ext.define('EA.store.Expense', {
    extend: 'Ext.data.Store',
    alias: 'store.expense',
    storeId: 'expense',
    fields: [{ name: 'date', type: 'date' }, 'cat', 'spent'],
    data: {
        items: [
            { "date": "1/1/2015", "cat": "Restaurant", "spent": 100 },
            { "date": "1/1/2015", "cat": "Travel", "spent": 22 },
            { "date": "1/1/2015", "cat": "Insurance", "spent": 343 },
            // Truncated code
        ]
    },
    proxy: {
        type: 'memory',
        reader: {
            type: 'json',
            rootProperty: 'items'
        }
    }
});
```

Next, let's create the bar chart shown in the design. In the bar chart, we will use another store called expensebyMonthStore, in which we'll populate data from the expense data store. The following 3D bar chart has two types of axes: numeric and category. We have used the month part of the date field as the category, and a renderer is used to render the month part of the date field:

```
Ext.define('EA.view.main.Bar', {
    extend: 'Ext.chart.CartesianChart',
    requires: ['Ext.chart.axis.Category', 'Ext.chart.series.Bar3D',
        'Ext.chart.axis.Numeric', 'Ext.chart.interactions.ItemHighlight'],
    xtype: 'mainbar',
    height: 500,
    padding: { top: 50, bottom: 20, left: 100, right: 100 },
    legend: { docked: 'bottom' },
    insetPadding: { top: 100, bottom: 20, left: 20, right: 40 },
    store: { type: 'expensebyMonthStore' },
    axes: [{
        type: 'numeric',
        position: 'left',
        grid: true,
        minimum: 0,
        title: { text: 'Spendings in $', fontSize: 16 }
    }, {
        type: 'category',
        position: 'bottom',
        title: { text: 'Month', fontSize: 16 },
        label: {
            font: 'bold Arial',
            rotate: { degrees: 300 }
        },
        renderer: function (date) {
            return ["Jan", "Feb", "Mar", "Apr", "May"][date.getMonth()];
        }
    }],
    series: [{
        type: 'bar3d',
        xField: 'date',
        stacked: false,
        title: ['Total'],
        yField: ['total']
    }],
    sprites: [{
        type: 'text',
        text: 'Expense by Month',
        font: '20px Helvetica',
        width: 120,
        height: 35,
        x: 60,
        y: 40
    }]
});
```

Now, let's create the ExpensebyMonth model and store used in the preceding bar chart view. This store holds the total amount spent in each month. This data is populated by grouping the expense store on the date field.
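Conceptually, all the ExpensebyMonth store needs is a month-to-total roll-up of the raw expense records. As a framework-free illustration of that idea (a hedged sketch with hypothetical sample data, not code from the project), the same aggregation can be expressed with a plain reduce:

```
// Hedged sketch: roll up raw expense records into one total per month.
// The sample records below are hypothetical; the project builds the real
// rows from its expense store instead.
var expenses = [
    { date: new Date(2015, 0, 1), cat: 'Restaurant', spent: 100 },
    { date: new Date(2015, 0, 1), cat: 'Travel', spent: 22 },
    { date: new Date(2015, 1, 1), cat: 'Rent', spent: 1000 }
];

// Sum the spent values per month index (0 = Jan, 1 = Feb, ...).
var totalsByMonth = expenses.reduce(function (totals, record) {
    var month = record.date.getMonth();
    totals[month] = (totals[month] || 0) + record.spent;
    return totals;
}, {});

// Convert the map into the { date, total } rows the chart store expects.
var rows = Object.keys(totalsByMonth).map(function (month) {
    return { date: new Date(2015, Number(month), 1), total: totalsByMonth[month] };
});

console.log(rows);
// e.g. [{ date: Jan 2015, total: 122 }, { date: Feb 2015, total: 1000 }]
```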
Take a look at how the data property is configured to populate the data:

```
Ext.define('MyApp.model.ExpensebyMonth', {
    extend: 'Ext.data.Model',
    fields: [{ name: 'date', type: 'date' }, 'total']
});

Ext.define('MyApp.store.ExpensebyMonth', {
    extend: 'Ext.data.Store',
    alias: 'store.expensebyMonthStore',
    model: 'MyApp.model.ExpensebyMonth',
    data: (function () {
        var data = [];
        var expense = Ext.createByAlias('store.expense');
        expense.group('date');
        var groups = expense.getGroups();
        groups.each(function (group) {
            data.push({
                date: group.config.groupKey,
                total: group.sum('spent')
            });
        });
        return data;
    })()
});
```

Then, the following code is used to generate the pie chart. This chart uses the expense store, but shows only one selected month of data at a time. A drop-down box is added to the main view to select the month, and a beforerender listener is used to filter the expense store so that it initially shows only the data for January:

```
Ext.define('EA.view.main.Pie', {
    extend: 'Ext.chart.PolarChart',
    requires: ['Ext.chart.series.Pie3D'],
    xtype: 'mainpie',
    height: 800,
    legend: { docked: 'bottom' },
    insetPadding: { top: 100, bottom: 20, left: 80, right: 80 },
    listeners: {
        beforerender: function () {
            var dateFilter = new Ext.util.Filter({
                filterFn: function (item) {
                    return item.data.date.getMonth() === 0;
                }
            });
            Ext.getStore('expense').addFilter(dateFilter);
        }
    },
    store: { type: 'expense' },
    series: [{
        type: 'pie3d',
        donut: 50,
        thickness: 70,
        distortion: 0.5,
        angleField: 'spent',
        label: { field: 'cat' }
    }]
});
```

So far, we have created the grid, the bar chart, the pie chart, and the stores required for this sample application. Now, we need to link them together in the main view. The following code shows the main view from the classic toolkit. The main view is simply a tab control that specifies which view to render for each tab:

```
Ext.define('EA.view.main.Main', {
    extend: 'Ext.tab.Panel',
    xtype: 'app-main',
    requires: [
        'Ext.plugin.Viewport',
        'Ext.window.MessageBox',
        'EA.view.main.MainController',
        'EA.view.main.List',
        'EA.view.main.Bar',
        'EA.view.main.Pie'
    ],
    controller: 'main',
    autoScroll: true,
    ui: 'navigation',
    // Truncated code
    items: [{
        title: 'Year to Date',
        iconCls: 'fa-bar-chart',
        items: [{
            html: '<h3>Your average expense per month is: ' +
                Ext.createByAlias('store.expensebyMonthStore').average('total') + '</h3>',
            height: 70
        }, {
            xtype: 'mainlist'
        }, {
            xtype: 'mainbar'
        }]
    }, {
        title: 'By Month',
        iconCls: 'fa-pie-chart',
        items: [{
            xtype: 'combo',
            value: 'Jan',
            fieldLabel: 'Select Month',
            store: ['Jan', 'Feb', 'Mar', 'Apr', 'May'],
            listeners: { select: 'onMonthSelect' }
        }, {
            xtype: 'mainpie'
        }]
    }]
});
```

Summary

In this article, we looked at the different kinds of charts available in Ext JS. We also created a simple sample project called Expense Analyzer and used some of the concepts you learned in this article.

Resources for Article:

Further resources on this subject: Ext JS 5 – an Introduction [article], Constructing Common UI Widgets [article], Static Data Management [article]


Creating test suites, specs and expectations in Jest

Packt
12 Aug 2015
7 min read
In this article by Artemij Fedosejev, the author of React.js Essentials, we will take a look at test suites, specs, and expectations. To write a test for JavaScript functions, you need a testing framework. Fortunately, Facebook built their own unit test framework for JavaScript called Jest. It is built on top of Jasmine, another well-known JavaScript test framework. If you're familiar with Jasmine, you'll find Jest's approach to testing very similar. However, I'll make no assumptions about your prior experience with testing frameworks and will discuss the basics first.

The fundamental idea of unit testing is that you test only one piece of functionality in your application, usually implemented by one function, and you test it in isolation, meaning that all the other parts of your application which that function depends on are not used by your tests. Instead, they are imitated by your tests. To imitate a JavaScript object is to create a fake one that simulates the behavior of the real object. In unit testing, the fake object is called a mock and the process of creating it is called mocking.

Jest automatically mocks dependencies when you're running your tests. Better yet, it automatically finds tests to execute in your repository. Let's take a look at an example. Create a directory called ./snapterest/source/js/utils/ and create a new file called TweetUtils.js within it, with the following contents:

```
function getListOfTweetIds(tweets) {
  return Object.keys(tweets);
}

module.exports.getListOfTweetIds = getListOfTweetIds;
```

The TweetUtils.js file is a module with the getListOfTweetIds() utility function for our application to use. Given an object with tweets, getListOfTweetIds() returns an array of tweet IDs. Using the CommonJS module pattern, we export this function:

```
module.exports.getListOfTweetIds = getListOfTweetIds;
```

Jest Unit Testing

Now let's write our first unit test with Jest. We'll be testing our getListOfTweetIds() function. Create a new directory: ./snapterest/source/js/utils/__tests__/. Jest will run any tests in any __tests__ directories that it finds within your project structure, so it's important to name your test directories __tests__. Create a TweetUtils-test.js file inside of __tests__:

```
jest.dontMock('../TweetUtils');

describe('Tweet utilities module', function () {
  it('returns an array of tweet ids', function () {
    var TweetUtils = require('../TweetUtils');
    var tweetsMock = {
      tweet1: {},
      tweet2: {},
      tweet3: {}
    };
    var expectedListOfTweetIds = ['tweet1', 'tweet2', 'tweet3'];
    var actualListOfTweetIds = TweetUtils.getListOfTweetIds(tweetsMock);
    expect(actualListOfTweetIds).toEqual(expectedListOfTweetIds);
  });
});
```

First we tell Jest not to mock our TweetUtils module:

```
jest.dontMock('../TweetUtils');
```

We do this because Jest will automatically mock modules returned by the require() function. In our test we're requiring the TweetUtils module:

```
var TweetUtils = require('../TweetUtils');
```

Without the jest.dontMock('../TweetUtils') call, Jest would return an imitation of our TweetUtils module instead of the real one. But in this case we actually need the real TweetUtils module, because that's what we're testing.

Creating test suites

Next we call a global Jest function, describe(). In our TweetUtils-test.js file we're not just creating a single test; instead, we're creating a suite of tests. A suite is a collection of tests that collectively test a bigger unit of functionality.
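Before looking at our example in detail, here is a minimal skeleton of a suite that groups several specs. This is a hedged sketch purely to illustrate the shape: getListOfTweetIds() exists in our module, while the second spec and the isValidTweetId() helper it mentions are hypothetical.

```
// Hedged sketch of a suite containing more than one spec.
describe('Tweet utilities module', function () {
  it('returns an array of tweet ids', function () {
    // ...expectations for getListOfTweetIds() go here...
  });

  it('recognises a valid tweet id', function () {
    // ...expectations for a hypothetical isValidTweetId() would go here...
  });
});
```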
For example, a suite can have multiple tests which test all the individual parts of a larger module. In our example, we have a TweetUtils module with a number of utility functions. In that situation we would create a suite for the TweetUtils module and then create tests for each individual utility function, like getListOfTweetIds().

describe defines a suite and takes two parameters:

- Suite name: the description of what is being tested: 'Tweet utilities module'.
- Suite implementation: the function that implements this suite.

In our example, the suite is:

```
describe('Tweet utilities module', function () {
  // Suite implementation goes here...
});
```

Defining specs

How do you create an individual test? In Jest, individual tests are called specs. They are defined by calling another global Jest function, it(). Just like describe(), it() takes two parameters:

- Spec name: the title that describes what is being tested by this spec: 'returns an array of tweet ids'.
- Spec implementation: the function that implements this spec.

In our example, the spec is:

```
it('returns an array of tweet ids', function () {
  // Spec implementation goes here...
});
```

Let's take a closer look at the implementation of our spec:

```
var TweetUtils = require('../TweetUtils');
var tweetsMock = {
  tweet1: {},
  tweet2: {},
  tweet3: {}
};
var expectedListOfTweetIds = ['tweet1', 'tweet2', 'tweet3'];
var actualListOfTweetIds = TweetUtils.getListOfTweetIds(tweetsMock);
expect(actualListOfTweetIds).toEqual(expectedListOfTweetIds);
```

This spec tests whether the getListOfTweetIds() method of our TweetUtils module returns an array of tweet IDs when given an object with tweets. First we import the TweetUtils module:

```
var TweetUtils = require('../TweetUtils');
```

Then we create a mock object that simulates the real tweets object:

```
var tweetsMock = {
  tweet1: {},
  tweet2: {},
  tweet3: {}
};
```

The only requirement for this mock object is to have tweet IDs as object keys. The values are not important, hence we choose empty objects. The key names are not important either, so we can name them tweet1, tweet2, and tweet3. This mock object doesn't fully simulate the real tweet object; its sole purpose is to simulate the fact that its keys are tweet IDs.

The next step is to create an expected list of tweet IDs:

```
var expectedListOfTweetIds = ['tweet1', 'tweet2', 'tweet3'];
```

We know what tweet IDs to expect because we've mocked a tweets object with the same IDs. The next step is to extract the actual tweet IDs from our mocked tweets object. For that we use getListOfTweetIds(), which takes the tweets object and returns an array of tweet IDs:

```
var actualListOfTweetIds = TweetUtils.getListOfTweetIds(tweetsMock);
```

We pass tweetsMock to that method and store the result in actualListOfTweetIds. This variable is named actualListOfTweetIds because this list of tweet IDs is produced by the actual getListOfTweetIds() function that we're testing.

Setting Expectations

The final step introduces us to a new and important concept:

```
expect(actualListOfTweetIds).toEqual(expectedListOfTweetIds);
```

Let's think about the process of testing. We need to take an actual value produced by the method that we're testing, getListOfTweetIds(), and match it against the expected value that we know in advance. The result of that match will determine whether our test has passed or failed.
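The matcher chosen for that comparison matters. As a quick hedged illustration (not taken from the book) of why this article uses toEqual() rather than toBe() when comparing arrays: toBe() checks reference identity, while toEqual() compares structure.

```
// Hedged sketch: reference identity versus structural equality in Jest.
it('illustrates the difference between toBe and toEqual', function () {
  var a = ['tweet1', 'tweet2'];
  var b = ['tweet1', 'tweet2'];

  expect(a).toEqual(b);   // passes: same structure and contents
  expect(a).toBe(a);      // passes: the very same array instance
  // expect(a).toBe(b);   // would fail: two distinct array instances
});
```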
The reason why we can guess what getListOfTweetIds() will return in advance is that we've prepared the input for it; that's our mock object:

```
var tweetsMock = {
  tweet1: {},
  tweet2: {},
  tweet3: {}
};
```

So we can expect the following output from calling TweetUtils.getListOfTweetIds(tweetsMock):

```
['tweet1', 'tweet2', 'tweet3']
```

But because something can go wrong inside getListOfTweetIds(), we cannot guarantee this result; we can only expect it. That's why we need to create an expectation. In Jest, an expectation is built using expect(), which takes an actual value, for example actualListOfTweetIds:

```
expect(actualListOfTweetIds)
```

Then we chain it with a matcher function that compares the actual value with the expected value and tells Jest whether the expectation was met:

```
expect(actualListOfTweetIds).toEqual(expectedListOfTweetIds);
```

In our example we use the toEqual() matcher function to compare two arrays. The Jest documentation provides a list of all the built-in matcher functions.

And that's how you create a spec. A spec contains one or more expectations, and each expectation tests the state of your code. A spec can be either a passing spec or a failing spec: it is a passing spec only when all of its expectations are met; otherwise it's a failing spec.

Well done, you've written your first testing suite with a single spec that has one expectation. Continue reading React.js Essentials to continue your journey into testing.


Creating Functions and Operations

Packt
10 Aug 2015
18 min read
In this article by Alex Libby, author of the book Sass Essentials, we will learn how to use operators or functions to construct a whole site theme from just a handful of colors, or defining font sizes for the entire site from a single value. You will learn how to do all these things in this article. Okay, so let's get started! (For more resources related to this topic, see here.) Creating values using functions and operators Imagine a scenario where you're creating a masterpiece that has taken days to put together, with a stunning choice of colors that has taken almost as long as the building of the project and yet, the client isn't happy with the color choice. What to do? At this point, I'm sure that while you're all smiles to the customer, you'd be quietly cursing the amount of work they've just landed you with, this late on a Friday. Sound familiar? I'll bet you scrap the colors and go back to poring over lots of color combinations, right? It'll work, but it will surely take a lot more time and effort. There's a better way to achieve this; instead of creating or choosing lots of different colors, we only need to choose one and create all of the others automatically. How? Easy! When working with Sass, we can use a little bit of simple math to build our color palette. One of the key tenets of Sass is its ability to work out values dynamically, using nothing more than a little simple math; we could define font sizes from H1 to H6 automatically, create new shades of colors, or even work out the right percentages to use when creating responsive sites! We will take a look at each of these examples throughout the article, but for now, let's focus on the principles of creating our colors using Sass. Creating colors using functions We can use simple math and functions to create just about any type of value, but colors are where these two really come into their own. The great thing about Sass is that we can work out the hex value for just about any color we want to, from a limited range of colors. This can easily be done using techniques such as adding two values together, or subtracting one value from another. To get a feel of how the color operators work, head over to the official documentation at http://sass-lang.com/documentation/file.SASS_REFERENCE.html#color_operations—it is worth reading! Nothing wrong with adding or subtracting values—it's a perfectly valid option, and will result in a valid hex code when compiled. But would you know that both values are actually deep shades of blue? Therein lies the benefit of using functions; instead of using math operators, we can simply say this: p { color: darken(#010203, 10%); } This, I am sure you will agree, is easier to understand as well as being infinitely more readable! The use of functions opens up a world of opportunities for us. We can use any one of the array of functions such as lighten(), darken(), mix(), or adjust-hue() to get a feel of how easy it is to get the values. If we head over to http://jackiebalzer.com/color, we can see that the author has exploded a number of Sass (and Compass—we will use this later) functions, so we can see what colors are displayed, along with their numerical values, as soon as we change the initial two values. Okay, we could play with the site ad infinitum, but I feel a demo coming on—to explore the effects of using the color functions to generate new colors. Let's construct a simple demo. 
For this exercise, we will dig up a copy of the colorvariables demo and modify it so that we're only assigning one color variable, not six. For this exercise, I will assume you are using Koala to compile the code. Okay, let's make a start: We'll start with opening up a copy of colorvariables.scss in your favorite text editor and removing lines 1 to 15 from the start of the file. Next, add the following lines, so that we should be left with this at the start of the file: $darkRed: #a43; $white: #fff; $black: #000;   $colorBox1: $darkRed; $colorBox2: lighten($darkRed, 30%); $colorBox3: adjust-hue($darkRed, 35%); $colorBox4: complement($darkRed); $colorBox5: saturate($darkRed, 30%); $colorBox6: adjust-color($darkRed, $green: 25); Save the file as colorfunctions.scss. We need a copy of the markup file to go with this code, so go ahead and extract a copy of colorvariables.html from the code download, saving it as colorfunctions.html in the root of our project area. Don't forget to change the link for the CSS file within to colorfunctions.css! Fire up Koala, then drag and drop colorfunctions.scss from our project area over the main part of the application window to add it to the list: Right-click on the file name and select Compile, and then wait for it to show Success in a green information box. If we preview the results of our work in a browser, we should see the following boxes appear: At this point, we have a working set of colors—granted, we might have to work a little on making sure that they all work together. But the key point here is that we have only specified one color, and that the others are all calculated automatically through Sass. Now that we are only defining one color by default, how easy is it to change the colors in our code? Well, it is a cinch to do so. Let's try it out using the help of the SassMeister playground. Changing the colors in use We can easily change the values used in the code, and continue to refresh the browser after each change. However, this isn't a quick way to figure out which colors work; to get a quicker response, there is an easier way: use the online Sass playground at http://www.sassmeister.com. This is the perfect way to try out different colors—the site automatically recompiles the code and updates the result as soon as we make a change. Try copying the HTML and SCSS code into the play area to view the result. The following screenshot shows the same code used in our demo, ready for us to try using different calculations: All images work on the principle that we take a base color (in this case, $dark-blue, or #a43), then adjust the color either by a percentage or a numeric value. When compiled, Sass calculates what the new value should be and uses this in the CSS. Take, for example, the color used for #box6, which is a dark orange with a brown tone, as shown in this screenshot: To get a feel of some of the functions that we can use to create new colors (or shades of existing colors), take a look at the main documentation at http://sass-lang.com/documentation/Sass/Script/Functions.html, or https://www.makerscabin.com/web/sass/learn/colors. These sites list a variety of different functions that we can use to create our masterpiece. We can also extend the functions that we have in Sass with the help of custom functions, such as the toolbox available at https://github.com/at-import/color-schemer—this may be worth a look. In our demo, we used a dark red color as our base. 
If we're ever stuck for ideas on colors, or want to get the right HEX, RGB(A), or even HSL(A) codes, then there are dozens of sites online that will give us these values. Here are a couple of them that you can try: HSLa Explorer, by Chris Coyier—this is available at https://css-tricks.com/examples/HSLaExplorer/. HSL Color Picker by Brandon Mathis—this is available at http://hslpicker.com/. If we know the name, but want to get a Sass value, then we can always try the list of 1,500+ colors at https://github.com/FearMediocrity/sass-color-palettes/blob/master/colors.scss. What's more, the list can easily be imported into our CSS, although it would make better sense to simply copy the chosen values into our Sass file, and compile from there instead. Mixing colors The one thing that we've not discussed, but is equally useful is that we are not limited to using functions on their own; we can mix and match any number of functions to produce our colors. A great way to choose colors, and get the appropriate blend of functions to use, is at http://sassme.arc90.com/. Using the available sliders, we can choose our color, and get the appropriate functions to use in our Sass code. The following image shows how: In most cases, we will likely only need to use two functions (a mix of darken and adjust hue, for example); if we are using more than two–three functions, then we should perhaps rethink our approach! In this case, a better alternative is to use Sass's mix() function, as follows: $white: #fff; $berry: hsl(267, 100%, 35%); p { mix($white, $berry, 0.7) } …which will give the following valid CSS: p { color: #5101b3; } This is a useful alternative to use in place of the command we've just touched on; after all, would you understand what adjust_hue(desaturate(darken(#db4e29, 2), 41), 67) would give as a color? Granted, it is something of an extreme calculation, nonetheless, it is technically valid. If we use mix() instead, it matches more closely to what we might do, for example, when mixing paint. After all, how else would we lighten its color, if not by adding a light-colored paint? Okay, let's move on. What's next? I hear you ask. Well, so far we've used core Sass for all our functions, but it's time to go a little further afield. Let's take a look at how you can use external libraries to add extra functionality. In our next demo, we're going to introduce using Compass, which you will often see being used with Sass. Using an external library So far, we've looked at using core Sass functions to produce our colors—nothing wrong with this; the question is, can we take things a step further? Absolutely, once we've gained some experience with using these functions, we can introduce custom functions (or helpers) that expand what we can do. A great library for this purpose is Compass, available at http://www.compass-style.org; we'll make use of this to change the colors which we created from our earlier boxes demo, in the section, Creating colors using functions. Compass is a CSS authoring framework, which provides extra mixins and reusable patterns to add extra functionality to Sass. In our demo, we're using shade(), which is one of the several color helpers provided by the Compass library. Let's make a start: We're using Compass in this demo, so we'll begin with installing the library. To do this, fire up Command Prompt, then navigate to our project area. 
We need to make sure that our installation RubyGems system software is up to date, so at Command Prompt, enter the following, and then press Enter: gem update --system Next, we're installing Compass itself—at the prompt, enter this command, and then press Enter: gem install compass Compass works best when we get it to create a project shell (or template) for us. To do this, first browse to http://www.compass-style.org/install, and then enter the following in the Tell us about your project… area: Leave anything in grey text as blank. This produces the following commands—enter each at Command Prompt, pressing Enter each time: Navigate back to Command Prompt. We need to compile our SCSS code, so go ahead and enter this command at the prompt (or copy and paste it), then press Enter: compass watch –sourcemap Next, extract a copy of the colorlibrary folder from the code download, and save it to the project area. In colorlibrary.scss, comment out the existing line for $backgrd_box6_color, and add the following immediately below it: $backgrd_box6_color: shade($backgrd_box5_color, 25%); Save the changes to colorlibrary.scss. If all is well, Compass's watch facility should kick in and recompile the code automatically. To verify that this has been done, look in the css subfolder of the colorlibrary folder, and you should see both the compiled CSS and the source map files present. If you find Compass compiles files in unexpected folders, then try using the following command to specify the source and destination folders when compiling: compass watch --sass-dir sass --css-dir css If all is well, we will see the boxes, when previewing the results in a browser window, as in the following image. Notice how Box 6 has gone a nice shade of deep red (if not almost brown)? To really confirm that all the changes have taken place as required, we can fire up a DOM inspector such as Firebug; a quick check confirms that the color has indeed changed: If we explore even further, we can see that the compiled code shows that the original line for Box 6 has been commented out, and that we're using the new function from the Compass helper library: This is a great way to push the boundaries of what we can do when creating colors. To learn more about using the Compass helper functions, it's worth exploring the official documentation at http://compass-style.org/reference/compass/helpers/colors/. We used the shade() function in our code, which darkens the color used. There is a key difference to using something such as darken() to perform the same change. To get a feel of the difference, take a look at the article on the CreativeBloq website at http://www.creativebloq.com/css3/colour-theming-sass-and-compass-6135593, which explains the difference very well. The documentation is a little lacking in terms of how to use the color helpers; the key is not to treat them as if they were normal mixins or functions, but to simply reference them in our code. To explore more on how to use these functions, take a look at the article by Antti Hiljá at http://clubmate.fi/how-to-use-the-compass-helper-functions/. We can, of course, create mixins to create palettes—for a more complex example, take a look at http://www.zingdesign.com/how-to-generate-a-colour-palette-with-compass/ to understand how such a mixin can be created using Compass. Okay, let's move on. So far, we've talked about using functions to manipulate colors; the flip side is that we are likely to use operators to manipulate values such as font sizes. 
For now, let's change tack and take a look at creating new values for changing font sizes. Changing font sizes using operators We already talked about using functions to create practically any value. Well, we've seen how to do it with colors; we can apply similar principles to creating font sizes too. In this case, we set a base font size (in the same way that we set a base color), and then simply increase or decrease font sizes as desired. In this instance, we won't use functions, but instead, use standard math operators, such as add, subtract, or divide. When working with these operators, there are a couple of points to remember: Sass math functions preserve units—this means we can't work on numbers with different units, such as adding a px value to a rem value, but can work with numbers that can be converted to the same format, such as inches to centimeters If we multiply two values with the same units, then this will produce square units (that is, 10px * 10px == 100px * px). At the same time, px * px will throw an error as it is an invalid unit in CSS. There are some quirks when working with / as a division operator —in most instances, it is normally used to separate two values, such as defining a pair of font size values. However, if the value is surrounded in parentheses, used as a part of another arithmetic expression, or is stored in a variable, then this will be treated as a division operator. For full details, it is worth reading the relevant section in the official documentation at http://sass-lang.com/documentation/file.Sass_REFERENCE.html#division-and-slash. With these in mind, let's create a simple demo—a perfect use for Sass is to automatically work out sizes from H1 through to H6. We could just do this in a simple text editor, but this time, let's break with tradition and build our demo directly into a session on http://www.sassmeister.com. We can then play around with the values set, and see the effects of the changes immediately. If we're happy with the results of our work, we can copy the final version into a text editor and save them as standard SCSS (or CSS) files. Let's begin by browsing to http://www.sassmeister.com, and adding the following HTML markup window: <html> <head>    <meta charset="utf-8" />    <title>Demo: Assigning colors using variables</title>    <link rel="stylesheet" type="text/css" href="css/     colorvariables.css"> </head> <body>    <h1>The cat sat on the mat</h1>    <h2>The cat sat on the mat</h2>    <h3>The cat sat on the mat</h3>    <h4>The cat sat on the mat</h4>    <h5>The cat sat on the mat</h5>    <h6>The cat sat on the mat</h6> </body> </html> Next, add the following to the SCSS window—we first set a base value of 3.0, followed by a starting color of #b26d61, or a dark, moderate red: $baseSize: 3.0; $baseColor: #b26d61; We need to add our H1 to H6 styles. The rem mixin was created by Chris Coyier, at https://css-tricks.com/snippets/css/less-mixin-for-rem-font-sizing/. 
We first set the font size, followed by setting the font color, using either the base color set earlier, or a function to produce a different shade: h1 { font-size: $baseSize; color: $baseColor; }   h2 { font-size: ($baseSize - 0.2); color: darken($baseColor, 20%); }   h3 { font-size: ($baseSize - 0.4); color: lighten($baseColor, 10%); }   h4 { font-size: ($baseSize - 0.6); color: saturate($baseColor, 20%); }   h5 { font-size: ($baseSize - 0.8); color: $baseColor - 111; }   h6 { font-size: ($baseSize - 1.0); color: rgb(red($baseColor) + 10, 23, 145); } SassMeister will automatically compile the code to produce a valid CSS, as shown in this screenshot: Try changing the base size of 3.0 to a different value—using http://www.sassmeister.com, we can instantly see how this affects the overall size of each H value. Note how we're multiplying the base variable by 10 to set the pixel value, or simply using the value passed to render each heading. In each instance, we can concatenate the appropriate unit using a plus (+) symbol. We then subtract an increasing value from $baseSize, before using this value as the font size for the relevant H value. You can see a similar example of this by Andy Baudoin as a CodePen, at http://codepen.io/baudoin/pen/HdliD/. He makes good use of nesting to display the color and strength of shade. Note that it uses a little JavaScript to add the text of the color that each line represents, and can be ignored; it does not affect the Sass used in the demo. The great thing about using a site such SassMeister is that we can play around with values and immediately see the results. For more details on using number operations in Sass, browse to the official documentation, which is at http://sass-lang.com/documentation/file.Sass_REFERENCE.html#number_operations. Okay, onwards we go. Let's turn our attention to creating something a little more substantial; we're going to create a complete site theme using the power of Sass and a few simple calculations. Summary Phew! What a tour! One of the key concepts of Sass is the use of functions and operators to create values, so let's take a moment to recap what we have covered throughout this article. We kicked off with a look at creating color values using functions, before discovering how we can mix and match different functions to create different shades, or using external libraries to add extra functionality to Sass. We then moved on to take a look at another key use of functions, with a look at defining different font sizes, using standard math operators. Resources for Article: Further resources on this subject: Nesting, Extend, Placeholders, and Mixins [article] Implementation of SASS [article] Constructing Common UI Widgets [article]


REST APIs for social network data using py2neo

Packt
14 Jul 2015
20 min read
In this article, written by Sumit Gupta, author of the book Building Web Applications with Python and Neo4j, we will discuss and develop RESTful APIs for performing CRUD and search operations over our social network data, using the Flask-RESTful extension and the py2neo extension Object-Graph Model (OGM). Let's move forward to first quickly talk about the OGM and then develop full-fledged REST APIs over our social network data.

(For more resources related to this topic, see here.)

ORM for graph databases: py2neo – OGM

We discussed py2neo in Chapter 4, Getting Python and Neo4j to Talk Py2neo. In this section, we will talk about one of the py2neo extensions that provides high-level APIs for dealing with the underlying graph database as objects and their relationships.

Object-Graph Mapping (http://py2neo.org/2.0/ext/ogm.html) is one of the popular extensions of py2neo and provides the mapping of Neo4j graphs in the form of objects and relationships. It provides functionality and features similar to the Object Relational Model (ORM) available for relational databases.

py2neo.ext.ogm.Store(graph) is the base class which exposes all operations with respect to graph data models. The following are important methods of Store which we will be using in the upcoming section for mutating our social network data:

- Store.delete(subj): Deletes a node from the underlying graph along with its associated relationships. subj is the entity that needs to be deleted. It raises an exception in case the provided entity is not linked to the server.
- Store.load(cls, node): Loads the data from the database node into cls, which is the entity defined by the data model.
- Store.load_related(subj, rel_type, cls): Loads all the nodes related to subj by the relationship defined by rel_type into cls and then returns the cls object.
- Store.load_indexed(index_name, key, value, cls): Queries the legacy index, loads all the nodes that are mapped by the key-value pair, and returns the associated object.
- Store.relate(subj, rel_type, obj, properties=None): Defines the relationship between two nodes, where subj and obj are two nodes connected by rel_type. By default, all relationships point towards the right node.
- Store.save(subj, node=None): Saves and creates a given entity/node, subj, into the graph database. The second argument is of type Node; if it is given, a new node will not be created and the already existing node will be changed instead.
- Store.save_indexed(index_name, key, value, subj): Saves the given entity into the graph and also creates an entry in the given index for future reference.

Refer to http://py2neo.org/2.0/ext/ogm.html#py2neo.ext.ogm.Store for the complete list of methods exposed by the Store class. Let's move on to the next section, where we will use the OGM for mutating our social network data model.

OGM supports Neo4j version 1.9, so the features of Neo4j 2.0 and above, such as labels, are not supported.

Social network application with Flask-RESTful and OGM

In this section, we will develop a full-fledged application for mutating our social network data and will also talk about the basics of Flask-RESTful and OGM.

Creating object model

Perform the following steps to create the object model and the CRUD/search functions for our social network data: Our social network data contains two kinds of entities—Person and Movie.
So as a first step let's create a package model and within the model package let's define a module SocialDataModel.py with two classes—Person and Movie: class Person(object):    def __init__(self, name=None,surname=None,age=None,country=None):        self.name=name        self.surname=surname        self.age=age        self.country=country   class Movie(object):    def __init__(self, movieName=None):        self.movieName=movieName Next, let's define another package operations and two python modules ExecuteCRUDOperations.py and ExecuteSearchOperations.py. The ExecuteCRUDOperations module will contain the following three classes: DeleteNodesRelationships: It will contain one method each for deleting People nodes and Movie nodes and in the __init__ method, we will establish the connection to the graph database. class DeleteNodesRelationships(object):    '''    Define the Delete Operation on Nodes    '''    def __init__(self,host,port,username,password):        #Authenticate and Connect to the Neo4j Graph Database        py2neo.authenticate(host+':'+port, username, password)        graph = Graph('http://'+host+':'+port+'/db/data/')        store = Store(graph)        #Store the reference of Graph and Store.        self.graph=graph        self.store=store      def deletePersonNode(self,node):        #Load the node from the Neo4j Legacy Index cls = self.store.load_indexed('personIndex', 'name', node.name, Person)          #Invoke delete method of store class        self.store.delete(cls[0])      def deleteMovieNode(self,node):        #Load the node from the Neo4j Legacy Index cls = self.store.load_indexed('movieIndex',   'name',node.movieName, Movie)        #Invoke delete method of store class            self.store.delete(cls[0]) Deleting nodes will also delete the associated relationships, so there is no need to have functions for deleting relationships. Nodes without any relationship do not make much sense for many business use cases, especially in a social network, unless there is a specific need or an exceptional scenario. UpdateNodesRelationships: It will contain one method each for updating People nodes and Movie nodes and, in the __init__ method, we will establish the connection to the graph database. 
class UpdateNodesRelationships(object):    '''      Define the Update Operation on Nodes    '''      def __init__(self,host,port,username,password):        #Write code for connecting to server      def updatePersonNode(self,oldNode,newNode):        #Get the old node from the Index        cls = self.store.load_indexed('personIndex', 'name', oldNode.name, Person)        #Copy the new values to the Old Node        cls[0].name=newNode.name        cls[0].surname=newNode.surname        cls[0].age=newNode.age        cls[0].country=newNode.country        #Delete the Old Node form Index        self.store.delete(cls[0])       #Persist the updated values again in the Index        self.store.save_unique('personIndex', 'name', newNode.name, cls[0])      def updateMovieNode(self,oldNode,newNode):          #Get the old node from the Index        cls = self.store.load_indexed('movieIndex', 'name', oldNode.movieName, Movie)        #Copy the new values to the Old Node        cls[0].movieName=newNode.movieName        #Delete the Old Node form Index        self.store.delete(cls[0])        #Persist the updated values again in the Index        self.store.save_ unique('personIndex', 'name', newNode.name, cls[0]) CreateNodesRelationships: This class will contain methods for creating People and Movies nodes and relationships and will then further persist them to the database. As with the other classes/ module, it will establish the connection to the graph database in the __init__ method: class CreateNodesRelationships(object):    '''    Define the Create Operation on Nodes    '''    def __init__(self,host,port,username,password):        #Write code for connecting to server    '''    Create a person and store it in the Person Dictionary.    Node is not saved unless save() method is invoked. Helpful in bulk creation    '''    def createPerson(self,name,surName=None,age=None,country=None):        person = Person(name,surName,age,country)        return person      '''    Create a movie and store it in the Movie Dictionary.    Node is not saved unless save() method is invoked. Helpful in bulk creation    '''    def createMovie(self,movieName):        movie = Movie(movieName)        return movie      '''    Create a relationships between 2 nodes and invoke a local method of Store class.    Relationship is not saved unless Node is saved or save() method is invoked.    '''    def createFriendRelationship(self,startPerson,endPerson):        self.store.relate(startPerson, 'FRIEND', endPerson)      '''    Create a TEACHES relationships between 2 nodes and invoke a local method of Store class.    Relationship is not saved unless Node is saved or save() method is invoked.    '''    def createTeachesRelationship(self,startPerson,endPerson):        self.store.relate(startPerson, 'TEACHES', endPerson)    '''    Create a HAS_RATED relationships between 2 nodes and invoke a local method of Store class.    Relationship is not saved unless Node is saved or save() method is invoked.    '''    def createHasRatedRelationship(self,startPerson,movie,ratings):      self.store.relate(startPerson, 'HAS_RATED', movie,{'ratings':ratings})    '''    Based on type of Entity Save it into the Server/ database    '''    def save(self,entity,node):        if(entity=='person'):            self.store.save_unique('personIndex', 'name', node.name, node)        else:            self.store.save_unique('movieIndex','name',node.movieName,node) Next we will define other Python module operations, ExecuteSearchOperations.py. 
This module will define two classes, each containing one method for searching Person and Movie node and of-course the __init__ method for establishing a connection with the server: class SearchPerson(object):    '''    Class for Searching and retrieving the the People Node from server    '''      def __init__(self,host,port,username,password):        #Write code for connecting to server      def searchPerson(self,personName):        cls = self.store.load_indexed('personIndex', 'name', personName, Person)        return cls;   class SearchMovie(object):    '''    Class for Searching and retrieving the the Movie Node from server    '''    def __init__(self,host,port,username,password):        #Write code for connecting to server      def searchMovie(self,movieName):        cls = self.store.load_indexed('movieIndex', 'name', movieName, Movie)        return cls; We are done with our data model and the utility classes that will perform the CRUD and search operation over our social network data using py2neo OGM. Now let's move on to the next section and develop some REST services over our data model. Creating REST APIs over data models In this section, we will create and expose REST services for mutating and searching our social network data using the data model created in the previous section. In our social network data model, there will be operations on either the Person or Movie nodes, and there will be one more operation which will define the relationship between Person and Person or Person and Movie. So let's create another package service and define another module MutateSocialNetworkDataService.py. In this module, apart from regular imports from flask and flask_restful, we will also import classes from our custom packages created in the previous section and create objects of model classes for performing CRUD and search operations. Next we will define the different classes or services which will define the structure of our REST Services. The PersonService class will define the GET, POST, PUT, and DELETE operations for searching, creating, updating, and deleting the Person nodes. 
Next, we will define the different classes, or services, that give our REST API its structure. The PersonService class defines the GET, POST, PUT, and DELETE operations for searching, creating, updating, and deleting the Person nodes:

class PersonService(Resource):
    '''
    Defines the operations for the Person entity
    '''
    #example - GET http://localhost:5000/person/Bradley
    def get(self, name):
        node = searchPerson.searchPerson(name)
        #Convert it into JSON and return it back
        return jsonify(name=node[0].name,surName=node[0].surname,age=node[0].age,country=node[0].country)

    #POST http://localhost:5000/person
    #{"name": "Bradley","surname": "Green","age": "24","country": "US"}
    def post(self):
        jsonData = request.get_json(cache=False)
        attr={}
        for key in jsonData:
            attr[key]=jsonData[key]
            print(key,' = ',jsonData[key])
        person = createOperation.createPerson(attr['name'],attr['surname'],attr['age'],attr['country'])
        createOperation.save('person',person)
        return jsonify(result='success')

    #PUT http://localhost:5000/person/Bradley
    #{"name": "Bradley1","surname": "Green","age": "24","country": "US"}
    def put(self,name):
        oldNode = searchPerson.searchPerson(name)
        jsonData = request.get_json(cache=False)
        attr={}
        for key in jsonData:
            attr[key] = jsonData[key]
            print(key,' = ',jsonData[key])
        newNode = Person(attr['name'],attr['surname'],attr['age'],attr['country'])
        updateOperation.updatePersonNode(oldNode[0],newNode)
        return jsonify(result='success')

    #DELETE http://localhost:5000/person/Bradley1
    def delete(self,name):
        node = searchPerson.searchPerson(name)
        deleteOperation.deletePersonNode(node[0])
        return jsonify(result='success')

The MovieService class defines the GET, POST, and DELETE operations for searching, creating, and deleting the Movie nodes. This service does not support the modification of Movie nodes because, once a Movie node is defined, it does not change in our data model. The Movie service is otherwise similar to our Person service and leverages our data model for performing the various operations; a minimal sketch of it follows.
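The article describes MovieService without listing its code, so the following is only an assumed sketch modeled on PersonService above; in particular, the deleteMovieNode() helper is hypothetical and stands in for whatever method your delete-operations class provides:

# An assumed sketch of MovieService, modeled on the PersonService class above
class MovieService(Resource):
    '''
    Defines the operations for the Movie entity
    '''
    #example - GET http://localhost:5000/movie/Avengers
    def get(self, movieName):
        node = searchMovie.searchMovie(movieName)
        return jsonify(movieName=node[0].movieName)

    #POST http://localhost:5000/movie
    #{"movieName": "Avengers"}
    def post(self):
        jsonData = request.get_json(cache=False)
        movie = createOperation.createMovie(jsonData['movieName'])
        createOperation.save('movie', movie)
        return jsonify(result='success')

    #DELETE http://localhost:5000/movie/Avengers
    def delete(self, movieName):
        node = searchMovie.searchMovie(movieName)
        #Hypothetical helper, analogous to deletePersonNode()
        deleteOperation.deleteMovieNode(node[0])
        return jsonify(result='success')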
The RelationshipService class only defines POST, which creates a relationship between a person and another given entity, which can be either another Person or a Movie. The following is the structure of the POST method:

    '''
    Assuming that the given nodes are already created, this operation
    will associate a Person node either with another Person node or with a Movie node.

    Request for defining a relationship between two persons:
        POST http://localhost:5000/relationship/person/Bradley
        {"entity_type":"person","person.name":"Matthew","relationship":"FRIEND"}
    Request for defining a relationship between a Person and a Movie:
        POST http://localhost:5000/relationship/person/Bradley
        {"entity_type":"movie","movie.movieName":"Avengers","relationship":"HAS_RATED",
         "relationship.ratings":"4"}
    '''
    def post(self, entity,name):
        jsonData = request.get_json(cache=False)
        attr={}
        for key in jsonData:
            attr[key]=jsonData[key]
            print(key,' = ',jsonData[key])

        if(entity == 'person'):
            startNode = searchPerson.searchPerson(name)
            if(attr['entity_type']=='movie'):
                endNode = searchMovie.searchMovie(attr['movie.movieName'])
                createOperation.createHasRatedRelationship(startNode[0], endNode[0], attr['relationship.ratings'])
                createOperation.save('person', startNode[0])
            elif (attr['entity_type']=='person' and attr['relationship']=='FRIEND'):
                endNode = searchPerson.searchPerson(attr['person.name'])
                createOperation.createFriendRelationship(startNode[0], endNode[0])
                createOperation.save('person', startNode[0])
            elif (attr['entity_type']=='person' and attr['relationship']=='TEACHES'):
                endNode = searchPerson.searchPerson(attr['person.name'])
                createOperation.createTeachesRelationship(startNode[0], endNode[0])
                createOperation.save('person', startNode[0])
        else:
            raise HTTPException("Value is not Valid")

        return jsonify(result='success')

At the end, we will define our __main__ block, which binds our services to specific URLs and brings up our application:

if __name__ == '__main__':
    api.add_resource(PersonService,'/person','/person/<string:name>')
    api.add_resource(MovieService,'/movie','/movie/<string:movieName>')
    api.add_resource(RelationshipService,'/relationship','/relationship/<string:entity>/<string:name>')
    webapp.run(debug=True)

And we are done! Execute MutateSocialNetworkDataService.py as a regular Python module, and your REST-based services are up and running. Users of this app can use any REST-based client, such as SoapUI, to execute the various REST services and perform CRUD and search operations. Follow the comments provided in the code samples for the format of each request/response; a short scripted example follows.
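As an illustration of how a client might drive these endpoints, the short script below uses the third-party requests library; the payload values are examples only, and the host and port assume the default Flask development server started above:

# An illustrative client session against the services above (payloads are examples only)
import requests

BASE = 'http://localhost:5000'

# Create two people and a movie
requests.post(BASE + '/person',
              json={"name": "Bradley", "surname": "Green", "age": "24", "country": "US"})
requests.post(BASE + '/person',
              json={"name": "Matthew", "surname": "Cooper", "age": "36", "country": "US"})
requests.post(BASE + '/movie', json={"movieName": "Avengers"})

# Make the two people friends, and have Bradley rate the movie
requests.post(BASE + '/relationship/person/Bradley',
              json={"entity_type": "person", "person.name": "Matthew",
                    "relationship": "FRIEND"})
requests.post(BASE + '/relationship/person/Bradley',
              json={"entity_type": "movie", "movie.movieName": "Avengers",
                    "relationship": "HAS_RATED", "relationship.ratings": "4"})

# Read a person back
print(requests.get(BASE + '/person/Bradley').json())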
In this section, we created and exposed REST-based services using Flask, Flask-RESTful, and the py2neo OGM, and performed CRUD and search operations over our social network data model.

Using Neomodel in a Django app

In this section, we will talk about the integration of Django and Neomodel. Django is a powerful, robust, and scalable Python-based web application development framework. It is built on the Model-View-Controller (MVC) design pattern, which lets developers design and develop scalable, enterprise-grade applications in very little time. We will not go into the details of Django as a web framework but will assume that you have a basic understanding of Django and some hands-on experience in developing web-based, database-driven applications. Visit https://docs.djangoproject.com/en/1.7/ if you do not have any prior knowledge of Django.

Django provides various signals, or triggers, that fire on particular events and can be used to invoke user-defined functions. The framework emits signals such as pre_save, post_save, pre_delete, post_delete, and a few more whenever modifications are requested to the underlying application data model. All the functions attached to pre_ signals are executed before the requested modifications are applied to the data model, and the functions attached to post_ signals are triggered after the modifications have been applied. This is where we hook in Neomodel: we capture these events and invoke our custom methods to make the corresponding changes to our Neo4j database. We can reuse our social data model and the functions defined in ExploreSocialDataModel.CreateDataModel. We only need to register for the event, and the rest is handled automatically by the Django framework. For example, you can register for the event in your Django model (models.py) with the following statement:

signals.pre_save.connect(preSave, sender=Male)

In the preceding statement, preSave is the custom, user-defined method declared in models.py. It will be invoked before any changes are committed to the Male entity, which is controlled by the Django framework and is distinct from our Neomodel entity. Inside preSave, you then define the invocations of the Neomodel entities and save them. Refer to the documentation at https://docs.djangoproject.com/en/1.7/topics/signals/ for more information on implementing signals in Django.

Signals in Neomodel

Neomodel also provides signals that are similar to Django signals and behave in the same way. Neomodel provides the following signals: pre_save, post_save, pre_delete, post_delete, and post_create. Neomodel exposes two different approaches for implementing signals. The first is to define the pre..() and post..() methods in your model itself, and Neomodel will invoke them automatically; for example, in our social data model, we can define def pre_save(self) in our Model.Male class to receive all events before entities are persisted to the database or server. The second approach is to use Django-style signals, where we call the connect() method in our Neomodel Model.py, and it will produce the same results as in Django-based models:

signals.pre_save.connect(preSave, sender=Male)

Refer to http://neomodel.readthedocs.org/en/latest/hooks.html for more information on signals in Neomodel. In this section, we discussed the integration of Django with Neomodel using Django signals. We also talked about the signals provided by Neomodel and their implementation approaches.

Summary

In this article, we learned about creating web-based applications using Flask. We also used Flask extensions such as Flask-RESTful for creating and exposing REST APIs for data manipulation. Finally, we created a full-blown REST-based application over our social network data using Flask, Flask-RESTful, and the py2neo OGM. We also learned about Neomodel and the various features and APIs it provides for working with Neo4j, and we discussed the integration of Neomodel with the Django framework.

Resources for Article: Further resources on this subject: Firebase [article] Developing Location-based Services with Neo4j [article] Learning BeagleBone Python Programming [article]

Creating subtle UI details using Midnight.js, Wow.js, and Animate.css

Roberto González
10 Jul 2015
9 min read
Creating animations in CSS or JavaScript is often annoying and/or time-consuming, so most people tend to pay a lot of attention to the content that’s below "the fold" ("the fold" is quickly becoming an outdated concept, but you know what I mean). I’ll be covering a few techniques to help you add some nice touches to your landing pages that only take a few minutes to implement and require pretty much no development work at all. To create a base for this project, I put together a bunch of photographs from https://unsplash.com/ with some text on top so we have something to work with. Download the files from http://aerolab.github.io/subtle-animations/assets/basics.zip and put them in a new folder. You can also check out the final result at http://aerolab.github.io/subtle-animations. Dynamically change your fixed headers using Midnight.js If you took a look at the demo site, you probably noticed that the minimalistic header we are using for "A How To Guide" becomes illegible in very light backgrounds. When this happens in most sites, we typically end up putting a background on the header, which usually improves legibility at the cost of making the design worse. Midnight.js is a jQuery plugin that changes your headers as you scroll, so the header always has a design that matches the content below it. This is particularly useful for minimalistic websites as they often use transparent headers. Implementation is quite simple as the setup is pretty much automatic. Start by adding a fixed header into the site. The example has one ready to go: <nav class="fixed"> <div class="container"> <span class="logo">A How To Guide</span> </div> </nav> Most of the setting up comes in specifying which header corresponds to which section. This is done by adding data-midnight="your-class" to any section or piece of content that requires a different design for the header. For the first section, we’ll be using a white header, so we’ll add data-midnight="white" to this section (it doesn’t have to be only a section, any large element works well). <section class="fjords" data-midnight="white"> <article> <h1>Adding Subtle UI Details</h1> <p>Using Midnight.js, Wow.js and Animate.css</p> </article> </section> In the next section, which is a photo of ships in very thick white fog, we’ll be using a darker header to help improve contrast. Let’s use data-midnight="gray" for the second one and data-midgnight="pink" for the last one, so it feels more in line with the content: <section class="ships" data-midnight="gray"> <article> <h1>Be quiet</h1> <p>I'm hunting wabbits</p> </article> </section> <section class="puppy" data-midnight="pink"> <article> <h1>OMG A PUPPY &lt;3</h1> </article> </section> Now we just need to add some css rules to change the look of the header in those cases. We’ll just be changing the color of the text for the moment, so open up css/styles.css and add the following rules: /* Styles for White, Gray and Pink headers */.midnightHeader.white { color: #fff; } .midnightHeader.gray { color: #999; } .midnightHeader.pink { color: #ffc0cb; } Last but not least, we need to include the necessary libraries. 
We’ll add two libraries right before the end of the body: jQuery and Midnight.js (they are included in the project files inside the js folder): <script src="js/jquery-1.11.1.min.js"></script> <script src="js/midnight.jquery.min.js"></script> Right after that, we start Midnight.js on document.ready, using $('nav.fixed').midnight() (you can change the selector to whatever you are using on your site): <script> $(document).ready(function(){ $('nav.fixed').midnight(); }); </script> If you check the site now, you’ll notice that the fixed header gracefully changes color when you start scrolling into the ships section. It’s a very subtle effect, but it helps keep your designs clean. Bonus Feature! It’s possible to completely change the markup of your header just for a specific section. It’s mostly used to add some visual details that require extra markup, but it can be used to completely alter your headers as necessary. In this case, we’ll be changing the “logo" from "A How To Guide" to "Shhhhhhhhh" on the ships section, and a bunch of hearts for the part of the puppy for additional bad comedy. To do this, we need to alter our fixed header a bit. First we need to identify the “default" header (all headers that don't have custom markup will be based on this one), and then add the markup we need for any custom headers, like the gray one. This is done by creating multiple copies of the header and wrapping them in .midnightHeader.default,.midnightHeader.gray and .midnightHeader.pink respectively: <nav class="fixed"> <div class="midnightHeader default"> <div class="container"> <span class="logo">A How To Guide</span> </div> </div> <div class="midnightHeader gray"> <div class="container"> <span class="logo">Shhhhhhhhh</span> </div> </div> <div class="midnightHeader pink"> <div class="container"> <span class="logo">❤❤❤ OMG PUPPIES ❤❤❤</span> </div> </div> </nav> If you test the site now, you’ll notice that the header not only changes color, but it also changes the "name" of the site to match the section, which gives you more freedom in terms of navigation and design. Simple animations with Wow.js and Animate.css Wow.js looks more like a toy than a serious plugin, but it’s actually a very powerful library that’s extremely easy to implement. Wow.js lets you animate things as they come into view. For instance, you can fade something in when you scroll to that section, letting users enjoy some extra UI candy. You can choose from a large set of animations from Animate.css so you don’t even have to touch the CSS (but you can still do that if you want). To get Wow.JS to work, we have to include just two things: Animate.css, which contains all the animations we need. Of course, you can create your own, or even tweak those to match your tastes. Just add a link to animate.css in the head of the document: <linkrel="stylesheet"href="css/animate.css"/> Wow.JS. This is simply just including the script and initializing it, which is done by adding the following just before the end of the document: <script src="js/wow.min.js"></script> <script>new WOW().init()</script> That’s it! To animate an element as soon as it gets into view, you just need to add the .wow class to that element, and then any animation from Animate.css (like .fadeInUp, .slideInLeft, or one of the many options available at http://daneden.github.io/animate.css/). For example, to make something fade in from the bottom of the screen, you just have to add wow fadeInUp. 
Let’s try this on the h1 our first section: <section class="fjords" data-midnight="white"> <article> <h1 class="wow fadeInUp">Adding Subtle UI Details</h1> <p>Using Midnight.js, Wow.js and Animate.css</p> </article> </section> If you feel like altering the animation slightly, you have quite a bit of control over how it behaves. For instance, let’s fade in the subtitle but do it a few milliseconds after the title, so it follows a sequence. We can use data-wow-delay="0.5s" to make the subtitle wait for half a second before making its appearance: <section class="fjords" data-midnight="white"> <article> <h1 class="wow fadeInUp">Adding Subtle UI Details</h1> <p class="wow fadeInUp" data-wow-delay="0.5s">Using Midnight.js, Wow.js and Animate.css</p> </article> </section> We can even tweak how long the animation takes by using data-wow-duration="1.5s" so it lasts a second and a half. This is particularly useful in the second section, combined with another delay: <section class="ships" data-midnight="gray"> <article> <h1 class="wow fadeIn" data-wow-duration="1.5s">Be quiet</h1> <p class="wow fadeIn" data-wow-delay="0.5s" data-wow-duration="1.5s">I'm hunting wabbits</p> </article> </section> We can even repeat an animation a few times. Let’s make the last title shake a few times as soon as it gets into view with data-wow-iteration="5". We'll take this opportunity to use all the properties, like data-wow-duration="0.5s" to make each shake last half a second, and we'll also add a large delay for the last piece so it appears after the main animation has finished: <section class="puppy"> <article> <h1 class="wow shake" data-wow-iteration="5" data-wow-duration="0.5s">OMG A PUPPY &lt;3</h1> <p class="wow fadeIn" data-wow-delay="2.5s">Ok, this one wasn't subtle at all</p> </article> </section> Summary That’s pretty much all there is to know about using Midnight.js, Wow.js and Animate.css! All you need to do now is find a project and experiment a bit with different animations. It’s a great tool to add some last-minute eye candy and - as long as you don’t overdo it - looks fantastic on most sites. I hope you enjoyed the article! About the author Roberto González is the co-founder of Aerolab, "an awesome place where we really push the barriers to create amazing, well coded design for the best digital products."He can be reached at @robertcode. From the 11th to 17th April, save 50% on top web development eBooks and 70% on our specially selected video courses. From Angular 2 to React and much more, find them all here.

Why Meteor Rocks!

Packt
08 Jul 2015
23 min read
In this article by Isaac Strack, the author of the book, Getting Started with Meteor.js JavaScript Framework - Second Edition, has discussed some really amazing features of Meteor that has contributed a lot to the success of Meteor. Meteor is a disruptive (in a good way!) technology. It enables a new type of web application that is faster, easier to build, and takes advantage of modern techniques, such as Full Stack Reactivity, Latency Compensation, and Data On The Wire. (For more resources related to this topic, see here.) This article explains how web applications have changed over time, why that matters, and how Meteor specifically enables modern web apps through the above-mentioned techniques. By the end of this article, you will have learned: What a modern web application is What Data On The Wire means and how it's different How Latency Compensation can improve your app experience Templates and Reactivity—programming the reactive way! Modern web applications Our world is changing. With continual advancements in displays, computing, and storage capacities, things that weren't even possible a few years ago are now not only possible but are critical to the success of a good application. The Web in particular has undergone significant change. The origin of the web app (client/server) From the beginning, web servers and clients have mimicked the dumb terminal approach to computing where a server with significantly more processing power than a client will perform operations on data (writing records to a database, math calculations, text searches, and so on), transform the data and render it (turn a database record into HTML and so on), and then serve the result to the client, where it is displayed for the user. In other words, the server does all the work, and the client acts as more of a display, or a dumb terminal. This design pattern for this is called…wait for it…the client/server design pattern. The diagrammatic representation of the client-server architecture is shown in the following diagram: This design pattern, borrowed from the dumb terminals and mainframes of the 60s and 70s, was the beginning of the Web as we know it and has continued to be the design pattern that we think of when we think of the Internet. The rise of the machines (MVC) Before the Web (and ever since), desktops were able to run a program such as a spreadsheet or a word processor without needing to talk to a server. This type of application could do everything it needed to, right there on the big and beefy desktop machine. During the early 90s, desktop computers got even more beefy. At the same time, the Web was coming alive, and people started having the idea that a hybrid between the beefy desktop application (a fat app) and the connected client/server application (a thin app) would produce the best of both worlds. This kind of hybrid app—quite the opposite of a dumb terminal—was called a smart app. Many business-oriented smart apps were created, but the easiest examples can be found in computer games. Massively Multiplayer Online games (MMOs), first-person shooters, and real-time strategies are smart apps where information (the data model) is passed between machines through a server. The client in this case does a lot more than just display the information. It performs most of the processing (or acts as a controller) and transforms the data into something to be displayed (the view). This design pattern is simple but very effective. It's called the Model View Controller (MVC) pattern. 
The model is essentially the data for an application. In the context of a smart app, the model is provided by a server. The client makes requests to the server for data and stores that data as the model. Once the client has a model, it performs actions/logic on that data and then prepares it to be displayed on the screen. This part of the application (talking to the server, modifying the data model, and preparing data for display) is called the controller. The controller sends commands to the view, which displays the information. The view also reports back to the controller when something happens on the screen (a button click, for example). The controller receives the feedback, performs the logic, and updates the model. Lather, rinse, repeat! Since web browsers were built to be "dumb clients", the idea of using a browser as a smart app back then was out of question. Instead, smart apps were built on frameworks such as Microsoft .NET, Java, or Macromedia (now Adobe) Flash. As long as you had the framework installed, you could visit a web page to download/run a smart app. Sometimes, you could run the app inside the browser, and sometimes, you would download it first, but either way, you were running a new type of web app where the client application could talk to the server and share the processing workload. The browser grows up Beginning in the early 2000s, a new twist on the MVC pattern started to emerge. Developers started to realize that, for connected/enterprise "smart apps", there was actually a nested MVC pattern. The server code (controller) was performing business logic against the database (model) through the use of business objects and then sending processed/rendered data to the client application (a "view"). The client was receiving this data from the server and treating it as its own personal "model". The client would then act as a proper controller, perform logic, and send the information to the view to be displayed on the screen. So, the "view" for the server MVC was the "model" for the client MVC. As browser technologies (HTML and JavaScript) matured, it became possible to create smart apps that used the Nested MVC design pattern directly inside an HTML web page. This pattern makes it possible to run a full-sized application using only JavaScript. There is no longer any need to download multiple frameworks or separate apps. You can now get the same functionality from visiting a URL as you could previously by buying a packaged product. A giant Meteor appears! Meteor takes modern web apps to the next level. It enhances and builds upon the nested MVC design pattern by implementing three key features: Data On The Wire through the Distributed Data Protocol (DDP) Latency Compensation with Mini Databases Full Stack Reactivity with Blaze and Tracker Let's walk through these concepts to see why they're valuable, and then, we'll apply them to our Lending Library application. Data On The Wire The concept of Data On The Wire is very simple and in tune with the nested MVC pattern; instead of having a server process everything, render content, and then send HTML across the wire, why not just send the data across the wire and let the client decide what to do with it? This concept is implemented in Meteor using the Distributed Data Protocol, or DDP. DDP has a JSON-based syntax and sends messages similar to the REST protocol. Additions, deletions, and changes are all sent across the wire and handled by the receiving service/client/device. 
Since DDP uses WebSockets rather than HTTP, the data can be pushed whenever changes occur. But the true beauty of DDP lies in the generic nature of the communication. It doesn't matter what kind of system sends or receives data over DDP—it can be a server, a web service, or a client app—they all use the same protocol to communicate. This means that none of the systems know (or care) whether the other systems are clients or servers. With the exception of the browser, any system can be a server, and without exception, any server can act as a client. All the traffic looks the same and can be treated in a similar manner. In other words, the traditional concept of having a single server for a single client goes away. You can hook multiple servers together, each serving a discreet purpose, or you can have a client connect to multiple servers, interacting with each one differently. Think about what you can do with a system like that: Imagine multiple systems all coming together to create, for example, a health monitoring system. Some systems are built with C++, some with Arduino, some with…well, we don't really care. They all speak DDP. They send and receive data on the wire and decide individually what to do with that data. Suddenly, very difficult and complex problems become much easier to solve. DDP has been implemented in pretty much every major programming language, allowing you true freedom to architect an enterprise application. Latency Compensation Meteor employs a very clever technique called Mini Databases. A mini database is a "lite" version of a normal database that lives in the memory on the client side. Instead of the client sending requests to a server, it can make changes directly to the mini database on the client. This mini database then automatically syncs with the server (using DDP of course), which has the actual database. Out of the box, Meteor uses MongoDB and Minimongo: When the client notices a change, it first executes that change against the client-side Minimongo instance. The client then goes on its merry way and lets the Minimongo handlers communicate with the server over DDP. If the server accepts the change, it then sends out a "changed" message to all connected clients, including the one that made the change. If the server rejects the change, or if a newer change has come in from a different client, the Minimongo instance on the client is corrected, and any affected UI elements are updated as a result. All of this doesn't seem very groundbreaking, but here's the thing—it's all asynchronous, and it's done using DDP. This means that the client doesn't have to wait until it gets a response back from the server. It can immediately update the UI based on what is in the Minimongo instance. What if the change was illegal or other changes have come in from the server? This is not a problem as the client is updated as soon as it gets word from the server. Now, what if you have a slow internet connection or your connection goes down temporarily? In a normal client/server environment, you couldn't make any changes, or the screen would take a while to refresh while the client waits for permission from the server. However, Meteor compensates for this. Since the changes are immediately sent to Minimongo, the UI gets updated immediately. So, if your connection is down, it won't cause a problem: All the changes you make are reflected in your UI, based on the data in Minimongo. 
When your connection comes back, all the queued changes are sent to the server, and the server will send authorized changes to the client. Basically, Meteor lets the client take things on faith. If there's a problem, the data coming in from the server will fix it, but for the most part, the changes you make will be ratified and broadcast by the server immediately. Coding this type of behavior in Meteor is crazy easy (although you can make it more complex and therefore more controlled if you like): lists = new Mongo.Collection("lists"); This one line declares that there is a lists data model. Both the client and server will have a version of it, but they treat their versions differently. The client will subscribe to changes announced by the server and update its model accordingly. The server will publish changes, listen to change requests from the client, and update its model (its master copy) based on these change requests. Wow, one line of code that does all that! Of course, there is more to it, but that's beyond the scope of this article, so we'll move on. To better understand Meteor data synchronization, see the Publish and subscribe section of the meteor documentation at http://docs.meteor.com/#/full/meteor_publish. Full Stack Reactivity Reactivity is integral to every part of Meteor. On the client side, Meteor has the Blaze library, which uses HTML templates and JavaScript helpers to detect changes and render the data in your UI. Whenever there is a change, the helpers re-run themselves and add, delete, and change UI elements, as appropriate, based on the structure found in the templates. These functions that re-run themselves are called reactive computations. On both the client and the server, Meteor also offers reactive computations without having to use a UI. Called the Tracker library, these helpers also detect any data changes and rerun themselves accordingly. Because both the client and the server are JavaScript-based, you can use the Tracker library anywhere. This is defined as isomorphic or full stack reactivity because you're using the same language (and in some cases the same code!) on both the client and the server. Re-running functions on data changes has a really amazing benefit for you, the programmer: you get to write code declaratively, and Meteor takes care of the reactive part automatically. Just tell Meteor how you want the data displayed, and Meteor will manage any and all data changes. This declarative style is usually accomplished through the use of templates. Templates work their magic through the use of view data bindings. Without getting too deep, a view data binding is a shared piece of data that will be displayed differently if the data changes. Let's look at a very simple data binding—one for which you don't technically need Meteor—to illustrate the point. Let's perform the following set of steps to understand the concept in detail: In LendLib.html, you will see an HTML-based template expression: <div id="categories-container">      {{> categories}}   </div> This expression is a placeholder for an HTML template that is found just below it: <template name="categories">    <h2 class="title">my stuff</h2>.. So, {{> categories}} is basically saying, "put whatever is in the template categories right here." And the HTML template with the matching name is providing that. 
If you want to see how data changes will affect the display, change the h2 tag to an h4 tag and save the change: <template name="categories">    <h4 class="title">my stuff</h4> You'll see the effect in your browser. (my stuff will become itsy bitsy.) That's view data binding at work. Change the h4 tag back to an h2 tag and save the change, unless you like the change. No judgment here...okay, maybe a little bit of judgment. It's ugly, and tiny, and hard to read. Seriously, you should change it back before someone sees it and makes fun of you! Alright, now that we know what a view data binding is, let's see how Meteor uses it. Inside the categories template in LendLib.html, you'll find even more templates: <template name="categories"> <h4 class="title">my stuff</h4> <div id="categories" class="btn-group">    {{#each lists}}      <div class="category btn btn-primary">        {{Category}}      </div>    {{/each}} </div> </template> Meteor uses a template language called Spacebars to provide instructions inside templates. These instructions are called expressions, and they let us do things like add HTML for every record in a collection, insert the values of properties, and control layouts with conditional statements. The first Spacebars expression is part of a pair and is a for-each statement. {{#each lists}} tells the interpreter to perform the action below it (in this case, it tells it to make a new div element) for each item in the lists collection. lists is the piece of data, and {{#each lists}} is the placeholder. Now, inside the {{#each lists}} expression, there is one more Spacebars expression: {{Category}} Since the expression is found inside the #each expression, it is considered a property. That is to say that {{Category}} is the same as saying this.Category, where this is the current item in the for-each loop. So, the placeholder is saying, "add the value of the Category property for the current record." Now, if we look in LendLib.js, we will see the reactive values (called reactive contexts) behind the templates: lists : function () { return lists.find(... Here, Meteor is declaring a template helper named lists. The helper, lists, is found inside the template helpers belonging to categories. The lists helper happens to be a function that returns all the data in the lists collection, which we defined previously. Remember this line? lists = new Mongo.Collection("lists"); This lists collection is returned by the above-mentioned helper. When there is a change to the lists collection, the helper gets updated and the template's placeholder is changed as well. Let's see this in action. On your web page pointing to http://localhost:3000, open the browser console and enter the following line: > lists.insert({Category:"Games"}); This will update the lists data collection. The template will see this change and update the HTML code/placeholder. Each of the placeholders will run one additional time for the new entry in lists, and you'll see the following screen: When the lists collection was updated, the Template.categories.lists helper detected the change and reran itself (recomputed). This changed the contents of the code meant to be displayed in the {{> categories}} placeholder. Since the contents were changed, the affected part of the template was re-run. Now, take a minute here and think about how little we had to do to get this reactive computation to run: we simply created a template, instructing Blaze how we want the lists data collection to be displayed, and we put in a placeholder. 
This is simple, declarative programming at its finest! Let's create some templates We'll now see a real-life example of reactive computations and work on our Lending Library at the same time. Adding categories through the console has been a fun exercise, but it's not a long-term solution. Let's make it so that we can do that on the page instead as follows: Open LendLib.html and add a new button just before the {{#each lists}} expression: <div id="categories" class="btn-group"> <div class="category btn btn-primary" id="btnNewCat">    <span class="glyphicon glyphicon-plus"></span> </div> {{#each lists}} This will add a plus button on the page, as follows: Now, we want to change the button into a text field when we click on it. So let's build that functionality by using the reactive pattern. We will make it based on the value of a variable in the template. Add the following {{#if…else}} conditionals around our new button: <div id="categories" class="btn-group"> {{#if new_cat}} {{else}}    <div class="category btn btn-primary" id="btnNewCat">      <span class="glyphicon glyphicon-plus"></span>    </div> {{/if}} {{#each lists}} The first line, {{#if new_cat}}, checks to see whether new_cat is true or false. If it's false, the {{else}} section is triggered, and it means that we haven't yet indicated that we want to add a new category, so we should be displaying the button with the plus sign. In this case, since we haven't defined it yet, new_cat will always be false, and so the display won't change. Now, let's add the HTML code to display when we want to add a new category: {{#if new_cat}} <div class="category form-group" id="newCat">      <input type="text" id="add-category" class="form-control" value="" />    </div> {{else}} ... {{/if}} There's the smallest bit of CSS we need to take care of as well. Open ~/Documents/Meteor/LendLib/LendLib.css and add the following declaration: #newCat { max-width: 250px; } Okay, so now we've added an input field, which will show up when new_cat is true. The input field won't show up unless it is set to true; so, for now, it's hidden. So, how do we make new_cat equal to true? Save your changes if you haven't already done so, and open LendLib.js. First, we'll declare a Session variable, just below our Meteor.isClient check function, at the top of the file: if (Meteor.isClient) { // We are declaring the 'adding_category' flag Session.set('adding_category', false); Now, we'll declare the new template helper new_cat, which will be a function returning the value of adding_category. We need to place the new helper in the Template.categories.helpers() method, just below the declaration for lists: Template.categories.helpers({ lists: function () {    ... }, new_cat: function(){    //returns true if adding_category has been assigned    //a value of true    return Session.equals('adding_category',true); } }); Note the comma (,) on the line above new_cat. It's important that you add that comma, or your code will not execute. Save these changes, and you'll see that nothing has changed. Ta-da! In reality, this is exactly as it should be because we haven't done anything to change the value of adding_category yet. Let's do this now: First, we'll declare our click event handler, which will change the value in our Session variable. To do this, add the following highlighted code just below the Template.categories.helpers() block: Template.categories.helpers({ ... 
}); Template.categories.events({ 'click #btnNewCat': function (e, t) {    Session.set('adding_category', true);    Tracker.flush();    focusText(t.find("#add-category")); } }); Now, let's take a look at the following line of code: Template.categories.events({ This line declares that events will be found in the category template. Now, let's take a look at the next line: 'click #btnNewCat': function (e, t) { This tells us that we're looking for a click event on the HTML element with an id="btnNewCat" statement (which we already created in LendLib.html). Session.set('adding_category', true); Tracker.flush(); focusText(t.find("#add-category")); Next, we set the Session variable, adding_category = true, flush the DOM (to clear up anything wonky), and then set the focus onto the input box with the id="add-category" expression. There is one last thing to do, and that is to quickly add the focusText(). helper function. To do this, just before the closing tag for the if (Meteor.isClient) function, add the following code: /////Generic Helper Functions///// //this function puts our cursor where it needs to be. function focusText(i) { i.focus(); i.select(); }; } //<------closing bracket for if(Meteor.isClient){} Now, when you save the changes and click on the plus button, you will see the input box: Fancy! However, it's still not useful, and we want to pause for a second and reflect on what just happened; we created a conditional template in the HTML page that will either show an input box or a plus button, depending on the value of a variable. This variable is a reactive variable, called a reactive context. This means that if we change the value of the variable (like we do with the click event handler), then the view automatically updates because the new_cat helpers function (a reactive computation) will rerun. Congratulations, you've just used Meteor's reactive programming model! To really bring this home, let's add a change to the lists collection (which is also a reactive context, remember?) and figure out a way to hide the input field when we're done. First, we need to add a listener for the keyup event. Or, to put it another way, we want to listen when the user types something in the box and hits Enter. When this happens, we want to add a category based on what the user typed. To do this, let's first declare the event handler. Just after the click handler for #btnNewCat, let's add another event handler: 'click #btnNewCat': function (e, t) {    ... }, 'keyup #add-category': function (e,t){    if (e.which === 13)    {      var catVal = String(e.target.value || "");      if (catVal)      {        lists.insert({Category:catVal});        Session.set('adding_category', false);      }    } } We add a "," character at the end of the first click handler, and then add the keyup event handler. Now, let's check each of the lines in the preceding code: This line checks to see whether we hit the Enter/Return key. if (e.which === 13) This line of code checks to see whether the input field has any value in it: var catVal = String(e.target.value || ""); if (catVal) If it does, we want to add an entry to the lists collection: lists.insert({Category:catVal}); Then, we want to hide the input box, which we can do by simply modifying the value of adding_category: Session.set('adding_category', false); There is one more thing to add and then we'll be done. When we click away from the input box, we want to hide it and bring back the plus button. 
We already know how to do this reactively, so let's add a quick function that changes the value of adding_category. To do this, add one more comma after the keyup event handler and insert the following event handler: 'keyup #add-category': function (e,t){ ... }, 'focusout #add-category': function(e,t){    Session.set('adding_category',false); } Save your changes, and let's see this in action! In your web browser on http://localhost:3000, click on the plus sign, add the word Clothes, and hit Enter. Your screen should now resemble the following screenshot: Feel free to add more categories if you like. Also, experiment by clicking on the plus button, typing something in, and then clicking away from the input field. Summary In this article, you learned about the history of web applications and saw how we've moved from a traditional client/server model to a nested MVC design pattern. You learned what smart apps are, and you also saw how Meteor has taken smart apps to the next level with Data On The Wire, Latency Compensation, and Full Stack Reactivity. You saw how Meteor uses templates and helpers to automatically update content, using reactive variables and reactive computations. Lastly, you added more functionality to the Lending Library. You made a button and an input field to add categories, and you did it all using reactive programming rather than directly editing the HTML code. Resources for Article: Further resources on this subject: Building the next generation Web with Meteor [article] Quick start - creating your first application [article] Meteor.js JavaScript Framework: Why Meteor Rocks! [article]

Installing jQuery

Packt
04 Jun 2015
25 min read
 In this article by Alex Libby, author of the book Mastering jQuery, we will examine some of the options available to help develop your skills even further. (For more resources related to this topic, see here.) Local or CDN, I wonder…? Which version…? Do I support old IE…? Installing jQuery is a thankless task that has to be done countless times by any developer—it is easy to imagine that person asking some of the questions. It is easy to imagine why most people go with the option of using a Content Delivery Network (CDN) link, but there is more to installing jQuery than taking the easy route! There are more options available, where we can be really specific about what we need to use—throughout this article, we will. We'll cover a number of topics, which include: Downloading and installing jQuery Customizing jQuery downloads Building from Git Using other sources to install jQuery Adding source map support Working with Modernizr as a fallback Intrigued? Let's get started. Downloading and installing jQuery As with all projects that require the use of jQuery, we must start somewhere—no doubt you've downloaded and installed jQuery a thousand times; let's just quickly recap to bring ourselves up to speed. If we browse to http://www.jquery.com/download, we can download jQuery using one of the two methods: downloading the compressed production version or the uncompressed development version. If we don't need to support old IE (IE6, 7, and 8), then we can choose the 2.x branch. If, however, you still have some diehards who can't (or don't want to) upgrade, then the 1.x branch must be used instead. To include jQuery, we just need to add this link to our page: <script src="http://code.jquery.com/jquery-X.X.X.js"></script> Here, X.X.X marks the version number of jQuery or the Migrate plugin that is being used in the page. Conventional wisdom states that the jQuery plugin (and this includes the Migrate plugin too) should be added to the <head> tag, although there are valid arguments to add it as the last statement before the closing <body> tag; placing it here may help speed up loading times to your site. This argument is not set in stone; there may be instances where placing it in the <head> tag is necessary and this choice should be left to the developer's requirements. My personal preference is to place it in the <head> tag as it provides a clean separation of the script (and the CSS) code from the main markup in the body of the page, particularly on lighter sites. I have even seen some developers argue that there is little perceived difference if jQuery is added at the top, rather than at the bottom; some systems, such as WordPress, include jQuery in the <head> section too, so either will work. The key here though is if you are perceiving slowness, then move your scripts to just before the <body> tag, which is considered a better practice. Using jQuery in a development capacity A useful point to note at this stage is that best practice recommends that CDN links should not be used within a development capacity; instead, the uncompressed files should be downloaded and referenced locally. Once the site is complete and is ready to be uploaded, then CDN links can be used. Adding the jQuery Migrate plugin If you've used any version of jQuery prior to 1.9, then it is worth adding the jQuery Migrate plugin to your pages. 
The jQuery Core team made some significant changes to jQuery from this version; the Migrate plugin will temporarily restore the functionality until such time that the old code can be updated or replaced. The plugin adds three properties and a method to the jQuery object, which we can use to control its behavior: Property or Method Comments jQuery.migrateWarnings This is an array of string warning messages that have been generated by the code on the page, in the order in which they were generated. Messages appear in the array only once even if the condition has occurred multiple times, unless jQuery.migrateReset() is called. jQuery.migrateMute Set this property to true in order to prevent console warnings from being generated in the debugging version. If this property is set, the jQuery.migrateWarnings array is still maintained, which allows programmatic inspection without console output. jQuery.migrateTrace Set this property to false if you want warnings but don't want traces to appear on the console. jQuery.migrateReset() This method clears the jQuery.migrateWarnings array and "forgets" the list of messages that have been seen already. Adding the plugin is equally simple—all you need to do is add a link similar to this, where X represents the version number of the plugin that is used: <script src="http://code.jquery.com/jquery-migrate- X.X.X.js"></script> If you want to learn more about the plugin and obtain the source code, then it is available for download from https://github.com/jquery/jquery-migrate. Using a CDN We can equally use a CDN link to provide our jQuery library—the principal link is provided by MaxCDN for the jQuery team, with the current version available at http://code.jquery.com. We can, of course, use CDN links from some alternative sources, if preferred—a reminder of these is as follows: Google (https://developers.google.com/speed/libraries/devguide#jquery) Microsoft (http://www.asp.net/ajaxlibrary/cdn.ashx#jQuery_Releases_on_the_CDN_0) CDNJS (http://cdnjs.com/libraries/jquery/) jsDelivr (http://www.jsdelivr.com/#%!jquery) Don't forget though that if you need, we can always save a copy of the file provided on CDN locally and reference this instead. The jQuery CDN will always have the latest version, although it may take a couple of days for updates to appear via the other links. Using other sources to install jQuery Right. Okay, let's move on and develop some code! "What's next?" I hear you ask. Aha! If you thought downloading and installing jQuery from the main site was the only way to do this, then you are wrong! After all, this is about mastering jQuery, so you didn't think I will only talk about something that I am sure you are already familiar with, right? Yes, there are more options available to us to install jQuery than simply using the CDN or main download page. Let's begin by taking a look at using Node. Each demo is based on Windows, as this is the author's preferred platform; alternatives are given, where possible, for other platforms. Using Node JS to install jQuery So far, we've seen how to download and reference jQuery, which is to use the download from the main jQuery site or via a CDN. The downside of this method is the manual work required to keep our versions of jQuery up to date! Instead, we can use a package manager to help manage our assets. Node.js is one such system. 
Let's take a look at the steps that need to be performed in order to get jQuery installed: We first need to install Node.js—head over to http://www.nodejs.org in order to download the package for your chosen platform; accept all the defaults when working through the wizard (for Mac and PC). Next, fire up a Node Command Prompt and then change to your project folder. In the prompt, enter this command: npm install jquery Node will fetch and install jQuery—it displays a confirmation message when the installation is complete: You can then reference jQuery by using this link: <name of drive>:websitenode_modulesjquerydistjquery.min.js. Node is now installed and ready for use—although we've installed it in a folder locally, in reality, we will most likely install it within a subfolder of our local web server. For example, if we're running WampServer, we can install it, then copy it into the /wamp/www/js folder, and reference it using http://localhost/js/jquery.min.js. If you want to take a look at the source of the jQuery Node Package Manager (NPM) package, then check out https://www.npmjs.org/package/jquery. Using Node to install jQuery makes our work simpler, but at a cost. Node.js (and its package manager, NPM) is primarily aimed at installing and managing JavaScript components and expects packages to follow the CommonJS standard. The downside of this is that there is no scope to manage any of the other assets that are often used within websites, such as fonts, images, CSS files, or even HTML pages. "Why will this be an issue?," I hear you ask. Simple, why make life hard for ourselves when we can manage all of these assets automatically and still use Node? Installing jQuery using Bower A relatively new addition to the library is the support for installation using Bower—based on Node, it's a package manager that takes care of the fetching and installing of packages from over the Internet. It is designed to be far more flexible about managing the handling of multiple types of assets (such as images, fonts, and CSS files) and does not interfere with how these components are used within a page (unlike Node). For the purpose of this demo, I will assume that you have already installed it; if not, you will need to revisit it before continuing with the following steps: Bring up the Node Command Prompt, change to the drive where you want to install jQuery, and enter this command: bower install jquery This will download and install the script, displaying the confirmation of the version installed when it has completed. The library is installed in the bower_components folder on your PC. It will look similar to this example, where I've navigated to the jquery subfolder underneath. By default, Bower will install jQuery in its bower_components folder. Within bower_components/jquery/dist/, we will find an uncompressed version, compressed release, and source map file. We can then reference jQuery in our script using this line: <script src="/bower_components/jquery/jquery.js"></script> We can take this further though. If we don't want to install the extra files that come with a Bower installation by default, we can simply enter this in a Command Prompt instead to just install the minified version 2.1 of jQuery: bower install http://code.jquery.com/jquery-2.1.0.min.js Now, we can be really clever at this point; as Bower uses Node's JSON files to control what should be installed, we can use this to be really selective and set Bower to install additional components at the same time. 
Let's take a look and see how this will work—in the following example, we'll use Bower to install jQuery 2.1 and 1.10 (the latter to provide support for IE6-8). In the Node Command Prompt, enter the following command: bower init This will prompt you for answers to a series of questions, at which point you can either fill out information or press Enter to accept the defaults. Look in the project folder; you should find a bower.json file within. Open it in your favorite text editor and then alter the code as shown here: {"ignore": [ "**/.*", "node_modules", "bower_components","test", "tests" ] ,"dependencies": {"jquery-legacy": "jquery#1.11.1","jquery-modern": "jquery#2.10"}} At this point, you have a bower.json file that is ready for use. Bower is built on top of Git, so in order to install jQuery using your file, you will normally need to publish it to the Bower repository. Instead, you can install an additional Bower package, which will allow you to install your custom package without the need to publish it to the Bower repository: In the Node Command Prompt window, enter the following at the prompt: npm install -g bower-installer When the installation is complete, change to your project folder and then enter this command line: bower-installer The bower-installer command will now download and install both the versions of jQuery. At this stage, you now have jQuery installed using Bower. You're free to upgrade or remove jQuery using the normal Bower process at some point in the future. If you want to learn more about how to use Bower, there are plenty of references online; https://www.openshift.com/blogs/day-1-bower-manage-your-client-side-dependencies is a good example of a tutorial that will help you get accustomed to using Bower. In addition, there is a useful article that discusses both Bower and Node, available at http://tech.pro/tutorial/1190/package-managers-an-introductory-guide-for-the-uninitiated-front-end-developer. Bower isn't the only way to install jQuery though—while we can use it to install multiple versions of jQuery, for example, we're still limited to installing the entire jQuery library. We can improve on this by referencing only the elements we need within the library. Thanks to some extensive work undertaken by the jQuery Core team, we can use the Asynchronous Module Definition (AMD) approach to reference only those modules that are needed within our website or online application. Using the AMD approach to load jQuery In most instances, when using jQuery, developers are likely to simply include a reference to the main library in their code. There is nothing wrong with it per se, but it loads a lot of extra code that is surplus to our requirements. A more efficient method, although one that takes a little effort in getting used to, is to use the AMD approach. In a nutshell, the jQuery team has made the library more modular; this allows you to use a loader such as require.js to load individual modules when needed. It's not suitable for every approach, particularly if you are a heavy user of different parts of the library. However, for those instances where you only need a limited number of modules, then this is a perfect route to take. Let's work through a simple example to see what it looks like in practice. Before we start, we need one additional item—the code uses the Fira Sans regular custom font, which is available from Font Squirrel at http://www.fontsquirrel.com/fonts/fira-sans. 
Let's make a start using the following steps: The Fira Sans font doesn't come with a web format by default, so we need to convert the font to use the web font format. Go ahead and upload the FiraSans-Regular.otf file to Font Squirrel's web font generator at http://www.fontsquirrel.com/tools/webfont-generator. When prompted, save the converted file to your project folder in a subfolder called fonts. We need to install jQuery and RequireJS into our project folder, so fire up a Node.js Command Prompt and change to the project folder. Next, enter these commands one by one, pressing Enter after each: bower install jquerybower install requirejs We need to extract a copy of the amd.html and amd.css files—it contains some simple markup along with a link to require.js; the amd.css file contains some basic styling that we will use in our demo. We now need to add in this code block, immediately below the link for require.js—this handles the calls to jQuery and RequireJS, where we're calling in both jQuery and Sizzle, the selector engine for jQuery: <script>require.config({paths: {"jquery": "bower_components/jquery/src","sizzle": "bower_components/jquery/src/sizzle/dist/sizzle"}});require(["js/app"]);</script> Now that jQuery has been defined, we need to call in the relevant modules. In a new file, go ahead and add the following code, saving it as app.js in a subfolder marked js within our project folder: define(["jquery/core/init", "jquery/attributes/classes"],function($) {$("div").addClass("decoration");}); We used app.js as the filename to tie in with the require(["js/app"]); reference in the code. If all went well, when previewing the results of our work in a browser. Although we've only worked with a simple example here, it's enough to demonstrate how easy it is to only call those modules we need to use in our code rather than call the entire jQuery library. True, we still have to provide a link to the library, but this is only to tell our code where to find it; our module code weighs in at 29 KB (10 KB when gzipped), against 242 KB for the uncompressed version of the full library! Now, there may be instances where simply referencing modules using this method isn't the right approach; this may apply if you need to reference lots of different modules regularly. A better alternative is to build a custom version of the jQuery library that only contains the modules that we need to use and the rest are removed during build. It's a little more involved but worth the effort—let's take a look at what is involved in the process. Customizing the downloads of jQuery from Git If we feel so inclined, we can really push the boat out and build a custom version of jQuery using the JavaScript task runner, Grunt. The process is relatively straightforward but involves a few steps; it will certainly help if you have some prior familiarity with Git! The demo assumes that you have already installed Node.js—if you haven't, then you will need to do this first before continuing with the exercise. Okay, let's make a start by performing the following steps: You first need to install Grunt if it isn't already present on your system—bring up the Node.js Command Prompt and enter this command: npm install -g grunt-cli Next, install Git—for this, browse to http://msysgit.github.io/ in order to download the package. Double-click on the setup file to launch the wizard, accepting all the defaults is sufficient for our needs. 
If you want more information on how to install Git, head over and take a look at https://github.com/msysgit/msysgit/wiki/InstallMSysGit for more details. Once Git is installed, change to the jquery folder from within the Command Prompt and enter this command to download and install the dependencies needed to build jQuery: npm install The final stage of the build process is to build the library into the file we all know and love; from the same Command Prompt, enter this command: grunt Browse to the jquery folder—within this will be a folder called dist, which contains our custom build of jQuery, ready for use. If there are modules within the library that we don't need, we can run a custom build. We can set the Grunt task to remove these when building the library, leaving in those that are needed for our project. For a complete list of all the modules that we can exclude, see https://github.com/jquery/jquery#modules. For example, to remove AJAX support from our build, we can run this command in place of step 5, as shown previously: grunt custom:-ajax This results in a file saving on the original raw version of 30 KB as shown in the following screenshot: The JavaScript and map files can now be incorporated into our projects in the usual way. For a detailed tutorial on the build process, this article by Dan Wellman is worth a read (https://www.packtpub.com/books/content/building-custom-version-jquery). Using a GUI as an alternative There is an online GUI available, which performs much the same tasks, without the need to install Git or Grunt. It's available at hhttp://projects.jga.me/jquery-builder/, although it is worth noting that it hasn't been updated for a while! Okay, so we have jQuery installed; let's take a look at one more useful function that will help in the event of debugging errors in our code. Support for source maps has been made available within jQuery since version 1.9. Let's take a look at how they work and see a simple example in action. Adding source map support Imagine a scenario, if you will, where you've created a killer site, which is running well, until you start getting complaints about problems with some of the jQuery-based functionality that is used on the site. Sounds familiar? Using an uncompressed version of jQuery on a production site is not an option; instead we can use source maps. Simply put, these map a compressed version of jQuery against the relevant line in the original source. Historically, source maps have given developers a lot of heartache when implementing, to the extent that the jQuery Team had to revert to disabling the automatic use of maps! For best effects, it is recommended that you use a local web server, such as WAMP (PC) or MAMP (Mac), to view this demo and that you use Chrome as your browser. Source maps are not difficult to implement; let's run through how you can implement them: Extract a copy of the sourcemap folder and save it to your project area locally. Press Ctrl + Shift + I to bring up the Developer Tools in Chrome. Click on Sources, then double-click on the sourcemap.html file—in the code window, and finally click on 17. Now, run the demo in Chrome—we will see it paused; revert back to the developer toolbar where line 17 is highlighted. 
The relevant calls to the jQuery library are shown on the right-hand side of the screen: If we double-click on the n.event.dispatch entry on the right, Chrome refreshes the toolbar and displays the original source line (highlighted) from the jQuery library, as shown here: It is well worth spending the time to get to know source maps—all the latest browsers support it, including IE11. Even though we've only used a simple example here, it doesn't matter as the principle is exactly the same, no matter how much code is used in the site. For a more in-depth tutorial that covers all the browsers, it is worth heading over to http://blogs.msdn.com/b/davrous/archive/2014/08/22/enhance-your-javascript-debugging-life-thanks-to-the-source-map-support-available-in-ie11-chrome-opera-amp-firefox.aspx—it is worth a read! Adding support for source maps We've just previewed the source map, source map support has already been added to the library. It is worth noting though that source maps are not included with the current versions of jQuery by default. If you need to download a more recent version or add support for the first time, then follow these steps: Source maps can be downloaded from the main site using http://code.jquery.com/jquery-X.X.X.min.map, where X represents the version number of jQuery being used. Open a copy of the minified version of the library and then add this line at the end of the file: //# sourceMappingURL=jquery.min.map Save it and then store it in the JavaScript folder of your project. Make sure you have copies of both the compressed and uncompressed versions of the library within the same folder. Let's move on and look at one more critical part of loading jQuery: if, for some unknown reason, jQuery becomes completely unavailable, then we can add a fallback position to our site that allows graceful degradation. It's a small but crucial part of any site and presents a better user experience than your site simply falling over! Working with Modernizr as a fallback A best practice when working with jQuery is to ensure that a fallback is provided for the library, should the primary version not be available. (Yes, it's irritating when it happens, but it can happen!) Typically, we might use a little JavaScript, such as the following example, in the best practice suggestions. This would work perfectly well but doesn't provide a graceful fallback. Instead, we can use Modernizr to perform the check for us and provide a graceful degradation if all fails. Modernizr is a feature detection library for HTML5/CSS3, which can be used to provide a standardized fallback mechanism in the event of a functionality not being available. You can learn more at http://www.modernizr.com. As an example, the code might look like this at the end of our website page. We first try to load jQuery using the CDN link, falling back to a local copy if that hasn't worked or an alternative if both fail: <body><script src="js/modernizr.js"></script><script type="text/javascript">Modernizr.load([{load: 'http://code.jquery.com/jquery-2.1.1.min.js',complete: function () {// Confirm if jQuery was loaded using CDN link// if not, fall back to local versionif ( !window.jQuery ) {Modernizr.load('js/jquery-latest.min.js');}}},// This script would wait until fallback is loaded, beforeloading{ load: 'jquery-example.js' }]);</script></body> In this way, we can ensure that jQuery either loads locally or from the CDN link—if all else fails, then we can at least make a graceful exit. 
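If you would like to reproduce something similar with your own markup, a handler along these lines is all it takes; the element IDs are assumptions rather than the ones used in sourcemap.html. With the jquery.min.map file sitting next to the minified library, pausing at the debugger statement and stepping into the next line walks you into the original, uncompressed jQuery source rather than a single minified line:

<script src="js/jquery.min.js"></script>
<script>
  // A minimal sketch - #clickme and #result are hypothetical elements
  $(document).ready(function () {
    $('#clickme').on('click', function () {
      debugger;  // execution pauses here; "step into" the line below to land inside jQuery
      $('#result').text('Clicked at ' + new Date().toLocaleTimeString());
    });
  });
</script>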
Best practices for loading jQuery So far, we've examined several ways of loading jQuery into our pages, over and above the usual route of downloading the library locally or using a CDN link in our code. Now that we have it installed, it's a good opportunity to cover some of the best practices we should try to incorporate into our pages when loading jQuery: Always try to use a CDN to include jQuery on your production site. We can take advantage of the high availability and low latency offered by CDN services; the library may already be precached too, avoiding the need to download it again. Try to implement a fallback on your locally hosted library of the same version. If CDN links become unavailable (and they are not 100 percent infallible), then the local version will kick in automatically, until the CDN link becomes available again: <script type="text/javascript" src="//code.jquery.com/jquery-1.11.1.min.js"></script><script>window.jQuery || document.write('<scriptsrc="js/jquery-1.11.1.min.js"></script>')</script> Note that although this will work equally well as using Modernizr, it doesn't provide a graceful fallback if both the versions of jQuery should become unavailable. Although one hopes to never be in this position, at least we can use CSS to provide a graceful exit! Use protocol-relative/protocol-independent URLs; the browser will automatically determine which protocol to use. If HTTPS is not available, then it will fall back to HTTP. If you look carefully at the code in the previous point, it shows a perfect example of a protocol-independent URL, with the call to jQuery from the main jQuery Core site. If possible, keep all your JavaScript and jQuery inclusions at the bottom of your page—scripts block the rendering of the rest of the page until they have been fully rendered. Use the jQuery 2.x branch, unless you need to support IE6-8; in this case, use jQuery 1.x instead—do not load multiple jQuery versions. If you load jQuery using a CDN link, always specify the complete version number you want to load, such as jquery-1.11.1.min.js. If you are using other libraries, such as Prototype, MooTools, Zepto, and so on, that use the $ sign as well, try not to use $ to call jQuery functions and simply use jQuery instead. You can return the control of $ back to the other library with a call to the $.noConflict() function. For advanced browser feature detection, use Modernizr. It is worth noting that there may be instances where it isn't always possible to follow best practices; circumstances may dictate that we need to make allowances for requirements, where best practices can't be used. However, this should be kept to a minimum where possible; one might argue that there are flaws in our design if most of the code doesn't follow best practices! Summary If you thought that the only methods to include jQuery were via a manual download or using a CDN link, then hopefully this article has opened your eyes to some alternatives—let's take a moment to recap what we have learned. We kicked off with a customary look at how most developers are likely to include jQuery before quickly moving on to look at other sources. We started with a look at how to use Node, before turning our attention to using the Bower package manager. Next, we had a look at how we can reference individual modules within jQuery using the AMD approach. We then moved on and turned our attention to creating custom builds of the library using Git. 
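To round the fallback off, the complete callback can be nested one level further, so that if the local copy also fails to appear we degrade gracefully instead of failing silently. This is only a sketch built on the same Modernizr.load pattern shown above; the no-jquery class name is an assumption, and a CSS rule keyed off it could reveal a static message or restyle the page:

<script src="js/modernizr.js"></script>
<script type="text/javascript">
  Modernizr.load([{
    load: 'http://code.jquery.com/jquery-2.1.1.min.js',
    complete: function () {
      if (!window.jQuery) {
        Modernizr.load({
          load: 'js/jquery-latest.min.js',
          complete: function () {
            if (!window.jQuery) {
              // Both the CDN and the local copy failed - flag it for the CSS to pick up
              document.documentElement.className += ' no-jquery';
            }
          }
        });
      }
    }
  },
  { load: 'jquery-example.js' }]);
</script>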
We then covered how we can use source maps to debug our code, with a look at enabling support for them within Google's Chrome browser. To round out our journey of loading jQuery, we saw what might happen if we can't load jQuery at all and how we can get around this, by using Modernizr to allow our pages to degrade gracefully. We then finished the article with some of the best practices that we can follow when referencing jQuery. Resources for Article: Further resources on this subject: Using different jQuery event listeners for responsive interaction [Article] Building a Custom Version of jQuery [Article] Learning jQuery [Article]

Preparing Optimizations

Packt
04 Jun 2015
11 min read
In this article by Mayur Pandey and Suyog Sarda, authors of LLVM Cookbook, we will look into the following recipes: Various levels of optimization Writing your own LLVM pass Running your own pass with the opt tool Using another pass in a new pass (For more resources related to this topic, see here.) Once the source code transformation completes, the output is in the LLVM IR form. This IR serves as a common platform for converting into assembly code, depending on the backend. However, before converting into an assembly code, the IR can be optimized to produce more effective code. The IR is in the SSA form, where every new assignment to a variable is a new variable itself—a classic case of an SSA representation. In the LLVM infrastructure, a pass serves the purpose of optimizing LLVM IR. A pass runs over the LLVM IR, processes the IR, analyzes it, identifies the optimization opportunities, and modifies the IR to produce optimized code. The command-line interface opt is used to run optimization passes on LLVM IR. Various levels of optimization There are various levels of optimization, starting at 0 and going up to 3 (there is also s for space optimization). The code gets more and more optimized as the optimization level increases. Let's try to explore the various optimization levels. Getting ready... Various optimization levels can be understood by running the opt command-line interface on LLVM IR. For this, an example C program can first be converted to IR using the Clang frontend. Open an example.c file and write the following code in it: $ vi example.c int main(int argc, char **argv) { int i, j, k, t = 0; for(i = 0; i < 10; i++) {    for(j = 0; j < 10; j++) {      for(k = 0; k < 10; k++) {        t++;      }    }    for(j = 0; j < 10; j++) {      t++;    } } for(i = 0; i < 20; i++) {    for(j = 0; j < 20; j++) {      t++;    }    for(j = 0; j < 20; j++) {      t++;    } } return t; } Now convert this into LLVM IR using the clang command, as shown here: $ clang –S –O0 –emit-llvm example.c A new file, example.ll, will be generated, containing LLVM IR. This file will be used to demonstrate the various optimization levels available. How to do it… Do the following steps: The opt command-line tool can be run on the IR-generated example.ll file: $ opt –O0 –S example.ll The –O0 syntax specifies the least optimization level. Similarly, you can run other optimization levels: $ opt –O1 –S example.ll $ opt –O2 –S example.ll $ opt –O3 –S example.ll How it works… The opt command-line interface takes the example.ll file as the input and runs the series of passes specified in each optimization level. It can repeat some passes in the same optimization level. To see which passes are being used in each optimization level, you have to add the --debug-pass=Structure command-line option with the previous opt commands. See Also To know more on various other options that can be used with the opt tool, refer to http://llvm.org/docs/CommandGuide/opt.html Writing your own LLVM pass All LLVM passes are subclasses of the pass class, and they implement functionality by overriding the virtual methods inherited from pass. LLVM applies a chain of analyses and transformations on the target program. A pass is an instance of the Pass LLVM class. Getting ready Let's see how to write a pass. Let's name the pass function block counter; once done, it will simply display the name of the function and count the basic blocks in that function when run. First, a Makefile needs to be written for the pass. 
Follow the given steps to write a Makefile: Open a Makefile in the llvm lib/Transform folder: $ vi Makefile Specify the path to the LLVM root folder and the library name, and make this pass a loadable module by specifying it in Makefile, as follows: LEVEL = ../../.. LIBRARYNAME = FuncBlockCount LOADABLE_MODULE = 1 include $(LEVEL)/Makefile.common This Makefile specifies that all the .cpp files in the current directory are to be compiled and linked together in a shared object. How to do it… Do the following steps: Create a new .cpp file called FuncBlockCount.cpp: $ vi FuncBlockCount.cpp In this file, include some header files from LLVM: #include "llvm/Pass.h" #include "llvm/IR/Function.h" #include "llvm/Support/raw_ostream.h" Include the llvm namespace to enable access to LLVM functions: using namespace llvm; Then start with an anonymous namespace: namespace { Next declare the pass: struct FuncBlockCount : public FunctionPass { Then declare the pass identifier, which will be used by LLVM to identify the pass: static char ID; FuncBlockCount() : FunctionPass(ID) {} This step is one of the most important steps in writing a pass—writing a run function. Since this pass inherits FunctionPass and runs on a function, a runOnFunction is defined to be run on a function: bool runOnFunction(Function &F) override {      errs() << "Function " << F.getName() << 'n';      return false;    } }; } This function prints the name of the function that is being processed. The next step is to initialize the pass ID: char FuncBlockCount::ID = 0; Finally, the pass needs to be registered, with a command-line argument and a name: static RegisterPass<FuncBlockCount> X("funcblockcount", "Function Block Count", false, false); Putting everything together, the entire code looks like this: #include "llvm/Pass.h" #include "llvm/IR/Function.h" #include "llvm/Support/raw_ostream.h" using namespace llvm; namespace { struct FuncBlockCount : public FunctionPass { static char ID; FuncBlockCount() : FunctionPass(ID) {} bool runOnFunction(Function &F) override {    errs() << "Function " << F.getName() << 'n';    return false; }            };        }        char FuncBlockCount::ID = 0;        static RegisterPass<FuncBlockCount> X("funcblockcount", "Function Block Count", false, false); How it works A simple gmake command compiles the file, so a new file FuncBlockCount.so is generated at the LLVM root directory. This shared object file can be dynamically loaded to the opt tool to run it on a piece of LLVM IR code. How to load and run it will be demonstrated in the next section. See also To know more on how a pass can be built from scratch, visit http://llvm.org/docs/WritingAnLLVMPass.html Running your own pass with the opt tool The pass written in the previous recipe, Writing your own LLVM pass, is ready to be run on the LLVM IR. This pass needs to be loaded dynamically for the opt tool to recognize and execute it. How to do it… Do the following steps: Write the C test code in the sample.c file, which we will convert into an .ll file in the next step: $ vi sample.c   int foo(int n, int m) { int sum = 0; int c0; for (c0 = n; c0 > 0; c0--) {    int c1 = m;  for (; c1 > 0; c1--) {      sum += c0 > c1 ? 1 : 0;    } } return sum; } Convert the C test code into LLVM IR using the following command: $ clang –O0 –S –emit-llvm sample.c –o sample.ll This will generate a sample.ll file. 
Run the new pass with the opt tool, as follows: $ opt -load (path_to_.so_file)/FuncBlockCount.so -funcblockcount sample.ll The output will look something like this: Function foo How it works… As seen in the preceding code, the shared object loads dynamically into the opt command-line tool and runs the pass. It goes over the function and displays its name. It does not modify the IR. Further enhancement in the new pass is demonstrated in the next recipe. See also To know more about the various types of the Pass class, visit http://llvm.org/docs/WritingAnLLVMPass.html#pass-classes-and-requirements Using another pass in a new pass A pass may require another pass to get some analysis data, heuristics, or any such information to decide on a further course of action. The pass may just require some analysis such as memory dependencies, or it may require the altered IR as well. The new pass that you just saw simply prints the name of the function. Let's see how to enhance it to count the basic blocks in a loop, which also demonstrates how to use other pass results. Getting ready The code used in the previous recipe remains the same. Some modifications are required, however, to enhance it—as demonstrated in next section—so that it counts the number of basic blocks in the IR. How to do it… The getAnalysis function is used to specify which other pass will be used: Since the new pass will be counting the number of basic blocks, it requires loop information. This is specified using the getAnalysis loop function: LoopInfo *LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo(); This will call the LoopInfo pass to get information on the loop. Iterating through this object gives the basic block information: unsigned num_Blocks = 0; Loop::block_iterator bb; for(bb = L->block_begin(); bb != L->block_end();++bb)    num_Blocks++; errs() << "Loop level " << nest << " has " << num_Blocks << " blocksn"; This will go over the loop to count the basic blocks inside it. However, it counts only the basic blocks in the outermost loop. To get information on the innermost loop, recursive calling of the getSubLoops function will help. Putting the logic in a separate function and calling it recursively makes more sense: void countBlocksInLoop(Loop *L, unsigned nest) { unsigned num_Blocks = 0; Loop::block_iterator bb; for(bb = L->block_begin(); bb != L->block_end();++bb)    num_Blocks++; errs() << "Loop level " << nest << " has " << num_Blocks << " blocksn"; std::vector<Loop*> subLoops = L->getSubLoops(); Loop::iterator j, f; for (j = subLoops.begin(), f = subLoops.end(); j != f; ++j)    countBlocksInLoop(*j, nest + 1); } virtual bool runOnFunction(Function &F) { LoopInfo *LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo(); errs() << "Function " << F.getName() + "n"; for (Loop *L : *LI)    countBlocksInLoop(L, 0); return false; } How it works… The newly modified pass now needs to run on a sample program. 
Follow the given steps to modify and run the sample program: Open the sample.c file and replace its content with the following program: int main(int argc, char **argv) { int i, j, k, t = 0; for(i = 0; i < 10; i++) {    for(j = 0; j < 10; j++) {      for(k = 0; k < 10; k++) {        t++;      }    }    for(j = 0; j < 10; j++) {      t++;    } } for(i = 0; i < 20; i++) {    for(j = 0; j < 20; j++) {      t++;    }    for(j = 0; j < 20; j++) {      t++;    } } return t; } Convert it into a .ll file using Clang: $ clang –O0 –S –emit-llvm sample.c –o sample.ll Run the new pass on the previous sample program: $ opt -load (path_to_.so_file)/FuncBlockCount.so - funcblockcount sample.ll The output will look something like this: Function main Loop level 0 has 11 blocks Loop level 1 has 3 blocks Loop level 1 has 3 blocks Loop level 0 has 15 blocks Loop level 1 has 7 blocks Loop level 2 has 3 blocks Loop level 1 has 3 blocks There's more… The LLVM's pass manager provides a debug pass option that gives us the chance to see which passes interact with our analyses and optimizations, as follows: $ opt -load (path_to_.so_file)/FuncBlockCount.so - funcblockcount sample.ll –disable-output –debug-pass=Structure Summary In this article you have explored various optimization levels, and the optimization techniques kicking at each level. We also saw the step-by-step approach to writing our own LLVM pass. Resources for Article: Further resources on this subject: Integrating a D3.js visualization into a simple AngularJS application [article] Getting Up and Running with Cassandra [article] Cassandra Architecture [article]

Regex in Practice

Packt
04 Jun 2015
24 min read
Knowing Regex's syntax allows you to model text patterns, but sometimes coming up with a good reliable pattern can be more difficult, so taking a look at some actual use cases can really help you learn some common design patterns. So, in this article by Loiane Groner and Gabriel Manricks, coauthors of the book JavaScript Regular Expressions, we will develop a form, and we will explore the following topics: Validating a name Validating e-mails Validating a Twitter username Validating passwords Validating URLs Manipulating text (For more resources related to this topic, see here.) Regular expressions and form validation By far, one of the most common uses for regular expressions on the frontend is for use with user submitted forms, so this is what we will be building. The form we will be building will have all the common fields, such as name, e-mail, website, and so on, but we will also experiment with some text processing besides all the validations. In real-world applications, you usually are not going to implement the parsing and validation code manually. You can create a regular expression and rely on some JavaScript libraries, such as: jQuery validation: Refer to http://jqueryvalidation.org/ Parsely.js: Refer to http://parsleyjs.org/ Even the most popular frameworks support the usage of regular expressions with its native validation engine, such as AngularJS (refer to http://www.ng-newsletter.com/posts/validations.html). Setting up the form This demo will be for a site that allows users to create an online bio, and as such, consists of different types of fields. However, before we get into this (since we won't be building a backend to handle the form), we are going to setup some HTML and JavaScript code to catch the form submission and extract/validate the data entered in it. To keep the code neat, we will create an array with all the validation functions, and a data object where all the final data will be kept. Here is a basic outline of the HTML code for which we begin by adding fields: <!DOCTYPE HTML> <html>    <head>        <title>Personal Bio Demo</title>    </head>    <body>        <form id="main_form">            <input type="submit" value="Process" />        </form>          <script>            // js goes here        </script>    </body> </html> Next, we need to write some JavaScript to catch the form and run through the list of functions that we will be writing. If a function returns false, it means that the verification did not pass and we will stop processing the form. In the event where we get through the entire list of functions and no problems arise, we will log out of the console and data object, which contain all the fields we extracted: <script>    var fns = [];    var data = {};      var form = document.getElementById("main_form");      form.onsubmit = function(e) {      e.preventDefault();          data = {};          for (var i = 0; i < fns.length; i++) {            if (fns[i]() == false) {                return;            }        }          console.log("Verified Data: ", data);    } </script> The JavaScript starts by creating the two variables I mentioned previously, we then pull the form's object from the DOM and set the submit handler. The submit handler begins by preventing a page from actually submitting, (as we don't have any backend code in this example) and then we go through the list of functions running them one by one. 
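Each of the validators we are about to write follows the same shape: grab the field, test its value against a pattern, alert and bail out on failure, and store the result on success. As an optional aside, here is a small sketch of how that repetition could be factored out; it is not part of the demo that follows, and the phone field, its pattern, and the message are purely hypothetical:

function makeValidator(fieldId, pattern, errorMessage, onValid) {
    return function() {
        var field = document.getElementById(fieldId);
        var value = field.value;

        if (pattern.test(value) === false) {
            alert(errorMessage);
            return false;
        }

        if (onValid) {
            onValid(value, pattern.exec(value));
        }
        return true;
    };
}

// Hypothetical usage - there is no phone field in the demo form:
fns.push(makeValidator(
    "phone_field",
    /^\d{3}-\d{3}-\d{4}$/,
    "Phone number must look like 555-123-4567",
    function(value) {
        data.phone = value;
    }
));

The hand-written validators in the rest of this article keep each step explicit so that the regular expressions themselves stay the focus.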
Validating fields In this section, we will explore how to validate different types of fields manually, such as name, e-mail, website URL, and so on. Matching a complete name To get our feet wet, let's begin with a simple name field. It's something we have gone through briefly in the past, so it should give you an idea of how our system will work. The following code goes inside the script tags, but only after everything we have written so far: function process_name() {    var field = document.getElementById("name_field");    var name = field.value;      var name_pattern = /^(S+) (S*) ?b(S+)$/;      if (name_pattern.test(name) === false) {        alert("Name field is invalid");         return false;    }      var res = name_pattern.exec(name);    data.first_name = res[1];    data.last_name = res[3];      if (res[2].length > 0) {        data.middle_name = res[2];    }      return true; }   fns.push(process_name); We get the name field in a similar way to how we got the form, then, we extract the value and test it against a pattern to match a full name. If the name doesn't match the pattern, we simply alert the user and return false to let the form handler know that the validations have failed. If the name field is in the correct format, we set the corresponding fields on the data object (remember, the middle name is optional here). The last line just adds this function to the array of functions, so it will be called when the form is submitted. The last thing required to get this working is to add HTML for this form field, so inside the form tags (right before the submit button), you can add this text input: Name: <input type="text" id="name_field" /><br /> Opening this page in your browser, you should be able to test it out by entering different values into the Name box. If you enter a valid name, you should get the data object printed out with the correct parameters, otherwise you should be able to see this alert message: Understanding the complete name Regex Let's go back to the regular expression used to match the name entered by a user: /^(S+) (S*) ?b(S+)$/ The following is a brief explanation of the Regex: The ^ character asserts its position at the beginning of a string The first capturing group (S+) S+ matches a non-white space character [^rntf] The + quantifier between one and unlimited times The second capturing group (S*) S* matches any non-whitespace character [^rntf] The * quantifier between zero and unlimited times " ?" matches the whitespace character The ? quantifier between zero and one time b asserts its position at a (^w|w$|Ww|wW) word boundary The third capturing group (S+) S+ matches a non-whitespace character [^rntf] The + quantifier between one and unlimited times $ asserts its position at the end of a string Matching an e-mail with Regex The next type of field we may want to add is an e-mail field. E-mails may look pretty simple at first glance, but there are a large variety of e-mails out there. You may just think of creating a word@word.word pattern, but the first section can contain many additional characters besides just letters, the domain can be a subdomain, or the suffix could have multiple parts (such as .co.uk for the UK). Our pattern will simply look for a group of characters that are not spaces or instances where the @ symbol has been used in the first section. We will then want an @ symbol, followed by another set of characters that have at least one period, followed by the suffix, which in itself could contain another suffix. 
So, this can be accomplished in the following manner: /[^s@]+@[^s@.]+.[^s@]+/ The pattern of our example is very simple and will not match every valid e-mail address. There is an official standard for an e-mail address's regular expressions called RFC 5322. For more information, please read http://www.regular-expressions.info/email.html. So, let's add the field to our page: Email: <input type="text" id="email_field" /><br /> We can then add this function to verify it: function process_email() {    var field = document.getElementById("email_field");    var email = field.value;      var email_pattern = /^[^s@]+@[^s@.]+.[^s@]+$/;      if (email_pattern.test(email) === false) {        alert("Email is invalid");        return false;    }      data.email = email;    return true; }   fns.push(process_email); There is an HTML5 field type specifically designed for e-mails, but here we are verifying manually, as this is a Regex book. For more information, please refer to http://www.w3.org/TR/html-markup/input.email.html. Understanding the e-mail Regex Let's go back to the regular expression used to match the name entered by the user: /^[^s@]+@[^s@.]+.[^s@]+$/ Following is a brief explanation of the Regex: ^ asserts a position at the beginning of the string [^s@]+ matches a single character that is not present in the following list: The + quantifier between one and unlimited times s matches any white space character [rntf ] @ matches the @ literal character [^s@.]+ matches a single character that is not present in the following list: The + quantifier between one and unlimited times s matches a [rntf] whitespace character @. is a single character in the @. list, literally . matches the . character literally [^s@]+ match a single character that is not present in the following list: The + quantifier between one and unlimited times s matches [rntf] a whitespace character @ is the @ literal character $ asserts its position at end of a string Matching a Twitter name The next field we are going to add is a field for a Twitter username. For the unfamiliar, a Twitter username is in the @username format, but when people enter this in, they sometimes include the preceding @ symbol and on other occasions, they only write the username by itself. Obviously, internally we would like everything to be stored uniformly, so we will need to extract the username, regardless of the @ symbol, and then manually prepend it with one, so regardless of whether it was there or not, the end result will look the same. So again, let's add a field for this: Twitter: <input type="text" id="twitter_field" /><br /> Now, let's write the function to handle it: function process_twitter() {    var field = document.getElementById("twitter_field");    var username = field.value;      var twitter_pattern = /^@?(w+)$/;      if (twitter_pattern.test(username) === false) {        alert("Twitter username is invalid");        return false;    }      var res = twitter_pattern.exec(username);    data.twitter = "@" + res[1];    return true; }   fns.push(process_twitter); If a user inputs the @ symbol, it will be ignored, as we will add it manually after checking the username. Understanding the twitter username Regex Let's go back to the regular expression used to match the name entered by the user: /^@?(w+)$/ This is a brief explanation of the Regex: ^ asserts its position at start of the string @? matches the @ character, literally The ? 
quantifier between zero and one time First capturing group (w+) w+ matches a [a-zA-Z0-9_] word character The + quantifier between one and unlimited times $ asserts its position at end of a string Matching passwords Another popular field, which can have some unique constraints, is a password field. Now, not every password field is interesting; you may just allow just about anything as a password, as long as the field isn't left blank. However, there are sites where you need to have at least one letter from each case, a number, and at least one other character. Considering all the ways these can be combined, creating a pattern that can validate this could be quite complex. A much better solution for this, and one that allows us to be a bit more verbose with our error messages, is to create four separate patterns and make sure the password matches each of them. For the input, it's almost identical: Password: <input type="password" id="password_field" /><br /> The process_password function is not very different from the previous example as we can see its code as follows: function process_password() {    var field = document.getElementById("password_field");    var password = field.value;      var contains_lowercase = /[a-z]/;    var contains_uppercase = /[A-Z]/;    var contains_number = /[0-9]/;    var contains_other = /[^a-zA-Z0-9]/;      if (contains_lowercase.test(password) === false) {        alert("Password must include a lowercase letter");        return false;    }      if (contains_uppercase.test(password) === false) {        alert("Password must include an uppercase letter");        return false;    }      if (contains_number.test(password) === false) {        alert("Password must include a number");        return false;    }      if (contains_other.test(password) === false) {        alert("Password must include a non-alphanumeric character");        return false;    }      data.password = password;    return true; }   fns.push(process_password); All in all, you may say that this is a pretty basic validation and something we have already covered, but I think it's a great example of working smart as opposed to working hard. Sure, we probably could have created one long pattern that would check everything together, but it would be less clear and less flexible. So, by breaking it into smaller and more manageable validations, we were able to make clear patterns, and at the same time, improve their usability with more helpful alert messages. Matching URLs Next, let's create a field for the user's website; the HTML for this field is: Website: <input type="text" id="website_field" /><br /> A URL can have many different protocols, but for this example, let's restrict it to only http or https links. Next, we have the domain name with an optional subdomain, and we need to end it with a suffix. The suffix itself can be a single word, such as .com or it can have multiple segments, such as.co.uk. All in all, our pattern looks similar to this: /^(?:https?://)?w+(?:.w+)?(?:.[A-Z]{2,3})+$/i Here, we are using multiple noncapture groups, both for when sections are optional and for when we want to repeat a segment. You may have also noticed that we are using the case insensitive flag (/i) at the end of the regular expression, as links can be written in lowercase or uppercase. 
Now, we'll implement the actual function: function process_website() {    var field = document.getElementById("website_field");    var website = field.value;      var pattern = /^(?:https?://)?w+(?:.w+)?(?:.[A-Z]{2,3})+$/i      if (pattern.test(website) === false) {       alert("Website is invalid");        return false;    }      data.website = website;    return true; }   fns.push(process_website); At this point, you should be pretty familiar with the process of adding fields to our form and adding a function to validate them. So, for our remaining examples let's shift our focus a bit from validating inputs to manipulating data. Understanding the URL Regex Let's go back to the regular expression used to match the name entered by the user: /^(?:https?://)?w+(?:.w+)?(?:.[A-Z]{2,3})+$/i This is a brief explanation of the Regex: ^ asserts its position at start of a string (?:https?://)? is anon-capturing group The ? quantifier between zero and one time http matches the http characters literally (case-insensitive) s? matches the s character literally (case-insensitive) The ? quantifier between zero and one time : matches the : character literally / matches the / character literally / matches the / character literally w+ matches a [a-zA-Z0-9_] word character The + quantifier between one and unlimited times (?:.w+)? is a non-capturing group The ? quantifier between zero and one time . matches the . character literally w+ matches a [a-zA-Z0-9_] word character The + quantifier between one and unlimited times (?:.[A-Z]{2,3})+ is a non-capturing group The + quantifier between one and unlimited times . matches the . character literally [A-Z]{2,3} matches a single character present in this list The {2,3} quantifier between2 and 3 times A-Z is a single character in the range between A and Z (case insensitive) $ asserts its position at end of a string i modifier: insensitive. Case insensitive letters, meaning it will match a-z and A-Z. Manipulating data We are going to add one more input to our form, which will be for the user's description. In the description, we will parse for things, such as e-mails, and then create both a plain text and HTML version of the user's description. The HTML for this form is pretty straightforward; we will be using a standard textbox and give it an appropriate field: Description: <br /> <textarea id="description_field"></textarea><br /> Next, let's start with the bare scaffold needed to begin processing the form data: function process_description() {    var field = document.getElementById("description_field");    var description = field.value;      data.text_description = description;      // More Processing Here      data.html_description = "<p>" + description + "</p>";      return true; }   fns.push(process_description); This code gets the text from the textbox on the page and then saves both a plain text version and an HTML version of it. At this stage, the HTML version is simply the plain text version wrapped between a pair of paragraph tags, but this is what we will be working on now. The first thing I want to do is split between paragraphs, in a text area the user may have different split-ups—lines and paragraphs. For our example, let's say the user just entered a single new line character, then we will add a <br /> tag and if there is more than one character, we will create a new paragraph using the <p> tag. 
Using the String.replace method We are going to use JavaScript's replace method on the string object This function can accept a Regex pattern as its first parameter, and a function as its second; each time it finds the pattern it will call the function and anything returned by the function will be inserted in place of the matched text. So, for our example, we will be looking for new line characters, and in the function, we will decide if we want to replace the new line with a break line tag or an actual new paragraph, based on how many new line characters it was able to pick up: var line_pattern = /n+/g; description = description.replace(line_pattern, function(match) {    if (match == "n") {        return "<br />";    } else {        return "</p><p>";    } }); The first thing you may notice is that we need to use the g flag in the pattern, so that it will look for all possible matches as opposed to only the first. Besides this, the rest is pretty straightforward. Consider this form: If you take a look at the output from the console of the preceding code, you should get something similar to this: Matching a description field The next thing we need to do is try and extract e-mails from the text and automatically wrap them in a link tag. We have already covered a Regexp pattern to capture e-mails, but we will need to modify it slightly, as our previous pattern expects that an e-mail is the only thing present in the text. In this situation, we are interested in all the e-mails included in a large body of text. If you were simply looking for a word, you would be able to use the b matcher, which matches any boundary (that can be the end of a word/the end of a sentence), so instead of the dollar sign, which we used before to denote the end of a string, we would place the boundary character to denote the end of a word. However, in our case it isn't quite good enough, as there are boundary characters that are valid e-mail characters, for example, the period character is valid. To get around this, we can use the boundary character in conjunction with a lookahead group and say we want it to end with a word boundary, but only if it is followed by a space or end of a sentence/string. This will ensure we aren't cutting off a subdomain or a part of a domain, if there is some invalid information mid-way through the address. Now, we aren't creating something that will try and parse e-mails no matter how they are entered; the point of creating validators and patterns is to force the user to enter something logical. That said, we assume that if the user wrote an e-mail address and then a period, that he/she didn't enter an invalid address, rather, he/she entered an address and then ended a sentence (the period is not part of the address). In our code, we assume that to the end an address, the user is either going to have a space after, such as some kind of punctuation, or that he/she is ending the string/line. We no longer have to deal with lines because we converted them to HTML, but we do have to worry that our pattern doesn't pick up an HTML tag in the process. At the end of this, our pattern will look similar to this: /b[^s<>@]+@[^s<>@.]+.[^s<>@]+b(?=.?(?:s|<|$))/g We start off with a word boundary, then, we look for the pattern we had before. I added both the (>) greater-than and the (<) less-than characters to the group of disallowed characters, so that it will not pick up any HTML tags. 
At the end of the pattern, you can see that we want to end on a word boundary, but only if it is followed by a space, an HTML tag, or the end of a string. The complete function, which does all the matching, is as follows: function process_description() {    var field = document.getElementById("description_field");    var description = field.value;      data.text_description = description;      var line_pattern = /n+/g;    description = description.replace(line_pattern, function(match) {        if (match == "n") {            return "<br />";        } else {            return "</p><p>";        }    });      var email_pattern = /b[^s<>@]+@[^s<>@.]+.[^s<>@]+b(?=.?(?:s|<|$))/g;    description = description.replace(email_pattern, function(match){        return "<a href='mailto:" + match + "'>" + match + "</a>";    });      data.html_description = "<p>" + description + "</p>";      return true; } We can continue to add fields, but I think the point has been understood. You have a pattern that matches what you want, and with the extracted data, you are able to extract and manipulate the data into any format you may need. Understanding the description Regex Let's go back to the regular expression used to match the name entered by the user: /b[^s<>@]+@[^s<>@.]+.[^s<>@]+b(?=.?(?:s|<|$))/g This is a brief explanation of the Regex: b asserts its position at a (^w|w$|Ww|wW) word boundary [^s<>@]+ matches a single character not present in the this list: The + quantifier between one and unlimited times s matches a [rntf ] whitespace character <>@ is a single character in the <>@ list (case-sensitive) @ matches the @ character literally [^s<>@.]+ matches a single character not present in this list: The + quantifier between one and unlimited times s matches any [rntf] whitespace character <>@. is a single character in the <>@. list literally (case sensitive) . matches the . character literally [^s<>@]+ matches a single character not present in this the list: The + quantifier between one and unlimited times s matches a [rntf ] whitespace character <>@ isa single character in the <>@ list literally (case sensitive) b asserts its position at a (^w|w$|Ww|wW) word boundary (?=.?(?:s|<|$)) Positive Lookahead - Assert that the Regex below can be matched .? matches any character (except new line) The ? quantifier between zero and one time (?:s|<|$) is a non-capturing group: First alternative: s matches any white space character [rntf] Second alternative: < matches the character < literally Third alternative: $ assert position at end of the string The g modifier: global match. Returns all matches of the regular expression, not only the first one Explaining a Markdown example More examples of regular expressions can be seen with the popular Markdown syntax (refer to http://en.wikipedia.org/wiki/Markdown). This is a situation where a user is forced to write things in a custom format, although it's still a format, which saves typing and is easier to understand. For example, to create a link in Markdown, you would type something similar to this: [Click Me](http://gabrielmanricks.com) This would then be converted to: <a href="http://gabrielmanricks.com">Click Me</a> Disregarding any validation on the URL itself, this can easily be achieved using this pattern: /[([^]]*)](([^(]*))/g It looks a little complex, because both the square brackets and parenthesis are both special characters that need to be escaped. 
Basically, what we are saying is that we want an open square bracket, anything up to the closing square bracket, then we want an open parenthesis, and again, anything until the closing parenthesis. A good website to write markdown documents is http://dillinger.io/. Since we wrapped each section into its own capture group, we can write this function: text.replace(/[([^]]*)](([^(]*))/g, function(match, text, link){    return "<a href='" + link + "'>" + text + "</a>"; }); We haven't been using capture groups in our manipulation examples, but if you use them, then the first parameter to the callback is the entire match (similar to the ones we have been working with) and then all the individual groups are passed as subsequent parameters, in the order that they appear in the pattern. Summary In this article, we covered a couple of examples that showed us how to both validate user inputs as well as manipulate them. We also took a look at some common design patterns and saw how it's sometimes better to simplify the problem instead of using brute force in one pattern for the purpose of creating validations. Resources for Article: Further resources on this subject: Getting Started with JSON [article] Function passing [article] YUI Test [article]

Object-Oriented JavaScript with Backbone Classes

Packt
03 Jun 2015
9 min read
In this Article by Jeremy Walker, author of the book Backbone.js Essentials, we will explore the following topics: The differences between JavaScript's class system and the class systems of traditional object-oriented languages How new, this, and prototype enable JavaScript's class system Extend, Backbone's much easier mechanism for creating subclasses (For more resources related to this topic, see here.) JavaScript's class system Programmers who use JavaScript can use classes to encapsulate units of logic in the same way as programmers of other languages. However, unlike those languages, JavaScript relies on a less popular form of inheritance known as prototype-based inheritance. Since Backbone classes are, at their core, just JavaScript classes, they too rely on the prototype system and can be subclassed in the same way as any other JavaScript class. For instance, let's say you wanted to create your own Book subclass of the Backbone Model class with additional logic that Model doesn't have, such as book-related properties and methods. Here's how you can create such a class using only JavaScript's native object-oriented capabilities: // Define Book's Initializervar Book = function() {// define Book's default propertiesthis.currentPage = 1;this.totalPages = 1;}// Define book's parent classBook.prototype = new Backbone.Model();// Define a method of BookBook.prototype.turnPage = function() {this.currentPage += 1;return this.currentPage;} If you've never worked with prototypes in JavaScript, the preceding code may look a little intimidating. Fortunately, Backbone provides a much easier and easier to read mechanism for creating subclasses. However, since that system is built on top of JavaScript's native system, it's important to first understand how the native system works. This understanding will be helpful later when you want to do more complex class-related tasks, such as calling a method defined on a parent class. The new keyword The new keyword is a relatively simple but extremely useful part of JavaScript's class system. The first thing that you need to understand about new is that it doesn't create objects in the same way as other languages. In JavaScript, every variable is either a function, object, or primitive, which means that when we refer to a class, what we're really referring to is a specially designed initialization function. Creating this class-like function is as simple as defining a function that modifies this and then using the new keyword to call that function. Normally, when you call a function, its this is obvious. For instance, when you call the turnPage method of a book object, the this method inside turnPage will be set to this book object, as shown here: var simpleBook = {currentPage: 3, pages: 60};simpleBook.turnPage = function() {this.currentPage += 1;return this.currentPage;}simpleBook.turnPage(); // == 4 Calling a function that isn't attached to an object (in other words, a function that is not a method) results in this being set to the global scope. 
In a web browser, this means the window object: var testGlobalThis = function() {alert(this);}testGlobalThis(); // alerts window When we use the new keyword before calling an initialization function, three things happen (well, actually four, but we'll wait to explain the fourth one until we explain prototypes): JavaScript creates a brand new object ({})for us JavaScript sets the this method inside the initialization function to the newly created object After the function finishes, JavaScript ignores the normal return value and instead returns the object that was created As you can see, although the new keyword is simple, it's nevertheless important because it allows you to treat initialization functions as if they really are actual classes. At the same time, it does so without violating the JavaScript principle that all variables must either be a function, object, or primitive. Prototypal inheritance That's all well and good, but if JavaScript has no true concept of classes, how can we create subclasses? As it turns out, every object in JavaScript has two special properties to solve this problem: prototype and __proto__ (hidden). These two properties are, perhaps, the most commonly misunderstood aspects of JavaScript, but once you learn how they work, they are actually quite simple to use. When you call a method on an object or try to retrieve a property JavaScript first checks whether the object has the method or property defined in the object itself. In other words if you define a method such as this one: book.turnPage = function()this.currentPage += 1;}; JavaScript will use that definition first when you call turnPage. In real-world code, however, you will almost never want to put methods directly in your objects for two reasons. First, doing that will result in duplicate copies of those methods, as each instance of your class will have its own separate copy. Second, adding methods in this way requires an extra step, and that step can be easily forgotten when you create new instances. If the object doesn't have a turnPage method defined in it, JavaScript will next check the object's hidden __proto__ property. If this __proto__ object doesn't have a turnPage method, then JavaScript will look at the __proto__ property on the object's __proto__. If that doesn't have the method, JavaScript continues to check the __proto__ of the __proto__ of the __proto__ and keeps checking each successive __proto__ until it has exhausted the chain. This is similar to single-class inheritance in more traditional object-oriented languages, except that instead of going through a class chain, JavaScript instead uses a prototype chain. Just as in an object-oriented language we wind up with only a single copy of each method, but instead of the method being defined on the class itself, it's defined on the class's prototype. In a future version of JavaScript (ES6), it will be possible to work with the __proto__ object directly, but for now, the only way to actually see the __proto__ property is to use your browser's debugging tool (for instance, the Chrome Developer Tools debugger):   This means that you can't use this line of code: book.__proto__.turnPage(); Also, you can't use the following code: book.__proto__ = {turnPage: function() {this.currentPage += 1;}}; But, if you can't manipulate __proto__ directly, how can you take advantage of it? Fortunately, it is possible to manipulate __proto__, but you can only do this indirectly by manipulating prototype. 
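Here is a small standalone sketch of that lookup in action; the Animal and Dog classes are invented purely for illustration:

var Animal = function() {};
Animal.prototype.speak = function() { return "..."; };

var Dog = function() {};
Dog.prototype = new Animal();      // Dog instances will chain up to Animal.prototype
Dog.prototype.bark = function() { return "woof"; };

var rex = new Dog();
rex.bark();   // "woof" - found one step up the chain, on Dog.prototype
rex.speak();  // "..."  - not on rex or Dog.prototype, so the chain continues to Animal.prototype
rex.fly;      // undefined - the whole chain was checked and no match was found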
Do you remember I mentioned that the new keyword actually does four things? The fourth thing is that it sets the __proto__ property of the new object it creates to the prototype property of the initialization function. In other words, if you want to add a turnPage method to every new instance of Book that you create, you can assign this turnPage method to the prototype property of the Book initialization function. For example:

var Book = function() {};
Book.prototype.turnPage = function() {
  this.currentPage += 1;
};
var book = new Book();
book.turnPage();
// this works because book.__proto__ == Book.prototype

Since these concepts often cause confusion, let's briefly recap:

Every function has a prototype property, and every object has a hidden __proto__ property
An object's __proto__ property is set to the prototype property of its constructor when it is first created and cannot be changed
Whenever JavaScript can't find a property or method on an object, it checks each step of the __proto__ chain until it finds one or until it runs out of chain

Extending Backbone classes

With that explanation out of the way, we can finally get down to the workings of Backbone's subclassing system, which revolves around Backbone's extend method. To use extend, you simply call it from the class that your new subclass will be based on, and extend will return the new subclass. This new subclass will have its __proto__ property set to the prototype property of its parent class, allowing objects created with the new subclass to access all the properties and methods of the parent class. Take the following code snippet as an example:

var Book = Backbone.Model.extend();
// Book.prototype.__proto__ == Backbone.Model.prototype;
var book = new Book();
book.destroy();

In the preceding example, the last line works because JavaScript will look up the __proto__ chain, find the Model method destroy, and use it. In other words, all the functionality of our original class has been inherited by our new class.

But of course, extend wouldn't be exciting if all it could do was make exact clones of the parent classes, which is why extend takes a properties object as its first argument. Any properties or methods on this object will be added to the new class's prototype. For instance, let's try making our Book class a little more interesting by adding a property and a method:

var Book = Backbone.Model.extend({
  currentPage: 1,
  turnPage: function() {
    this.currentPage += 1;
  }
});
var book = new Book();
book.currentPage; // == 1
book.turnPage(); // increments book.currentPage by one

The extend method also allows you to create static properties or methods, or in other words, properties or methods that live on the class rather than on objects created from that class. These static properties and methods are passed in as the second classProperties argument to extend. Here's a quick example of how to add a static method to our Book class:

var Book = Backbone.Model.extend({}, {
  areBooksGreat: function() {
    alert("yes they are!");
  }
});
Book.areBooksGreat(); // alerts "yes they are!"
var book = new Book();
book.areBooksGreat(); // fails because static methods must be called on a class

As you can see, there are several advantages to Backbone's approach to inheritance over the native JavaScript approach. First, the word prototype did not appear even once in any of the previously mentioned code; while you still need to understand how prototype works, you don't have to think about it just to create a class.
Another benefit is that the entire class definition is contained within a single extend call, keeping all of the class's parts together visually. Also, when we use extend, the various pieces of logic that make up the class are ordered the same way as in most other programming languages, defining the super class first and then the initializer and properties, instead of the other way around. Summary In this article, we explored how JavaScript's native class system works and how the new, this, and prototype keywords/properties form the basis of it. We also learned how Backbone's extend method makes creating new subclasses much more convenient as well as how to use apply and call to invoke parent methods (or when providing callback functions) to preserve the desired this method. Resources for Article: Further resources on this subject: Testing Backbone.js Application [Article] Building an app using Backbone.js [Article] Organizing Backbone Applications - Structure, Optimize, and Deploy [Article]
article-image-lets-build-angularjs-and-bootstrap
Packt
03 Jun 2015
14 min read
Save for later

Let's Build with AngularJS and Bootstrap

Packt
03 Jun 2015
14 min read
In this article by Stephen Radford, author of the book Learning Web Development with Bootstrap and AngularJS, we're going to use Bootstrap and AngularJS together. We'll look at building a maintainable code base as well as exploring the full potential of both frameworks.

(For more resources related to this topic, see here.)

Working with directives

Something we've been using already without knowing it is what Angular calls directives. These are essentially powerful functions that can be called from an attribute or even their own element, and Angular is full of them. Whether we want to loop data, handle clicks, or submit forms, Angular will speed everything up for us.

We first used a directive to initialize Angular on the page using ng-app, and all of the directives we're going to look at in this article are used in the same way: by adding an attribute to an element. Before we take a look at some more of the built-in directives, we need to quickly make a controller. Create a new file and call it controller.js. Save this to your js directory within your project and open it up in your editor.

Controllers are just standard JS constructor functions into which we can inject Angular's services, such as $scope. These functions are instantiated when Angular detects the ng-controller attribute. As such, we can have multiple instances of the same controller within our application, allowing us to reuse a lot of code. This familiar function declaration is all we need for our controller:

function AppCtrl(){}

To let the framework know this is the controller we want to use, we need to include this on the page after Angular is loaded and also attach the ng-controller directive to our opening <html> tag:

<html ng-controller="AppCtrl">
…
<script type="text/javascript" src="assets/js/controller.js"></script>

ng-click and ng-mouseover

One of the most basic things you'll have ever done with JavaScript is listening for a click event. This could have been done using the onclick attribute on an element, using jQuery, or even with an event listener. In Angular, we use a directive. To demonstrate this, we'll create a button that will launch an alert box. Simple stuff. First, let's add the button to the content area we created earlier:

<div class="col-sm-8"><button>Click Me</button></div>

If you open this up in your browser, you'll see a standard HTML button created, no surprises there. Before we attach the directive to this element, we need to create a handler in our controller. This is just a function within our controller that is attached to the scope. It's very important that we attach our function to the scope or we won't be able to access it from our view at all:

function AppCtrl($scope){
  $scope.clickHandler = function(){
    window.alert('Clicked!');
  };
}

As we already know, we can have multiple scopes on a page, and these are just objects that Angular allows the view and the controller to have access to. In order for the controller to have access, we've injected the $scope service into our controller. This service provides us with the scope Angular creates on the element we added the ng-controller attribute to.

Angular relies heavily on dependency injection, which you may or may not be familiar with. As we've seen, Angular is split into modules and services. Each of these modules and services depends upon the others, and dependency injection provides referential transparency. When unit testing, we can also mock the objects that will be injected to confirm our test results.
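One practical detail the example glosses over is that Angular infers dependencies from parameter names, which breaks when the code is minified; the usual safeguard is the $inject annotation. Here is a minimal sketch, using the same controller as above with the annotation as the only addition:

function AppCtrl($scope){
  $scope.clickHandler = function(){
    window.alert('Clicked!');
  };
}

// List the services explicitly so a minifier can rename
// the function's parameters without breaking injection.
AppCtrl.$inject = ['$scope'];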
DI allows us to tell Angular what services our controller depends upon, and the framework will resolve these for us. An in-depth explanation of AngularJS' dependency injection can be found in the official documentation at https://docs.angularjs.org/guide/di. Okay, so our handler is set up; now we just need to add our directive to the button. Just like before, we need to add it as an additional attribute. This time, we're going to pass through the name of the function we're looking to execute, which in this case is clickHandler. Angular will evaluate anything we put within our directive as an AngularJS expression, so we need to be sure to include two parentheses indicating that this is a function we're calling: <button ng-click="clickHandler()">Click Me</button> If you load this up in your browser, you'll be presented with an alert box when you click the button. You'll also notice that we don't need to include the $scope variable when calling the function in our view. Functions and variables that can be accessed from the view live within the current scope or any ancestor scope.   Should we wish to display our alert box on hover instead of click, it's just a case of changing the name of the directive to ng-mouseover, as they both function in the exact same way. ng-init The ng-init directive is designed to evaluate an expression on the current scope and can be used on its own or in conjunction with other directives. It's executed at a higher priority than other directives to ensure the expression is evaluated in time. Here's a basic example of ng-init in action: <div ng-init="test = 'Hello, World'"></div>{{test}} This will display Hello, World onscreen when the application is loaded in your browser. Above, we've set the value of the test model and then used the double curly-brace syntax to display it. ng-show and ng-hide There will be times when you'll need to control whether an element is displayed programmatically. Both ng-show and ng-hide can be controlled by the value returned from a function or a model. We can extend upon our clickHandler function we created to demonstrate the ng-click directive to toggle the visibility of our element. We'll do this by creating a new model and toggling the value between true or false. First of all, let's create the element we're going to be showing or hiding. Pop this below your button: <div ng-hide="isHidden">Click the button above to toggle.</div> The value within the ng-hide attribute is our model. Because this is within our scope, we can easily modify it within our controller: $scope.clickHandler = function(){$scope.isHidden = !$scope.isHidden;}; Here we're just reversing the value of our model, which in turn toggles the visibility of our <div>. If you open up your browser, you'll notice that the element isn't hidden by default. There are a few ways we could tackle this. Firstly, we could set the value of $scope.hidden to true within our controller. We could also set the value of hidden to true using the ng-init directive. Alternatively, we could switch to the ng-show directive, which functions in reverse to ng-hide and will only make an element visible if a model's value is set to true. Ensure Angular is loaded within your header or ng-hide and ng-show won't function correctly. This is because Angular uses its own classes to hide elements and these need to be loaded on page render. ng-if Angular also includes an ng-if directive that works in a similar fashion to ng-show and ng-hide. 
However, ng-if actually removes the element from the DOM whereas ng-show and ng-hide just toggles the elements' visibility. Let's take a quick look at how we'd use ng-if with the preceding code: <div ng-if="isHidden">Click the button above to toggle.</div> If we wanted to reverse the statement's meaning, we'd simply just need to add an exclamation point before our expression: <div ng-if="!isHidden">Click the button above to toggle.</div> ng-repeat Something you'll come across very quickly when building a web app is the need to render an array of items. For example, in our contacts manager, this would be a list of contacts, but it could be anything. Angular allows us to do this with the ng-repeat directive. Here's an example of some data we may come across. It's array of objects with multiple properties within it. To display the data, we're going to need to be able to access each of the properties. Thankfully, ng-repeat allows us to do just that. Here's our controller with an array of contact objects assigned to the contacts model: function AppCtrl($scope){$scope.contacts = [{name: 'John Doe',phone: '01234567890',email: 'john@example.com'},{name: 'Karan Bromwich',phone: '09876543210',email: 'karan@email.com'}];} We have just a couple of contacts here, but as you can imagine, this could be hundreds of contacts served from an API that just wouldn't be feasible to work with without ng-repeat. First, add an array of contacts to your controller and assign it to $scope.contacts. Next, open up your index.html file and create a <ul> tag. We're going to be repeating a list item within this unordered list so this is the element we need to add our directive to: <ul><li ng-repeat="contact in contacts"></li></ul> If you're familiar with how loops work in PHP or Ruby, then you'll feel right at home here. We create a variable that we can access within the current element being looped. The variable after the in keyword references the model we created on $scope within our controller. This now gives us the ability to access any of the properties set on that object with each iteration or item repeated gaining a new scope. We can display these on the page using Angular's double curly-brace syntax. <ul><li ng-repeat="contact in contacts">{{contact.name}}</li></ul> You'll notice that this outputs the name within our list item as expected, and we can easily access any property on our contact object by referencing it using the standard dot syntax. ng-class Often there are times where you'll want to change or add a class to an element programmatically. We can use the ng-class directive to achieve this. It will let us define a class to add or remove based on the value of a model. There are a couple of ways we can utilize ng-class. In its most simple form, Angular will apply the value of the model as a CSS class to the element: <div ng-class="exampleClass"></div> Should the model referenced be undefined or false, Angular won't apply a class. This is great for single classes, but what if you want a little more control or want to apply multiple classes to a single element? Try this: <div ng-class="{className: model, class2: model2}"></div> Here, the expression is a little different. We've got a map of class names with the model we wish to check against. If the model returns true, then the class will be added to the element. Let's take a look at this in action. 
We'll use checkboxes with the ng-model attribute, to apply some classes to a paragraph: <p ng-class="{'text-center': center, 'text-danger': error}">Lorem ipsum dolor sit amet</p> I've added two Bootstrap classes: text-center and text-danger. These observe a couple of models, which we can quickly change with some checkboxes: The single quotations around the class names within the expression are only required when using hyphens, or an error will be thrown by Angular. <label><input type="checkbox" ng-model="center"> textcenter</label><label><input type="checkbox" ng-model="error"> textdanger</label> When these checkboxes are checked, the relevant classes will be applied to our element. ng-style In a similar way to ng-class, this directive is designed to allow us to dynamically style an element with Angular. To demonstrate this, we'll create a third checkbox that will apply some additional styles to our paragraph element. The ng-style directive uses a standard JavaScript object, with the keys being the property we wish to change (for example, color and background). This can be applied from a model or a value returned from a function. Let's take a look at hooking it up to a function that will check a model. We can then add this to our checkbox to turn the styles off and on. First, open up your controller.js file and create a new function attached to the scope. I'm calling mine styleDemo: $scope.styleDemo = function(){if(!$scope.styler){return;}return {background: 'red',fontWeight: 'bold'};}; Inside the function, we need to check the value of a model; in this example, it's called styler. If it's false, we don't need to return anything, otherwise we're returning an object with our CSS properties. You'll notice that we used fontWeight rather than font-weight in our returned object. Either is fine, and Angular will automatically switch the CamelCase over to the correct CSS property. Just remember than when using hyphens in JavaScript object keys, you'll need to wrap them in quotation marks. This model is going to be attached to a checkbox, just like we did with ng-class: <label><input type="checkbox" ng-model="styler"> ng-style</label> The last thing we need to do is add the ng-style directive to our paragraph element: <p .. ng-style="styleDemo()">Lorem ipsum dolor sit amet</p> Angular is clever enough to recall this function every time the scope changes. This means that as soon as our model's value changes from false to true, our styles will be applied and vice versa. ng-cloak The final directive we're going to look at is ng-cloak. When using Angular's templates within our HTML page, the double curly braces are temporarily displayed before AngularJS has finished loading and compiling everything on our page. To get around this, we need to temporarily hide our template before it's finished rendering. Angular allows us to do this with the ng-cloak directive. This sets an additional style on our element whilst it's being loaded: display: none !important;. To ensure there's no flashing while content is being loaded, it's important that Angular is loaded in the head section of our HTML page. Summary We've covered a lot in this article, let's recap it all. Bootstrap allowed us to quickly create a responsive navigation. We needed to include the JavaScript file included with our Bootstrap download to enable the toggle on the mobile navigation. We also looked at the powerful responsive grid system included with Bootstrap and created a simple two-column layout. 
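To show how ng-cloak is typically wired up, here is a minimal sketch; the CSS rule mirrors the one Angular injects itself, and firstName is just a placeholder model:

<style>
  /* Hide cloaked elements until Angular removes the ng-cloak attribute */
  [ng-cloak], .ng-cloak {
    display: none !important;
  }
</style>

<div ng-cloak>
  Hello, {{firstName}}!
</div>

Once Angular has compiled the template, it strips the ng-cloak attribute and class, so the element becomes visible with its bindings already rendered.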
While we were doing this, we learnt about the four different column class prefixes as well as nesting our grid. To adapt our layout, we discovered some of the helper classes included with the framework to allow us to float, center, and hide elements. In this article, we saw in detail Angular's built-in directives: functions Angular allows us to use from within our view. Before we could look at them, we needed to create a controller, which is just a function that we can pass Angular's services into using dependency injection. Directives such as ng-click and ng-mouseover are essentially just new ways of handling events that you will have no doubt done using either jQuery or vanilla JavaScript. However, directives such as ng-repeat will probably be a completely new way of working. It brings some logic directly within our view to loop through data and display it on the page. We also looked at directives that observe models on our scope and perform different actions based on their values. Directives like ng-show and ng-hide will show or hide an element based on a model's value. We also saw this in action in ng-class, which allowed us to add some classes to our elements based on our models' values. Resources for Article: Further resources on this subject: AngularJS Performance [Article] AngularJS Web Application Development Cookbook [Article] Role of AngularJS [Article]

article-image-building-reusable-components
Packt
26 May 2015
11 min read
Save for later

Building Reusable Components

Packt
26 May 2015
11 min read
In this article by Suchit Puri, author of the book Ember.js Web Development with Ember CLI, we will focus on building reusable view components using Ember.js views and component classes. (For more resources related to this topic, see here.) In this article, we shall cover: Introducing Ember views and components: Custom tags with Ember.Component Defining your own components Passing data to your component Providing custom HTML to your components Extending Ember.Component: Changing your component's tag Adding custom CSS classes to your component Adding custom attributes to your component's DOM element Handling actions for your component Mapping component actions to the rest of your application Extending Ember.Component Till now, we have been using Ember components in their default form. Ember.js lets you programmatically customize the component you are building by backing them with your own component JavaScript class. Changing your component's tag One of the most common use case for backing your component with custom JavaScript code is to wrap your component in a tag, other than the default <div> tag. When you include a component in your template, the component is by default rendered inside a div tag. For instance, we included the copyright footer component in our application template using {{copyright-footer}}. This resulted in the following HTML code: <div id="ember391" class="ember-view"> <footer>    <div>        © 20014-2015 Ember.js Essentials by Packt Publishing    </div>    <div>        Content is available under MIT license    </div> </footer> </div> The copyright footer component HTML enclosed within a <div> tag. You can see that the copyright component's content is enclosed inside a div that has an ID ember391 and class ember-view. This works for most of the cases, but sometimes you may want to change this behavior to enclose the component in the enclosing tag of your choice. To do that, let's back our component with a matching component JavaScript class. Let's take an instance in which we need to wrap the text in a <p> tag, rather than a <div> tag for the about us page of our application. All the components of the JavaScript classes go inside the app/components folder. The file name of the JavaScript component class should be the same as the file name of the component's template that goes inside the app/templates/components/ folder. For the above use case, first let's create a component JavaScript class, whose contents should be wrapped inside a <p> tag. Let us create a new file inside the app/components folder named about-us-intro.js, with the following contents: import Ember from 'ember'; export default Ember.Component.extend({ tagName: "p" }); As you can see, we extended the Ember.Component class and overrode the tagName property to use a p tag instead of the div tag. Now, let us create the template for this component. The Ember.js framework will look for the matching template for the above component at app/templates/components/about-us-intro.hbs. As we are enclosing the contents of the about-us-intro component in the <p> tag, we can simply write the about us introduction in the template as follows: This is the about us introduction.Everything that is present here   will be enclosed within a &lt;p&gt; tag. We can now include the {{about-us-intro}} in our templates, and it will wrap the above text inside the <p> tag. Now, if you visit the http://localhost:4200/about-us page, you should see the preceding text wrapped inside the <p> tag. 
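For reference, the generated markup should now look roughly like the following; the element ID is auto-generated by Ember, so yours will differ:

<p id="ember123" class="ember-view">
  This is the about us introduction. Everything that is present here
  will be enclosed within a &lt;p&gt; tag.
</p>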
In the preceding example, we used a fixed tagName property in our component's class. But, in reality, the tagName property of our component could be a computed property in your controller or model class that uses your own custom logic to derive the tagName of the component: import Ember from "ember"; export default Ember.ObjectController.extend({ tagName: function(){    //do some computation logic here    return "p"; }.property() }); Then, you can override the default tagName property, with your own computed tagName from the controller: {{about-us-intro tagName=tagName}} For very simple cases, you don't even need to define your custom component's JavaScript class. You can override the properties such as tagName and others of your component when you use the component tag: {{about-us-intro tagName="p"}} Here, since you did not create a custom component class, the Ember.js framework generates one for you in the background, and then overrides the tagName property to use p, instead of div. Adding custom CSS classes to your component Similar to the tagName property of your component, you can also add additional CSS classes and customize the attributes of your HTML tags by using custom component classes. To provide static class names that should be applied to your components, you can override the classNames property of your component. The classNames property if of type array should be assigned properties accordingly. Let's continue with the above example, and add two additional classes to our component: import Ember from 'ember'; export default Ember.Component.extend({    tagName: "p",    classNames: ["intro","text"] }); This will add two additional classes, intro and text, to the generated <p> tag. If you want to bind your class names to other component properties, you can use the classNameBinding property of the component as follows: export default Ember.Component.extend({ tagName: "p", classNameBindings: ["intro","text"], intro: "intro-css-class", text: "text-css-class" }); This will produce the following HTML for your component: <p id="ember401" class="ember-view intro-css-class   text-css-class">This is the about us introduction.Everything   that is present here will be enclosed within a &lt;p&gt;   tag.</p> As you can see, the <p> tag now has additional intro-css-class and text-css-class classes added. The classNameBindings property of the component tells the framework to bind the class attribute of the HTML tag of the component with the provided properties of the component. In case the property provided inside the classNameBindings returns a boolean value, the class names are computed differently. If the bound property returns a true boolean value, then the name of the property is used as the class name and is applied to the component. On the other hand, if the bound property returns to false, then no class is applied to the component. Let us see this in an example: import Ember from 'ember'; export default Ember.Component.extend({ tagName: "p", classNames: ["static-class","another-static-class"], classNameBindings: ["intro","text","trueClass","falseClass"], intro: "intro-css-class", text: "text-css-class", trueClass: function(){    //Do Some logic    return true; }.property(), falseClass: false }); Continuing with the above about-us-intro component, you can see that we have added two additional strings in the classNameBindings array, namely, trueClass and falseClass. 
Now, when the framework tries to bind the trueClass to the corresponding component's property, it sees that the property is returning a boolean value and not a string, and then computes the class names accordingly. The above component shall produce the following HTML content: <p id="ember401" class="ember-view static-class   another-static-class intro-css-class text-css-class true-class"> This is the about us introduction.Everything that is present   here will be enclosed within a &lt;p&gt; tag. </p> Notice that in the given example, true-class was added instead of trueClass. The Ember.js framework is intelligent enough to understand the conventions used in CSS class names, and automatically converts our trueClass to a valid true-class. Adding custom attributes to your component's DOM element Till now, we have seen how we can change the default tag and CSS classes for your component. Ember.js frameworks let you specify and customize HTML attributes for your component's DOM (Document Object Model) element. Many JavaScript libraries also use HTML attributes to provide additional details about the DOM element. Ember.js framework provides us with attributeBindings to bind different HTML attributes with component's properties. The attributeBindings which is similar to classNameBindings, is also of array type and works very similarly to it. Let's create a new component, called as {{ember-image}}, by creating a file at app/component/ember-image.js, and use attributes bindings to bind the src, width, and height attributes of the <img> tag. import Ember from 'ember'; export default Ember.Component.extend({ tagName: "img", attributeBindings: ["src","height","width"], src: "http://emberjs.com/images/logos/ember-logo.png", height:"80px", width:"200px" }); This will result in the following HTML: <img id="ember401" class="ember-view" src="http://emberjs.com/images/logos/ember-logo.png" height="80px" width="200px"> There could be cases in which you would want to use a different component's property name and a different HTML attribute name. For those cases, you can use the following notation: attributeBindings: ["componentProperty:HTML-DOM-property] import Ember from 'ember'; export default Ember.Component.extend({ tagName: "img", attributeBindings: ["componentProperty:HTML-DOM-property], componentProperty: "value" }); This will result in the the following HTML code: <img id="ember402" HTML-DOM-property="value"> Handling actions for your component Now that we have learned to create and customize Ember.js components, let's see how we can make our components interactive and handle different user interactions with our component. Components are unique in the way they handle user interactions or the action events that are defined in the templates. The only difference is that the events from a component's template are sent directly to the component, and they don't bubble up to controllers or routes. If the event that is emitted from a component's template is not handled in Ember.Component instance, then that event will be ignored and will do nothing. Let's create a component that has a lot of text inside it, but the full text is only visible if you click on the Show More button: For that, we will have to first create the component's template. So let us create a new file, long-text.hbs, in the app/templates/components/ folder. The contents of the template should have a Show More and Show Less button, which show the full text and hide the additional text, respectively. 
<p> This is a long text and we intend to show only this much unlessthe user presses the show more button below. </p> {{#if showMoreText}} This is the remaining text that should be visible when we pressthe show more button. Ideally this should contain a lot moretext, but for example's sake this should be enough. <br> <br> <button {{action "toggleMore"}}> Show Less </button> {{else}} <button {{action "toggleMore"}}> Show More </button> {{/if}} As you can see, we use the {{action}} helper method in our component's template to trigger actions on the component. In order for the above template to work properly, we need to handle the toggleMore in our component class. So, let's create long-text.js at app/components/ folder. import Ember from 'ember'; export default Ember.Component.extend({    showMoreText: false,    actions:{    toggleMore: function(){        this.toggleProperty("showMoreText");    }    } }); All action handlers should go inside the actions object, which is present in the component definition. As you can see, we have added a toggleMore action handler inside the actions object in the component's definition. The toggleMore just toggles the boolean property showMoreText that we use in the template to show or hide text. When the above component is included in about-us template, it should present a brief text, followed by the Show More button. When you click the Show More button, the rest of text appears and the Show Less button appears, which, when clicked on, should hide the text. The long-text component being used at the about-us page showing only limited text, followed by the Show More button Clicking Show More shows more text on the screen along with the Show Less button to rollback Summary In this article, we learned how easy it is to define your own components and use them in your templates. We then delved into the detail of ember components, and learned how we can pass in data from our template's context to our component. This was followed by how can we programmatically extend the Ember.Component class, and customize our component's attributes, including the tag type, HTML attributes, and CSS classes. Finally, we learned how we send the component's actions to respective controllers. Resources for Article: Further resources on this subject: Routing [Article] Introducing the Ember.JS framework [Article] Angular Zen [Article]

article-image-text-and-appearance-bindings-and-form-field-bindings
Packt
25 May 2015
14 min read
Save for later

Text and appearance bindings and form field bindings

Packt
25 May 2015
14 min read
In this article by Andrey Akinshin, the author of Getting Started with Knockout.js for .Net Developers, we will look at the various binding offered by Knockout.js. Knockout.js provides you with a huge number of useful HTML data bindings to control the text and its appearance. In this section, we take a brief look at the most common bindings: The text binding The html binding The css binding The style binding The attr binding The visible binding (For more resources related to this topic, see here.) The text binding The text binding is one of the most useful bindings. It allows us to bind text of an element (for example, span) to a property of the ViewModel. Let's create an example in which a person has a single firstName property. The Model will be as follows: var person = { firstName: "John" }; The ViewModel will be as follows: var PersonViewModel = function() { var self = this; self.firstName = ko.observable(person.firstName); }; The View will be as follows: The first name is <span data-bind="text: firstName"></span>. It is really a very simple example. The Model (the person object) has only the firstName property with the initial value John. In the ViewModel, we created the firstName property, which is represented by ko.observable. The View contains a span element with a single data binding; the text property of span binds to the firstName property of the ViewModel. In this example, any changes to personViewModel.firstName will entail an automatic update of text in the span element. If we run the example, we will see a single text line: The first name is John. Let's upgrade our example by adding the age property for the person. In the View, we will print young person for age less than 18 or adult person for age greater than or equal to 18 (PersonalPage-Binding-Text2.html): The Model will be as follows: var person = { firstName: "John", age: 30 }; The ViewModel will be as follows: var personViewModel = function() { var self = this; self.firstName = ko.observable(person.firstName); self.age = ko.observable(person.age); }; The View will be as follows: <span data-bind="text: firstName"></span> is <span data- bind="text: age() >= 18 ? 'adult' : 'young'"></span>   person. This example uses an expression binding in the View. The second span element binds its text property to a JavaScript expression. In this case, we will see the text John is adult person because we set age to 30 in the Model. Note that it is bad practice to write expressions such as age() >= 18 directly inside the binding value. The best way is to define the so-called computed observable property that contains a boolean expression and uses the name of the defined property instead of the expression. We will discuss this method later. The html binding In some cases, we may want to use HTML tags inside our data binding. However, if we include HTML tags in the text binding, tags will be shown in the raw form. We should use the html binding to render tags, as shown in the following example: The Model will be as follows: var person = { about: "John's favorite site is <a     href='http://www.packtpub.com'>PacktPub</a>." }; The ViewModel will be as follows: var PersonViewModel = function() { var self = this; self.about = ko.observable(person.about); }; The View will be as follows: <span data-bind="html: about"></span> Thanks to the html binding, the about message will be displayed correctly and the <a> tag will be transformed into a hyperlink. 
When you try to display a link with the text binding, the HTML will be encoded, so the user will see not a link but special characters. The css binding The html binding is a good way to include HTML tags in the binding value, but it is a bad practice for its styling. Instead of this, we should use the css binding for this aim. Let's consider the following example: The Model will be as follows: var person = { favoriteColor: "red" }; The ViewModel will be as follows: var PersonViewModel = function() { var self = this; self.favoriteColor = ko.observable(person.favoriteColor); }; The View will be as follows: <style type="text/css"> .redStyle {    color: red; } .greenStyle {    color: green; } </style> <div data-bind="css: { redStyle: favoriteColor() == 'red',   greenStyle: favoriteColor() == 'green' }"> John's favorite color is <span data-bind="text:   favoriteColor"></span>. </div> In the View, there are two CSS classes: redStyle and greenStyle. In the Model, we use favoriteColor to define the favorite color of our person. The expression binding for the div element applies the redStyle CSS style for red color and greenStyle for green color. It uses the favoriteColor observable property as a function to get its value. When favoriteColor is not an observable, the data binding will just be favoriteColor== 'red'. Of course, when favoriteColor changes, the DOM will not be updated because it won't be notified. The style binding In some cases, we do not have access to CSS, but we still need to change the style of the View. For example, CSS files are placed in an application theme and we may not have enough rights to modify it. The style binding helps us in such a case: The Model will be as follows: var person = { favoriteColor: "red" }; The ViewModel will be as follows: var PersonViewModel = function() { var self = this; self.favoriteColor = ko.observable(person.favoriteColor); }; The View will be as follows: <div data-bind="style: { color: favoriteColor() }"> John's favorite color is <span data-bind="text:   favoriteColor"></span>. </div> This example is analogous to the previous one, with the only difference being that we use the style binding instead of the css binding. The attr binding The attr binding is also a good way to work with DOM elements. It allows us to set the value of any attributes of elements. Let's look at the following example: The Model will be as follows: var person = { favoriteUrl: "http://www.packtpub.com" }; The ViewModel will be as follows: var PersonViewModel = function() { var self = this; self.favoriteUrl = ko.observable(person.favoriteUrl); }; The View will be as follows: John's favorite site is <a data-bind="attr: { href: favoriteUrl()   }">PacktPub</a>. The href attribute of the <a> element binds to the favoriteUrl property of the ViewModel via the attr binding. The visible binding The visible binding allows us to show or hide some elements according to the ViewModel. Let's consider an example with a div element, which is shown depending on a conditional binding: The Model will be as follows: var person = { favoriteSite: "PacktPub" }; The ViewModel will be as follows: var PersonViewModel = function() { var self = this; self.favoriteSite = ko.observable(person.favoriteSite); }; The View will be as follows: <div data-bind="visible: favoriteSite().length > 0"> John's favorite site is <span data-bind="text:   favoriteSite"></span>. </div> In this example, the div element with information about John's favorite site will be shown only if the information was defined. 
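Before moving on to the form field bindings, here is a minimal sketch of the computed observable approach that the text binding section recommended over inline expressions such as age() >= 18; the ageGroup name is only an illustration:

var PersonViewModel = function() {
  var self = this;
  self.firstName = ko.observable(person.firstName);
  self.age = ko.observable(person.age);
  // Re-evaluated automatically whenever self.age changes
  self.ageGroup = ko.computed(function() {
    return self.age() >= 18 ? "adult" : "young";
  });
};

The View then binds to the computed property instead of embedding the expression:

<span data-bind="text: firstName"></span> is <span data-bind="text: ageGroup"></span> person.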
Form fields bindings Forms are important parts of many web applications. In this section, we will learn about a number of data bindings to work with the form fields: The value binding The click binding The submit binding The event binding The checked binding The enable binging The disable binding The options binding The selectedOptions binding The value binding Very often, forms use the input, select and textarea elements to enter text. Knockout.js allows work with such text via the value binding, as shown in the following example: The Model will be as follows: var person = { firstName: "John" }; The ViewModel will be as follows: var PersonViewModel = function() { var self = this; self.firstName = ko.observable(person.firstName); }; The View will be as follows: <form> The first name is <input type="text" data-bind="value:     firstName" />. </form> The value property of the text input element binds to the firstName property of the ViewModel. The click binding We can add some function as an event handler for the onclick event with the click binding. Let's consider the following example: The Model will be as follows: var person = { age: 30 }; The ViewModel will be as follows: var personViewModel = function() { var self = this; self.age = ko.observable(person.age); self.growOld = function() {    var previousAge = self.age();    self.age(previousAge + 1); } }; The View will be as follows: <div> John's age is <span data-bind="text: age"></span>. <button data-bind="click: growOld">Grow old</button> </div> We have the Grow old button in the View. The click property of this button binds to the growOld function of the ViewModel. This function increases the age of the person by one year. Because the age property is an observable, the text in the span element will automatically be updated to 31. The submit binding Typically, the submit event is the last operation when working with a form. Knockout.js supports the submit binding to add the corresponding event handler. Of course, you can also use the click binding for the "submit" button, but that is a different thing because there are alternative ways to submit the form. For example, a user can use the Enter key while typing into a textbox. Let's update the previous example with the submit binding: The Model will be as follows: var person = { age: 30 }; The ViewModel will be as follows: var PersonViewModel = function() { var self = this; self.age = ko.observable(person.age); self.growOld = function() {    var previousAge = self.age();    self.age(previousAge + 1); } }; The View will be as follows: <div> John's age is <span data-bind="text: age"></span>. <form data-bind="submit: growOld">    <button type="submit">Grow old</button> </form> </div> The only new thing is moving the link to the growOld function to the submit binding of the form. The event binding The event binding also helps us interact with the user. This binding allows us to add an event handler to an element, events such as keypress, mouseover, or mouseout. 
In the following example, we use this binding to control the visibility of a div element according to the mouse position: The Model will be as follows: var person = { }; The ViewModel will be as follows: var PersonViewModel = function() { var self = this; self.aboutEnabled = ko.observable(false); self.showAbout = function() {    self.aboutEnabled(true); }; self.hideAbout = function() {    self.aboutEnabled(false); } }; The View will be as follows: <div> <div data-bind="event: { mouseover: showAbout, mouseout:     hideAbout }">    Mouse over to view the information about John. </div> <div data-bind="visible: aboutEnabled">    John's favorite site is <a       href='http://www.packtpub.com'>PacktPub</a>. </div> </div> In this example, the Model is empty because the web page doesn't have a state outside of the runtime context. The single property aboutEnabled makes sense only to run an application. In such a case, we can omit the corresponding property in the Model and work only with the ViewModel. In particular, we will work with a single ViewModel property aboutEnabled, which defines the visibility of div. There are two event bindings: mouseover and mouseout. They link the mouse behavior to the value of aboutEnabled with the help of the showAbout and hideAbout ViewModel functions. The checked binding Many forms contain checkboxes (<input type="checkbox" />). We can work with its value with the help of the checked binding, as shown in the following example: The Model will be as follows: var person = { isMarried: false }; The ViewModel will be as follows: var personViewModel = function() { var self = this; self.isMarried = ko.observable(person.isMarried); }; The View is as follows: <form> <input type="checkbox" data-bind="checked: isMarried" /> Is married </form> The View contains the Is married checkbox. Its checked property binds to the Boolean isMarried property of the ViewModel. The enable and disable binding A good usability practice suggests setting the enable property of some elements (such as input, select, and textarea) according to a form state. Knockout.js provides us with the enable binding for this purpose. Let's consider the following example: The Model is as follows: var person = { isMarried: false, wife: "" }; The ViewModel will be as follows: var PersonViewModel = function() { var self = this; self.isMarried = ko.observable(person.isMarried); self.wife = ko.observable(person.wife); }; The View will be as follows: <form> <p>    <input type="checkbox" data-bind="checked: isMarried" />    Is married </p> <p>    Wife's name:    <input type="text" data-bind="value: wife, enable: isMarried" /> </p> </form> The View contains the checkbox from the previous example. Only in the case of a married person can we write the name of his wife. This behavior is provided by the enable binding of the text input element. The disable binding works in exactly the opposite way. It allows you to avoid negative expression bindings in some cases. The options binding If the Model contains some collections, then we need a select element to display it. 
The options binding allows us to link such an element to the data, as shown in the following example: The Model is as follows: var person = { children: ["Jonnie", "Jane", "Richard", "Mary"] }; The ViewModel will be as follows: var PersonViewModel = function() { var self = this; self.children = person.children; }; The View will be as follows: <form> <select multiple="multiple" data-bind="options:     children"></select> </form> In the preceding example, the Model contains the children array. The View represents this array with the help of multiple select elements. Note that, in this example, children is a non-observable array. Therefore, changes to ViewModel in this case do not affect the View. The code is shown only for demonstration of the options binding. The selectedOptions binding In addition to the options binding, we can use the selectedOptions binding to work with selected items in the select element. Let's look at the following example: The Model will be as follows: var person = { children: ["Jonnie", "Jane", "Richard", "Mary"], selectedChildren: ["Jonnie", "Mary"] }; The ViewModel will be as follows: var PersonViewModel = function() { var self = this; self.children = person.children; self.selectedChildren = person.selectedChildren }; The View will be as follows: <form> <select multiple="multiple" data-bind="options: children,     selectedOptions: selectedChildren"></select> </form> The selectedChildren property of the ViewModel defines a set of selected items in the select element. Note that, as shown in the previous example, selectedChildren is a non-observable array; the preceding code only shows the use of the selectedOptions binding. In a real-world application, most of the time, the value of the selectedChildren binding will be an observable array. Summary In this article, we have looked at examples that illustrate the use of bindings for various purposes. Resources for Article: Further resources on this subject: So, what is Ext JS? [article] Introducing a feature of IntroJs [article] Top features of KnockoutJS [article]
article-image-introducing-web-components
Packt
19 May 2015
16 min read
Save for later

Introducing Web Components

Packt
19 May 2015
16 min read
In this article by Sandeep Kumar Patel, author of the book Learning Web Component Development, we will learn about the web component specification in detail. Web component is changing the web application development process. It comes with standard and technical features, such as templates, custom elements, Shadow DOM, and HTML Imports. The main topics that we will cover in this article about web component specification are as follows: What are web components? Benefits and challenges of web components The web component architecture Template element HTML Import (For more resources related to this topic, see here.) What are web components? Web components are a W3C specification to build a standalone component for web applications. It helps developers leverage the development process to build reusable and reliable widgets. A web application can be developed in various ways, such as page focus development and navigation-based development, where the developer writes the code based on the requirement. All of these approaches fulfil the present needs of the application, but may fail in the reusability perspective. This problem leads to component-based development. Benefits and challenges of web components There are many benefits of web components: A web component can be used in multiple applications. It provides interoperability between frameworks, developing the web component ecosystem. This makes it reusable. A web component has a template that can be used to put the entire markup separately, making it more maintainable. As web components are developed using HTML, CSS, and JavaScript, it can run on different browsers. This makes it platform independent. Shadow DOM provides encapsulation mechanism to style, script, and HTML markup. This encapsulation mechanism provides private scope and prevents the content of the component being affected by the external document. Equally, some of the challenges for a web component include: Implementation: The W3C web component specification is very new to the browser technology and not completely implemented by the browsers. Shared resource: A web component has its own scoped resources. There may be cases where some of the resources between the components are common. Performance: Increase in the number of web components takes more time to get used inside the DOM. Polyfill size: The polyfill are a workaround for a feature that is not currently implemented by the browsers. These polyfill files have a large memory foot print. SEO: As the HTML markup present inside the template is inert, it creates problems in the search engine for the indexing of web pages. The web component architecture The W3C web component specification has four main building blocks for component development. Web component development is made possible by template, HTML Imports, Shadow DOM, and custom elements and decorators. However, decorators do not have a proper specification at present, which results in the four pillars of web component paradigm. The following diagram shows the building blocks of web component: These four pieces of technology power a web component that can be reusable across the application. In the coming section, we will explore these features in detail and understand how they help us in web component development. Template element The HTML <template> element contains the HTML markup, style, and script, which can be used multiple times. The templating process is nothing new to a web developer. 
Handlebars, Mustache, and Dust are the templating libraries that are already present and heavily used for web application development. To streamline this process of template use, W3C web component specification has included the <template> element. This template element is very new to web development, so it lacks features compared to the templating libraries such as Handlebars.js that are present in the market. In the near future, it will be equipped with new features, but, for now, let's explore the present template specification. Template element detail The HTML <template> element is an HTMLTemplateElement interface. The interface definition language (IDL) definition of the template element is listed in the following code: interface HTMLTemplateElement : HTMLElement {readonly attribute DocumentFragment content;}; The preceding code is written in IDL language. This IDL language is used by the W3C for writing specification. Browsers that support HTML Import must implement the aforementioned IDL. The details of the preceding code are listed here: HTMLTemplateElement: This is the template interface and extends the HTMLElement class. content: This is the only attribute of the HTML template element. It returns the content of the template and is read-only in nature. DocumentFragment: This is a return type of the content attribute. DocumentFragment is a lightweight version of the document and does not have a parent. To find out more about DocumentFargment, use the following link: https://developer.mozilla.org/en/docs/Web/API/DocumentFragment Template feature detection The HTML <template> element is very new to web application development and not completely implemented by all browsers. Before implementing the template element, we need to check the browser support. The JavaScript code for template support in a browser is listed in the following code: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: template support</title></head><body><h1 id="message"></h1><script>var isTemplateSupported = function () {var template = document.createElement("template");return 'content' in template;};var isSupported = isTemplateSupported(),message = document.getElementById("message");if (isSupported) {message.innerHTML = "Template element is supported by thebrowser.";} else {message.innerHTML = "Template element is not supported bythe browser.";}</script></body></html> In the preceding code, the isTemplateSupported method checks the content property present inside the template element. If the content attribute is present inside the template element, this method returns either true or false. If the template element is supported by the browser, the h1 element will show the support message. The browser that is used to run the preceding code is Chrome 39 release. The output of the preceding code is shown in following screenshot: The preceding screenshot shows that the browser used for development is supporting the HTML template element. There is also a great online tool called "Can I Use for checking support for the template element in the current browser. To check out the template support in the browser, use the following link: http://caniuse.com/#feat=template The following screenshot shows the current status of the support for the template element in the browsers using the Can I Use online tool: Inert template The HTML content inside the template element is inert in nature until it is activated. 
The inertness of template content contributes to increasing the performance of the web application. The following code demonstrates the inertness of the template content: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: A inert template content example.</title></head><body><div id="message"></div><template id="aTemplate"><img id="profileImage"src="http://www.gravatar.com/avatar/c6e6c57a2173fcbf2afdd5fe6786e92f.png"><script>alert("This is a script.");</script></template><script>(function(){var imageElement =document.getElementById("profileImage"),messageElement = document.getElementById("message");messageElement.innerHTML = "IMG element "+imageElement;})();</script></body></html> In the preceding code, a template contains an image element with the src attribute, pointing to a Gravatar profile image, and an inline JavaScript alert method. On page load, the document.getElementById method is looking for an HTML element with the #profileImage ID. The output of the preceding code is shown in the following screenshot: The preceding screenshot shows that the script is not able to find the HTML element with the profileImage ID and renders null in the browser. From the preceding screenshot it is evident that the content of the template is inert in nature. Activating a template By default, the content of the <template> element is inert and are not part of the DOM. The two different ways that can be used to activate the nodes are as follows: Cloning a node Importing a node Cloning a node The cloneNode method can be used to duplicate a node. The syntax for the cloneNode method is listed as follows: <Node> <target node>.cloneNode(<Boolean parameter>) The details of the preceding code syntax are listed here: This method can be applied on a node that needs to be cloned. The return type of this method is Node. The input parameter for this method is of the Boolean type and represents a type of cloning. There are 2 different types of cloning, listed as follows: Deep cloning: In deep cloning, the children of the targeted node also get copied. To implement deep cloning, the Boolean input parameter to cloneNode method needs to be true. Shallow cloning: In shallow cloning, only the targeted node is copied without the children. To implement shallow cloning the Boolean input parameter to cloneNode method needs to be false. The following code shows the use of the cloneNode method to copy the content of a template, having the h1 element with some text: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: Activating template using cloneNode method</title></head><body><div id="container"></div><template id="aTemplate"><h1>Template is activated using cloneNode method.</h1></template><script>var aTemplate = document.querySelector("#aTemplate"),container = document.getElementById("container"),templateContent = aTemplate.content,activeContent = templateContent.cloneNode(true);container.appendChild(activeContent);</script></body></html> In the preceding code, the template element has the aTemplate ID and is referenced using the querySelector method. The HTML markup content inside the template is then retrieved using a content property and saved in a templateContent variable. The cloneNode method is then used for deep cloning to get the activated node that is later appended to a div element. 
Importing a node

The importNode method is another way of activating template content. The syntax of the method is listed in the following code:

<Node> document.importNode(<target node>, <Boolean parameter>)

The details of the preceding syntax are listed as follows:

This method returns a copy of a node from an external document.
This method takes two input parameters. The first parameter is the target node that needs to be copied. The second parameter is a Boolean flag that represents the way the target node is cloned. If the Boolean flag is false, the importNode method makes a shallow copy, and for a true value, it makes a deep copy.

The following code shows the use of the importNode method to copy the content of a template containing an h1 element with some text:

<!DOCTYPE html>
<html>
<head lang="en">
  <meta charset="UTF-8">
  <title>Web Component: Activating template using importNode method</title>
</head>
<body>
  <div id="container"></div>
  <template id="aTemplate">
    <h1>Template is activated using importNode method.</h1>
  </template>
  <script>
    var aTemplate = document.querySelector("#aTemplate"),
        container = document.getElementById("container"),
        templateContent = aTemplate.content,
        activeContent = document.importNode(templateContent, true);
    container.appendChild(activeContent);
  </script>
</body>
</html>

In the preceding code, the template element has the aTemplate ID and is referenced using the querySelector method. The HTML markup content inside the template is then retrieved using the content property and saved in the templateContent variable. The importNode method is then used for deep cloning to get the activated node, which is later appended to a div element. The following screenshot shows the output of the preceding code:

To find out more about the importNode method, visit:
http://mdn.io/importNode

HTML Import

HTML Import is another important piece of the W3C web component specification. It provides a way to include another HTML document, present in a separate file, in the current document. HTML Imports provide an alternative to the iframe element, and are also great for resource bundling. The syntax of HTML Imports is listed as follows:

<link rel="import" href="fileName.html">

The details of the preceding syntax are listed here:

The HTML file can be imported using the <link> tag and the rel attribute with import as the value.
The href string points to the external HTML file that needs to be included in the current document.

HTML Import is implemented by the HTMLLinkElement class. The IDL definition of HTML Import is listed in the following code:

partial interface LinkImport {
  readonly attribute Document? import;
};
HTMLLinkElement implements LinkImport;

The preceding code shows the IDL for HTML Import, where the LinkImport interface has the read-only attribute import, and the HTMLLinkElement class implements the LinkImport interface. Browsers that support HTML Import must implement the preceding IDL.
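An import does not have to be declared in markup; a <link> element with rel set to import can also be created from script. The following sketch is an assumption-based illustration (the widgets.html filename is made up): it adds an import dynamically and reads the imported document once it has loaded:

<script>
  // Create the import link element programmatically.
  var link = document.createElement("link");
  link.rel = "import";
  link.href = "widgets.html"; // assumed external HTML file
  link.onload = function () {
    // Once loaded, the imported document is available on link.import.
    console.log("Imported document title: " + link.import.title);
  };
  link.onerror = function () {
    console.log("The import could not be loaded.");
  };
  document.head.appendChild(link);
</script>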
HTML Import feature detection

HTML Import is new to browsers and may not be supported by all of them. To check for HTML Import support in the browser, we need to check for the import property on a <link> element. The code to check HTML Import support is as follows:

<!DOCTYPE html>
<html>
<head lang="en">
  <meta charset="UTF-8">
  <title>Web Component: HTML import support</title>
</head>
<body>
  <h1 id="message"></h1>
  <script>
    var isImportSupported = function () {
      var link = document.createElement("link");
      return 'import' in link;
    };
    var isSupported = isImportSupported(),
        message = document.getElementById("message");
    if (isSupported) {
      message.innerHTML = "Import is supported by the browser.";
    } else {
      message.innerHTML = "Import is not supported by the browser.";
    }
  </script>
</body>
</html>

The preceding code has an isImportSupported function, which returns a Boolean value indicating HTML Import support in the current browser. The function creates a <link> element and then checks for the existence of an import attribute using the in operator. The following screenshot shows the output of the preceding code:

The preceding screenshot shows that import is supported by the current browser, as the isImportSupported method returns true. The Can I Use tool can also be utilized for checking support for HTML Import in the current browser. To check out HTML Import support in the browser, use the following link:

http://caniuse.com/#feat=imports

The following screenshot shows the current status of support for HTML Import in browsers, using the Can I Use online tool:

Accessing the HTML Import document

An HTML Import includes the external document in the current page. We can access the external document's content using the import property of the link element. In this section, we will learn how to use the import property to refer to the external document. The message.html file is the external HTML document that needs to be imported. The content of the message.html file is as follows:

<h1>This is from another HTML file document.</h1>

The following code shows the HTML document where the message.html file is loaded and referenced through the import property:

<!DOCTYPE html>
<html>
<head lang="en">
  <link rel="import" href="message.html">
</head>
<body>
  <script>
    (function(){
      var externalDocument = document.querySelector('link[rel="import"]').import,
          headerElement = externalDocument.querySelector('h1');
      document.body.appendChild(headerElement.cloneNode(true));
    })();
  </script>
</body>
</html>

The details of the preceding code are listed here:

In the header section, the <link> element imports the HTML document present inside the message.html file.
In the body section, an inline <script> element uses the document.querySelector method to reference the link element having the rel attribute with the import value.
Once the link element is located, the content of the external document is read through the import property and saved in the externalDocument variable.
The header h1 element inside the external document is then located using the querySelector method and saved in the headerElement variable.
The header element is then deep copied using the cloneNode method and appended to the body element of the current document.

The following screenshot shows the output of the preceding code:
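A common pattern is to ship a <template> inside the imported file and activate it from the importing page, combining the two techniques covered in this article. The following sketch is an assumption-based illustration: it supposes that the imported file also contains a template with the messageTemplate ID, which is then cloned into the current document:

<!-- Assumed content of the imported file (for example, message.html) -->
<template id="messageTemplate">
  <h1>Hello from an imported template.</h1>
</template>

<!-- Importing page -->
<link rel="import" href="message.html">
<script>
  var importedDocument = document.querySelector('link[rel="import"]').import,
      importedTemplate = importedDocument.querySelector("#messageTemplate"),
      // importNode activates the template content in the current document.
      activeContent = document.importNode(importedTemplate.content, true);
  document.body.appendChild(activeContent);
</script>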
HTML Import events

The HTML <link> element with the import attribute supports two event handlers. These two events are listed as follows:

load: This event is fired when the external HTML file is imported successfully onto the current page. A JavaScript function can be attached to the onload attribute, to be executed on a successful load of the external HTML file.
error: This event is fired when the external HTML file is not loaded or not found (HTTP status code 404). A JavaScript function can be attached to the onerror attribute, to be executed when importing the external HTML file fails.

The following code shows the use of these two event types while importing the message.html file into the current page:

<!DOCTYPE html>
<html>
<head lang="en">
  <script async>
    function handleSuccess(e) {
      // Import loaded successfully
      var targetLink = e.target,
          externalDocument = targetLink.import,
          headerElement = externalDocument.querySelector('h1'),
          clonedHeaderElement = headerElement.cloneNode(true);
      document.body.appendChild(clonedHeaderElement);
    }
    function handleError(e) {
      // Error in load
      alert("error in import");
    }
  </script>
  <link rel="import" href="message.html"
    onload="handleSuccess(event)"
    onerror="handleError(event)">
</head>
<body>
</body>
</html>

The details of the preceding code are listed here:

handleSuccess: This method is attached to the onload attribute and is executed on the successful load of message.html in the current document. The handleSuccess method reads the document imported from the message.html file, finds the h1 element, and makes a deep copy of it. The cloned h1 element then gets appended to the body element.
handleError: This method is attached to the onerror attribute of the <link> element. It will be executed if the message.html file is not found.

As the message.html file is imported successfully, the handleSuccess method gets executed and the h1 header element is rendered in the browser. The following screenshot shows the output of the preceding code:

Summary

In this article, we learned about the web component specification. We also explored the building blocks of web components, such as HTML Imports and templates.

Resources for Article:

Further resources on this subject:
Learning D3.js Mapping [Article]
Machine Learning [Article]
Angular 2.0 [Article]

AngularJS Web Application Development Cookbook

Packt
08 May 2015
2 min read
Architect performant applications and implement best practices in AngularJS. Packed with easy-to-follow recipes, this practical guide will show you how to unleash the full might of the AngularJS framework. Skip straight to practical solutions and quick, functional answers to your problems without hand-holding or slogging through the basics. (For more resources related to this topic, see here.)

Some highlights include:

Architecting recursive directives
Extensively customizing your search filter
Custom routing attributes
Animating ngRepeat
Animating ngInclude, ngView, and ngIf
Animating ngSwitch
Animating ngClass and class attributes
Animating ngShow and ngHide

The goal of this text is to have you walk away from reading about an AngularJS concept armed with a solid understanding of how it works, insight into the best ways to wield it in real-world applications, and annotated code examples to get you started.

Why you should buy this book

A collection of recipes demonstrating optimal organization, scalable architecture, and best practices for use in small and large-scale production applications. Each recipe contains complete, functioning examples and detailed explanations on how and why they are organized and built that way, as well as alternative design choices for different situations.

The author of this book is a full stack developer at DoorDash (YC S13), where he joined as the first engineer. He led their adoption of AngularJS, and he also focuses on the infrastructural, predictive, and data projects within the company. Matt has a degree in Computer Engineering from the University of Illinois at Urbana-Champaign. He is the author of the video series Learning AngularJS, available through O'Reilly Media. Previously, he worked as an engineer at several educational technology start-ups.

Almost every example in this book has been added to JSFiddle, with the links provided in the book. This allows you to merely visit a URL in order to test and modify the code with no setup of any kind, on any major browser and on any major operating system.

Resources for Article:

Further resources on this subject:
Working with Live Data and AngularJS [article]
Angular Zen [article]
AngularJS Project [article]