How-To Tutorials - Front-End Web Development

341 Articles

Hello World Program

Packt
20 Apr 2016
12 min read
In this article by Manoj Kumar, author of the book Learning Sinatra, we will write an application. Make sure that you have Ruby installed. We will get a basic skeleton app up and running and see how to structure the application.

In this article, we will discuss the following topics:

- A project that will be used to understand Sinatra
- The Bundler gem
- The file structure of the application
- The responsibilities of each file

Before we begin writing our application, let's write the Hello World application.

Getting started

The Hello World program is as follows:

require 'sinatra'

get '/' do
  return 'Hello World!'
end

To run it, execute the following from the command line:

ruby helloworld.rb

Executing this will run the application, and the server will listen on port 4567. If we point our browser to http://localhost:4567/, we will see the Hello World! output.

The application

To understand how to write a Sinatra application, we will take a small project and discuss every part of the program in detail.

The idea

We will make a ToDo app and use Sinatra along with a lot of other libraries. The features of the app will be as follows:

- Each user can have multiple to-do lists
- Each to-do list will have multiple items
- To-do lists can be private, public, or shared with a group
- Items in each to-do list can be assigned to a user or group

The modules that we build are as follows:

- Users: This will manage the users and groups
- List: This will manage the to-do lists
- Items: This will manage the items for all the to-do lists

Before we start writing the code, let's see what the file structure will be like, understand why each of the files is required, and learn about some new files.

The file structure

It is always better to keep certain files in certain folders for better readability. We could dump all the files in the home folder; however, that would make it difficult for us to manage the code.

The app.rb file

This file is the base file that loads all the other files (such as models, libs, and so on) and starts the application. We can configure various settings of Sinatra here according to the various deployment environments.

The config.ru file

The config.ru file is generally used when we need to deploy our application with different application servers, such as Passenger, Unicorn, or Heroku. It also makes it easy to maintain the different deployment environments.

Gemfile

This is one of the interesting things that we can do with Ruby applications. As we know, we can use a variety of gems for different purposes. The gems are just pieces of code and are constantly updated. Therefore, sometimes, we need to use specific versions of gems to maintain the stability of our application. We list all the gems that we are going to use for our application along with their versions. Before we discuss how to use this Gemfile, we will talk about the gem bundler.

Bundler

The gem bundler manages the installation of all the gems and their dependencies. Of course, we need to install the gem bundler manually:

gem install bundler

This will install the latest stable version of the bundler gem. Once we are done with this, we need to create a new file with the name Gemfile (yes, with a capital G) and add the gems that we will use. It is not necessary to add all the gems to the Gemfile before starting to write the application.
We can add and remove gems as we require; however, after every change, we need to run the following:

bundle install

This will make sure that all the required gems and their dependencies are installed. It will also create a Gemfile.lock file. Make sure that we do not edit this file; it contains information about all the gems and their dependencies. Therefore, we now know why we should use a Gemfile.

The lib/routes.rb file

This is the path of the file, inside the lib folder, that contains the routes. What is a route? A route is the URL path for which the application serves a web page when requested. For example, when we type http://www.example.com/, the URL path is /, and when we type http://www.example.com/something/, /something/ is the URL path.

Now, we need to explicitly define all the routes for which we will be serving requests so that our application knows what to return. It is not important to have this file in the lib folder, or to even have it at all. We can also write the routes in the app.rb file. Consider the following examples:

get '/' do
  # code
end

post '/something' do
  # code
end

Both of the preceding routes are valid. The get and post methods are the HTTP methods. The first code block will be executed when a GET request is made on /, and the second one will be executed when a POST request is made on /something. The only reason we are writing the routes in a separate file is to keep the code clean.

The responsibilities of the remaining folders are as follows:

- models/: This folder contains all the files that define the models of the application. When we write the models for our application, we will save them in this folder.
- public/: This folder contains all our CSS, JavaScript, and image files.
- views/: This folder will contain all the files that define the views, such as HTML, HAML, and ERB files.

The code

Now, we know what we want to build. You also have a rough idea about what our file structure will be. When we run the application, the rackup file that we load will be config.ru. This file tells the server which environment to use and which file is the main application to load. Before running the server, we need to write a minimum amount of code. It includes writing three files, as follows:

- app.rb
- config.ru
- Gemfile

We can, of course, write these files in any order we want; however, we need to make sure that all three files have sufficient code for the application to work. Let's start with the app.rb file.

The app.rb file

This is the file that config.ru loads when the application is executed. This file, in turn, loads all the other files that help it to understand the available routes and the underlying model:

1  require 'sinatra'
2
3  class Todo < Sinatra::Base
4    set :environment, ENV['RACK_ENV']
5
6    configure do
7    end
8
9    Dir[File.join(File.dirname(__FILE__),'models','*.rb')].each { |model| require model }
10   Dir[File.join(File.dirname(__FILE__),'lib','*.rb')].each { |lib| load lib }
11
12 end

What does this code do? Let's see:

1  require 'sinatra'

This loads the sinatra gem into memory.

3  class Todo < Sinatra::Base
4    set :environment, ENV['RACK_ENV']
5
6    configure do
7    end
8
9    Dir[File.join(File.dirname(__FILE__),'models','*.rb')].each { |model| require model }
10   Dir[File.join(File.dirname(__FILE__),'lib','*.rb')].each { |lib| load lib }
11
12 end

This defines our main application's class. This skeleton is enough to start the basic application. We inherit the Base class of the Sinatra module.
Before starting the application, we may want to change some basic configuration settings such as logging, error display, user sessions, and so on. We handle all these configurations through configure blocks. Also, we might need different configurations for different environments. For example, in development mode, we might want to see all the errors; however, in production we don't want the end user to see the error dump. Therefore, we can define the configurations for different environments. The first step is to set the application environment to the concerned one, as follows:

4    set :environment, ENV['RACK_ENV']

We will later see that we can have multiple configure blocks for multiple environments. This line reads the system environment RACK_ENV variable and sets the same environment for the application. When we discuss config.ru, we will see how to set RACK_ENV in the first place:

6    configure do
7    end

This is how we define a configure block. Note that here we have not told the application which environment these configurations apply to. In such cases, this becomes the generic configuration for all the environments, and this is generally the last configuration block. All the environment-specific configurations should be written before this block in order to avoid code overriding:

9    Dir[File.join(File.dirname(__FILE__),'models','*.rb')].each { |model| require model }

If we look at the file structure discussed earlier, we can see that models/ is a directory that contains the model files. We need to import all these files into the application. We have kept all our model files in the models/ folder:

Dir[File.join(File.dirname(__FILE__),'models','*.rb')]

This returns an array of files having the .rb extension in the models folder. Doing this avoids writing one require line for each file and modifying this file again:

10   Dir[File.join(File.dirname(__FILE__),'lib','*.rb')].each { |lib| load lib }

Similarly, we import all the files in the lib/ folder. Therefore, in short, app.rb configures our application according to the deployment environment and imports the model files and the other library files before starting the application. Now, let's proceed to write our next file.

The config.ru file

The config.ru is the rackup file of the application. It loads all the gems and app.rb. We generally pass this file as a parameter to the server, as follows:

1  require 'sinatra'
2  require 'bundler/setup'
3  Bundler.require
4
5  ENV["RACK_ENV"] = "development"
6
7  require File.join(File.dirname(__FILE__), 'app.rb')
8
9  Todo.start!

Working of the code

Let's go through each of the lines, as follows:

1  require 'sinatra'
2  require 'bundler/setup'

The first two lines import the gems. This is exactly what we do in other languages. The sinatra gem includes all the Sinatra classes and helps in listening to requests, while the bundler gem manages all the other gems. As we discussed earlier, we will always use bundler to manage our gems.

3  Bundler.require

This line of code checks the Gemfile and makes sure that all the available gems match the specified versions and that all the dependencies are met. It does not import all the gems, as all gems may not be needed in memory at all times:

5  ENV["RACK_ENV"] = "development"

This code sets the system environment RACK_ENV variable to development. This will help the server know which configurations it needs to use.
We will later see how to manage a single configuration file with different settings for different environments and use one particular set of configurations for the given environment. If we use version control for our application, config.ru is not version controlled; it has to be customized depending on whether our environment is development, staging, testing, or production. We may version control a sample config.ru. We will discuss this when we talk about deploying our application. Next, we require the main application file, as follows:

7  require File.join(File.dirname(__FILE__), 'app.rb')

We see here that we have used the File class to include app.rb:

File.dirname(__FILE__)

It is a convention to keep config.ru and app.rb in the same folder. It is good practice to give the complete file path whenever we require a file in order to avoid breaking the code. Therefore, this part of the code returns the path of the folder containing config.ru. Now, we know that our main application file is in the same folder as config.ru; therefore, we do the following:

File.join(File.dirname(__FILE__), 'app.rb')

This returns the complete file path of app.rb, so line 7 loads the main application file into memory. Now, all we need to do is execute app.rb to start the application, as follows:

9  Todo.start!

Note that the start! method is not defined by us in the Todo class in app.rb. It is inherited from the Sinatra::Base class. It starts the application and listens for incoming requests. In short, config.ru checks the availability of all the gems and their dependencies, sets the environment variables, and starts the application.

The easiest file to write is the Gemfile. It has no complex code or logic; it just contains a list of gems and their version details.

Gemfile

In the Gemfile, we need to specify the source from where the gems will be downloaded and the list of the gems. Therefore, let's write a Gemfile with the following lines:

1  source 'https://rubygems.org'
2  gem 'bundler', '1.6.0'
3  gem 'sinatra', '1.4.4'

The first line specifies the source. The https://rubygems.org website is a trusted place to download gems, and it hosts a large collection of gems. We can view this page, search for gems that we want to use, read the documentation, and select the exact version for our application. Generally, the latest stable version of bundler is used. Therefore, we search the site for bundler and find out its version. We do the same for the Sinatra gem.

Summary

In this article, you learned how to build a Hello World program using Sinatra.

Resources for Article:

Further resources on this subject:
- Getting Ready for RubyMotion [article]
- Quick start - your first Sinatra application [article]
- Building tiny Web-applications in Ruby using Sinatra [article]

Advanced React

Packt
12 Apr 2016
7 min read
In this article by Sven A. Robbestad, author of ReactJS Blueprints, we will cover the following topics:

- Understanding Webpack
- Adding Redux to your ReactJS app
- Understanding Redux reducers, actions, and the store

Introduction

Understanding the tools you use and the libraries you include in your web app is important for building an efficient web application. In this article, we'll look at some of the difficult parts of modern web development with ReactJS, including Webpack and Redux.

Webpack is an important tool for modern web developers. It is a module bundler and works by bundling all modules and files within the context of your base folder. Any file within this context is considered a module, and an attempt will be made to bundle it. The only exceptions are files placed in the designated vendor folders, which by default are node_modules and web_modules; files in these folders are bundled only when they are explicitly required in your code.

Redux is an implementation of the Flux pattern. Flux describes how data should flow through your app. Since the birth of the pattern, there's been an explosion in the number of libraries that attempt to execute on the idea. It's safe to say that while many have enjoyed moderate success, none has been as successful as Redux.

Configuring Webpack

You can configure Webpack to do almost anything you want, including replacing the current code loaded in your browser with the updated code, while preserving the state of the app. Webpack is configured by writing a special configuration file, usually called webpack.config.js. In this file, you specify the entry and output parameters, plugins, module loaders, and various other configuration parameters. A very basic config file looks like this:

var webpack = require('webpack');
module.exports = {
  entry: [
    './entry'
  ],
  output: {
    path: './',
    filename: 'bundle.js'
  }
};

It's executed by issuing this command from the command line:

webpack --config webpack.config.js

You can even drop the config parameter, as Webpack will automatically look for the presence of webpack.config.js if it is not specified. In order to convert the source files before bundling, you use module loaders. Adding this section to the Webpack config file will ensure that the babel-loader module converts JavaScript 2015 code to ECMAScript 5:

module: {
  loaders: [{
    test: /\.jsx?$/,
    loader: 'babel-loader',
    exclude: /node_modules/,
    query: {
      presets: ['es2015','react']
    }
  }]
}

The first option (required), test, is a regex match that tells Webpack which files the loader operates on. The regex tells Webpack to look for files with a period followed by the letters js and then an optional letter (?) before the end ($). This makes sure that the loader reads both plain JavaScript files and JSX files. The second option (required), loader, is the name of the package that we'll use to convert the code. The third option (optional), exclude, is another regex used to explicitly ignore a set of folders or files. The final option (optional), query, contains special configuration options for Babel. The recommended way to set these is actually in a special file called .babelrc. This file will be picked up automatically by Babel when transpiling files.

Adding Redux to your ReactJS app

When ReactJS was first introduced to the public in late 2013/early 2014, you would often hear it mentioned together with functional programming.
However, there's no inherent requirement to write functional code when writing ReactJS code, and JavaScript itself, being a multi-paradigm language, is neither strictly functional nor strictly imperative. Redux chose the functional approach, and it's quickly gaining traction as the superior Flux implementation. There are a number of benefits to choosing a functional approach, which are as follows:

- No side effects allowed, that is, the operations are stateless
- Always returns the same output for a given input
- Ideal for creating recursive operations
- Ideal for parallel execution
- Easy to establish the single source of truth
- Easy to debug
- Easy to persist the store state for a faster development cycle
- Easy to create functionality such as undo and redo
- Easy to inject the store state for server rendering

The concept of stateless operations is possibly the number one benefit, as it makes it very easy to reason about the state of your application. This is, however, not the idiomatic Reflux approach, because Reflux is actually designed to create many stores and have the children listen to changes separately. Application state is the single most difficult part of any application, and every implementation of Flux has attempted to solve this problem. Redux solves it by not actually doing Flux at all; instead, it is an amalgamation of the ideas of Flux and the functional programming language Elm. There are three parts to Redux: actions, reducers, and the global store.

The store

In Redux, there is only one global store. It is an object that holds the state of your entire application. You create a store by passing your root reducing function (or reducer, for short) to a method called createStore. Rather than creating more stores, you use a concept called reducer composition to split data handling logic. You will then need to use a function called combineReducers to create a single root reducer. The createStore function is provided by Redux and is usually called once in the root of your app (or your store file). It is then passed on to your app and then propagated to the app's children. The only way to change the state of the store is to dispatch an action on it. This is not the same as a Flux dispatcher, because Redux doesn't have one. You can also subscribe to changes from the store in order to update your components when the store changes state.

Actions

An action is an object that represents an intention to change the state. It must have a type field that indicates what kind of action is being performed. Action types can be defined as constants and imported from other modules. Apart from this requirement, the structure of the object is entirely up to you. A basic action object can look like this:

{
  type: 'UPDATE',
  payload: {
    value: "some value"
  }
}

The payload property is optional and can be an object, as we saw earlier, or any other valid JavaScript type, such as a function or a primitive.

Reducers

A reducer is a function that accepts an accumulation and a value and returns a new accumulation. In other words, it returns the next state based on the previous state and an action. It must be a pure function, free of side effects, and it must not mutate the existing state. For smaller apps, it's okay to start with a single reducer, and as your app grows, you split off smaller reducers that manage specific parts of your state tree. This is what's called reducer composition and is the fundamental pattern of building apps with Redux.
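To make the three parts concrete, here is a minimal sketch, not taken from the original article, that wires an UPDATE action, a reducer, and the store together using the Redux API described above. The state shape and the names (update, valueReducer, valueState) are assumptions made purely for illustration.

var redux = require('redux');

// Action type defined as a constant, as suggested above.
var UPDATE = 'UPDATE';

// A simple action creator returning the action object shown earlier.
function update(value) {
  return { type: UPDATE, payload: { value: value } };
}

// A pure reducer: it returns the next state based on the previous state and
// an action, returning a new object instead of mutating the existing state.
function valueReducer(state, action) {
  state = state || { value: null };
  switch (action.type) {
    case UPDATE:
      return { value: action.payload.value };
    default:
      return state;
  }
}

// Reducer composition: combineReducers builds the single root reducer,
// and createStore creates the one global store.
var rootReducer = redux.combineReducers({ valueState: valueReducer });
var store = redux.createStore(rootReducer);

// The only way to change the state is to dispatch an action on the store.
store.subscribe(function () {
  console.log(store.getState()); // { valueState: { value: 'some value' } }
});
store.dispatch(update('some value'));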
You start with a single reducer, and as your app grows, split it off into smaller reducers that manage specific parts of the state tree. Because reducers are just functions, you can control the order in which they are called, pass additional data, or even make reusable reducers for common tasks such as pagination. It's okay to have multiple reducers. In fact, it's encouraged.

Summary

In this article, you learned about Webpack and how to configure it. You also learned about adding Redux to your ReactJS app. Apart from this, you learned about Redux's reducers, actions, and the store.

Resources for Article:

Further resources on this subject:
- Getting Started with React [article]
- Reactive Programming and the Flux Architecture [article]
- Create Your First React Element [article]
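Returning to the Webpack section earlier in this article: the entry/output configuration and the babel-loader section were shown separately. The following is a hedged sketch of how they might be combined in a single webpack.config.js; the file names and presets follow the snippets above, but the exact layout is an assumption rather than the book's own file.

// webpack.config.js - a sketch combining the earlier snippets (Webpack 1.x style).
var webpack = require('webpack');

module.exports = {
  entry: [
    './entry'              // the application entry point shown earlier
  ],
  output: {
    path: './',            // where the bundle is written
    filename: 'bundle.js'
  },
  module: {
    loaders: [{
      test: /\.jsx?$/,          // match .js and .jsx files
      loader: 'babel-loader',   // transpile with Babel
      exclude: /node_modules/,  // skip vendored modules
      query: {
        presets: ['es2015', 'react']
      }
    }]
  }
};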

Using Native SDKs and Libraries in React Native

Emilio Rodriguez
07 Apr 2016
6 min read
When building an app in React Native we may end up needing to use third-party SDKs or libraries. Most of the time, these are only available in their native version and are, therefore, only accessible as Objective-C or Swift libraries in the case of iOS apps, or as Java classes for Android apps. Only in a few cases are these libraries written in JavaScript, and even then, they may need pieces of functionality not available in React Native, such as DOM access or Node.js-specific functionality. In my experience, this is one of the main reasons driving developers, and IT decision makers in general, to run away from React Native when considering a mobile development framework for their production apps. The creators of React Native were fully aware of this potential pitfall and left a door open in the framework to make sure integrating third-party software was not only possible but also quick, powerful, and doable by any non-iOS/Android native developer (i.e. most React Native developers).

As a JavaScript developer, having to write Objective-C or Java code may not be very appealing in the beginning, but once you realize the whole process of integrating a native SDK can take as little as eight lines of code split across two files (one header file and one implementation file), the fear quickly fades away and the feeling of being able to perform even the most complex task in a mobile app starts to take over. Suddenly, the whole power of iOS and Android can be at any React developer's disposal.

To better illustrate how to integrate a third-party SDK we will use one of the easiest payment providers to integrate: Paymill. If we take a look at their site, we notice that only iOS and Android SDKs are available for mobile payments. That would leave out every app written in React Native if it weren't for the ability of this framework to communicate with native modules. For the sake of convenience I will focus this article on the iOS module.

Step 1: Create two native files for our bridge.

We need to create an Objective-C class, which will serve as a bridge between our React code and Paymill's native SDK. Normally, an Objective-C class is made out of two files, a .m and a .h, holding the module implementation and the header for this module, respectively. To create the .h file we can right-click on our project's main folder in Xcode > New File > Header file. In our case, I will call this file PaymillBridge.h. For React Native to communicate with our bridge, we need to make it implement the RCTBridgeModule protocol included in React Native. To do so, we only have to make sure our .h file looks like this:

// PaymillBridge.h
#import "RCTBridgeModule.h"

@interface PaymillBridge : NSObject <RCTBridgeModule>
@end

We can follow a similar process to create the .m file: right-click our project's main folder in Xcode > New File > Objective-C file. The module implementation file should include the RCT_EXPORT_MODULE macro (also provided in any React Native project):

// PaymillBridge.m
@implementation PaymillBridge

RCT_EXPORT_MODULE();

@end

A macro is just a predefined piece of functionality that can be imported just by calling it. This one makes sure React is aware of this module and makes it available for importing in your app. Now we need to expose the method we need in order to use Paymill's services from our JavaScript code. For this example we will be using Paymill's method to generate a token representing a credit card, based on a public key and some credit card details: generateTokenWithPublicKey.
To do so, we need to use another macro provided by React Native: RCT_EXPORT_METHOD.

// PaymillBridge.m
@implementation PaymillBridge

RCT_EXPORT_MODULE();

RCT_EXPORT_METHOD(generateTokenWithPublicKey:(NSString *)publicKey
                  cardDetails:(NSDictionary *)cardDetails
                  callback:(RCTResponseSenderBlock)callback)
{
  //… Implement the call as described in the SDK's documentation …
  callback(@[[NSNull null], token]);
}

@end

In this step we will have to write some Objective-C, but most likely it will be a very simple piece of code using the examples given in the SDK's documentation. One interesting point is how to send data from the native SDK to our React code. To do so, you need to pass a callback, as you can see I did with the last parameter of our exported method. Callbacks in React Native's bridges have to be defined as RCTResponseSenderBlock. Once we do this, we can call this callback passing an array of parameters, which will be sent as parameters to our JavaScript function in React Native (in our case we decided to pass two parameters back: an error set to null, following the error handling conventions of Node.js, and the token generated by Paymill natively).

Step 2: Call our bridge from our React Native code.

Once the module is properly set up, React Native makes it available in our app just by importing it from our JavaScript code:

// PaymentComponent.js
var Paymill = require('react-native').NativeModules.PaymillBridge;

Paymill.generateTokenWithPublicKey(
  '56s4ad6a5s4sd5a6',
  cardDetails,
  function(error, token){
    console.log(token);
  });

NativeModules holds the list of modules we created implementing the RCTBridgeModule protocol. React Native makes them available under the name we chose for our Objective-C class (PaymillBridge in our example). Then, we can call any exported native method as a normal JavaScript method from our React Native component or library.

Going Even Further

That should do it for any basic SDK, but React Native gives developers a lot more control over how to communicate with native modules. For example, we may want to force the module to be run on the main thread. For that we just need to add an extra method to our native module implementation:

// PaymillBridge.m
@implementation PaymillBridge
//...
- (dispatch_queue_t)methodQueue
{
  return dispatch_get_main_queue();
}

Just by adding this method to our PaymillBridge.m, React Native will force all the functionality related to this module to be run on the main thread, which will be needed when running main-thread-only iOS APIs. And there is more: promises, exporting constants, sending events to JavaScript, etc. More complex functionality can be found in the official documentation of React Native; the topics covered in this article, however, should solve 80 percent of the cases when implementing most third-party SDKs.

About the Author

Emilio Rodriguez started working as a software engineer for Sun Microsystems in 2006. Since then, he has focused his efforts on building a number of mobile apps with React Native while contributing to the React Native project. These contributions helped him understand how deep and powerful this framework is.
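On the JavaScript side, the node-style (error, token) callback used in the example above can also be wrapped in a Promise if that fits your codebase better. This is a small hedged sketch, not part of the original article; the module and method names are the ones used in the example.

// A sketch: wrapping the bridged callback API in a Promise (assumes the
// PaymillBridge module from the example above is registered under that name).
var PaymillBridge = require('react-native').NativeModules.PaymillBridge;

function generateToken(publicKey, cardDetails) {
  return new Promise(function (resolve, reject) {
    PaymillBridge.generateTokenWithPublicKey(
      publicKey,
      cardDetails,
      function (error, token) {
        // The first callback argument follows Node.js error conventions.
        if (error) {
          reject(error);
        } else {
          resolve(token);
        }
      });
  });
}

// Usage:
// generateToken('56s4ad6a5s4sd5a6', cardDetails).then(function (token) {
//   console.log(token);
// });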

How To Get Started with Redux in React Native

Emilio Rodriguez
04 Apr 2016
5 min read
In mobile development there is a need for architectural frameworks, but complex frameworks designed to be used in web environments may end up damaging the development process or even the performance of our app. Because of this, some time ago I decided to introduce in all of my React Native projects the leanest framework I have ever worked with: Redux.

Redux is basically a state container for JavaScript apps. It is 100 percent library-agnostic, so you can use it with React, Backbone, or any other view library. Moreover, it is really small and has no dependencies, which makes it an awesome tool for React Native projects.

Step 1: Install Redux in your React Native project.

Redux can be added as an npm dependency to your project. Just navigate to your project's main folder and type:

npm install --save react-redux

By the time this article was written, React Native still depended on React Redux 3.1.0, since later versions depended on React 0.14, which is not 100 percent compatible with React Native. Because of this, you will need to force version 3.1.0 as the version your project depends on.

Step 2: Set up a Redux-friendly folder structure.

Of course, setting up the folder structure for your project is totally up to every developer, but you need to take into account that you will need to maintain a number of actions, reducers, and components. Besides, it's also useful to keep a separate folder for your API and utility functions so these won't be mixed with your app's core functionality. Having this in mind, my preferred layout in any React Native project keeps separate folders for actions, reducers, components, and API and utility functions under the src folder.

Step 3: Create your first action.

In this article we will be implementing a simple login functionality to illustrate how to integrate Redux into React Native. A good point to start this implementation is the action, a basic function called from the component whenever we want the whole state of the app to be changed (that is, changing from the logged-out state into the logged-in state). To keep this example as concise as possible we won't be doing any API calls to a backend; only the pure Redux integration will be explained. Our action creator is a simple function returning an object (the action itself) with a type attribute expressing what happened in the app. No business logic should be placed here; our action creators should be really plain and descriptive.

Step 4: Create your first reducer.

Reducers are the ones in charge of updating the state of the app. Unlike in Flux, Redux only has one store for the whole app, but it will be conveniently name-spaced automatically by Redux once the reducers have been applied. In our example, the user reducer needs to be aware of when the user is logged in. Because of that, it needs to import the LOGIN_SUCCESS constant we defined in our actions before and export a default function, which will be called by Redux every time an action occurs in the app. Redux will automatically pass the current state of the app and the action that occurred. It's up to the reducer to decide whether it needs to modify the state or not, based on the action.type. That's why, almost every time, our reducer will be a function containing a switch statement, which modifies and returns the state based on what action occurred. It's important to state that Redux works with object references to identify when the state is changed. Because of this, the state should be cloned before any modification.
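The action creator and reducer themselves appeared as screenshots in the original post, so here is a hedged reconstruction of what Steps 3 and 4 describe. The LOGIN_SUCCESS constant is mentioned in the text; the file names and the state shape are assumptions for illustration, not the author's exact code.

// actions/user.js - a sketch of the action creator described in Step 3.
export const LOGIN_SUCCESS = 'LOGIN_SUCCESS';

// Plain and descriptive: no business logic, just an object describing what happened.
export function login() {
  return { type: LOGIN_SUCCESS };
}

// reducers/user.js - a sketch of the reducer described in Step 4.
import { LOGIN_SUCCESS } from '../actions/user';

const initialState = { loggedIn: false };

export default function userReducer(state = initialState, action) {
  switch (action.type) {
    case LOGIN_SUCCESS:
      // Clone the state instead of mutating it, since Redux compares object references.
      return Object.assign({}, state, { loggedIn: true });
    default:
      return state;
  }
}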
It's also interesting to know that the action passed to the reducers can contain other attributes apart from type. For example, when doing a more complex login, the user's first name and last name can be added to the action by the action creator and used by the reducer to update the state of the app.

Step 5: Create your component.

This step is almost pure React Native coding. We need a component to trigger the action and to respond to the change of state in the app. In our case it will be a simple View containing a button that disappears when logged in. This is a normal React Native component except for some pieces of Redux boilerplate:

- The three import lines at the top will require everything we need from Redux.
- 'mapStateToProps' and 'mapDispatchToProps' are two functions bound with 'connect' to the component: this lets Redux know that this component needs to be passed a piece of the state (everything under 'userReducers') and all the actions available in the app. Just by doing this, we will have access to the login action (as it is used in onLoginButtonPress) and to the state of the app (as it is used in the !this.props.user.loggedIn statement).

Step 6: Glue it all together from your index.ios.js.

For Redux to apply its magic, some initialization should be done in the main file of your React Native project (index.ios.js). This is pure boilerplate and only done once:

- Redux needs to inject a store holding the app state into the app. To do so, it requires a 'Provider' wrapping the whole app.
- This store is basically a combination of reducers. For this article we only need one reducer, but a full app will include many others, and each of them should be passed into the combineReducers function to be taken into account by Redux whenever an action is triggered.

About the Author

Emilio Rodriguez started working as a software engineer for Sun Microsystems in 2006. Since then, he has focused his efforts on building a number of mobile apps with React Native while contributing to the React Native project. These contributions helped him understand how deep and powerful this framework is.
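The component and index.ios.js from Steps 5 and 6 were also shown as screenshots in the original post. The following is a hedged sketch of what they describe, using react-redux's connect and Provider; the component structure, file paths, and registered app name are assumptions based on the text, not the author's exact code.

// src/components/Login.js - sketch of the component from Step 5.
import React from 'react-native';
import { connect } from 'react-redux';
import { login } from '../actions/user';

const { View, Text, TouchableHighlight } = React;

const Login = React.createClass({
  onLoginButtonPress() {
    this.props.login();
  },
  render() {
    // The button disappears once the user is logged in.
    return (
      <View>
        {!this.props.user.loggedIn &&
          <TouchableHighlight onPress={this.onLoginButtonPress}>
            <Text>Log in</Text>
          </TouchableHighlight>}
      </View>
    );
  }
});

// connect passes a piece of the state and the bound actions as props.
function mapStateToProps(state) {
  return { user: state.userReducers };
}

function mapDispatchToProps(dispatch) {
  return { login: () => dispatch(login()) };
}

export default connect(mapStateToProps, mapDispatchToProps)(Login);

// index.ios.js - sketch of the glue code from Step 6.
import React, { AppRegistry } from 'react-native';
import { Provider } from 'react-redux';
import { createStore, combineReducers } from 'redux';
import userReducer from './src/reducers/user';
import Login from './src/components/Login';

// One store for the whole app, built from a combination of reducers.
const store = createStore(combineReducers({ userReducers: userReducer }));

const App = React.createClass({
  render() {
    return (
      <Provider store={store}>
        <Login />
      </Provider>
    );
  }
});

AppRegistry.registerComponent('MyApp', () => App);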

Making an App with React and Material Design

Soham Kamani
21 Mar 2016
7 min read
There has been much progress in the hybrid app development space, and also in React.js. Currently, almost all hybrid apps use Cordova to build and run web applications on their platform of choice. Although learning React can be a bit of a steep curve, the benefit you get is that you are forced to make your code more modular, and this leads to huge long-term gains. This is great for developing applications for the browser, but when it comes to developing mobile apps, most web apps fall short because they fail to create the "native" experience that so many users know and love. Implementing these features on your own (by playing around with CSS and JavaScript) may work, but it's a huge pain for even something as simple as a material-design-oriented button. Fortunately, there is a library of React components to help us get the look and feel of material design in our web application, which can then be ported to mobile to get a native look and feel. This post will take you through all the steps required to build a mobile app with React and then port it to your phone using Cordova.

Prerequisites and dependencies

Globally, you will require Cordova, which can be installed by executing this line:

npm install -g cordova

Now that this is done, you should make a new directory for your project and set up a build environment to use ES6 and JSX. Currently, webpack is the most popular build system for React, but if that's not to your taste, there are many more build systems out there. Once you have your project folder set up, install React as well as all the other libraries you will be needing:

npm init
npm install --save react react-dom material-ui react-tap-event-plugin

Making your app

Once we're done, the app will consist of a title bar with three tabs below it. If you just want to get your hands dirty, you can find the source files here. Like all web applications, your app will start with an index.html file:

<html>
<head>
  <title>My Mobile App</title>
</head>
<body>
  <div id="app-node">
  </div>
  <script src="bundle.js" ></script>
</body>
</html>

Yup, that's it. If you are using webpack, your CSS will be included in the bundle.js file itself, so there's no need to put "style" tags in either. This is the only HTML you will need for your application. Next, let's take a look at index.js, the entry point to the application code:

//index.js
import React from 'react';
import ReactDOM from 'react-dom';
import App from './app.jsx';

const node = document.getElementById('app-node');

ReactDOM.render(
  <App/>,
  node
);

What this does is grab the main App component and attach it to the app-node DOM node. Drilling down further, let's look at the app.jsx file:

//app.jsx
'use strict';

import React from 'react';
import AppBar from 'material-ui/lib/app-bar';
import MyTabs from './my-tabs.jsx';

let App = React.createClass({
  render : function(){
    return (
      <div>
        <AppBar title="My App" />
        <MyTabs />
      </div>
    );
  }
});

module.exports = App;

Following React's philosophy of structuring our code, we can roughly break our app down into two parts:

- The title bar
- The tabs below

The title bar is more straightforward and is fetched directly from the material-ui library. All we have to do is supply a "title" property to the AppBar component.
MyTabs is another component that we have made, put in a different file because of its complexity:

'use strict';

import React from 'react';
import Tabs from 'material-ui/lib/tabs/tabs';
import Tab from 'material-ui/lib/tabs/tab';
import Slider from 'material-ui/lib/slider';
import Checkbox from 'material-ui/lib/checkbox';
import DatePicker from 'material-ui/lib/date-picker/date-picker';
import injectTapEventPlugin from 'react-tap-event-plugin';

injectTapEventPlugin();

const styles = {
  headline: {
    fontSize: 24,
    paddingTop: 16,
    marginBottom: 12,
    fontWeight: 400
  }
};

const TabsSimple = React.createClass({
  render: () => (
    <Tabs>
      <Tab label="Item One">
        <div>
          <h2 style={styles.headline}>Tab One Template Example</h2>
          <p>
            This is the first tab.
          </p>
          <p>
            This is to demonstrate how easy it is to build mobile apps with react
          </p>
          <Slider name="slider0" defaultValue={0.5}/>
        </div>
      </Tab>
      <Tab label="Item 2">
        <div>
          <h2 style={styles.headline}>Tab Two Template Example</h2>
          <p>
            This is the second tab
          </p>
          <Checkbox name="checkboxName1" value="checkboxValue1" label="Installed Cordova"/>
          <Checkbox name="checkboxName2" value="checkboxValue2" label="Installed React"/>
          <Checkbox name="checkboxName3" value="checkboxValue3" label="Built the app"/>
        </div>
      </Tab>
      <Tab label="Item 3">
        <div>
          <h2 style={styles.headline}>Tab Three Template Example</h2>
          <p>Choose a Date:</p>
          <DatePicker hintText="Select date"/>
        </div>
      </Tab>
    </Tabs>
  )
});

module.exports = TabsSimple;

This file has quite a lot going on, so let's break it down step by step:

- We import all the components that we're going to use in our app. This includes tabs, sliders, checkboxes, and datepickers.
- injectTapEventPlugin is a plugin that we need in order to get tab switching to work.
- We decide the style used for our tabs.
- Next, we make our Tabs React component, which consists of three tabs:
  - The first tab has some text along with a slider.
  - The second tab has a group of checkboxes.
  - The third tab has a pop-up datepicker.

Each component has a few keys that are specific to it (such as the initial value of the slider, the value reference of the checkbox, or the placeholder for the datepicker). There are a lot more properties you can assign, which are specific to each component.

Building your App

For building on Android, you will first need to install the Android SDK. Now that we have all the code in place, all that is left is building the app. For this, make a new directory, start a new Cordova project, and add the Android platform, by running the following in your terminal:

mkdir my-cordova-project
cd my-cordova-project
cordova create .
cordova platform add android

Once the installation is complete, build the code we wrote previously. If you are using the same build system as the source code, you will have only two files, that is, index.html and bundle.min.js. Delete all the files that are currently present in the www folder of your Cordova project and copy those two files there instead. You can check whether your app is working on your computer by running cordova serve and going to the appropriate address in your browser. If all is well, you can build and deploy your app:

cordova build android
cordova run android

This will build and install the app on your Android device (provided it is in debug mode and connected to your computer). Similarly, you can build and install the same app for iOS or Windows (you may need additional tools such as Xcode or .NET for iOS or Windows). You can also use any other framework to build your mobile app.
The Angular framework also comes with its own set of material design components.

About the Author

Soham Kamani is a full-stack web developer and electronics hobbyist. He is especially interested in JavaScript, Python, and IoT.

App Development Using React Native vs. Android/iOS

Manuel Nakamurakare
03 Mar 2016
6 min read
Until two years ago, I had exclusively done Android native development. I had never developed iOS apps, but that changed last year, when my company decided that I had to learn iOS development. I was super excited at first, but all that excitement started to fade away as I started developing our iOS app and quickly saw how my productivity was declining. I realized I had to basically re-learn everything I had learnt in Android: the framework, the tools, the IDE, and so on. I am a person who likes going to meetups, so suddenly I started going to both Android and iOS meetups. I needed to keep up to date with the latest features on both platforms. It was very time-consuming and, at the same time, somewhat frustrating, since I felt my learning pace was not fast enough. Then, React Native for iOS came out. We didn't start using it until mid 2015. We started playing around with it and we really liked it.

What is React Native?

React Native is a technology created by Facebook. It allows developers to use JavaScript in order to create mobile apps for both Android and iOS that look, feel, and are native. A good way to explain how it works is to think of it as a wrapper of native code. There are many components that have been created that basically wrap the native iOS or Android functionality. React Native has been gaining a lot of traction since it was released because it has basically changed the game in many ways.

Two Ecosystems

One reason why mobile development is so difficult and time consuming is the fact that two entirely different ecosystems need to be learned. If you want to develop an iOS app, then you need to learn Swift or Objective-C and Cocoa Touch. If you want to develop Android apps, you need to learn Java and the Android SDK. I have written code in the three languages: Swift, Objective-C, and Java. I don't really want to get into the argument of comparing which of these is better. However, what I can say is that they are different, and learning each of them takes a considerable amount of time. A similar thing happens with the frameworks: Cocoa Touch and the Android SDK. Of course, with each of these frameworks there is also a big bag of other tools, such as testing tools, libraries, packages, and so on. And we are not even considering that developers need to stay up to date with the latest features each ecosystem offers. On the other hand, if you choose to develop with React Native, you will, most of the time, only need to learn one set of tools. It is true that there are many things that you will need to get familiar with: JavaScript, Node, React Native, and so on. However, it is only one set of tools to learn.

Reusability

Reusability is a big thing in software development. Whenever you are able to reuse code, that is a good thing. React Native is not meant to be a write once, run everywhere platform. Whenever you want to build an app for either platform, you have to build a UI that looks and feels native. For this reason, some of the UI code needs to be written according to each platform's best practices and standards. However, there will always be some common UI code that can be shared, together with all the logic. Being able to share code has many advantages: better use of human resources, less code to maintain, less chance of bugs, features on both platforms are more likely to be at parity, and so on.

Learn Once, Write Everywhere

As I mentioned before, React Native is not meant to be a write once, run everywhere platform.
As the Facebook team that created React Native says, the goal is to be a learn once, write everywhere platform. And this totally makes sense. Since all of the code for Android and iOS is written using the same set of tools, it is very easy to imagine having one team of developers building the app for both platforms. This is not something that will usually happen when doing native Android and iOS development, because there are very few developers that do both. I can even go further and say that a team that is developing a web app using React.js will not have a very hard time learning React Native development and starting to develop mobile apps.

Declarative API

When you build applications using React Native, your UI is more predictable and easier to understand, since it has a declarative API as opposed to an imperative one. The difference between these approaches is that when you have an application with different states, you usually need to keep track of all the changes in the UI and modify them. This can become a complex and very unpredictable task as your application grows. This is called imperative programming. If you use React Native, which has declarative APIs, you just need to worry about what the current UI state looks like, without having to keep track of the older ones.

Hot Reloading

The usual developer routine when coding is to test changes every time some code has been written. For this to happen, the application needs to be compiled and then installed in either a simulator or a real device. In the case of React Native, you don't, most of the time, need to recompile the app every time you make a change. You just need to refresh the app in the simulator, emulator, or device and that's it. There is even a feature called Live Reload that will refresh the app automatically every time it detects a change in the code. Isn't that cool?

Open Source

React Native is still a very new technology; it was made open source less than a year ago. It is not perfect yet. It still has some bugs but, overall, I think it is ready to be used in production for most mobile apps. There are still some features that are available in the native frameworks that have not been exposed to React Native, but that is not really a big deal. I can tell from experience that exposing them is somewhat easy to do in case you are familiar with native development. Also, since React Native is open source, there is a big community of developers helping to implement more features, fix bugs, and help people. Most of the time, if you are trying to build something that is common in mobile apps, it is very likely that it has already been built.

As you can see, I am really bullish on React Native. I miss native Android and iOS development, but I really feel excited to be using React Native these days. I really think React Native is a game-changer in mobile development, and I cannot wait until it becomes the go-to platform for mobile development!
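To illustrate the declarative point above with a hedged, simplified sketch (not from the original article, and using plain web React purely for contrast): the imperative approach tracks and mutates the UI by hand, while the declarative approach just describes what the UI should look like for the current state.

// Imperative style (plain browser DOM, shown only for contrast):
// track and mutate the UI by hand as the state changes.
function setLoggedIn(loggedIn) {
  var button = document.getElementById('login-button');
  button.style.display = loggedIn ? 'none' : 'block';
}

// Declarative style (React-like): describe the UI for the current state and
// let the framework work out what actually needs to change.
var React = require('react');

function LoginView(props) {
  return props.loggedIn
    ? React.createElement('div')
    : React.createElement('button', { id: 'login-button' }, 'Log in');
}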

Getting Started with React

Packt
24 Feb 2016
7 min read
In this article by Vipul Amler and Prathamesh Sonpatki, authors of the book ReactJS by Example - Building Modern Web Applications with React, we will look at how web development has seen the rise of the Single Page Application (SPA) in the past couple of years. Early development was simple: reload a complete page to perform a change in the display or perform a user action. The problem with this was the huge round-trip time for the complete request to reach the web server and come back to the client. Then came AJAX, which sent a request to the server and could update parts of the page without reloading the current page. Moving in the same direction, we saw the emergence of SPAs: wrapping up the heavy frontend content and delivering it to the client browser just once, while maintaining a small channel for communication with the server based on any event; this is usually complemented by a thin API on the web server. The growth in such apps has been complemented by JavaScript libraries and frameworks such as Ext JS, KnockoutJS, BackboneJS, AngularJS, EmberJS, and more recently, React and Polymer. Let's take a look at how React fits into this ecosystem and get introduced to it in this article.

What is React?

ReactJS tries to solve the problem from the View layer. It can very well be defined and used as the V in any of the MVC frameworks. It's not opinionated about how it should be used. It creates abstract representations of views. It breaks down parts of the view into Components. These components encompass both the logic to handle the display of the view and the view itself. They can contain data that is used to render the state of the app. To avoid the complexity of interactions and the subsequent render processing required, React does a full render of the application. It maintains a simple flow of work.

React is founded on the idea that DOM manipulation is an expensive operation and should be minimized. It also recognizes that optimizing DOM manipulation by hand will result in a lot of boilerplate code, which is error-prone, boring, and repetitive. React solves this by giving the developer a virtual DOM to render to instead of the actual DOM. It finds the difference between the real DOM and the virtual DOM and conducts the minimum number of DOM operations required to achieve the new state. React is also declarative. When the data changes, React conceptually hits the refresh button and knows to only update the changed parts. This simple flow of data, coupled with dead simple display logic, makes development with ReactJS straightforward and simple to understand.

Who uses React?

If you've used any of the services such as Facebook, Instagram, Netflix, Alibaba, Yahoo, eBay, Khan Academy, Airbnb, Sony, and Atlassian, you've already come across and used React on the Web. In just under a year, React has seen adoption by major Internet companies in their core products. At its first-ever conference, React also announced the development of React Native. React Native allows the development of mobile applications using React. It transpiles React code to native application code, such as Objective-C for iOS applications. At the time of writing this, Facebook already uses React Native in its Groups iOS app.

In this article, we will be following a conversation between two developers, Mike and Shawn. Mike is a senior developer at Adequate Consulting and Shawn has just joined the company. Mike will be mentoring Shawn and pair programming with him.
When Shawn meets Mike and ReactJS

It's a bright day at Adequate Consulting. It's also Shawn's first day at the company. Shawn had joined Adequate to work on its amazing products and also because it uses and develops exciting new technologies. After onboarding at the company, Shelly, the CTO, introduced Shawn to Mike. Mike, a senior developer at Adequate, is a jolly man who loves exploring new things.

"So Shawn, here's Mike", said Shelly. "He'll be mentoring you as well as pairing with you on development. We follow pair programming, so expect a lot of it with him. He's an excellent help." With that, Shelly took her leave.

"Hey Shawn!" Mike began, "are you all set to begin?"

"Yeah, all set! So what are we working on?"

"Well, we are about to start working on an app using https://openlibrary.org/. Open Library is a collection of the world's classic literature. It's an open, editable library catalog for all the books. It's an initiative under https://archive.org/ and lists free book titles. We need to build an app to display the most recent changes in the records on Open Library. You can call this the Activities page. Many people contribute to Open Library. We want to display the changes made by these users to the books: additions of new books, edits, and so on."

"Oh nice! What are we using to build it?"

"Open Library provides us with a neat REST API that we can consume to fetch the data. We are just going to build a simple page that displays the fetched data and formats it for display. I've been experimenting with and using ReactJS for this. Have you used it before?"

"Nope. However, I have heard about it. Isn't it the one from Facebook and Instagram?"

"That's right. It's an amazing way to define our UI. As the app isn't going to have much logic on the server or perform any display there, it's an easy option to use."

"As you've not used it before, let me give you a quick introduction."

"Have you tried services such as JSBin and JSFiddle before?"

"No, but I have seen them."

"Cool. We'll be using one of these, so we don't need anything set up on our machines to start with."

"Let's try it on your machine", Mike instructed. "Fire up http://jsbin.com/?html,output."

"You should see tabs and panes to code in, and their output in the adjacent pane."

"Go ahead and make sure that the HTML, JavaScript, and Output tabs are selected so that you can see their three frames and we are able to edit HTML and JS and see the corresponding output."

"That's nice."

"Yeah, the good thing about this is that you don't need to perform any setup. Did you notice the Auto-run JS option? Make sure it's selected. This option causes JSBin to reload our code and show its output, so that we don't need to keep clicking Run with JS to execute it and see the output."

"Ok."

Requiring the React library

"Alright then! Let's begin. Go ahead and change the title of the page to, say, React JS Example. Next, we need to set up and require the React library in our file."

"React's homepage is located at http://facebook.github.io/react/. Here, we'll also find the downloads available to us so that we can include them in our project. There are different ways to include and use the library. We can make use of Bower or install via npm. We can also just include it as an individual download, directly available from the fb.me domain. There is a development version, which is the full version of the library, as well as a production version, which is its minified version. There is also an add-ons version.
We'll take a look at this later though."

"Let's start by using the development version, which is the unminified version of the React source. Add the following to the file header:"

<script src="http://fb.me/react-0.13.0.js"></script>

"Done."

"Awesome, let's see how this looks."

<!DOCTYPE html>
<html>
<head>
  <script src="http://fb.me/react-0.13.0.js"></script>
  <meta charset="utf-8">
  <title>React JS Example</title>
</head>
<body>
</body>
</html>

Summary

In this article, we started with React and built our first component. In the process, we studied the top-level API of React for constructing components and elements.

Resources for Article:

Further resources on this subject:
- Create Your First React Element [article]
- An Introduction to ReactJs [article]
- An Introduction to Reactive Programming [article]
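The excerpt ends before the first component itself is shown. Purely as a hedged illustration of what such a component might look like with the React 0.13 build loaded above (the component name, props, and markup are assumptions, not the book's code), something like the following could be added in the JSBin JavaScript pane:

// A minimal first component, React 0.13 style (React.render, no ReactDOM yet).
var RecentChange = React.createClass({
  render: function () {
    // Hypothetical markup: a single list item describing one Open Library change.
    return React.createElement('li', null,
      this.props.kind + ' by ' + this.props.author);
  }
});

React.render(
  React.createElement(RecentChange, { kind: 'add-book', author: 'some-contributor' }),
  document.body
);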

Create Your First React Element

Packt
17 Feb 2016
22 min read
As many of you know, creating a simple web application today involves writing the HTML, CSS, and JavaScript code. The reason we use three different technologies is that we want to separate three different concerns:

- Content (HTML)
- Styling (CSS)
- Logic (JavaScript)

This separation works great for creating a web page because, traditionally, we had different people working on different parts of our web page: one person structured the content using HTML and styled it using CSS, and then another person implemented the dynamic behavior of various elements on that web page using JavaScript. It was a content-centric approach.

Today, we mostly don't think of a website as a collection of web pages anymore. Instead, we build web applications that might have only one web page, and that web page does not represent the layout for our content; it represents a container for our web application. Such a web application with a single web page is called (unsurprisingly) a Single Page Application (SPA). You might be wondering, how do we represent the rest of the content in a SPA? Surely, we need to create an additional layout using HTML tags? Otherwise, how does a web browser know what to render? These are all valid questions. Let's take a look at how it works in this article.

Once you load your web page in a web browser, it creates a Document Object Model (DOM) of that web page. A DOM represents your web page in a tree structure, and at this point, it reflects the structure of the layout that you created with only HTML tags. This is what happens regardless of whether you're building a traditional web page or a SPA. The difference between the two is what happens next. If you are building a traditional web page, then you would finish creating your web page's layout. On the other hand, if you are building a SPA, then you would need to start creating additional elements by manipulating the DOM with JavaScript. A web browser provides you with the JavaScript DOM API to do this. You can learn more about it at https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model. However, manipulating (or mutating) the DOM with JavaScript has two issues:

- Your programming style will be imperative if you decide to use the JavaScript DOM API directly. This programming style leads to a code base that is harder to maintain.
- DOM mutations are slow because they cannot be optimized for speed, unlike other JavaScript code.

Luckily, React solves both these problems for us.

Understanding virtual DOM

Why do we need to manipulate the DOM in the first place? Because our web applications are not static. They have a state represented by the user interface (UI) that a web browser renders, and that state can be changed when an event occurs. What kind of events are we talking about? There are two types of events that we're interested in:

- User events: When a user types, clicks, scrolls, resizes, and so on
- Server events: When an application receives data or an error from a server, among others

What happens while handling these events? Usually, we update the data that our application depends on, and that data represents the state of our data model. In turn, when the state of our data model changes, we might want to reflect this change by updating the state of our UI.
Looks like what we want is a way of syncing two different states: the UI state and the data model state. We want one to react to the changes in the other and vice versa. How can we achieve this? One of the ways to sync your application's UI state with an underlying data model's state is two-way data binding. There are different types of two-way data binding. One of them is key-value observing (KVO), which is used in Ember.js, Knockout, Backbone, and iOS, among others. Another one is dirty checking, which is used in Angular. Instead of two-way data binding, React offers a different solution called the virtual DOM. The virtual DOM is a fast, in-memory representation of the real DOM, and it's an abstraction that allows us to treat JavaScript and DOM as if they were reactive. Let's take a look at how it works: Whenever the state of your data model changes, the virtual DOM and React will rerender your UI to a virtual DOM representation. React then calculates the difference between the two virtual DOM representations: the previous virtual DOM representation that was computed before the data was changed and the current virtual DOM representation that was computed after the data was changed. This difference between the two virtual DOM representations is what actually needs to be changed in the real DOM. React updates only what needs to be updated in the real DOM. The process of finding a difference between the two representations of the virtual DOM and rerendering only the updated patches in a real DOM is fast. Also, the best part is that, as a React developer, you don't need to worry about what actually needs to be rerendered. React allows you to write your code as if you were rerendering the entire DOM every time your application's state changes. If you would like to learn more about the virtual DOM, the rationale behind it, and how it can be compared to data binding, then I would strongly recommend that you watch this very informative talk by Pete Hunt from Facebook at https://www.youtube.com/watch?v=-DX3vJiqxm4. Now that we've learned about the virtual DOM, let's mutate a real DOM by installing React and creating our first React element. Installing React To start using the React library, we need to first install it. I am going to show you two ways of doing this: the simplest one and the one using the npm install command. The simplest way is to add the <script> tag to our ~/snapterest/build/index.html file: For the development version of React, add the following line: <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.0-beta3/react.js"></script> For the production version of React, add the following line: <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.0-beta3/react.min.js"></script> For our project, we'll be using the development version of React. At the time of writing, the latest version of the React library is 0.14.0-beta3. Over time, React gets updated, so make sure you use the latest version that is available to you, unless it introduces breaking changes that are incompatible with the code samples provided in this article. Visit https://github.com/fedosejev/react-essentials to learn about any compatibility issues between the code samples and the latest version of React. We all know that Browserify allows us to import all the dependency modules for our application using the require() function.
We'll be using require() to import the React library as well, which means that, instead of adding a <script> tag to our index.html, we'll be using the npm install command to install React: Navigate to the ~/snapterest/ directory and run this command: npm install --save react@0.14.0-beta3 react-dom@0.14.0-beta3 Then, open the ~/snapterest/source/app.js file in your text editor and import the React and ReactDOM libraries into the React and ReactDOM variables, respectively: var React = require('react'); var ReactDOM = require('react-dom'); The react package contains methods that are concerned with the key idea behind React, that is, describing what you want to render in a declarative way. On the other hand, the react-dom package offers methods that are responsible for rendering to the DOM. You can read more about why developers at Facebook think it's a good idea to separate the React library into two packages at https://facebook.github.io/react/blog/2015/07/03/react-v0.14-beta-1.html#two-packages. Now we're ready to start using the React library in our project. Next, let's create our first React Element! Creating React Elements with JavaScript We'll start by familiarizing ourselves with some fundamental React terminology. It will help us build a clear picture of what the React library is made of. This terminology will most likely update over time, so keep an eye on the official documentation at http://facebook.github.io/react/docs/glossary.html. Just like the DOM is a tree of nodes, React's virtual DOM is a tree of React nodes. One of the core types in React is called ReactNode. It's a building block for a virtual DOM, and it can be any one of these core types: ReactElement: This is the primary type in React. It's a light, stateless, immutable, virtual representation of a DOM Element. ReactText: This is a string or a number. It represents textual content and it's a virtual representation of a Text Node in the DOM. ReactElements and ReactTexts are ReactNodes. An array of ReactNodes is called a ReactFragment. You will see examples of all of these in this article. Let's start with an example of a ReactElement: Add the following code to your ~/snapterest/source/app.js file: var reactElement = React.createElement('h1'); ReactDOM.render(reactElement, document.getElementById('react-application')); Now your app.js file should look exactly like this: var React = require('react'); var ReactDOM = require('react-dom'); var reactElement = React.createElement('h1'); ReactDOM.render(reactElement, document.getElementById('react-application')); Navigate to the ~/snapterest/ directory and run Gulp's default task: gulp You will see the following output: Starting 'default'... Finished 'default' after 1.73 s Navigate to the ~/snapterest/build/ directory, and open index.html in a web browser. You will see a blank web page. Open Developer Tools in your web browser and inspect the HTML markup for your blank web page. You should see this line, among others: <h1 data-reactid=".0"></h1> Well done! You've just created your first React element. Let's see exactly how we did it. The entry point to the React library is the React object. This object has a method called createElement() that takes three parameters: type, props, and children: React.createElement(type, props, children); Let's take a look at each parameter in more detail. The type parameter The type parameter can be either a string or a ReactClass: A string could be an HTML tag name such as 'div', 'p', 'h1', and so on.
React supports all the common HTML tags and attributes. For a complete list of HTML tags and attributes supported by React, you can refer to http://facebook.github.io/react/docs/tags-and-attributes.html. A ReactClass is created via the React.createClass() method. The type parameter describes how an HTML tag or a ReactClass is going to be rendered. In our example, we're rendering the h1 HTML tag. The props parameter The props parameter is a JavaScript object passed from a parent element to a child element (and not the other way around) with some properties that are considered immutable, that is, those that should not be changed. While creating DOM elements with React, we can pass the props object with properties that represent the HTML attributes such as class, style, and so on. For example, run the following code: var React = require('react'); var ReactDOM = require('react-dom'); var reactElement = React.createElement('h1', { className: 'header' }); ReactDOM.render(reactElement, document.getElementById('react-application')); The preceding code will create an h1 HTML element with a class attribute set to header: <h1 class="header" data-reactid=".0"></h1> Notice that we name our property className rather than class. The reason is that the class keyword is reserved in JavaScript. If you use class as a property name, it will be ignored by React, and a helpful warning message will be printed on the web browser's console: Warning: Unknown DOM property class. Did you mean className? Use className instead. You might be wondering what this data-reactid=".0" attribute is doing in our h1 tag? We didn't pass it to our props object, so where did it come from? It is added and used by React to track the DOM nodes; it might be removed in a future version of React. The children parameter The children parameter describes what child elements this element should have, if any. A child element can be any type of ReactNode: a virtual DOM element represented by a ReactElement, a string or a number represented by a ReactText, or an array of other ReactNodes, which is also called ReactFragment. Let's take a look at this example: var React = require('react'); var ReactDOM = require('react-dom'); var reactElement = React.createElement('h1', { className: 'header' }, 'This is React'); ReactDOM.render(reactElement, document.getElementById('react-application')); The preceding code will create an h1 HTML element with a class attribute and a text node, This is React: <h1 class="header" data-reactid=".0">This is React</h1> The h1 tag is represented by a ReactElement, while the This is React string is represented by a ReactText. Next, let's create a React element with a number of other React elements as its children: var React = require('react'); var ReactDOM = require('react-dom');   var h1 = React.createElement('h1', { className: 'header', key: 'header' }, 'This is React'); var p = React.createElement('p', { className: 'content', key: 'content' }, "And that's how it works."); var reactFragment = [ h1, p ]; var section = React.createElement('section', { className: 'container' }, reactFragment);   ReactDOM.render(section, document.getElementById('react-application')); We've created three React elements: h1, p, and section. h1 and p both have child text nodes, "This is React" and "And that's how it works.", respectively. The section has a child that is an array of two ReactElements, h1 and p, called reactFragment. This is also an array of ReactNodes.
Each ReactElement in the reactFragment array must have a key property that helps React to identify that ReactElement. As a result, we get the following HTML markup: <section class="container" data-reactid=".0">   <h1 class="header" data-reactid=".0.$header">This is React</h1>   <p class="content" data-reactid=".0.$content">And that's how it works.</p> </section> Now we understand how to create React elements. What if we want to create a number of React elements of the same type? Does it mean that we need to call React.createElement('type') over and over again for each element of the same type? We can, but we don't need to because React provides us with a factory function called React.createFactory(). A factory function is a function that creates other functions. This is exactly what React.createFactory(type) does: it creates a function that produces a ReactElement of a given type. Consider the following example: var React = require('react'); var ReactDOM = require('react-dom');   var listItemElement1 = React.createElement('li', { className: 'item-1', key: 'item-1' }, 'Item 1'); var listItemElement2 = React.createElement('li', { className: 'item-2', key: 'item-2' }, 'Item 2'); var listItemElement3 = React.createElement('li', { className: 'item-3', key: 'item-3' }, 'Item 3');   var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ]; var listOfItems = React.createElement('ul', { className: 'list-of-items' }, reactFragment);   ReactDOM.render(listOfItems, document.getElementById('react-application')); The preceding example produces this HTML: <ul class="list-of-items" data-reactid=".0">   <li class="item-1" data-reactid=".0.$item-1">Item 1</li>   <li class="item-2" data-reactid=".0.$item-2">Item 2</li>   <li class="item-3" data-reactid=".0.$item-3">Item 3</li> </ul> We can simplify it by first creating a factory function: var React = require('react'); var ReactDOM = require('react-dom'); var createListItemElement = React.createFactory('li'); var listItemElement1 = createListItemElement({ className: 'item-1', key: 'item-1' }, 'Item 1'); var listItemElement2 = createListItemElement({ className: 'item-2', key: 'item-2' }, 'Item 2'); var listItemElement3 = createListItemElement({ className: 'item-3', key: 'item-3' }, 'Item 3'); var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ]; var listOfItems = React.createElement('ul', { className: 'list-of-items' }, reactFragment); ReactDOM.render(listOfItems, document.getElementById('react-application')); In the preceding example, we're first calling the React.createFactory() function and passing a li HTML tag name as a type parameter. Then, the React.createFactory() function returns a new function that we can use as a convenient shorthand to create elements of type li. We store a reference to this function in a variable called createListItemElement. Then, we call this function three times, and each time we only pass the props and children parameters, which are unique for each element. Notice that React.createElement() and React.createFactory() both expect the HTML tag name string (such as li) or the ReactClass object as a type parameter. React provides us with a number of built-in factory functions to create the common HTML tags. You can call them from the React.DOM object; for example, React.DOM.ul(), React.DOM.li(), React.DOM.div(), and so on. 
Using them, we can simplify our previous example even further: var React = require('react'); var ReactDOM = require('react-dom');   var listItemElement1 = React.DOM.li({ className: 'item-1', key: 'item-1' }, 'Item 1'); var listItemElement2 = React.DOM.li({ className: 'item-2', key: 'item-2' }, 'Item 2'); var listItemElement3 = React.DOM.li({ className: 'item-3', key: 'item-3' }, 'Item 3');   var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ]; var listOfItems = React.DOM.ul({ className: 'list-of-items' }, reactFragment);   ReactDOM.render(listOfItems, document.getElementById('react-application')); Now we know how to create a tree of ReactNodes. However, there is one important line of code that we need to discuss before we can progress further: ReactDOM.render(listOfItems, document.getElementById('react-application')); As you might have already guessed, it renders our ReactNode tree to the DOM. Let's take a closer look at how it works. Rendering React Elements The ReactDOM.render() method takes three parameters: ReactElement, a regular DOMElement, and a callback function: ReactDOM.render(ReactElement, DOMElement, callback); ReactElement is a root element in the tree of ReactNodes that you've created. A regular DOMElement is a container DOM node for that tree. The callback is a function executed after the tree is rendered or updated. It's important to note that if this ReactElement was previously rendered to a parent DOM Element, then ReactDOM.render() will perform an update on the already rendered DOM tree and only mutate the DOM as it is necessary to reflect the latest version of the ReactElement. This is why a virtual DOM requires fewer DOM mutations. So far, we've assumed that we're always creating our virtual DOM in a web browser. This is understandable because, after all, React is a user interface library, and all the user interfaces are rendered in a web browser. Can you think of a case when rendering a user interface on a client would be slow? Some of you might have already guessed that I am talking about the initial page load. The problem with the initial page load is the one I mentioned at the beginning of this article—we're not creating static web pages anymore. Instead, when a web browser loads our web application, it receives only the bare minimum HTML markup that is usually used as a container or a parent element for our web application. Then, our JavaScript code creates the rest of the DOM, but in order for it to do so, it often needs to request extra data from the server. However, getting this data takes time. Once this data is received, our JavaScript code starts to mutate the DOM. We know that DOM mutations are slow. How can we solve this problem? The solution is somewhat unexpected. Instead of mutating the DOM in a web browser, we mutate it on a server. Just like we would with our static web pages. A web browser will then receive HTML that fully represents a user interface of our web application at the time of the initial page load. Sounds simple, but we can't mutate the DOM on a server because it doesn't exist outside a web browser. Or can we? We have a virtual DOM that is just JavaScript, and as you know, using Node.js we can run JavaScript on a server. So technically, we can use the React library on a server, and we can create our ReactNode tree on a server. The question is, how can we render it to a string that we can send to a client?
React has a method called ReactDOMServer.renderToString() just to do this: var ReactDOMServer = require('react-dom/server'); ReactDOMServer.renderToString(ReactElement); It takes a ReactElement as a parameter and renders it to its initial HTML. Not only is this faster than mutating a DOM on a client, but it also improves the Search Engine Optimization (SEO) of your web application. Speaking of generating static web pages, we can do this too with React: var ReactDOMServer = require('react-dom/server'); ReactDOMServer.renderToStaticMarkup(ReactElement); Similar to ReactDOMServer.renderToString(), this method also takes a ReactElement as a parameter and outputs an HTML string. However, it doesn't create the extra DOM attributes that React uses internally, so it produces shorter HTML strings that we can transfer over the wire quickly. Now you know not only how to create a virtual DOM tree using React elements, but you also know how to render it to a client and server. Our next question is whether we can do it quickly and in a more visual manner. Creating React Elements with JSX When we build our virtual DOM by constantly calling the React.createElement() method, it becomes quite hard to visually translate these multiple function calls into a hierarchy of HTML tags. Don't forget that, even though we're working with a virtual DOM, we're still creating a structured layout for our content and user interface. Wouldn't it be great to be able to visualize that layout easily by simply looking at our React code? JSX is an optional HTML-like syntax that allows us to create a virtual DOM tree without using the React.createElement() method. Let's take a look at the previous example that we created without JSX: var React = require('react'); var ReactDOM = require('react-dom');   var listItemElement1 = React.DOM.li({ className: 'item-1', key: 'item-1' }, 'Item 1'); var listItemElement2 = React.DOM.li({ className: 'item-2', key: 'item-2' }, 'Item 2'); var listItemElement3 = React.DOM.li({ className: 'item-3', key: 'item-3' }, 'Item 3');   var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ]; var listOfItems = React.DOM.ul({ className: 'list-of-items' }, reactFragment);   ReactDOM.render(listOfItems, document.getElementById('react-application')); Translate this to the one with JSX: var React = require('react'); var ReactDOM = require('react-dom');   var listOfItems = <ul className="list-of-items">                     <li className="item-1">Item 1</li>                     <li className="item-2">Item 2</li>                     <li className="item-3">Item 3</li>                   </ul>; ReactDOM.render(listOfItems, document.getElementById('react-application'));   As you can see, JSX allows us to write HTML-like syntax in our JavaScript code. More importantly, we can now clearly see what our HTML layout will look like once it's rendered. JSX is a convenience tool and it comes with a price in the form of an additional transformation step. Transformation of the JSX syntax into valid JavaScript syntax must happen before our "invalid" JavaScript code is interpreted. We know that the babelify module transforms our JSX syntax into a JavaScript one.
This transformation happens every time we run our default task from gulpfile.js: gulp.task('default', function () {   return browserify('./source/app.js')         .transform(babelify)         .bundle()         .pipe(source('snapterest.js'))         .pipe(gulp.dest('./build/')); }); As you can see, the .transform(babelify) function call transforms JSX into JavaScript before bundling it with the other JavaScript code. To test our transformation, run this command: gulp Then, navigate to the ~/snapterest/build/ directory, and open index.html in a web browser. You will see a list of three items. The React team has built an online JSX Compiler that you can use to test your understanding of how JSX works at http://facebook.github.io/react/jsx-compiler.html. Using JSX might feel very unusual in the beginning, but it can become a very intuitive and convenient tool to use. The best part is that you can choose whether to use it or not. I found that JSX saves me development time, so I chose to use it in this project that we're building. If you choose not to use it, then I believe that you have learned enough in this article to be able to translate the JSX syntax into JavaScript code with React.createElement() function calls. If you have a question about what we have discussed in this article, then you can refer to https://github.com/fedosejev/react-essentials and create a new issue. Summary We started this article by discussing the issues with single web page applications and how they can be addressed. Then, we learned what a virtual DOM is and how React allows us to build it. We also installed React and created our first React element using only JavaScript. Then, we also learned how to render React elements in a web browser and on a server. Finally, we looked at a simpler way of creating React elements with JSX. Resources for Article: Further resources on this subject: Changing Views [article] Introduction to Akka [article] ECMAScript 6 Standard [article]

Changing Views

Packt
15 Feb 2016
25 min read
In this article by Christopher Pitt, author of the book React Components, we will learn how to change sections without reloading the page. We'll use this knowledge to create the public pages of the website our CMS is meant to control. (For more resources related to this topic, see here.) Location, location, and location Before we can learn about alternatives to reloading pages, let's take a look at how the browser manages reloads. You've probably encountered the window object. It's a global catch-all object for browser-based functionality and state. It's also the default this scope in any HTML page. We've even accessed window before. When we rendered to document.body or used document.querySelector, the window object was assumed. It's the same as if we were to call window.document.querySelector. Most of the time document is the only property we need. That doesn't mean it's the only property useful to us. Try the following, in the console: console.log(window.location); You should see something similar to: Location {     hash: ""     host: "127.0.0.1:3000"     hostname: "127.0.0.1"     href: "http://127.0.0.1:3000/examples/login.html"     origin: "http://127.0.0.1:3000"     pathname: "/examples/login.html"     port: "3000"     ... } If we were trying to work out which components to show based on the browser URL, this would be an excellent place to start. Not only can we read from this object, but we can also write to it: <script>     window.location.href = "http://material-ui.com"; </script> Putting this in an HTML page or entering that line of JavaScript in the console will redirect the browser to material-ui.com. It's the same if you click on the link. And if it's to a different page (than the one the browser is pointing at), then it will cause a full page reload. A bit of history So how does this help us? We're trying to avoid full page reloads, after all. Let's experiment with this object. Let's see what happens when we add something like #page-admin to the URL: Adding #page-admin to the URL leads to the window.location.hash property being populated with that value. What's more, changing the hash value won't reload the page! It's the same as if we clicked on a link that had that hash in the href attribute. We can modify it without causing full page reloads, and each modification will store a new entry in the browser history. Using this trick, we can step through a number of different "states" without reloading the page, and be able to backtrack through each of them with the browser's back button. Using browser history Let's put this trick to use in our CMS.
First, let's add a couple functions to our Nav component: export default (props) => {     // ...define class names       var redirect = (event, section) => {         window.location.hash = `#${section}`;         event.preventDefault();     }       return <div className={drawerClassNames}>         <header className="demo-drawer-header">             <img src="images/user.jpg"                  className="demo-avatar" />         </header>         <nav className={navClassNames}>             <a className="mdl-navigation__link"                href="/examples/login.html"                onClick={(e) => redirect(e, "login")}>                 <i className={buttonIconClassNames}                    role="presentation">                     lock                 </i>                 Login             </a>             <a className="mdl-navigation__link"                href="/examples/page-admin.html"                onClick={(e) => redirect(e, "page-admin")}>                 <i className={buttonIconClassNames}                    role="presentation">                     pages                 </i>                 Pages             </a>         </nav>     </div>; }; We add an onClick attribute to our navigation links. We've created a special function that will change window.location.hash and prevent the default full page reload behavior the links would otherwise have caused. This is a neat use of arrow functions, but we're ultimately creating three new functions in each render call. Remember that this can be expensive, so it's best to move the function creation out of render. We'll replace this shortly. It's also interesting to see template strings in action. Instead of "#" + section, we can use `#${section}` to interpolate the section name. It's not as useful in small strings, but becomes increasingly useful in large ones. Clicking on the navigation links will now change the URL hash. 
We can add to this behavior by rendering different components when the navigation links are clicked: import React from "react"; import ReactDOM from "react-dom"; import Component from "src/component"; import Login from "src/login"; import Backend from "src/backend"; import PageAdmin from "src/page-admin";   class Nav extends Component {     render() {         // ...define class names           return <div className={drawerClassNames}>             <header className="demo-drawer-header">                 <img src="images/user.jpg"                      className="demo-avatar" />             </header>             <nav className={navClassNames}>                 <a className="mdl-navigation__link"                    href="/examples/login.html"                    onClick={(e) => this.redirect(e, "login")}>                     <i className={buttonIconClassNames}                        role="presentation">                         lock                     </i>                     Login                 </a>                 <a className="mdl-navigation__link"                    href="/examples/page-admin.html"                    onClick={(e) => this.redirect(e, "page-admin")}>                     <i className={buttonIconClassNames}                        role="presentation">                         pages                     </i>                     Pages                 </a>             </nav>         </div>;     }       redirect(event, section) {         window.location.hash = `#${section}`;           var component = null;           switch (section) {             case "login":                 component = <Login />;                 break;             case "page-admin":                 var backend = new Backend();                 component = <PageAdmin backend={backend} />;                 break;         }           var layoutClassNames = [             "demo-layout",             "mdl-layout",             "mdl-js-layout",             "mdl-layout--fixed-drawer"         ].join(" ");           ReactDOM.render(             <div className={layoutClassNames}>                 <Nav />                 {component}             </div>,             document.querySelector(".react")         );           event.preventDefault();     } };   export default Nav; We've had to convert the Nav function to a Nav class. We want to create the redirect method outside of render (as that is more efficient) and also isolate the choice of which component to render. Using a class also gives us a way to name and reference Nav, so we can create a new instance to overwrite it from within the redirect method. It's not ideal packaging this kind of code within a component, so we'll clean that up in a bit. We can now switch between different sections without full page reloads. There is one problem still to solve. When we use the browser back button, the components don't change to reflect the component that should be shown for each hash. We can solve this in a couple of ways. 
The first approach we can try is checking the hash frequently: componentDidMount() {     var hash = window.location.hash;       setInterval(() => {         if (hash !== window.location.hash) {             hash = window.location.hash;             this.redirect(null, hash.slice(1), true);         }     }, 100); }   redirect(event, section, respondingToHashChange = false) {     if (!respondingToHashChange) {         window.location.hash = `#${section}`;     }       var component = null;       switch (section) {         case "login":             component = <Login />;             break;         case "page-admin":             var backend = new Backend();             component = <PageAdmin backend={backend} />;             break;     }       var layoutClassNames = [         "demo-layout",         "mdl-layout",         "mdl-js-layout",         "mdl-layout--fixed-drawer"     ].join(" ");       ReactDOM.render(         <div className={layoutClassNames}>             <Nav />             {component}         </div>,         document.querySelector(".react")     );       if (event) {         event.preventDefault();     } } Our redirect method has an extra parameter, to apply the new hash whenever we're not responding to a hash change. We've also wrapped the call to event.preventDefault in case we don't have a click event to work with. Other than those changes, the redirect method is the same. We've also added a componentDidMount method, in which we have a call to setInterval. We store the initial window.location.hash and check 10 times a second to see if it has changed. The hash value is #login or #page-admin, so we slice the first character off and pass the rest to the redirect method. Try clicking on the different navigation links, and then use the browser back button. The second option is to use the newer pushState method on the window.history object, together with the popstate event on window. They're not very well supported yet, so you need to be careful to handle older browsers or be sure you don't need to handle them. You can learn more about pushState and popstate at https://developer.mozilla.org/en-US/docs/Web/API/History_API. Using a router Our hash code is functional but invasive. We shouldn't be calling the render method from inside a component (at least not one we own). So instead, we're going to use a popular router to manage this stuff for us.
Download it with the following: $ npm install react-router --save Then we need to join login.html and page-admin.html back into the same file: <!DOCTYPE html> <html>     <head>         <script src="/node_modules/babel-core/browser.js"></script>         <script src="/node_modules/systemjs/dist/system.js"></script>         <script src="https://storage.googleapis.com/code.getmdl.io/1.0.6/material.min.js"></script>         <link rel="stylesheet" href="https://storage.googleapis.com/code.getmdl.io/1.0.6/material.indigo-pink.min.css" />         <link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons" />         <link rel="stylesheet" href="admin.css" />     </head>     <body class="         mdl-demo         mdl-color--grey-100         mdl-color-text--grey-700         mdl-base">         <div class="react"></div>         <script>             System.config({                 "transpiler": "babel",                 "map": {                     "react": "/examples/react/react",                     "react-dom": "/examples/react/react-dom",                     "router": "/node_modules/react-router/umd/ReactRouter"                 },                 "baseURL": "../",                 "defaultJSExtensions": true             });               System.import("examples/admin");         </script>     </body> </html> Notice how we've added the ReactRouter file to the import map? We'll use that in admin.js. First, let's define our layout component: var App = function(props) {     var layoutClassNames = [         "demo-layout",         "mdl-layout",         "mdl-js-layout",         "mdl-layout--fixed-drawer"     ].join(" ");       return (         <div className={layoutClassNames}>             <Nav />             {props.children}         </div>     ); }; This creates the page layout we've been using and allows a dynamic content component. Every React component has a this.props.children property (or props.children in the case of a stateless component), which is an array of nested components. For example, consider this component: <App>     <Login /> </App> Inside the App component, this.props.children will be an array with a single item—an instance of the Login. Next, we'll define handler components for the two sections we want to route: var LoginHandler = function() {     return <Login />; }; var PageAdminHandler = function() {     var backend = new Backend();     return <PageAdmin backend={backend} />; }; We don't really need to wrap Login in LoginHandler but I've chosen to do it to be consistent with PageAdminHandler. PageAdmin expects an instance of Backend, so we have to wrap it as we see in this example. Now we can define routes for our CMS: ReactDOM.render(     <Router history={browserHistory}>         <Route path="/" component={App}>             <IndexRoute component={LoginHandler} />             <Route path="login" component={LoginHandler} />             <Route path="page-admin" component={PageAdminHandler} />         </Route>     </Router>,     document.querySelector(".react") ); There's a single root route, for the path /. It creates an instance of App, so we always get the same layout. Then we nest a login route and a page-admin route. These create instances of their respective components. We also define an IndexRoute so that the login page will be displayed as a landing page. 
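To make the nesting clearer, here is a rough sketch (illustrative only, not code to add to the project) of which components the router effectively renders for each path, given the route configuration above:

// "/"            -> <App><LoginHandler /></App>      (via IndexRoute)
// "/login"       -> <App><LoginHandler /></App>
// "/page-admin"  -> <App><PageAdminHandler /></App>
// In each case, the matched handler is passed to App as props.children.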
We need to remove our custom history code from Nav: import React from "react"; import ReactDOM from "react-dom"; import { Link } from "router";   export default (props) => {     // ...define class names       return <div className={drawerClassNames}>         <header className="demo-drawer-header">             <img src="images/user.jpg"                  className="demo-avatar" />         </header>         <nav className={navClassNames}>             <Link className="mdl-navigation__link" to="login">                 <i className={buttonIconClassNames}                    role="presentation">                     lock                 </i>                 Login             </Link>             <Link className="mdl-navigation__link" to="page-admin">                 <i className={buttonIconClassNames}                    role="presentation">                     pages                 </i>                 Pages             </Link>         </nav>     </div>; }; And since we no longer need a separate redirect method, we can convert the class back into a stateless component (function). Notice we've swapped anchor components for a new Link component. This interacts with the router to show the correct section when we click on the navigation links. We can also change the route paths without needing to update this component (unless we also change the route names). Creating public pages Now that we can easily switch between CMS sections, we can use the same trick to show the public pages of our website. Let's create a new HTML page just for these: <!DOCTYPE html> <html>     <head>         <script src="/node_modules/babel-core/browser.js"></script>         <script src="/node_modules/systemjs/dist/system.js"></script>     </head>     <body>         <div class="react"></div>         <script>             System.config({                 "transpiler": "babel",                 "map": {                     "react": "/examples/react/react",                     "react-dom": "/examples/react/react-dom",                     "router": "/node_modules/react-router/umd/ReactRouter"                 },                 "baseURL": "../",                 "defaultJSExtensions": true             });               System.import("examples/index");         </script>     </body> </html> This is a reduced form of admin.html without the material design resources. I think we can ignore the appearance of these pages for the moment, while we focus on the navigation. The public pages are almost 100% static, so we can use stateless components for them. Let's begin with the layout component: var App = function(props) {     return (         <div className="layout">             <Nav pages={props.route.backend.all()} />             {props.children}         </div>     ); }; This is similar to the App admin component, but it also has a reference to a Backend.
We define that when we render the components: var backend = new Backend(); ReactDOM.render(     <Router history={browserHistory}>         <Route path="/" component={App} backend={backend}>             <IndexRoute component={StaticPage} backend={backend} />             <Route path="pages/:page" component={StaticPage} backend={backend} />         </Route>     </Router>,     document.querySelector(".react") ); For this to work, we also need to define a StaticPage: var StaticPage = function(props) {     var id = props.params.page || 1;     var backend = props.route.backend;       var pages = backend.all().filter(         (page) => {             return page.id == id;         }     );       if (pages.length < 1) {         return <div>not found</div>;     }       return (         <div className="page">             <h1>{pages[0].title}</h1>             {pages[0].content}         </div>     ); }; This component is more interesting. We access the params property, which is a map of all the URL path parameters defined for this route. We have :page in the path (pages/:page), so when we go to pages/1, the params object is {"page":1}. We also pass a Backend to StaticPage, so we can fetch all pages and filter them by page.id. If no page.id is provided, we default to 1. After filtering, we check to see if there are any pages. If not, we return a simple not found message. Otherwise, we render the content of the first page in the array (since we expect the array to have a length of at least 1). We now have a page for the public pages of the website. Summary In this article, we learned about how the browser stores URL history and how we can manipulate it to load different sections without full page reloads. Resources for Article:   Further resources on this subject: Introduction to Akka [article] An Introduction to ReactJs [article] ECMAScript 6 Standard [article]

Testing in Node and Hapi

Packt
09 Feb 2016
22 min read
In this article by John Brett, the author of the book Getting Started with Hapi.js, we are going to explore the topic of testing in node and hapi. We will look at what is involved in writing a simple test using hapi's test runner, lab, how to test hapi applications, techniques to make testing easier, and finally how to achieve the all-important 100% code coverage. (For more resources related to this topic, see here.) The benefits and importance of testing code Technical debt is developmental work that must be done before a particular job is complete, or else it will make future changes much harder to implement later on. A codebase without tests is a clear indication of technical debt. Let's explore this statement in more detail. Even very simple applications will generally comprise: Features, which the end user interacts with Shared services, such as authentication and authorization, that features interact with These will all generally depend on some direct persistent storage or API. Finally, to implement most of these features and services, we will use libraries, frameworks, and modules regardless of language. So, even for simpler applications, we have already arrived at a few dependencies to manage, where a change that causes a break in one place could possibly break everything up the chain. So let's take a common use case, in which a new version of one of your dependencies is released. This could be a new hapi version, a smaller library, your persistent storage engine, MySQL, MongoDB, or even an operating system or language version. SemVer, as mentioned previously, attempts to mitigate this somewhat, but you are taking someone at their word when they say that they have adhered to this correctly, and SemVer is not used everywhere. So, in the case of a break-causing change, will the current application work with this new dependency version? What will fail? What percentage of tests fail? What's the risk if we don't upgrade? Will support eventually be dropped, including security patches? Without a good automated test suite, these have to be answered by manual testing, which is a huge waste of developer time. Development progress stops here every time these tasks have to be done, meaning that these types of tasks are rarely done, building further technical debt. Apart from this, humans are proven to be poor at repetitive tasks, prone to error, and I know I personally don't enjoy testing manually, which makes me poor at it. I view repetitive manual testing like this as time wasted, as these questions could easily be answered by running a test suite against the new dependency so that developer time could be spent on something more productive. Now, let's look at a worse and even more common example: a security exploit has been identified in one of your dependencies. As mentioned previously, if it's not easy to update, you won't do it often, so you could be on an outdated version that won't receive this security update. Now you have to jump multiple versions at once and scramble to test them manually. This usually means many quick fixes, which often just cause more bugs. In my experience, code changes under pressure are what deteriorate the structure and readability in a codebase, lead to a much higher number of bugs, and are a clear sign of poor planning. A good development team will, instead of looking at what is currently available, look ahead to what is in beta and will know ahead of time if they expect to run into issues. 
The questions asked will be: Will our application break in the next version of Chrome? What about the next version of node? Hapi does this by running the full test suite against future versions of node in order to alert the node community of how planned changes will impact hapi and the node community as a whole. This is what we should all aim to do as developers. A good test suite has even bigger advantages when working in a team or when adding new developers to a team. Most development teams start out small and grow, meaning all the knowledge of the initial development needs to be passed on to new developers joining the team. So, how do tests lead to a benefit here? For one, tests are great documentation of how parts of the application work for other members of a team. When trying to communicate a problem in an application, a failing test is a perfect illustration of what and where the problem is. When working as a team, for every code change from yourself or another member of the team, you're faced with the preceding problem of changing a dependency. Do we just test the code that was changed? What about the code that depends on the changed code? Is it going to be manual testing again? If this is the case, how much time in a week would be spent on manual testing versus development? Often, with changes, existing functionality can be broken along with new functionality, which is called regression. Having a good test suite highlights this and makes it much easier to prevent. These are the questions and topics that need to be answered when discussing the importance of tests. Writing tests can also improve code quality. For one, identifying dead code is much easier when you have a good testing suite. If you find that you can only get 90% code coverage, what does the other 10% do? Is it used at all if it's unreachable? Does it break other parts of the application if removed? Writing tests will often improve your skills in writing easily testable code. Software applications usually grow to be complex pretty quickly—it happens, but we always need to be active in dealing with this, or software complexity will win. A good test suite is one of the best tools we have to tackle this. The preceding is not an exhaustive list of the importance or benefits of writing tests for your code, but hopefully it has convinced you of the importance of having a good testing suite. So, now that we know why we need to write good tests, let's look at hapi's test runner lab and assertion library code and how, along with some tools from hapi, they make the process of writing tests much easier and a more enjoyable experience. Introducing hapi's testing utilities The test runner in the hapi ecosystem is called lab. If you're not familiar with test runners, they are command-line interface tools for you to run your testing suite. Lab was inspired by a similar test tool called mocha, if you are familiar with it, and in fact initially began as a fork of the mocha codebase. But, as hapi's needs diverged from the original focus of mocha, lab was born. The assertion library commonly used in the hapi ecosystem is code. An assertion library forms the part of a test that performs the actual checks to judge whether a test case has passed or not, for example, checking that the value of a variable is true after an action has been taken.
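Before we look at a full test script, here is a minimal, hypothetical sketch of what such a check looks like when written with code; the userIsLoggedIn variable is invented purely for illustration:

const Code = require('code');

// Some action has been taken; now assert on the outcome:
const userIsLoggedIn = true; // hypothetical value produced by the action
Code.expect(userIsLoggedIn).to.be.true();
Code.expect('hapi').to.equal('hapi');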
Let's look at our first test script; then, we can take a deeper look at lab and code, how they function under the hood, and some of the differences they have with other commonly used libraries, such as mocha and chai. Installing lab and code You can install lab and code the same as any other module on npm: npm install lab code --save-dev Note the --save-dev flag added to the install command here. Remember your package.json file, which describes an npm module? This adds the modules to the devDependencies section of your npm module. These are dependencies that are required for the development and testing of a module but are not required for using the module. The reason why these are separated is that when we run npm install in an application codebase, it only installs the dependencies and devDependencies of package.json in that directory. For all the modules installed, only their dependencies are installed, not their development dependencies. This is because we only want to download the dependencies required to run that application; we don't need to download all the development dependencies for every module. The npm install command installs all the dependencies and devDependencies of package.json in the current working directory, and only the dependencies of the other installed modules, not their devDependencies. To install the development dependencies of a particular module, navigate to the root directory of the module and run npm install. After you have installed lab, you can then run it with the following: ./node_modules/lab/bin/lab test.js This is quite long to type every time, but fortunately due to a handy feature of npm called npm scripts, we can shorten it. If you look at package.json generated by npm init in the first chapter, depending on your version of npm, you may see the following (some code removed for brevity): ... "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, ... Scripts are a list of commands related to the project; they can be for testing purposes, as we will see in this example; to start an application; for build steps; and to start extra servers, among many other options. They offer huge flexibility in how these are combined to manage scripts related to a module or application, and I could spend a chapter, or even a book, on just these, but they are outside the scope of this book, so let's just focus on what is important to us here. To get a list of available scripts for a module or application, in the module directory, simply run: $ npm run To then run the listed scripts, such as test, you can just use: $ npm run test As you can see, this gives a very clean API for scripts and the documentation for each of them in the project's package.json. From this point on in this book, all code snippets will use npm scripts to test or run any examples. We should strive to use these in our projects to simplify and document commands related to applications and modules for ourselves and others. Let's now add the ability to run a test file to our package.json file. This just requires modifying the scripts section to be the following: ... "scripts": { "test": "./node_modules/lab/bin/lab ./test/index.js" }, ... It is common practice in node to place all tests in a project within the test directory.
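As a sketch of how a scripts section can grow beyond testing, the following example adds a hypothetical start entry alongside the test script from above (the server.js filename is an assumption made purely for illustration):

...
"scripts": {
    "test": "./node_modules/lab/bin/lab ./test/index.js",
    "start": "node server.js"
},
...

Running npm run on its own will list both entries, and npm run test or npm run start will execute them.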
A handy addition to note here is that when calling a command with npm run, the bin directory of every module in your node_modules directory is added to PATH when running these scripts, so we can actually shorten this script to: … "scripts": { "test": "lab ./test/index.js" }, … This type of module install is considered to be local, as the dependency is local to the application directory it is being run in. While I believe this is how we should all install our modules, it is worth pointing out that it is also possible to install a module globally. This means that when installing something like lab, it is immediately added to PATH and can be run from anywhere. We do this by adding a -g flag to the install, as follows: $ npm install lab code -g This may appear handier than having to add npm scripts or running commands locally outside of an npm script but should be avoided where possible. Often, installing globally requires sudo to run, meaning you are taking a script from the Internet and allowing it to have complete access to your system. Hopefully, the security concerns here are obvious. Other than that, different projects may use different versions of test runners, assertion libraries, or build tools, which can have unknown side effects and cause debugging headaches. The only time I would use globally installed modules is for command-line tools that I may use outside a particular project—for example, a node-based terminal IDE such as slap (https://www.npmjs.com/package/slap) or a process manager such as PM2 (https://www.npmjs.com/package/pm2)—but never with sudo! Now that we are familiar with installing lab and code and the different ways of running it inside and outside of npm scripts, let's look at writing our first test script and take a more in-depth look at lab and code. Our First Test Script Let's take a look at what a simple test script in lab looks like using the code assertion library: const Code = require('code'); [1] const Lab = require('lab'); [1] const lab = exports.lab = Lab.script(); [2] lab.experiment('Testing example', () => { [3] lab.test('fails here', (done) => { [4] Code.expect(false).to.be.true(); [4] return done(); [4] }); [4] lab.test('passes here', (done) => { [4] Code.expect(true).to.be.true(); [4] return done(); [4] }); [4] }); This script, even though small, includes a number of new concepts, so let's go through it with reference to the numbers in the preceding code: [1]: Here, we just include the code and lab modules, as we would any other node module. [2]: As mentioned before, it is common convention to place all test files within the test directory of a project. However, there may be JavaScript files in there that aren't tests, and therefore should not be tested. To avoid this, we inform lab of which files are test scripts by calling Lab.script() and assigning the value to lab and exports.lab. [3]: The lab.experiment() function (aliased lab.describe()) is just a way to group tests neatly. In test output, tests will have the experiment string prefixed to the message, for example, "Testing example fails here". This is optional, however. [4]: These are the actual test cases. Here, we define the name of the test and pass a callback function with the parameter done(). We see code in action here for managing our assertions. And finally, we call the done() function when finished with our test case. Things to note here: lab tests are always asynchronous.
In every test, we have to call done() to finish the test; there is no counting of function parameters or checking whether synchronous functions have completed in order to ensure that a test is finished. Although this requires the boilerplate of calling the done() function at the end of every test, it means that all tests, synchronous or asynchronous, have a consistent structure. In Chai, which was originally used for hapi, some of the assertions such as .ok, .true, and .false use properties instead of functions for assertions, while assertions like .equal() and .above() use functions. This type of inconsistency leads to us easily forgetting that an assertion should be a method call and hence omitting the (). This means that the assertion is never called and the test may pass as a false positive. Code's API is more consistent in that every assertion is a function call. Here is a comparison of the two: Chai: expect('hello').to.equal('hello'); expect(foo).to.exist; Code: expect('hello').to.equal('hello'); expect(foo).to.exist(); Notice the difference in the second exist() assertion. In Chai, you see the property form of the assertion, while in Code, you see the required function call. Through this, lab can make sure all assertions within a test case are called before done is complete, or it will fail the test. So let's try running our first test script. As we already updated our package.json script, we can run our test with the following command: $ npm run test This will generate the following output: There are a couple of things to note from this. Tests run are symbolized with a . or an X, depending on whether they pass or not. You can get a list of the full test titles by adding the -v or --verbose flag to our npm test script command. There are lots of flags to customize the running and output of lab, so I recommend using the full flag names for each of these, for example, --verbose and --lint instead of -v and -l, in order to save you the time spent referring back to the documentation each time. You may have noticed the No global variable leaks detected message at the bottom. Lab assumes that the global object won't be polluted and checks that no extra properties have been added after running tests. Lab can be configured to not run this check or whitelist certain globals. Details of this are in the lab documentation available at https://github.com/hapijs/lab. Testing approaches The style we've used here is one of the many known approaches to building a test suite, as is BDD (Behavior Driven Development), and like most test runners in node, lab is unopinionated about how you structure your tests. Details of how to structure your tests in a BDD style can again be found easily in the lab documentation. Testing with hapi As I mentioned before, testing is considered paramount in the hapi ecosystem, with every module in the ecosystem having to maintain 100% code coverage at all times, as with all module dependencies. Fortunately, hapi provides us with some tools to make the testing of hapi apps much easier through a module called Shot, which simulates network requests to a hapi server.
Let's take the example of a Hello World server and write a simple test for it:

const Code = require('code');
const Lab = require('lab');
const Hapi = require('hapi');

const lab = exports.lab = Lab.script();

lab.test('It will return Hello World', (done) => {

  const server = new Hapi.Server();
  server.connection();

  server.route({
    method: 'GET',
    path: '/',
    handler: function (request, reply) {
      return reply('Hello World\n');
    }
  });

  server.inject('/', (res) => {
    Code.expect(res.statusCode).to.equal(200);
    Code.expect(res.result).to.equal('Hello World\n');
    done();
  });
});

Now that we are more familiar with what a test script looks like, most of this will look familiar. However, you may have noticed that we never started our hapi server. This means the server was never assigned a port, but thanks to the shot module (https://github.com/hapijs/shot), we can still make requests against it using the server.inject API. Not having to start a server means less setup and teardown before and after tests, and it means that a test suite can run quicker, as fewer resources are required. server.inject can be used with the same API whether the server has been started or not.

Code coverage

As I mentioned earlier in the article, having 100% code coverage is paramount in the hapi ecosystem and, in my opinion, hugely important for any application to have. Without a code coverage target, writing tests can feel like an empty or unrewarding task where we don't know how many tests are enough or how much of our application or module has been covered. With any task, we should know what our goal is; testing is no different, and this is what code coverage gives us. Even with 100% coverage, things can still go wrong, but it means that, at the very least, every line of code has been considered and has at least one test covering it. I've found from working on modules for hapi that trying to achieve 100% code coverage actually gamifies the process of writing tests, making it a more enjoyable experience overall.

Fortunately, lab has code coverage integrated, so we don't need to rely on an extra module to achieve this. It's as simple as adding the --coverage or -c flag to our test script command. Under the hood, lab will then build an abstract syntax tree so it can evaluate which lines are executed, thus producing our coverage, which will be added to the console output when we run tests. The code coverage tool will also highlight which lines are not covered by tests, which is extremely useful in identifying where to focus your testing effort.

It is also possible to enforce a minimum threshold as to the percentage of code coverage required to pass a suite of tests with lab, through the --threshold or -t flag followed by an integer. This is used for all the modules in the hapi ecosystem, and all thresholds are set to 100. Having a threshold of 100% for code coverage makes it much easier to manage changes to a codebase. When any update or pull request is submitted, the test suite is run against the changes, so we can know that all tests have passed and all code is covered before we even look at what has been changed in the proposed submission. There are services that even automate this process for us, such as TravisCI (https://travis-ci.org/). It's also worth knowing that the coverage report can be displayed in a number of formats; for a full list of these reporters with explanations, I suggest reading the lab documentation available at https://github.com/hapijs/lab.
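To make this concrete, here is a minimal sketch of how the npm test script could be updated to enable coverage reporting with a 100% threshold; the test file path is an assumption based on the layout used earlier and may differ in your project:

…
"scripts": {
  "test": "lab ./test/index.js --coverage --threshold 100"
},
…

With this in place, npm run test will fail if any test fails or if coverage drops below 100%.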
Let's now look at what's involved in getting 100% coverage for our previous example. First of all, we'll move our server code to a separate file, which we will place in the lib folder and call index.js. It's worth noting here that it's good testing practice, and also the typical module structure in the hapi ecosystem, to place all module code in a folder called lib and the associated tests for each file within lib in a folder called test, preferably with a one-to-one mapping like we have done here, where all the tests for lib/index.js are in test/index.js. When trying to find out how a feature within a module works, the one-to-one mapping makes it much easier to find the associated tests and see examples of it in use.

So, having separated our server from our tests, let's look at what our two files now look like; first, ./lib/index.js:

const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection();

server.route({
  method: 'GET',
  path: '/',
  handler: function (request, reply) {
    return reply('Hello World\n');
  }
});

module.exports = server;

The main change here is that we export our server at the end for another file to acquire and start it if necessary. Our test file at ./test/index.js will now look like this:

const Code = require('code');
const Lab = require('lab');
const server = require('../lib/index.js');

const lab = exports.lab = Lab.script();

lab.test('It will return Hello World', (done) => {

  server.inject('/', (res) => {
    Code.expect(res.statusCode).to.equal(200);
    Code.expect(res.result).to.equal('Hello World\n');
    done();
  });
});

Finally, for us to test our code coverage, we update our npm test script to include the coverage flag --coverage or -c. The final example of this is in the second example of the source code of Chapter 4, Adding Tests and the Importance of 100% Coverage, which is supplied with this book. If you run this, you'll find that we actually already have 100% of the code covered with this one test.

An interesting exercise here would be to find out what versions of hapi this code functions correctly with. At the time of writing, this code was written for hapi version 11.x.x on node.js version 4.0.0. Will it work if run with hapi version 9 or 10? You can test this now by installing an older version with the help of the following command:

$ npm install hapi@10

This will give you an idea of how easy it can be to check whether your codebase works with different versions of libraries. If you have some time, it would be interesting to see how this example runs on different versions of node (Hint: it breaks on any version earlier than 4.0.0).

In this example, we got 100% code coverage with one test. Unfortunately, we are rarely this fortunate; as the complexity of our codebase increases, so will the complexity of our tests, which is where knowledge of writing testable code comes in. This is something that comes with practice by writing tests while writing application or module code.

Linting

Also built into lab is linting support. Linting enforces a consistent code style, which can be specified through an .eslintrc or .jshintrc file. By default, lab will enforce the hapi style guide rules. The idea of linting is that all code will have the same structure, making it much easier to spot bugs and keep code tidy. As JavaScript is a very flexible language, linters are used regularly to forbid bad practices such as global or unused variables. To enable the lab linter, simply add the linter flag to the test command, which is --lint or -L.
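Putting the pieces from this article together, a rough sketch of the final npm test script, with coverage, a 100% threshold, linting, and verbose output all enabled, might look something like the following; again, the test file path is an assumption based on the structure above:

…
"scripts": {
  "test": "lab ./test/index.js --coverage --threshold 100 --lint --verbose"
},
…

A single npm run test will then report the full test titles, the coverage percentage, and any linting errors in one pass.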
I generally stick with the default hapi style guide rules, as they are chosen to promote easy-to-read, easily testable code and to forbid many bad practices. However, it's easy to customize the linting rules used; for this, I recommend referring to the lab documentation.

Summary

In this article, we covered testing in node and hapi and how testing and code coverage are paramount in the hapi ecosystem. We saw why they are needed in application development and where they can make us more productive developers. We also introduced lab and code, the test runner and code assertion libraries of the ecosystem, saw how to use them to write simple tests, and how to use the tools provided in lab and hapi to test hapi applications. We also learned about some of the extra features baked into lab, such as code coverage and linting. We looked at how to test the code coverage of an application and get it to 100%, and how the hapi ecosystem applies the hapi style guide to all modules using lab's linting integration.

Resources for Article: Further resources on this subject: Welcome to JavaScript in the full stack[article] A Typical JavaScript Project[article] An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js[article]
Classes and Instances of Ember Object Model

Packt
05 Feb 2016
12 min read
In this article by Erik Hanchett, author of the book Ember.js Cookbook, we look at classes and instances in the Ember object model. Ember.js is an open source JavaScript framework that will make you more productive. It uses common idioms and practices, making it simple to create amazing single-page applications. It also lets you create code in a modular way using the latest JavaScript features. Not only that, it also has a great set of APIs in order to get any task done. The Ember.js community welcomes newcomers and is ready to help you when required. (For more resources related to this topic, see here.)

Working with classes and instances

Creating and extending classes is a major feature of the Ember object model. In this recipe, we'll take a look at how creating and extending objects works.

How to do it

Let's begin by creating a very simple Ember class using extend(), as follows:

const Light = Ember.Object.extend({
  isOn: false
});

This defines a new Light class with an isOn property. Light inherits properties and behavior from the Ember object, such as initializers, mixins, and computed properties.

Ember Twiddle Tip

At some point, you might need to test out small snippets of Ember code. An easy way to do this is to use a website called Ember Twiddle. From that website, you can create an Ember application and run it in the browser as if you were using Ember CLI. You can even save and share it. It is similar to JSFiddle, but only for Ember. Check it out at http://ember-twiddle.com.

Once you have defined a class, you'll need to be able to create an instance of it. You can do this by using the create() method. We'll go ahead and create an instance of Light.

const bulb = Light.create();

Accessing properties within the bulb instance

We can access the properties of the bulb object using the set and get accessor methods. Let's go ahead and get the isOn property of the Light class, as follows:

console.log(bulb.get('isOn'));

The preceding code will get the isOn property from the bulb instance. To change the isOn property, we can use the set accessor method:

bulb.set('isOn', true);

The isOn property will now be set to true instead of false.

Initializing the Ember object

The init method is invoked whenever a new instance is created. This is a great place to put any code that you may need for the new instance. In our example, we'll go ahead and add an alert message that displays the default setting for the isOn property:

const Light = Ember.Object.extend({
  init(){
    alert('The isON property is defaulted to ' + this.get('isOn'));
  },
  isOn: false
});

As soon as the Light.create line of code is executed, the instance will be created and this message will pop up on the screen: The isON property is defaulted to false.

Subclass

Be aware that you can create subclasses of your objects in Ember. You can override methods and access the parent class by using the _super() method. This is done by creating a new object that uses the Ember extend method on the parent class. Another important thing to realize is that if you're subclassing a framework class such as Ember.Component and you override the init method, you'll need to make sure that you call this._super(). If not, the component may not work properly.

Reopening classes

At any time, you can reopen a class and define new properties or methods in it. For this, use the reopen method. In our previous example, we had an isOn property.
Let's reopen the same class and add a color property, as follows. To add the color property, we need to use the reopen() method:

Light.reopen({
  color: 'yellow'
});

If required, you can add static methods or properties using reopenClass, as follows:

Light.reopenClass({
  wattage: 40
});

You can now access the static property:

Light.wattage

How it works

In the preceding examples, we have created an Ember object using extend. This tells Ember to create a new Ember class. The extend method uses inheritance in the Ember.js framework. The Light object inherits all the methods and bindings of the Ember object. The create method also inherits from the Ember object class and returns a new instance of this class. The bulb object is the new instance of the Ember object that we created.

There's more

To use the previous examples, we can create our own module and import it into our project. To do this, create a new myObject.js file in the app folder, as follows:

// app/myObject.js
import Ember from 'ember';

export default function() {
  const Light = Ember.Object.extend({
    init(){
      alert('The isON property is defaulted to ' + this.get('isOn'));
    },
    isOn: false
  });

  Light.reopen({
    color: 'yellow'
  });

  Light.reopenClass({
    wattage: 80
  });

  const bulb = Light.create();

  console.log(bulb.get('color'));
  console.log(Light.wattage);
}

This is the module that we can now import into any file of our Ember application. In the app folder, edit the app.js file. You'll need to add the following line at the top of the file:

// app/app.js
import myObject from './myObject';

At the bottom, before the export, add the following line:

myObject();

This will execute the myObject function that we created in the myObject.js file. After running the Ember server, you'll see the pop-up message saying that the isOn property is defaulted to false.

Working with computed properties

In this recipe, we'll take a look at computed properties and how they can be used to display data, even if that data changes as the application is running.

How to do it

Let's create a new Ember.Object and add a computed property to it, as shown in the following. Begin by creating a new description computed property. This property will reflect the status of the isOn and color properties:

const Light = Ember.Object.extend({
  isOn: false,
  color: 'yellow',

  description: Ember.computed('isOn','color',function() {
    return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
  })
});

We can now create a new Light object and get the computed property description:

const bulb = Light.create();
bulb.get('description'); //The yellow light is set to false

The preceding example creates a computed property that depends on the isOn and color properties. When the description function is called, it returns a string describing the state of the light.

Computed properties will observe changes and dynamically update whenever they occur. To see this in action, we can change the preceding example and set the isOn property to true. Use the following code to accomplish this:

bulb.set('isOn', true);
bulb.get('description'); //The yellow light is set to true

The description has been automatically updated and will now display that the yellow light is set to true.

Chaining the Light object

Ember provides a nice feature that allows computed properties to be present in other computed properties.
In the previous example, we created a description property that outputted some basic information about the Light object. Let's add another property that gives a full description:

const Light = Ember.Object.extend({
  isOn: false,
  color: 'yellow',
  age: null,

  description: Ember.computed('isOn','color',function() {
    return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
  }),

  fullDescription: Ember.computed('description','age',function() {
    return this.get('description') + ' and the age is ' + this.get('age')
  }),
});

The fullDescription function returns a string that concatenates the output from description with a new string that displays the age:

const bulb = Light.create({age:22});
bulb.get('fullDescription'); //The yellow light is set to false and the age is 22

In this example, during instantiation of the Light object, we set the age to 22. We can overwrite any property if required.

Alias

The Ember.computed.alias method allows us to create a property that is an alias for another property or object. Any call to get or set will behave as if the changes were made to the original property, as shown in the following:

const Light = Ember.Object.extend({
  isOn: false,
  color: 'yellow',
  age: null,

  description: Ember.computed('isOn','color',function() {
    return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
  }),

  fullDescription: Ember.computed('description','age',function() {
    return this.get('description') + ' and the age is ' + this.get('age')
  }),

  aliasDescription: Ember.computed.alias('fullDescription')
});

const bulb = Light.create({age: 22});
bulb.get('aliasDescription'); //The yellow light is set to false and the age is 22.

The aliasDescription will display the same text as fullDescription since it's just an alias of this object. If we later made any changes to any properties in the Light object, the alias would also observe these changes and be computed properly.

How it works

Computed properties are built on top of the observer pattern. Whenever one of the observed properties changes state, the output is recomputed. If no changes occur, then the result is cached. In other words, computed properties are functions that get updated whenever any of their dependent values change. You can use them in the same way that you would use a static property. They are common and useful throughout Ember and its codebase. Keep in mind that a computed property will only update if it is in a template or function that is being used. If the function or template is not being called, then nothing will occur. This helps with performance.

Working with Ember observers in Ember.js

Observers are fundamental to the Ember object model. In the next recipe, we'll take our light example, add in an observer, and see how it operates.

How to do it

To begin, we'll add a new isOnChanged observer. This will only trigger when the isOn property changes:

const Light = Ember.Object.extend({
  isOn: false,
  color: 'yellow',
  age: null,

  description: Ember.computed('isOn','color',function() {
    return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn')
  }),

  fullDescription: Ember.computed('description','age',function() {
    return this.get('description') + ' and the age is ' + this.get('age')
  }),

  desc: Ember.computed.alias('description'),

  isOnChanged: Ember.observer('isOn',function() {
    console.log('isOn value changed')
  })
});

const bulb = Light.create({age: 22});

bulb.set('isOn',true); //console logs isOn value changed

The Ember.observer isOnChanged monitors the isOn property.
If any changes occur to this property, isOnChanged is invoked.

Computed Properties vs Observers

At first glance, it might seem that observers are the same as computed properties. In fact, they are very different. Computed properties can use the get and set methods and can be used in templates. Observers, on the other hand, just monitor property changes. They can neither be used in templates nor be accessed like properties. They also don't return any values. With that said, be careful not to overuse observers. For many instances, a computed property is the most appropriate solution.

Also, if required, you can add multiple properties to the observer. Just use the following code:

Light.reopen({
  isAnythingChanged: Ember.observer('isOn','color',function() {
    console.log('isOn or color value changed')
  })
});

const bulb = Light.create({age: 22});
bulb.set('isOn',true); // console logs isOn or color value changed
bulb.set('color','blue'); // console logs isOn or color value changed

The isAnythingChanged observer is invoked whenever the isOn or color property changes. The observer will fire twice as each property has changed.

Synchronous issues with the Light object and observers

It's very easy to get observers out of sync. For example, if a property that an observer watches changes, the observer will be invoked as expected. After being invoked, it might manipulate a property that hasn't been updated yet. This can cause synchronization issues as everything happens at the same time. The following example shows this behavior:

Light.reopen({
  checkIsOn: Ember.observer('isOn', function() {
    console.log(this.get('fullDescription'));
  })
});

const bulb = Light.create({age: 22});
bulb.set('isOn', true);

When isOn is changed, it's not clear whether fullDescription, a computed property, has been updated yet or not. As observers work synchronously, it's difficult to tell what has been fired and changed. This can lead to unexpected behavior. To counter this, it's best to use the Ember.run.once method. This method is a part of the Ember run loop, which is Ember's way of managing how the code is executed. Reopen the Light object and you can see the following occurring:

Light.reopen({
  checkIsOn: Ember.observer('isOn','color', function() {
    Ember.run.once(this,'checkChanged');
  }),
  checkChanged: Ember.observer('description',function() {
    console.log(this.get('description'));
  })
});

const bulb = Light.create({age: 22});
bulb.set('isOn', true);
bulb.set('color', 'blue');

The checkIsOn observer calls the checkChanged observer using Ember.run.once. This method is only run once per run loop. Normally, checkChanged would be fired twice; however, since it is called using Ember.run.once, it only outputs once.

How it works

Ember observers are mixins from the Ember.Observable class. They work by monitoring property changes. When any change occurs, they are triggered. Keep in mind that these are not the same as computed properties and cannot be used in templates or with getters or setters.

Summary

In this article, you learned about classes and instances. You also learned about computed properties and how they can be used to display data.

Resources for Article: Further resources on this subject: Introducing the Ember.JS framework [article] Building Reusable Components [article] Using JavaScript with HTML [article]
Customizing and Automating Google Applications

Packt
27 Jan 2016
7 min read
In this article by Ramalingam Ganapathy, the author of the book Learning Google Apps Script, we will see how to create new projects in Sheets and send an email with an inline image and attachments. You will also learn to create clickable buttons, a custom menu, and a sidebar. (For more resources related to this topic, see here.)

Creating new projects in sheets

Open any newly created Google spreadsheet (Sheets). You will see a number of menu items at the top of the window. Point your mouse at the menu bar and click on Tools. Then, click on Script editor as shown in the following screenshot:

A new browser tab or window with a new project selection dialog will open. Click on Blank Project or close the dialog. Now, you have created a new untitled project with one script file (Code.gs), which has one default empty function (myFunction). To rename the project, click on the project title (at the top left-hand side of the window), and then a rename dialog will open. Enter your preferred project name, and then click on the OK button.

Creating clickable buttons

Open the script editor in a newly created or any existing Google sheet. Select the cell B3 or any other cell. Click on Insert and Drawing as shown in the following screenshot:

A drawing editor window will open. Click on the Textbox icon and click anywhere on the canvas area. Type Click Me. Resize the object so as to only enclose the text as shown in the screenshot here:

Click on Save & Close to exit from the drawing editor. Now, the Click Me image will be inserted at the top of the active cell (B3) as shown in the following screenshot:

You can drag this image anywhere around the spreadsheet. In Google Sheets, images are not anchored to a particular cell; they can be dragged or moved around. If you right-click on the image, a drop-down arrow at the top right corner of the image will be visible. Click on the Assign script menu item. A script assignment window will open as shown here:

Type "greeting" or any other name you like, but remember the name (so as to create a function with the same name in the next steps). Click on the OK button. Now, open the script editor in the same spreadsheet. When you open the script editor, the project selector dialog will open. Close it or select a blank project. A default function called myFunction will be there in the editor. Delete everything in the editor and insert the following code:

function greeting() {
  Browser.msgBox("Greeting", "Hello World!", Browser.Buttons.OK);
}

Click on the save icon and enter a project name if asked. You have completed coding your greeting function. Activate the spreadsheet tab/window, and click on your button called Click Me. Then, an authorization window will open; click on Continue. In the successive Request for Permission window, click on the Allow button. As soon as you click on Allow and the permission dialog is disposed, your actual greeting message box will open as shown here:

Click on OK to dispose of the message box. Whenever you click on your button, this message box will open.

Creating a custom menu

Can you execute the function greeting without the help of the button? Yes, in the script editor, there is a Run menu. If you click on Run and then greeting, the greeting function will be executed and the message box will open. Creating a button for every function may not be feasible. Although you cannot alter or add items to the application's standard menus (except the Add-on menu), such as File, Edit, and View, you can add a custom menu and its items.
For this task, create a new Google Docs document or open any existing document. Open the script editor and type these two functions:

function createMenu() {
  DocumentApp.getUi()
    .createMenu("PACKT")
    .addItem("Greeting", "greeting")
    .addToUi();
}

function greeting() {
  var ui = DocumentApp.getUi();
  ui.alert("Greeting", "Hello World!", ui.ButtonSet.OK);
}

In the first function, you use the DocumentApp class, invoke the getUi method, and consecutively invoke the createMenu, addItem, and addToUi methods by method chaining. The second function is familiar to you from the previous task, but this time it uses the DocumentApp class and its associated methods.

Now, run the function called createMenu and flip to the document window/tab. You will notice a new menu called PACKT added next to the Help menu. You can see the custom menu PACKT with an item Greeting as shown next. The item labelled Greeting is associated with the function called greeting:

The menu item called Greeting works the same way as the button you created in the previous task. The drawback with this method is that, in order to show the custom menu, you need to run createMenu every time from within the script editor. Imagine how your user could use this greeting function if he/she doesn't know about GAS and the script editor. Remember that your user might not be a programmer like you. To enable your users to execute the selected GAS functions, you should create a custom menu and make it visible as soon as the application is opened. To do so, rename the function called createMenu to onOpen; that's it.

Creating a sidebar

A sidebar is a static dialog box that is included on the right-hand side of the document editor window. To create a sidebar, type the following code in your editor:

function onOpen() {
  var htmlOutput = HtmlService
    .createHtmlOutput('<button onclick="alert(\'Hello World!\');">Click Me</button>')
    .setTitle('My Sidebar');

  DocumentApp.getUi()
    .showSidebar(htmlOutput);
}

In the previous code, you use HtmlService, invoke its createHtmlOutput method, and consecutively invoke the setTitle method. To test this code, run the onOpen function or reload the document. The sidebar will be opened on the right-hand side of the document window as shown in the following screenshot. The sidebar layout size is fixed, which means you cannot change or resize it:

The button in the sidebar is an HTML element, not a GAS element, and if clicked, it opens the browser interface's alert box.

Sending an email with inline image and attachments

To embed images, such as a logo, in your email message, you may use HTML code instead of plain text. Upload your image to Google Drive, then get and use the file ID in the code:

function sendEmail(){
  var file = SpreadsheetApp.getActiveSpreadsheet()
    .getAs(MimeType.PDF);

  var image = DriveApp.getFileById("[[image file's id in Drive ]]").getBlob();

  var to = "[[receiving email id]]";
  var message = '<img src="cid:logo" /><p>Embedding inline image example.</p>';

  MailApp.sendEmail(
    to,
    "Email with inline image and attachment",
    "",
    {
      htmlBody:message,
      inlineImages:{logo:image},
      attachments:[file]
    }
  );
}

Summary

In this article, you learned how to customize and automate Google applications with a few examples. Many more useful and interesting applications have been described in the actual book.
Resources for Article: Further resources on this subject: How to Expand Your Knowledge [article] Google Apps: Surfing the Web [article] Developing Apps with the Google Speech Apis [article]
Accessing Data with Spring

Packt
25 Jan 2016
8 min read
In this article written by Shameer Kunjumohamed and Hamidreza Sattari, authors of the book Spring Essentials, we will learn how to access data with Spring. (For more resources related to this topic, see here.)

Data access or persistence is a major technical feature of data-driven applications. It is a critical area where careful design and expertise are required. Modern enterprise systems use a wide variety of data storage mechanisms, ranging from traditional relational databases such as Oracle, SQL Server, and Sybase to more flexible, schema-less NoSQL databases such as MongoDB, Cassandra, and Couchbase. Spring Framework provides comprehensive support for data persistence in multiple flavors of mechanisms, ranging from convenient template components to smart abstractions over popular Object Relational Mapping (ORM) tools and libraries, making them much easier to use. Spring's data access support is another great reason for choosing it to develop Java applications.

Spring Framework offers developers the following primary approaches for data persistence mechanisms to choose from:

Spring JDBC
ORM data access
Spring Data

Furthermore, Spring standardizes the preceding approaches under a unified Data Access Object (DAO) notation called @Repository. Another compelling reason for using Spring is its first-class transaction support. Spring provides consistent transaction management, abstracting different transaction APIs such as JTA, JDBC, JPA, Hibernate, JDO, and other container-specific transaction implementations.

In order to make development and prototyping easier, Spring provides embedded database support, smart data source abstractions, and excellent test integration. This article explores the various data access mechanisms provided by Spring Framework and its comprehensive support for transaction management in both standalone and web environments, with relevant examples.

Why use Spring Data Access when we have JDBC?

JDBC (short for Java Database Connectivity), the Java Standard Edition API for data connectivity from Java to relational databases, is a very low-level framework. Data access via JDBC is often cumbersome; the boilerplate code that the developer needs to write makes it error-prone. Moreover, JDBC exception handling is not sufficient for most use cases; there exists a real need for simplified but extensive and configurable exception handling for data access. Spring JDBC encapsulates the often-repeating code, simplifying the developer's code tremendously and letting him focus entirely on his business logic. Spring Data Access components abstract the technical details, including lookup and management of persistence resources such as connection, statement, and result set, and accept the specific SQL statements and relevant parameters to perform the operation. They use the same JDBC API under the hood while exposing simplified, straightforward interfaces for the client's use. This approach helps make a much cleaner and hence more maintainable data access layer for Spring applications.

DataSource

The first step of connecting to a database from any Java application is obtaining a connection object specified by JDBC. DataSource, part of Java SE, is a generalized factory of java.sql.Connection objects that represents the physical connection to the database, and it is the preferred means of producing a connection. DataSource handles transaction management, connection lookup, and pooling functionalities, relieving the developer from these infrastructural issues.
DataSource objects are often implemented by database driver vendors and typically looked up via JNDI. Application servers and servlet engines provide their own implementations of DataSource, a connector to the one provided by the database vendor, or both. Typically configured inside XML-based server descriptor files, server-supplied DataSource objects generally provide built-in connection pooling and transaction support. As a developer, you just configure your DataSource objects inside the server (configuration files) declaratively in XML and look it up from your application via JNDI. In a Spring application, you configure your DataSource reference as a Spring bean and inject it as a dependency to your DAOs or other persistence resources. The Spring <jee:jndi-lookup/> tag (of  the http://www.springframework.org/schema/jee namespace) shown here allows you to easily look up and construct JNDI resources, including a DataSource object defined from inside an application server. For applications deployed on a J2EE application server, a JNDI DataSource object provided by the container is recommended. <jee:jndi-lookup id="taskifyDS" jndi-name="java:jboss/datasources/taskify"/> For standalone applications, you need to create your own DataSource implementation or use third-party implementations, such as Apache Commons DBCP, C3P0, and BoneCP. The following is a sample DataSource configuration using Apache Commons DBCP2: <bean id="taskifyDS" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close"> <property name="driverClassName" value="${driverClassName}" /> <property name="url" value="${url}" /> <property name="username" value="${username}" /> <property name="password" value="${password}" /> . . . </bean> Make sure you add the corresponding dependency (of your DataSource implementation) to your build file. The following is the one for DBCP2: <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-dbcp2</artifactId> <version>2.1.1</version> </dependency> Spring provides a simple implementation of DataSource called DriverManagerDataSource, which is only for testing and development purposes, not for production use. Note that it does not provide connection pooling. Here is how you configure it inside your application: <bean id="taskifyDS" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="${driverClassName}" /> <property name="url" value="${url}" /> <property name="username" value="${username}" /> <property name="password" value="${password}" /> </bean> It can also be configured in a pure JavaConfig model, as shown in the following code: @Bean DataSource getDatasource() { DriverManagerDataSource dataSource = new DriverManagerDataSource(pgDsProps.getProperty("url")); dataSource.setDriverClassName( pgDsProps.getProperty("driverClassName")); dataSource.setUsername(pgDsProps.getProperty("username")); dataSource.setPassword(pgDsProps.getProperty("password")); return dataSource; } Never use DriverManagerDataSource on production environments. Use third-party DataSources such as DBCP, C3P0, and BoneCP for standalone applications, and JNDI DataSource, provided by the container, for J2EE containers instead. Using embedded databases For prototyping and test environments, it would be a good idea to use Java-based embedded databases for quickly ramping up the project. Spring natively supports HSQL, H2, and Derby database engines for this purpose. 
Here is a sample DataSource configuration for an embedded HSQL database: @Bean DataSource getHsqlDatasource() { return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.HSQL) .addScript("db-scripts/hsql/db-schema.sql") .addScript("db-scripts/hsql/data.sql") .addScript("db-scripts/hsql/storedprocs.sql") .addScript("db-scripts/hsql/functions.sql") .setSeparator("/").build(); } The XML version of the same would look as shown in the following code: <jdbc:embedded-database id="dataSource" type="HSQL"> <jdbc:script location="classpath:db-scripts/hsql/db-schema.sql" /> . . . </jdbc:embedded-database> Handling exceptions in the Spring data layer With traditional JDBC based applications, exception handling is based on java.sql.SQLException, which is a checked exception. It forces the developer to write catch and finally blocks carefully for proper handling and to avoid resource leakages. Spring, with its smart exception hierarchy based on runtime exception, saves the developer from this nightmare. By having DataAccessException as the root, Spring bundles a big set of meaningful exceptions that translate the traditional JDBC exceptions. Besides JDBC, Spring covers the Hibernate, JPA, and JDO exceptions in a consistent manner. Spring uses SQLErrorCodeExceptionTranslator, which inherits SQLExceptionTranslator in order to translate SQLExceptions to DataAccessExceptions. We can extend this class for customizing the default translations. We can replace the default translator with our custom implementation by injecting it into the persistence resources (such as JdbcTemplate, which we will cover soon). DAO support and @Repository annotation The standard way of accessing data is via specialized DAOs that perform persistence functions under the data access layer. Spring follows the same pattern by providing DAO components and allowing developers to mark their data access components as DAOs using an annotation called @Repository. This approach ensures consistency over various data access technologies, such as JDBC, Hibernate, JPA, and JDO, as well as project-specific repositories. Spring applies SQLExceptionTranslator across all these methods consistently. Spring recommends your data access components to be annotated with the stereotype @Repository. The term "repository" was originally defined in Domain-Driven Design, Eric Evans, Addison-Wesley, as "a mechanism for encapsulating storage, retrieval, and search behavior which emulates a collection of objects." This annotation makes the class eligible for DataAccessException translation by Spring Framework. Spring Data, another standard data access mechanism provided by Spring, revolves around @Repository components. Summary We have so far explored Spring Framework's comprehensive coverage of all the technical aspects around data access and transaction. Spring provides multiple convenient data access methods, which takes away much of the hard work involved in building the data layer from the developer and also standardizes business components. Correct usage of Spring data access components will ensure that our data layer is clean and highly maintainable. Resources for Article: Further resources on this subject: So, what is Spring for Android?[article] Getting Started with Spring Security[article] Creating a Spring Application[article]
Advanced Fetching

Packt
21 Jan 2016
6 min read
In this article by Ramin Rad, author of the book Mastering Hibernate, we discuss various ways of fetching data from the permanent store. We will focus a little more on the annotations related to data fetching. (For more resources related to this topic, see here.)

Fetching strategy

In the Java Persistence API (JPA), you can provide a hint to fetch the data lazily or eagerly using FetchType. However, some implementations may ignore the lazy strategy and just fetch everything eagerly. Hibernate's default strategy is FetchType.LAZY, to reduce the memory footprint of your application. Hibernate offers additional fetch modes in addition to the commonly used JPA fetch types. Here, we will discuss how they are related and provide an explanation, so you understand when to use which.

JOIN fetch mode

The JOIN fetch type forces Hibernate to create a SQL join statement to populate both the entities and the related entities using just one SQL statement. However, the JOIN fetch mode also implies that the fetch type is EAGER, so there is no need to specify the fetch type. To understand this better, consider the following classes:

@Entity
public class Course {
  @Id @GeneratedValue
  private long id;
  private String title;
  @OneToMany(cascade=CascadeType.ALL, mappedBy="course")
  @Fetch(FetchMode.JOIN)
  private Set<Student> students = new HashSet<Student>();
  // getters and setters
}

@Entity
public class Student {
  @Id @GeneratedValue
  private long id;
  private String name;
  private char gender;
  @ManyToOne
  private Course course;
  // getters and setters
}

In this case, we are instructing Hibernate to use JOIN to fetch course and student in one SQL statement, and this is the SQL that is composed by Hibernate:

select course0_.id as id1_0_0_, course0_.title as title2_0_0_, students1_.course_id as course_i4_0_1_, students1_.id as id1_1_1_, students1_.gender as gender2_1_2_, students1_.name as name3_1_2_
from Course course0_
left outer join Student students1_ on course0_.id=students1_.course_id
where course0_.id=?

As you can see, Hibernate is using a left outer join to fetch the course and any students that may have signed up for it. Another important thing to note is that if you use HQL, Hibernate will ignore the JOIN fetch mode and you'll have to specify the join in the HQL. (We will discuss HQL in the next section.) In other words, if you fetch a course entity using a statement such as this:

List<Course> courses = session
  .createQuery("from Course c where c.id = :courseId")
  .setLong("courseId", chemistryId)
  .list();

Then, Hibernate will use SELECT mode; but if you don't use HQL, as shown in the next example, Hibernate will pay attention to the fetch mode instructions provided by the annotation.

Course course = (Course) session.get(Course.class, chemistryId);

SELECT fetch mode

In SELECT mode, Hibernate uses an additional SELECT statement to fetch the related entities. This mode doesn't affect the behavior of the fetch type (LAZY, EAGER), so they will work as expected. To demonstrate this, consider the same example used in the last section and let's examine the output:

select id, title from Course where id=?
select course_id, id, gender, name from Student where course_id=?

Note that Hibernate first fetches and populates the Course entity and then uses the course ID to fetch the related students. Also, if your fetch type is set to LAZY and you never reference the related entities, the second SELECT is never executed.
SUBSELECT fetch mode

The SUBSELECT fetch mode is used to minimize the number of SELECT statements executed to fetch the related entities. If you first fetch the owner entities and then try to access the associated owned entities, without SUBSELECT, Hibernate will issue an additional SELECT statement for every one of the owner entities. Using SUBSELECT, you instruct Hibernate to use a SQL sub-select to fetch all the owned entities for the list of owner entities already fetched. To understand this better, let's explore the following entity classes:

@Entity
public class Owner {
  @Id @GeneratedValue
  private long id;
  private String name;
  @OneToMany(cascade=CascadeType.ALL, mappedBy="owner")
  @Fetch(FetchMode.SUBSELECT)
  private Set<Car> cars = new HashSet<Car>();
  // getters and setters
}

@Entity
public class Car {
  @Id @GeneratedValue
  private long id;
  private String model;
  @ManyToOne
  private Owner owner;
  // getters and setters
}

If you try to fetch from the Owner table, Hibernate will only issue two select statements: one to fetch the owners and another to fetch the cars for those owners, by using a sub-select, as follows:

select id, name from Owner
select owner_id, id, model from Car where owner_id in (select id from Owner)

Without the SUBSELECT fetch mode, instead of the second select statement shown in the preceding section, Hibernate would execute a select statement for every entity returned by the first statement. This is known as the n+1 problem, where one SELECT statement is executed and then, for each returned entity, another SELECT statement is executed to fetch the associated entities. Finally, the SUBSELECT fetch mode is not supported for ToOne associations, such as OneToOne or ManyToOne, because it was designed for relationships where the ownership of the entities is clear.

Batch fetching

Another strategy offered by Hibernate is batch fetching. The idea is very similar to SUBSELECT, except that instead of using a sub-select, the entity IDs are explicitly listed in the SQL and the list size is determined by the @BatchSize annotation. This may perform slightly better for smaller batches. (Note that all the commercial database engines also perform query optimization.) To demonstrate this, let's consider the following entity classes:

@Entity
public class Owner {
  @Id @GeneratedValue
  private long id;
  private String name;
  @OneToMany(cascade=CascadeType.ALL, mappedBy="owner")
  @BatchSize(size=10)
  private Set<Car> cars = new HashSet<Car>();
  // getters and setters
}

@Entity
public class Car {
  @Id @GeneratedValue
  private long id;
  private String model;
  @ManyToOne
  private Owner owner;
  // getters and setters
}

Using @BatchSize, we are instructing Hibernate to fetch the related entities (cars) using a SQL statement that uses a where in clause, explicitly listing the IDs of the relevant owner entities, as shown:

select id, name from Owner
select owner_id, id, model from Car where owner_id in (?, ?)

In this case, the first select statement only returned two rows, but if it returned more than the batch size, there would be multiple select statements to fetch the owned entities, each fetching 10 entities at a time.

Summary

In this article, we covered many ways of fetching datasets from the database.

Resources for Article: Further resources on this subject: Hibernate Types[article] Java Hibernate Collections, Associations, and Advanced Concepts[article] Integrating Spring Framework with Hibernate ORM Framework: Part 1[article]
Getting started with Ember.js – Part 2

Daniel Ochoa
11 Jan 2016
5 min read
In Part 1 of this blog, we got started with Ember.js by examining how to set up your development environment from beginning to end with Ember.js using ember-cli, Ember's build tool. Ember-cli minifies and concatenates your JavaScript, giving you a strong conventional project structure and a powerful add-on system for extensions. In this Part 2 post, I'll guide you through setting up a very basic todo-like Ember.js application to get your feet wet with actual Ember.js development.

Setting up a more detailed overview for the posts

Feel free to change the title of our app header (see Part 1). Go to 'app/templates/application.hbs' and change the wording inside the h2 tag to something like 'Funny posts' or anything you'd like.

Let's change our app so that when a user clicks on the title of a post, it will take them to a different route based on the id of the post, for example, /posts/1bbe3. By doing so, we are telling Ember to display a different route and template. Next, let's run the following on the terminal:

ember generate route post

This will modify our app/router.js file by creating a route file for our post and a template. Let's go ahead and open the app/router.js file to make sure it looks like the following:

import Ember from 'ember';
import config from './config/environment';

var Router = Ember.Router.extend({
  location: config.locationType
});

Router.map(function() {
  this.resource('posts');
  this.route('post', {path: '/posts/:post_id'});
});

export default Router;

In the router file, we make sure the new 'post' route has a specific path by passing it a second argument: an object that contains a key called path with a value of '/posts/:post_id'. The colon in that path means the second part of the path after /posts/ is a dynamic URL. In this URL, we will be passing the id of the post so we can determine which specific post to load on our post route. (So far, we have posts and post routes, so don't get confused.)

Now, let's go to app/templates/posts.hbs and make sure we only have the following:

<ul>
  {{#each model as |post|}}
    {{#link-to 'post' post tagName='li'}}
      {{post.title}}
    {{/link-to}}
  {{/each}}
</ul>

As you can see, we replaced our <li> element with an Ember helper called 'link-to'. What link-to does is generate the link to our single post route for you. The first argument is the name of the route, 'post'; the second argument is the actual post itself; and in the last part of the helper, we are telling Handlebars to render the link-to as a <li> element by providing the tagName property. Ember is smart enough to know that if you link to a route and pass it an object, your intent is to set the model on that route to a single post.

Now open 'app/templates/post.hbs' and replace the contents with just the following:

{{model.title}}

Now if you refresh the app from '/posts' and click on a post title, you'll be taken to a different route and you'll see only the title of the post. What happens if you refresh the page at this URL? You'll see errors on the console and nothing will be displayed. This is because you arrived at this URL from the posts route, where you passed a single post as the argument to be the model for the current post route. When you hit refresh, you lose this step, so no model is set for the current route.
You can fix that by adding the following to 'app/routes/post.js':

import Ember from 'ember';

export default Ember.Route.extend({
  model(params) {
    return Ember.$.getJSON('https://www.reddit.com/tb/' + params.post_id + '.json?jsonp=?').then(result => {
      return result[0].data.children[0].data;
    });
  }
});

Now, whenever you refresh on a single post page, Ember will see that you don't have a model, so the model hook will be triggered on the route. In this case, it will grab the id of the post from the dynamic URL, which is passed as an argument to the model hook, and it will make a request to reddit for the relevant post. Notice that we are also returning the request promise and then filtering the results to only return the single post object we need.

Change the app/templates/post.hbs template to the following:

<div class="title">
  <h1>{{model.title}}</h1>
</div>
<div class="image">
  <img src="{{model.preview.images.firstObject.source.url}}" height="400"/>
</div>
<div class="author">
  submitted by: {{model.author}}
</div>

Now, if you look at an individual post, you'll get the title, image, and author for the post. Congratulations, you've built your first Ember.js application with dynamic data and routes. Hopefully, you now have a better grasp and understanding of some basic concepts for building more ambitious web applications using Ember.

About the Author: Daniel Ochoa is a senior software engineer at Frog with a passion for crafting beautiful web and mobile experiences. His current interests are Node.js, Ember.js, Ruby on Rails, iOS development with Swift, and the Haskell language. He can be found on Twitter @DanyOchoaOzz.