
How-To Tutorials - Front-End Web Development

341 Articles
Building a Simple Blog

Packt
15 May 2014
8 min read
(For more resources related to this topic, see here.)

Setting up the application

Every application has to be set up, so we'll begin with that. Create a folder for your project—I'll call mine simpleBlog—and inside that, create a file named package.json. If you've used Node.js before, you know that the package.json file describes the project; lists the project home page, repository, and other links; and (most importantly for us) outlines the dependencies for the application. Here's what the package.json file looks like:

```json
{
  "name": "simple-blog",
  "description": "This is a simple blog.",
  "version": "0.1.0",
  "scripts": {
    "start": "nodemon server.js"
  },
  "dependencies": {
    "express": "3.x.x",
    "ejs": "~0.8.4",
    "bourne": "0.3"
  },
  "devDependencies": {
    "nodemon": "latest"
  }
}
```

This is a pretty bare-bones package.json file, but it has all the important bits. The name, description, and version properties should be self-explanatory. The dependencies object lists all the npm packages that this project needs to run: the key is the name of the package and the value is the version. Since we're building an ExpressJS backend, we'll need the express package. The ejs package is for our server-side templates, and bourne is our database (more on this one later).

The devDependencies property is similar to the dependencies property, except that these packages are only required for someone working on the project; they aren't required to just use the project. For example, a build tool and its components, such as Grunt, would be development dependencies. We want to use a package called nodemon. This package is really handy when building a Node.js backend: we can have a command line that runs the nodemon server.js command in the background while we edit server.js in our editor. The nodemon package will restart the server whenever we save changes to the file.
The only problem with this is that we can't actually run the nodemon server.js command on the command line, because we're going to install nodemon as a local package and not a global process. This is where the scripts property in our package.json file comes in: we can write a simple script, almost like a command-line alias, to start nodemon for us. As you can see, we're creating a script called start, and it runs nodemon server.js. On the command line, we can run npm start; npm knows where to find the nodemon binary and can start it for us.

So, now that we have a package.json file, we can install the dependencies we've just listed. On the command line, change the current directory to the project directory and run the following command:

```
npm install
```

You'll see that all the necessary packages will be installed. Now we're ready to begin writing the code.

Starting with the server

I know you're probably eager to get started with the actual Backbone code, but it makes more sense for us to start with the server code. Remember, good Backbone apps will have strong server-side components, so we can't ignore the backend completely. We'll begin by creating a server.js file in our project directory. Here's how that begins:

```js
var express = require('express');
var path = require('path');
var Bourne = require("bourne");
```

If you've used Node.js, you know that the require function can be used to load Node.js components (path) or npm packages (express and bourne). Now that we have these packages in our application, we can begin using them as follows:

```js
var app = express();
var posts = new Bourne("simpleBlogPosts.json");
var comments = new Bourne("simpleBlogComments.json");
```

The first variable here is app. This is our basic Express application object, which we get when we call the express function. We'll be using it a lot in this file. Next, we create two Bourne objects. As I said earlier, Bourne is the database we'll use in our projects in this article.
This is a simple database that I wrote specifically for this article. To keep the server side as simple as possible, I wanted to use a document-oriented database system, but I wanted something serverless (in the sense of SQLite), so you wouldn't have to run both an application server and a database server. What I came up with, Bourne, is a small package that reads from and writes to a JSON file; the path to that JSON file is the parameter we pass to the constructor function. It's definitely not good for anything bigger than a small learning project, but it should be perfect for this article. In the real world, you can use one of the excellent document-oriented databases. I recommend MongoDB: it's really easy to get started with, and has a very natural API. Bourne isn't a drop-in replacement for MongoDB, but it's very similar. You can check out the simple documentation for Bourne at https://github.com/andrew8088/bourne.

So, as you can see here, we need two databases: one for our blog posts and one for comments (unlike most databases, Bourne has only one table or collection per database, hence the need for two). The next step is to write a little configuration for our application:

```js
app.configure(function () {
  app.use(express.json());
  app.use(express.static(path.join(__dirname, 'public')));
});
```

This is a very minimal configuration for an Express app, but it's enough for our usage here. We're adding two layers of middleware to our application; they are "mini-programs" that the HTTP requests coming to our application will run through before getting to our custom functions (which we have yet to write). We add two layers here: the first is express.json(), which parses the JSON request bodies that Backbone will send to the server; the second is express.static(), which will statically serve files from the path given as a parameter. This allows us to serve the client-side JavaScript files, CSS files, and images from the public folder.
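The middleware idea—each layer sees the request before your route handlers do, and passes control on by calling next()—can be sketched in a few lines of plain JavaScript. This is a simplification of what Express does internally, not its actual implementation:

```javascript
// A minimal middleware chain: each layer may inspect or modify
// the request, then passes control on by calling next().
function createApp() {
  var layers = [];
  return {
    use: function (fn) { layers.push(fn); },
    handle: function (req) {
      var i = 0;
      function next() {
        var layer = layers[i++];
        if (layer) layer(req, next);
      }
      next();
      return req;
    }
  };
}

var app = createApp();

// First layer: parse a JSON body, like express.json() does.
app.use(function (req, next) {
  req.body = JSON.parse(req.rawBody);
  next();
});

// Second layer: tag the request, standing in for any other middleware.
app.use(function (req, next) {
  req.seen = true;
  next();
});

var req = app.handle({ rawBody: '{"title":"First post"}' });
console.log(req.body.title); // First post
console.log(req.seen);       // true
```

By the time the request reaches a route handler, every registered layer has had its turn—which is exactly why express.json() must run before any handler that reads req.body.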
You'll notice that both these middleware pieces are passed to app.use(), which is the method we call to add them to our application. You'll also notice that we use the path.join() method to create the path to our public assets folder, instead of just concatenating __dirname and 'public'. This is because Microsoft Windows requires the separating slashes to be backslashes; the path.join() method will get it right for whatever operating system the code is running on. Oh, and __dirname (two underscores at the beginning) is just a variable holding the path to the directory this script is in. The next step is to create a route method:

```js
app.get('/*', function (req, res) {
  res.render("index.ejs");
});
```

In Express, we can create a route by calling a method on the app that corresponds to the desired HTTP verb (get, post, put, and delete). Here, we're calling app.get() and passing two parameters to it. The first is the route; it's the portion of the URL that will come after your domain name. In our case, we're using an asterisk, which is a catchall; it will match any route that begins with a forward slash (which will be all routes), so this will match every GET request made to our application. If an HTTP request matches the route, then the function, which is the second parameter, will be called. This function takes two parameters: the first is the request object from the client and the second is the response object that we'll use to send our response back. These are often abbreviated to req and res, but that's just a convention; you could call them whatever you want. So, we're going to use the res.render method, which will render a server-side template. Right now, we're passing a single parameter: the path to the template file. Actually, it's only part of the path, because Express assumes by default that templates are kept in a directory named views, a convention we'll be using.
Express can guess the template package to use based on the file extension; that's why we don't have to select EJS as the template engine anywhere. If we had values that we wanted to interpolate into our template, we would pass a JavaScript object as the second parameter. We'll come back and do this a little later. Finally, we can start up our application; I'll choose to use port 3000:

```js
app.listen(3000);
```

We'll be adding a lot more to our server.js file later, but this is what we'll start with. Actually, at this point, you can run npm start on the command line and open up http://localhost:3000 in a browser. You'll get an error because we haven't made the view template file yet, but you can see that our server is working.
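To picture what the template engine will do for us once we pass it data, here is a stripped-down stand-in for EJS-style interpolation—real EJS does far more (escaping, logic tags, includes), but the core idea is substituting `<%= … %>` tags with values from the object passed to res.render():

```javascript
// A toy version of EJS-style interpolation: replace every
// <%= key %> tag with the matching property of the data object.
function renderString(template, data) {
  return template.replace(/<%=\s*(\w+)\s*%>/g, function (match, key) {
    return data[key] !== undefined ? data[key] : '';
  });
}

var html = renderString('<h1><%= title %></h1>', { title: 'Simple Blog' });
console.log(html); // <h1>Simple Blog</h1>
```

When we later call res.render("index.ejs", { title: "Simple Blog" }), EJS performs this substitution (plus a lot more) against the template file in the views directory.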


Using the WebRTC Data API

Packt
09 May 2014
10 min read
(For more resources related to this topic, see here.)

What is WebRTC?

Web Real-Time Communication (WebRTC) is a new open framework for the Web, still under active development, that enables browser-to-browser applications for audio/video calling, video chat, and peer-to-peer file sharing without any additional third-party software or plugins. It was open sourced by Google in 2011 and includes the fundamental building components for high-quality communications on the Web. These components, when implemented in a browser, can be accessed through a JavaScript API, enabling developers to build their own rich media web applications. Google, Mozilla, and Opera support WebRTC and are involved in the development process. The major components of the WebRTC API are as follows:

- getUserMedia: This allows a web browser to access the camera and microphone
- PeerConnection: This sets up audio/video calls
- DataChannels: This allows browsers to share data via a peer-to-peer connection

Benefits of using WebRTC in your business

- Reducing costs: It is a free and open source technology. You don't need to pay for complex proprietary solutions. IT deployment and support costs can be lowered because you no longer need to deploy special client software for your customers.
- No plugins: Before now, you had to use Flash, Java applets, or other tricky solutions to build interactive rich media web applications, and customers had to download and install third-party plugins to be able to use your media content. You also had to keep in mind different solutions/plugins for a variety of operating systems and platforms. Now you don't need to worry about any of that.
- Peer-to-peer communication: In most cases, communication is established directly between your customers, with no need for a middle point.
- Easy to use: You don't need to be a professional programmer or have a team of certified developers with specific knowledge.
In a basic case, you can easily integrate WebRTC functionality into your web services/sites by using the open JavaScript API or even a ready-to-go framework.

- A single solution for all platforms: You don't need to develop special native versions of your web service for different platforms (iOS, Android, Windows, or any other). WebRTC is developed to be a cross-platform, universal tool.
- WebRTC is open source and free: The community can discover new bugs and solve them effectively and quickly. Moreover, it is developed and standardized by Mozilla, Google, and Opera—world-class software companies.

Topics

The article covers the following topics:

- Developing a WebRTC application: You will learn the basics of the technology and build a complete audio/video conference real-life web application. We will also talk about SDP (Session Description Protocol), signaling, client-server interoperation, and configuring STUN and TURN servers.
- Data API: You will learn how to build a peer-to-peer, cross-platform file sharing web service using the WebRTC Data API.
- Media streaming and screen casting: This introduces you to streaming prerecorded media content peer-to-peer and to desktop sharing. In this article, you will build a simple application that provides this kind of functionality.
- Security and authentication: Nowadays, these are very important topics, and you definitely don't want to forget them while developing your applications. So, in this article, you will learn how to make your WebRTC solutions secure, why authentication might be very important, and how you can implement this functionality in your products.
- Mobile: Mobile platforms are literally a part of our lives, so it's important to make your interactive application work well on mobile devices too. This article will introduce you to aspects that will help you develop great WebRTC products with mobile devices in mind.

Session Description Protocol

SDP is an important part of the WebRTC stack.
It is used to negotiate session/media options when establishing a peer connection. It is a protocol intended for describing multimedia communication sessions for the purposes of session announcement, session invitation, and parameter negotiation. It does not deliver media data itself, but is used for negotiation between peers of the media type, format, and all associated properties/options (resolution, encryption, codecs, and so on). The set of properties and parameters is usually called a session profile. Peers have to exchange SDP data over the signaling channel before they can establish a direct connection. The following is an example of an SDP offer:

```
v=0
o=alice 2890844526 2890844526 IN IP4 host.atlanta.example.com
s=
c=IN IP4 host.atlanta.example.com
t=0 0
m=audio 49170 RTP/AVP 0 8 97
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:97 iLBC/8000
m=video 51372 RTP/AVP 31 32
a=rtpmap:31 H261/90000
a=rtpmap:32 MPV/90000
```

Here we can see that this is a video and audio session, and multiple codecs are offered. The following is an example of an SDP answer:

```
v=0
o=bob 2808844564 2808844564 IN IP4 host.biloxi.example.com
s=
c=IN IP4 host.biloxi.example.com
t=0 0
m=audio 49174 RTP/AVP 0
a=rtpmap:0 PCMU/8000
m=video 49170 RTP/AVP 32
a=rtpmap:32 MPV/90000
```

Here we can see that only one codec per media stream is accepted in reply to the offer above. You can find more SDP session examples at https://www.rfc-editor.org/rfc/rfc4317.txt. You can also find in-depth details on SDP in the appropriate RFC at http://tools.ietf.org/html/rfc4566.

Configuring and installing your own STUN server

As you already know, it is important to have access to a STUN/TURN server to work with peers located behind a NAT or firewall. In this article, while developing our application, we used public STUN servers (actually, they are public Google servers accessible from other networks). Nevertheless, if you plan to build your own service, you should install your own STUN/TURN server.
This way your application will not depend on a server you can't control. Today we have public STUN servers from Google; tomorrow they could be switched off. So, the right way is to have your own STUN/TURN server. In this section, you will be introduced to installing a STUN server, as it is the simpler case. There are several implementations of STUN servers that can be found on the Internet. You can take one from http://www.stunprotocol.org. It is cross-platform and can be used under Windows, Mac OS X, or Linux. To start the STUN server, you should use the following command line:

```
stunserver --mode full --primaryinterface x1.x1.x1.x1 --altinterface x2.x2.x2.x2
```

Please pay attention to the fact that you need two IP addresses on your machine to run a STUN server; this is mandatory for the STUN protocol to work correctly. The machine can have only one physical network interface, but it should then have a network alias with an IP address different from the one used on the main network interface.

WebSocket

WebSocket is a protocol that provides full-duplex communication channels over a single TCP connection. It is a relatively young protocol, but today all major web browsers, including Chrome, Internet Explorer, Opera, Firefox, and Safari, support it. WebSocket is a replacement for long-polling to get two-way communication between browser and server. In this article, we will use WebSocket as a transport channel to develop a signaling server for our videoconference service; our peers will communicate with the signaling server through it. The two important benefits of WebSocket are that it supports HTTPS (secure channel) and can be used via a web proxy (nevertheless, some proxies can block the WebSocket protocol).

NAT traversal

WebRTC has a built-in mechanism to use NAT traversal options such as STUN and TURN servers. In this article, we used public STUN (Session Traversal Utilities for NAT) servers, but in real life you should install and configure your own STUN or TURN (Traversal Using Relay NAT) server.
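In the browser, the STUN/TURN servers discussed above are handed to the RTCPeerConnection constructor as configuration. The following is a sketch of what that configuration object looks like—the server URLs and credentials here are placeholders, not real servers:

```javascript
// RTCPeerConnection configuration listing a STUN server first and a
// TURN server as the relay fallback. URLs and credentials are placeholders.
var rtcConfig = {
  iceServers: [
    { urls: 'stun:stun.example.org:3478' },
    {
      urls: 'turn:turn.example.org:3478',
      username: 'demo',
      credential: 'secret'
    }
  ]
};

// In a browser you would then create the connection like this:
// var pc = new RTCPeerConnection(rtcConfig);
console.log(rtcConfig.iceServers.length); // 2
```

Given such a list, the browser's ICE machinery tries direct and STUN-derived candidates first and falls back to relaying through the TURN server only when a direct path cannot be established.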
In most cases, you will use a STUN server. It helps to perform NAT/firewall traversal and establish a direct connection between peers. In other words, the STUN server is utilized only during the connection-establishing stage; after the connection has been established, peers transfer media data directly between themselves. In some cases (unfortunately, they are not so rare), a STUN server won't help you get through a firewall or NAT, and establishing a direct connection between peers will be impossible—for example, if both peers are behind symmetric NAT. In this case a TURN server can help. A TURN server works as a relay between peers: when one is used, all media data between peers is transmitted through the TURN server. If your application gives a list of several STUN/TURN servers to the WebRTC API, the web browser will try to use the STUN servers first and, in case the connection fails, will automatically fall back to the TURN servers.

Preparing the environment

We can prepare the environment by performing the following steps:

1. Create a folder for the whole application somewhere on your disk. Let's call it my_rtc_project.
2. Make a directory named my_rtc_project/www; here, we will put all the client-side code (JavaScript files or HTML pages).
3. The signaling server's code will be placed in its own folder, so create the directory my_rtc_project/apps/rtcserver/src for it.

Kindly note that we will use Git, which is a free and open source distributed version control system. On Linux boxes it can be installed using the default package manager. For Windows systems, I recommend installing and using this implementation: https://github.com/msysgit/msysgit. If you're using a Windows box, install msysgit and add the path to its bin folder to your PATH environment variable.

Installing Erlang

The signaling server is developed in the Erlang language.
Erlang is a great choice for developing server-side applications for the following reasons:

- It is very comfortable and easy to use for prototyping
- Its processes (actors) are very lightweight and cheap
- It supports network operations with no need for any external libraries
- The code is compiled to bytecode that runs on a very powerful Erlang Virtual Machine

Some great projects

The following projects are developed using Erlang:

- Yaws and Cowboy: These are web servers
- Riak and CouchDB: These are distributed databases
- Cloudant: This is a database service based on a fork of CouchDB
- Ejabberd: This is an XMPP instant messaging service
- Zotonic: This is a content management system
- RabbitMQ: This is a message bus
- Wings 3D: This is a 3D modeler
- GitHub: This is a web-based hosting service for software development projects that use Git. GitHub uses Erlang for RPC proxies to Ruby processes
- WhatsApp: This is a famous mobile messenger, sold to Facebook
- Call of Duty: This computer game uses Erlang on the server side
- Goldman Sachs: High-frequency trading computer programs

A very brief history of Erlang

- 1982 to 1985: During this period, Ericsson starts experimenting with the programming of telecom. Existing languages do not suit the task.
- 1985 to 1986: During this period, Ericsson decides they must develop their own language with desirable features from Lisp, Prolog, and Parlog. The language should have built-in concurrency and error recovery.
- 1987: In this year, the first experiments with the new language Erlang are conducted.
- 1988: In this year, Erlang is first used by external users outside the lab.
- 1989: In this year, Ericsson works on a fast implementation of Erlang.
- 1990: In this year, Erlang is presented at ISS'90 and gets new users.
- 1991: In this year, a fast implementation of Erlang is released to users. Erlang is presented at Telecom'91, and now has a compiler and a graphical interface.
- 1992: In this year, Erlang gets a lot of new users.
Ericsson ported Erlang to new platforms including VxWorks and Macintosh.

- 1993: In this year, Erlang gets distribution, which makes it possible to run a homogeneous Erlang system on heterogeneous hardware. Ericsson starts selling Erlang implementations and Erlang Tools, and a separate organization within Ericsson provides support.

Erlang is supported on many platforms. You can download and install it from the main website: http://www.erlang.org.

Summary

In this article, we have discussed the WebRTC technology and the WebRTC API in detail.

Resources for Article:

Further resources on this subject:

- Applying WebRTC for Education and E-learning [Article]
- Spring Roo 1.1: Working with Roo-generated Web Applications [Article]
- WebSphere MQ Sample Programs [Article]


Bootstrap 3 and other applications

Packt
21 Apr 2014
10 min read
(For more resources related to this topic, see here.)

Bootstrap 3

Bootstrap 3, formerly known as Twitter's Bootstrap, is a CSS and JavaScript framework for building application frontends. The third version of Bootstrap has important changes over the earlier versions of the framework and is not compatible with them. Bootstrap 3 can be used to build great frontends. You can download the complete framework, including CSS and JavaScript, and start using it right away. Bootstrap also has a grid. The grid of Bootstrap is mobile-first by default and has 12 columns. In fact, Bootstrap defines four grids: the extra-small grid up to 768 pixels (mobile phones), the small grid between 768 and 992 pixels (tablets), the medium grid between 992 and 1200 pixels (desktops), and finally, the large grid of 1200 pixels and above for large desktops. The grid, all other CSS components, and the JavaScript plugins are described and well documented at http://getbootstrap.com/. Bootstrap's default theme looks like the following screenshot:

Example of a layout built with Bootstrap 3

The time when all Bootstrap websites looked quite similar is far behind us now. Bootstrap will give you all the freedom you need to create innovative designs. There is much more to tell about Bootstrap, but for now, let's get back to Less.

Working with Bootstrap's Less files

All the CSS code of Bootstrap is written in Less. You can download Bootstrap's Less files and recompile your own version of the CSS. The Less files can be used to customize, extend, and reuse Bootstrap's code. In the following sections, you will learn how to do this. To download the Less files, follow the links at http://getbootstrap.com/ to Bootstrap's GitHub pages at https://github.com/twbs/bootstrap. On this page, choose Download Zip in the right-hand side column.

Building a Bootstrap project with Grunt

After downloading the files mentioned earlier, you can build a Bootstrap project with Grunt.
Grunt is a JavaScript task runner; it can be used for the automation of your processes. Grunt helps you perform repetitive tasks such as minifying, compiling, unit testing, and linting your code. Grunt runs on node.js and uses npm, which you saw while installing the Less compiler. Node.js is a standalone JavaScript interpreter built on Google's V8 JavaScript runtime, as used in Chrome. Node.js can be used for easily building fast, scalable network applications.

When you unzip the downloaded file, you will find Gruntfile.js and package.json among others. The package.json file contains the metadata for projects published as npm modules. The Gruntfile.js file is used to configure or define tasks and load Grunt plugins. The Bootstrap Grunt configuration is a great example of how to set up automated testing for projects containing HTML, Less (CSS), and JavaScript. The parts that are interesting for you as a Less developer are mentioned in the following sections.

In the package.json file, you will find that Bootstrap compiles its Less files with grunt-contrib-less. At the time of writing this article, the grunt-contrib-less plugin compiles Less with less.js version 1.7. In contrast to Recess (another JavaScript build tool previously used by Bootstrap), grunt-contrib-less also supports source maps. Apart from grunt-contrib-less, Bootstrap also uses grunt-contrib-csslint to check the compiled CSS for syntax errors. The grunt-contrib-csslint plugin also helps improve browser compatibility, performance, maintainability, and accessibility. The plugin's rules are based on the principles of object-oriented CSS (http://www.slideshare.net/stubbornella/object-oriented-css). You can find more information by visiting https://github.com/stubbornella/csslint/wiki/Rules. Bootstrap makes heavy use of Less variables, which can be set by the customizer.
Whoever has studied the source of Gruntfile.js may very well also find a reference to the BsLessdocParser Grunt task. This Grunt task is used to build Bootstrap's customizer dynamically, based on the Less variables used by Bootstrap. Though the process of parsing Less variables to build, for instance, documentation is very interesting, this task is not discussed here further. This section ends with the part of Gruntfile.js that does the Less compiling. The following code from Gruntfile.js should give you an impression of how this code looks:

```js
less: {
  compileCore: {
    options: {
      strictMath: true,
      sourceMap: true,
      outputSourceFiles: true,
      sourceMapURL: '<%= pkg.name %>.css.map',
      sourceMapFilename: 'dist/css/<%= pkg.name %>.css.map'
    },
    files: {
      'dist/css/<%= pkg.name %>.css': 'less/bootstrap.less'
    }
  }
}
```

Last but not least, let's have a look at the basic steps to run Grunt from the command line and build Bootstrap. Grunt will be installed with npm. Npm checks Bootstrap's package.json file and automatically installs the necessary local dependencies listed there. To build Bootstrap with Grunt, you will have to enter the following commands on the command line:

```
> npm install -g grunt-cli
> cd /path/to/extracted/files/bootstrap
```

After this, you can compile the CSS and JavaScript by running the following command:

```
> grunt dist
```

This will compile your files into the /dist directory. The grunt test command will also run the built-in tests.

Compiling your Less files

Although you can build Bootstrap with Grunt, you don't have to use Grunt. You will find the Less files in a separate directory called /less inside the root /bootstrap directory. The main project file is bootstrap.less; the other files will be explained in the next section.
You can include bootstrap.less together with less.js into your HTML for the purpose of testing, as follows:

```html
<link rel="stylesheet/less" type="text/css" href="less/bootstrap.less" />
<script type="text/javascript">less = { env: 'development' };</script>
<script src="less.js" type="text/javascript"></script>
```

Of course, you can compile this file server side too, as follows:

```
lessc bootstrap.less > bootstrap.css
```

Dive into Bootstrap's Less files

Now it's time to look at Bootstrap's Less files in more detail. The /less directory contains a long list of files. You will recognize some files by their names. You have seen files such as variables.less, mixins.less, and normalize.less earlier. Open bootstrap.less to see how the other files are organized. The comments inside bootstrap.less tell you that the Less files are organized by functionality, as shown in the following code snippet:

```less
// Core variables and mixins
// Reset
// Core CSS
// Components
```

Although Bootstrap is strongly CSS-based, some of the components don't work without the related JavaScript plugins. The navbar component is an example of this. Bootstrap's plugins require jQuery. You can't use the newest 2.x version of jQuery, because that version doesn't support Internet Explorer 8. To compile your own version of Bootstrap, you have to change the variables defined in variables.less. Using the last-declaration-wins and lazy-loading rules, it is easy to redeclare some variables.

Creating a custom button with Less

By default, Bootstrap defines seven different buttons, as shown in the following screenshot:

The seven different button styles of Bootstrap 3

Please take a look at the following HTML structure of Bootstrap's buttons before you start writing your Less code:

```html
<!-- Standard button -->
<button type="button" class="btn btn-default">Default</button>
```

A button has two classes. Globally, the first .btn class only provides layout styles, and the second .btn-default class adds the colors.
In this example, you will only change the colors; the button's layout will be kept intact. Open buttons.less in your text editor. In this file, you will find the following Less code for the different buttons:

```less
// Alternate buttons
// --------------------------------------------------

.btn-default {
  .button-variant(@btn-default-color; @btn-default-bg; @btn-default-border);
}
```

The preceding code makes it clear that you can use the .button-variant() mixin to create your customized buttons. For instance, to define a custom button, you can use the following Less code:

```less
// Customized colored button
// --------------------------------------------------

.btn-colored {
  .button-variant(blue; red; green);
}
```

If you want to extend Bootstrap with this customized button, add your code to a new file and call this file custom.less. Appending @import "custom.less"; to the list of components inside bootstrap.less will work well. The disadvantage of doing this is that you will have to change bootstrap.less again when updating Bootstrap; so, alternatively, you could create a file such as custombootstrap.less, which contains the following code:

```less
@import "bootstrap.less";
@import "custom.less";
```

The previous step extends Bootstrap with a custom button; alternatively, you could also change the colors of the default button by redeclaring its variables. To do this, create a new file, again called custombootstrap.less, and add the following code into it:

```less
@import "bootstrap.less";

//== Buttons
//
//## For each of Bootstrap's buttons, define text, background, and border color.

@btn-default-color: blue;
@btn-default-bg: red;
@btn-default-border: green;
```

In some situations, you will, for instance, need to use the button styles without everything else of Bootstrap. In these situations, you can use the reference keyword with the @import directive.
You can use the following Less code to create a Bootstrap button for your project:

```less
@import (reference) "bootstrap.less";
.btn:extend(.btn){};
.btn-colored {
  .button-variant(blue; red; green);
}
```

You can see the result of the preceding code by visiting http://localhost/index.html in your browser. Notice that, depending on the version of less.js you use, you may find some unexpected classes in the compiled output; media queries or extended classes sometimes break the referencing in older versions of less.js.

Use CSS source maps for debugging

When working with large Less code bases, finding the original source can become complex when viewing your results in the browser. Since version 1.5, Less offers support for CSS source maps. CSS source maps enable developer tools to map calls back to their location in the original source files. This also works for compressed files. The latest versions of Google's Chrome Developer Tools offer support for these source files. Currently, CSS source map debugging won't work for client-side compiling as used for the examples in this book, but the server-side lessc compiler can generate useful CSS source maps. After installing the lessc compiler, you can run:

```
lessc --source-map=styles.css.map styles.less > styles.css
```

The preceding command will generate two files: styles.css.map and styles.css. The last line of styles.css now contains an extra comment which refers to the source map:

```css
/*# sourceMappingURL=styles.css.map */
```

In your HTML, you only have to include styles.css as you used to:

```html
<link href="styles.css" rel="stylesheet">
```

When using CSS source maps as described earlier and inspecting your HTML with Google's Chrome Developer Tools, you will see something like the following screenshot:

Inspect source with Google's Chrome Developer Tools and source maps

As you can see, styles now have a reference to their original Less file, such as grid.less, including the line number, which helps you in the process of debugging.
The styles.css.map file should be in the same directory as the styles.css file. You don't have to include your LESS files in this directory. Summary This article has covered the concept of Bootstrap, how to use Bootstrap's Less files, and how the files can be modified to be used according to your convenience. Resources for Article: Further resources on this subject: Getting Started with Bootstrap [Article] Bootstrap 3.0 is Mobile First [Article] Downloading and setting up Bootstrap [Article]
Services

Packt
19 Mar 2014
12 min read
(For more resources related to this topic, see here.)

Services

A service is just a specific instance of a given class. For example, whenever you access Doctrine with $this->get('doctrine'); in a controller, you are accessing a service. This service is an instance of the Doctrine EntityManager class, but you never have to create this instance yourself. The code needed to create this entity manager is actually not that simple, since it requires a connection to the database, some other configuration, and so on. Without this service already being defined, you would have to create the instance in your own code, and might have to repeat this initialization in each controller, thus making your application messier and harder to maintain. Some of the default services present in Symfony2 are as follows:

The annotation reader
Assetic—the asset management library
The event dispatcher
The form widgets and form factory
The Symfony2 Kernel and HttpKernel
Monolog—the logging library
The router
Twig—the templating engine

The Symfony2 framework makes it very easy to create new services. If we have a controller that has started to become quite messy with long code, a good way to refactor it and make it simpler is to move some of the code to services. We have described all these services starting with "the" and a singular noun. This is because, most of the time, services are singleton objects where a single instance is needed.

A geolocation service

In this example, we imagine an application for listing events, which we will call "meetups". The controller first retrieves the current user's IP address, uses it as basic information to determine the user's location, and only displays meetups within 50 km of the user's current location. Currently, the code is all set up in the controller. As it is, the controller is not actually that long yet: it has a single method, and the whole class is around 50 lines of code.
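The distance-to-degrees conversion this controller relies on is worth spelling out. The sketch below (plain JavaScript with hypothetical values; the article's code is PHP) shows the arithmetic behind a rough N-km square around a point, assuming roughly 111 km per degree of latitude, with longitude degrees shrinking by the cosine of the latitude. The article's controller simply hardcodes offsets of 0.25 and 0.3 degrees, which matches mid-latitudes:

```javascript
// Rough bounding-box arithmetic (assumption: ~111 km per degree of
// latitude; a longitude degree spans 111 * cos(latitude) km).
function geoBoundaries(lat, long, km) {
  var latOffset = km / 111;
  var longOffset = km / (111 * Math.cos(lat * Math.PI / 180));
  return {
    lat_min: lat - latOffset, lat_max: lat + latOffset,
    long_min: long - longOffset, long_max: long + longOffset
  };
}

var box = geoBoundaries(40.0, 116.4, 25); // hypothetical point near 40°N
console.log(box.lat_max - box.lat_min);   // ~0.45 degrees of latitude
```

This is only an approximation (it ignores the Earth's ellipsoid shape and breaks down near the poles), but it is more than accurate enough for a "meetups near me" query.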
However, the code could easily grow out of control when you start to add more code: to list only the types of meetups that are the user's favorites or the ones they attended the most, or to mix that information into complex calculations as to which meetups might be the most relevant to this specific user. There are many ways to refactor this simple example. The geocoding logic could just be put in a separate method for now, and this would be a good step, but let's plan for the future and move some of the logic to the services where it belongs. Our current code is as follows:

use Geocoder\HttpAdapter\CurlHttpAdapter;
use Geocoder\Geocoder;
use Geocoder\Provider\FreeGeoIpProvider;

public function indexAction()
{

Initialize our geocoding tools (based on the excellent geocoding library at http://geocoder-php.org/) using the following code:

$adapter = new CurlHttpAdapter();
$geocoder = new Geocoder();
$geocoder->registerProviders(array(
    new FreeGeoIpProvider($adapter),
));

Retrieve our user's IP address using the following code:

$ip = $this->get('request')->getClientIp();
// Or use a default one
if ($ip == '127.0.0.1') {
    $ip = '114.247.144.250';
}

Get the coordinates and adapt them using the following code so that they are roughly a square of 50 km on each side:

$result = $geocoder->geocode($ip);
$lat = $result->getLatitude();
$long = $result->getLongitude();
$lat_max = $lat + 0.25; // (Roughly 25km)
$lat_min = $lat - 0.25;
$long_max = $long + 0.3; // (Roughly 25km)
$long_min = $long - 0.3;

Create a query based on all this information using the following code:

$em = $this->getDoctrine()->getManager();
$qb = $em->createQueryBuilder();
$qb->select('e')
    ->from('KhepinBookBundle:Meetup', 'e')
    ->where('e.latitude < :lat_max')
    ->andWhere('e.latitude > :lat_min')
    ->andWhere('e.longitude < :long_max')
    ->andWhere('e.longitude > :long_min')
    ->setParameters([
        'lat_max' => $lat_max,
        'lat_min' => $lat_min,
        'long_max' => $long_max,
        'long_min' => $long_min
    ]);

Retrieve the results and pass them to the template using the following code:

$meetups = $qb->getQuery()->execute();
return ['ip' => $ip, 'result' => $result, 'meetups' => $meetups];
}

The first thing we want to do is get rid of the geocoding initialization. It would be great to have all of this taken care of automatically, so we would just access the geocoder with $this->get('geocoder');. You can define your services directly in the config.yml file of Symfony under the services key, as follows:

services:
    geocoder:
        class: Geocoder\Geocoder

That is it! We defined a service that can now be accessed in any of our controllers. Our code now looks as follows:

// Create the geocoding class
$adapter = new Geocoder\HttpAdapter\CurlHttpAdapter();
$geocoder = $this->get('geocoder');
$geocoder->registerProviders(array(
    new Geocoder\Provider\FreeGeoIpProvider($adapter),
));

Well, I can see you rolling your eyes, thinking that it is not really helping so far. That's because initializing the geocoder is a bit more complex than just writing new Geocoder\Geocoder(). It needs another class to be instantiated and then passed as a parameter to a method.
The good news is that we can do all of this in our service definition by modifying it as follows:

services:
    # Defines the adapter class
    geocoder_adapter:
        class: Geocoder\HttpAdapter\CurlHttpAdapter
        public: false
    # Defines the provider class
    geocoder_provider:
        class: Geocoder\Provider\FreeGeoIpProvider
        public: false
        # The provider class is passed the adapter as an argument
        arguments: [@geocoder_adapter]
    geocoder:
        class: Geocoder\Geocoder
        # We call a method on the geocoder after initialization to set up the
        # right parameters
        calls:
            - [registerProviders, [[@geocoder_provider]]]

It's a bit longer than before, but it is code that we never have to write anywhere else ever again. A few things to notice are as follows:

We actually defined three services, as our geocoder requires two other classes to be instantiated.
We used the @service_name notation to pass a reference to a service as an argument to another service.
We can do more than just defining new Class($argument); we can also call a method on the class after it is instantiated. It is even possible to set properties directly when they are declared as public.
We marked the first two services as private. This means that they won't be accessible in our controllers. They can, however, be used by the Dependency Injection Container (DIC) to be injected into other services.

Our code now looks as follows:

// Retrieve current user's IP address
$ip = $this->get('request')->getClientIp();
// Or use a default one
if ($ip == '127.0.0.1') {
    $ip = '114.247.144.250';
}
// Find the user's coordinates
$result = $this->get('geocoder')->geocode($ip);
$lat = $result->getLatitude();
// ... Remaining code is unchanged

Here, our controllers extend the BaseController class, which has access to the DIC since it implements the ContainerAware interface.
All calls to $this->get('service_name') are proxied to the container, which constructs (if needed) and returns the service. Let's go one step further and define our own class that will directly get the user's IP address and return an array of maximum and minimum longitudes and latitudes. We will create the following class:

namespace Khepin\BookBundle\Geo;

use Geocoder\Geocoder;
use Symfony\Component\HttpFoundation\Request;

class UserLocator
{
    protected $geocoder;
    protected $user_ip;

    public function __construct(Geocoder $geocoder, Request $request)
    {
        $this->geocoder = $geocoder;
        $this->user_ip = $request->getClientIp();
        if ($this->user_ip == '127.0.0.1') {
            $this->user_ip = '114.247.144.250';
        }
    }

    public function getUserGeoBoundaries($precision = 0.3)
    {
        // Find the user's coordinates
        $result = $this->geocoder->geocode($this->user_ip);
        $lat = $result->getLatitude();
        $long = $result->getLongitude();
        $lat_max = $lat + 0.25; // (Roughly 25km)
        $lat_min = $lat - 0.25;
        $long_max = $long + 0.3; // (Roughly 25km)
        $long_min = $long - 0.3;
        return ['lat_max' => $lat_max, 'lat_min' => $lat_min,
            'long_max' => $long_max, 'long_min' => $long_min];
    }
}

It takes our geocoder and request variables as arguments, and then does all the heavy work we were doing in the controller at the beginning of the article. Just as we did before, we will define this class as a service, as follows, so that it becomes very easy to access from within the controllers:

# config.yml
services:
    #...
    user_locator:
        class: Khepin\BookBundle\Geo\UserLocator
        scope: request
        arguments: [@geocoder, @request]

Notice that we have defined the scope here. The DIC has two scopes by default: container and prototype, to which the framework also adds a third one named request.
The following table shows their differences:

Scope      Differences
Container  All calls to $this->get('service_name') return the same instance of the service.
Prototype  Each call to $this->get('service_name') returns a new instance of the service.
Request    Each call to $this->get('service_name') returns the same instance of the service within a request. Symfony can have subrequests (such as including a controller in Twig).

Now, the advantage is that the service knows everything it needs by itself, but it also becomes unusable in contexts where there are no requests. If we wanted to create a command that gets all users' last-connected IP addresses and sends them a newsletter of the meetups around them on the weekend, this design would prevent us from using the Khepin\BookBundle\Geo\UserLocator class to do so. As we have seen, by default, the services are in the container scope, which means they will only be instantiated once and then reused, therefore implementing the singleton pattern. It is also important to note that the DIC does not create all the services immediately, but only on demand. If your code in a different controller never tries to access the user_locator service, then that service and all the other ones it depends on (geocoder, geocoder_provider, and geocoder_adapter) will never be created. Also, remember that the configuration from config.yml is cached in a production environment, so there is little to no overhead in defining these services.
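The lazy, build-once behavior of the container scope described above is easy to picture with a toy implementation. The following is a JavaScript sketch of the idea only; Symfony's real DIC is far more featureful, and all the names here are made up:

```javascript
// Toy service container: definitions are factories, and instances are
// created lazily on first get() and cached afterwards (the "container"
// scope). Nothing is built for services that are never requested.
function Container() {
  this.definitions = {};
  this.instances = {};
}
Container.prototype.register = function (name, factory) {
  this.definitions[name] = factory;
};
Container.prototype.get = function (name) {
  if (!(name in this.instances)) {
    // The factory only runs on first request, and can ask the container
    // for the services it depends on.
    this.instances[name] = this.definitions[name](this);
  }
  return this.instances[name];
};

var c = new Container();
c.register('geocoder_adapter', function () {
  return { kind: 'curl' };
});
c.register('geocoder', function (container) {
  return { adapter: container.get('geocoder_adapter') };
});

var first = c.get('geocoder');
var second = c.get('geocoder');
console.log(first === second); // true: built once, then reused
```

Note how registering geocoder_adapter costs nothing until something calls get('geocoder'), mirroring the on-demand behavior of the real container.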
Our controller looks a lot simpler now and is as follows: $boundaries = $this->get('user_locator')->getUserGeoBoundaries(); // Create our database query $em = $this->getDoctrine()->getManager(); $qb = $em->createQueryBuilder(); $qb->select('e')     ->from('KhepinBookBundle:Meetup', 'e')     ->where('e.latitude < :lat_max')     ->andWhere('e.latitude > :lat_min')     ->andWhere('e.longitude < :long_max')     ->andWhere('e.longitude > :long_min')     ->setParameters($boundaries); // Retrieve interesting meetups $meetups = $qb->getQuery()->execute(); return ['meetups' => $meetups]; The longest part here is the doctrine query, which we could easily put on the repository class to further simplify our controller. As we just saw, defining and creating services in Symfony2 is fairly easy and inexpensive. We created our own UserLocator class, made it a service, and saw that it can depend on our other services such as @geocoder service. We are not finished with services or the DIC as they are the underlying part of almost everything related to extending Symfony2. Summary In this article, we saw the importance of services and also had a look at the geolocation service. We created a class, made it a service, and saw how it can depend on our other services. Resources for Article: Further resources on this subject: Developing an Application in Symfony 1.3 (Part 1) [Article] Developing an Application in Symfony 1.3 (Part 2) [Article] User Interaction and Email Automation in Symfony 1.3: Part1 [Article]
Form Handling

Packt
20 Feb 2014
22 min read
(For more resources related to this topic, see here.) Collecting user data is a basic function of many websites and web applications, from simple data collection techniques such as registration or login information, to more complex scenarios such as payment or billing information. It is important that only relevant and complete information is collected from the user. To ensure this, the web developer must enforce validation on all data input. It is also important to provide a good user experience while enforcing this data integrity. This can be done by providing useful feedback to the user regarding any validation errors their data may have caused. This article will show you how to create an attractive web form that enforces data integrity while keeping a high-quality user experience. A very important point to note is that any JavaScript or jQuery validation is open to manipulation by the user. JavaScript and jQuery resides within the web browser, so a user with little knowledge can easily modify the code to bypass any client-side validation techniques. This means that client-side validation cannot be totally relied on to prevent the user from submitting invalid data. Any validation done within the client side must be replicated on the server, which is not open for manipulation by the user. We use client-side validation to improve the user experience. Because of this, the user does not need to wait for a server response. Implementing basic form validation At the most basic level of form validation, you will need to be able to prevent the user from submitting empty values. This recipe will provide the HTML and CSS code for a web form that will be used for recipes 1 through 8 of this article. Getting ready Using your favorite text editor or IDE, create a blank HTML page in an easily accessible location and save this file as recipe-1.html. Ensure that you have the latest version of jQuery downloaded to the same location as this HTML file. 
This HTML page will form the basis of most of this article, so remember to keep it after you have completed this recipe. How to do it… Learn how to implement basic form validation with jQuery by performing the following steps: Add the following HTML code to index.html. Be sure to change the source location of the JavaScript included for the jQuery library, pointing it to where the latest version of jQuery is downloaded on your computer. <!DOCTYPE html> <html > <head>    <title>Chapter 5 :: Recipe 1</title>    <link type="text/css" media="screen" rel="stylesheet" href="styles.css" />    <script src = "jquery.min.js"></script>    <script src = "validation.js"></script> </head> <body>    <form id="webForm" method="POST">       <div class="header">          <h1>Register</h1>       </div>       <div class="input-frame">          <label for="firstName">First Name:</label>          <input name="firstName" id="firstName" type="text"             class="required" />       </div>       <div class="input-frame">          <label for="lastName">Last Name:</label>          <input name="lastName" id="lastName" type="text"             class="required" />       </div>       <div class="input-frame">          <label for="email">Email:</label>          <input name="email" id="email" type="text"             class="required email" />       </div>       <div class="input-frame">          <label for="number">Telephone:</label>          <input name="number" id="number" type="text"             class="number" />       </div>       <div class="input-frame">          <label for="dob">Date of Birth:</label>          <input name="dob" id="dob" type="text"             class="required date"             placeholder="DD/MM/YYYY"/>       </div>       <div class="input-frame">          <label for="creditCard">Credit Card #:</label>          <input name="creditCard" id="creditCard"             type="text" class="required credit-card" />       </div>       <div class="input-frame">          <label 
for="password">Password:</label>          <input name="password" id="password"             type="password" class="required" />       </div>       <div class="input-frame">          <label for="confirmPassword">Confirm             Password:</label>             <input name="confirmPassword"                id="confirmPassword" type="password"                class="required" />       </div>       <div class="actions">          <button class="submit-btn">Submit</button>       </div>    </form> </body> </html> Create a CSS file named styles.css in the same directory and add the following CSS code to add style to our HTML page and form: @import url(http: //fonts.googleapis.com/css?family=Ubuntu); body {    background-color: #FFF;    font-family: 'Ubuntu', sans-serif; } form {    width: 500px;    padding: 20px;    background-color: #333;    border-radius: 5px;    margin: 10px auto auto auto;    color: #747474;    border: solid 2px #000; } form label {    font-size: 14px;    line-height: 30px;    width: 27%;    display: inline-block;    text-align: right; } .input-frame {    clear: both;    margin-bottom: 25px;    position: relative; } form input {    height: 30px;    width: 330px;    margin-left: 10px;    background-color: #191919;    border: solid 1px #404040;    padding-left: 10px;    color: #DB7400; } form input:hover {    background-color: #262626; } form input:focus {    border-color: #DB7400; } form .header {    margin: -20px -20px 25px -20px;    padding: 10px 10px 10px 20px;    position: relative;    background-color: #DB7400;    border-top-left-radius: 4px;    border-top-right-radius: 4px; } form .header h1 {    line-height: 50px;    margin: 0px;    padding: 0px;    color: #FFF;    font-weight: normal; } .actions {    text-align: right; } .submit-btn {    background-color: #DB7400;    border: solid 1px #000;    border-radius: 5px;    color: #FFF;    padding: 10px 20px 10px 20px;    text-decoration: none;    cursor: pointer; } .error input {    border-color: red; } 
.error-data {    color: red;    font-size: 11px;    position: absolute;    bottom: -15px;    left: 30%; } In addition to the jQuery library, the previous HTML page also uses another JavaScript file. Create a blank JavaScript file in the directory where the index.html file is saved. Save this file as validation.js and add the following JavaScript code: $(function(){    $('.submit-btn').click(function(event){       //Prevent form submission       event.preventDefault();       var inputs = $('input');       var isError = false;       //Remove old errors       $('.input-frame').removeClass('error');       $('.error-data').remove();       for (var i = 0; i < inputs.length; i++) {          var input = inputs[i];          if ($(input).hasClass('required') &&             !validateRequired($(input).val())) {             addErrorData($(input), "This is a required             field");             isError = true;          }         }       if (isError === false) {          //No errors, submit the form          $('#webForm').submit();       }    }); });   function validateRequired(value) {    if (value == "") return false;    return true; }   function addErrorData(element, error) {    element.parent().addClass("error");    element.after("<div class='error-data'>" + error + "</div>"); } Open index.html in a web browser and you should see a form similar to the following screenshot: If you click on the Submit button to submit an empty form, you will be presented with error messages under the required fields. How it works… Now, let us understand the steps performed previously in detail. HTML The HTML creates a web form with various fields that will take a range of data inputs, including text, date of birth, and credit card number. This page forms the basis for most of this article. Each of the input elements has been given different classes depending on what type of validation they require. 
For this recipe, our JavaScript will only look at the required class, which indicates a required field and therefore cannot be blank. Other classes have been added to the input fields, such as date and number, which will be used in the later recipes in this article. CSS Basic CSS has been added to create an attractive web form. The CSS code styles the input fields so they blend in with the form itself and adds a hover effect. The Google Web Font Ubuntu has also been used to improve the look of the form. jQuery The first part of the jQuery code is wrapped within $(function(){});, which will ensure the code is executed on page load. Inside this wrapper, we attach a click event handler to the form submit button, shown as follows: $(function(){     $('.submit-btn').click(function(event){         //Prevent form submission         event.preventDefault();             }); }); As we want to handle the form submission based on whether valid data has been provided, we use event.preventDefault(); to initially stop the form from submitting, allowing us to perform the validation first, shown as follows: var inputs = $('input'); var isError = false; After the preventDefault code, an inputs variable is declared to hold all the input elements within the page, using $('input') to select them. Additionally, we create an isError variable, setting it to false. This will be a flag to determine if our validation code has discovered an error within the form. These variable declarations are shown previously. Using the length of the inputs variable, we are able to loop through all of the inputs on the page. We create an input variable for each input that is iterated over, which can be used to perform actions on the current input element using jQuery. 
This is done with the following code: for (var i = 0; i < inputs.length; i++) { var input = inputs[i]; } After the input variable has been declared and assigned the current input, any previous error classes or data is removed from the element using the following code: $(input).parent().removeClass('error'); $(input).next('.error-data').remove(); The first line removes the error class from the input's parent (.input-frame), which adds the red border to the input element. The second line removes the error information that is displayed under the input if the validation check has determined that this input has invalid data. Next, jQuery's hasClass() function is used to determine if the current input element has the required class. If the current element does have this class, we need to perform the required validation to make sure this field contains data. We call the validateRequired() function within the if statement and pass through the value of the current input, shown as follows: if ($(input).hasClass('required') && !validateRequired($(input).val())) { addErrorData($(input), "This is a required field");    isError = true; } We call the validateRequired() function prepended with an exclamation mark to check to determine if this function's results are equal to false; therefore, if the current input has the required class and validateRequired() returns false, the value of the current input is invalid. If this is the case, we call the addErrorData() function inside the if statement with the current input and the error message, which will be displayed under the input. We also set the isError variable to true, so that later on in the code, we will know a validation error occurred. The JavaScript's for loop will repeat these steps for each of the selected input elements on the page. After the for loop has completed, we check if the isError flag is still set to false. 
If so, we use jQuery to manually submit the form, shown as follows: if (isError === false) {    //No errors, submit the form    $('#webForm').submit(); } Note that the operator === is used to compare the variable type of isError (that is, Boolean) as well as its value. At the bottom of the JavaScript file, we declare our two functions that have been called earlier in the script. The first function, validateRequired(), simply takes the input value and checks to see if it is blank or not. If the value is blank, the function returns false, meaning validation failed; otherwise, the function returns true. This can be coded as follows: function validateRequired(value) {     if (value == "") return false;     return true; } The second function used is the addErrorData() function, which takes the current input and an error message. It uses jQuery's addClass() function to add the error class to the input's parent, which will display the red border on the input element using CSS. It then uses jQuery's after() function to insert a division element into the DOM, which will display the specified error message under the current input field, shown as follows: function validateRequired(value) {    if (value == "") return false;    return true; } function addErrorData(element, error) {    element.parent().addClass("error");    element.after("<div class='error-data'>" + error + "</div>"); } There's more... This structure allows us to easily add additional validation to our web form. Because the JavaScript is iterating over all of the input fields in the form, we can easily check for additional classes, such as date, number, and credit-card, and call extra functions to provide the alternative validation. The other recipes in this article will look in detail at the additional validation types and add these functions to the current validation.js file. 
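As an illustration of that extensibility (this is a hypothetical addition, not one of the book's recipes), an email check for inputs carrying the email class from the HTML would follow exactly the same shape as validateRequired():

```javascript
// Hypothetical extension following the recipe's pattern: a simple email
// check. The regex is a deliberate simplification, not full RFC 5322.
function validateEmail(value) {
  if (value === "") return true; // blank values are the "required" check's job
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

// Inside the main for loop, next to the existing hasClass('required') block:
// if ($(input).hasClass('email') && !validateEmail($(input).val())) {
//    addErrorData($(input), "Invalid email address");
//    isError = true;
// }

console.log(validateEmail("user@example.com")); // true
console.log(validateEmail("not-an-email"));     // false
```

Each validator only reports on its own concern and returns true for blank input, so the checks compose cleanly with the required-field validation.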
See also Implementing input character restrictions Adding number validation When collecting data from a user, there are many situations when you will want to only allow numbers in a form field. Examples of this could be telephone numbers, PIN codes, or ZIP codes, to name a few. This recipe will show you how to validate the telephone number field within the form we created in the previous recipe. Getting ready Ensure that you have completed the previous recipe and have the same files available. Open validation.js in your text editor or IDE of choice. How to do it… Add number validation to the form you created in the previous recipe by performing the following steps: Update validation.js to be as follows, adding the valdiateNumber() function with an additional hasClass('number') check inside the for loop: $(function(){    $('.submit-btn').click(function(event){       //Prevent form submission       event.preventDefault();       var inputs = $('input');       var isError = false;       //Remove old errors       $('.input-frame').removeClass('error');       $('.error-data').remove();       for (var i = 0; i < inputs.length; i++) {          var input = inputs[i];            if ($(input).hasClass('required') &&             !validateRequired($(input).val())) {                addErrorData($(input), "This is a required                   field");                isError = true;             } /* Code for this recipe */          if ($(input).hasClass('number') &&             !validateNumber($(input).val())) {                addErrorData($(input), "This field can only                   contain numbers");                isError = true;             } /* --- */         }       if (isError === false) {          //No errors, submit the form          $('#webForm').submit();       }    }); });   function validateRequired(value) {    if (value == "") return false;    return true; }   /* Code for this recipe */ function validateNumber(value) {    if (value != "") {       return 
!isNaN(parseInt(value, 10)) && isFinite(value);       //isFinite, in case letter is on the end    }    return true; } /* --- */ function addErrorData(element, error) {    element.parent().addClass("error");    element.after("<div class='error-data'>" + error + "</div>"); } Open index.html in a web browser, input something other than a valid integer into the telephone number field, and click on the Submit button. You will be presented with a form similar to the following screenshot: How it works… First, we add an additional if statement to the main for loop of validation.js to check to see if the current input field has the class number, as follows: if ($(input).hasClass('number') &&    !validateNumber($(input).val())) {    addErrorData($(input), "This field can only contain numbers");    isError = true; } If it does, this input value needs to be validated for a number. To do this, we call the validateNumber function inline within the if statement: function validateNumber(value) {    if (value != "") {       return !isNaN(parseInt(value, 10)) && isFinite(value);       //isFinite, in case letter is on the end    }    return true; } This function takes the value of the current input field as an argument. It first checks to see if the value is blank. If it is, we do not need to perform any validation here because this is handled by the validateRequired() function from the first recipe of this article. If there is a value to validate, a range of actions are performed on the return statement. First, the value is parsed as an integer and passed to the isNaN() function. The JavaScript isNaN() function simply checks to see if the provided value is NaN (Not a Number). In JavaScript, if you try to parse a value as an integer and that value is not actually an integer, you will get the NaN value. The first part of the return statement is to ensure that the provided value is a valid integer. However, this does not prevent the user from inputting invalid characters. 
If the user was to input 12345ABCD, the parseInt function would ignore ABCD and just parse 12345, and the validation would therefore pass. To prevent this situation, we also use the isFinite function, which returns false when provided with 12345ABCD.

Adding credit card number validation

Number validation could be enough validation for a credit card number; however, using regular expressions, it is possible to check for number combinations to match credit card numbers from Visa, MasterCard, American Express, and more.

Getting ready

Make sure that you have validation.js from the previous two recipes in this article open and ready for modification.

How to do it…

Use jQuery to provide form input validation for credit card numbers by performing the following step-by-step instructions:

Update validation.js to add the credit card validation function and the additional class check on the input fields:

$(function(){
   $('.submit-btn').click(function(event){
      //Prevent form submission
      event.preventDefault();
      var inputs = $('input');
      var isError = false;
      for (var i = 0; i < inputs.length; i++) {
// -- JavaScript from previous two recipes hidden
         if ($(input).hasClass('credit-card') &&
            !validateCreditCard($(input).val())) {
            addErrorData($(input), "Invalid credit card number");
            isError = true;
         }
      }
// -- JavaScript from previous two recipes hidden
   });
});

// -- JavaScript from previous two recipes hidden

function validateCreditCard(value) {
   if (value != "") {
      return /^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$/.test(value);
   }
   return true;
}
// -- JavaScript from previous two recipes hidden

Open index.html and input an invalid credit card number.
You will be presented with the following error information in the form:

How it works…

To add credit card validation, as with the previous two recipes, we added an additional check in the main for loop to look for the credit-card class on the input elements, as follows:

if ($(input).hasClass('credit-card') &&
   !validateCreditCard($(input).val())) {
   addErrorData($(input), "Invalid credit card number");
   isError = true;
}

The validateCreditCard function is also added, which uses a regular expression to validate the input value, as follows:

function validateCreditCard(value) {
   if (value != "") {
      return /^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$/.test(value);
   }
   return true;
}

The first part of this function determines if the provided value is blank. If it isn't, the function will perform further validation; otherwise, it will return true. Most credit card numbers start with a prefix, which allows us to add additional validation to the inputted value on top of numeric validation. The regular expression used in this function will allow for Visa, MasterCard, American Express, Diners Club, Discover, and JCB cards.

See also

Adding number validation
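Both validators can also be exercised outside the browser as plain functions. The following standalone sketch (no jQuery or DOM involved) uses the same logic and regular expression as the recipes above, which makes it easy to sanity-check in isolation:

```javascript
// Standalone versions of the two validators from the recipes above.
// Empty values pass, because required-field checking is a separate recipe.
function validateNumber(value) {
  if (value !== "") {
    // isFinite catches inputs such as "12345ABCD" that parseInt alone would accept
    return !isNaN(parseInt(value, 10)) && isFinite(value);
  }
  return true;
}

function validateCreditCard(value) {
  if (value !== "") {
    return /^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$/.test(value);
  }
  return true;
}

console.log(validateNumber("12345"));                // true
console.log(validateNumber("12345ABCD"));            // false
console.log(validateCreditCard("4111111111111111")); // true (a well-known Visa test number)
console.log(validateCreditCard("1234"));             // false
```

Running this with Node.js is a quick way to verify the regular expression before wiring the functions into the jQuery plumbing.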
Packt
18 Feb 2014
6 min read

The BarItem class and the RadOutlookBar control

(For more resources related to this topic, see here.)

The next aspect of the system we want to review is the BarItem class. The design of this class is set up to be used by the RadOutlookBarItem and RadMenuItem classes. The reason for this setup was to match the properties of the Telerik item classes. The following is a screenshot of the class design:

The main properties include Header, Position, and ReferId. The Header property is set up to match the Header property of the RadOutlookBarItem and RadMenuItem classes. The Position property orders the items for proper display, and the ReferId property creates the structure of the view. The ReferId values are set up so that if the ReferId property is zero, the item becomes a header; otherwise, the items become the entries under that main item.

Now that we've covered the new concepts within this article, let's start to work with the navigation containers for Telerik. The next sections of this article will cover the RadOutlookBar and RadMenu controls, using the concepts from this section to load these controls based on the level of user access.

RadOutlookBar

The first control I will review in this article is the RadOutlookBar control. This control is unique to the Telerik toolset; the standard Visual Studio control library does not contain a control like the RadOutlookBar control. The first example will set up the binding of the control's tabs, using both a list of class objects from the object layer and the database to populate the list of objects and authenticate the user. The first step will be to create a new window in the current WPF project; name this new window Chap5OutlookWindow.xaml. Once you have created the new window, you will need to add the RadOutlookBar control and name it RadTestOutlook. The final appearance of the window should match the following screenshot:

Notice that the RadOutlookBar control is already loaded with the options.
This is the fully functioning example, and it gives you an idea of how the window should look once we've finished this section.

RadOutlook with GenericList, DataBinding, and database security

This is the first example of populating the RadOutlookBar with the menu links from the database. The concept we are demonstrating is that the menu links inside the RadOutlookBar control will be populated based on the user that is authenticated into the system. In this example, we will be using database security. This means that the database will store the username, password, and security link information for the RadOutlookBar control. Since we don't have a login form to gather the username and password, we will create the user information within the App.xaml.cs file instead. This will allow us to simulate the login process and authenticate the user in the system. This should not be done for a production application; this setup is for demonstration purposes only. Hard-coding a username and password is always a bad practice in a real-world application. The following screenshot shows how the code should look inside the App.xaml.cs file:

Let's review the preceding code. The first step is to create an instance of the UserEntity class. The next step is to determine the type of security the system will be using to authenticate the user. The app.config file has a setting for the type of security, as shown in the following line of code:

<add key="SecurityType" value="DB"/>

This key value is read by the system to determine which type of security is used in the App.xaml.cs class; it sets an enum property of the UserEntity class called SecurityType. This enumeration is then evaluated to determine the security setup for the system. The next step is to set the UserName and UserChkPwd properties. Once these properties are set, we can call the Authenticate method to verify the user information against the database.
The next step will be to set the CurrentUser property in the application request class. This object is passed to each window to persist base application information. The final step will be to create an instance of the OutlookBarWindow class to open the window. Notice the code for the instance; we are passing the AppRequest object to the window using the instance method of the window. The following screenshot shows the instance code:

Now that the window code is set up, let's review the LoadRadOutlook method from the BaseWindow class. This method requires two values: the OutlookBar object to be loaded, and a generic list of BarItem objects. The BarItem class takes the information from the database, and the BarItems are used to create the RadOutlookBarItem objects that load the RadOutlookBar control. The LoadRadOutlook method also creates a RadTreeView control of the options within each RadOutlookBarItem object. The database version then handles gathering the BarItem objects and loading the RadOutlookBar control using a method called FetchMenuOptions. This method takes BarItems from the database query and creates a RadOutlookBar object with all the menu options for the current user: it gathers the menu information based on the current username and generates a generic list of BarItem objects. This list is used by the BaseWindow class to load the RadOutlookBar control.

RadOutlookBar using generic list binding with XML security

The next method for populating the RadOutlookBar control on the window will be to use a serialized XML file to populate the generic list of objects that loads the RadOutlookBar control. The XML file holds the group information, and the BarItem objects are pulled based on the user's group information. The example we will be using has the group information hard-coded into the group list, but this is just an example. You should not hard-code the groups in a real-world application.
Here is the method for gathering the menu items from the XML file:

Notice that we first load the XDocument object and then deserialize it to gather the list of BarItem objects. Once the full list is loaded, we use a Linq query to gather the items based on the user group of the current user. Now that we have that new list, we can populate the UserMenuItems property for use in the application. The application then calls the LoadOutlookBar method to generate the RadOutlookBar control again.

Summary

Thus, in this article, we learned what a RadOutlookBar control is and how it can be used with GenericList, DataBinding, database security, and generic list binding with XML security. We also took a look at the BarItem class and how its design is set up to be used by the RadOutlookBarItem and RadMenuItem classes.

Resources for Article: Further resources on this subject: WPF 4.5 Application and Windows [Article] Windows Presentation Foundation Project - Basics of Working [Article] Introduction to Web Experience Factory [Article]
Packt
20 Jan 2014
3 min read

Using Phalcon Models, Views, and Controllers

(For more resources related to this topic, see here.) Creating CRUD scaffolding CRUD stands for create, read, update, and delete, which are the four basic functions our application should do with our blog post records. Phalcon web tools will also help us to get these built. Click on the Scaffold tab on the web tools page and you will see a page as shown in the following screenshot: Select posts from the Table name list and volt from the Template engine list, and check Force this time, because we are going to force our new files to overwrite the old model and controller files that we just generated. Click on the Generate button and some magic should happen. Browse to http://localhost/phalconBlog/posts and you will see a page like the following screenshot: We finally have some functionality we can use. We have no posts, but we can create some. Click on the Create posts link and you will see a page similar to the one we were just at. The form will look nearly the same, but it will have a Create posts heading. Fill out the Title, Body, and Excerpt fields and click on the Save button. The form will post, and you will get a message stating that the post was created successfully. This will take you back to the post's index page. Now you should be able to search for and find the post you just created. If you forgot what you posted, you can click on Search without entering anything in the fields, and you should see a page like the following screenshot: This is not a very pretty or user-friendly blog application. But it got us started, and that's all we need. The next time we start a Phalcon project, it should only take a few minutes to go through these steps. Now we will look over our generated code, and as we do, modify it to make it more blog-like. Summary In this article, we worked on the model, view, and controller for the posts in our blog. To do this, we used Phalcon web tools to generate our CRUD scaffolding for us. 
Then, we modified this generated code so it would do what we need it to do. We can now add posts. We also learned about the Volt template engine. Resources for Article: Further resources on this subject: Using An Object Oriented Approach for Implementing PHP Classes to Interact with Oracle [Article] FuelPHP [Article] Installing PHP-Nuke [Article]
Packt
15 Jan 2014
7 min read

Your First Application

(For more resources related to this topic, see here.) Sketching out the application We are going to build a browsable database of cat profiles. Visitors will be able to create pages for their cats and fill in basic information such as the name, date of birth, and breed for each cat. This application will support the default Create-Retrieve-Update-Delete operations (CRUD). We will also create an overview page with the option to filter cats by breed. All of the security, authentication, and permission features are intentionally left out since they will be covered later. Entities, relationships, and attributes Firstly, we need to define the entities of our application. In broad terms, an entity is a thing (person, place, or object) about which the application should store data. From the requirements, we can extract the following entities and attributes: Cats, which have a numeric identifier, a name, a date of birth, and a breed Breeds, which only have an identifier and a name This information will help us when defining the database schema that will store the entities, relationships, and attributes, as well as the models, which are the PHP classes that represent the objects in our database. The map of our application We now need to think about the URL structure of our application. Having clean and expressive URLs has many benefits. On a usability level, the application will be easier to navigate and look less intimidating to the user. For frequent users, individual pages will be easier to remember or bookmark and, if they contain relevant keywords, they will often rank higher in search engine results. 
To fulfill the initial set of requirements, we are going to need the following routes in our application:

Method   Route                Description
GET      /                    Index
GET      /cats                Overview page
GET      /cats/breeds/:name   Overview page for specific breed
GET      /cats/:id            Individual cat page
GET      /cats/create         Form to create a new cat page
POST     /cats                Handle creation of new cat page
GET      /cats/:id/edit       Form to edit existing cat page
PUT      /cats/:id            Handle updates to cat page
GET      /cats/:id/delete     Form to confirm deletion of page
DELETE   /cats/:id            Handle deletion of cat page

We will shortly learn how Laravel helps us to turn this routing sketch into actual code. If you have written PHP applications without a framework, you can briefly reflect on how you would have implemented such a routing structure. To add some perspective, this is what the second to last URL could have looked like with a traditional PHP script (without URL rewriting): /index.php?p=cats&id=1&_action=delete&confirm=true. The preceding table can be prepared using a pen and paper, in a spreadsheet editor, or even in your favorite code editor using ASCII characters. In the initial development phases, this table of routes is an important prototyping tool that forces you to think about URLs first and helps you define and refine the structure of your application iteratively. If you have worked with REST APIs, this kind of routing structure will look familiar to you. In RESTful terms, we have a cats resource that responds to the different HTTP verbs and provides an additional set of routes to display the necessary forms. If, on the other hand, you have not worked with RESTful sites, the use of the PUT and DELETE HTTP methods might be new to you. Even though web browsers do not support these methods for standard HTTP requests, Laravel uses a technique that other frameworks such as Rails use, and emulates those methods by adding a _method input field to the forms.
This way, they can be sent over a standard POST request and are then delegated to the correct route or controller method in the application. Note also that none of the form submission endpoints are handled with a GET method. This is primarily because they have side effects; a user could trigger the same action multiple times accidentally when using the browser history. Therefore, when they are called, these routes never display anything to the users. Instead, they redirect them after completing the action (for instance, DELETE /cats/:id will redirect the user to GET /cats).

Starting the application

Now that we have the blueprints for the application, let's roll up our sleeves and start writing some code. Start by opening a new terminal window and create a new project with Composer, as follows:

$ composer create-project laravel/laravel cats --prefer-dist
$ cd cats

Once Composer finishes downloading Laravel and resolving its dependencies, you will have a directory structure identical to the one presented previously.

Using the built-in development server

To start the application, unless you are running an older version of PHP (5.3.*), you will not need a local server such as WAMP on Windows or MAMP on Mac OS, since Laravel uses the built-in development server that is bundled with PHP 5.4 or later. To start the development server, we use the following artisan command:

$ php artisan serve

Artisan is the command-line utility that ships with Laravel, and its features will be covered in more detail later. Next, open your web browser and visit http://localhost:8000; you will be greeted with Laravel's welcome message. If you get an error telling you that the php command does not exist or cannot be found, make sure that it is present in your PATH variable. If the command fails because you are running PHP 5.3 and you have no upgrade possibility, simply use your local development server (MAMP/WAMP) and set Apache's DocumentRoot to point to cats-app/public/.
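Before moving on to routes, it's worth sketching the _method emulation described above. Laravel implements this spoofing in PHP inside the framework, but the underlying idea is framework-agnostic; here is a minimal sketch of it in JavaScript, where the effectiveMethod function and the request shape are illustrative assumptions rather than Laravel internals:

```javascript
// Sketch of HTTP method spoofing: browser forms can only send GET and POST,
// so the real verb travels in a hidden _method field and the router
// substitutes it before matching a route.
function effectiveMethod(request) {
  // request is assumed to look like { method: "POST", body: { _method: "DELETE" } }
  if (request.method === "POST" && request.body && request.body._method) {
    return String(request.body._method).toUpperCase();
  }
  return request.method;
}

// A form posting to /cats/1 with <input type="hidden" name="_method" value="DELETE">
console.log(effectiveMethod({ method: "POST", body: { _method: "DELETE" } })); // "DELETE"
console.log(effectiveMethod({ method: "GET", body: {} }));                     // "GET"
```

The same pattern appears in other ecosystems too; Express.js, for instance, offers it through its method-override middleware.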
Writing the first routes Let's start by writing the first two routes of our application inside app/routes.php. This file already contains some comments as well as a sample route. You can keep the comments but you must remove the existing route before adding the following routes: Route::get('/', function(){   return "All cats"; }); Route::get('cats/{id}', function($id){   return "Cat #$id"; }); The first parameter of the get method is the URI pattern. When a pattern is matched, the closure function in the second parameter is executed with any parameters that were extracted from the pattern. Note that the slash prefix in the pattern is optional; however, you should not have any trailing slashes. You can make sure that your routes work by opening your web browser and visiting http://localhost:8000/cats/123. If you are not using PHP's built-in development server and are getting a 404 error at this stage, make sure that Apache's mod_rewrite configuration is enabled and works correctly. Restricting the route parameters In the pattern of the second route, {id} currently matches any string or number. To restrict it so that it only matches numbers, we can chain a where method to our route as follows: Route::get('cats/{id}', function($id){ return "Cat #$id"; })->where('id', '[0-9]+'); The where method takes two arguments: the first one is the name of the parameter and the second one is the regular expression pattern that it needs to match. Summary We have covered a lot in this article. We learned how to define routes, prepare the models of the application, and interact with them. Moreover, we have had a glimpse at the many powerful features of Eloquent, Blade, as well as the other convenient helpers in Laravel to create forms and input fields: all of this in under 200 lines of code! 
Resources for Article: Further resources on this subject: Laravel 4 - Creating a Simple CRUD Application in Hours [Article] Creating and Using Composer Packages [Article] Building a To-do List with Ajax [Article]
Packt
07 Jan 2014
15 min read

Marionette View Types and Their Use

(For more resources related to this topic, see here.)

Marionette.View and Marionette.ItemView

The Marionette.View extends Backbone.View, and it's important to remember this, because all the knowledge that we already have on creating views will be useful while working with this new set of Marionette views. Each of them aims to provide a specific piece of out-of-the-box functionality, so that you spend less time on the glue code needed to make things work, and more time on the specific logic of your application. We will start by describing the Marionette.View part of Marionette, as all of the other views extend from it and it provides some very useful functionality. It's important to note, however, that this view is not intended to be used directly; as the base view from which all the other views inherit, it is an excellent place to contain some of the glue code that we just talked about. A good example of that functionality is the close method, which is responsible for removing .el from the DOM. This method also takes care of unbinding all your events, thus avoiding the problem called zombie views. This is an issue you can run into in a regular Backbone view if you don't clean up carefully: the events of a previously closed view remain bound to its HTML elements, and when the view is rerendered, those elements are present in the DOM again and new event listeners are attached on top of the old ones, so the handlers fire more than once. From the documentation of the Marionette.View, we know exactly what the close method does.
It calls an onBeforeClose event on the view, if one is provided
It calls an onClose event on the view, if one is provided
It unbinds all custom view events
It unbinds all DOM events
It removes this.el from the DOM
It unbinds all listenTo events

The link to the official documentation of the Marionette.View object is https://github.com/marionettejs/backbone.marionette/blob/master/docs/marionette.view.md. It's important to mention that the third point, unbind all custom view events, will unbind events created using the modelEvents hash, those created on the events hash, and events created via this.listenTo. As the close method is already provided and implemented, you don't need to perform the unbind and remove tasks listed previously. While most of the time this would be enough, at times one of your views will need to perform extra work in order to be closed properly; in this case, two events are fired around the close method. The onBeforeClose event, as the name indicates, is fired just before the close method. It calls a function of the same name, onBeforeClose, where we can add the code that needs to be executed at this point.

onBeforeClose : function () {
   // code to be run before closing the view
}

The second event is onClose, which is fired after the close method, so that the .el of the view won't be present anymore and all the unbind tasks will have been performed.

onClose : function () {
   // code to be run after closing the view
}

One of the core ideas behind Marionette is to reduce the boilerplate code that you have to write when building apps with Backbone. A perfect example of this is the render method that you have to implement in every Backbone view, where the code is pretty much the same every time: load the template with the underscore _.template function and then pass the model, converted to JSON, to the template.
The following is an example of the repetitive code needed to render a view in Backbone:

render : function () {
   var template = $( '#mytemplate' ).html();
   var templateFunction = _.template( template );
   var modelToJSON = this.model.toJSON();
   var result = templateFunction( modelToJSON );
   var myElement = $( '#MyElement' );
   myElement.html( result );
}

With Marionette, just like with the close method, defining a render function is no longer required; the preceding code will be called for you behind the scenes. In order to render a view, we just need to declare it with a template property set.

var SampleView = Backbone.Marionette.ItemView.extend({
   template : '#sample-template'
});

Next, we just create a Backbone model, and we pass it to the ItemView constructor.

var SampleModel = Backbone.Model.extend({
   defaults : {
      value1 : "A random Value",
      value2 : "Another Random Value"
   }
});

var sampleModel = new SampleModel();
var sampleView = new SampleView({ model: sampleModel });

And then the only thing left is to call the render function.

sampleView.render();

If you want to see it running, please go through this JSFiddle that illustrates the previous code: http://jsfiddle.net/rayweb_on/VS9hA/

One thing to note is that we just needed one line to specify the template, and Marionette did the rest by rendering our view with the specified template. Notice that in this example, we used the ItemView constructor; we should not use Marionette.View directly, as it does not have much functionality of its own. It just serves as the base for other views, so some of the following examples of the functionality provided by Marionette.View will be demonstrated using ItemView, which inherits all of this functionality through extension. As we saw in the previous example, ItemView works perfectly for rendering a single model using a template, but what about rendering a collection of models? If you just need to render, for example, a list of books or categories, you can still use ItemView.
To accomplish this, the template that you assign to the ItemView must know how to create the DOM to properly display that list of items. Let's render a list of books. The Backbone model will have two properties: the book name and the book ID. We just want to create a list of links using the book name as the value to be displayed; the ID of the book will be used to create a link to see the specific book. First, let's create the book Backbone model for this example and its collection:

var BookModel = Backbone.Model.extend({
   defaults : {
      id : "1",
      name : "First"
   }
});

var BookCollection = Backbone.Collection.extend({
   model : BookModel
});

Now let's instantiate the collection and add three models to it:

var bookModel = new BookModel();
var bookModel2 = new BookModel({id:"2",name:"second"});
var bookModel3 = new BookModel({id:"3",name:"third"});

var bookCollection = new BookCollection();
bookCollection.add(bookModel);
bookCollection.add(bookModel2);
bookCollection.add(bookModel3);

In our HTML, let's create the template to be used in this view; the template should look like the following:

<script id="books-template" type="text/html">
   <ul>
      <% _.each(items, function(item){ %>
         <li><a href="book/<%= item.id %>"><%= item.name %></a></li>
      <% }); %>
   </ul>
</script>

Now we can render the book list using the following code snippet:

var BookListView = Marionette.ItemView.extend({
   template: "#books-template"
});

var view = new BookListView({ collection: bookCollection });
view.render();

If you want to see it in action, go to the working code in JSFiddle at http://jsfiddle.net/rayweb_on/8QAgQ/. The previous code produces an unordered list of books with links to each specific book. Again, we gained the benefit of writing very little code, as we didn't need to specify the render function. This could be misleading, because the ItemView is perfectly capable of rendering a model or a collection.
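Under the hood, the render step Marionette performs for us is just template + serialized data → HTML string. The following toy sketch makes that pipeline concrete; the renderTemplate interpolator here is a deliberately simplified stand-in for _.template (which is what Marionette actually uses) and supports only bare interpolation tags:

```javascript
// Minimal stand-in for what ItemView's default render does: take a template,
// take the model serialized to plain data, interpolate, and produce HTML.
// Only simple <%= key %> tags are supported in this toy version.
function renderTemplate(template, data) {
  return template.replace(/<%=\s*(\w+)\s*%>/g, function (match, key) {
    return data[key] != null ? String(data[key]) : "";
  });
}

var template = "<li><a href='book/<%= id %>'><%= name %></a></li>";
var modelJSON = { id: 2, name: "second" };

console.log(renderTemplate(template, modelJSON));
// <li><a href='book/2'>second</a></li>
```

The real implementation compiles the template once and supports arbitrary JavaScript inside the tags, but the flow — serialize the model, interpolate the template, insert the result into the DOM — is the same one the manual render function spelled out earlier.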
Whether to use CollectionView or ItemView will depend on what we are trying to accomplish. If we need a set of individual views, each with its own functionality, CollectionView is the right choice, as we will see when we get to the point of reviewing it. But if we just need to render the values of a collection, ItemView would be the perfect choice.

Handling events in the views

To keep track of model events or collection events, we must write the following code snippet in a regular Backbone view:

this.listenTo(this.model, "change:title", this.titleChanged);
this.listenTo(this.collection, "add", this.collectionChanged);

To handle these events, we use the following handler functions:

titleChanged: function(model, value){ alert("changed"); },
collectionChanged: function(model, value){ alert("added"); },

This still works fine in Marionette, but we can accomplish the same thing by declaring these events using the following configuration hash:

modelEvents: {
   "change:title": "titleChanged"
},
collectionEvents: {
   "add": "collectionChanged"
},

This will give us exactly the same result, but the configuration hash is very convenient, as we can keep adding events to our model or collection, and the code is cleaner and very easy to follow. The modelEvents and collectionEvents hashes are not the only configuration hashes available in each of the Marionette views; the UI configuration hash is also available. It may be the case that one of the DOM elements in your view will be read many times, and doing this through jQuery each time may not be optimal in terms of performance. Also, we would have the jQuery reference in several places, repeating ourselves and making our code less DRY. Inside a Backbone view, we can define a set of events that will be fired once an action is taken in the DOM; for instance, we pass the function that we want to run at the click of a button.
events : {
   "click #button2" : "updateValue"
},

This will invoke the updateValue function once we click on button2. This works fine, but what about calling a method that is not inside the view? To accomplish this, Marionette provides the triggers functionality, which fires events that can be listened to outside of your view. To declare a trigger, we can use the same syntax used in the events object, as follows:

triggers : { "click #button1": "trigger:alert" },

And then, we can listen to that event somewhere else using the following code:

sampleView.on("trigger:alert", function(args){
   alert(args.model.get("value2"));
});

In the previous code, we used the model to alert and display the value of the property value2. The args parameter received by the function will contain objects that you can use:

The view that fired the trigger
The Backbone model or collection of that view

UI and templates

While working with a view, you will need a reference to a particular HTML element through jQuery in more than one place in your view. This means you will make a reference to a button during initialization and in a few other methods of the view. To avoid duplicating the jQuery selector in each of these methods, you can map the UI element in a hash so that the selector is defined in a single place; if you need to change it, the change is done in one spot. To create this mapping of UI elements, we need to add the following declaration:

ui: {
   quantity: "#quantity",
   saveButton : "#Save"
},

And to make use of these mapped UI elements, we just need to refer to them inside any function by the name given in the configuration.

validateQuantity: function() {
   if (this.ui.quantity.val() > 0) {
      this.ui.saveButton.addClass('active');
   }
}

There will be times when you need to pass a different template to your view. To do this in Marionette, we remove the template declaration and instead add a function called getTemplate.
The following code snippet illustrates the use of this function:

getTemplate: function(){
   if (this.model.get("foo")){
      return "#sample-template";
   } else {
      return "#a-different-template";
   }
},

In this case, we check for the existence of the property foo; if it's not present, we use a different template, and that will be it. You don't need to specify the render function, because it will work the same way as declaring a template variable, as seen in one of the previous examples. If you want to learn more about all the concepts that we have discussed so far, please refer to the jsFiddle link: http://jsfiddle.net/rayweb_on/NaHQS/.

If you find yourself needing to make calculations involving a complicated process while rendering a value, you can make use of template helpers, which are functions contained in an object called templateHelpers. Let's look at an example that illustrates their use. Suppose we need to show the price of a book, but we are offering a discount that we need to calculate; we can use the following code:

var PriceView = Backbone.Marionette.ItemView.extend({
   template: "#price-template",
   templateHelpers: {
      calculatePrice: function(){
         // logic to calculate the price goes here
         return price;
      }
   }
});

As you can see in the previous code, we declared an object literal that contains functions that can be called from the templates.

<script id="my-template" type="text/html">
   Take this book with you for just : <%= calculatePrice() %>
</script>
This is incredibly easy. First, you need to define how each of your items should be displayed; this means how each item will be transformed into a view. For our categories example, we want each item to be a list <li> element and part of our collection; the <ul> list will contain each category view. We first declare ItemView as follows: var CategoryView = Backbone.Marionette.ItemView.extend({ tagName : 'li', template: "#categoryTemplate", }); Then we declare CollectionView, which specifies the item view to use. var CategoriesView = Backbone.Marionette.CollectionView.extend({ tagName : 'ul', className : 'unstyled', itemView: CategoryView }); A good thing to notice is that even when we are using Marionette views, we are still able to use the standard properties that Backbone views offer, such as tagName and className. Finally, we create a collection and instantiate CollectionView, passing the collection as a parameter. var categoriesView = new CategoriesView({collection: categories}); categoriesView.render(); And that's it. Simple, huh? The advantage of using this view is that it will render a view for each item, and it can have a lot of functionality; we can control all those views in the CollectionView that serves as a container. You can see it in action at http://jsfiddle.net/rayweb_on/7usdJ/. Marionette.CompositeView Marionette.CompositeView offers the possibility of rendering not only a single model or a collection of models, but both a model and a collection together. That's why this view fits perfectly in our BookStore website. We will be adding single items to the shopping cart, books in this case, and we will be storing these books in a collection. But we need to calculate the subtotal of the order, show the calculated tax, and an order total; all of these properties will be part of a totals model that we will display along with the ordered books. But there is a problem. 
What should we display in the order region when there are no items added? Well, in the CompositeView and the CollectionView, we can set an emptyView property, which will be a view to show in case there are no models in the collection. Once we add a model, we can then render the item and the totals model. Perhaps at this point, you may think that you have lost control over your render functionality, and there will be cases where you need to modify your HTML. Well, in that scenario, you should use the onRender() function, a very helpful method that allows you to manipulate the DOM just after your render method is called. Finally, we would like to set a template with some headers. These headers are not part of an ItemView, so how can we display them? Let's have a look at part of the code snippet that explains how each piece solves our needs. var OrderListView = Backbone.Marionette.CompositeView.extend({ tagName: "table", template: "#orderGrid", itemView: CartApp.OrderItemView, emptyView: CartApp.EmptyOrderView, className: "table table-hover table-condensed", appendHtml: function (collectionView, itemView) { collectionView.$("tbody").append(itemView.el); }, So far we defined the view and set the template; the itemView and emptyView properties will be used to render our view. onBeforeRender is a function that, as the name indicates, will be called before the render method; this function will allow us to calculate the totals that will be displayed in the totals model. onBeforeRender: function () { var subtotal = this.collection.getTotal(); var tax = subtotal * .08; var total = subtotal + tax; this.model.set({ subtotal: subtotal }); this.model.set({ tax: tax }); this.model.set({ total: total }); }, The onRender method is used here to check whether there are models in the collection (that is, whether the user has added a book to the shopping cart); if there are none, we should not display the header and footer regions of the view. 
onRender: function () { if (this.collection.length > 0) { this.$('thead').removeClass('hide'); this.$('tfoot').removeClass('hide'); } }, As we can see, Marionette does a great job offering functions that can remove a lot of boilerplate code and also give us full control over what is being rendered. Summary This article covered the introduction and usage of view types that Marionette has. Now you must be quite familiar with the Marionette.View and Marionette.ItemView view types of Marionette. Resources for Article: Further resources on this subject: Mobile Devices [Article] Puppet: Integrating External Tools [Article] Understanding Backbone [Article]
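The totals arithmetic inside onBeforeRender is easy to get subtly wrong, so it can help to factor it into a plain function that is testable on its own. A minimal sketch (the function name is ours, not part of Marionette; the 8 percent tax rate is the one used in the example):

```javascript
// Computes the order totals used by onBeforeRender.
// The subtotal comes from the collection; tax is a flat 8% as in the example.
function computeTotals(subtotal) {
  var tax = subtotal * 0.08;
  return {
    subtotal: subtotal,
    tax: tax,
    total: subtotal + tax
  };
}

var totals = computeTotals(100);
```

With this in place, onBeforeRender reduces to something like `this.model.set(computeTotals(this.collection.getTotal()));`, and the math can be verified without instantiating a view.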

Background Animation

Packt
19 Dec 2013
4 min read
(For more resources related to this topic, see here.) Background-color animation Animating the background color of an element is a great way to draw our user's eyes to the object we want them to see. Another use for animating the background color of an element is to show that something has happened to the element. It's typically used in this way if the state of the object changes (added, moved, deleted, and so on), or if it requires attention to fix a problem. Due to the lack of support in jQuery 2.0 for animating background-color, we'll be using jQuery UI to give us the functionality we need to create this effect. Introducing the animate method The animate() method is one of the most useful methods jQuery has to offer in its bag of animation tricks. With it, we're able to do things like move an element across the page, or animate the properties of colors, backgrounds, text, fonts, the box model, position, display, lists, tables, generated content, and so on. Time for action – animating the body background-color Following the steps below, we're going to start by creating an example that changes the body background color. Start by creating a new file (using our template) called background-color.html and save it in our jquery-animation folder. Next, we'll need to include the jQuery UI library by adding this line directly under our jQuery library: <script src = "js/jquery-ui.min.js"></script> A custom or stable build of jQuery UI can be downloaded from http://jqueryui.com, or you can link to the library using one of the Content Delivery Networks (CDNs) below. For fastest access to the library, go to http://jqueryui.com, scroll to the very bottom and look for the Quick Access section. Using the jQuery UI library JS file there will work just fine for our needs for the examples in this article. 
Media Temple: http://code.jquery.com Google: http://developers.google.com/speed/libraries/devguide#jquery-ui Microsoft: http://asp.net/ajaxlibrary/cdn.ashx#jQuery_Releases_on_the_CDN_0 CDNJS: http://cdnjs.com/libraries/jquery Then, we'll add the following jQuery code to the anonymous function: var speed = 1500; $( "body").animate({ backgroundColor: "#D68A85" },speed); $( "body").animate({ backgroundColor: "#E7912D" },speed); $( "body").animate({ backgroundColor: "#CECC33" },speed); $( "body").animate({ backgroundColor: "#6FCD94" },speed); $( "body").animate({ backgroundColor: "#3AB6F1" },speed); $( "body").animate({ backgroundColor: "#8684D8" },speed); $( "body").animate({ backgroundColor: "#DD67AE" },speed); What just happened? First we added the jQuery UI library to our page. This was needed because of the lack of support for animating the background color in the current version of jQuery. Next, we added the code that will animate our background. We set the speed variable to 1500 (milliseconds) so that we can control the duration of our animation. Lastly, using the animate() method, we set the background color of the body element and set the duration to the speed variable we defined above. We duplicated the same line several times, changing only the hexadecimal value of the background color. The following screenshot is an illustration of the colors the entire body background color animates through: Chaining together jQuery methods It's important to note that jQuery methods (animate() in this case) can be chained together. 
Our code mentioned previously would look like the following if we chained the animate() methods together: $("body")   .animate({ backgroundColor: "#D68A85"}, speed)  //red   .animate({ backgroundColor: "#E7912D"}, speed)  //orange   .animate({ backgroundColor: "#CECC33"}, speed)  //yellow   .animate({ backgroundColor: "#6FCD94"}, speed)  //green   .animate({ backgroundColor: "#3AB6F1"}, speed)  //blue   .animate({ backgroundColor: "#8684D8"}, speed)  //purple   .animate({ backgroundColor: "#DD67AE"}, speed); //pink Here's another example of chaining methods together: (selector).animate(properties).animate(properties).animate(properties) Have a go hero – extending our script with a loop In this example we used the animate() method and with some help from jQuery UI, we were able to animate the body background color of our page. Have a go at extending the script to use a loop, so that the colors continually animate without stopping once the script gets to the end of the function. Pop quiz – chaining with the animate() method Q1. Which code will properly animate our body background color from red to blue using chaining? $("body")   .animate({ background: "red"}, "fast")   .animate({ background: "blue"}, "fast"); $("body")   .animate({ background-color: "red"}, "slow")   .animate({ background-color: "blue"}, "slow"); $("body")   .animate({ backgroundColor:"red" })   .animate({ backgroundColor:"blue" }); $("body")   .animate({ backgroundColor,"red" }, "slow")   .animate({ backgroundColor,"blue" }, "slow");
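Why chaining works at all comes down to each method returning the object it was called on. Here is a minimal stand-in (our own toy object, not jQuery itself) that records queued steps so the mechanics are easy to see:

```javascript
// Minimal chainable stub illustrating why $("body").animate().animate() works:
// each call returns `this`, so calls can be strung together.
function FakeElement() {
  this.queue = [];
}
FakeElement.prototype.animate = function (props, speed) {
  this.queue.push({ props: props, speed: speed }); // record the step
  return this; // returning `this` is what enables chaining
};

var body = new FakeElement();
body
  .animate({ backgroundColor: "#D68A85" }, 1500)
  .animate({ backgroundColor: "#3AB6F1" }, 1500);
```

Real jQuery queues the animations and runs them one after another; the loop version of the "have a go hero" exercise could be built by re-queuing the first color from the `complete` callback of the last animate() call.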

Reporting

Packt
19 Dec 2013
4 min read
(For more resources related to this topic, see here.) Creating a pie chart First, we made the component test (CT) for display purposes, but now let's create the CT to make it run. We will use the Direct function, so let's prepare that as well. In reality we've done this already. Duplicate a different app.html and change the JavaScript file like we have done before. Please see the source file for the code: 03_making_a_pie_chart/ct/dashboard/pie_app.html. Implementing the Direct function Next, prepare the Direct function to read the data. First comes the config.php file, which defines the API. Let's gather them together and implement the four graphs (source file: 04_implement_direct_function/php/config.php). .... 'MyAppDashBoard'=>array( 'methods'=>array( 'getPieData'=>array( 'len'=>0 ), 'getBarData'=>array( 'len'=>0 ), 'getLineData'=>array( 'len'=>0 ), 'getRadarData'=>array( 'len'=>0 ) ) .... Next, let's create the following methods to acquire data for the various charts: getPieData getBarData getLineData getRadarData First, implement the getPieData method, the Direct method that returns the data for the pie chart. Please see the actual content for the source code (source file: 04_implement_direct_function/php/classes/MyAppDashBoard.php). This is acquiring valid quotation and bill data items. For the data to be sent back to the client, set the array in items and set up the various name and data values in a keyed array. You will combine these with the definitions in the next model. Preparing the store for the pie chart Charts need a store, so let's define the store and model (source file: 05_prepare_the_store_for_the_pie_chart/app/model/Pie.js). We'll create the MyApp.model.Pie class that has the name and data fields. Connect this with the data you set as the return value of the Direct function. 
If you increase the number of fields inside the model you just defined, make sure to amend the returned field values; otherwise they won't be applied to the chart, so be careful. We'll use the model we made in the previous step and implement the store (source file: 05_prepare_the_store_for_the_pie_chart/app/model/Pie.js). Ext.define('MyApp.store.Pie', { extend: 'Ext.data.Store', storeId: 'DashboardPie', model: 'MyApp.model.Pie', proxy: { type: 'direct', directFn: 'MyAppDashboard.getPieData', reader: { type: 'json', root: 'items' } } }) Then, define the store using the model we made and set up the Direct function we made earlier in the proxy. Creating the View We have now prepared the presentation data, so let's quickly create the view to display it (source file: 06_making_the_view/app/view/dashboard/Pie.js). Ext.define('MyApp.view.dashboard.Pie', { extend: 'Ext.panel.Panel', alias : 'widget.myapp-dashboard-pie', title: 'Pie Chart', layout: 'fit', requires: [ 'Ext.chart.Chart', 'MyApp.store.Pie' ], initComponent: function() { var me = this, store; store = Ext.create('MyApp.store.Pie'); Ext.apply(me, { items: [{ xtype: 'chart', store: store, series: [{ type: 'pie', field: 'data', showInLegend: true, label: { field: 'name', display: 'rotate', contrast: true, font: '18px Arial' } }] }] }); me.callParent(arguments); } }); Implementing the controller With the previous code, data is not yet being read by the store and nothing is being displayed. 
In the same way that reading was performed with onShow, let's implement the controller (source file: 06_making_the_view/app/controller/DashBoard.js): Ext.define('MyApp.controller.dashboard.DashBoard', { extend: 'MyApp.controller.Abstract', screenName: 'dashboard', init: function() { var me = this; me.control({ 'myapp-dashboard': { 'myapp-show': me.onShow, 'myapp-hide': me.onHide } }); }, onShow: function(p) { p.down('myapp-dashboard-pie chart').store.load(); }, onHide: function() { } }); For the charts we create from now on, it would be good to add the reading process to onShow as we create them. Let's take a look at our pie chart, which appears as follows: Summary You must agree this is starting to look like an application! The dashboard is the first screen you see right after logging in. Charts are extremely effective for visually checking a large and complicated amount of data. If you keep adding panels as and when needed, you'll increase its practicality. This sample will become a customizable base for you to use in future projects. Resources for Article: Further resources on this subject: So, what is Ext JS? [Article] Buttons, Menus, and Toolbars in Ext JS [Article] Displaying Data with Grids in Ext JS [Article]
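As a side note, the reader configured with root: 'items' in the proxy above does nothing more than pluck the row array out of the Direct response before mapping it onto the model. A plain JavaScript sketch of that step (a hypothetical helper and sample payload, not Ext JS code):

```javascript
// Sketch of what a JSON reader configured with root: 'items' does:
// pull the row array out of the server response before model mapping.
function readItems(response) {
  return (response && response.items) || [];
}

// Sample payload shaped like the getPieData Direct response described above.
var response = {
  items: [
    { name: 'Quotations', data: 30 },
    { name: 'Bills', data: 70 }
  ]
};
var rows = readItems(response);
```

Keeping the server response in this `{ items: [...] }` shape is what lets the store, model, and chart line up without any extra glue code.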

The Magic

Packt
16 Dec 2013
7 min read
(For more resources related to this topic, see here.) Application flow In the following diagram, from the Angular manual, you find a comprehensive schematic depiction of the program flow inside Angular: After the browser loads the HTML and parses it into a DOM, the angular.js script file is loaded. This can be added before or at the bottom of the <body> tag, although adding it at the bottom is preferred. Angular waits for the browser to fire the DOMContentLoaded event. This is similar to the way jQuery is bootstrapped, as illustrated in the following code: $(document).ready(function(){ // do jQuery }) In the Angular.js file, towards the end, after the entire code has been parsed by the browser, you will find the following code: jqLite(document).ready(function() { angularInit(document, bootstrap); }); The preceding code calls the function that looks for various flavors of the ng-app directive that you can use to bootstrap your Angular application. ['ng:app', 'ng-app', 'x-ng-app', 'data-ng-app'] Typically, the ng-app directive will be placed on the <html> tag, but in theory, it could be on any tag as long as there is only one of them. The module specification is optional and can tell the $injector service which of the defined modules to load. //index.html <!doctype html> <html lang="en" ng-app="tempApp"> <head> …... // app.js ….. angular.module('tempApp', ['serviceModule']) ….. In turn, the $injector service will create $rootScope, the parent scope of all Angular scopes, as the name suggests. This $rootScope is linked to the DOM itself as a parent to all other Angular scopes. The $injector service will also create the $compile service that will traverse the DOM and look for directives. These directives are searched for within the complete list of declared Angular internal directives and custom directives at hand. This way, it can recognize directives declared as an element, as attributes, inside the class definition, or as a comment. 
Now that Angular is properly bootstrapped, we can actually start executing some application code. This can be done in a variety of ways, shown as follows: In the initial examples, we started creating some Angular code with curly braces using some built-in Angular functions It is also possible to define a controller to control a specific part of the HTML page, as we have shown in the first tempCtrl code snippet We have also shown you how to use Angular's built-in router to manage your application using client-side routing As you can see, Angular extends the capabilities of HTML by providing a clever way to add new directives. The key ingredient here is the $injector service, which provides a way to look up dependencies and create $rootScope. Different ways of injecting Let's look a bit more at how $injector does its work. Throughout all the examples in this book, we have used the array-style notation to define our controllers, modules, services, and directives. // app/controllers.js tempApp.controller('CurrentCtrl', ['$scope', 'reading', function ($scope, reading) { $scope.temp = 17; ... This style is commonly referred to as annotation. Each injected value is annotated in the same order inside an array. You may have looked through the AngularJs website and may have seen different ways of defining functions. // angularJs home page JavaScript Projects example function ListCtrl($scope, Project) { $scope.projects = Project.query(); } So, what is the difference and why are we using another way of defining functions? The first difference you may notice is the definition of all the functions in the global scope. For reference, let's call this the simple injection method. The documentation states that this is a concise notation that really is only suited for demo applications because it is nothing but a potential clash waiting to happen. 
Any other JS library or framework you may have included could potentially have a function with the same name and cause your software to malfunction by executing this function instead of yours. After assigning the Angular module to a variable such as tempApp, we will chain the methods to that variable like we have done in this book so far; you could also just chain them directly as follows: angular.module('tempApp').controller('CurrentCtrl', function($scope) {}) These are essentially the same definitions and don't cause pollution in the global scope. The second difference that you may have noticed is in the way the dependencies are injected in the function. At the time of writing this book, most, if not all of the examples on the AngularJs website use the simple injection method. The dependencies are just parameters in the function definitions. Magically, Angular is able to figure out which parameter is what by the name because the order does not matter. So the preceding example could be rewritten as follows, and it would still function correctly: // reversed angularJs home page JavaScript Projects example function ListCtrl(Project, $scope) { $scope.projects = Project.query(); } This is not a feature of the JavaScript language, so it must have been added by those smart Angular engineers. The magic behind this can be found in the injector. The parameters of the function are scanned, and Angular extracts the names of the parameters to be able to resolve them. The problem with this approach is that when you deploy a wonderful new application to production, it will probably be minified and even obfuscated. This will rename $scope and Project to something like a and b. Even Angular will then be unable to resolve the dependencies. There are two ways to solve this problem in Angular. You have seen one of them already, but we will explain it further. 
You can wrap the function in an array and type the names of the dependencies as strings before the function definition in the order in which you supplied them as arguments to the function. // app/controllers.js tempApp.controller('CurrentCtrl', ['$scope', 'reading', function ($scope, reading) { $scope.temp = 17; ....... The corresponding order of the strings and the function arguments is significant here. Also, the strings should appear before the function arguments. If you prefer the definition without the array notation, there is still some hope. Angular provides a way to inform the injector service of the dependencies you are trying to inject. var CurrentCtrl = function($scope, reading) { $scope.temp = 17; $scope.save = function() { reading.save($scope.temp); } }; CurrentCtrl.$inject = ['$scope', 'reading']; tempApp.controller('CurrentCtrl', CurrentCtrl); As you can see, the definition is a bit more sizable, but essentially the same thing is happening here. The injector is informed by filling the $inject property of the function with an array of the injected dependencies. This is where Angular will then pick them up from. To understand how Angular accomplishes all of this, you should read this excellent blog post by Alex Rothenberg. Here, he explains how all of this works internally. The link to his blog is as follows: http://www.alexrothenberg.com/2013/02/11/the-magic-behind-angularjs-dependency-injection.html. Angular cleverly uses the toString() function of objects to be able to examine in which order the arguments were specified and what their names are. There is actually a third way to specify dependencies called ngmin, which is not native to Angular. It lets you use the simple injection method and parses and translates it to avoid minification problems. https://github.com/btford/ngmin Consider the following code: angular.module('whatever').controller('MyCtrl', function ($scope, $http) { ... 
}); ngmin will turn the preceding code into the following: angular.module('whatever').controller('MyCtrl', ['$scope','$http', function ($scope, $http) { ... }]);   Summary In this article, we started by looking at how AngularJS is bootstrapped. Then, we looked at how the injector works and why minification might ruin your plans there. We also saw that there are ways to avoid these problems by specifying dependencies differently. Resources for Article: Further resources on this subject: The Need for Directives [Article] Understanding Backbone [Article] Quick start – creating your first template [Article]
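The toString()-based parameter scanning described in this article can be sketched as a toy injector. This is an illustration only, not Angular's actual implementation; it ignores comments, default values, and strict-DI mode:

```javascript
// Toy injector: reads parameter names from fn.toString(), or from fn.$inject
// if it is set, then resolves each name from a registry of services.
function inject(fn, registry) {
  var names = fn.$inject;
  if (!names) {
    // Extract "a, b" from "function (a, b)" -- this survives minification of
    // the body but NOT renaming of the parameters themselves.
    var match = fn.toString().match(/\(([^)]*)\)/);
    names = match[1].split(',')
      .map(function (s) { return s.trim(); })
      .filter(function (s) { return s.length > 0; });
  }
  var args = names.map(function (name) { return registry[name]; });
  return fn.apply(null, args);
}

var registry = { $scope: { temp: null }, reading: { save: function () {} } };
var result = inject(function ($scope, reading) {
  $scope.temp = 17;
  return $scope;
}, registry);
```

Renaming `$scope` to `a` during minification would break the toString() path, which is exactly why the array notation and the $inject property exist.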

Handling Authentication

Packt
13 Dec 2013
9 min read
(For more resources related to this topic, see here.) Understanding Authentication methods In a world where security on the Internet is such a big issue, the need for great authentication methods is something that cannot be missed. Therefore, Zend Framework 2 provides a range of authentication methods that suit everyone's needs. Getting ready To make full use of this, I recommend setting up a working Zend Framework 2 skeleton application. How to do it… The following is a list of authentication methods, or adapters as they are called, that are readily available in Zend Framework 2. We will provide a small overview of each adapter, and instructions on how you can use it. The DbTable adapter Constructing a DbTable adapter is pretty easy, if we take a look at the following constructor: public function __construct( // The Zend\Db\Adapter\Adapter DbAdapter $zendDb, // The table name to query on $tableName = null, // The column that serves as 'username' $identityColumn = null, // The column that serves as 'password' $credentialColumn = null, // Any optional treatment of the password before // checking, such as MD5(?), SHA1(?), etcetera $credentialTreatment = null ); The HTTP adapter After constructing the object we need to define the FileResolver to make sure that user details are actually parsed in. Depending on what we configured in the accept_schemes option, the FileResolver can either be set as a BasicResolver, a DigestResolver, or both. Let's take a quick look at how to set a FileResolver as a DigestResolver or BasicResolver (we do this in the /module/Application/src/Application/Controller/IndexController.php file): <?php namespace Application; // Use the FileResolver, and also the Http // authentication adapter. 
use Zend\Authentication\Adapter\Http\FileResolver; use Zend\Authentication\Adapter\Http; use Zend\Mvc\Controller\AbstractActionController; class IndexController extends AbstractActionController { public function indexAction() { // Create a new FileResolver and read in our file to use // in the Basic authentication $basicResolver = new FileResolver(); $basicResolver->setFile( '/some/file/with/credentials.txt' ); // Now create a FileResolver to read in our Digest file $digestResolver = new FileResolver(); $digestResolver->setFile( '/some/other/file/with/credentials.txt' ); // Options doesn't really matter at this point, we can // fill them in to anything we like $adapter = new Http($options); // Now set our DigestResolver/BasicResolver, depending // on our $options set $adapter->setBasicResolver($basicResolver); $adapter->setDigestResolver($digestResolver); } } How it works… After two short examples, let's take a look at the other adapters available. The DbTable adapter Let's begin with probably the most used adapter of them all, the DbTable adapter. This adapter connects to a database and pulls the requested username/password combination from a table and, if all went well, it will return to you an identity, which is nothing more than the record that matched the username details. To instantiate the adapter, it requires a Zend\Db\Adapter\Adapter in its constructor to connect with the database containing the user details; there are also a couple of other options that can be set. Let's take a look at the definition of the constructor: The second (tableName) option speaks for itself as it is just the table name we need to use to get our users; the third and fourth (identityColumn, credentialColumn) options are logical and represent the username and password (or whatever we use) columns in our table. The last option, the credentialTreatment option, however, might not make a lot of sense. 
The credentialTreatment option tells the adapter to treat the credentialColumn with a function before trying to query it. Examples of this could be the MD5(?), PASSWORD(?), or SHA1(?) functions if it was a MySQL database, but obviously this can differ per database as well. To give a small example of how the SQL can look (the actual adapter builds this query up differently) with and without a credential treatment, take a look at the following examples: With credential treatment: SELECT * FROM `users` WHERE `username` = 'some_user' AND `password` = MD5('some_password'); Without credential treatment: SELECT * FROM `users` WHERE `username` = 'some_user' AND `password` = 'some_password'; When defining the treatment we should always include a question mark where the password needs to go, for example, MD5(?) would create MD5('some_password'), but without the question mark it would not insert the password. Lastly, instead of giving the options through the constructor, we can also use the setter methods for the properties: setTableName(), setIdentityColumn(), setCredentialColumn(), and setCredentialTreatment(). The HTTP adapter The HTTP authentication adapter is an adapter that we have probably all come across at least once in our Internet lives. We can recognize the authentication when we go to a website and a pop-up shows where we can fill in our username and password to continue. This form of authentication is very basic, but still very effective in certain implementations, and therefore, a part of Zend Framework 2. There is one massive but to this authentication, though, and that is that it can (when using the Basic authentication) send the username and password in clear text through the browser (ouch!). There is however a solution to this problem and that is to use the Digest authentication, which is also supported by this adapter. 
If we take a look at the constructor of this adapter, we would see the following code line: public function __construct(array $config); The constructor accepts a load of keys in its config parameter, which are as follows: accept_schemes: This refers to what we want to accept authentication wise; this can be basic, digest, or basic digest. realm: This is a description of the realm we are in, for example Member's area. This is for the user only and is only to describe what the user is logging in for. digest_domains: These are URLs for which this authentication is working for. So if a user logs in with his details on any of the URLs defined, they will work. The URLs should be defined in a space-separated (weird, right?) list, for example /members/area /members/login. nonce_timeout: This will set the number of seconds the nonce (the hash users login with when we are using Digest authentication) is valid. Note, however, that nonce tracking and stale support are not implemented in Version 2.2 yet, which means it will authenticate again every time the nonce times out. use_opaque: This is either true or false (by default is true) and tells our adapter to send the opaque header to the client. The opaque header is a string sent by the server, which needs to be returned back on authentication. This does not work sometimes on Microsoft Internet Explorer browsers though, as they seem to ignore that header. Ideally the opaque header is an ever-changing string, to reduce predictability, but ZF 2 doesn't randomize the string and always returns the same hash. algorithm: This includes the algorithm to use for the authentication, it needs to be a supported algorithm that is defined in the supportedAlgos property. At the moment there is only MD5 though. proxy_auth: This boolean (by default is false) tells us if the authentication used is a proxy Authentication or not. It should be noted that there is a slight difference in files when using either Digest or Basic. 
Although both files have the same layout, they cannot be used interchangeably as the Digest requires the credentials to be MD5 hashed, while the Basic requires the credentials to be plain text. There should also always be a new line after every credential, meaning that the last line in the credential file should be empty. The layout of a credential file is as follows: username:realm:credentials For example: some_user:My Awesome Realm:clear text password Instead of a FileResolver, one can also use the ApacheResolver which can be used to read out htpasswd generated files, which comes in handy when there is already such a file in place. The Digest adapter The Digest adapter is basically the Http adapter without any Basic authentication. As the idea behind it is the same as the Http adapter, we will just go on and talk about the constructor, as that is a bit different in implementation: public function __construct($filename = null, $realm = null, $identity = null, $credential = null); As we can see the following options can be set when constructing the object: filename: This is the direct filename of the file to use with the Digest credentials, so no need to use a FileResolver with this one. realm: This identifies to the user what he/she is logging on to, for example My Awesome Realm or The Dragonborn's lair. As we are immediately trying to log on when constructing this, it does need to correspond with the credential file. identity: This is the username we are trying to log on with, and again it needs to resemble a user that is defined in the credential file to work. credential: This is the Digest password we try to log on with, and this again needs to match the password exactly like the one in the credential file. We can then, for example, just run $digestAdapter->getIdentity() to find out if we are successfully authenticated or not, resulting in NULL if we are not, and resulting in the identity column value if we are. 
The LDAP adapter

Using LDAP authentication is obviously a little more difficult to explain, so we will not cover it in full here, as that would take quite a while. What we will do is show the constructor of the LDAP adapter and explain its various options. If we want to know more about setting up an LDAP connection, we should take a look at the ZF2 documentation, where it is explained very well:

public function __construct(array $options = array(), $identity = null, $credential = null);

The options parameter in the constructor is an array of configuration options compatible with the Zend\Ldap\Ldap configuration. There are literally dozens of options that can be set here, so we advise looking at the LDAP documentation of ZF2 to learn more about them. The next two parameters, identity and credential, are the username and password respectively, so that explains itself really. Once we have set up the connection with the LDAP server, there isn't much left to do but get the identity and see whether we were successfully validated or not.

About Authentication

Authentication in Zend Framework 2 works through specific adapters, which are always an implementation of Zend\Authentication\Adapter\AdapterInterface and thus always provide the methods defined in there. However, the adapters all work differently, and a strong knowledge of the methods displayed previously is always a requirement. Some work through the browser, like the Http and Digest adapters, and others just require us to create a whole implementation, like the LDAP and the DbTable adapters.
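To round off the Basic versus Digest distinction: the Basic scheme the Http adapter accepts is simple enough to sketch outside PHP. The snippet below is illustrative Ruby, not ZF2 code, and the credentials are made up; it shows the Authorization header value a client sends for Basic authentication, which is just Base64 of "username:password" and is why Basic should only ever be used over HTTPS:

```ruby
require 'base64'

# The client side of HTTP Basic authentication: the Authorization header
# value is "Basic " followed by Base64("username:password"). Unlike
# Digest, the password travels in a trivially reversible encoding.
def basic_authorization_header(username, password)
  'Basic ' + Base64.strict_encode64("#{username}:#{password}")
end

puts basic_authorization_header('some_user', 'clear text password')
```

Decoding the part after "Basic " recovers the username and password verbatim, which mirrors the plain-text credential file the Basic scheme uses on the server side.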
Packt
09 Dec 2013
12 min read

Using Mongoid

(For more resources related to this topic, see here.)

Introducing Origin

Origin is a gem that provides the DSL for Mongoid queries. At first glance, the question may arise as to why we need a DSL for Mongoid queries: if we are finally going to convert the query to a MongoDB-compliant hash anyway, then why do we need a DSL? Origin was extracted from the Mongoid gem and put into a new gem, so that there is a standard for querying. It has no dependency on any other gem and is a standalone, pure query builder. The idea was that this could be a generic DSL that can be used even without Mongoid! So, now we have a very generic and standard querying pattern. For example, in Mongoid 2.x we had the criteria any_in and any_of, and no direct support for the and, or, and nor operations. In Mongoid 2.x, the only way we could fire a $or or a $and query was like this:

Author.where("$or" => {'language' => 'English', 'address.city' => 'London'})

And now in Mongoid 3, we have a cleaner approach:

Author.or(:language => 'English', 'address.city' => 'London')

Origin also provides good selectors directly in our models. So, this is now much more readable:

Book.gte(published_at: Date.parse('2012/11/11'))

Memory maps, delayed sync, and journals

As we have seen earlier, MongoDB stores data in memory-mapped files of at most 2 GB each. After the data is loaded for the first time into the memory-mapped files, we get almost memory-like speeds for access instead of disk I/O, which is much slower. These memory-mapped files are preallocated to ensure that there is no delay in file generation while saving data. However, to ensure that the data is not lost, it needs to be persisted to the disk. This is achieved by journaling. With journaling, every database operation is written to the journal, which is flushed to disk every 100 ms. Journaling is turned on by default in the MongoDB configuration. What is journaled is not the actual data but the operation itself.
This helps in better recovery (in case of any crash) and also ensures the consistency of writes. The data written to the various collections is flushed to the disk every 60 seconds. This ensures that the data is persisted periodically, while keeping data access almost as fast as memory. MongoDB relies on the operating system for the memory management of its memory-mapped files. This has the advantage of inheriting OS benefits as the OS is improved, and the disadvantage of a lack of control over how memory is managed by MongoDB. However, what happens if something goes wrong (the server crashes, the database stops, or the disk is corrupted)? To ensure durability, whenever data is saved in files, the action is logged to a file in chronological order. This is the journal entry, which is also a memory-mapped file but is synced with the disk every 100 ms. Using the journal, the database can be easily recovered in case of any crash. So, in the worst-case scenario, we could potentially lose 100 ms of information. This is a fair price to pay for the benefits of using MongoDB. MongoDB journaling makes it a very robust and durable database. However, it also helps us decide when to use MongoDB and when not to use it. 100 ms is a long time for some services, such as financial core banking or stock price updates; in such applications, MongoDB is not recommended. For most cases that do not involve heavy multi-table transactions, as most financial applications do, MongoDB can be suitable. All these things are handled seamlessly, and we don't usually need to change anything. We can control this behavior via the MongoDB configuration, but usually it's not recommended. Let's now see how we save data using Mongoid.

Updating documents and attributes

As with the ActiveModel specifications, save will update the changed attributes and return the updated object; otherwise it will return false on failure. The save! function will raise an exception on error.
In both cases, if we pass validate: false as a parameter to save, it will bypass the validations. A lesser-known persistence option is the upsert action. An upsert creates a new document if it does not find one and overwrites the existing document if it does. A good reason to use upsert is in the find_and_modify action. For example, suppose we want to reserve a book in our Sodibee system, and we want to ensure that at any one point there can be only one reservation for a book. In a traditional scenario:

t1: Request-1 searches for a book which is not reserved and finds it
t2: Now, it saves the book with the reservation information
t3: Request-2 searches for a reservation for the same book and finds that the book is reserved
t4: Request-2 handles the situation with either an error or waits for the reservation to be freed

So far so good! However, in a concurrent model, especially for web applications, this creates problems:

t1: Request-1 searches for a book which is not reserved and finds it
t2: Request-2 searches for a reservation for the same book and also gets the book back, since it's not yet reserved
t3: Request-1 saves the book with its reservation information
t4: Request-2 now overwrites the previous update and saves the book with its reservation information

Now we have a situation where two requests think that their reservation of the book was successful, which is against our expectations. This is a typical problem that plagues most web applications. The various ways in which we can solve it are discussed in the subsequent sections.

Write concern

MongoDB helps us ensure write consistency. This means that when we write something to MongoDB, it can guarantee the success of the write operation. Interestingly, this is a configurable option and is set to acknowledged by default. This means that the write is guaranteed because it waits for an acknowledgement before returning success. In earlier versions of Mongoid, safe: true was turned off by default.
This meant that the success of the write operation was not guaranteed. The write concern is configured in Mongoid.yml as follows:

development:
  sessions:
    default:
      hosts:
        - localhost:27017
      options:
        write:
          w: 1

The default write concern in Mongoid is configured with w: 1, which means that the success of a write operation is guaranteed. Let's see an example:

class Author
  include Mongoid::Document
  field :name, type: String

  index({name: 1}, {unique: true, background: true})
end

Indexing blocks read and write operations. Hence, it's recommended to configure indexing in the background in a Rails application. We shall now start a Rails console and see how this reacts to a duplicate key index by creating two Author objects with the same name.

irb> Author.create(name: "Gautam")
=> #<Author _id: 5143678345db7ca255000001, name: "Gautam">
irb> Author.create(name: "Gautam")
Moped::Errors::OperationFailure:
  The operation: #<Moped::Protocol::Command
    @length=83
    @request_id=3
    @response_to=0
    @op_code=2004
    @flags=[]
    @full_collection_name="sodibee_development.$cmd"
    @skip=0
    @limit=-1
    @selector={:getlasterror=>1, :w=>1}
    @fields=nil>
  failed with error 11000: "E11000 duplicate key error index: sodibee_development.authors.$name_1 dup key: { : \"Gautam\" }"

As we can see, it raised a duplicate key error and the document was not saved. Now, let's have some fun. Let's change the write concern to unacknowledged:

development:
  sessions:
    default:
      hosts:
        - localhost:27017
      options:
        write:
          w: 0

The write concern is now set to unacknowledged writes. That means we do not wait for the MongoDB write to eventually succeed, but assume that it will. Now let's see what happens with the same command that had failed earlier:

irb> Author.where(name: "Gautam").count
=> 1
irb> Author.create(name: "Gautam")
=> #<Author _id: 5287cba54761755624000000, name: "Gautam">
irb> Author.where(name: "Gautam").count
=> 1

There seems to be a discrepancy here. Though Mongoid create returned successfully, the data was not saved to the database.
Since we specified background: true for the name index, the document creation seemed to succeed because MongoDB had not indexed it yet, and we did not wait for an acknowledgement of the write operation. So, when MongoDB tries to index the data in the background, it realizes that the index criterion is not met (since the index is unique), and it deletes the document from the database. Since that happens in the background, there is no way to figure this out on the console or in our Rails application. This leads to an inconsistent result. So, how can we solve this problem? There are various ways:

We leave the Mongoid default write concern configuration alone. By default, it is w: 1 and it will raise an exception. This is the recommended approach, as prevention is better than cure!

Do not specify the background: true option. This will create indexes in the foreground. However, this approach is not recommended, as it can cause a drop in performance, because index creation blocks read and write access.

Add drop_dups: true. This deletes data, so you have to be really careful when using this option.

Other options to the index command create different types of indexes, as shown in the following table:

Index Type | Example | Description
sparse | index({twitter_name: 1}, {sparse: true}) | This creates sparse indexes, that is, only the documents containing the indexed fields are indexed. Use this with care, as you can get incomplete results.
2d / 2dsphere | index({:location => "2dsphere"}) | This creates a two-dimensional spherical index.

Text index

MongoDB 2.4 introduced text indexes, which are as close to free text search indexes as it gets. However, they do only basic text indexing—that is, they support stop words and stemming. They also assign a relevance score to each search. Text indexes are still an experimental feature in MongoDB, and they are not recommended for extensive use. Use ElasticSearch, Solr (Sunspot), or ThinkingSphinx instead.
The following code snippet shows how we can specify a text index with weightage:

index({
  "name" => 'text',
  "last_name" => 'text'
}, {
  weights: {
    'name' => 10,
    'last_name' => 5,
  },
  name: 'author_text_index'
})

There is no direct search support in Mongoid (as yet). So, if you want to invoke a text search, you need to hack around a little:

irb> srch = Mongoid::Contextual::TextSearch.new(Author.collection, Author.all, 'john')
=> #<Mongoid::Contextual::TextSearch
  selector: {}
  class: Author
  search: john
  filter: {}
  project: N/A
  limit: N/A
  language: default>
irb> srch.execute
=> {"queryDebugString"=>"john||||||", "language"=>"english",
 "results"=>[{"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f00030b'), "name"=>"Bettye Johns"}},
 {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f00046d'), "name"=>"John Pagac"}},
 {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f000578'), "name"=>"Jeanie Johns"}},
 {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058445db7c843f0007e7')...
 {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058a45db7c843f0025f1'), "name"=>"Alford Johns"}}],
 "stats"=>{"nscanned"=>25, "nscannedObjects"=>0, "n"=>25, "nfound"=>25, "timeMicros"=>31103},
 "ok"=>1.0}

By default, text search is disabled in the MongoDB configuration. We need to turn it on by adding the following to the MongoDB configuration file, typically /usr/local/mongo.conf:

setParameter = textSearchEnabled=true

This returns a result with statistical data as well as the documents and their relevance score. Interestingly, it also specifies the language. There are a few more things we can do with the search result.
For example, we can see the statistical information as follows:

irb> a.stats
=> {"nscanned"=>25, "nscannedObjects"=>0, "n"=>25, "nfound"=>25, "timeMicros"=>31103}

We can also convert the data into our Mongoid model objects by using project, as shown in the following command:

irb> a.project(:name).to_a
=> [#<Author _id: 51fc058345db7c843f00030b, name: "Bettye Johns", last_name: nil, password: nil>,
 #<Author _id: 51fc058345db7c843f00046d, name: "John Pagac", last_name: nil, password: nil>,
 #<Author _id: 51fc058345db7c843f000578, name: "Jeanie Johns", last_name: nil, password: nil> ...

Some of the important things to remember are as follows:

Text indexes can be very heavy in memory, as they can return the documents, so the result can be large.

We can use multiple keys (or filters) along with a text search. For example, an index defined with index({'state': 1, name: 'text'}) will mandate the use of state in every text search that is specified.

A search for "john doe" will result in a search for "john", "doe", or both.

A search for "john" and "doe" will search for both "john" and "doe" in a random order.

A search for "\"john doe\"", that is, with escaped quotes, will search for documents containing the exact phrase "john doe".

A lot more data can be found at http://docs.mongodb.org/manual/tutorial/search-for-text/

Summary

This article provides an excellent reference for using Mongoid, with code samples and explanations that help in understanding the various features of Mongoid.

Resources for Article:

Further resources on this subject:

Installing phpMyAdmin [Article]
Integrating phpList 2 with Drupal [Article]
Documentation with phpDocumentor: Part 1 [Article]

Packt
26 Nov 2013
7 min read

Touch Events

(For more resources related to this topic, see here.)

The why and when of touch events

These touch events come in a few different flavors. There are taps, swipes, rotations, pinches, and more complicated gestures available for you to make your users' experience more enjoyable. Each type of touch event has an established convention, which is important to know so your user has minimal friction when using your website for the first time.

The tap is analogous to the mouse click in a traditional desktop environment, whereas the rotate, pinch, and related gestures have no desktop equivalent. This presents an interesting problem—what can those events be used for? Pinch events are typically used to scale views (have a look at the following screenshot of Google Maps accessed from an iPad, which uses pinch events to control the map's zoom level), while rotate events generally rotate elements on screen. Swipes can be used to move content from left to right or top to bottom. Multifinger gestures can differ as per your website's use cases, but it's still worthwhile examining whether a similar interaction, and thus an established convention, already exists.

Web applications can benefit greatly from touch events. For example, imagine a paint-style web application. It could use pinches to control zoom, rotations to rotate the canvas, and swipes to move between color options. There is almost always an intuitive way of using touch events to enhance a web application. In order to get comfortable with touch events, it's best to get out there and start using them practically. To this end, we will be building a paint tool with a difference, that is, instead of using the traditional mouse pointer as the selection and drawing tool, we will exclusively use touch events. Together, we can learn just how awesome touch events are and the pleasure they can afford your users.
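The pinch and rotate conventions described above boil down to simple geometry: scale is the ratio of the current distance between the two touch points to their initial distance, and rotation is the change in the angle of the line joining them. The arithmetic is sketched below in Ruby for brevity; in the browser this math would live in a JavaScript touchmove handler reading event.touches, and the coordinate values are made up for illustration:

```ruby
# Geometry behind pinch-to-zoom and two-finger rotate. Each point is an
# [x, y] pair, standing in for a touch point's pageX/pageY coordinates.

# Straight-line distance between two touch points.
def distance(p1, p2)
  Math.hypot(p2[0] - p1[0], p2[1] - p1[1])
end

# Scale factor: current finger spread divided by initial finger spread.
def pinch_scale(start_pts, current_pts)
  distance(current_pts[0], current_pts[1]) / distance(start_pts[0], start_pts[1])
end

# Rotation: change in the angle of the line joining the two fingers.
def rotation_degrees(start_pts, current_pts)
  angle = ->(a, b) { Math.atan2(b[1] - a[1], b[0] - a[0]) }
  (angle.call(*current_pts) - angle.call(*start_pts)) * 180 / Math::PI
end

start_pts   = [[100, 100], [200, 100]]  # two fingers, 100px apart
current_pts = [[50, 100], [250, 100]]   # fingers spread to 200px apart
puts pinch_scale(start_pts, current_pts)  # 2.0, that is, zoom to double size
```

A view layer would typically multiply its zoom level by this scale factor on each gesture update, which is exactly the behavior the Google Maps example exhibits.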
Touch events are the way of the future, and adding them to your developer tool belt will mean that you can stay at the cutting edge. While MC Hammer may not agree with these events, the rest of the world will appreciate touching your applications.

Testing while keeping your sanity (Simple)

Touch events are awesome on touch-enabled devices. Unfortunately, your development environment is generally not touch-enabled, which means testing these events can be trickier than normal testing. Thankfully, there are a bunch of methods to test these events on non-touch enabled devices, so you don't have to constantly switch between environments. In saying that though, it's still important to do the final testing on actual devices, as there can be subtle differences.

How to do it...

Learn how to do initial testing with Google Chrome.
Debug remotely with Mobile Safari and Safari.
Debug further with simulators and emulators.
Use general tools to debug your web application.

How it works…

As mentioned earlier, testing touch events generally isn't as straightforward as the normal update-reload-test flow in web development. Luckily though, the traditional desktop browser can still get you some of the way when it comes to testing these events. While you generally are targeting Mobile Safari on iOS and Chrome, or Browser on Android, it's best to do development and initial testing in Chrome on the desktop. Chrome has a handy feature in its debugging tools called Emulate touch events. This allows a lot of your mouse-related interactions to trigger their touch equivalents. You can find this setting by opening the Web Inspector, clicking on the Settings cog, switching to the Overrides tab, and enabling Emulate touch events (if the option is grayed out, simply click on Enable at the top of the screen).
Using this emulation, we are able to test many touch interactions; however, there are certain patterns (such as scaling and rotating) that aren't really possible just using emulated events and the mouse pointer. To correctly test these, we need to find the middle ground between testing on a desktop browser and testing on actual devices.

There's more…

Remember the following about testing with simulators and browsers:

Device simulators

While testing with Chrome will get you some of the way, eventually you will want to use a simulator or emulator for the target device, be it a phone, tablet, or other touch-enabled device.

Testing in Mobile Safari

The iOS Simulator, for testing Mobile Safari on iPhone, iPod, and iPad, is available only on OS X. You must first download Xcode from the App Store. Once you have Xcode, go to Xcode | Preferences | Downloads | Components, where you can download the latest iOS Simulator. You can also refer to the following screenshot:

You can then open the iOS Simulator from Xcode by accessing Xcode | Open Developer Tool | iOS Simulator. Alternatively, you can make a direct alias: open the Xcode icon's context menu in the Applications folder, select Show Package Contents, and navigate to Xcode.app/Contents/Applications. Here you can create an alias and drag it anywhere using Finder. Once you have the simulator (or an actual device), you can attach Safari's Web Inspector to it, which makes debugging very easy. Firstly, open Safari and then launch Mobile Safari in either the iOS Simulator or on your actual device (ensure it's plugged in via its USB cable). Ensure that Web Inspector is enabled in Mobile Safari's settings via Settings | Safari | Advanced | Web Inspector. You can refer to the following screenshot:

Then, in Safari, go to Develop | <your device name> | <tab name>.
You can now use the Web Inspector to inspect your remote Mobile Safari instance, as shown in the following screenshot. Testing Mobile Safari in the iOS Simulator allows you to use the mouse to trigger all the relevant touch events. In addition, you can hold down the Alt key to test pinch gestures.

Testing on Android's Chrome

Testing Android on the desktop is a little more complicated, due to the nature of the Android emulator. Being an emulator and not a simulator, its performance suffers (in the name of accuracy). It also isn't so straightforward to attach a debugger. You can download the Android SDK from http://developer.android.com/sdk. Inside the tools directory, you can launch the Android Emulator by setting up a new Android Virtual Device (AVD). You can then launch the browser to test the touch events. Keep in mind to use 10.0.2.2 instead of localhost to access locally running servers. To debug with a real device and Google Chrome, official documentation can be followed at https://developers.google.com/chrome-developer-tools/docs/remote-debugging. When there isn't a built-in debugger or inspector, you can use generic tools such as Web Inspector Remote (Weinre). This tool is platform-agnostic, so you can use it for Mobile Safari, Android's Browser, Chrome for Android, Windows Phone, BlackBerry, and so on. It can be downloaded from http://people.apache.org/~pmuellr/weinre. To start using Weinre, you need to include a JavaScript file on the page you want to test, and then run the testing server. The URL of this JavaScript file is given to you when you run the weinre command from its download directory. When you have Weinre running on your desktop, you can use its Web Inspector-like interface to do remote debugging and inspect your touch events; you can send messages to console.log() and use a familiar interface for other debugging.
Summary

In this article the author discussed touch events, highlighted the limitations these events face in unfavorable environments, and covered the methods to test them on non-touch enabled devices.

Resources for Article:

Further resources on this subject:

Web Design Principles in Inkscape [Article]
Top Features You Need to Know About – Responsive Web Design [Article]
Sencha Touch: Layouts Revisited [Article]