How-To Tutorials - Front-End Web Development

341 Articles

Working with Data Components

Packt
22 Nov 2013
15 min read
(For more resources related to this topic, see here.) Introducing the DataList component The DataList component displays a collection of data in the list layout with several display types and supports AJAX pagination. The DataList component iterates through a collection of data and renders its child components for each item. Let us see how to use <p:dataList>to display a list of tag names as an unordered list: <p:dataList value="#{tagController.tags}" var="tag" type="unordered" itemType="disc"> #{tag.label} </p:dataList> The preceding <p:dataList> component displays tag names as an unordered list of elements marked with disc type bullets. The valid type options are unordered, ordered, definition, and none. We can use type="unordered" to display items as an unordered collection along with various itemType options such as disc, circle, and square. By default, type is set to unordered and itemType is set to disc. We can set type="ordered" to display items as an ordered list with various itemType options such as decimal, A, a, and i representing numbers, uppercase letters, lowercase letters, and roman numbers respectively. Time for action – displaying unordered and ordered data using DataList Let us see how to display tag names as unordered and ordered lists with various itemType options. Create <p:dataList> components to display items as unordered and ordered lists using the following code: <h:form> <p:panel header="Unordered DataList"> <h:panelGrid columns="3"> <h:outputText value="Disc"/> <h:outputText value="Circle" /> <h:outputText value="Square" /> <p:dataList value="#{tagController.tags}" var="tag" itemType="disc"> #{tag.label} </p:dataList> <p:dataList value="#{tagController.tags}" var="tag" itemType="circle"> #{tag.label} </p:dataList> <p:dataList value="#{tagController.tags}" var="tag" itemType="square"> #{tag.label} </p:dataList> </h:panelGrid> </p:panel> <p:panel header="Ordered DataList"> <h:panelGrid columns="4"> <h:outputText value="Number"/> <h:outputText value="Uppercase Letter" /> <h:outputText value="Lowercase Letter" /> <h:outputText value="Roman Letter" /> <p:dataList value="#{tagController.tags}" var="tag" type="ordered"> #{tag.label} </p:dataList> <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="A"> #{tag.label} </p:dataList> <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="a"> #{tag.label} </p:dataList> <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="i"> #{tag.label} </p:dataList> </h:panelGrid> </p:panel> </h:form> Implement the TagController.getTags() method to return a collection of tag objects: public class TagController { private List<Tag> tags = null; public TagController() { tags = loadTagsFromDB(); } public List<Tag> getTags() { return tags; } } What just happened? We have created DataList components to display tag names as an unordered list using type="unordered" and as an ordered list using type="ordered" with various supported itemTypes values. This is shown in the following screenshot: Using DataList with pagination support DataList has built-in pagination support that can be enabled by setting paginator="true". By enabling pagination, the various page navigation options will be displayed using the default paginator template. We can customize the paginator template to display only the desired options. 
The paginator can be customized using the paginatorTemplate option that accepts the following keys of UI controls: FirstPageLink LastPageLink PreviousPageLink NextPageLink PageLinks CurrentPageReport RowsPerPageDropdown Note that {RowsPerPageDropdown} has its own template, and options to display is provided via the rowsPerPageTemplate attribute (for example, rowsPerPageTemplate="5,10,15"). Also, {CurrentPageReport} has its own template defined with the currentPageReportTemplate option. You can use the {currentPage}, {totalPages}, {totalRecords}, {startRecord}, and {endRecord} keywords within the currentPageReport template. The default is "{currentPage} of {totalPages}". The default paginator template is "{FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink}". We can customize the paginator template to display only the desired options. For example: {CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown} The paginator can be positioned using the paginatorPosition attribute in three different locations: top, bottom, or both(default). The DataList component provides the following attributes for customization: rows: This is the number of rows to be displayed per page. first: This specifies the index of the first row to be displayed. The default is 0. paginator: This enables pagination. The default is false. paginatorTemplate: This is the template of the paginator. rowsPerPageTemplate: This is the template of the rowsPerPage dropdown. currentPageReportTemplate: This is the template of the currentPageReport UI. pageLinks: This specifies the maximum number of page links to display. The default value is 10. paginatorAlwaysVisible: This defines if paginator should be hidden when the total data count is less than the number of rows per page. The default is true. rowIndexVar: This specifies the name of the iterator to refer to for each row index. varStatus: This specifies the name of the exported request scoped variable to represent the state of the iteration same as in <ui:repeat> attribute varStatus. Time for action – using DataList with pagination Let us see how we can use the DataList component's pagination support to display five tags per page. Create a DataList component with pagination support along with custom paginatorTemplate: <p:panel header="DataList Pagination"> <p:dataList value="#{tagController.tags}" var="tag" id="tags" type="none" paginator="true" rows="5" paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}" rowsPerPageTemplate="5,10,15"> <f:facet name="header"> Tags </f:facet> <h:outputText value="#{tag.id} - #{tag.label}" style="margin-left:10px" /> <br/> </p:dataList> </p:panel> What just happened? We have created a DataList component along with pagination support by setting paginator="true". We have customized the paginator template to display additional information such as CurrentPageReport and RowsPerPageDropdown. Also, we have used the rowsPerPageTemplate attribute to specify the values for RowsPerPageDropdown. 
The following screenshot displays the result: Displaying tabular data using the DataTable component DataTable is an enhanced version of the standard DataTable that provides various additional features such as: Pagination Lazy loading Sorting Filtering Row selection Inline row/cell editing Conditional styling Expandable rows Grouping and SubTable and many more In our TechBuzz application, the administrator can view a list of users and enable/disable user accounts. First, let us see how we can display list of users using basic DataTable as follows: <p:dataTable id="usersTbl" var="user" value="#{adminController.users}"> <f:facet name="header"> List of Users </f:facet> <p:column headerText="Id"> <h:outputText value="#{user.id}" /> </p:column> <p:column headerText="Email"> <h:outputText value="#{user.emailId}" /> </p:column> <p:column headerText="FirstName"> <h:outputText value="#{user.firstName}" /> </p:column> <p:column headerText="Disabled"> <h:outputText value="#{user.disabled}" /> </p:column> <f:facet name="footer"> Total no. of Users: #{fn:length(adminController.users)}. </f:facet> </p:dataTable> The following screenshot shows us the result: PrimeFaces 4.0 introduced the Sticky component and provides out-of-the-box support for DataTable to make the header as sticky while scrolling using the stickyHeader attribute: <p:dataTable var="user" value="#{adminController.users}" stickyHeader="true"> ... </p:dataTable> Using pagination support If there are a large number of users, we may want to display users in a page-by-page style. DataTable has in-built support for pagination. Time for action – using DataTable with pagination Let us see how we can display five users per page using pagination. Create a DataTable component using pagination to display five records per page, using the following code: <p:dataTable id="usersTbl" var="user" value="#{adminController.users}" paginator="true" rows="5" paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}" currentPageReportTemplate="( {startRecord} - {endRecord}) of {totalRecords} Records." rowsPerPageTemplate="5,10,15"> <p:column headerText="Id"> <h:outputText value="#{user.id}" /> </p:column> <p:column headerText="Email"> <h:outputText value="#{user.emailId}" /> </p:column> <p:column headerText="FirstName"> <h:outputText value="#{user.firstName}" /> </p:column> <p:column headerText="Disabled"> <h:outputText value="#{user.disabled}" /> </p:column> </p:dataTable> What just happened? We have created a DataTable component with the pagination feature to display five rows per page. Also, we have customized the paginator template and provided an option to change the page size dynamically using the rowsPerPageTemplate attribute. Using columns sorting support DataTable comes with built-in support for sorting on a single column or multiple columns. 
You can define a column as sortable using the sortBy attribute as follows: <p:column headerText="FirstName" sortBy="#{user.firstName}"> <h:outputText value="#{user.firstName}" /> </p:column> You can specify the default sort column and sort order using the sortBy and sortOrder attributes on the <p:dataTable> element: <p:dataTable id="usersTbl2" var="user" value="#{adminController.users}" sortBy="#{user.firstName}" sortOrder="descending"> </p:dataTable> The <p:dataTable> component's default sorting algorithm uses a Java comparator; you can also plug in your own custom sort method: <p:column headerText="FirstName" sortBy="#{user.firstName}" sortFunction="#{adminController.sortByFirstName}"> <h:outputText value="#{user.firstName}" /> </p:column> public int sortByFirstName(Object firstName1, Object firstName2) { // return -1, 0, or 1 if firstName1 is less than, equal to, or greater than firstName2 respectively return ((String)firstName1).compareToIgnoreCase(((String)firstName2)); } By default, DataTable's sortMode is set to single; to enable sorting on multiple columns, set sortMode to multiple. In multiple-column sort mode, clicking a column while holding the meta key (Ctrl or Command) adds that column to the sort order group: <p:dataTable id="usersTbl" var="user" value="#{adminController.users}" sortMode="multiple"> </p:dataTable> Using column filtering support DataTable provides support for column-level filtering as well as global filtering (on all columns), and provides an option to hold the list of filtered records. In addition to the default match mode startsWith, we can use various other match modes such as endsWith, exact, and contains. Time for action – using DataTable with filtering Let us see how we can use filters with the users' DataTable. Create a DataTable component and apply column-level filters along with a global filter that applies to all columns: <p:dataTable widgetVar="userTable" var="user" value="#{adminController.users}" filteredValue="#{adminController.filteredUsers}" emptyMessage="No Users found for the given Filters"> <f:facet name="header"> <p:outputPanel> <h:outputText value="Search all Columns:" /> <p:inputText id="globalFilter" onkeyup="userTable.filter()" style="width:150px" /> </p:outputPanel> </f:facet> <p:column headerText="Id"> <h:outputText value="#{user.id}" /> </p:column> <p:column headerText="Email" filterBy="#{user.emailId}" footerText="contains" filterMatchMode="contains"> <h:outputText value="#{user.emailId}" /> </p:column> <p:column headerText="FirstName" filterBy="#{user.firstName}" footerText="startsWith"> <h:outputText value="#{user.firstName}" /> </p:column> <p:column headerText="LastName" filterBy="#{user.lastName}" filterMatchMode="endsWith" footerText="endsWith"> <h:outputText value="#{user.lastName}" /> </p:column> <p:column headerText="Disabled" filterBy="#{user.disabled}" filterOptions="#{adminController.userStatusOptions}" filterMatchMode="exact" footerText="exact"> <h:outputText value="#{user.disabled}" /> </p:column> </p:dataTable> Initialize userStatusOptions in the AdminController managed bean.
@ManagedBean @ViewScoped public class AdminController { private List<User> users = null; private List<User> filteredUsers = null; private SelectItem[] userStatusOptions; public AdminController() { users = loadAllUsersFromDB(); this.userStatusOptions = new SelectItem[3]; this.userStatusOptions[0] = new SelectItem("", "Select"); this.userStatusOptions[1] = new SelectItem("true", "True"); this.userStatusOptions[2] = new SelectItem("false", "False"); } //setters and getters } What just happened? We have used various filterMatchMode instances, such as startsWith, endsWith, and contains, while applying column-level filters. We have used the filterOptions attribute to specify the predefined filter values, which is displayed as a select drop-down list. As we have specified filteredValue="#{adminController.filteredUsers}", once the filters are applied the filtered users list will be populated into the filteredUsers property. This following is the resultant screenshot: Since PrimeFaces Version 4.0, we can specify the sortBy and filterBy properties as sortBy="emailId" and filterBy="emailId" instead of sortBy="#{user.emailId}" and filterBy="#{user.emailId}". A couple of important tips It is suggested to use a scope longer than the request such as the view scope to keep the filteredValue attribute so that the filtered list is still accessible after filtering. The filter located at the header is a global one applying on all fields; this is implemented by calling the client-side API method called filter(). The important part is to specify the ID of the input text as globalFilter, which is a reserved identifier for DataTable. Selecting DataTable rows Selecting one or more rows from a table and performing operations such as editing or deleting them is a very common requirement. The DataTable component provides several ways to select a row(s). Selecting single row We can use a PrimeFaces' Command component, such as commandButton or commandLink, and bind the selected row to a server-side property using <f:setPropertyActionListener>, shown as follows: <p:dataTable id="usersTbl" var="user" value="#{adminController.users}"> <!-- Column definitions --> <p:column style="width:20px;"> <p:commandButton id="selectButton" update=":form:userDetails" icon="ui-icon-search" title="View"> <f:setPropertyActionListener value="#{user}" target="#{adminController.selectedUser}" /> </p:commandButton> </p:column> </p:dataTable> <h:panelGrid id="userDetails" columns="2" > <h:outputText value="Id:" /> <h:outputText value="#{adminController.selectedUser.id}"/> <h:outputText value="Email:" /> <h:outputText value="#{adminController.selectedUser.emailId}"/> </h:panelGrid> Selecting rows using a row click Instead of having a separate button to trigger binding of a selected row to a server-side property, PrimeFaces provides another simpler way to bind the selected row by using selectionMode, selection, and rowKey attributes. 
Also, we can use the rowSelect and rowUnselect events to update other components based on the selected row, shown as follows: <p:dataTable var="user" value="#{adminController.users}" selectionMode="single" selection="#{adminController.selectedUser}" rowKey="#{user.id}"> <p:ajax event="rowSelect" listener="#{adminController.onRowSelect}" update=":form:userDetails"/> <p:ajax event="rowUnselect" listener="#{adminController.onRowUnselect}" update=":form:userDetails"/> <!-- Column definitions --> </p:dataTable> <h:panelGrid id="userDetails" columns="2" > <h:outputText value="Id:" /> <h:outputText value="#{adminController.selectedUser.id}"/> <h:outputText value="Email:" /> <h:outputText value="#{adminController.selectedUser.emailId}"/> </h:panelGrid> Similarly, we can select multiple rows using selectionMode="multiple" and bind the selection attribute to an array or list of user objects: <p:dataTable var="user" value="#{adminController.users}" selectionMode="multiple" selection="#{adminController.selectedUsers}" rowKey="#{user.id}"> <!-- Column definitions --> </p:dataTable> rowKey should be a unique identifier from your data model; it is used by DataTable to find the selected rows. You can either define this key by using the rowKey attribute or by binding a data model that implements org.primefaces.model.SelectableDataModel. When the multiple selection mode is enabled, we need to hold the Ctrl or Command key and click on the rows to select multiple rows. If we click on a row without holding the Ctrl or Command key, the previous selection will be cleared and only the last clicked row will be selected. We can customize this behavior using the rowSelectMode attribute. If you set rowSelectMode="add", clicking on a row will keep the previous selection and add the currently selected row even if you don't hold the Ctrl or Command key. The default rowSelectMode value is new. We can disable the row selection feature by setting disabledSelection="true". Selecting rows using a radio button / checkbox Another very common scenario is having a radio button or checkbox for each row, so that the user can select one or more rows and then perform actions such as edit or delete. The DataTable component provides radio-button-based single row selection using a nested <p:column> element with selectionMode="single": <p:dataTable var="user" value="#{adminController.users}" selection="#{adminController.selectedUser}" rowKey="#{user.id}"> <p:column selectionMode="single"/> <!-- Column definitions --> </p:dataTable> The DataTable component also provides checkbox-based multiple row selection using a nested <p:column> element with selectionMode="multiple": <p:dataTable var="user" value="#{adminController.users}" selection="#{adminController.selectedUsers}" rowKey="#{user.id}"> <p:column selectionMode="multiple"/> <!-- Column definitions --> </p:dataTable> In our TechBuzz application, the administrator would like a facility to select multiple users and disable them in one go. Let us see how we can implement this using checkbox-based multiple row selection.

Foundations

Packt
20 Nov 2013
6 min read
(For more resources related to this topic, see here.) Installation If you do not have node installed, visit: http://nodejs.org/download/. There is also an installation guide on the node GitHub repository wiki if you prefer not to or cannot use an installer: https://github.com/joyent/node/wiki/Installation. Let's install Express globally: npm install -g express If you have downloaded the source code, install its dependencies by running this command: npm install Testing Express with Mocha and SuperTest Now that we have Express installed and our package.json file in place, we can begin to drive out our application with a test-first approach. We will now install two modules to assist us: mocha and supertest. Mocha is a testing framework for node; it's flexible, has good async support, and allows you to run tests in both a TDD and BDD style. It can also be used on both the client and server side. Let's install Mocha with the following command: npm install -g mocha --save-dev SuperTest is an integration testing framework that will allow us to easily write tests against a RESTful HTTP server. Let's install SuperTest: npm install supertest --save-dev Continuous testing with Mocha One of the great things about working with a dynamic language and one of the things that has drawn me to node is the ability to easily do Test-Driven Development and continuous testing. Simply run Mocha with the -w watch switch and Mocha will respond when changes to our codebase are made, and will automatically rerun the tests: mocha -w Extracting routes Express supports multiple options for application structure. Extracting elements of an Express application into separate files is one option; a good candidate for this is routes. Let's extract our route heartbeat into ./lib/routes/heartbeat.js; the following listing simply exports the route as a function called index: exports.index = function(req, res){ res.json(200, 'OK'); }; Let's make a change to our Express server and remove the anonymous function we pass to app.get for our route and replace it with a call to the function in the following listing. We import the route heartbeat and pass in a callback function, heartbeat.index: var express = require('express') , http = require('http') , config = require('../configuration') , heartbeat = require('../routes/heartbeat') , app = express(); app.set('port', config.get('express:port')); app.get('/heartbeat', heartbeat.index); http.createServer(app).listen(app.get('port')); module.exports = app; 404 handling middleware In order to handle a 404 Not Found response, let's add a 404 not found middleware. Let's write a test, ./test/heartbeat.js; the content type returned should be JSON and the status code expected should be 404 Not Found: describe('vision heartbeat api', function(){ describe('when requesting resource /missing', function(){ it('should respond with 404', function(done){ request(app) .get('/missing') .expect('Content-Type', /json/) .expect(404, done); }) }); }); Now, add the following middleware to ./lib/middleware/notFound.js. Here we export a function called index and call res.json, which returns a 404 status code and the message Not Found. The next parameter is not called as our 404 middleware ends the request by returning a response.
Calling next would invoke the next middleware in our Express stack; we do not have any more middleware after this one. Because of this, it's customary to add error middleware and 404 middleware as the last middleware in your server: exports.index = function(req, res, next){ res.json(404, 'Not Found.'); }; Now add the 404 not found middleware to ./lib/express/index.js: var express = require('express') , http = require('http') , config = require('../configuration') , heartbeat = require('../routes/heartbeat') , notFound = require('../middleware/notFound') , app = express(); app.set('port', config.get('express:port')); app.get('/heartbeat', heartbeat.index); app.use(notFound.index); http.createServer(app).listen(app.get('port')); module.exports = app; Logging middleware Express comes with a logger middleware via Connect; it's very useful for debugging an Express application. Let's add it to our Express server ./lib/express/index.js: var express = require('express') , http = require('http') , config = require('../configuration') , heartbeat = require('../routes/heartbeat') , notFound = require('../middleware/notFound') , app = express(); app.set('port', config.get('express:port')); app.use(express.logger({ immediate: true, format: 'dev' })); app.get('/heartbeat', heartbeat.index); app.use(notFound.index); http.createServer(app).listen(app.get('port')); module.exports = app; The immediate option will write a log line on request instead of on response. The dev option provides concise output colored by the response status. The logger middleware is placed high in the Express stack in order to log all requests. Logging with Winston We will now add logging to our application using Winston; let's install Winston: npm install winston --save The 404 middleware will need to log 404 not found, so let's create a simple logger module, ./lib/logger/index.js; the details of our logger will be configured with Nconf. We import Winston and the configuration modules. We define our Logger function, which constructs and returns a file logger (winston.transports.File) that we configure using values from our config. We default the logger's maximum size to 1 MB, with a maximum of three rotating files. We instantiate the Logger function, returning it as a singleton. var winston = require('winston') , config = require('../configuration'); function Logger(){ return winston.add(winston.transports.File, { filename: config.get('logger:filename'), maxsize: 1048576, maxFiles: 3, level: config.get('logger:level') }); } module.exports = new Logger(); Let's add the logger configuration details to our config files ./config/development.json and ./config/test.json: { "express": { "port": 3000 }, "logger": { "filename": "logs/run.log", "level": "silly" } } Let's alter the ./lib/middleware/notFound.js middleware to log errors. We import our logger and log an error message via logger when a 404 Not Found response is thrown: var logger = require("../logger"); exports.index = function(req, res, next){ logger.error('Not Found'); res.json(404, 'Not Found'); }; Summary This article has shown in detail, with all the commands, how Node.js and Express are installed. Testing Express with Mocha and SuperTest was covered in detail, and logging was added to our application using middleware and Winston.
Resources for Article: Further resources on this subject: Spring Roo 1.1: Working with Roo-generated Web Applications [Article] Building tiny Web-applications in Ruby using Sinatra [Article] Develop PHP Web Applications with NetBeans, VirtualBox and Turnkey LAMP Appliance [Article]

CSS3 Animation

Packt
18 Nov 2013
7 min read
(For more resources related to this topic, see here.) The websites we see today are complex and complicated. By complex and complicated, we are referring to the development of these websites and not the webpage itself. We see animations and complex features. Prior to HTML5 and CSS3, JavaScript was used extensively for this purpose. HTML was incorrectly used for styling when it was expected to design the structural markup of the page. However, with the advent of CSS, it is good practice to use HTML for markup and CSS for styling. CSS3 brings along transforms, transitions, and animation features that make it easier to develop awesome effects. With a transition, we can view the change from one state to another, but when it comes to multiple states, animation is the solution. Let's discuss the various properties of CSS3 animations and then incorporate them in code to understand them better. @keyframes The points at which the transition should take place can be defined using the @keyframes property. As of now, we need to add a vendor prefix to the @keyframes property as it is still in its development state. In the future, when it is accepted as a standard, we will not have to use a vendor prefix. We can use percentages or the from and to keywords to implement the change in state from one CSS style to another. animation-name We need to apply an animation to an element. This property enables us to do so by referring to the animation name defined in the @keyframes rule. However, it cannot be a standalone property and has to be used in conjunction with other animation properties. animation-duration Using this feature, we can define the duration of the animation. If we set animation-duration to 5 seconds, changes in the CSS-defined states will need to be completed within 5 seconds. animation-delay Similar to the delay property in transitions, the delay feature will delay the animation by the time period specified. animation-timing-function Similar to the transition timing function, this property decides the speed of the animation. It behaves the same way as the transition timing function that we have seen earlier. animation-iteration-count We can decide the number of iterations carried out in the animation phase using this property. Setting this property to infinite means that the animation will never stop. animation-direction We can decide the direction of the animation using this property. We can use values such as reverse and alternate to define the direction of the element to be animated. animation-play-state Using this feature, we can determine whether the animation is running or paused. Now that we have had a look at these properties, we will incorporate some of them in code to understand the functionality in a better way. Hence, to gain practical insight, let's look at the following code.
<!DOCTYPE html> <html> <head> <style> body { background:#000; color:#fff; } #trigger { width:100px; height:100px; position:absolute; top:50%; margin:-50px 0 0 -50px; left:50%; background: black; border-radius:50px; /*set the animation*/ /*[animation name] [animation duration] [animation timing function] [animation delay] [animation iterations count] [animation direction]*/ animation: glowness 5s linear 0s 5 alternate; -moz-animation: glowness 5s linear 0s 5 alternate; /* Firefox */ -webkit-animation: glowness 5s linear 0s 5 alternate; /* Safari and Chrome */ -o-animation: glowness 5s linear 0s 5 alternate; /* Opera */ -ms-animation: glowness 5s linear 0s 5 alternate; /* IE10 */ } #trigger:hover { animation-play-state: paused; -moz-animation-play-state: paused; -webkit-animation-play-state: paused; -o-animation-play-state: paused; -ms-animation-play-state: paused; } /*animation keyframes*/ @keyframes glowness { 0% {box-shadow: 0 0 80px orange;} 25% {box-shadow: 0 0 150px red;} 50% {box-shadow: 0 0 70px pink;} 75% {box-shadow: 0 0 50px violet;} 100% {box-shadow: 0 0 100px yellow;} } @-moz-keyframes glowness /* Firefox */ { 0% {box-shadow: 0 0 80px orange;} 25% {box-shadow: 0 0 150px red;} 50% {box-shadow: 0 0 70px pink;} 75% {box-shadow: 0 0 50px violet;} 100% {box-shadow: 0 0 100px yellow;} } @-webkit-keyframes glowness /* Safari and Chrome */ { 0% {box-shadow: 0 0 80px orange;} 25% {box-shadow: 0 0 150px red;} 50% {box-shadow: 0 0 70px pink;} 75% {box-shadow: 0 0 50px violet;} 100% {box-shadow: 0 0 100px yellow;} } @-o-keyframes glowness /* Opera */ { 0% {box-shadow: 0 0 80px orange;} 25% {box-shadow: 0 0 150px red;} 50% {box-shadow: 0 0 70px pink;} 75% {box-shadow: 0 0 50px violet;} 100% {box-shadow: 0 0 100px yellow;} } @-ms-keyframes glowness /* IE10 */ { 0% {box-shadow: 0 0 20px green;} 25% {box-shadow: 0 0 150px red;} 50% {box-shadow: 0 0 70px pink;} 75% {box-shadow: 0 0 50px violet;} 100% {box-shadow: 0 0 100px yellow;} } </style> <script> // animation started (buggy on firefox) $('#trigger').on('animationstart mozanimationstart webkitAnimationStart oAnimationStart msanimationstart',function() { $('p').html('animation started'); }) // animation paused $('#trigger').on('mouseover',function(){ $('p').html('animation paused'); }) // animation re-started $('#trigger').on('mouseout',function(){ $('p').html('animation re-started'); }) // animation ended $('#trigger').on('animationend mozanimationend webkitAnimationEnd oAnimationEnd msanimationend',function() { $('p').html('animation ended'); }) //iteration count var i =0; $('#trigger').on('animationiteration mozanimationiteration webkitAnimationIteration oAnimationIteration msanimationiteration', function() { i++; $('p').html('animation iteration='+i); }) </script> </head> <body> <div id="trigger"></div> </body> </html> The output of the code on execution would be as follows: We have used –webkit as the prefix in this example as we are executing the code in Google Chrome. Please us –moz prefix for Firefox and –o- for Opera. Comments are added in the code so that we can understand it easily. Apart from HTML5 and CSS3, we have used a bit of JQuery. Let’s go through the animation part of the code to understand it better. In the CSS3 styles, we have mentioned the animation direction as alternate as a result of which the animation would be in a different direction after the first iteration. We have used the hover property. In this code, whenever we hover over the object, the animation is paused. 
We have also defined the glow of the object in the keyframes, mentioning how the colors change and defining a box-shadow attribute for the animation in the keyframes. We have defined the <script> tag in which we have included the JavaScript and jQuery code. We have used the trigger element. The trigger() method triggers a particular event and the default behavior of that event with regard to the selected elements. We have used the mouseover and mouseout events. The mouseover and mouseout events fire when the user moves the mouse pointer over an element and out of an element, respectively. We have used those events in conjunction with the start, end, and pausing of the animation. Therefore, we can create complex animations using CSS3. Coding is an art which gets better with practice. Hence, we need to implement it practically in order to know the subtle nuances of HTML5 and CSS3; we can achieve that after a considerable amount of practice. However, we are just on the shore; the sea of knowledge is far beyond. In this article, we have covered a lot of HTML5 and CSS3 features. Instead of wading through loads of theory, the concepts in this article are explained in a practical manner using code samples to demonstrate the new features of HTML5 and CSS3. The code samples are such that you can copy the code (the entire code is written instead of code snippets) and execute it for better understanding. Transition, transformation, and animation are also explained in a lucid manner, and there is a gradual increase in the difficulty level throughout the article. By the end of the book, you will be thoroughly acquainted with HTML5 and CSS3, enabling you to design a web page using the included code samples with ease. Click on the following link to have a look at the book: http://www.packtpub.com/html5-and-css3-for-transition-transformation-animation/book Summary This article has discussed how HTML5 and CSS3 features can be used in websites, with a detailed discussion of the animation features offered by CSS3. Resources for Article: Further resources on this subject: Mobiles First – How and Why [Article] Creating an Animated Gauge with CSS3 [Article] HTML5 Canvas [Article]

FuelPHP

Packt
15 Nov 2013
11 min read
(For more resources related to this topic, see here.) Since it is community-driven, everyone is in an equal position to spot bugs, provide fixes, or add new features to the framework. This has led to the creation of features such as the new temporal ORM (Object Relation Mapper), which is a first for any PHP-based ORM. This also means that everyone can help build tools that make development easier, more straightforward, and quicker. The framework is lightweight and allows developers to load only what they need. It's a configuration over convention approach. Instead of enforcing conventions, they act as recommendations and best practices. This allows new developers to jump onto a project and catch up to speed quicker. It also helps when we want to find extra team members for projects. A brief history of FuelPHP FuelPHP started out with the goal of adopting the best practices from other frameworks to form a thoroughly modern starting point, which makes full use of PHP Version 5.3 features, such as namespaces. It has little in the way of legacy and compatibility issues that can affect older frameworks. The framework was started in the year 2010 by Dan Horrigan. He was joined by Phil Sturgeon, Jelmer Schreuder, Harro Verton, and Frank de Jonge. FuelPHP was a break from other frameworks such as CodeIgniter, which was basically still a PHP 4 framework. This break allowed for the creation of a more modern framework for PHP 5.3, and brings together decades of experience of other languages and frameworks, such as Ruby on Rails and Kohana. After a period of community development and testing, Version 1.0 of the FuelPHP framework was released in July 2011. This marked a version ready for use on production sites and the start of the growth of the community. The community provides periodic releases (at the time of writing, it is up to Version 1.7) with a clear roadmap (http://fuelphp.com/roadmap) of features to be added. This also includes a good guide of progress made to date. The development of FuelPHP is an open process and all the code is hosted on GitHub at https://github.com/fuel/fuel, and the main core packages can be found in other repositories on the Fuel GitHub account—a full list of these can be found at https://github.com/fuel/. Features of FuelPHP Using a Bespoke PHP or a custom-developed framework could give you a greater performance. FuelPHP provides many features, documentation, and a great community. The following sections describe some of the most useful features. (H)MVC Although FuelPHP is a Model-View-Controller (MVC) framework, it was built to support the HMVC variant of MVC. Hierarchical Model-View-Controller (HMVC) is a way of separating logic and then reusing the controller logic in multiple places. This means that when a web page is generated using a theme or a template section, it can be split into multiple sections or widgets. Using this approach, it is possible to reuse components or functionality throughout a project or in multiple projects. In addition to the usual MVC structure, FuelPHP allows the use of presentation modules (ViewModels). These are a powerful layer that sits between the controller and the views, allowing for a smaller controller while still separating the view logic from both the controller and the views. If this isn't enough, FuelPHP also supports a router-based approach where you can directly route to a closure. This then deals with the execution of the input URI. 
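To make the HMVC approach described above more concrete, here is a minimal sketch of a controller that embeds the output of another controller action in its own page. It follows FuelPHP 1.x conventions (Request::forge(), View::forge(), and Response::forge()), but the widgets/recent action and the dashboard/index view are hypothetical names used only for illustration, so treat this as a sketch to adapt rather than code from the framework's documentation.

<?php
// A FuelPHP-style controller that issues an internal (HMVC) request.
// 'widgets/recent' and 'dashboard/index' are hypothetical names.
class Controller_Dashboard extends Controller
{
    public function action_index()
    {
        // Build the view for this page.
        $view = View::forge('dashboard/index');

        // Execute another controller action internally and capture its
        // response so it can be rendered as a widget inside this page.
        $view->recent_widget = Request::forge('widgets/recent')
            ->execute()
            ->response();

        return Response::forge($view);
    }
}

In the dashboard/index view, echoing $recent_widget would then print the widget's rendered output, which is what allows the same controller logic to be reused in several places across a site.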
Modular and extendable The core of FuelPHP has been designed so that it can be extended without the need for changing any code in the core. It introduces the notion of packages, which are self-contained functionality that can be shared between projects and people. Like the core, in the new versions of FuelPHP, these can be installed via the Composer tool . Just like packages, functionality can also be divided into modules. For example, a full user-authentication module can be created to handle user actions, such as registration. Modules can include both logic and views, and they can be shared between projects. The main difference between packages and modules is that packages can be extensions of the core functionality and they are not routable, while modules are routable. Security Everyone wants their applications to be as secure as possible; to this end, FuelPHP handles some of the basics for you. Views in FuelPHP will encode all the output to ensure that it's secure and is capable of avoiding Cross-site scripting (XSS) attacks. This behavior can be overridden or can be cleaned by the included htmLawed library. The framework also supports Cross-site request forgery (CSRF) prevention with tokens, input filtering, and the query builder, which tries to help in preventing SQL injection attacks. PHPSecLib is used to offer some of the security features in the framework. Oil – the power of the command line If you are familiar with CakePHP or the Zend framework or Ruby on Rails, then you will be comfortable with FuelPHP Oil. It is the command-line utility at the heart of FuelPHP—designed to speed up development and efficiency. It also helps with testing and debugging. Although not essential, it proves indispensable during development. Oil provides a quick way for code generation, scaffolding, running database migrations, debugging, and cron-like tasks for background operations. It can also be used for custom tasks and background processes. Oil is a package and can be found at https://github.com/fuel/oil. ORM FuelPHP also comes with an Object Relation Mapper (ORM) package that helps in working with various databases through an object-oriented approach. It is relatively lightweight and is not supposed to replace the more complex ORMs such as Doctrine or Propel. The ORM also supports data relations such as: belongs-to has-one has-many many-to-many relationships Another nice feature is cascading deletions; in this case, the ORM will delete all the data associated with a single entry. The ORM package is available separately from FuelPHP and is hosted on GitHub at https://github.com/fuel/orm. Base controller classes and model classes FuelPHP includes several classes to give a head start on projects. These include controllers that help with templates, one for constructing RESTful APIs, and another that combines both templates and RESTful APIs. On the model side, base classes include CRUD (Create, Read, Update, and Delete) operations. There is a model for soft deletion of records, one for nested sets, and lastly a temporal model. This is an easy way of keeping revisions of data. The authentication package The authentication framework gives a good basis for user authentication and login functionality. It can be extended using drivers for new authentication methods. Some of the basics such as groups, basic ACL functions, and password hashing can be handled directly in the authentication framework. 
Although the authentication package is included when installing FuelPHP, it can be upgraded separately from the rest of the application. The code can be obtained from https://github.com/fuel/auth. Template parsers The parser package makes it even easier to separate logic from views instead of embedding basic PHP into the views. FuelPHP supports many template languages, such as Twig, Markdown, Smarty, and HTML Abstraction Markup Language (Haml). Documentation Although not particularly a feature of the actual framework, the documentation for FuelPHP is one of the best available. It is kept up-to-date for each release and can be found at http://fuelphp.com/docs/. What to look forward to in Version 2.0 Although this book focuses on FuelPHP 1.6 and newer, it is worth looking forward to the next major release of the framework. It brings significant improvements but also makes some changes to the way the framework functions. Global scope and moving to dependency injection One of the nice features of FuelPHP is the global scope that allows easy static syntax and instances when needed. One of the biggest changes in Version 2 is the move away from static syntax and instances. The framework used the Multiton design pattern, rather than the Singleton design pattern. Now, the majority of Multitons will be replaced with the Dependency Injection Container (DiC) design pattern, but this depends on the class in question. The reason for the changes is to allow the unit testing of core files and to dynamically swap and/or extend other classes depending upon the needs of the application. The move to dependency injection will allow all the core functionality to be tested in isolation. Before detailing the next feature, let's run through the design patterns in more detail. Singleton This ensures that a class has only a single instance and provides a global point of access to it. The thinking is that a single instance of a class or object can be more efficient, but it can add unnecessary restrictions to classes that may be better served using a different design pattern. Multiton This is similar to the Singleton pattern but expands upon it to include a way of managing a map of named instances as key-value pairs. So instead of having a single instance of a class or object, this design pattern ensures that there is a single instance for each key-value pair. Often the Multiton is known as a registry of singletons. Dependency injection container This design pattern aims to remove hard-coded dependencies and make it possible to change them either at run time or at compile time. For example, it ensures that variables have default values while allowing them to be overridden, and allows other objects to be passed into a class for manipulation. It also allows mock objects to be used whilst testing functionality. Coding standards One of the far-reaching changes will be the difference in coding standards. FuelPHP Version 2.0 will now conform to both PSR-0 and PSR-1. This allows a more standard auto-loading mechanism and the ability to use Composer. Although Composer compatibility was introduced in Version 1.5, this move to PSR is for better consistency. It means that method names will follow the camelCase convention rather than the current snake_case method names. Although a simple change, this is likely to have a large effect on existing projects and APIs. With a similar move of other PHP frameworks to a more standardized coding style, there will be more opportunities to re-use functionality from other frameworks.
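As a rough illustration of the design patterns described above, the following plain-PHP sketch (not code taken from FuelPHP itself; the Cache and ArticleRepository classes are made up for this example) contrasts Multiton-style static access with constructor-based dependency injection, which is what makes swapping in mock objects during unit tests straightforward.

<?php
// Multiton-style access: a registry of named instances reached statically.
class Cache
{
    protected static $instances = array();

    public static function instance($name = 'default')
    {
        if ( ! isset(static::$instances[$name]))
        {
            static::$instances[$name] = new static();
        }

        return static::$instances[$name];
    }
}

// Dependency injection: the collaborator is passed in, so tests can supply
// a mock cache and the application can swap implementations at run time.
class ArticleRepository
{
    protected $cache;

    public function __construct($cache)
    {
        $this->cache = $cache;
    }
}

$repository = new ArticleRepository(Cache::instance('articles'));

A dependency injection container automates the second approach: it constructs objects and hands them their dependencies based on configuration, rather than having the objects reach out to static registries themselves.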
Package management and modularization Package management for other languages such as Ruby and Ruby on Rails has made sharing pieces of code and functionality easy and commonplace. The PHP world is much larger, and this same sharing of functionality is not as common. PHP Extension and Application Repository (PEAR) was a precursor of most package managers. It is a framework and distribution system for re-usable PHP components. Although infinitely useful, it is not as widely supported by the more popular PHP frameworks. Starting with FuelPHP 1.6 and leading into FuelPHP 2.0, dependency management will be possible through Composer (http://getcomposer.org). This deals with not only single packages, but also their dependencies. It allows projects to be consistently set up with known versions of the libraries required by each project. This helps not only with development, but also with the testability and maintainability of the project. It also protects against API changes. The core of FuelPHP and other modules will be installed via Composer, and there will be a gradual migration of some Version 1 packages. Backwards compatibility A legacy package will be released for FuelPHP that will provide aliases for the changed function names as part of the change in the coding standards. It will also allow the current use of static function calling to continue working, while allowing for a better ability to unit test the core functionality. Speed boosts Although it was slower during the initial alpha phases, Version 2.0 is shaping up to be faster than Version 1.0. Currently, the beta version (at the time of writing) is 7 percent faster while requiring 8 percent less memory. This might not sound like much, but it can equate to a large saving if running a large website over multiple servers. These figures may get better in the final release of Version 2.0 after the remaining optimizations are complete. Summary We now know a little more about the history of FuelPHP and some of the useful features such as ORM, authentication, modules, (H)MVC, and Oil (the command-line interface). We have also listed the following useful links, including the official API documentation (http://fuelphp.com/docs/) and the FuelPHP home page (http://fuelphp.com). This article also touched upon some of the new features and changes due in Version 2.0 of FuelPHP. Resources for Article: Further resources on this subject: Installing PHP-Nuke [Article] Installing phpMyAdmin [Article] Integrating phpList 2 with Drupal [Article]

Introduction to a WordPress application's frontend

Packt
12 Nov 2013
7 min read
(For more resources related to this topic, see here.) Basic file structure of a WordPress theme As WordPress developers, you should have a fairly good idea about the default file structure of WordPress themes. Let's have a brief introduction of the default files before identifying their usage in web applications. Think about a typical web application layout where we have a common header, footer, and content area. In WordPress, the content area is mainly populated by pages or posts. The design and the content for pages are provided through the page.php template, while the content for posts is provided through one of the following templates: index.php archive.php category.php single.php Basically, most of these post-related file types are developed to cater to the typical functionality in blogging systems, and hence can be omitted in the context of web applications. Since custom posts are widely used in application development, we need more focus on templates such as single-{post_type} and archive-{post_type} than category.php, archive.php, and tag.php. Even though default themes contain a number of files for providing default features, only the style.css and index.php files are enough to implement a WordPress theme. Complex web application themes are possible with the standalone index.php file. In normal circumstances, WordPress sites have a blog built on posts, and all the remaining content of the site is provided through pages. When referring to pages, the first thing that comes to our mind is the static content. But WordPress is a fully functional CMS, and hence the page content can be highly dynamic. Therefore, we can provide complex application screens by using various techniques on pages. Let's continue our exploration by understanding the theme file execution hierarchy. Understanding template execution hierarchy WordPress has quite an extensive template execution hierarchy compared to general web application frameworks. However, most of these templates will be of minor importance in the context of web applications. Here, we are going to illustrate the important template files in the context of web applications. The complete template execution hierarchy can be found at: http://hub.packtpub.com/wp-content/uploads/2013/11/Template_Hierarchy.png An example of the template execution hierarchy is as shown in the following diagram: Once the Initial Request is made, WordPress looks for one of the main starting templates as illustrated in the preceding screenshot. It's obvious that most of the starting templates such as front page, comments popup, and index pages are specifically designed for content management systems. In the context of web applications, we need to put more focus into both singular and archive pages, as most of the functionality depends on top of those templates. Let's identify the functionality of the main template files in the context of web applications: Archive pages: These are used to provide summarized listings of data as a grid. Single posts: These are used to provide detailed information about existing data in the system. Singular pages: These are used for any type of dynamic content associated with the application. Generally, we can use pages for form submissions, dynamic data display, and custom layouts. Let's dig deeper into the template execution hierarchy on the Singular Page path as illustrated in the following diagram: Singular Page is divided into two paths that contain posts or pages. Static Page is defined as Custom or Default page templates. 
In general, we use Default page templates for loading website pages. WordPress looks for a page with the slug or ID before executing the default page.php file. In most scenarios, web application layouts will take the other route of Custom page templates where we create a unique template file inside the theme for each of the layouts and define it as a page template using code comments. We can create a new custom page template by creating a new PHP file inside the theme folder and using the Template Name definition in code comments illustrated as follows: <?php/** Template Name: My Custom Template*/?> To the right of the preceding diagram, we have Single Post Page, which is divided into three paths called Blog Post, Custom Post, and Attachment Post. Both Attachment Posts and Blog Posts are designed for blogs and hence will not be used frequently in web applications. However, the Custom Post template will have a major impact on application layouts. As with Static Page, Custom Post looks for specific post type templates before looking for a default single.php file. The execution hierarchy of an Archive Page is similar in nature to posts, as it looks for post-specific archive pages before reverting to the default archive.php file. Now we have had a brief introduction to the template loading process used by WordPress. In the next section, we are going to look at the template loading process of a typical web development framework to identify the differences. Template execution process of web application frameworks Most stable web application frameworks use a flat and straightforward template execution process compared to the extensive process used by WordPress. These frameworks don't come with built-in templates, and hence each and every template will be generated from scratch. Consider the following diagram of a typical template execution process: In this process, Initial Request always comes to the index.php file, which is similar to the process used by WordPress or any other framework. It then looks for custom routes defined within the framework. It's possible to use custom routes within a WordPress context, even though it's not used generally for websites or blogs. Finally, Initial Request looks for the direct template file located in the templates section of the framework. As you can see, the process of a normal framework has very limited depth and specialized templates. Keep in mind that index.php referred to in the preceding section is the file used as the main starting point of the application, not the template file. In WordPress, we have a specific template file named index.php located inside the themes folder as well. Managing templates in a typical application framework is a relatively easy task when compared to the extensive template hierarchy used by WordPress. In web applications, it's ideal to keep the template hierarchy as flat as possible with specific templates targeted towards each and every screen. In general, WordPress developers tend to add custom functionalities and features by using specific templates within the hierarchy. Having multiple templates for a single screen and identifying the order of execution can be a difficult task in large-scale applications, and hence should be avoided in every possible instance. Web application layout creation techniques As we move into developing web applications, the logic and screens will become complex, resulting in the need of custom templates beyond the conventional ones. 
There is a wide range of techniques for putting such functionality into the WordPress code. Each of these techniques have their own pros and cons. Choosing the appropriate technique is vital in avoiding potential bottlenecks in large-scale applications. Here is a list of techniques for creating dynamic content within WordPress applications: Static pages with shortcodes Page templates Custom templates with custom routing Summary In this article we learned about basic file structure of the WordPress theme, the template execution hierarchy, and template execution process. We also learned the different techniques of Web application layout creation. Resources for Article: Further resources on this subject: Customizing WordPress Settings for SEO [Article] Getting Started with WordPress 3 [Article] Dynamic Menus in WordPress [Article]

Quick start – creating your first template

Packt
12 Nov 2013
6 min read
(For more resources related to this topic, see here.) Preparing the project To get started, create a file named index.htmland add the following boilerplate code: <!DOCTYPE HTML> <html> <head> <title>Handlebars Quickstart</title> <script src ="handlebars.js"></script> </head> <body> <script> var src = "<h1>Hello {{name}}</h1>"; var template = Handlebars.compile(src); var output = template({name: "Tom"}); document.body.innerHTML += output; </script> </body> </html> This is a pretty good example to start with, as it demonstrates the minimum amount of code you will need to write to get a template on screen. We will start it by writing the template itself, just a pair of header tags with a greeting message inside. If you remember from the introduction, a Handlebars tag is a reference for some external data wrapped between two pairs of curly braces, and it signifies a dynamic point in the page where Handlebars will insert some information. Here we just want a property called "name" to be inserted at this point, which we will set in a moment. Once you have the template, the next step is where all the magic begins; Handlebars compile function will process through the template's source and generate a JavaScript function to output the result. What I mean by this is Handlebars will create a function that accepts some data and returns the final string with all the placeholders replaced. An example of what I mean could be something like the following code for our quick template stated in the preceding paragraph: var template = function (data) { return "<h1>Hello " + data.name + "</h1>"; } And then every time the template gets called with data, the resulting string will be passed back. Now obviously it is a bit more complex than this, and Handlebars performs some escaping for you and other such checks, but the basic idea of what the compile function generates remains the same. So with our template function created, we can call it by passing in some data (in this case the name Tom), and we take the output and append it to the body. After opening this page in a browser, you should see something like the following screenshot: With the basics out of the way, let's take a look at helpers. Block helpers Helpers can be called in the same way as the data placeholder was called from the template. The difference between them is that a data placeholder will just take a static string or number and insert it into the template's output. Helpers on the other hand are functions, which first compute something, and then the results get placed into the output instead. You can think of helpers as a more dynamic form of placeholders. Now there are two types of helpers in Handlebars: tag helpers, which work like regular functions; and block helpers, which have an added, nested template to manipulate. Handlebars comes with a series of block helpers built-in, which allows you to perform basic logic in your templates. One of the most commonly used block helpers in Handlebars would have to be the each helper, which allows you to run a section of template per item in an array. Let's take a look at it in action. It is going to be too messy to continue placing the templates into JavaScript strings like we did in the first example, so we will place it in its own script tag and pull it in. The reason we are using a script tag is because we don't want the template to show up on the page itself; by placing it in a script tag and setting the type to something the browser doesn't understand it will just be ignored. 
So, right above the script block from our first example, add the following code:

<script id="quickstart" type="template/handlebars">
<h1>Hello {{name}}</h1>
<ul>
    {{#each messages}}
    <li><b>{{from}}</b>: {{text}}</li>
    {{/each}}
</ul>
</script>

We give the script tag an id so we can access it later, and then we give it an arbitrary type so that the browser doesn't try to parse it as JavaScript. Inside it we start with the same template code as before, and then we add an each block to cycle through a list of messages and print out each one in a list element. The next step is to replace the script block underneath with the new code, which will get the template from here:

<script>
var src = document.getElementById('quickstart').innerHTML;
var template = Handlebars.compile(src);
var output = template({
    name: "Tom",
    messages: [
        { from: "John", text: "Demo Message" },
        { from: "Bob", text: "Something Else" },
        { from: "John", text: "Second Post" }
    ]
});
document.body.innerHTML += output;
</script>

We start by pulling the template from the script block we just added using standard JavaScript; next we compile it like before and run the template, this time with the added "messages" array. Running this in your browser will give you something like the following:

You may have picked up on this already, but it's worth mentioning that inside the each block the context changes from the global data object passed into the template to the specific array element; because of this we are able to access its properties directly. These first few steps have been simple, but subtly we have covered loading templates from script tags, and the syntax for both standard placeholders and block helpers in your templates.

Summary

Thus we have learned how to create templates in this article.

Resources for Article:

Further resources on this subject: Working with JavaScript in Drupal 6: Part 1 [Article] Using JavaScript and jQuery in Drupal Themes [Article] Basics of Exception Handling Mechanism in JavaScript Testing [Article]

Creating an image gallery

Packt
30 Oct 2013
5 min read
(For more resources related to this topic, see here.)

Getting ready

Before we get started, we need to find a handful of images that we can use for the gallery. Find four to five images to use for the gallery and put them in the images folder.

How to do it...

Add the following links to the images to the index.html file:

<a class="fancybox" href="images/waterfall.png">Waterfall</a>
<a class="fancybox" href="images/frozen-lake.png">Frozen Lake</a>
<a class="fancybox" href="images/road-in-forest.png">Road in Forest</a>
<a class="fancybox" href="images/boston.png">Boston</a>

The anchor tags no longer have an ID, but a class. It is important that they all have the same class so that Fancybox knows about them. Change our call to the Fancybox plugin in the scripts.js file to use the class that all of the links have instead of the show-fancybox ID:

$(function() {
    // Using the fancybox class instead of the show-fancybox ID
    $('.fancybox').fancybox();
});

Fancybox will now work on all of the images but they will not be part of the same gallery. To make images part of a gallery, we use the rel attribute of the anchor tags. Add rel="gallery" to all of the anchor tags, shown as follows:

<a class="fancybox" rel="gallery" href="images/waterfall.png">Waterfall</a>
<a class="fancybox" rel="gallery" href="images/frozen-lake.png">Frozen Lake</a>
<a class="fancybox" rel="gallery" href="images/road-in-forest.png">Road in Forest</a>
<a class="fancybox" rel="gallery" href="images/boston.png">Boston</a>

Now that we have added rel="gallery" to each of our anchor tags, you should see left and right arrows when you hover over the left-hand side or right-hand side of Fancybox. These arrows allow you to navigate between images as shown in the following screenshot:

How it works...

Fancybox determines that an image is part of a gallery using the rel attribute of the anchor tags. The order of the images is based on the order of the anchor tags on the page. This is important so that the slideshow order is exactly the same as a gallery of thumbnails without any additional work on our end. We changed the ID of our single image to a class for the gallery because we wanted to call Fancybox on all of the links instead of just one. If we wanted to add more image links to the page, it would just be a matter of adding more anchor tags with the proper href values and the same class.

There's more...

So, what else can we do with the gallery functionality of Fancybox? Let's take a look at some of the other things that we could do with the gallery that we have currently.

Captions and thumbnails

All of the functionalities that we discussed for single images apply to galleries as well. So, if we wanted to add a thumbnail, it would just be a matter of adding an img tag inside the anchor tag instead of the text. If we wanted to add a caption, we can do so by adding the title attribute to our anchor tags.

Showing slideshow from one link

Let's say that we wanted to have just one link to open our gallery slideshow. This can be easily achieved by hiding the other links via CSS with the help of the following steps: We start by adding this style tag to the <head> tag, just under the <script> tag for our scripts.js file, shown as follows:

<style type="text/css">
    .hidden { display: none; }
</style>

Now, we update the HTML file so that all but one of our anchor tags have the hidden class. Next, when we reload the page, we will see only one link.
When you click on the link, you should still be able to navigate through the gallery just as if all of the links were on the page. The updated markup looks as follows:

<a class="fancybox" rel="gallery" href="images/waterfall.png">Image Gallery</a>
<div class="hidden">
    <a class="fancybox" rel="gallery" href="images/frozen-lake.png">Frozen Lake</a>
    <a class="fancybox" rel="gallery" href="images/road-in-forest.png">Road in Forest</a>
    <a class="fancybox" rel="gallery" href="images/boston.png">Boston</a>
</div>

Summary

In this article we saw that Fancybox provides very strong image handling functionalities. We also saw how an image gallery is created with Fancybox. We can also display images as thumbnails and display the images as a slideshow using just one link.

Resources for Article:

Further resources on this subject: Getting started with your first jQuery plugin [Article] OpenCart Themes: Styling Effects of jQuery Plugins [Article] The Basics of WordPress and jQuery Plugin [Article]

Authenticating Your Application with Devise

Packt
25 Oct 2013
11 min read
(For more resources related to this topic, see here.)

Signing in using authentication other than e-mails

By default, Devise only allows e-mails to be used for authentication. For some people, this condition will lead to the question, "What if I want to use some other field besides e-mail? Does Devise allow that?" The answer is yes; Devise allows other attributes to be used to perform the sign-in process. For example, I will use username as a replacement for e-mail, and you can change it later to whatever you like, including userlogin, adminlogin, and so on.

We are going to start by modifying our user model. Create a migration file by executing the following command inside your project folder:

$ rails generate migration add_username_to_users username:string

This command will produce a file, which is depicted by the following screenshot: The generated migration file

Execute the migrate (rake db:migrate) command to alter your users table, and it will add a new column named username. You need to open Devise's main configuration file at config/initializers/devise.rb and modify the code:

config.authentication_keys = [:username]
config.case_insensitive_keys = [:username]
config.strip_whitespace_keys = [:username]

You have done enough modification to your Devise configuration, and now you have to modify the Devise views to add a username field to your sign-in and sign-up pages. By default, Devise loads its views from the gem's own code. The only way to modify the Devise views is to generate copies of its views. This action will automatically override its default views. To do this, you can execute the following command:

$ rails generate devise:views

It will generate some files, which are shown in the following screenshot: Devise views files

As I have previously mentioned, these files can be used to customize the other Devise views as well. But we are going to talk about that a little later in this article. Now you have the views and you can modify some files to insert the username field. These files are listed as follows:

app/views/devise/sessions/new.html.erb: This is the view file for the sign-in page. Basically, all you need to do is change the email field into the username field.

#app/views/devise/sessions/new.html.erb
<h2>Sign in</h2>
<%= notice %>
<%= alert %>
<%= form_for(resource, :as => resource_name, :url => session_path(resource_name)) do |f| %>
  <div><%= f.label :username %><br />
  <%= f.text_field :username, :autofocus => true %></div>
  <div><%= f.label :password %><br />
  <%= f.password_field :password %></div>
  <% if devise_mapping.rememberable? -%>
    <div><%= f.check_box :remember_me %> <%= f.label :remember_me %></div>
  <% end -%>
  <div><%= f.submit "Sign in" %></div>
<% end %>
<%= render "devise/shared/links" %>

You are now allowed to sign in with your username. The modification will be shown, as depicted in the following screenshot: The sign-in page with username

app/views/devise/registrations/new.html.erb: This is the view file for the registration page. It is a bit different from the sign-in page; in this file, you need to add the username field, so that the user can fill in their username when they perform the registration.

#app/views/devise/registrations/new.html.erb
<h2>Sign Up</h2>
<%= form_for(resource, :as => resource_name, :url => registration_path(resource_name)) do |f| %>
  <%= devise_error_messages! %>
  <div><%= f.label :email %><br />
  <%= f.email_field :email, :autofocus => true %></div>
  <div><%= f.label :username %><br />
  <%= f.text_field :username %></div>
  <div><%= f.label :password %><br />
  <%= f.password_field :password %></div>
  <div><%= f.label :password_confirmation %><br />
  <%= f.password_field :password_confirmation %></div>
  <div><%= f.submit "Sign up" %></div>
<% end %>
<%= render "devise/shared/links" %>

Especially for registration, you need to perform extra modifications. The mass assignment rules are written in the app/controllers/application_controller.rb file, and now we are going to modify them a little. Add username to the sanitizer for sign-in and sign-up, and you will have something as follows:

# these codes are written inside the configure_permitted_parameters function
devise_parameter_sanitizer.for(:sign_in) {|u| u.permit(:email, :username)}
devise_parameter_sanitizer.for(:sign_up) {|u| u.permit(:email, :username, :password, :password_confirmation)}

These changes will allow you to perform a sign-up along with the username data. The result of the preceding example is shown in the following screenshot: The sign-up page with username

I want to add a new case for your sign-in, which is only one field for both username and e-mail. This means that you can sign in either with your e-mail ID or your username, like in Twitter's sign-in form. Based on what we have done before, you already have username and email columns; now, open /app/models/user.rb and add the following line:

attr_accessor :signin

Next, you need to change the authentication keys for Devise. Open /config/initializers/devise.rb and change the value of config.authentication_keys, as shown in the following code snippet:

config.authentication_keys = [ :signin ]

Let's go back to our user model. You have to override the lookup function that Devise uses when performing a sign-in. To do this, add the following method inside your model class:

def self.find_first_by_auth_conditions(warden_conditions)
  conditions = warden_conditions.dup
  # pull the submitted value out of the conditions hash, then match it
  # against either the username or the email column
  signin = conditions.delete(:signin)
  where(conditions).where(["lower(username) = :value OR lower(email) = :value",
                           { :value => signin.downcase }]).first
end

As an addition, you can add a validation for your username, so that it will be case insensitive. Add the following validation code into your user model:

validates :username, :uniqueness => {:case_sensitive => false}

Please open /app/controllers/application_controller.rb and make sure you have this code to perform parameter filtering:

before_filter :configure_permitted_parameters, if: :devise_controller?

protected

def configure_permitted_parameters
  devise_parameter_sanitizer.for(:sign_in) {|u| u.permit(:signin)}
  devise_parameter_sanitizer.for(:sign_up) {|u| u.permit(:email, :username, :password, :password_confirmation)}
end

We're almost there! Currently, I assume that you've already stored an account that contains the e-mail ID and username. So, you just need to make a simple change in your sign-in view file (/app/views/devise/sessions/new.html.erb). Make sure that the file contains this code:

<h2>Sign in</h2>
<%= notice %>
<%= alert %>
<%= form_for(resource, :as => resource_name, :url => session_path(resource_name)) do |f| %>
  <div><%= f.label "Username or Email" %><br />
  <%= f.text_field :signin, :autofocus => true %></div>
  <div><%= f.label :password %><br />
  <%= f.password_field :password %></div>
  <% if devise_mapping.rememberable?
-%> <div><%= f.check_box :remember_me %> <%= f.label :remember_me %> </div> <% end -%> <div><%= f.submit "Sign in" %></div> <% end %> <%= render "devise/shared/links" %> You can see that you don't have a username or email field anymore. The field is now replaced by a single field named :signin that will accept either the e-mail ID or the username. It's efficient, isn't it? Updating the user account Basically, you are already allowed to access your user account when you activate the registerable module in the model. To access the page, you need to log in first and then go to /users/edit. The page is as shown in the following screenshot: The edit account page But, what if you want to edit your username or e-mail ID? How will you do that? What if you have extra information in your users table, such as addresses, birth dates, bios, and passwords as well? How will you edit these? Let me show you how to edit your user data including your password, or edit your user data without editing your password. Editing your data, including the password: To perform this action, the first thing that you need to do is modify your view. Your view should contain the following code: <div><%= f.label :username %><br /> <%= f.text_field :username %></div> Now, we are going to overwrite Devise's logic. To do this, you have to create a new controller named registrations_controller. Please use the rails command to generate the controller, as shown: $ rails generate controller registrations update It will produce a file located at app/controllers/. Open the file and make sure you write this code within the controller class: class RegistrationsController < Devise::RegistrationsController def update new_params = params.require(:user).permit(:email,:username, : current_password, :password,:password_confirmation) @user = User.find(current_user.id) if @user.update_with_password(new_params) set_flash_message :notice, :updated sign_in @user, :bypass => true redirect_to after_update_path_for(@user) else render "edit" end end end Let's look at the code. Currently, Rails 4 has a new method in organizing whitelist attributes. Therefore, before performing mass assignment attributes, you have to prepare your data. This is done in the first line of the update method. Now, if you see the code, there's a method defined by Devise named update_with_password. This method will use mass assignment attributes with the provided data. Since we have prepared it before we used it, it will be fine. Next, you have to edit your route file a bit. You should modify the rule defined by Devise, so instead of using the original controller, Devise will use the controller you created before. The modification should look as follows: devise_for :users, :controllers => {:registrations => "registrations"} Now you have modified the original user edit page, and it will be a little different. You can turn on your Rails server and see it in action. The view is as depicted in the following screenshot: The modified account edit page Now, try filling up these fields one by one. If you are filling them with different values, you will be updating all the data (e-mail, username, and password), and this sounds dangerous. You can modify the controller to have better data update security, and it all depends on your application's workflows and rules. Editing your data, excluding the password: Actually, you already have what it takes to update data without changing your password. All you need to do is modify your registrations_controller.rb file. 
Your update function should be as follows:

class RegistrationsController < Devise::RegistrationsController
  def update
    new_params = params.require(:user).permit(:email, :username, :current_password,
                                               :password, :password_confirmation)
    change_password = true
    if params[:user][:password].blank?
      params[:user].delete("password")
      params[:user].delete("password_confirmation")
      new_params = params.require(:user).permit(:email, :username)
      change_password = false
    end
    @user = User.find(current_user.id)
    is_valid = false
    if change_password
      is_valid = @user.update_with_password(new_params)
    else
      is_valid = @user.update_without_password(new_params)
    end
    if is_valid
      set_flash_message :notice, :updated
      sign_in @user, :bypass => true
      redirect_to after_update_path_for(@user)
    else
      render "edit"
    end
  end
end

The main difference from the previous code is that you now have an algorithm that checks whether the user intends to update their data with their password or not. If not, the code calls the update_without_password method instead. Now you have code that allows you to edit with or without a password. Refresh your browser and try editing with or without a password. It won't be a problem anymore.

Summary

Now, I believe that you will be able to make your own Rails application with Devise. You should be able to make your own customizations based on your needs.

Resources for Article:

Further resources on this subject: Integrating typeahead.js into WordPress and Ruby on Rails [Article] Facebook Application Development with Ruby on Rails [Article] Designing and Creating Database Tables in Ruby on Rails [Article]

Taking Control of Reactivity, Inputs, and Outputs

Packt
23 Oct 2013
7 min read
(For more resources related to this topic, see here.) Showing and hiding elements of the UI We'll start easy with a simple function that you are certainly going to need if you build even a moderately complex application. Those of you who have been doing extra credit exercises and/or experimenting with your own applications will probably have already wished for this or, indeed, have already found it. conditionalPanel() allows you to show/hide UI elements based on other selections within the UI. The function takes a condition (in JavaScript, but the form and syntax will be familiar from many languages) and a UI element, and displays the UI only when the condition is true. This is actually used a couple of times in the advanced GA application and indeed in all the applications I've ever written of even moderate complexity. The following is a simpler example (from ui.R, of course, in the first section, within sidebarPanel()), which allows users who request a smoothing line to decide what type they want: conditionalPanel(condition = "input.smoother == true",selectInput("linearModel", "Linear or smoothed",list("lm", "loess"))) As you can see, the condition appears very R/Shiny-like, except with the "." operator familiar to JavaScript users in place of "$", and with "true" in lower case. This is a very simple but powerful way of making sure that your UI is not cluttered with irrelevant material. Giving names to tabPanel elements In order to further streamline the UI, we're going to hide the hour selector when the monthly graph is displayed and the date selector when the hourly graph is displayed. The difference is illustrated in the following screenshot with side-by-side pictures, hourly figures UI on the left-hand side and monthly figures on the right-hand side: In order to do this, we're going to have to first give the tabs of the tabbed output names. This is done as follows (with the new code in bold): tabsetPanel(id ="theTabs",tabPanel("Summary", textOutput("textDisplay"),value = "summary"),tabPanel("Monthly figures",plotOutput("monthGraph"), value = "monthly"),tabPanel("Hourly figures",plotOutput("hourGraph"), value = "hourly")) As you can see, the whole panel is given an ID (theTabs), and then each tabPanel is also given a name (summary, monthly, and hourly). They are referred to in the server.R file very simply as input$theTabs. Let's have a quick look at a chunk of code in server.R that references the tab names; this code makes sure that we subset based on date only when the date selector is actually visible, and by hour only when the hour selector is actually visible. Our function to calculate and pass data now looks like the following (new code again bolded): passData <- reactive({if(input$theTabs != "hourly"){analytics <- analytics[analytics$Date %in%seq.Date(input$dateRange[1], input$dateRange[2],by = "days"),]}if(input$theTabs != "monthly"){analytics <- analytics[analytics$Hour %in%as.numeric(input$minimumTime) :as.numeric(input$maximumTime),]}analytics <- analytics[analytics$Domain %in%unlist(input$domainShow),]analytics}) As you can see, subsetting by month is carried out only when the date display is visible (that is, when the hourly tab is not shown), and vice versa. 
Finally, we can make our changes to ui.R to remove parts of the UI based on tab selection: conditionalPanel(condition = "input.theTabs != 'hourly'",dateRangeInput(inputId = "dateRange",label = "Date range",start = "2013-04-01",max = Sys.Date())),conditionalPanel(condition = "input.theTabs != 'monthly'",sliderInput(inputId = "minimumTime",label = "Hours of interest- minimum",min = 0,max = 23,value = 0,step = 1),sliderInput(inputId = "maximumTime",label = "Hours of interest- maximum",min = 0,max = 23,value = 23,step = 1)) Note the use in the latter example of two UI elements within the same conditionalPanel() call; it is worth noting that it helps you keep your code clean and easy to debug. Reactive user interfaces Another trick you will definitely want up your sleeve at some point is a reactive user interface. This enables you to change your UI (for example, the number or content of radio buttons) based on reactive functions. For example, consider an application that I wrote related to survey responses across a broad range of health services in different areas. The services are related to each other in quite a complex hierarchy, and over time, different areas and services respond (or cease to exist, or merge, or change their name...), which means that for each time period the user might be interested in, there would be a totally different set of areas and services. The only sensible solution to this problem is to have the user tell you which area and date range they are interested in and then give them back the correct list of services that have survey responses within that area and date range. The example we're going to look at is a little simpler than this, just to keep from getting bogged down in too much detail, but the principle is exactly the same and you should not find this idea too difficult to adapt to your own UI. We are going to imagine that your users are interested in the individual domains from which people are accessing the site, rather than just have them lumped together as the NHS domain and all others. To this end, we will have a combo box with each individual domain listed. This combo box is likely to contain a very high number of domains across the whole time range, so we will let users constrain the data by date and only have the domains that feature in that range return. Not the most realistic example, but it will illustrate the principle for our purposes. Reactive user interface example – server.R The big difference is that instead of writing your UI definition in your ui.R file, you place it in server.R, and wrap it in renderUI(). Then all you do is point to it from your ui.R file. Let's have a look at the relevant bit of the server.R file: output$reacDomains <- renderUI({domainList = unique(as.character(passData()$networkDomain))selectInput("subDomains", "Choose subdomain", domainList)}) The first line takes the reactive dataset that contains only the data between the dates selected by the user and gives all the unique values of domains within it. The second line is a widget type we have not used yet which generates a combo box. The usual id and label arguments are given, followed by the values that the combo box can take. This is taken from the variable defined in the first line. 
Reactive user interface example – ui.R The ui.R file merely needs to point to the reactive definition as shown in the following line of code (just add it in to the list of widgets within sidebarPanel()): uiOutput("reacDomains") You can now point to the value of the widget in the usual way, as input$subDomains. Note that you do not use the name as defined in the call to renderUI(), that is, reacDomains, but rather the name as defined within it, that is, subDomains. Summary It's a relatively small but powerful toolbox with which you can build a vast array of useful and intuitive applications with comparatively little effort. This article looked at fine-tuning the UI using conditionalPanel() and observe(), and changing our UI reactively. Resources for Article: Further resources on this subject: Fine Tune the View layer of your Fusion Web Application [Article] Building tiny Web-applications in Ruby using Sinatra [Article] Spring Roo 1.1: Working with Roo-generated Web Applications [Article]

Integrating typeahead.js into WordPress and Ruby on Rails

Packt
17 Oct 2013
6 min read
(For more resources related to this topic, see here.) Integrating typeahead.js into WordPress (Become an expert) WordPress is an incredibly well known and well used open source blogging platform, and it is almost fully featured, except of course for the ability to have a typeahead style lookup on your site! In this article we are going to fix that. Getting ready In order to create this we are going to first need to have a working WordPress installed. WordPress runs off a LAMP stack so if you haven't got one of those running locally you will need to set this up. Once set up you can download WordPress from http://wordpress.org/, extract the files, place them in your localhost, and visit http://localhost/install/. This will then guide you through the rest of the install process. Now we should be ready to get typeahead.js working with WordPress. How to do it... Like so many things in WordPress, when it comes to adding new functionality, there is probably already a plugin, and in our case there is one made by Kyle Reicks that can be found at https://github.com/kylereicks/typeahead.js.wp. Download the code and add the folder it downloads to /wp-content/plugins/ Log into our administration panel at http://localhost/wp-admin/ and go to the Plugins section. You will see an option to activate our new plugin, so activate it now. Once activated, under plugins you will now have access to typeahead Settings. In here you can set up what type of things you want typeahead to be used for; pick posts, tags, pages, and categories. How it works... This plugin hijacks the default search form that WordPress uses out of the box and adds the typeahead functionality to it. For each of the post types that you have associated with typeahead plugin, it will create a JSON file, with each JSON file representing a different dataSet and getting loaded in with prefetch. There's more... The plugin is a great first start, but there is plenty that could be done to improve it. For example, by editing /js/typeahead-activation.js we could edit the amount of values that get returned by our typeahead search: if(typeahead.datasets.length){ typeahead.data = []; for(i = 0, arrayLength = typeahead.datasets.length; i < arrayLength; i++){ typeahead.data[i] = { name: typeahead.datasets[i], prefetch: typeahead.dataUrl + '?data=' + typeahead.datasets[i], limit: 10 }; } jQuery(document).ready(function($){ $('#searchform input[type=text], #searchform input[type=search]').typeahead(typeahead.data); }); } Integrating typeahead.js into Ruby on Rails (Become an expert) Ruby on Rails has become one of the most popular frameworks for developing web applications in, and it comes as little surprise that Rails developers would like to be able to harness the power of typeahead.js. In this recipe we will look at how you can quickly get up and running with typeahead.js in your Rails project. Getting ready Ruby on Rails is an open source web application framework for the Ruby language. It famously champions the idea of convention over configuration, which is one of the reasons it has been so widely adopted. Obviously in order to do this we will need a rails application. Setting up Ruby on Rails is an entire article to itself, but if you follow the guides on http://rubyonrails.org/, you should be able to get up and start running quickly with your chosen setup. We will start from the point that both Ruby and Ruby on Rails have been installed and set up correctly. 
We will also be using a Gem made by Yousef Ourabi, which has the typeahead.js functionality we need. We can find it at https://github.com/yourabi/twitter-typeahead-rails. How to do it... The first thing we will need is a Rails project, and we can create one of these by typing; rails new typeahead_rails This will generate the basic rails application for us, and one of the files it will generate is the Gemfile which we need to edit to include our new Gem; source 'https://rubygems.org' gem 'rails', '3.2.13' gem 'sqlite3' gem 'json' group :assets do gem 'sass-rails', '~> 3.2.3' gem 'coffee-rails', '~> 3.2.1' gem 'uglifier', '>= 1.0.3' end gem 'jquery-rails' gem 'twitter-typeahead-rails' With this change made, we need to reinstall our Gems: bundle install We will now have the required file, but before we can access them we need to add a reference to them in our manifest file. We do this by editing app/assets/javascripts and adding a reference to typeahead.js: //= require jquery //= require jquery_ujs //= require_tree //= require twitter/typeahead Of course we need a page to try this out on, so let's have Rails make us one; rails generate controller Pages home One of the files generated by the above command will be found in app/views/pages/home.html.erb. Let's edit this now: <label for="friends">Pick Your Friend</label> <input type="text" name="friends" /> <script> $('input').typeahead({ name: 'people', local: ['Elaine', 'Column', 'Kirsty', 'Chris Elder'] }); </script> Finally we will start up a web server to be able to view what we have accomplished; rails s And now if we go to localhost:3000/pages/home we should see something very much. How it works... The Gem we installed brings together the required JavaScript files that we normally need to include manually, allowing them to be accessed from our manifest file, which will load all mentioned JavaScript on every page. There's more... Of course we don't need to use a Gem to install typeahead functionality, we could have manually copied the code into a file called typeahead.js that sat inside of app/assets/javascripts/twitter/ and this would have been accessible to the manifest file too and produced the same functionality. This would mean one less dependency on a Gem, which in my opinion is always a good thing, although this isn't necessarily the Rails way, which is why I didn't lead with it. Summary In this article, we explained the functionality of WordPress, which is probably the biggest open source blogging platform in the world right now and it is pretty feature complete. One thing the search doesn't have, though, is good typeahead functionality. In this article we learned how to change that by incorporating a WordPress plugin that gives us this functionality out of the box. It also discussed how Ruby on Rails is fast becoming the framework of choice among developers wanting to build web applications fast, along with out of the box benefits of using Ruby on Rails. Using Ruby gives you access to a host of excellent resources in the form of Gems. In this article we had a look at one Gem that gives us typeahead.js functionality in our Ruby on Rails project. Resources for Article: Further resources on this subject: Customizing WordPress Settings for SEO [Article] Getting Started with WordPress 3 [Article] Building tiny Web-applications in Ruby using Sinatra [Article]

Creating an AutoCAD command

Packt
10 Oct 2013
5 min read
Some custom AutoCAD applications are designed to run unattended, such as when a drawing loads or in reaction to some other event that occurs in your AutoCAD drawing session. But, the majority of your AutoCAD programming work will likely involve custom AutoCAD commands, whether automating a sequence of built-in AutoCAD commands, or implementing new functionality to address a business need. Commands can be simple (printing to the command window or a dialog box), or more difficult (generating a new design on-the-fly, based on data stored in an existing design). Our first custom command will be somewhat simple. We will define a command which will count the number of AutoCAD entities found in ModelSpace (the space in AutoCAD where you model your designs). Then, we will display that data in the command window. Frequently, custom commands acquire information about an object in AutoCAD (or summarize a collection of user input), and then present that information to the user, either for the purpose of reporting data or so the user can make an informed choice or selection based upon the data being presented. Using Netload to load our command class You may be wondering at this point, "How do we load and run our plugin?" I'm glad you asked! To load the plugin, enter the native AutoCAD command NETLOAD. When the dialog box appears, navigate to the DLL file, MyAcadCSharpPlugin1.dll, select it and click on OK. Our custom command will now be available in the AutoCAD session. At the command prompt, enter COUNTENTS to execute the command. Getting ready In our initial project, we have a class MyCommands, which was generated by the AutoCAD 2014 .NET Wizard. This class contains stubs for four types of AutoCAD command structures: basic command; command with pickfirst selection; a session command; and a lisp function. For this plugin, we will create a basic command, CountEnts, using the stub for the Modal command. How to do it... Let's take a look at the code we will need in order to read the AutoCAD database, count the entities in ModelSpace, identify (and count) block references, and display our findings to users: First, let's get the active AutoCAD document and the drawing database. Next, begin a new transaction. Use the using keyword, which will also take care of disposing of the transaction. Open the block table in AutoCAD. In this case, open it for read operation using the ForRead keyword. Similarly, open the block table record for ModelSpace, also for read (ForRead) (we aren't writing new entities to the drawing database at this time). We'll initialize two counters: one to count all AutoCAD entities; one to specifically count block references (also known as Inserts). Then, as we iterate through all of the entities in AutoCAD's ModelSpace, we'll tally AutoCAD entities in general, as well as block references. Having counted the total number of entities overall, as well as the total number of block references, we'll display that information to the user in a dialog box. How it works... AutoCAD is a multi-document application. We must identify the active document (the drawing that is activated) in order to read the correct database. Before reading the database we must start a transaction. In fact, we use transactions whenever we read from or write to the database. In the drawing database, we open AutoCAD's block table to read it. The block table contains the block table records ModelSpace, PaperSpace, and PaperSpace0. 
We are going to read the entities in ModelSpace so we will open that block table record for reading. We create two variables to store the tallies as we iterate through ModelSpace, keeping track of both block references and AutoCAD entities in general. A block reference is just a reference to a block. A block is a group of entities that is selectable as if it was a single entity. Blocks can be saved as drawing files (.dwg) and then inserted into other drawings. Once we have examined every entity in ModelSpace, we display the tallies (which are stored in the two count variables we created) to the user in a dialog box. Because we used the using keyword when creating the transaction, it is automatically disposed of when our command function ends. Summary The Session command, one of the four types of command stubs added to our project by the AutoCAD 2014 .NET Wizard, has application (rather than document) context. This means it is executed in the context of the entire AutoCAD session, not just within the context of the current document. This allows for some operations that are not permitted in document context, such as creating a new drawing. The other command stub, described as having pickfirst selection is executed with pre-selected AutoCAD entities. In other words, users can select (or pick) AutoCAD entities just prior to executing the command and those entities will be known to the command upon execution. Resources for Article: Further resources on this subject: Dynamically enable a control (Become an expert) [Article] Introduction to 3D Design using AutoCAD [Article] Getting Started with DraftSight [Article]

Introducing SproutCore

Packt
10 Oct 2013
6 min read
(For more resources related to this topic, see here.) Understanding the SproutCore approach In the strictly technical sense, I would describe SproutCore as an open source web application development framework. As you are likely a technical person interested in web application development, this should be reassuring. And if you are interested in developing web applications, you must also already know how difficult it is to keep track of the vast number of libraries and frameworks to choose from. While it would be nice if we could say that there was one true way, and even nicer if I could say that the one true way was SproutCore; this is not the case and never will be the case. Competing ideas will always exist, especially in this area because the future of software is largely JavaScript and the web. So where does SproutCore fit ideologically within this large and growing group? To best describe it, I would ask you to picture a spectrum of all the libraries and frameworks one can use to build a web application. Towards one end are the small single-feature libraries that provide useful helper functions for use in dynamic websites. As we move across, you'll see that the libraries grow and become combined into frameworks of libraries that provide larger functions, some of which start to bridge the gap between what we may call a website and what we may call a web app. Finally, at the other end of the spectrum you'll find the full application development frameworks. These are the frameworks dedicated to writing software for the web and as you may have guessed, this is where you would find SproutCore along with very few others. First, let me take a moment to argue the position of full application development frameworks such as SproutCore. In my experience, in order to develop web software that truly rivals the native software, you need more than just a collection of parts, and you need a cohesive set of tools with strong fundamentals. I've actually toyed with calling SproutCore something more akin to a platform, rather than a framework, because it is really more than just the framework code, it's also the tools, the ideas, and the experience that come with it. On the other side of the argument, there is the idea of picking small pieces and cobbling them together to form an application. While this is a seductive idea and makes great demos, this approach quickly runs out of steam when attempting to go beyond a simple project. The problem isn't the technology, it's the realities of software development: customization is the enemy of maintainability and growth. Without a native software like structure to build on, the developers must provide more and more glue code to keep it all together and writing architecturally sound code is extremely hard. Unfortunately, under deadlines this results in difficult to maintain codebases that don't scale. In the end, the ability to execute and the ability to iterate are more important than the ability to start. Fortunately, almost all of what you need in an application is common to all applications and so there is no need to reinvent the foundations in each project. It just needs to work and work exceptionally well so that we can free up time and resources to focus on attaining the next level in the user experience. This is the SproutCore approach. SproutCore does not just include all the components you need to create a real application. 
It also includes thousands of hours of real world tested professional engineering experience on how to develop and deploy genre-changing web applications that are used by millions of people. This experience is baked into the heart of SproutCore and it's completely free to use, which I hope you find as exciting a prospect as I do! Knowing when SproutCore is the right choice As you may have noticed, I use the word "software" occasionally and I will continue to do so, because I don't want to make any false pretenses about what it is we are doing. SproutCore is about writing software for the web. If the term software feels too heavy or too involved to describe your project, then SproutCore may not be the best platform for you. A good measure of whether SproutCore is a good candidate for your project or not, is to describe the goals of your project in normal language. For example, if we were to describe a typical SproutCore application, we would use terms such as: "rich user experience" "large scale" "extremely fast" "immediate feedback" "huge amounts of data" "fluid scrolling through gigantic lists" "works on multiple browsers, even IE7" "full screen" "pixel perfect design" "offline capable" "localized in multiple languages" and perhaps the most telling descriptor of them all, "like a native app" If these terms match several of the goals for your own project, then we are definitely on the right path. Let me talk about the other important factor to consider, possibly the most important factor to consider when deciding as a business on which technology to use: developer performance. It does not matter at all what features a framework has if the time it takes or the skill required to build real applications with it becomes unmanageable. I can tell you first hand that custom code written by a star developer quickly becomes useless in the hands of the next person and all software eventually ends up in someone else's hands. However, SproutCore is built using the same web technology (HTML, JavaScript and CSS) that millions are already familiar with. This provides a simple entry point for a lot of current web developers to start from. But more importantly, SproutCore was built around the software concepts that native desktop and mobile developers have used for years, but that have barely existed in the web. These concepts include: Class-like inheritance, encapsulation, and polymorphism Model-View-Controller (MVC) structure Statecharts Key-value coding, binding, and observing Computed properties Query-able data stores Centralized event handling Responder chains Run loops While there is also a full UI library and many conveniences, the application of software development principles onto web technology is what makes SproutCore so great. When your web app becomes successful and grows exponentially, and I hope it does, then you will be thankful to have SproutCore at its root. As I often heard Charles Jolley , the creator of SproutCore, say: "SproutCore is the technology you bet the company on."
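To give a concrete flavor of a few of the concepts listed above, here is a minimal, illustrative sketch of class-like inheritance, computed properties, and observers written in SproutCore-style JavaScript. It assumes the framework's SC namespace has been loaded, and the MyApp.Person class and its properties are invented for the example rather than taken from this article:

// A namespace object to hold our application's classes
var MyApp = {};

MyApp.Person = SC.Object.extend({
  firstName: 'Ada',
  lastName: 'Lovelace',

  // Recomputed automatically whenever firstName or lastName changes
  fullName: function() {
    return this.get('firstName') + ' ' + this.get('lastName');
  }.property('firstName', 'lastName'),

  // Runs whenever fullName changes
  fullNameDidChange: function() {
    console.log('Name is now ' + this.get('fullName'));
  }.observes('fullName')
});

var person = MyApp.Person.create();
person.set('firstName', 'Grace'); // triggers the computed property and the observer

Bindings build on this same key-value observing foundation, which is why so much of a SproutCore application can stay in sync without manual glue code.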

Minimizing HTTP requests

Packt
08 Oct 2013
6 min read
(For more resources related to this topic, see here.) How to do it... Reducing DNS lookup: Whenever possible try to use URL directives and paths to different functionalities instead of different hostnames. For example, if a website is abc.com, instead of having a separate hostname for its forum, for example, forum.abc.com, we can have the same URL path, abc.com/forum. This will reduce one extra DNS lookup and thus minimize HTTP requests. Imagine if your website contains many such URLs, either its own subdomains or others, it would take a lot of time to parse the page, because it will send a lot of DNS queries to the server. For example, check www.aliencoders.com that has several DNS lookup components that makes it a very slow website. Please check the following image for a better understanding: If you really have to serve some JavaScript files at the head section, make sure that they come from the same host where you are trying to display the page, else put it at the bottom to avoid latency because almost all browsers block other downloads while rendering JavaScript files are being downloaded fully and get executed. Modern browsers support DNS prefetching. If it's absolutely necessary for developers to load resources from other domains, he/she should make use of it. The following are the URLs: https://developer.mozilla.org/en/docs/Controlling_DNS_prefetching http://www.chromium.org/developers/design-documents/dns-prefetching Using combined files: If we reduce the number of JavaScript files to be parsed and executed and if we do the same for CSS files, it will reduce HTTP requests and load the website much faster. We can do so, by combining all JavaScript files into one file and all CSS files into one CSS file. Setting up CSS sprites: There are two ways to combine different images into one to reduce the number of HTTP requests. One is using the image map technique and other is using CSS sprites. What we do in a CSS sprite is that we write CSS code for the image going to be used so that while hovering, clicking, or performing any action related to that image would invoke the correct action similar to the one with having different images for different actions. It's just a game of coordinates and a little creativity with design. It will make the website at least 50 percent faster as compared to the one with a lot of images. Using image maps: Use the image map idea if you are going to have a constant layout for those images such as menu items and a navigational part. The only drawback with this technique is that it requires a lot of hard work and you should be a good HTML programmer at the least. However, writing mapping code for a larger image with proper coordinates is not an easy task, but there are saviors out there. If you want to know the basics of the area and map tags, you can check out the Basics on area and map tag in HTML post I wrote at http://www.aliencoders.com/content/basics-area-and-map-tag-html. You can create an image map code for your image online at http://www.maschek.hu/imagemap/imgmap. If you want to make it more creative with different sets of actions and colors, try using CSS codes for image maps.. The following screenshot shows you all the options that you can play with while reducing DNS lookups: How it works… In the case of reducing DNS lookup, when you open any web page for the first time, it performs DNS lookups through all unique hostnames that are involved with that web page. 
When you hit a URL in your browser, it first needs to resolve the address (DNS name) to an IP address. As we know, DNS resolutions are cached by the browser, the operating system, or both. So, if a valid record for the URL is available in the user's browser or OS cache, there is no time delay observed. All ISPs have their own DNS servers that cache name-IP mappings from authoritative name servers, and if the caching DNS server's record has already expired, it has to be refreshed again. We will not go much deeper into the DNS mechanism, but it is important to reduce DNS lookups more than any other kind of request, because they add a longer latency period than other requests do.

Similarly, in the case of using image maps, imagine you have a website where you have inserted separate images for separate tabular menus instead of just plain text to make the website catchier! For example, Home, Blogs, Forums, Contact Us, and About Us. Now whenever you load the page, it sends five requests, which will surely consume some amount of time and make the website a bit slower too. It is a good idea to merge all such images into one big image and use the image map technique to reduce the number of HTTP requests for those images. We can do this by using area and map tags to make it work like the previous one. It will not only save a few KBs, but also reduce the server requests from five to just one.

There's more...

If you already have map tags in your page and wish to edit them for proper coordinates without creating trouble for yourself, there is a Firefox add-on available called the Image Map Editor (https://addons.mozilla.org/en-us/firefox/addon/ime/).

If you want to know the IP address of your name servers, use the $ grep nameserver /etc/resolv.conf command in Linux and C:\>ipconfig /all in Windows. You can even get a website's details from your name server, that is, host website-name <nameserver>.

There is a Firefox add-on that will speed up DNS resolution by doing pre-DNS work, so you will observe faster loading of the website. Download Speed DNS from https://addons.mozilla.org/en-US/firefox/addon/speed-dns/?src=search.

Summary

We saw that the lesser the number of requests, the faster the website will be. This article showed us how to minimize such HTTP requests without hampering the website.

Resources for Article:

Further resources on this subject: Magento Performance Optimization [Article] Creating and optimizing your first Retina image [Article] Search Engine Optimization using Sitemaps in Drupal 6 [Article]

Gamified Websites: The Framework

Packt
07 Oct 2013
15 min read
(For more resources related to this topic, see here.) Business objectives Before we can go too far down the road on any journey, we first have to be clear about where we are trying to go. This is where business objectives come into the picture. Although games are about fun, and gamification is about generating positive emotion without losing sight of the business objectives, gamification is a serious business. Organizations spend millions of dollars every year on information technology. Consistent and steady investment in information technology is expected to bring a return on that investment in the way of improved business process flow. It's meant to help the organization run smoother and easier. Gamification is all about "improving" business processes. Organizations try to improve the process itself, wherever possible, whereas technology only facilitates the process. Therefore, gamification efforts will be scrutinized under similar microscope and success metrics that information technology efforts will. The fact that customers, employees, or stakeholders are having more fun with the organization's offering is not enough. It will have to meet a business objective. The place to start with defining business objectives is with the business process that the organization is looking to improve. In our case, the process we are planning to improve is e-learning. We are looking at the process of K-12 aged persons learning "thinking". How does that process look right now? Image source: http://www.moddb.com/groups/critical-thinkers-of-moddb/images/critical-thinking-skills-explained In a full-blown e-learning situation, we would be looking to gamify as much of this process as possible. For our purpose, we will focus on the areas of negotiation and cooperation. According to the Negotiate and Cooperate phase of the Critical Thinking Process, learners consider different perspectives and engage in discussions with others. This gives us a clear picture of what some of our objectives might be. They might be, among others: Increasing engagement in discussion with others Increasing the level of consideration of different perspectives Note that these objectives are measurable. We will be able to test whether the increases/improvements we are looking for are actually happening over time. With a set of measurable objectives, we can turn our attention to the next step, that is target behaviors, in our Gamification Design Framework. Target behaviors Now that we are clear about what we are trying to accomplish with our system, we will focus on the actions we are hoping to incentivize: our target behaviors. One of the big questions around gamification efforts is can it really cause behavioral change. Will employees, customers, and stakeholders simply go back to doing things the way they are used to once the game is over? Will they figure out a way to "cheat" the system? The only way to meet long-term organizational objectives in a systematic way is the application to not only cause change for the moment, but lasting change over time. Many gamification applications fail in long-term behavior change, and here's why. Psychologists have studied the behavior change life cycle at length. . The study revealed that people go through five distinct phases when changing a behavior. Each phase presents a different set of challenges. 
The five phases of the behavioral life cycle are as follows: Awareness: Before a person will take any action to change a behavior, he/she must first be aware of their current behavior and how it might need to change. Buy in: After a person becomes aware that they need to change, they must agree that they actually need to change and make the necessary commitment to do so. Learn: But what actually does a person need to do to change? It cannot be assumed that he/she knows how to change. They must learn the new behavior. Adopt: Now that he/she has learned the necessary skills, they have to actually implement them. They need to take the new action. Maintain: Finally, after adopting a new behavior, it can only become a lasting change with constant practice. Image source: http://www.accenture.com/us-en/blogs/technology-labs-blog/archive/2012/03/28/gamification-and-the-behavior-change-lifecycle.aspx) How can we use this understanding to establish our target behaviors? Keep in mind that our objectives are to increase interaction through discussion and increase consideration for other perspectives. According to our understanding of changing behavior around our objectives, we need our users to: Become aware of their discussion frequency with other users Become aware that other perspectives exist Commit to more discussions with other users Commit to considering other users' perspectives Learn how to have more discussions with other users Learn about other users' perspectives Have more discussions with other users Actually consider other users' perspectives Continue to have more discussions with other users on a consistent basis Continue to consider other users' perspectives over time This outlines the list of activities that needs to be performed for our systems to meet our objectives. Of course, some of our target behaviors will be clear. In other cases, it will require some creativity on our part to get users to take these actions. So what are some possible actions that we can have our users take to move them along the behavior change life cycle? Check their discussion thread count Review the Differing Point of View section Set a target discussion amount for a particular time period Set a target number of Differing Points of View to review Watch a video (or some instructional material) on how to use the discussion area Watch a video (or some instructional material) on the value of viewing other perspectives Participate in the discussion groups Read through other users' discussions posts Participate in the discussion groups over time Read through other users' perspectives over time Some of these target behaviors are relatively straightforward to implement. Others will require more thought. More importantly, we have now identified the target behaviors we want our users to take. This will guide the rest of our development efforts. Players Although the last few sections have been about the serious side of things, such as objectives and target behaviors, we still have gamification as the focal point. Hence, from this point on we will refer to our users as players. We must keep in mind that although we have defined the actions that we want our players to take, the strategies to motivate them to take that action vary from player to player. Gamification is definitely not a one-size-fits-all process. We will have to look at each of our target behaviors from the perspective of our players. We must take their motivations into consideration, unless our mechanics are pretty much trial and error. 
We will need an approach that's a little more structured. According to Bartle's Player Motivations theory, players of any game system fall into one of the following four categories:

- Killers: These are people motivated to participate in a gaming scenario with the primary purpose of winning the game by "acting on" other players. This might include killing, beating, or otherwise directly competing with other players in the game.
- Achievers: These, on the other hand, are motivated by taking clear actions against the system itself to win. They are less motivated by beating an opponent than by achieving things to win.
- Socializers: These have very different motivations for participating in a game. They are motivated more by interacting and engaging with other players.
- Explorers: Like socializers, explorers enjoy interaction and engagement, but less with other players than with the system itself.

The following diagram outlines each player motivation type and what game mechanic might best keep them engaged.

Image source: http://frankcaron.com/Flogger/?p=1732

As we define our activity loops, we need to make sure that we include each of the four types of players and their motivations.

Activity loops

Gamified systems, like other systems, are simply a series of actions. The player acts on the system and the system responds. We refer to how the user interacts with the system as activity loops. We will talk about two types of activity loops, engagement loops and progression loops, to describe our player interactions.

Engagement loops describe how a player engages the system. They outline what a player does and how the system responds. Activity will be different for players depending on their motivations, so we must also take into consideration why the player is taking the action he is taking.

A progression loop describes how the player engages the system as a whole. It outlines how he/she might progress through the game itself. Whereas engagement loops discuss what the player does on a detailed level, progression loops outline the movement of the player through the system. For example, when a person drives a car, he/she is interacting with the car almost constantly. This interaction is a set of engagement loops. All the while, the car is going somewhere. Where the car is going describes its progression loop.

Activity loops tend to follow the Motivation, Action, Feedback pattern. The players are sufficiently motivated to take an action. When the players take the action, they get feedback from the system, and that feedback hopefully motivates them enough to take another action. They take that action and get more feedback. In a perfect world, this cycle would continue indefinitely and the players would never stop playing our gamified system. Our goal is to get as close to this continuous activity loop as we possibly can.

Progression loops

We have spent the last few pages looking at the detailed interactions that a player will have with the system in our engagement loops. Now it's time to turn our attention to the other type of activity loop, the progression loop. Progression loops look at the system at a macro level. They describe the player's journey through the system. We usually think about levels, badges, and/or modes when we are thinking about progression loops. We answer questions such as: where have you been, where are you now, and where are you going? This can all be summed up as codifying the player's mastery level.
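Before moving on, here is a minimal, purely illustrative C# sketch of the Motivation, Action, Feedback pattern combined with Bartle's player types. All type and member names (PlayerType, EngagementLoop, Player, and so on) are hypothetical and are not part of the application designed in this article; the sketch only shows one way the loop could be represented in code.

using System.Collections.Generic;

// Bartle's four player motivation types, as described above.
public enum PlayerType { Killer, Achiever, Socializer, Explorer }

// One pass through the Motivation -> Action -> Feedback cycle.
public class EngagementLoop
{
    public string Motivation { get; set; } // why the player acts (varies with PlayerType)
    public string Action { get; set; }     // what the player does
    public string Feedback { get; set; }   // how the system responds
}

public class Player
{
    public PlayerType Type { get; set; }
    public List<EngagementLoop> History { get; } = new List<EngagementLoop>();

    // Record one loop; a real system would pick feedback that matches
    // the player's motivation type to keep the cycle going.
    public void Engage(string motivation, string action, string feedback)
    {
        History.Add(new EngagementLoop
        {
            Motivation = motivation,
            Action = action,
            Feedback = feedback
        });
    }
}

For example, a hypothetical socializer might be motivated by seeing replies to a post, write a new discussion post as the action, and receive a notification that two other players responded as the feedback, which in turn motivates the next post.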
In our application, we will look at the journey from the vantage point of a novice, an expert, and a master. Upon joining the game, players will begin at the novice level. At the novice level we will focus on:

- Welcome
- On-boarding and getting the user acclimated to using the system
- Achievable goals

In the Welcome stage, we will simply introduce the user to the game and encourage him/her to try it out. Upon on-boarding, we need to make the process as easy as possible and give positive feedback as soon as possible. Once the user is on board, we will outline the easiest way to get involved and begin the journey.

At the expert level, the player is engaging regularly in the game. However, other players would not consider this player a leader in the game. Our goal at this level is to present more difficult challenges. When the player reaches a challenge that appears too difficult, we can include surprise alternatives along the way to keep him/her motivated until they can break through the expert barrier to the master level.

The game and other players recognize masters. They should be prominently displayed within the game and might tend to want to help others at the novice and expert levels. These options should become available at later stages in the game.

Fun

After we have done the work of identifying our objectives, defining target behaviors, scoping our players, and laying out the activities of our system, we can finally think about the area of the system where many novice game designers start: the fun. Other gamification practitioners will avoid, or at least disguise, the fun aspect of the gamification design process. It is important that we don't over- or under-emphasize the fun in the process. For example, chefs prepare an entire meal with spices, but they don't throw all the spices in together; they use them in balanced amounts to bring flavor to their dishes. Think of fun as an array of spices that we can apply to our activity loops.

Marc LeBlanc has categorized fun into eight distinct categories. We will attempt to sprinkle just enough of each, where appropriate, to accomplish the desired amount of fun. Keep in mind that what one player will experience as fun will not be the same for another. One size definitely does not fit all in this case.

- Sensation: A pleasurable experience
- Narrative: An unfolding story
- Challenge: An obstacle course
- Fantasy: Make believe
- Fellowship: A social framework
- Discovery: Exploring uncharted territory
- Expression: The player is given a platform
- Submission: Mindless activity

So how can we sparingly introduce the above dimensions of fun in our system?

Action to take -> Dimension of fun
Check their discussion thread count -> Challenge
Review the Differing Points of View section -> Discovery
Set a target discussion amount for a particular time period -> Challenge
Set a target number of "Differing Points of View" to review -> Challenge
Watch a video (or some instructional material) on how to use the discussion area -> Challenge
Watch a video (or some instructional material) on the value of viewing other perspectives -> Challenge
Participate in the discussion groups -> Fellowship, Expression
Read through other users' discussion posts -> Discovery
Participate in the discussion groups over time -> Fellowship, Expression
Read through other users' perspectives over time -> Discovery

Tools

We are finally at the stage where we can begin implementation. At this point, we can look at the various game elements (tools) with which to implement our gamified system.
If we have followed the framework up to this point, the mechanics and elements should become apparent. We are not simply adding leaderboards or a point system for the sake of it. We can tie all the tools we use back to our previous work. This will result in a Gamification Design Matrix for our application. But before we go there, let's stop and take a look at some of the tools we have at our disposal.

There are a myriad of tools, mechanics, and strategies at our disposal, and new ones are being designed every day. Here are a few of the most common mechanics that we will encounter when designing our gamified system (a short, hypothetical code sketch of how a few of them might be modeled appears at the end of this article):

- Achievements: These are specific objectives that a player meets.
- Avatars: These are visual representations of a player's role, persona, or character in a game.
- Badges: These are visual elements used to recognize a particular accomplishment. They give players a sense of pride that they can show off to others.
- Boss fight: This is an exceptionally difficult challenge in a game scenario, usually at the end of a level, used to demonstrate enough skill to move up to the next level.
- Leaderboards: These show rankings of players publicly. They recognize an accomplishment like a badge, but they are visible for all to see. We see this almost every day, from sports team rankings to sales reps' monthly results.
- Points: These are rather straightforward. Players accumulate points as they take various actions in the system.
- Quests/Missions: These are specialized challenges in a game scenario, characterized by a narrative and an objective.
- Reward: This is anything used to extrinsically motivate the user to take a particular action.
- Team: This is a group of players playing as a single unit.
- Virtual assets: These are elements in the game that have some value and can be acquired or used to acquire other assets, whether tangible or virtual.

Now it's time to take off our gamification design hat and put on our developer hat. Let's start by developing some initial mockups of what our final site might look like using the design we have outlined previously. Many people develop mockups using graphics tools such as Photoshop or Gimp. At this stage, we will be less detailed in our mockups and simply use pencil sketches or a mockup tool such as Balsamiq.

Login screen
This is a mock-up of the basic login screen in our application. Players are accustomed to the basic login and password scenario we provide here.

Account creation screen
First-time players will have to create an account. This is the mock-up of our signup page.

Main Player Screen
This captures the main elements of our system when a player is fully engaged with the system.

Main Player Post Response Screen
We have outlined the key functionality of our gamified system via mock-ups. Mock-ups are a means of visually communicating to our team what we are building and why we are building it. Visual mock-ups also give us an opportunity to uncover issues in our design early in the process.

Summary
Most gamified applications will fail due to a poorly designed system. Hence, we have introduced a Gamification Design Framework to guide our development process.
We know that our chances of developing a successful system increase tremendously if we:

- Define clear business objectives
- Establish target behaviors
- Understand our players
- Work through the activity loops
- Remember the fun
- Optimize the tools
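As promised in the tools section, here is a minimal, purely illustrative C# sketch of how points, badges, and a leaderboard might be represented. All class, property, and method names (Badge, PlayerProfile, Leaderboard.Top, and so on) are hypothetical and are not taken from the article; the sketch simply makes those mechanics concrete.

using System.Collections.Generic;
using System.Linq;

// A badge recognizes a particular accomplishment.
public class Badge
{
    public string Name { get; set; }        // for example, "First Discussion Post" (hypothetical)
    public string Description { get; set; }
}

// A player's profile accumulates points and badges for taking target behaviors.
public class PlayerProfile
{
    public string UserName { get; set; }
    public int Points { get; set; }
    public List<Badge> Badges { get; } = new List<Badge>();
}

// A leaderboard publicly ranks players by their accumulated points.
public static class Leaderboard
{
    public static IEnumerable<PlayerProfile> Top(IEnumerable<PlayerProfile> players, int count)
    {
        return players.OrderByDescending(p => p.Points).Take(count);
    }
}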

Routes and model binding (Intermediate)

Packt
01 Oct 2013
6 min read
(For more resources related to this topic, see here.)

Getting ready
This section builds on the previous section and assumes you have the TodoNancy and TodoNancyTests projects all set up.

How to do it...
The following steps will help you to handle the other HTTP verbs and work with dynamic routes:

Open the TodoNancy Visual Studio solution.

Add a new class to the TodoNancyTests project, call it TodosModuleTests, and fill this test code for a GET and a POST route into it:

public class TodosModuleTests
{
  private Browser sut;
  private Todo aTodo;
  private Todo anEditedTodo;

  public TodosModuleTests()
  {
    TodosModule.store.Clear();
    sut = new Browser(new DefaultNancyBootstrapper());
    aTodo = new Todo { title = "task 1", order = 0, completed = false };
    anEditedTodo = new Todo() { id = 42, title = "edited name", order = 0, completed = false };
  }

  [Fact]
  public void Should_return_empty_list_on_get_when_no_todos_have_been_posted()
  {
    var actual = sut.Get("/todos/");
    Assert.Equal(HttpStatusCode.OK, actual.StatusCode);
    Assert.Empty(actual.Body.DeserializeJson<Todo[]>());
  }

  [Fact]
  public void Should_return_201_create_when_a_todo_is_posted()
  {
    var actual = sut.Post("/todos/", with => with.JsonBody(aTodo));
    Assert.Equal(HttpStatusCode.Created, actual.StatusCode);
  }

  [Fact]
  public void Should_not_accept_posting_to_with_duplicate_id()
  {
    var actual = sut.Post("/todos/", with => with.JsonBody(anEditedTodo))
                    .Then
                    .Post("/todos/", with => with.JsonBody(anEditedTodo));
    Assert.Equal(HttpStatusCode.NotAcceptable, actual.StatusCode);
  }

  [Fact]
  public void Should_be_able_to_get_posted_todo()
  {
    var actual = sut.Post("/todos/", with => with.JsonBody(aTodo))
                    .Then
                    .Get("/todos/");
    var actualBody = actual.Body.DeserializeJson<Todo[]>();
    Assert.Equal(1, actualBody.Length);
    AssertAreSame(aTodo, actualBody[0]);
  }

  private void AssertAreSame(Todo expected, Todo actual)
  {
    Assert.Equal(expected.title, actual.title);
    Assert.Equal(expected.order, actual.order);
    Assert.Equal(expected.completed, actual.completed);
  }
}

The main new thing to notice in these tests is the use of actual.Body.DeserializeJson<Todo[]>(), which takes the Body property of the BrowserResponse type, assumes it contains JSON-formatted text, and deserializes that string into an array of Todo objects.

At the moment, these tests will not compile. To fix this, add this Todo class to the TodoNancy project as follows:

public class Todo
{
  public long id { get; set; }
  public string title { get; set; }
  public int order { get; set; }
  public bool completed { get; set; }
}

Then, go to the TodoNancy project, add a new C# file, call it TodosModule, make the class inherit from NancyModule, and add the following code to the body of the new class:

public static Dictionary<long, Todo> store = new Dictionary<long, Todo>();

Run the tests and watch them fail. Then add the following code to TodosModule:

public TodosModule() : base("todos")
{
  Get["/"] = _ => Response.AsJson(store.Values);

  Post["/"] = _ =>
  {
    var newTodo = this.Bind<Todo>();
    if (newTodo.id == 0)
      newTodo.id = store.Count + 1;
    if (store.ContainsKey(newTodo.id))
      return HttpStatusCode.NotAcceptable;
    store.Add(newTodo.id, newTodo);
    return Response.AsJson(newTodo)
                   .WithStatusCode(HttpStatusCode.Created);
  };
}

The previous code adds two new handlers to our application: one for GET requests to "/todos/" and one for POST requests to "/todos/". The GET handler returns the list of todo items as a JSON array. The POST handler allows for creating new todos. Re-run the tests and watch them succeed. Now let's take a closer look at the code.
Firstly, note how adding a handler for the POST verb is similar to adding handlers for the GET verb. This consistency extends to the other HTTP verbs too. Secondly, note that we pass the "todos" string to the base constructor. This tells Nancy that all routes in this module are relative to /todos. Thirdly, notice the this.Bind<Todo>() call, which is Nancy's data binding in action; it deserializes the body of the POST request into a Todo object.

Now go back to the TodosModuleTests class and add these tests for the PUT and DELETE verbs as follows:

[Fact]
public void Should_be_able_to_edit_todo_with_put()
{
  var actual = sut.Post("/todos/", with => with.JsonBody(aTodo))
                  .Then
                  .Put("/todos/1", with => with.JsonBody(anEditedTodo))
                  .Then
                  .Get("/todos/");
  var actualBody = actual.Body.DeserializeJson<Todo[]>();
  Assert.Equal(1, actualBody.Length);
  AssertAreSame(anEditedTodo, actualBody[0]);
}

[Fact]
public void Should_be_able_to_delete_todo_with_delete()
{
  var actual = sut.Post("/todos/", with => with.Body(aTodo.ToJSON()))
                  .Then
                  .Delete("/todos/1")
                  .Then
                  .Get("/todos/");
  Assert.Equal(HttpStatusCode.OK, actual.StatusCode);
  Assert.Empty(actual.Body.DeserializeJson<Todo[]>());
}

After watching these tests fail, make them pass by adding this code to the constructor of TodosModule:

Put["/{id}"] = p =>
{
  if (!store.ContainsKey(p.id))
    return HttpStatusCode.NotFound;
  var updatedTodo = this.Bind<Todo>();
  store[p.id] = updatedTodo;
  return Response.AsJson(updatedTodo);
};

Delete["/{id}"] = p =>
{
  if (!store.ContainsKey(p.id))
    return HttpStatusCode.NotFound;
  store.Remove(p.id);
  return HttpStatusCode.OK;
};

All tests should now pass. Take a look at the routes for the new PUT and DELETE handlers. Both are defined as "/{id}". This will match any route that starts with /todos/ followed by something more after the trailing /, such as /todos/42, in which case the {id} part of the route definition is 42. Notice that both these new handlers use their p argument to get the ID from the route via the p.id expression.

Nancy lets you define very flexible routes. You can use any regular expression to define a route. All named parts of such regular expressions are put into the argument passed to the handler. The type of this argument is DynamicDictionary, which is a special Nancy type that lets you look up parts either via indexers (for example, p["id"]) like a dictionary, or via dot notation (for example, p.id) like other dynamic C# objects.

There's more...
In addition to the handlers for GET, POST, PUT, and DELETE, which we added in this recipe, we can go ahead and add handlers for PATCH and OPTIONS by following the exact same pattern (a short, hypothetical sketch of a PATCH handler appears after the summary below). Out of the box, Nancy automatically supports HEAD and OPTIONS for you. To handle a HEAD request, Nancy will run the corresponding GET handler but only return the headers. To handle OPTIONS, Nancy will inspect which routes you have defined and respond accordingly.

Summary
In this article we saw how to handle HTTP verbs other than GET and how to work with dynamic routes. We also saw how to work with JSON data and how to do model binding.
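To illustrate the "There's more..." note above, here is a rough sketch of what a PATCH handler could look like. It follows the same pattern as the PUT and DELETE handlers; the partial-update logic (only overwriting the title) is a hypothetical example and is not part of the original recipe.

// Inside the TodosModule constructor, alongside the other handlers.
Patch["/{id}"] = p =>
{
  if (!store.ContainsKey(p.id))
    return HttpStatusCode.NotFound;

  // Hypothetical partial update: bind the request body and only
  // copy the title onto the stored todo, leaving other fields as they were.
  var patch = this.Bind<Todo>();
  store[p.id].title = patch.title;
  return Response.AsJson(store[p.id]);
};

A matching test in TodosModuleTests could follow the same Browser-based pattern as the PUT and DELETE tests shown earlier.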