How-To Tutorials - Front-End Web Development

341 Articles

Handling Long-running Requests in Play

Packt
22 Sep 2014
18 min read
In this article by Julien Richard-Foy, author of Play Framework Essentials, we will dive into the framework's internals and explain how to leverage its reactive programming model to manipulate data streams. (For more resources related to this topic, see here.)

Firstly, I would like to mention that the code called by controllers must be thread-safe. We also noticed that the result of calling an action has type Future[Result] rather than just Result. This article explains these subtleties and answers questions such as "How are concurrent requests processed by Play applications?" More precisely, this article presents the challenges of stream processing and the way the Play framework solves them. You will learn how to consume, produce, and transform data streams in a non-blocking way using the Iteratee library. Then, you will leverage these skills to stream results and push real-time notifications to your clients.

By the end of the article, you will be able to do the following:
Produce, consume, and transform streams of data
Process a large request body chunk by chunk
Serve HTTP chunked responses
Push real-time notifications using WebSockets or server-sent events
Manage the execution context of your code

Play application's execution model

The streaming programming model provided by Play has been influenced by the execution model of Play applications, which itself has been influenced by the nature of the work a web application performs. So, let's start from the beginning: what does a web application do?

For now, our example application does the following: the HTTP layer invokes some business logic via the service layer, and the service layer does some computations by itself and also calls the database layer. It is worth noting that in our configuration, the database system runs on the same machine as the web application, but this is not a requirement. In fact, in real-world projects there is a good chance that your database system is decoupled from your HTTP layer and that both run on different machines. It means that while a query is executed on the database, the web layer does nothing but wait for the response. Actually, the HTTP layer is often waiting for some response coming from another system; it could, for example, retrieve some data from an external web service, or the business layer itself could be located on a remote machine. Decoupling the HTTP layer from the business layer or the persistence layer gives finer control over how to scale the system (more details about that are given further in this article). Anyway, the point is that the HTTP layer may essentially spend its time waiting.

With that in mind, consider the following diagram showing how concurrent requests could be executed by a web application using a threaded execution model, that is, a model where each request is processed in its own thread.

Threaded execution model

Several clients (shown on the left-hand side in the preceding diagram) perform queries that are processed by the application's controller. On the right-hand side of the controller, the figure shows an execution thread corresponding to each action's execution. The filled rectangles represent the time spent performing computations within a thread (for example, for processing data or computing a result), and the lines represent the time spent waiting for some remote data. Each action's execution is distinguished by a particular color.
In this fictitious example, the action handling the first request may execute a query against a remote database, hence the line (illustrating that the thread waits for the database result) between the two pink rectangles (illustrating that the action performs some computation before querying the database and after getting the database result). The action handling the third request may perform a call to a remote web service and then a second one, after the response of the first one has been received; hence, the two lines between the green rectangles. And the action handling the last request may perform a call to a remote web service that streams a response of infinite size; hence, the multiple lines between the purple rectangles.

The problem with this execution model is that each request requires the creation of a new thread. Threads have an overhead at creation time, because they consume memory (essentially because each thread has its own stack), and at execution time, when the scheduler switches contexts. However, we can see that these threads spend a lot of time just waiting. If we could use the same thread to process another request while the current action is waiting for something, we could avoid the creation of threads and thus save resources. This is exactly what the execution model used by Play, the evented execution model, does, as depicted in the following diagram:

Evented execution model

Here, the computation fragments are executed on two threads only. Note that the same action can have its computation fragments run by different threads (for example, the pink action). Also note that several threads are still in use; that's why the code must be thread-safe. The time spent waiting between computing things is the same as before, and you can see that the time required to completely process a request is about the same as with the threaded model (for instance, the second pink rectangle ends at the same position as in the earlier figure, and the same holds for the third green rectangle, and so on).

A comparison between the threaded and evented models can be found in the master's thesis of Benjamin Erb, Concurrent Programming for Scalable Web Architectures, 2012. An online version is available at http://berb.github.io/diploma-thesis/.

An attentive reader may think that I have cheated; the rectangles in the second figure are often thinner than their equivalents in the first figure. That's because, in the first model, there is an overhead for scheduling threads and, above all, even if you have a lot of threads, your machine still has a limited number of cores effectively executing the code of your threads. More precisely, if you have more threads than your number of cores, you necessarily have threads in an idle state (that is, waiting). This means that, if we suppose the machine executing the application has only two cores, then in the first figure there is even time spent waiting within the rectangles!

Scaling up your server

The previous section raises the question of how to handle a higher number of concurrent requests, as depicted in the following diagram:

A server under an increasing load

The previous section explained how to avoid wasting resources in order to leverage the computing power of your server. But actually, there is no magic; if you want to compute even more things per unit of time, you need more computing power, as depicted in the following diagram:

Scaling using more powerful hardware

One solution could be to have a more powerful server.
But you could be smarter than that and avoid buying expensive hardware by studying the shape of the workload and making appropriate decisions at the software level. Indeed, there is a good chance that your workload varies a lot over time, with peaks and troughs of activity. This suggests that if you wanted to buy more powerful hardware, its performance characteristics would be dictated by your highest activity peak, even if that peak occurs only very occasionally. Obviously, this solution is not optimal because you would buy expensive hardware even if you actually needed it only one percent of the time (and more powerful hardware often also means more power-hungry hardware). A better way to handle workload elasticity consists of adding or removing server instances according to the activity level, as depicted in the following diagram:

Scaling using several server instances

This architecture design allows you to finely (and dynamically) tune your server capacity according to your workload. That's actually the cloud computing model. Nevertheless, this architecture has a major implication on your code; you cannot assume that subsequent requests issued by the same client will be handled by the same server instance. In practice, it means that you must treat each request independently of the others; you cannot, for instance, store a counter on a server instance to count the number of requests issued by a client (your server would miss some requests if one is routed to another server instance). In a nutshell, your server has to be stateless. Fortunately, Play is stateless, so as long as you don't explicitly have mutable state in your code, your application is stateless. Note that the first implementation I gave of the shop was not stateless; indeed, the state of the application was stored in the server's memory.

Embracing non-blocking APIs

In the first section of this article, I claimed the superiority of the evented execution model over the threaded execution model, in the context of web servers. That being said, to be fair, the threaded model has an advantage over the evented model: it is simpler to program with. Indeed, in such a case, the framework is responsible for creating the threads and the JVM is responsible for scheduling them, so that you don't even have to think about this at all, yet your code is executed concurrently. On the other hand, with the evented model, concurrency control is explicit and you have to care about it. Indeed, the fact that the same execution thread is used to run several concurrent actions has an important implication on your code: it should not block the thread. Indeed, while the code of an action is executed, no other action code can be concurrently executed on the same thread.

What does blocking mean? It means holding a thread for too long a duration. It typically happens when you perform a heavy computation or wait for a remote response. However, we saw that these cases, especially waiting for remote responses, are very common in web servers, so how should you handle them? You have to wait in a non-blocking way or implement your heavy computations as incremental computations. In all cases, you have to break down your code into computation fragments, whose execution is managed by the execution context. In the diagram illustrating the evented execution model, computation fragments are materialized by the rectangles.
You can see that rectangles of different colors are interleaved; you can find rectangles of another color between two rectangles of the same color. However, by default, the code you write forms a single block of execution instead of several computation fragments. It means that, by default, your code is executed sequentially; the rectangles are not interleaved! This is depicted in the following diagram:

Evented execution model running blocking code

The previous figure still shows both execution threads. The second one handles the blue action and then the purple infinite action, so that all the other actions can only be handled by the first execution thread. This figure illustrates the fact that while the evented model can potentially be more efficient than the threaded model, it can also have negative consequences on the performance of your application: infinite actions block an execution thread forever, and the sequential execution of actions can lead to much longer response times.

So, how can you break down your code into blocks that can be managed by an execution context? In Scala, you can do so by wrapping your code in a Future block:

Future {
  // This is a computation fragment
}

The Future API comes from the standard Scala library. For Java users, Play provides a convenient wrapper named play.libs.F.Promise:

Promise.promise(() -> {
  // This is a computation fragment
});

Such a block is a value of type Future[A] or, in Java, Promise<A> (where A is the type of the value computed by the block). We say that these blocks are asynchronous because they break the execution flow; you have no guarantee that the block will be sequentially executed before the following statement. When the block is effectively evaluated depends on the execution context implementation that manages it. The role of an execution context is to schedule the execution of computation fragments. In the figure showing the evented model, the execution context consists of a thread pool containing two threads (represented by the two lines under the rectangles).

Actually, each time you create an asynchronous value, you have to supply the execution context that will manage its evaluation. In Scala, this is usually achieved using an implicit parameter of type ExecutionContext. You can, for instance, use an execution context provided by Play that consists, by default, of a thread pool with one thread per processor:

import play.api.libs.concurrent.Execution.Implicits.defaultContext

In Java, this execution context is automatically used by default, but you can explicitly supply another one:

Promise.promise(() -> { ... }, myExecutionContext);

Now that you know how to create asynchronous values, you need to know how to manipulate them. For instance, a sequence of several Future blocks is executed concurrently; how do we define an asynchronous computation that depends on another one?
You can eventually schedule a computation after an asynchronous value has been resolved using the foreach method:

val futureX = Future { 42 }
futureX.foreach(x => println(x))

In Java, you can perform the same operation using the onRedeem method:

Promise<Integer> futureX = Promise.promise(() -> 42);
futureX.onRedeem((x) -> System.out.println(x));

More interestingly, you can eventually transform an asynchronous value using the map method:

val futureIsEven = futureX.map(x => x % 2 == 0)

The map method exists in Java too:

Promise<Boolean> futureIsEven = futureX.map((x) -> x % 2 == 0);

If the function you use to transform an asynchronous value returned an asynchronous value too, you would end up with an inconvenient Future[Future[A]] value (or a Promise<Promise<A>> value, in Java). So, use the flatMap method in that case:

val futureIsEven = futureX.flatMap(x => Future { x % 2 == 0 })

The flatMap method is also available in Java:

Promise<Boolean> futureIsEven = futureX.flatMap((x) -> Promise.promise(() -> x % 2 == 0));

The foreach, map, and flatMap functions (or their Java equivalents) all have in common that they set a dependency between two asynchronous values; the computation they take as a parameter is always evaluated after the asynchronous computation they are applied to.

Another method that is worth mentioning is zip:

val futureXY: Future[(Int, Int)] = futureX.zip(futureY)

The zip method is also available in Java:

Promise<Tuple<Integer, Integer>> futureXY = futureX.zip(futureY);

The zip method returns an asynchronous value eventually resolved to a tuple containing the two resolved asynchronous values. It can be thought of as a way to join two asynchronous values without specifying any execution order between them. If you want to join more than two asynchronous values, you can use the zip method several times (for example, futureX.zip(futureY).zip(futureZ).zip(…)), but an alternative is to use the Future.sequence function:

val futureXs: Future[Seq[Int]] = Future.sequence(Seq(futureX, futureY, futureZ, …))

This function transforms a sequence of future values into a future sequence value. In Java, this function is named Promise.sequence.

In the preceding descriptions, I always used the word eventually, and for a reason. Indeed, if we use an asynchronous value to manipulate a result sent by a remote machine (such as a database system or a web service), the communication may eventually fail due to some technical issue (for example, if the network is down). For this reason, asynchronous values have error recovery methods; for example, the recover method:

futureX.recover { case NonFatal(e) => y }

The recover method is also available in Java:

futureX.recover((throwable) -> y);

The previous code resolves futureX to the value of y in the case of an error.

Libraries performing remote calls (such as an HTTP client or a database client) return such asynchronous values when they are implemented in a non-blocking way. You should always check whether the libraries you use are blocking or not, and keep in mind that, by default, Play is tuned to be efficient with non-blocking APIs. It is worth noting that JDBC is blocking. It means that the majority of Java-based libraries for database communication are blocking.

Obviously, once you get a value of type Future[A] (or Promise<A>, in Java), there is no way to get the A value unless you wait (and block) for the value to be resolved.
We saw that the map and flatMap methods make it possible to manipulate the future A value, but you still end up with a Future[SomethingElse] value (or a Promise<SomethingElse>, in Java). It means that if your action's code calls an asynchronous API, it will end up with a Future[Result] value rather than a Result value. In that case, you have to use Action.async instead of Action, as illustrated in this typical code example:

val asynchronousAction = Action.async { implicit request =>
  service.asynchronousComputation().map(result => Ok(result))
}

In Java, there is nothing special to do; simply make your method return a Promise<Result> object:

public static Promise<Result> asynchronousAction() {
  return service.asynchronousComputation().map((result) -> ok(result));
}

Managing execution contexts

Because Play uses explicit concurrency control, controllers are also responsible for using the right execution context to run their action's code. Generally, as long as your actions do not invoke heavy computations or blocking APIs, the default execution context should work fine. However, if your code is blocking, it is recommended to use a distinct execution context to run it.

An application with two execution contexts (represented by the black and grey arrows); you can specify in which execution context each action should be executed, as explained in this section.

Unfortunately, there is no non-blocking standard API for relational database communication (JDBC is blocking). It means that all our actions that invoke code executing database queries should be run in a distinct execution context so that the default execution context is not blocked. This distinct execution context has to be configured according to your needs. In the case of JDBC communication, your execution context should be a thread pool with as many threads as your maximum number of connections. The following diagram illustrates such a configuration:

The preceding diagram shows two execution contexts, each with two threads. The execution context at the top of the figure runs database code, while the default execution context (at the bottom) handles the remaining (non-blocking) actions.

In practice, it is convenient to use Akka to define your execution contexts, as they are easily configurable. Akka is a library used for building concurrent, distributed, and resilient event-driven applications. This article assumes that you have some knowledge of Akka; if that is not the case, do some research on it. Play integrates Akka and manages an actor system that follows your application's life cycle (that is, it is started and shut down with the application). For more information on Akka, visit http://akka.io.

Here is how you can create an execution context with a thread pool of 10 threads, in your application.conf file:

jdbc-execution-context {
  thread-pool-executor {
    core-pool-size-factor = 10.0
    core-pool-size-max = 10
  }
}

You can use it as follows in your code:

import play.api.libs.concurrent.Akka
import play.api.Play.current
implicit val jdbc = Akka.system.dispatchers.lookup("jdbc-execution-context")

The Akka.system expression retrieves the actor system managed by Play. Then, the execution context is retrieved using Akka's API.
The equivalent Java code is the following:

import play.libs.Akka;
import akka.dispatch.MessageDispatcher;
import play.core.j.HttpExecutionContext;

MessageDispatcher jdbc = Akka.system().dispatchers().lookup("jdbc-execution-context");

Note that controllers retrieve the current request's information from a thread-local static variable, so you have to attach it to the execution context's thread before using it from a controller's action:

play.core.j.HttpExecutionContext.fromThread(jdbc)

Finally, forcing the use of a specific execution context for a given action can be achieved as follows (provided that my.execution.context is an implicit execution context):

import my.execution.context
val myAction = Action.async {
  Future { … }
}

The Java equivalent code is as follows:

public static Promise<Result> myAction() {
  return Promise.promise(
    () -> { … },
    HttpExecutionContext.fromThread(myExecutionContext)
  );
}

Does this feel like clumsy code? Buy the book to learn how to reduce the boilerplate!

Summary

This article detailed a lot of things about the internals of the framework. You now know that Play uses an evented execution model to process requests and serve responses, and that this implies that your code should not block the execution thread. You know how to use future blocks and promises to define computation fragments that can be concurrently managed by Play's execution context, and how to define your own execution context with a different threading policy, for example, if you are constrained to use a blocking API.

Resources for Article:

Further resources on this subject:
Play! Framework 2 – Dealing with Content [article]
So, what is Play? [article]
Play Framework: Introduction to Writing Modules [article]


Setting Up The Rig

Packt
21 Aug 2014
16 min read
In this article by Vinci Rufus, the author of the book AngularJS Web Application Development Blueprints, we will see the process of setting up the various tools required to start building AngularJS apps. I'm sure you would have heard the saying, "A tool man is known by the tools he keeps." OK fine, I just made that up, but it's actually true, especially when it comes to programming. Sure, you can build complete and fully functional AngularJS apps just using a simple text editor and a browser, but if you want to work like a ninja, then make sure that you start using some of these tools as a part of your development workflow. Do note that these tools are not mandatory for building AngularJS apps. Their use is recommended mainly to help improve productivity.

In this article, we will see how to set up and use the following productivity tools:
Node.js
Grunt
Yeoman
Karma
Protractor

Since most of us are running a Mac, Windows, Ubuntu, or another flavor of the Linux operating system, we'll be covering the deployment steps common to all of them. (For more resources related to this topic, see here.)

Setting up Node.js

Depending on your technology stack, I strongly recommend you have either Ruby or Node.js installed. In the case of AngularJS, most of the productivity tools or plugins are available as Node Package Manager (npm) packages, and, hence, we will be setting up Node.js along with npm. Node.js is an open source JavaScript-based platform that uses an event-based input/output model, making it lightweight and fast.

Let us head over to www.nodejs.org and install Node.js. Choose the right version as per your operating system. The current version of Node.js at the time of writing this article is v0.10.x, which comes with npm built in, making it a breeze to set up Node.js and npm. Node.js doesn't come with a Graphical User Interface (GUI), so to use Node.js, you will need to open up your terminal and start firing some commands. Now would also be a good time to brush up on your DOS and Unix/Linux commands.

After installing Node.js, the first thing you'd want to check is whether Node.js has been installed correctly. So, let us open up the terminal and run the following command:

node --version

This should output the version number of Node.js that's installed on your system. The next step would be to see what version of npm we have installed. The command for that is as follows:

npm --version

This will tell you the version number of your npm.

Creating a simple Node.js web server with ExpressJS

For basic, simple AngularJS apps, you don't really need a web server. You can simply open the HTML files from your filesystem and they will work just fine. However, as you start building complex applications where you are passing data in JSON, calling web services, or using a Content Delivery Network (CDN), you will find the need to use a web server. The good thing about AngularJS apps is that they can work within any web server, so if you already have IIS, Apache, Nginx, or any other web server running on your development environment, you can simply run your AngularJS project from within the web root folder. In case you don't have a web server and are looking for a lightweight one, then let us set one up using Node.js and ExpressJS. One could write the entire web server in pure Node.js; however, ExpressJS provides a nice layer of abstraction on top of Node.js so that you can just work with the ExpressJS APIs and don't have to worry about the low-level calls.
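For comparison, here is a minimal sketch (not taken from the book, so treat the port number and the response text as arbitrary choices for illustration) of what such a server could look like using nothing but Node's built-in http module:

// server.js - a bare-bones web server using only Node.js core modules
var http = require('http');

var server = http.createServer(function (request, response) {
  // Every request ends up in this single callback; Express adds routing,
  // static file serving, and middleware on top of this primitive.
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.end('Hello from plain Node.js\n');
});

server.listen(3000, function () {
  console.log('Server running at http://localhost:3000/');
});

Running node server.js and browsing to http://localhost:3000 would show the plain-text response; everything beyond that (routes, views, static assets) is what ExpressJS takes care of for us.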
So, let's first install the ExpressJS module for Node.js. Open up your terminal and fire the following command:

npm install -g express-generator

This will install ExpressJS globally. Omit the -g to install ExpressJS locally in the current folder. When installing ExpressJS globally on Linux or Mac, you will need to run it via sudo as follows:

sudo npm install -g express-generator

This will give npm the necessary permissions to write to the protected local folder under the user. The next step is to create an ExpressJS app; let us call it my-server. Type the following command in the terminal and hit Enter:

express my-server

You'll see something like this:

create : my-server
create : my-server/package.json
create : my-server/app.js
create : my-server/public
create : my-server/public/javascripts
create : my-server/public/images
create : my-server/public/stylesheets
create : my-server/public/stylesheets/style.css
create : my-server/routes
create : my-server/routes/index.js
create : my-server/routes/user.js
create : my-server/views
create : my-server/views/layout.jade
create : my-server/views/index.jade

install dependencies:
$ cd my-server && npm install

run the app:
$ DEBUG=my-server ./bin/www

This will create a folder called my-server and put a bunch of files inside it. The package.json file that is created contains the skeleton of your app. Open it and ensure the name says my-server; also, note the dependencies listed. Now, to install ExpressJS along with the dependencies, first change into the my-server directory and run the following command in the terminal:

cd my-server
npm install

Now, in the terminal, type in the following command:

npm start

Open your browser and type http://localhost:3000 in the address bar. You'll get a nice ExpressJS welcome message. Now, to test our Address Book app, we will copy our index.html, scripts.js, and styles.css into the public folder located within my-server. I'm not copying the angular.js file because we'll use the CDN version of the AngularJS library. Open up the index.html file and replace the following code:

<script src="angular.min.js" type="text/javascript"></script>

with the CDN version of AngularJS as follows:

<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.17/angular.min.js"></script>

A question might arise as to what happens if the CDN is unreachable. In such cases, we can add a fallback to use a local version of the AngularJS library. We do this by adding the following script after the CDN link is called:

<script>window.angular || document.write('<script src="lib/angular/angular.min.js"><\/script>');</script>

Save the file, and in the browser enter localhost:3000/index.html. Your Address Book app is now running from a server and taking advantage of Google's CDN to serve the AngularJS file.

Referencing files using only // is also called a protocol-independent absolute path. This means that the files are requested using the same protocol that is being used to call the parent page. For example, if the page you are loading is served via https://, then the CDN link will also be called via HTTPS. This also means that when using // instead of http:// during development, you will need to run your app from within a server instead of the filesystem.

Setting up Grunt

Grunt is a JavaScript-based task runner. It is primarily used for automating tasks such as running unit tests and concatenating, merging, and minifying JS and CSS files. You can also run shell commands. This makes it super easy to perform server clean-ups and deploy code.
Essentially, Grunt is to JavaScript what Rake would be to Ruby or Ant/Maven would be to Java.

Installing Grunt-cli

Installing Grunt-cli is slightly different from installing other Node.js modules. We first need to install Grunt's Command Line Interface (CLI) by firing the following command in the terminal:

npm install -g grunt-cli

Mac or Linux users can also directly run the following command:

sudo npm install -g grunt-cli

Make sure you have administrative privileges. Use sudo if you are on a Mac or Linux system. If you are on Windows, right-click and open the command prompt with administrative rights.

An important thing to note is that installing Grunt-cli doesn't automatically install Grunt and its dependencies. Grunt-cli merely invokes the version of Grunt installed alongside the Grunt file. While this may seem a little complicated at first, the reason it works this way is so that we can run different versions of Grunt on the same machine. This comes in handy when your project depends on a specific version of Grunt.

Creating the package.json file

To install Grunt, let's first create a folder called my-project and create a file called package.json with the following content:

{
  "name": "My-Project",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-contrib-jshint": "~0.10.0",
    "grunt-contrib-concat": "~0.4.0",
    "grunt-contrib-uglify": "~0.5.0",
    "grunt-shell": "~0.7.0"
  }
}

Save the file. The package.json file is where you define the various parameters of your app; for example, the name of your app, the version number, and the list of dependencies needed for the app. Here we are calling our app My-Project with version 0.1.0, and listing out the following dependencies that need to be installed as a part of this app:

grunt (v0.4.5): This is the main Grunt application
grunt-contrib-jshint (v0.10.0): This is used for code analysis
grunt-contrib-concat (v0.4.0): This is used to merge two or more files into one
grunt-contrib-uglify (v0.5.0): This is used to minify the JS file
grunt-shell (v0.7.0): This is the Grunt shell used for running shell commands

Visit http://gruntjs.com/plugins to get a list of all the plugins available for Grunt along with their exact names and version numbers. You may also choose to create a default package.json file by running the following command and answering the questions:

npm init

Open the package.json file and add the dependencies as mentioned earlier. Now that we have the package.json file, load the terminal and navigate into the my-project folder. To install Grunt and the modules specified in the file, type in the following command:

npm install --save-dev

You'll see a series of lines getting printed in the console; let that continue for a while and wait until it returns to the command prompt. Ensure that the last line printed by the previous command ends with OK code 0. Once Grunt is installed, a quick version check will confirm the installation. The command is as follows:

grunt --version

There is a possibility that you got a bunch of errors and it ended with a not ok code 0 message. There could be multiple reasons why that may have happened, ranging from errors in your code to a network connection issue or something changing at Grunt's end due to a new version update. If grunt --version throws up an error, it means Grunt wasn't installed properly.
To reinstall Grunt, enter the following commands in the terminal:

rm -rf node_modules
npm cache clean
npm install

Windows users may manually delete the node_modules folder from Windows Explorer before running the cache clean command in the command prompt. Refer to http://www.gruntjs.com to troubleshoot the problem.

Creating your Grunt tasks

To run our Grunt tasks, we'll need a JavaScript file. So, let's copy our scripts.js file and place it into the my-project folder. The next step is to create a Grunt file that will list out the tasks that we need Grunt to perform. For now, we will ask it to do four simple tasks: first, check whether our JS code is clean using JSHint; then merge three JS files into one; then minify the merged JS file; and finally run some shell commands to clean up.

Until version 0.3, the init command was a part of the Grunt tool and one could create a blank project using grunt-init. With version 0.4, init is now available as a separate tool called grunt-init and needs to be installed using the npm install -g grunt-init command. Also note that the structure of the grunt.js file from version 0.4 onwards is fairly different from the earlier versions you may have used. For now, we will resort to creating the Grunt file manually.

In the same location where you have your package.json, create a file called gruntfile.js and type in the following code:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    jshint: {
      all: ['scripts.js']
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');

  // Default task.
  grunt.registerTask('default', ['jshint']);
};

To start with, we will add only one task, which is jshint, and specify scripts.js in the list of files that need to be linted. In the next line, we specify grunt-contrib-jshint as the npm task that needs to be loaded. In the last line, we define jshint as the task to be run when Grunt is running in default mode. Save the file and in the terminal run the following command:

grunt

You will probably see a JSHint message in the terminal saying that we are missing a semicolon on lines 18 and 24. Oh! Did I mention that JSHint is like your very strict math teacher from high school? Let's open up scripts.js and put in those semicolons, then rerun Grunt. Now you should get a message in green saying 1 file lint free. Done, without errors.

Let's add some more tasks to Grunt. We'll now ask it to concatenate and minify a couple of JS files. Since we currently have just one file, let's go and create two dummy JS files called scripts1.js and scripts2.js. In scripts1.js, we'll simply write an empty function as follows:

// This is from script 1
function Script1Function(){
  //------//
}

Similarly, in scripts2.js we'll write the following:

// This is from script 2
function Script2Function(){
  //------//
}

Save these files in the same folder where you have scripts.js.

Grunt tasks to merge and concatenate files

Now, let's open our Grunt file and add the code for both tasks, one to merge the JS files and one to minify them, as follows:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    jshint: {
      all: ['scripts.js']
    },
    concat: {
      dist: {
        src: ['scripts.js', 'scripts1.js', 'scripts2.js'],
        dest: 'merged.js'
      }
    },
    uglify: {
      dist: {
        src: 'merged.js',
        dest: 'build/merged.min.js'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // Default task.
  grunt.registerTask('default', ['jshint', 'concat', 'uglify']);
};

As you can see from the preceding code, after the jshint task, we added the concat task. Under the src attribute, we define the files, separated by commas, that need to be concatenated. And in the dest attribute, we specify the name of the merged JS file. It is very important that the files are entered in the same sequence in which they need to be merged. If the sequence of the files entered is incorrect, the merged JS file will cause errors in your app.

The uglify task is used to minify the JS file and its structure is very similar to the concat task. We add the merged.js file to the src attribute and, in the dest attribute, we place the merged.min.js file into a folder called build. Grunt will auto-create the build folder.

After defining the tasks, we load the necessary plugins, namely grunt-contrib-concat and grunt-contrib-uglify, and finally we register the concat and uglify tasks to the default task. Save the file and run Grunt. If all goes well, you should see Grunt running these tasks and reporting the status of each of them. If you get the final message saying Done, without errors, it means things went well, and this was your lucky day!

If you now open your my-project folder in the file manager, you should see a new file called merged.js. Open it in the text editor and you'll notice that all three files have been merged into it. Also, go into the build/merged.min.js file and verify whether the file is minified.

Running shell commands via Grunt

Another really helpful plugin for Grunt is grunt-shell. This allows us to effectively run clean-up activities such as deleting .tmp files and moving files from one folder to another. Let's see how to add the shell tasks to our Grunt file. Add the following piece of code to your Grunt file (the shell task is the new part):

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    jshint: {
      all: ['scripts.js']
    },
    concat: {
      dist: {
        src: ['scripts.js', 'scripts1.js', 'scripts2.js'],
        dest: 'merged.js'
      }
    },
    uglify: {
      dist: {
        src: 'merged.js',
        dest: 'build/merged.min.js'
      }
    },
    shell: {
      multiple: {
        command: [
          'rm -rf merged.js',
          'mkdir deploy',
          'mv build/merged.min.js deploy/merged.min.js'
        ].join('&&')
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-shell');

  // Default task.
  grunt.registerTask('default', ['jshint', 'concat', 'uglify', 'shell']);
};

As you can see from the code we added, we first delete the merged.js file, then create a new folder called deploy and move our merged.min.js file into it. Windows users will need to use the appropriate DOS commands for deleting and copying the files. Note that .join('&&') is used when you want Grunt to run multiple shell commands. The next steps are to load the npm tasks and add shell to the default task list. To see Grunt perform all these tasks, run the grunt command in the terminal. Once it's done, open up the filesystem and verify whether Grunt has done what you asked it to do. Just like we used the preceding four plugins, there are numerous other plugins that you can use with Grunt to automate your tasks.
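For instance, a widely used one (not covered in this article, so treat the exact configuration below as an illustrative sketch rather than part of the book's project) is grunt-contrib-watch, which re-runs selected tasks whenever the watched files change. After installing it with npm install grunt-contrib-watch --save-dev, a block along these lines could be added inside grunt.initConfig, with the plugin loaded like the others:

watch: {
  scripts: {
    // Watch every JS file in the project folder...
    files: ['*.js'],
    // ...and re-run the lint, merge, and minify tasks on each change
    tasks: ['jshint', 'concat', 'uglify']
  }
}

grunt.loadNpmTasks('grunt-contrib-watch');

Running grunt watch in the terminal would then keep Grunt alive in the background and fire those tasks every time a file is saved.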
A point to note is that while the default grunt command will run all the tasks mentioned in the grunt.registerTask statement, if you need to run a specific task instead of all of them, then you can simply type the following in the command line:

grunt jshint

Alternatively, you can type the following command:

grunt concat

Alternatively, you can type the following command:

grunt uglify

At times, if you'd like to run just two of the three tasks, then you can register them separately as another bundled task in the Grunt file. Open up the gruntfile.js file and, just after the line where you have registered the default task, add the following code:

grunt.registerTask('concat-min', ['concat', 'uglify']);

This will register a new task called concat-min that will run only the concat and uglify tasks. In the terminal, run the following command:

grunt concat-min

Verify that Grunt only concatenated and minified the file and didn't run JSHint or your shell commands. You can run grunt --help to see a list of all the tasks available in your Grunt file.


Transforming data in the service

Packt
20 Aug 2014
4 min read
This article, written by Jim Lavin, author of the book AngularJS Services, will cover ways to transform data. Sometimes, you need to return a subset of your data for a directive or controller, or you need to translate your data into another format for use by an external service. This can be handled in several different ways; you can use AngularJS filters or you could use an external library such as underscore or lodash. (For more resources related to this topic, see here.)

How often you need to do such transformations will help you decide which route to take. If you are going to transform data just a few times, it isn't necessary to add another library to your application; however, if you are going to do it often, using a library such as underscore or lodash will be a big help. We are going to limit our discussion to using AngularJS filters to handle transforming our data.

Filters are an often-overlooked component in the AngularJS arsenal. Often, developers will end up writing a lot of methods in a controller or service to filter an array of objects that are iterated over in an ngRepeat directive, when a simple filter could have easily been written and applied to the ngRepeat directive, removing the excess code from the service or controller.

First, let's look at creating a filter that will reduce your data based on a property of the object, which is one of the simplest filters to create. This filter is designed to be used as an option to the ngRepeat directive to limit the number of items displayed by the directive. The following fermentableType filter expects an array of fermentable objects as the input parameter and a type value to filter on as the arg parameter. If the fermentable's type value matches the arg parameter passed into the filter, it is pushed onto the resultant array, which will in turn cause the object to be included in the set provided to the ngRepeat directive.

angular.module('brew-everywhere').filter('fermentableType', function () {
  return function (input, arg) {
    var result = [];
    angular.forEach(input, function (item) {
      if (item.type === arg) {
        result.push(item);
      }
    });
    return result;
  };
});

To use the filter, you include it in your partial in an ngRepeat directive as follows:

<table class="table table-bordered">
  <thead>
    <tr>
      <th>Name</th>
      <th>Type</th>
      <th>Potential</th>
      <th>SRM</th>
      <th>Amount</th>
      <th>&nbsp;</th>
    </tr>
  </thead>
  <tbody>
    <tr ng-repeat="fermentable in fermentables | fermentableType:'Grain'">
      <td class="col-xs-4">{{fermentable.name}}</td>
      <td class="col-xs-2">{{fermentable.type}}</td>
      <td class="col-xs-2">{{fermentable.potential}}</td>
      <td class="col-xs-2">{{fermentable.color}}</td>
    </tr>
  </tbody>
</table>

The result of calling fermentableType with the value Grain is that only those fermentable objects that have a type property with a value of Grain are displayed.

Using filters to reduce an array of objects can be as simple or complex as you like. The next filter we are going to look at is one that uses an object to reduce the fermentable object array based on properties of the passed-in object. The following filterFermentable filter expects an array of fermentable objects as input and an object that defines the various properties and their required values that are needed to return a matching object. To build the resulting array of objects, you walk through each object and compare each property with those of the object passed in as the arg parameter. If all the properties match, the object is added to the array and it is returned.
angular.module('brew-everywhere').filter('filterFermentable', function () {
  return function (input, arg) {
    var result = [];
    angular.forEach(input, function (item) {
      var add = true;
      for (var key in arg) {
        if (item.hasOwnProperty(key)) {
          if (item[key] !== arg[key]) {
            add = false;
          }
        }
      }
      if (add) {
        result.push(item);
      }
    });
    return result;
  };
});
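As a complementary illustration (a hypothetical usage sketch, not part of the book's example), the same filter can also be invoked programmatically through AngularJS's $filter service, which is handy when the filtering needs to happen inside a controller or service rather than in an ngRepeat expression; the controller name and sample data below are made up for the example:

angular.module('brew-everywhere').controller('fermentableCtrl', function ($scope, $filter) {
  $scope.fermentables = [
    {name: 'Pale Malt', type: 'Grain', color: 3},
    {name: 'Candi Sugar', type: 'Sugar', color: 1}
  ];

  // $filter('filterFermentable') returns the filter function registered above,
  // so the same matching logic can be reused outside of the template.
  $scope.grains = $filter('filterFermentable')($scope.fermentables, {type: 'Grain'});
});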


AngularJS

Packt
20 Aug 2014
15 min read
In this article, by Rodrigo Branas, author of the book AngularJS Essentials, we will go through the basics of AngularJS. Created by Miško Hevery and Adam Abrons in 2009, AngularJS is an open source, client-side JavaScript framework that promotes a high-productivity web development experience. It was built on the belief that declarative programming is the best choice for constructing the user interface, while imperative programming is much better suited to implementing the application's business logic. To achieve that, AngularJS empowers traditional HTML by extending its current vocabulary, making the life of developers easier. The result is the development of expressive, reusable, and maintainable application components, leaving behind a lot of unnecessary code and keeping the team focused on the valuable and important things. (For more resources related to this topic, see here.)

Architectural concepts

It's been a long time since the famous Model-View-Controller, also known as MVC, started to be widely used in the software development industry, thereby becoming one of the legends of enterprise architecture design. Basically, the model represents the knowledge that the view is responsible for presenting, while the controller mediates the relationship between them. However, these concepts are a little bit abstract, and this pattern may have different implementations depending on the language, platform, and purposes of the application. After a lot of discussion about which architectural pattern the framework follows, its authors declared that from now on, AngularJS is adopting Model-View-Whatever (MVW). Regardless of the name, the most important benefit is that the framework provides a clear separation of concerns between the application layers, providing modularity, flexibility, and testability.

In terms of concepts, a typical AngularJS application consists primarily of a view, a model, and a controller, but there are other important components, such as services, directives, and filters.

The view, also called the template, is written entirely in HTML, which presents a great opportunity for web designers and JavaScript developers to work side by side. It also takes advantage of the directives mechanism, a kind of extension of the HTML vocabulary that brings the ability to perform programming language tasks, such as iterating over an array or even evaluating an expression conditionally.

Behind the view, there is the controller. At first, the controller contains all the business logic implementation used by the view. However, as the application grows, it becomes really important to perform some refactoring activities, such as moving code from the controller to other components (like services), in order to keep the cohesion high.

The connection between the view and the controller is made by a shared object called the scope. It is located between them and is used to exchange information related to the model.

The model is a simple Plain-Old-JavaScript-Object (POJO). It looks very clear and easy to understand, bringing simplicity to development by not requiring any special syntax to be created.

Setting up the framework

The configuration process is very simple. In order to set up the framework, we start by importing the angular.js script into our HTML file. After that, we need to create the application module by calling the module function from Angular's API, with its name and dependencies.
With the module already created, we just need to place the ng-app attribute with the module's name inside the html element, or any other element that surrounds the application. This attribute is important because it supports the initialization process of the framework.

In the following code, there is an introductory application about a parking lot. At first, we are able to add and also list the parked cars, storing their plates in memory. Throughout the book, we will evolve this parking control application by incorporating each newly studied concept.

index.html

<!doctype html>
<!-- Declaring the ng-app -->
<html ng-app="parking">
  <head>
    <title>Parking</title>
    <!-- Importing the angular.js script -->
    <script src="angular.js"></script>
    <script>
      // Creating the module called parking
      var parking = angular.module("parking", []);
      // Registering the parkingCtrl to the parking module
      parking.controller("parkingCtrl", function ($scope) {
        // Binding the cars array to the scope
        $scope.cars = [
          {plate: '6MBV006'},
          {plate: '5BBM299'},
          {plate: '5AOJ230'}
        ];
        // Binding the park function to the scope
        $scope.park = function (car) {
          $scope.cars.push(angular.copy(car));
          delete $scope.car;
        };
      });
    </script>
  </head>
  <!-- Attaching the view to the parkingCtrl -->
  <body ng-controller="parkingCtrl">
    <h3>[Packt] Parking</h3>
    <table>
      <thead>
        <tr>
          <th>Plate</th>
        </tr>
      </thead>
      <tbody>
        <!-- Iterating over the cars -->
        <tr ng-repeat="car in cars">
          <!-- Showing the car's plate -->
          <td>{{car.plate}}</td>
        </tr>
      </tbody>
    </table>
    <!-- Binding the car object, with plate, to the scope -->
    <input type="text" ng-model="car.plate"/>
    <!-- Binding the park function to the click event -->
    <button ng-click="park(car)">Park</button>
  </body>
</html>

The ngController directive was used to bind the parkingCtrl to the view, while ngRepeat iterated over the cars array. Also, we employed expressions like {{car.plate}} to display the plate of the car. Finally, to add new cars, we applied ngModel, which creates a new object called car with the plate property, passing it as a parameter to the park function, called through the ngClick directive.

To improve page-loading performance, it is recommended to use the minified and obfuscated version of the script, which can be identified by angular.min.js. Both the minified and regular distributions of the framework can be found on the official site of AngularJS, that is, http://www.angularjs.org, or they can be referenced directly from Google's Content Delivery Network (CDN).
What is a directive?

A directive is an extension of the HTML vocabulary that allows the creation of new behaviors. This technology lets developers create reusable components that can be used within the whole application, and even provide their own custom components. A directive may be applied as an attribute, element, class, or even as a comment, using the camelCase syntax. However, because HTML is case-insensitive, we need to use the lowercase form. For the ngModel directive, we can use ng-model, ng:model, ng_model, data-ng-model, or x-ng-model in the HTML markup.

Using AngularJS built-in directives

By default, the framework brings a basic set of directives, for example to iterate over an array, execute a custom behavior when an element is clicked, or show a given element based on a conditional expression, among many others.

ngBind

This directive is generally applied to a span element and replaces the content of the element with the result of the provided expression. It has the same meaning as the double curly markup, for example, {{expression}}. Why would anyone want to use this directive when a less verbose alternative is available? This is because, while the page is being compiled, there is a moment when the raw state of the expressions is shown. Since the directive is defined by an attribute of the element, it is invisible to the user. Here is an example of the ngBind directive usage:

index.html

<!doctype html>
<html ng-app="parking">
  <head>
    <title>[Packt] Parking</title>
    <script src="angular.js"></script>
    <script>
      var parking = angular.module("parking", []);
      parking.controller("parkingCtrl", function ($scope) {
        $scope.appTitle = "[Packt] Parking";
      });
    </script>
  </head>
  <body ng-controller="parkingCtrl">
    <h3 ng-bind="appTitle"></h3>
  </body>
</html>

ngRepeat

The ngRepeat directive is really useful for iterating over arrays and objects. It can be used with any kind of element, such as the rows of a table, the elements of a list, and even the options of a select. We must provide a special repeat expression that describes the array to iterate over and the variable that will hold each item in the iteration. The most basic expression format allows us to iterate over an array, attributing each element to a variable:

variable in array

In the following code, we will iterate over the cars array and assign each element to the car variable:

index.html

<!doctype html>
<html ng-app="parking">
  <head>
    <title>[Packt] Parking</title>
    <script src="angular.js"></script>
    <script>
      var parking = angular.module("parking", []);
      parking.controller("parkingCtrl", function ($scope) {
        $scope.appTitle = "[Packt] Parking";
        $scope.cars = [];
      });
    </script>
  </head>
  <body ng-controller="parkingCtrl">
    <h3 ng-bind="appTitle"></h3>
    <table>
      <thead>
        <tr>
          <th>Plate</th>
          <th>Entrance</th>
        </tr>
      </thead>
      <tbody>
        <tr ng-repeat="car in cars">
          <td><span ng-bind="car.plate"></span></td>
          <td><span ng-bind="car.entrance"></span></td>
        </tr>
      </tbody>
    </table>
  </body>
</html>

ngModel

The ngModel directive attaches the element to a property in the scope, binding the view to the model. In this case, the element can be an input (of any type), select, or textarea.

<input type="text" ng-model="car.plate" placeholder="What's the plate?" />

There is an important piece of advice regarding the use of this directive. We must pay attention to the purpose of the field that is using the ngModel directive. Every time the field is part of the construction of an object, we must declare the object to which the property should be attached. In this case, the object that is being constructed is a car, so we use car.plate inside the directive expression. However, sometimes there is an input field that is just used to change a flag, allowing the control of the state of a dialog or another UI component. In these cases, we may use the ngModel directive without any object, as long as it will not be used together with other properties or persisted.

ngClick and other event directives

The ngClick directive is one of the most useful kinds of directives in the framework. It allows you to bind any custom behavior to the click event of an element.
The following code is an example of the usage of the ngClick directive calling a function:

index.html

<!doctype html>
<html ng-app="parking">
  <head>
    <title>[Packt] Parking</title>
    <script src="angular.js"></script>
    <script>
      var parking = angular.module("parking", []);
      parking.controller("parkingCtrl", function ($scope) {
        $scope.appTitle = "[Packt] Parking";
        $scope.cars = [];
        $scope.park = function (car) {
          car.entrance = new Date();
          $scope.cars.push(car);
          delete $scope.car;
        };
      });
    </script>
  </head>
  <body ng-controller="parkingCtrl">
    <h3 ng-bind="appTitle"></h3>
    <table>
      <thead>
        <tr>
          <th>Plate</th>
          <th>Entrance</th>
        </tr>
      </thead>
      <tbody>
        <tr ng-repeat="car in cars">
          <td><span ng-bind="car.plate"></span></td>
          <td><span ng-bind="car.entrance"></span></td>
        </tr>
      </tbody>
    </table>
    <input type="text" ng-model="car.plate" placeholder="What's the plate?" />
    <button ng-click="park(car)">Park</button>
  </body>
</html>

Here there is another pitfall. Inside the ngClick directive, we call the park function, passing the car as a parameter. Since we have access to the scope through the controller, wouldn't it be easier to just access it directly, without passing any parameter at all? Keep in mind that we must take care of the coupling level between the view and the controller. One way to keep it low is to avoid reading the scope object directly from the controller, and instead pass everything the controller needs as parameters from the view. This will increase the controller's testability and also make things clearer and more explicit.

Other directives that have the same behavior, but are triggered by other events, are ngBlur, ngChange, ngCopy, ngCut, ngDblClick, ngFocus, ngKeyPress, ngKeyDown, ngKeyUp, ngMousedown, ngMouseenter, ngMouseleave, ngMousemove, ngMouseover, ngMouseup, and ngPaste.

Filters

Filters are, together with other technologies like directives and expressions, responsible for the extraordinary expressiveness of the framework. They let us easily manipulate and transform any value, not only combined with expressions inside a template, but also injected into other components like controllers and services. They are really useful when we need to format dates and money according to our current locale, or even to support the filtering feature of a grid component. Filters are the perfect answer for easily performing any data manipulation.

currency

The currency filter is used to format a number based on a currency. The basic usage of this filter is without any parameter:

{{ 10 | currency }}

The result of the evaluation will be the number $10.00, formatted and prefixed with the dollar sign. In order to achieve the correct output, in this case R$10,00 instead of $10.00, we need to configure the Brazilian (PT-BR) locale, available inside the AngularJS distribution package. There, we may find locales for most countries, and we just need to import the one we need into our application, such as:

<script src="js/lib/angular-locale_pt-br.js"></script>

After importing the locale, we will not need to use the currency symbol anymore because it's already wrapped inside. Besides the currency, the locale also defines the configuration of many other variables, like the days of the week and the months, which is very useful when combined with the next filter, used to format dates.

date

The date filter is one of the most useful filters of the framework. Generally, a date value comes from the database or any other source in a raw and generic format. In this way, filters like this are essential to any kind of application.
Basically, we can use this filter by declaring it inside any expression. In the following example, we use the filter on a date variable attached to the scope:

{{ car.entrance | date }}

The output will be Dec 10, 2013. However, there are thousands of combinations that we can make with the optional format mask:

{{ car.entrance | date:'MMMM dd/MM/yyyy HH:mm:ss' }}

Using this format, the output changes to December 10/12/2013 21:42:10.

filter

Have you ever needed to filter a list of data? This filter performs exactly this task, acting over an array and applying any filtering criteria. Now, let's include in our car parking application a field to search for any parked car and use this filter to do the job.

index.html

<input type="text" ng-model="criteria" placeholder="What are you looking for?" />
<table>
  <thead>
    <tr>
      <th></th>
      <th>Plate</th>
      <th>Color</th>
      <th>Entrance</th>
    </tr>
  </thead>
  <tbody>
    <tr ng-class="{selected: car.selected}" ng-repeat="car in cars | filter:criteria">
      <td>
        <input type="checkbox" ng-model="car.selected" />
      </td>
      <td>{{car.plate}}</td>
      <td>{{car.color}}</td>
      <td>{{car.entrance | date:'dd/MM/yyyy hh:mm'}}</td>
    </tr>
  </tbody>
</table>

The result is really impressive. With just an input field and the filter declaration, we did the job.

Integrating the backend with AJAX

AJAX, also known as Asynchronous JavaScript and XML, is a technology that allows applications to send and retrieve data from the server asynchronously, without refreshing the page. The $http service wraps the low-level interaction with the XMLHttpRequest object, providing an easy way to perform calls. This service can be called by just passing a configuration object, which is used to set many important options, such as the method, the URL of the requested resource, the data to be sent, and many others:

$http({method: "GET", url: "/resource"})
  .success(function (data, status, headers, config, statusText) {
  })
  .error(function (data, status, headers, config, statusText) {
  });

To make it easier to use, the following shortcut methods are available for this service. In this case, the configuration object is optional.

$http.get(url, [config])
$http.post(url, data, [config])
$http.put(url, data, [config])
$http.head(url, [config])
$http.delete(url, [config])
$http.jsonp(url, [config])

Now, it’s time to integrate our parking application with the back-end by calling the resource cars with the method GET. It will retrieve the cars, binding them to the $scope object. In case something goes wrong, we are going to log it to the console.

controllers.js

parking.controller("parkingCtrl", function ($scope, $http) {
  $scope.appTitle = "[Packt] Parking";
  $scope.park = function (car) {
    car.entrance = new Date();
    $scope.cars.push(car);
    delete $scope.car;
  };
  var retrieveCars = function () {
    $http.get("/cars")
      .success(function (data, status, headers, config) {
        $scope.cars = data;
      })
      .error(function (data, status, headers, config) {
        switch (status) {
          case 401: {
            $scope.message = "You must be authenticated!";
            break;
          }
          case 500: {
            $scope.message = "Something went wrong!";
            break;
          }
        }
        console.log(data, status);
      });
  };
  retrieveCars();
});

Summary

This article introduced you to the fundamentals of AngularJS in order to design and construct reusable, maintainable, and modular web applications.

Resources for Article: Further resources on this subject: AngularJS Project [article] Working with Live Data and AngularJS [article] CreateJS – Performing Animation and Transforming Function [article]
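As a closing note on the $http service used above, saving data back to the server works the same way. The following is only a sketch: the /cars endpoint and its acceptance of the car object as a JSON payload are assumptions carried over from the GET example, not something AngularJS itself defines.

$scope.park = function (car) {
  car.entrance = new Date();
  // Send the new car to the (assumed) backend resource.
  $http.post("/cars", car)
    .success(function (data, status, headers, config) {
      // Only update the scope once the server has confirmed the save.
      $scope.cars.push(car);
      delete $scope.car;
    })
    .error(function (data, status, headers, config) {
      $scope.message = "The car could not be saved!";
    });
};

Updating the scope inside the success callback keeps the table in sync with what the backend actually stored.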

Foundation

Packt
19 Aug 2014
22 min read
In this article by Kevin Horek author of Learning Zurb Foundation, we will be covering the following points: How to move away from showing clients wireframes and how to create responsive prototypes Why these prototypes are better and quicker than doing traditional wireframes The different versions of Foundation What does Foundation include? How to use the documentation How to migrate from an older version Getting support when you can't figure something out What browsers does Foundation support? How to extend Foundation Our demo site (For more resources related to this topic, see here.) Over the last couple of years, showing wireframes to most clients has not really worked well for me. They never seem to quite get it, and if they do, they never seem to fully understand all the functionality through a wireframe. For some people, it is really hard to picture things in their head, they need to see exactly what it will look and function like to truly understand what they are looking at. You should still do a rough wireframe either on paper, on a whiteboard, or on the computer. Then once you and/or your team are happy with these rough wireframes, then jump right into the prototype. Rough wireframing and prototypying You might think prototyping this early on when the client has only seen a sitemap is crazy, but the thing is, once you master Foundation, you can build prototypes in about the same time you would spend doing traditional high quality wireframes in Illustrator or whatever program you currently use. With these prototypes, you can make things clickable, interactive, and super fast to make edits to after you get feedback from the client. With the default Foundation components, you can work out how things will work on a phone, tablet, and desktop/laptop. This way you can work with your team to fully understand how things will function and start seeing where the project's potential issues will be. You can then assign people to start dealing with these potential problems early on in the process. When you are ready to show the client, you can walk them through their project on multiple devices and platforms. You can easily show them what content they are going to need and how that content will flow and reflow based on the medium the user is viewing their project on. You should try to get content as early as possible; a lot of companies are hiring content strategists. These content strategists handle working with the client to get, write, and rework content to fit in the responsive web medium. This allows you to design around a client's content, or at least some of the content. We all know that what a client says they will get you for content is not always what you get, so you may need to tweak the design to fit the actual content you get. Making these theming changes to accommodate these content changes can be a pain, but with Foundation, you can just reflow part of the page and try some ideas out in the prototype before you put them back into the working development site. Once you have built up a bunch of prototypes, you can easily combine and use parts of them to create new designs really fast for current or new projects. When prototyping, you should keep everything grayscale, without custom fonts, or a theme beyond the base Foundation one. These prototypes do not have to look pretty. The less it looks like a full design, the better off you will be. 
You will have to inform your client that an actual design for their project will be coming and that it will be done after they sign off this prototype. When you show the client, you should bring a phone, a tablet, and a laptop to show them how the project will flow on each of these devices. This takes out all the confusion about what happens to the layouts on different screen sizes and on touch and non-touch devices. It also allows your client and your team to fully understand what they are building and how everything works and functions. Trying to take a PDF of wireframes, a Photoshop file, and trying to piece them together to build a responsive web project can be really challenging. With this approach, so many details can get lost in translation, you have to keep going back to talk to the client or your team about how certain things should work or function. Even worse, you have to make huge changes to a section close to the end of the project because something was designed without being really thought through and now your developers have to scramble to make something work within the budget. Prototyping can sort out all the issues or at least the major issues that could arise in the project. With these Foundation prototypes, you keep building on the code for each step of the web building process. Your designer can work with your frontend/backend team to come up with a prototype that everyone is happy with and commit to being able to build it before the client sees anything. If you are familiar with version control, you can use it to keep track of your prototypes and collaborate with another person or a team of people. The two most popular version control software applications are Git (http://git-scm.com/) and Subversion (http://subversion.apache.org/). Git is the more popular of the two right now; however, if you are working on a project that has been around for a number of years, chances are that it will be in Subversion. You can migrate from one to the other, but it might take a bit of work. These prototypes keep your team on the same page right from the beginning of the project and allow the client to sign off on functionality and how the project will work on different mediums. Yes, you are spending more time at the beginning getting everyone on the same page and figuring out functionality early on, but this process should sort out all the confusion later in a project and save you time and money at the end of the project. When the client has changes that are out of scope, it is easy to reference back to the prototype and show them how that change will impact what they signed off on. If the change is major enough then you will need to get them a cost on making that change happen. You should test your prototypes on an iPhone, an Android phone, an iPad, and your desktop or laptop. I would also figure out what browser your client uses and make sure you test on that as well. If they are using an older version of IE, 8 or earlier, you will need to have the conversation with them about how Foundation 4+ does not support IE8. If that support is needed, you will have to come up with a solution to handle this outdated version of IE. Looking at a client's analytics to see what versions of IE their clients are coming to the project with will help you decide how to handle older versions of IE. Analytics might tell you that you can drop the version all together. 
Another great component that is included with Foundation is Modernizr (http://modernizr.com/); this allows you to write conditional JS and/or CSS for a specific situation or browser version. This really can be a lifesaver. Prototyping smaller projects While you are learning Foundation, you might think that using Foundation on a smaller project will eat up your entire budget. However, these are the best projects to learn Foundation. Basically, you take the prototype to a place where you can show a client the rough look and feel using Foundation. Then, you create a theme board in Photoshop with colors, fonts, photos and anything else to show the client. This first version will be a grayscale prototype that will function across multiple screen sizes. Then you can pull up your theme board to show the direction you are thinking of for the look and feel. If you still feel more comfortable doing your designs in Photoshop, there are some really good Photoshop grid templates at http://www.yeedeen.com/downloads/category/30-psd. If you want to create a custom grid that you can take a screenshot of, then paste into Photoshop, and then drag your guidelines over the grid to make your own template, you can refer to http://www.gridlover.net/foundation/. Prototyping wrap-up These methods are not perfect and may not always work for you, but you're going to see my workflow and how Foundation can be used on all of your web projects. You will figure out what will work with your clients, your projects, and your workflow. Also, you might have slightly different workflows based on the type of project, and/or project budget. If the client does not see value in having a responsive site, you should choose if you want to work with these types of clients. The Web is not one standard resolution anymore and it never will be again, so if a client does not understand that, you might want to consider not working with them. These types of clients are usually super hard to work with and your time is better spent on clients that get or are willing to allow you to teach them and trust you that you are building their project for the modern Web. Personally, clients that have fought with me to not be responsive usually come back a few months later wondering why their site does not work great on their new smartphone or tablet and wanting you to fix it. So try and address this up front and it will save you grief later on and make your clients happier and their experience better. Like anything, there are exceptions to this but just make sure you have a contract in place to outline that you are not building this as responsive, and that it could cause the client a lot of grief and costs later to go back and make it responsive. No matter what you do for a client, you should have a contract in place, this will just make sure you both understand what is each party responsible for. Personally, I like to use a modified version of, (https://gist.github.com/malarkey/4031110). This contract does not have any legal mumbo jumbo that people do not understand. It is written in plain English and has a little bit of a less serious tone. Now that we have covered why prototyping with Foundation is faster than doing wireframes or prototypes in Photoshop, let's talk about what comes in the base Foundation framework. Then we will cover which version to install, and then go through each file and folder. Introducing the framework Before we get started, please refer to the http://foundation.zurb.com/develop/download.html webpage. 
You will see that there are four versions of Foundation: complete, essentials, custom, and SCSS. But let's talk about the other versions. The essentials is just a smaller version of Foundation that does not include all the components of the framework; this version is a barebones version. Once you are familiar with Foundation, you will likely only include the components that you need for a specific project. By only including the components you need, you can speed up the load time of your project and you do not make the user download files that are not being used by your project. The custom version allows you to pick the components and basic sizes, colors, radius, and text direction. You will likely use this or the SCSS version of Foundation once you are more comfortable with the framework. The SCSS or Sass version of Foundation is the most powerful version. If you do not know what Sass is, it basically gives you additional features of CSS that can speed up how you theme your projects. There is actually another version of Foundation that is not listed on this page, which can be found by hitting the blue Getting Started option in the top right-corner and then clicking on App Guide under Building and App. You can also visit this version at http://foundation.zurb.com/docs/applications.html. This version is the Ruby Gem version of Foundation, and unless you are building a Ruby on Rails project, you will never use this version of Foundation. Zurb keeps the gem pretty up to date, you will likely get the new version of the gem about a week or two after the other versions come out. Alright, let's get into Foundation. If you have not already, hit the blue Download Everything button below the complete heading on the webpage. We will be building a one page demo site from the base Foundation theme that you just downloaded. This way, you can see how to take what you are given by default and customize this base theme to look anyway you want it to. We will give this base theme a custom look and feel, and make it look like you are not using a responsive framework at all. The only way to tell is if you view the source of the website. The Zurb components have very little theming applied to them. This allows you to not have to worry about really overriding the CSS code and you can just start adding additional CSS to customize these components. We will cover how to use all the major components of the framework, you will have an advanced understanding of the framework and how you can use it on all your projects going forward. Foundation has been used on small-to-large websites, web apps, at startups, with content management systems, and with enterprise-level applications. Going over the base theme The base theme that you download is made up of an HTML index file, a folder of CSS files, JavaScript files, and an empty img folder for images, which are explained in the following points: The index.html file has a few Foundation components to get you started. You have three, 12- column grids at three screen sizes; small, medium, and large. You can also control how many columns are in the grid, and the spacing (also called the gutter) between the columns, and how to use the other grid options. You will soon notice that you have full control over pretty much anything and you can control how things are rendered on any screen size or device, and whether that device is in portrait or landscape. You also have the ability to render different code on different devices and for different screen sizes. 
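If you have not used the Foundation grid before, here is a rough sketch of what that markup looks like. The column splits and the visibility class below are illustrative choices for this example, not what ships in the downloaded index.html; the class names themselves are the standard Foundation 5 grid and visibility classes.

<div class="row">
  <div class="small-12 medium-6 large-4 columns">
    <!-- Full width on phones, half width on tablets, a third on desktops -->
    <p>Main content goes here.</p>
  </div>
  <div class="small-12 medium-6 large-8 columns">
    <p class="show-for-medium-up">This paragraph is hidden on small screens.</p>
    <p>Secondary content goes here.</p>
  </div>
</div>

Classes such as small-12, medium-6, and large-4 are how the same markup reflows at each breakpoint, which is exactly the control over screen sizes described above.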
In the CSS folder, there is the un-minified version of Foundation with the filename foundation.css. There is also a minified version of Foundation with the filename foundation.min.css. If you are not familiar with minification, it has the same code as the foundation.css file, just all the spacing, comments, and code formatting have been taken out. This makes the file really hard to read and edit, but the file size is smaller and will speed up your project's load time. Most of the time, minified files have all the code on one really long line. You should use the foundation.css file as reference but actually include the minified one in your project. The minified version makes debugging and error fixing almost impossible, so we use the un-minified version for development and then the minified version for production. The last file in that folder is normalize.css; this file can be called a reset file, but it is more of an alternative to a reset file. This file is used to try to set defaults on a bunch of CSS elements and tries to get all the browsers to be set to the same defaults. The thinking behind this is that every browser will look and render things the same, and, therefore, there should not be a lot of specific theming fixes for different browsers. These types of files do a pretty good job but are not perfect and you will have to do little fixes for different browsers, even the modern ones. We will also cover how to use some extra CSS to take resetting certain elements a little further than the normalize file does for you. This will mainly include showing you how to render form elements and buttons to be the same across-browser and device. We will also talk about, browser version, platform, OS, and screen resolution detection when we talk about testing. We will also be adding our own CSS file that we will add our customizations to, so if you ever decide to update the framework as a new version comes out, you will not have to worry about overriding your changes. We will never add or modify the core files of the framework; I highly recommend you do not do this either. Once we get into Sass, we will cover how you can really start customizing the framework defaults using the custom variables that are built right into Foundation. These variables are one of the reasons that Foundation is the most advanced responsive framework out there. These variables are super powerful and one of my favorite things about Foundation. Once you understand how to use variables, you can write your own or you can extend your setup of Foundation as much as you like. In the JS folder, you will find a few files and some folders. In the Foundation folder, you will find each of the JavaScript components that you need to make Foundation work properly cross-device, browser, and responsive. These JavaScript components can also be use to extend Foundation's functionality even further. You can only include the components that you need in your project. This allows you to keep the framework lean and can help with load times; this is especially useful on mobile. You can also use CSS to theme each of these components to be rendered differently on each device or at different screen sizes. The foundation.min.js file is a minified version of all the files in the Foundation folder. You can decide based on your needs whether you want to include only the JavaScripts you are using on that project or whether you want to include them all. When you are learning, you should include them all. 
When you are comfortable with the framework and are ready to make your project live, you should only include the JavaScripts you are actually using. This helps with load times and can make troubleshooting easier. Many of the Foundation components will not work without including the JavaScript for that component. The next file you will notice is jquery.js it might be either in the root of this folder or in the vendor folder if you are using a newer version of Foundation 5. If you are not familiar with jQuery, it is a JavaScript library that makes DOM manipulation, event handling, animation, and Ajax a lot easier. It also makes all of this stuff work cross-browser and cross-device. The next file in the JS folder or in the vendor folder under JS is modernizr.js; this file helps you to write conditional JavaScript and/or CSS to make things work cross-browser and to make progressive enhancements. Also, you put third-party JavaScript libraries that you are using on your project in the vendor folder. These are libraries that you either wrote yourself or found online, are not part of Foundation, and are not required for Foundation to work properly. Referring to the Foundation documentation The Foundation documentation is located at http://foundation.zurb.com/docs/. Foundation is really well documented and provides a lot of code samples and examples to use in your own projects. All the components also contain Sass variables that you can use to customize some of the defaults and even build your own. This saves you writing a bunch of override CSS classes. Each part of the framework is listed on the left-hand side and you can click on what you are looking for. You are taken to a page about that specific part and can read the section's overview, view code samples, working examples, and how to customize that part of the framework. Each section has a pretty good walk through about how to use each piece. Zurb is constantly updating Foundation, so you should check the change log every once in a while at http://foundation.zurb.com/docs/changelog.html. If you need documentation on an older version of Foundation, it is at the bottom of the documentation site in the left-hand column. Zurb keeps all the documentation back to Foundation 2. The only reason you will ever need to use Foundation 2 is if you need to support a really, really old version of IE, such as version 7. Foundation never supported IE6, but you will likely never have to worry about that version of IE. Migrating to a newer version of Foundation If you have an older version of Foundation, each version has a migration guide. The migration guide from Foundation 4 to 5 can be found at http://foundation.zurb.com/docs/upgrading.html. Personally, I have migrated websites and web apps in multiple languages and as long as Zurb does not change the grid, like they did from Foundation 3 to 4, then usually we copy-and-paste over the old version of the Foundation CSS, JavaScript, and images. You will likely have to change some JavaScript calls, do some testing, and do some minor fixes here and there, but it is usually a pretty smooth process as long as you did not modify the core framework or write a bunch of custom overrides. If you did either of these things, you will be in for a lot of work or a full rebuild of your project, so you should never modify the core. 
For old versions of Foundation, or if your version has been heavily modified, it might be easier to start with a fresh version of Foundation and copy-and-paste in the parts that you want to still use. Personally, I have done both and it really depends on the project. Before you do any migration, make sure you are using some sort of version control, such as GIT. If you do not know what GIT is, you should look into it. Here is a good place to start: (http://git-scm.com/book/en/Getting-Started) GIT has saved me from losing code so many times. If GIT is a little overwhelming right now, at the very least, duplicate your project folder as a backup and then copy in the new version of the framework over your files. If things are really broken, you can at least still use your old version while you work out the kinks in the new version. Framework support At some point, you will likely have questions about something in the framework, or will be trying to get something to work and for some reason, you can't figure it out. Foundation has multiple ways to get support, some of which are listed as follows: E-mail Twitter GitHub StackOverflow Forums To visit or get in-touch with support go to http://foundation.zurb.com/support/support.html. Browser support Foundation 5 supports the majority of browsers and devices, but like anything modern, it drops support for older browser versions. If you need IE8 or cringe, or IE7 support, you will need to use an older version of Foundation. You can see a full browser and device compatibility list at http://foundation.zurb.com/docs/compatibility.html. Extending Foundation Zurb also builds a bunch of other components that usually make their way into Foundation at some point, and work well with Foundation even though they are not officially part of it. These components range from new JavaScript libraries, fonts, icons, templates, and so on. You can visit their playground at http://zurb.com/playground. This playground also has other great resources and tools that you can use on other projects and other mediums. The things at Zurb's playground can make designing with Foundation a lot easier, even if you are not a designer. It can take quite a while to find icons or make them into SVGs or fonts for use in your projects, but Zurb has provided these in their playground. Overview of our one-page demo website The best way to show you how to learn the Zurb Foundation Responsive Framework is to actually get you building a demo site along with me. You can visit the final demo site we will be building at http://www.learningzurbfoundation.com/demo. We will be taking the base starter theme that we downloaded and making a one-page demo site. The demo site is built to teach you how to use the components and how they work together. You can also add outside components, but you can try those on your own. The demo site will show you how to build a responsive website, and it might not look like an ideal site, but I am trying to use as many components as possible to show you how to use the framework. Once you complete this site, you will have a deep understanding of the framework. You can then use this site as a starter theme or at the very least, as a reference for all your Foundation projects going forward. Summary In this article, we covered how to rough wireframe and quickly moved into prototyping. 
We also covered the following points: We went over what is included in the base Foundation theme Explored the documentation and how to migrate Foundation versions How to get framework support Started to get you thinking about browser support Letting you know that you can extend Foundation beyond its defaults We quickly covered our one-page demo site Resources for Article: Further resources on this subject: Quick start – using Foundation 4 components for your first website [Article] Zurb Foundation – an Overview [Article] Best Practices for Modern Web Applications [Article]

The Bootstrap grid system

Packt
19 Aug 2014
3 min read
This article is written by Pieter van der Westhuizen, the author of Bootstrap for ASP.NET MVC. Many websites are reporting an increasing amount of mobile traffic, and this trend is expected to continue over the coming years. The Bootstrap grid system is mobile-first, which means it is designed to target devices with smaller displays and then grow as the display size increases. Fortunately, this is not something you need to be too concerned about, as Bootstrap takes care of most of the heavy lifting. (For more resources related to this topic, see here.)

Bootstrap grid options

Bootstrap 3 introduced a number of predefined grid classes in order to specify the sizes of columns in your design. These class names are listed in the following table:

Class name | Type of device | Resolution | Container width | Column width
col-xs-* | Phones | Less than 768 px | Auto | Auto
col-sm-* | Tablets | Larger than 768 px | 750 px | 60 px
col-md-* | Desktops | Larger than 992 px | 970 px | 78 px
col-lg-* | High-resolution desktops | Larger than 1200 px | 1170 px | 95 px

The Bootstrap grid is divided into 12 columns. When laying out your web page, keep in mind that all columns combined should add up to a total of 12. To illustrate this, consider the following HTML code:

<div class="container">
  <div class="row">
    <div class="col-md-3" style="background-color:green;"><h3>green</h3></div>
    <div class="col-md-6" style="background-color:red;"><h3>red</h3></div>
    <div class="col-md-3" style="background-color:blue;"><h3>blue</h3></div>
  </div>
</div>

In the preceding code, we have a <div> element, container, with one child <div> element, row. The row div element in turn has three columns. You will notice that two of the columns have a class name of col-md-3 and one of the columns has a class name of col-md-6. When combined, they add up to 12. The preceding code will work well on all devices with a resolution of 992 pixels or higher. To preserve the preceding layout on devices with smaller resolutions, you'll need to combine the various CSS grid classes. For example, to allow our layout to work on tablets, phones, and medium-sized desktop displays, change the HTML to the following code:

<div class="container">
  <div class="row">
    <div class="col-xs-3 col-sm-3 col-md-3" style="background-color:green;"><h3>green</h3></div>
    <div class="col-xs-6 col-sm-6 col-md-6" style="background-color:red;"><h3>red</h3></div>
    <div class="col-xs-3 col-sm-3 col-md-3" style="background-color:blue;"><h3>blue</h3></div>
  </div>
</div>

By adding the col-xs-* and col-sm-* class names to the div elements, we'll ensure that our layout will appear the same in a wide range of device resolutions.

Bootstrap HTML elements

Bootstrap provides a host of different HTML elements that are styled and ready to use. These elements include the following:

Tables
Buttons
Forms
Images
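To give a rough idea of how these ready-made styles are applied, here is a small illustrative sketch. The class names are standard Bootstrap 3 classes; the content, the email field, and the photo.jpg path are made up for the example.

<table class="table table-striped">
  <thead><tr><th>Product</th><th>Price</th></tr></thead>
  <tbody><tr><td>Widget</td><td>$10</td></tr></tbody>
</table>

<form role="form">
  <div class="form-group">
    <label for="email">Email address</label>
    <input type="email" class="form-control" id="email" placeholder="Enter email">
  </div>
  <button type="submit" class="btn btn-primary">Submit</button>
</form>

<img src="photo.jpg" class="img-responsive" alt="A responsive image">

Adding the appropriate class is usually all that is required; Bootstrap's stylesheet takes care of the rest.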

Shapefiles in Leaflet

Packt
18 Aug 2014
5 min read
This article written by Paul Crickard III, the author of Leaflet.js Essentials, describes the use of shapefiles in Leaflet. It shows us how a shapefile can be used to create geographical features on a map. This article explains how shapefiles can be used to add a pop up or for styling purposes. (For more resources related to this topic, see here.) Using shapefiles in Leaflet A shapefile is the most common geographic file type that you will most likely encounter. A shapefile is not a single file, but rather several files used to create geographic features on a map. When you download a shapefile, you will have .shp, .shx, and .dbf at a minimum. These files are the shapefiles that contain the geometry, the index, and a database of attributes. Your shapefile will most likely include a projection file (.prj) that will tell that application the projection of the data so the coordinates make sense to the application. In the examples, you will also have a .shp.xml file that contains metadata and two spatial index files, .sbn and .sbx. To find shapefiles, you can usually search for open data and a city name. In this example, we will be using a shapefile from ABQ Data, the City of Albuquerque data portal. You can find more data on this at http://www.cabq.gov/abq-data. When you download a shapefile, it will most likely be in the ZIP format because it will contain multiple files. To open a shapefile in Leaflet using the leaflet-shpfile plugin, follow these steps: First, add references to two JavaScript files. The first, leaflet-shpfile, is the plugin, and the second depends on the shapefile parser, shp.js: <script src="leaflet.shpfile.js"></script> <script src="shp.js"></script> Next, create a new shapefile layer and add it to the map. Pass the layer path to the zipped shapefile: var shpfile = new L.Shapefile('council.zip'); shpfile.addTo(map); Your map should display the shapefile as shown in the following screenshot: Performing the preceding steps will add the shapefile to the map. You will not be able to see any individual feature properties. When you create a shapefile layer, you specify the data, followed by specifying the options. The options are passed to the L.geoJson class. The following code shows you how to add a pop up to your shapefile layer: var shpfile = new L.Shapefile('council.zip',{onEachFeature:function(feature, layer) { layer.bindPopup("<a href='"+feature.properties.WEBPAGE+"'>Page</a><br><a href='"+feature. properties.PICTURE+"'>Image</a>"); }}); In the preceding code, you pass council.zip to the shapefile, and for options, you use the onEachFeature option, which takes a function. In this case, you use an anonymous function and bind the pop up to the layer. In the text of the pop up, you concatenate your HTML with the name of the property you want to display using the format feature.properties.NAME-OF-PROPERTY. To find the names of the properties in a shapefile, you can open .dbf and look at the column headers. However, this can be cumbersome, and you may want to add all of the shapefiles in a directory without knowing its contents. 
If you do not know the names of the properties for a given shapefile, the following example shows you how to get them and then display them with their values in a pop up:

var shpfile = new L.Shapefile('council.zip', {onEachFeature: function (feature, layer) {
  var holder = [];
  for (var key in feature.properties) {
    holder.push(key + ": " + feature.properties[key] + "<br>");
  }
  var popupContent = holder.join("");
  layer.bindPopup(popupContent);
}});
shpfile.addTo(map);

In the preceding code, you first create an array to hold all of the lines in your pop up, one for each key/value pair. Next, you run a for loop that iterates through the object, grabbing each key and concatenating the key name with the value and a line break. You push each line into the array and then join all of the elements into a single string. When you use the .join() method, it will separate each element of the array in the new string with a comma. You can pass empty quotes to remove the comma. Lastly, you bind the pop up with the string as the content and then add the shapefile to the map. You now have a map that looks like the following screenshot:

The shapefile also takes a style option. You can pass any of the path class options, such as the color, opacity, or stroke, to change the appearance of the layer. The following code creates a red polygon with a black outline and sets it slightly transparent:

var shpfile = new L.Shapefile('council.zip', {style: function (feature) {
  return {color: "black", fillColor: "red", fillOpacity: .75};
}});

Summary

In this article, we learned how shapefiles can be added to a geographical map. We learned how pop ups are added to the maps. This article also showed how these pop ups would look once added to the map. You will also learn how to connect to an ESRI server that has an exposed REST service.

Resources for Article: Further resources on this subject: Getting started with Leaflet [Article] Using JavaScript Effects with Joomla! [Article] Quick start [Article]

Indexes

Packt
23 Jul 2014
8 min read
(For more resources related to this topic, see here.) As a database administrator (DBA) or developer, one of your most important goals is to ensure that the query times are consistent with the service-level agreement (SLA) and are meeting user expectations. Along with other performance enhancement techniques, creating indexes for your queries on underlying tables is one of the most effective and common ways to achieve this objective. The indexes of underlying relational tables are very similar in purpose to an index section at the back of a book. For example, instead of flipping through each page of the book, you use the index section at the back of the book to quickly find the particular information or topic within the book. In the same way, instead of scanning each individual row on the data page, SQL Server uses indexes to quickly find the data for the qualifying query. Therefore, by indexing an underlying relational table, you can significantly enhance the performance of your database. Indexing affects the processing speed for both OLTP and OLAP and helps you achieve optimum query performance and response time. The cost associated with indexes SQL Server uses indexes to optimize overall query performance. However, there is also a cost associated with indexes; that is, indexes slow down insert, update, and delete operations. Therefore, it is important to consider the cost and benefits associated with indexes when you plan your indexing strategy. How SQL Server uses indexes A table that doesn't have a clustered index is stored in a set of data pages called a heap. Initially, the data in the heaps is stored in the order in which the rows are inserted into the table. However, SQL Server Database Engine moves the data around the heap to store the rows efficiently. Therefore, you cannot predict the order of the rows for heaps because data pages are not sequenced in any particular order. The only way to guarantee the order of the rows from a heap is to use the SELECT statement with the ORDER BY clause. Access without an index When you access the data, SQL Server first determines whether there is a suitable index available for the submitted SELECT statement. If no suitable index is found for the submitted SELECT statement, SQL Server retrieves the data by scanning the entire table. The database engine begins scanning at the physical beginning of the table and scans through the full table page by page and row by row to look for qualifying data that is specified in the submitted SELECT statement. Then, it extracts and returns the rows that meet the criteria in the format specified in the submitted SELECT statement. Access with an index The process is improved when indexes are present. If the appropriate index is available, SQL Server uses it to locate the data. An index improves the search process by sorting data on key columns. The database engine begins scanning from the first page of the index and only scans those pages that potentially contain qualifying data based on the index structure and key columns. Finally, it retrieves the data rows or pointers that contain the locations of the data rows to allow direct row retrieval. The structure of indexes In SQL Server, all indexes—except full-text, XML, in-memory optimized, and columnstore indexes—are organized as a balanced tree (B-tree). 
This is because full-text indexes use their own engine to manage and query full-text catalogs, XML indexes are stored as internal SQL Server tables, in-memory optimized indexes use the Bw-tree structure, and columnstore indexes utilize SQL Server in-memory technology. In the B-tree structure, each page is called a node. The top page of the B-tree structure is called the root node. Non-leaf nodes, also referred to as intermediate levels, are hierarchical tree nodes that comprise the index sort order. Non-leaf nodes point to other non-leaf nodes that are one step below in the B-tree hierarchy, until reaching the leaf nodes. Leaf nodes are at the bottom of the B-tree hierarchy. The following diagram illustrates the typical B-tree structure: Index types In SQL Server 2014, you can create several types of indexes. They are explored in the next sections. Clustered indexes A clustered index sorts table or view rows in the order based on clustered index key column values. In short, a leaf node of a clustered index contains data pages, and scanning them will return the actual data rows. Therefore, a table can have only one clustered index. Unless explicitly specified as nonclustered, SQL Server automatically creates the clustered index when you define a PRIMARY KEY constraint on a table. When should you have a clustered index on a table? Although it is not mandatory to have a clustered index per table, according to the TechNet article, Clustered Index Design Guidelines, with few exceptions, every table should have a clustered index defined on the column or columns that used as follows: The table is large and does not have a nonclustered index. The presence of a clustered index improves performance because without it, all rows of the table will have to be read if any row needs to be found. A column or columns are frequently queried, and data is returned in a sorted order. The presence of a clustered index on the sorting column or columns prevents the sorting operation from being started and returns the data in the sorted order. A column or columns are frequently queried, and data is grouped together. As data must be sorted before it is grouped, the presence of a clustered index on the sorting column or columns prevents the sorting operation from being started. A column or columns data that are frequently used in queries to search data ranges from the table. The presence of clustered indexes on the range column will help avoid the sorting of the entire table data. Nonclustered indexes Nonclustered indexes do not sort or store the data of the underlying table. This is because the leaf nodes of the nonclustered indexes are index pages that contain pointers to data rows. SQL Server automatically creates nonclustered indexes when you define a UNIQUE KEY constraint on a table. A table can have up to 999 nonclustered indexes. You can use the CREATE INDEX statement to create clustered and nonclustered indexes. A detailed discussion on the CREATE INDEX statement and its parameters is beyond the scope of this article. For help with this, refer to the CREATE INDEX (Transact-SQL) article at http://msdn.microsoft.com/en-us/library/ms188783.aspx. SQL Server 2014 also supports new inline index creation syntax for standard, disk-based database tables, temp tables, and table variables. For more information, refer to the CREATE TABLE (SQL Server) article at http://msdn.microsoft.com/en-us/library/ms174979.aspx. Single-column indexes As the name implies, single-column indexes are based on a single-key column. 
You can define it as either clustered or nonclustered. You cannot drop the index key column or change the data type of the underlying table column without dropping the index first. Single-column indexes are useful for queries that search data based on a single column value. Composite indexes Composite indexes include two or more columns from the same table. You can define composite indexes as either clustered or nonclustered. You can use composite indexes when you have two or more columns that need to be searched together. You typically place the most unique key (the key with the highest degree of selectivity) first in the key list. For example, examine the following query that returns a list of account numbers and names from the Purchasing.Vendor table, where the name and account number starts with the character A: USE [AdventureWorks2012]; SELECT [AccountNumber] , [Name] FROM [Purchasing].[Vendor] WHERE [AccountNumber] LIKE 'A%' AND [Name] LIKE 'A%'; GO If you look at the execution plan of this query without modifying the existing indexes of the table, you will notice that the SQL Server query optimizer uses the table's clustered index to retrieve the query result, as shown in the following screenshot: As our search is based on the Name and AccountNumber columns, the presence of the following composite index will improve the query execution time significantly: USE [AdventureWorks2012]; GO CREATE NONCLUSTERED INDEX [AK_Vendor _ AccountNumber_Name] ON [Purchasing].[Vendor] ([AccountNumber] ASC, [Name] ASC) ON [PRIMARY]; GO Now, examine the query execution plan of this query once again, after creating the previous composite index on the Purchasing.Vendor table, as shown in the following screenshot: As you can see, SQL Server performs a seek operation on this composite index to retrieve the qualifying data. Summary Thus we have learned what indexes are, how SQL Server uses indexes, structure of indexes, and some of the types of indexes. Resources for Article: Further resources on this subject: Easily Writing SQL Queries with Spring Python [article] Manage SQL Azure Databases with the Web Interface 'Houston' [article] VB.NET Application with SQL Anywhere 10 database: Part 1 [article]

Tuning Solr JVM and Container

Packt
22 Jul 2014
6 min read
(For more resources related to this topic, see here.) Some of these JVMs are commercially optimized for production usage; you may find comparison studies at http://dior.ics.muni.cz/~makub/java/speed.html. Some of the JVM implementations provide server versions, which would be more appropriate than normal ones. Since Solr runs in JVM, all the standard optimizations for applications are applicable to it. It starts with choosing the right heap size for your JVM. The heap size depends upon the following aspects: Use of facets and sorting options Size of the Solr index Update frequencies on Solr Solr cache Heap size for JVM can be controlled by the following parameters: Parameter Description -Xms This is the minimum heap size required during JVM initialization, that is, container -Xmx This is the maximum heap size up to which the JVM or J2EE container can consume Deciding heap size Heap in JVM contributes as a major factor while optimizing the performance of any system. JVM uses heap to store its objects, as well as its own content. Poor allocation of JVM heap results in Java heap space OutOfMemoryError thrown at runtime crashing the application. When the heap is allocated with less memory, the application takes a longer time to initialize, as well as slowing the execution speed of the Java process during runtime. Similarly, higher heap size may underutilize expensive memory, which otherwise could have been used by the other application. JVM starts with initial heap size, and as the demand grows, it tries to resize the heap to accommodate new space requirements. If a demand for memory crosses the maximum limit, JVM throws an Out of Memory exception. The objects that expire or are unused, unnecessarily consume memory in JVM. This memory can be taken back by releasing these objects by a process called garbage collection. Although it's tricky to find out whether you should increase or reduce the heap size, there are simple ways that can help you out. In a memory graph, typically, when you start the Solr server and run your first query, the memory usage increases, and based on subsequent queries and memory size, the memory graph may increase or remain constant. When garbage collection is run automatically by the JVM container, it sharply brings down its usage. If it's difficult to trace GC execution from the memory graph, you can run Solr with the following additional parameters: -Xloggc:<some file> -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails If you are monitoring the heap usage continuously, you will find a graph that increases and decreases (sawtooth); the increase is due to the querying that is going on consistently demanding more memory by your Solr cache, and decrease is due to GC execution. In a running environment, the average heap size should not grow over time or the number of GC runs should be less than the number of queries executed on Solr. If that's not the case, you will need more memory. Features such as Solr faceting and sorting requires more memory on top of traditional search. If memory is unavailable, the operating system needs to perform hot swapping with the storage media, thereby increasing the response time; thus, users find huge latency while searching on large indexes. Many of the operating systems allow users to control swapping of programs. How can we optimize JVM? Whenever a facet query is run in Solr, memory is used to store each unique element in the index for each field. 
So, for example, a search over a small set of facet values (a year from 1980 to 2014) will consume less memory than a search over a larger set of facet values, such as people's names (which vary from person to person). To reduce the memory usage, you may set the term index divisor to 2 (the default is 4) by setting the following in solrconfig.xml:

<indexReaderFactory name="IndexReaderFactory" class="solr.StandardIndexReaderFactory">
  <int name="setTermIndexDivisor">2</int>
</indexReaderFactory>

From Solr 4.x onwards, the ability to set the min and max (term index divisor) block sizes is not available. This will reduce the memory usage for storing all the terms to half; however, it will double the seek time for terms and will have a small impact on your search runtime. One of the causes of a large heap is the size of the index, so one solution is to introduce SolrCloud and distribute the large index across multiple shards. This will not reduce your memory requirement, but will spread it across the cluster. You can look at some of the optimized GC parameters described on the http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning page. Similarly, Oracle provides a GC tuning guide for advanced development stages, and it can be seen at http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html. Additionally, you can look at the Solr performance problems at http://wiki.apache.org/solr/SolrPerformanceProblems.

Optimizing JVM container

JVM containers allow users to have their requests served in threads. This in turn enables the JVM to support concurrent sessions created for different users connecting at the same time. The concurrency can, however, be controlled to reduce the load on the search server. If you are using Apache Tomcat, you can modify the relevant entries in server.xml to change the number of concurrent connections. Similarly, in Jetty, you can control the number of connections held by modifying jetty.xml. Other containers have equivalent configuration files that can be adjusted in the same way. Many containers provide a cache on top of the application to avoid server hits. This cache can be utilized for static pages such as the search page. Containers such as Weblogic provide a development versus production mode. Typically, a development mode runs with 15 threads and a limited JDBC pool size by default, whereas, for a production mode, this can be increased. For tuning containers, besides standard optimization, specific performance-tuning guidelines should be followed, as shown in the following table:

Container | Performance tuning guide
Jetty | http://wiki.eclipse.org/Jetty/Howto/High_Load
Tomcat | http://www.mulesoft.com/tcat/tomcat-performance and http://javamaster.wordpress.com/2013/03/13/apache-tomcat-tuning-guide/
JBoss | https://access.redhat.com/site/documentation/en-US/JBoss_Enterprise_Application_Platform/5/pdf/Performance_Tuning_Guide/JBoss_Enterprise_Application_Platform-5-Performance_Tuning_Guide-en-US.pdf
Weblogic | http://docs.oracle.com/cd/E13222_01/wls/docs92/perform/WLSTuning.html
Websphere | http://www.ibm.com/developerworks/websphere/techjournal/0909_blythe/0909_blythe.html

Apache Solr works better with the default container it ships with, Jetty, since it offers a small footprint compared to other containers such as JBoss and Tomcat, for which the memory required is a little higher.

Summary

In this article, we have learned about Apache Solr, which runs on the underlying JVM in a J2EE container, and about tuning both the JVM and the container.
Resources for Article: Further resources on this subject: Apache Solr: Spellchecker, Statistics, and Grouping Mechanism [Article] Getting Started with Apache Solr [Article] Apache Solr PHP Integration [Article]

Using React.js without JSX

Richard Feldman
30 Jun 2014
6 min read
React.js was clearly designed with JSX in mind, however, there are plenty of good reasons to use React without it. Using React as a standalone library lets you evaluate the technology without having to spend time learning a new syntax. Some teams—including my own—prefer to have their entire frontend code base in one compile-to-JavaScript language, such as CoffeeScript or TypeScript. Others might find that adding another JavaScript library to their dependencies is no big deal, but adding a compilation step to the build chain is a deal-breaker. There are two primary drawbacks to eschewing JSX. One is that it makes using React significantly more verbose. The other is that the React docs use JSX everywhere; examples demonstrating vanilla JavaScript are few and far between. Fortunately, both drawbacks are easy to work around. Translating documentation The first code sample you see in the React Documentation includes this JSX snippet: /** @jsx React.DOM */ React.renderComponent( <h1>Hello, world!</h1>, document.getElementById('example') ); Suppose we want to see the vanilla JS equivalent. Although the code samples on the React homepage include a helpful Compiled JS tab, the samples in the docs—not to mention React examples you find elsewhere on the Web—will not. Fortunately, React’s Live JSX Compiler can help. To translate the above JSX into vanilla JS, simply copy and paste it into the left side of the Live JSX Compiler. The output on the right should look like this: /** @jsx React.DOM */ React.renderComponent( React.DOM.h1(null, "Hello, world!"), document.getElementById('example') ); Pretty similar, right? We can discard the comment, as it only represents a necessary directive in JSX. When writing React in vanilla JS, it’s just another comment that will be disregarded as usual. Take a look at the call to React.renderComponent. Here we have a plain old two-argument function, which takes a React DOM element (in this case, the one returned by React.DOM.h1) as its first argument, and a regular DOM element (in this case, the one returned by document.getElementById('example')) as its second. jQuery users should note that the second argument will not accept jQuery objects, so you will have to extract the underlying DOM element with $("#example")[0] or something similar. The React.DOM object has a method for every supported tag. In this case we’re using h1, but we could just as easily have used h2, div, span, input, a, p, or any other supported tag. The first argument to these methods is optional; it can either be null (as in this case), or an object specifying the element’s attributes. This argument is how you specify things like class, ID, and so on. The second argument is either a string, in which case it specifies the object’s text content, or a list of child React DOM elements. Let’s put this together with a more advanced example, starting with the vanilla JS: React.DOM.form({className:"commentForm"}, React.DOM.input({type:"text", placeholder:"Your name"}), React.DOM.input({type:"text", placeholder:"Say something..."}), React.DOM.input({type:"submit", value:"Post"}) ) For the most part, the attributes translate as you would expect: type, value, and placeholder do exactly what they would do if used in HTML. The one exception is className, which you use in place of the usual class. The above is equivalent to the following JSX: /** @jsx React.DOM */ <form className="commentForm"> <input type="text" placeholder="Your name" /> <input type="text" placeholder="Say something..." 
/> <input type="submit" value="Post" /> </form> This JSX is a snippet found elsewhere in the React docs, and again you can view its vanilla JS equivalent by pasting it into the Live JSX Compiler. Note that you can include pure JSX here without any surrounding JavaScript code (unlike the JSX playground), but you do need the /** @jsx React.DOM */ comment at the top of the JSX side. Without the comment, the compiler will simply output the JSX you put in. Simple DSLs to make things concise Although these two implementations are functionally identical, clearly the JSX version is more concise. How can we make the vanilla JS version less verbose? A very quick improvement is to alias the React.DOM object: var R = React.DOM; R.form({className:"commentForm"}, R.input({type:"text", placeholder:"Your name"}), R.input({type:"text", placeholder:"Say something..."}), R.input({type:"submit", value:"Post"})) You can take it even further with a tiny bit of DSL: var R = React.DOM; var form = R.form; var input = R.input; form({className:"commentForm"}, input({type:"text", placeholder:"Your name"}), input({type:"text", placeholder:"Say something..."}), input({type:"submit", value:"Post"}) ) This is more verbose in terms of lines of code, but if you have a large DOM to set up, the extra up-front declarations can make the rest of the file much nicer to read. In CoffeeScript, a DSL like this can tidy things up even further: {form, input} = React.DOM form {className:"commentForm"}, [ input type: "text", placeholder:"Your name" input type:"text", placeholder:"Say something..." input type:"submit", value:"Post" ] Note that in this example, the form’s children are passed as an array rather than as a list of extra arguments (which, in CoffeeScript, allows you to omit commas after each line). React DOM element constructors support either approach. (Also note that CoffeeScript coders who don’t mind mixing languages can use the coffee-react compiler or set up a custom build chain that allows for inline JSX in CoffeeScript sources instead.) Takeaways No matter your particular use case, there are plenty of ways to effectively use React without JSX. Thanks to the Live JSX Compiler ’s ability to quickly translate documentation code samples, and the ease with which you can set up a simple DSL to reduce verbosity, there really is very little overhead to using React as a JavaScript library like any other. About the author Richard Feldman is a functional programmer who specializes in pushing the limits of browser-based UIs. He’s built a framework that performantly renders hundreds of thousands of shapes in the HTML5 canvas, a writing web app that functions like a desktop app in the absence of an Internet connection, and much more in between

Component Communication in React.js

Richard Feldman
30 Jun 2014
5 min read
You can get a long way in React.js solely by having parent components create child components with varying props, and having each component deal only with its own state. But what happens when a child wants to affect its parent's state or props? Or when a child wants to inspect that parent's state or props? Or when a parent wants to inspect its child's state? With the right techniques, you can handle communication between React components without introducing unnecessary coupling.

Child Elements Altering Parents

Suppose you have a list of buttons, and when you click one, a label elsewhere on the page updates to reflect which button was most recently clicked. Although any button's click handler can alter that button's state, the handler has no intrinsic knowledge of the label that we need to update. So how can we give it access to do what we need?

The idiomatic approach is to pass a function through props. Like so:

var ExampleParent = React.createClass({
  getInitialState: function() {
    return {lastLabelClicked: "none"}
  },
  render: function() {
    var me = this;
    var setLastLabel = function(label) {
      me.setState({lastLabelClicked: label});
    };

    return <div>
      <p>Last clicked: {this.state.lastLabelClicked}</p>
      <LabeledButton label="Alpha Button" setLastLabel={setLastLabel}/>
      <LabeledButton label="Beta Button" setLastLabel={setLastLabel}/>
      <LabeledButton label="Delta Button" setLastLabel={setLastLabel}/>
    </div>;
  }
});

var LabeledButton = React.createClass({
  handleClick: function() {
    this.props.setLastLabel(this.props.label);
  },
  render: function() {
    return <button onClick={this.handleClick}>{this.props.label}</button>;
  }
});

Note that this does not actually affect the label's state directly; rather, it affects the parent component's state, and doing so will cause the parent to re-render the label as appropriate.

What if we wanted to avoid using state here, and instead modify the parent's props? Since props are externally specified, this would be a lot of extra work. Rather than telling the parent to change, the child would necessarily have to tell its parent's parent (its grandparent, in other words) to change that grandparent's child. This is not a route worth pursuing; besides being less idiomatic, there is no real benefit to changing the parent's props when you could change its state instead.

Inspecting Props

Once created, the only way for a child's props to "change" is for the child to be recreated when the parent's render method is called again. This helpfully guarantees that the parent's render method has all the information needed to determine the child's props, not only in the present but for the indefinite future as well. Thus, if another of the parent's methods needs to know the child's props, such as a click handler, it's simply a matter of making sure that data is available outside the parent's render method. An easy way to do this is to record it in the parent's state:

var ExampleComponent = React.createClass({
  handleClick: function() {
    var buttonStatus = this.state.buttonStatus;
    // ...do something based on buttonStatus
  },
  render: function() {
    // Pretend it took some effort to determine this value
    var buttonStatus = "btn-disabled";
    this.setState({buttonStatus: buttonStatus});

    return <button className={buttonStatus} onClick={this.handleClick}>
      Click this button!
    </button>;
  }
});

It's even easier to let a child know about its parent's props: simply have the parent pass along whatever information is necessary when it creates the child. It's cleaner to pass along only what the child needs to know, but if all else fails you can go as far as to pass in the parent's entire set of props:

var ParentComponent = React.createClass({
  render: function() {
    return <ChildComponent parentProps={this.props} />;
  }
});

Inspecting State

State is trickier to inspect, because it can change on the fly. But is it ever strictly necessary for components to inspect each other's states, or might there be a universal workaround?

Suppose you have a child whose click handler cares about its parent's state. Is there any way we could refactor things such that the child could always know that value, without having to ask the parent directly? Absolutely! Simply have the parent pass the current value of its state to the child as a prop. Whenever the parent's state changes, it will re-run its render method, so the child (including its click handler) will automatically be recreated with the new prop. Now the child's click handler will always have an up-to-date knowledge of the parent's state, just as we wanted.
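For concreteness, here is a small sketch of that pattern; the component and prop names are my own, not from the article's example:

var StatusChild = React.createClass({
  handleClick: function() {
    // The parent's state arrives here as an ordinary prop,
    // so the handler always sees the latest value.
    alert("Parent count is currently " + this.props.parentCount);
  },
  render: function() {
    return <button onClick={this.handleClick}>Show parent count</button>;
  }
});

var StatusParent = React.createClass({
  getInitialState: function() {
    return {count: 0};
  },
  increment: function() {
    this.setState({count: this.state.count + 1});
  },
  render: function() {
    // Every state change re-runs render, recreating the child with a fresh prop.
    return <div>
      <button onClick={this.increment}>Increment</button>
      <StatusChild parentCount={this.state.count} />
    </div>;
  }
});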
Suppose instead that we have a parent that cares about its child's state. As we saw earlier with the buttons-and-labels example, children can affect their parents' states, so we can use that technique again here to refactor our way into a solution. Simply include in the child's props a function that updates the parent's state, and have the child incorporate that function into its relevant state changes. With the child thus keeping the parent's state up to speed on relevant changes to the child's state, the parent can obtain whatever information it needs simply by inspecting its own state.

Takeaways

Idiomatic communication between parent and child components can be easily accomplished by passing state-altering functions through props. When it comes to inspecting props and state, a combination of passing props on a need-to-know basis and refactoring state changes can ensure the relevant parties have all the information they need, whenever they need it.

About the Author

Richard Feldman is a functional programmer who specializes in pushing the limits of browser-based UIs. He's built a framework that performantly renders hundreds of thousands of shapes in HTML5 canvas, a writing web app that functions like a desktop app in the absence of an Internet connection, and much more in between.

Various subsystem configurations

Packt
25 Jun 2014
8 min read
(For more resources related to this topic, see here.)

In a high-performance environment, every costly resource instantiation needs to be minimized. This can be done effectively using pools. The different subsystems in WildFly often use various pools of resources to minimize the cost of creating new ones. These resources are often threads or various connection objects. Another benefit is that the pools work as a gatekeeper, hindering the underlying system from being overloaded. This is performed by preventing client calls from reaching their target if a limit has been reached. In the upcoming sections of this article, we will provide an overview of the different subsystems and their pools.

The thread pool executor subsystem

The thread pool executor subsystem was introduced in JBoss AS 7. Other subsystems can reference thread pools configured in this one. This makes it possible to normalize and manage the thread pools via native WildFly management mechanisms, and it allows you to share thread pools across subsystems. The following code is an example taken from the WildFly Administration Guide (https://docs.jboss.org/author/display/WFLY8/Admin+Guide) that describes how the Infinispan subsystem may use the subsystem, setting up four different pools:

<subsystem >
  <thread-factory name="infinispan-factory" priority="1"/>
  <bounded-queue-thread-pool name="infinispan-transport">
    <core-threads count="1"/>
    <queue-length count="100000"/>
    <max-threads count="25"/>
    <thread-factory name="infinispan-factory"/>
  </bounded-queue-thread-pool>
  <bounded-queue-thread-pool name="infinispan-listener">
    <core-threads count="1"/>
    <queue-length count="100000"/>
    <max-threads count="1"/>
    <thread-factory name="infinispan-factory"/>
  </bounded-queue-thread-pool>
  <scheduled-thread-pool name="infinispan-eviction">
    <max-threads count="1"/>
    <thread-factory name="infinispan-factory"/>
  </scheduled-thread-pool>
  <scheduled-thread-pool name="infinispan-repl-queue">
    <max-threads count="1"/>
    <thread-factory name="infinispan-factory"/>
  </scheduled-thread-pool>
</subsystem>
...
<cache-container name="web" default-cache="repl" listener-executor="infinispan-listener" eviction-executor="infinispan-eviction" replication-queue-executor="infinispan-repl-queue">
  <transport executor="infinispan-transport"/>
  <replicated-cache name="repl" mode="ASYNC" batching="true">
    <locking isolation="REPEATABLE_READ"/>
    <file-store/>
  </replicated-cache>
</cache-container>

The following thread pools are available:

- unbounded-queue-thread-pool
- bounded-queue-thread-pool
- blocking-bounded-queue-thread-pool
- queueless-thread-pool
- blocking-queueless-thread-pool
- scheduled-thread-pool

The details of these thread pools are described in the following sections.

unbounded-queue-thread-pool

The unbounded-queue-thread-pool thread pool executor has a maximum size and an unlimited queue. If the number of running threads is less than the maximum size when a task is submitted, a new thread will be created. Otherwise, the task is placed in a queue. This queue is allowed to grow infinitely. The configuration properties are shown in the following table:

max-threads: Max allowed threads running simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: This specifies the thread factory to use to create worker threads.
bounded-queue-thread-pool

The bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, it will be put in the queue. If the queue's maximum size has been reached and the maximum number of threads hasn't been reached, a new thread is also created. If max-threads is hit, the call will be sent to the handoff-executor. If no handoff-executor is configured, the call will be discarded. The configuration properties are shown in the following table:

core-threads: This is optional and should be less than max-threads.
queue-length: This specifies the maximum size of the queue.
max-threads: This specifies the maximum number of threads that are allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
handoff-executor: This specifies an executor to which tasks will be delegated, in the event that a task cannot be accepted.
allow-core-timeout: This specifies whether core threads may time-out; if false, only threads above the core size will time-out.
thread-factory: This specifies the thread factory to use to create worker threads.

blocking-bounded-queue-thread-pool

The blocking-bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, it will be put in the queue. If the queue's maximum size has been reached and the maximum number of threads hasn't been reached, a new thread is created; otherwise, the call is blocked. The configuration properties are shown in the following table:

core-threads: This is optional and should be less than max-threads.
queue-length: This specifies the maximum size of the queue.
max-threads: This specifies the maximum number of simultaneous threads allowed to run.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
allow-core-timeout: This specifies whether core threads may time-out; if false, only threads above the core size will time-out.
thread-factory: This specifies the thread factory to use to create worker threads.

queueless-thread-pool

The queueless-thread-pool thread pool is a thread pool executor without any queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created; otherwise, the handoff-executor will be called. If no handoff-executor is configured, the call will be discarded. The configuration properties are shown in the following table:

max-threads: Max allowed threads running simultaneously.
keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
handoff-executor: This specifies an executor to delegate tasks to in the event that a task cannot be accepted.
thread-factory: The thread factory to use to create worker threads.

blocking-queueless-thread-pool

The blocking-queueless-thread-pool thread pool executor has no queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created. Otherwise, the caller will be blocked.
The configuration properties are shown in the following table:

max-threads: Max allowed threads running simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: This specifies the thread factory to use to create worker threads.

scheduled-thread-pool

The scheduled-thread-pool thread pool is used by tasks that are scheduled to trigger at a certain time. The configuration properties are shown in the following table:

max-threads: Max allowed threads running simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: This specifies the thread factory to use to create worker threads.

Monitoring

All of the pools just mentioned can be administered and monitored using both CLI and JMX (the Admin Console can be used to administer the pools, but not to see any live data). The following example and screenshots show the access to an unbounded-queue-thread-pool called test.

Using CLI, run the following command:

/subsystem=threads/unbounded-queue-thread-pool=test:read-resource(include-runtime=true)

The response to the preceding command is as follows:

{
    "outcome" => "success",
    "result" => {
        "active-count" => 0,
        "completed-task-count" => 0L,
        "current-thread-count" => 0,
        "keepalive-time" => undefined,
        "largest-thread-count" => 0,
        "max-threads" => 100,
        "name" => "test",
        "queue-size" => 0,
        "rejected-count" => 0,
        "task-count" => 0L,
        "thread-factory" => undefined
    }
}

Using JMX (query and result in the JConsole UI), run the following query:

jboss.as:subsystem=threads,unbounded-queue-thread-pool=test

[Screenshot: An example thread pool by JMX]

[Screenshot: Example thread pool in the Admin Console]

The future of the thread subsystem

According to the official JIRA case WFLY-462 (https://issues.jboss.org/browse/WFLY-462), the central thread pool configuration has been targeted for removal in future versions of the application server. It is, however, uncertain whether all subprojects will adhere to this. The actual configuration will then be moved out to the subsystem itself. This seems to be the way the general architecture of WildFly is moving in terms of pools: away from generic ones and towards subsystem-specific ones. The different types of pools described here are still valid, though.

Note that, contrary to previous releases, stateless EJBs are no longer pooled by default. More information on this is available in the JIRA case WFLY-1383, which can be found at https://issues.jboss.org/browse/WFLY-1383.

Introduction to MapReduce

Packt
25 Jun 2014
10 min read
(For more resources related to this topic, see here.)

The Hadoop platform

Hadoop can be used for a lot of things. However, when you break it down to its core parts, the primary features of Hadoop are the Hadoop Distributed File System (HDFS) and MapReduce.

HDFS stores read-only files by splitting them into large blocks and distributing and replicating them across a Hadoop cluster. Two services are involved with the filesystem. The first service, the NameNode, acts as a master and keeps the directory tree of all file blocks that exist in the filesystem and tracks where the file data is kept across the cluster. The actual data of the files is stored in multiple DataNode nodes, the second service.

MapReduce is a programming model for processing large datasets with a parallel, distributed algorithm in a cluster. The most prominent trait of Hadoop is that it brings processing to the data; so, MapReduce executes tasks closest to the data as opposed to the data travelling to where the processing is performed. Two services are involved in a job execution. A job is submitted to the JobTracker service, which first discovers the location of the data. It then orchestrates the execution of the map and reduce tasks. The actual tasks are executed in multiple TaskTracker nodes.

Hadoop handles infrastructure failures such as network issues and node or disk failures automatically. Overall, it provides a framework for distributed storage within its distributed filesystem and execution of jobs. Moreover, it provides the ZooKeeper service to maintain configuration and distributed synchronization.

Many projects surround Hadoop and complete the ecosystem of available Big Data processing tools, such as utilities to import and export data, NoSQL databases, and event/real-time processing systems. The technologies that move Hadoop beyond batch processing focus on in-memory execution models. Overall, multiple projects exist, ranging from batch to hybrid and real-time execution.

MapReduce

Massive parallel processing of large datasets is a complex process. MapReduce simplifies this by providing a design pattern that instructs algorithms to be expressed in map and reduce phases. Map can be used to perform simple transformations on data, and reduce is used to group data together and perform aggregations. By chaining together a number of map and reduce phases, sophisticated algorithms can be achieved. The shared-nothing architecture of MapReduce prohibits communication between map tasks of the same phase or between reduce tasks of the same phase. Communication that's required happens at the end of each phase.

The simplicity of this model allows Hadoop to translate each phase, depending on the amount of data that needs to be processed, into tens or even hundreds of tasks being executed in parallel, thus achieving scalable performance. Internally, the map and reduce tasks follow a simplistic data representation. Everything is a key or a value. A map task receives key-value pairs and applies basic transformations, emitting new key-value pairs. Data is then partitioned, and different partitions are transmitted to different reduce tasks. A reduce task also receives key-value pairs, groups them based on the key, and applies basic transformations to those groups.

A MapReduce example

To illustrate how MapReduce works, let's look at an example of a log file of total size 1 GB with the following format:

INFO MyApp - Entering application.
WARNING com.foo.Bar - Timeout accessing DB - Retrying
ERROR com.foo.Bar - Did it again!
INFO MyApp - Exiting application Once this file is stored in HDFS, it is split into eight 128 MB blocks and distributed in multiple Hadoop nodes. In order to build a MapReduce job to count the amount of INFO, WARNING, and ERROR log lines in the file, we need to think in terms of map and reduce phases. In one map phase, we can read local blocks of the file and map each line to a key and a value. We can use the log level as the key and the number 1 as the value. After it is completed, data is partitioned based on the key and transmitted to the reduce tasks. MapReduce guarantees that the input to every reducer is sorted by key. Shuffle is the process of sorting and copying the output of the map tasks to the reducers to be used as input. By setting the value to 1 on the map phase, we can easily calculate the total in the reduce phase. Reducers receive input sorted by key, aggregate counters, and store results. In the following diagram, every green block represents an INFO message, every yellow block a WARNING message, and every red block an ERROR message: Implementing the preceding MapReduce algorithm in Java requires the following three classes: A Map class to map lines into <key,value> pairs; for example, <"INFO",1> A Reduce class to aggregate counters A Job configuration class to define input and output types for all <key,value> pairs and the input and output files MapReduce abstractions This simple MapReduce example requires more than 50 lines of Java code (mostly because of infrastructure and boilerplate code). In SQL, a similar implementation would just require the following: SELECT level, count(*) FROM table GROUP BY level Hive is a technology originating from Facebook that translates SQL commands, such as the preceding one, into sets of map and reduce phases. SQL offers convenient ubiquity, and it is known by almost everyone. However, SQL is declarative and expresses the logic of a computation without describing its control flow. So, there are use cases that will be unusual to implement in SQL, and some problems are too complex to be expressed in relational algebra. For example, SQL handles joins naturally, but it has no built-in mechanism for splitting data into streams and applying different operations to each substream. Pig is a technology originating from Yahoo that offers a relational data-flow language. It is procedural, supports splits, and provides useful operators for joining and grouping data. Code can be inserted anywhere in the data flow and is appealing because it is easy to read and learn. However, Pig is a purpose-built language; it excels at simple data flows, but it is inefficient for implementing non-trivial algorithms. In Pig, the same example can be implemented as follows: LogLine = load 'file.logs' as (level, message); LevelGroup = group LogLine by level; Result = foreach LevelGroup generate group, COUNT(LogLine); store Result into 'Results.txt'; Both Pig and Hive support extra functionality through loadable user-defined functions (UDF) implemented in Java classes. Cascading is implemented in Java and designed to be expressive and extensible. It is based on the design pattern of pipelines that many other technologies follow. The pipeline is inspired from the original chain of responsibility design pattern and allows ordered lists of actions to be executed. It provides a Java-based API for data-processing flows. Developers with functional programming backgrounds quickly introduced new domain specific languages that leverage its capabilities. 
Scalding, Cascalog, and PyCascading are popular implementations on top of Cascading, which are implemented in programming languages such as Scala, Clojure, and Python. Introducing Cascading Cascading is an abstraction that empowers us to write efficient MapReduce applications. The API provides a framework for developers who want to think in higher levels and follow Behavior Driven Development (BDD) and Test Driven Development (TDD) to provide more value and quality to the business. Cascading is a mature library that was released as an open source project in early 2008. It is a paradigm shift and introduces new notions that are easier to understand and work with. In Cascading, we define reusable pipes where operations on data are performed. Pipes connect with other pipes to create a pipeline. At each end of a pipeline, a tap is used. Two types of taps exist: source, where input data comes from and sink, where the data gets stored. In the preceding image, three pipes are connected to a pipeline, and two input sources and one output sink complete the flow. A complete pipeline is called a flow, and multiple flows bind together to form a cascade. In the following diagram, three flows form a cascade: The Cascading framework translates the pipes, flows, and cascades into sets of map and reduce phases. The flow and cascade planner ensure that no flow or cascade is executed until all its dependencies are satisfied. The preceding abstraction makes it easy to use a whiteboard to design and discuss data processing logic. We can now work on a productive higher level abstraction and build complex applications for ad targeting, logfile analysis, bioinformatics, machine learning, predictive analytics, web content mining, and for extract, transform and load (ETL) jobs. By abstracting from the complexity of key-value pairs and map and reduce phases of MapReduce, Cascading provides an API that so many other technologies are built on. What happens inside a pipe Inside a pipe, data flows in small containers called tuples. A tuple is like a fixed size ordered list of elements and is a base element in Cascading. Unlike an array or list, a tuple can hold objects with different types. Tuples stream within pipes. Each specific stream is associated with a schema. The schema evolves over time, as at one point in a pipe, a tuple of size one can receive an operation and transform into a tuple of size three. To illustrate this concept, we will use a JSON transformation job. Each line is originally stored in tuples of size one with a schema: 'jsonLine. An operation transforms these tuples into new tuples of size three: 'time, 'user, and 'action. Finally, we extract the epoch, and then the pipe contains tuples of size four: 'epoch, 'time, 'user, and 'action. 
Pipe assemblies Transformation of tuple streams occurs by applying one of the five types of operations, also called pipe assemblies: Each: To apply a function or a filter to each tuple GroupBy: To create a group of tuples by defining which element to use and to merge pipes that contain tuples with similar schemas Every: To perform aggregations (count, sum) and buffer operations to every group of tuples CoGroup: To apply SQL type joins, for example, Inner, Outer, Left, or Right joins SubAssembly: To chain multiple pipe assemblies into a pipe To implement the pipe for the logfile example with the INFO, WARNING, and ERROR levels, three assemblies are required: The Each assembly generates a tuple with two elements (level/message), the GroupBy assembly is used in the level, and then the Every assembly is applied to perform the count aggregation. We also need a source tap to read from a file and a sink tap to store the results in another file. Implementing this in Cascading requires 20 lines of code; in Scala/Scalding, the boilerplate is reduced to just the following: TextLine(inputFile) .mapTo('line->'level,'message) { line:String => tokenize(line) } .groupBy('level) { _.size } .write(Tsv(outputFile)) Cascading is the framework that provides the notions and abstractions of tuple streams and pipe assemblies. Scalding is a domain-specific language (DSL) that specializes in the particular domain of pipeline execution and further minimizes the amount of code that needs to be typed. Cascading extensions Cascading offers multiple extensions that can be used as taps to either read from or write data to, such as SQL, NoSQL, and several other distributed technologies that fit nicely with the MapReduce paradigm. A data processing application, for example, can use taps to collect data from a SQL database and some more from the Hadoop file system. Then, process the data, use a NoSQL database, and complete a machine learning stage. Finally, it can store some resulting data into another SQL database and update a mem-cache application. Summary This article explains the core technologies used in the distributed model of Hadoop Resources for Article: Further resources on this subject: Analytics – Drawing a Frequency Distribution with MapReduce (Intermediate) [article] Understanding MapReduce [article] Advanced Hadoop MapReduce Administration [article]

Kendo UI DataViz – Advance Charting

Packt
23 Jun 2014
10 min read
(For more resources related to this topic, see here.) Creating a chart to show stock history The Kendo UI library provides a specialized chart widget that can be used to display the stock price data for a particular stock over a period of time. In this recipe, we will take a look at creating a Stock chart and customizing it. Getting started Include the CSS files, kendo.dataviz.min.css and kendo.dataviz.default.min.css, in the head section. These files are used in styling some of the parts of a stock history chart. How to do it… A Stock chart is made up of two charts: a pane that shows you the stock history and another pane that is used to navigate through the chart by changing the date range. The stock price for a particular stock on a day can be denoted by the following five attributes: Open: This shows you the value of the stock when the trading starts for the day Close: This shows you the value of the stock when the trading closes for the day High: This shows you the highest value the stock was able to attain on the day Low: This shows you the lowest value the stock reached on the day Volume: This shows you the total number of shares of that stock traded on the day Let's assume that a service returns this data in the following format: [ { "Date" : "2013/01/01", "Open" : 40.11, "Close" : 42.34, "High" : 42.5, "Low" : 39.5, "Volume": 10000 } . . . ] We will use the preceding data to create a Stock chart. The kendoStockChart function is used to create a Stock chart, and it is configured with a set of options similar to the area chart or Column chart. In addition to the series data, you can specify the navigator option to show a navigation pane below the chart that contains the entire stock history: $("#chart").kendoStockChart({ title: { text: 'Stock history' }, dataSource: { transport: { read: '/services/stock?q=ADBE' } }, dateField: "Date", series: [{ type: "candlestick", openField: "Open", closeField: "Close", highField: "High", lowField: "Low" }], navigator: { series: { type: 'area', field: 'Volume' } } }); In the preceding code snippet, the DataSource object refers to the remote service that would return the stock data for a set of days. The series option specifies the series type as candlestick; a candlestick chart is used here to indicate the stock price for a particular day. The mappings for openField, closeField, highField, and lowField are specified; they will be used in plotting the chart and also to show a tooltip when the user hovers over it. The navigator option is specified to create an area chart, which uses volume data to plot the chart. The dateField option is used to specify the mapping between the date fields in the chart and the one in the response. How it works… When you load the page, you will see two panes being shown; the navigator is below the main chart. By default, the chart displays data for all the dates in the DataSource object, as shown in the following screenshot: In the preceding screenshot, a candlestick chart is created and it shows you the stock price over a period of time. Also, notice that in the navigator pane, all date ranges are selected by default, and hence, they are reflected in the chart (candlestick) as well. When you hover over the series, you will notice that the stock quote for the selected date is shown. This includes the date and other fields such as Open, High, Low, and Close. The area of the chart is adjusted to show you the stock price for various dates such that the dates are evenly distributed. 
In the previous case, the dates range from January 1, 2013 to January 31, 2013. However, when you hover over the series, you will notice that some of the dates are omitted. To overcome this, you can either increase the width of the chart area or use the navigator to reduce the date range. The former option is not advisable if the date range spans across several months and years. To reduce the date range in the navigator, move the two date range selectors towards each other to narrow down the dates, as shown in the following screenshot: When you try to narrow down the dates, you will see a tooltip in the chart, indicating the date range that you are trying to select. The candlestick chart is adjusted to show you the stock price for the selected date range. Also, notice that the opacity of the selected date range in the navigator remains the same while the rest of the area's opacity is reduced. Once the date range is selected, the selected pane can be moved in the navigator. There's more… There are several options available to you to customize the behavior and the look and feel of the Stock Chart widget. Specifying the date range in the navigator when initializing the chart By default, all date ranges in the chart are selected and the user will have to narrow them down in the navigator pane. When you work with a large dataset, you will want to show the stock data for a specific range of date when the chart is rendered. To do this, specify the select option in navigator: navigator: { series: { type: 'area', field: 'Volume' }, select: { from: '2013/01/07', to: '2013/01/14' } } In the previous code snippet, the from and to date ranges are specified. Now, when you render the page, you will see that the same dates are selected in the navigator pane. Customizing the look and feel of the Stock Chart widget There are various options available to customize the navigator pane in the Stock Chart widget. Let's increase the height of the pane and also include a title text for it: navigator: { . . pane: { height: '50px', title: { text: 'Stock Volume' } } } Now when you render the page, you will see that the title and height of the navigator pane have been increased. Using the Radial Gauge widget The Radial Gauge widget allows you to build a dashboard-like application wherein you want to indicate a value that lies in a specific range. For example, a car's dashboard can contain a couple of Radial Gauge widgets that can be used to indicate the current speed and RPM. How to do it… To create a Radial Gauge widget, invoke the kendoRadialGauge function on the selected DOM element. A Radial Gauge widget contains some components, and it can be configured by providing options, as shown in the following code snippet: $("#chart").kendoRadialGauge({ scale: { startAngle: 0, endAngle: 180, min: 0, max: 180 }, pointer: { value: 20 } }); Here the scale option is used to configure the range for the Radial Gauge widget. The startAngle and endAngle options are used to indicate the angle at which the Radial Gauge widget's range should start and end. By default, its values are 30 and 210, respectively. The other two options, that is, min and max, are used to indicate the range values over which the value can be plotted. The pointer option is used to indicate the current value in the Radial Gauge widget. There are several options available to configure the Radial Gauge widget; these include positioning of the labels and configuring the look and feel of the widget. 
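As an aside, the label positioning mentioned above is configured through the scale option; the following is a minimal sketch (the numeric values are illustrative) that draws the scale labels outside the gauge:

$("#chart").kendoRadialGauge({
  scale: {
    min: 0,
    max: 180,
    labels: {
      // Draw the scale labels outside the ring instead of inside it.
      position: "outside"
    }
  },
  pointer: {
    value: 20
  }
});

The walkthrough below continues from the basic configuration shown above.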
How it works… When you render the page, you will see a Radial Gauge widget that shows you the scale from 0 to 180 and the pointer pointing to the value 20. Here, the values from 0 to 180 are evenly distributed, that is, the major ticks are in terms of 20. There are 10 minor ticks, that is, ticks between two major ticks. The widget shows values in the clockwise direction. Also, the pointer value 20 is selected in the scale. There's more… The Radial Gauge widget can be customized to a great extent by including various options when initializing the widget. Changing the major and minor unit values Specify the majorUnit and minorUnit options in the scale: scale: { startAngle: 0, endAngle: 180, min: 0, max: 180, majorUnit: 30, minorUnit: 10, } The scale option specifies the majorUnit value as 30 (instead of the default 20) and minorUnit as 10. This will now add labels at every 30 units and show you two minor ticks between the two major ticks, each at a distance of 10 units, as shown in the following screenshot: The ticks shown in the preceding screenshot can also be customized: scale: { . . minorTicks: { size: 30, width: 1, color: 'green' }, majorTicks: { size: 100, width: 2, color: 'red' } } Here, the size option is used to specify the length of the tick marker, width is used to specify the thickness of the tick, and the color option is used to change the color of the tick. Now when you render the page, you will see the changes for the major and minor ticks. Changing the color of the radial using the ranges option The scale attribute can include the ranges option to specify a radial color for the various ranges on the Radial Gauge widget: scale: { . . ranges: [ { from: 0, to: 60, color: '#00F' }, { from: 60, to: 130, color: '#0F0' }, { from: 130, to: 200, color: '#F00' } ] } In the preceding code snippet, the ranges array contains three objects that specify the color to be applied on the circumference of the widget. The from and to values are used to specify the range of tick values for which the color should be applied. Now when you render the page, you will see the Radial Gauge widget showing the colors for various ranges along the circumference of the widget, as shown in the following screenshot: In the preceding screenshot, the startAngle and endAngle fields are changed to 10 and 250, respectively. The widget can be further customized by moving the labels outside. This can be done by specifying the labels attribute with position as outside. In the preceding screenshot, the labels are positioned outside, hence, the radial appears inside. Updating the pointer value using a Slider widget The pointer value is set when the Radial Gauge widget is initialized. It is possible to change the pointer value of the widget at runtime using a Slider widget. The changes in the Slider widget can be observed, and the pointer value of the Radial Gauge can be updated accordingly. Let's use the Radial Gauge widget. A Slider widget is created using an input element: <input id="slider" value="0" /> The next step is to initialize the previously mentioned input element to a Slider widget: $('#slider').kendoSlider({ min: 0, max: 200, showButtons: false, smallStep: 10, tickPlacement: 'none', change: updateRadialGuage }); The min and max values specify the range of values that can be set for the slider. The smallStep attribute specifies the minimum increment value of the slider. The change attribute specifies the function that should be invoked when the slider value changes. 
The updateRadialGuage function should then update the value of the pointer in the Radial Gauge widget: function updateRadialGuage() { $('#chart').data('kendoRadialGauge') .value($('#slider').val()); } The function gets the instance of the widget and then sets its value to the value obtained from the Slider widget. Here, the slider value is changed to 100, and you will notice that it is reflected in the Radial Gauge widget.
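The same value() method can drive the gauge from any data source, not just a slider. As a final illustration, here is a small sketch (the endpoint URL and response shape are hypothetical) that refreshes the pointer every few seconds from a JSON service:

setInterval(function () {
  // Hypothetical endpoint returning something like {"rpm": 42}
  $.getJSON("/services/engine-status", function (data) {
    var gauge = $("#chart").data("kendoRadialGauge");
    if (gauge) {
      // Move the pointer to the latest reading.
      gauge.value(data.rpm);
    }
  });
}, 5000);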

Building a Web Application with PHP and MariaDB - Introduction to caching

Packt
11 Jun 2014
4 min read
Let's begin with database caching. All the data for our application is stored on MariaDB. When a request is made for retrieving the list of available students, we run a query on our course_registry database. Running a single query at a time is simple, but as the application gets popular, we will have more concurrent users. As the number of concurrent connections to the database increases, we will have to make sure that our database server is optimized to handle that load. In this section, we will look at the different types of caching that can be performed in the database.

Let's start with query caching. Query caching is available by default on MariaDB; to verify whether the installation has a query cache, we will use the have_query_cache global variable. Let's use the SHOW VARIABLES command to verify whether the query cache is available on our MariaDB installation, as shown in the following screenshot:

Now that we have a query cache, let's verify whether it is active. To do this, we will use the query_cache_type global variable, shown as follows:

From this query, we can verify that the query cache is turned on. Now, let's take a look at the memory that is allocated for the query cache by using the query_cache_size variable, shown as follows:

The query cache size is currently set to 64 MB; let's modify our query cache size to 128 MB. The following screenshot shows the usage of the SET GLOBAL syntax:

We use the SET GLOBAL syntax to set the value of the query_cache_size variable, and we verify this by reloading its value. Now that we have the query cache turned on and working, let's look at a few statistics that would give us an idea of how often the queries are being cached. To retrieve this information, we will query the Qcache status variables, as shown in the following screenshot:

From this output, we can see that we retrieve a number of statistics about the query cache. One thing to notice is that the Qcache_not_cached variable is high for our database. This is due to the use of prepared statements; prepared statements are not cached by MariaDB. Another important variable to keep an eye on is Qcache_lowmem_prunes, which gives us an idea of the number of queries that were deleted due to low memory; a high value indicates that the query cache size has to be increased.

From these stats, we understand that as long as we use prepared statements, our queries will not be cached on the database server. So, we should use a combination of prepared statements and raw SQL statements, depending on our use cases.

Now that we understand a good bit about query caches, let's look at the other caches that MariaDB provides, such as the table open cache, the join cache, and the memory storage cache. The table open cache allows us to define the number of tables that can be left open by the server to allow faster look-ups. This is very helpful where there is a huge number of requests for a table, as the table need not be opened for every request. The join buffer cache is commonly used for queries that perform a full join, wherein there are no indexes to be used for finding rows for the next table. Normally, indexes help us avoid these problems. The memory storage cache, previously known as the heap cache, is commonly used for read-only caches of data from other tables or for temporary work areas.
Let's look at the caching-related variables that are available with MariaDB, as shown in the following screenshot:

Database caching is a very important step towards making our application scalable. However, it is important to understand when to cache, the correct caching techniques, and the size for each cache. Allocation of memory for caching has to be done very carefully, as the application can run out of memory if too much space is allocated. A good method to allocate memory for caching is to run benchmarks to see how the queries perform, and to keep a list of popular queries that will run often so that we can begin by caching and optimizing the database for those queries. Now that we have a good understanding of database caching, let's proceed to application-level caching.

Resources for Article:

Introduction to Kohana PHP Framework
Creating and Consuming Web Services in CakePHP 1.3
Installing MariaDB on Windows and Mac OS X