
How-To Tutorials - Server-Side Web Development

Getting Organized with NPM and Bower

Packt
06 Oct 2016
13 min read
In this article by Philip Klauzinski and John Moore, the authors of the book Mastering JavaScript Single Page Application Development, we will learn about the basics of NPM and Bower. JavaScript was the bane of the web development industry during the early days of the browser-rendered Internet. Now, it powers hugely impactful libraries such as jQuery, and JavaScript-rendered content (as opposed to server-side-rendered content) is even indexed by many search engines. What was once largely considered an annoying language used primarily to generate popup windows and alert boxes has now become, arguably, the most popular programming language in the world. (For more resources related to this topic, see here.) Not only is JavaScript now more prevalent than ever in frontend architecture, but it has become a server-side language as well, thanks to the Node.js runtime. We have also now seen the proliferation of document-oriented databases, such as MongoDB, which store and return JSON data. With JavaScript present throughout the development stack, the door is now open for JavaScript developers to become full-stack developers without the need to learn a traditional server-side language. Given the right tools and know-how, any JavaScript developer can create single page applications (SPAs) composed entirely of the language they know best, and they can do so using an architecture such as MEAN (MongoDB, Express, AngularJS, and Node.js). Organization is key to the development of any complex single page application. If you don't get organized from the beginning, you are sure to introduce an inordinate number of regressions to your app. The Node.js ecosystem will help you do this with a full suite of indispensable and open source tools, three of which we will discuss here. In this article, you will learn about the Node Package Manager and the Bower front-end package manager.

What is Node Package Manager? Within any full-stack JavaScript environment, Node Package Manager (NPM) will be your go-to tool for setting up your development environment and managing server-side libraries. NPM can be used within both global and isolated environment contexts. We will first explore the use of NPM globally.

Installing Node.js and NPM NPM is a component of Node.js, so before you can use it, you must install Node.js. You can find installers for both Mac and Windows at nodejs.org. Once you have Node.js installed, using NPM is incredibly easy and is done from the command-line interface (CLI). Start by ensuring you have the latest version of NPM installed, as it is updated more often than Node.js itself:

$ npm install -g npm

When using NPM, the -g option will apply your changes to your global environment. In this case, you want your version of NPM to apply globally. As stated previously, NPM can be used to manage packages both globally and within isolated environments. Therefore, we want essential development tools to be applied globally so that you can use them in multiple projects on the same system. On Mac and some Unix-based systems, you may have to run the npm command as the superuser (prefix the command with sudo) in order to install packages globally, depending on how NPM was installed. If you run into this issue and wish to remove the need to prefix npm with sudo, see docs.npmjs.com/getting-started/fixing-npm-permissions.

Configuring your package.json file For any project you develop, you will keep a local package.json file to manage your Node.js dependencies.
This file should be stored at the root of your project directory, and it will only pertain to that isolated environment. This allows you to have multiple Node.js projects with different dependency chains on the same system. When beginning a new project, you can automate the creation of the package.json file from the command line: $ npm init Running npm init will take you through a series of JSON property names to define through command-line prompts, including your app's name, version number, description, and more. The name and version properties are required, and your Node.js package will not install without them being defined. Several of the properties will have a default value given within parentheses in the prompt so that you may simply hit Enter to continue. Other properties will simply allow you to hit Enter with a blank entry and will not be saved to the package.json file or be saved with a blank value: name: (my-app) version: (1.0.0) description: entry point: (index.js) The entry point prompt will be defined as the main property in package.json and is not necessary unless you are developing a Node.js application. In our case, we can forgo this field. The npm init command may in fact force you to save the main property, so you will have to edit package.json afterward to remove it; however, that field will have no effect on your web app. You may also choose to create the package.json file manually using a text editor if you know the appropriate structure to employ. Whichever method you choose, your initial version of the package.json file should look similar to the following example: { "name": "my-app", "version": "1.0.0", "author": "Philip Klauzinski", "license": "MIT", "description": "My JavaScript single page application." } If you want your project to be private and want to ensure that it does not accidently get published to the NPM registry, you may want to add the private property to your package.json file and set it to true. Additionally, you may remove some properties that only apply to a registered package: { "name": "my-app", "author": "Philip Klauzinski", "description": "My JavaScript single page application.", "private": true } Once you have your package.json file set up the way you like it, you can begin installing Node.js packages locally for your app. This is where the importance of dependencies begins to surface. NPM dependencies There are three types of dependencies that can be defined for any Node.js project in your package.json file: dependencies, devDependencies, and peerDependencies. For the purpose of building a web-based SPA, you will only need to use the devDependencies declaration. The devDependencies ones are those that are required for developing your application, but not required for its production environment or for simply running it. If other developers want to contribute to your Node.js application, they will need to run npm install from the command line to set up the proper development environment. For information on the other types of dependencies, see docs.npmjs.com. When adding devDependencies to your package.json file, the command line again comes to the rescue. Let's use the installation of Browserify as an example: $ npm install browserify --save-dev This will install Browserify locally and save it along with its version range to the devDependencies object in your package.json file. 
Once installed, your package.json file should look similar to the following example: { "name": "my-app", "version": "1.0.0", "author": "Philip Klauzinski", "license": "MIT", "devDependencies": { "browserify": "^12.0.1" } } The devDependencies object will store each package as key-value pairs, in which the key is the package name and the value is the version number or version range. Node.js uses semantic versioning, where the three digits of the version number represent MAJOR.MINOR.PATCH. For more information on semantic version formatting, see semver.org. Updating your development dependencies You will notice that the version number of the installed package is preceded by a caret (^) symbol by default. This means that package updates will only allow patch and minor updates for versions above 1.0.0. This is meant to prevent major version changes from breaking your dependency chain when updating your packages to the latest versions. To update your devDependencies and save the new version numbers, you will enter the following from the command line. $ npm update --save-dev Alternatively, you can use the -D option as a shortcut for --save-dev: $ npm update -D To update all globally installed NPM packages to their latest versions, run npm update with the -g option: $ npm update -g For more information on semantic versioning within NPM, see docs.npmjs.com/misc/semver. Now that you have NPM set up and you know how to install your development dependencies, you can move on to installing Bower. Bower Bower is a package manager for frontend web assets and libraries. You will use it to maintain your frontend stack and control version chains for libraries such as jQuery, AngularJS, and any other components necessary to your app's web interface. Installing Bower Bower is also a Node.js package, so you will install it using NPM, much like you did with the Browserify example installation in the previous section, but this time you will be installing the package globally. This will allow you to run bower from the command line anywhere on your system without having to install it locally for each project. $ npm install -g bower You can alternatively install Bower locally as a development dependency so that you may maintain different versions of it for different projects on the same system, but this is generally not necessary. $ npm install bower --save-dev Next, check that Bower is properly installed by querying the version from the command line. $ bower -v Bower also requires the Git version control system (VCS) to be installed on your system in order to work with packages. This is because Bower communicates directly with GitHub for package management data. If you do not have Git installed on your system, you can find instructions for Linux, Mac, and Windows at git-scm.com. Configuring your bower.json file The process of setting up your bower.json file is comparable to that of the package.json file for NPM. It uses the same JSON format, has both dependencies and devDependencies, and can also be automatically created. $ bower init Once you type bower init from the command line, you will be prompted to define several properties with some defaults given within parentheses: ? name: my-app ? version: 0.0.0 ? description: My app description. ? main file: index.html ? what types of modules does this package expose? (Press <space> to? what types of modules does this package expose? globals ? keywords: my, app, keywords ? authors: Philip Klauzinski ? license: MIT ? homepage: http://gui.ninja ? 
set currently installed components as dependencies? No ? add commonly ignored files to ignore list? Yes ? would you like to mark this package as private which prevents it from being accidentally published to the registry? Yes These questions may vary depending on the version of Bower you install. Most properties in the bower.json file are not necessary unless you are publishing your project to the Bower registry, indicated in the final prompt. You will most likely want to mark your package as private unless you plan to register it and allow others to download it as a Bower package. Once you have created the bower.json file, you can open it in a text editor and change or remove any properties you wish. It should look something like the following example: { "name": "my-app", "version": "0.0.0", "authors": [ "Philip Klauzinski" ], "description": "My app description.", "main": "index.html", "moduleType": [ "globals" ], "keywords": [ "my", "app", "keywords" ], "license": "MIT", "homepage": "http://gui.ninja", "ignore": [ "**/.*", "node_modules", "bower_components", "test", "tests" ], "private": true } If you wish to keep your project private, you can reduce your bower.json file to two properties before continuing: { "name": "my-app", "private": true } Once you have the initial version of your bower.json file set up the way you like it, you can begin installing components for your app. Bower components location and the .bowerrc file Bower will install components into a directory named bower_components by default. This directory will be located directly under the root of your project. If you wish to install your Bower components under a different directory name, you must create a local system file named .bowerrc and define the custom directory name there: { "directory": "path/to/my_components" } An object with only a single directory property name is all that is necessary to define a custom location for your Bower components. There are many other properties that can be configured within a .bowerrc file. For more information on configuring Bower, see bower.io/docs/config/. Bower dependencies Bower also allows you to define both the dependencies and devDependencies objects like NPM. The distinction with Bower, however, is that the dependencies object will contain the components necessary for running your app, while the devDependencies object is reserved for components that you might use for testing, transpiling, or anything that does not need to be included in your frontend stack. Bower packages are managed using the bower command from the CLI. This is a user command, so it does not require super user (sudo) permissions. Let's begin by installing jQuery as a frontend dependency for your app: $ bower install jquery --save The --save option on the command line will save the package and version number to the dependencies object in bower.json. Alternatively, you can use the -S option as a shortcut for --save: $ bower install jquery -S Next, let's install the Mocha JavaScript testing framework as a development dependency: $ bower install mocha --save-dev In this case, we will use --save-dev on the command line to save the package to the devDependencies object instead. 
Your bower.json file should now look similar to the following example: { "name": "my-app", "private": true, "dependencies": { "jquery": "~2.1.4" }, "devDependencies": { "mocha": "~2.3.4" } } Alternatively, you can use the -D option as a shortcut for --save-dev: $ bower install mocha –D You will notice that the package version numbers are preceded by the tilde (~) symbol by default, in contrast to the caret (^) symbol, as is the case with NPM. The tilde serves as a more stringent guard against package version updates. With a MAJOR.MINOR.PATCH version number, running bower update will only update to the latest patch version. If a version number is composed of only the major and minor versions, bower update will update the package to the latest minor version. Searching the Bower registry All registered Bower components are indexed and searchable through the command line. If you don't know the exact package name of a component you wish to install, you can perform a search to retrieve a list of matching names. Most components will have a list of keywords within their bower.json file so that you can more easily find the package without knowing the exact name. For example, you may want to install PhantomJS for headless browser testing: $ bower search phantomjs The list returned will include any package with phantomjs in the package name or within its keywords list: phantom git://github.com/ariya/phantomjs.git dt-phantomjs git://github.com/keesey/dt-phantomjs qunit-phantomjs-runner git://github.com/jonkemp/... parse-cookie-phantomjs git://github.com/sindresorhus/... highcharts-phantomjs git://github.com/pesla/highcharts-phantomjs.git mocha-phantomjs git://github.com/metaskills/mocha-phantomjs.git purescript-phantomjs git://github.com/cxfreeio/purescript-phantomjs.git You can see from the returned list that the correct package name for PhantomJS is in fact phantom and not phantomjs. You can then proceed to install the package now that you know the correct name: $ bower install phantom --save-dev Now, you have Bower installed and know how to manage your frontend web components and development tools, but how do you integrate them into your SPA? This is where Grunt comes in. Summary Now that you have learned to set up an optimal development environment with NPM and supply it with frontend dependencies using Bower, it's time to start learning more about building a real app. Resources for Article: Further resources on this subject: API with MongoDB and Node.js [article] Tips & Tricks for Ext JS 3.x [article] Responsive Visualizations Using D3.js and Bootstrap [article]

Extending Yii

Packt
03 Oct 2016
14 min read
Introduction      In this article by Dmitry Eliseev, the author of the book Yii Application Development Cookbook Third Edition, we will see three Yii extensions—helpers, behaviors, and components. In addition, we will learn how to make your extension reusable and useful for the community and will focus on the many things you should do in order to make your extension as efficient as possible. (For more resources related to this topic, see here.) Helpers There are a lot of built-in framework helpers, like StringHelper in the yiihelpers namespace. It contains sets of helpful static methods for manipulating strings, files, arrays, and other subjects. In many cases, for additional behavior you can create your own helper and put any static functions into one. For example, we will implement a number helper in this recipe. Getting ready Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html. How to do it… Create the helpers directory in your project and write the NumberHelper class: <?php namespace apphelpers; class NumberHelper { public static function format($value, $decimal = 2) { return number_format($value, $decimal, '.', ','); } } Add the actionNumbers method into SiteController: <?php ... class SiteController extends Controller { … public function actionNumbers() { return $this->render('numbers', ['value' => 18878334526.3]); } } Add the views/site/numbers.php view: <?php use apphelpersNumberHelper; use yiihelpersHtml; /* @var $this yiiwebView */ /* @var $value float */ $this->title = 'Numbers'; $this->params['breadcrumbs'][] = $this->title; ?> <div class="site-numbers"> <h1><?= Html::encode($this->title) ?></h1> <p> Raw number:<br /> <b><?= $value ?></b> </p> <p> Formatted number:<br /> <b><?= NumberHelper::format($value) ?></b> </p> </div> Open the action and see this result: In other cases you can specify another count of decimal numbers; for example: NumberHelper::format($value, 3) How it works… Any helper in Yii2 is just a set of functions implemented as static methods in corresponding classes. You can use one to implement any different format of output for manipulations with values of any variable, and for other cases. Note: Usually, static helpers are light-weight clean functions with a small count of arguments. Avoid putting your business logic and other complicated manipulations into helpers . Use widgets or other components instead of helpers in other cases. See also For more information about helpers, refer to http://www.yiiframework.com/doc-2.0/guide-helper-overview.html. And for examples of built-in helpers, see sources in the helpers directory of the framework, refer to https://github.com/yiisoft/yii2/tree/master/framework/helpers. Creating model behaviors There are many similar solutions in today's web applications. Leading products such as Google's Gmail are defining nice UI patterns; one of these is soft delete. Instead of a permanent deletion with multiple confirmations, Gmail allows users to immediately mark messages as deleted and then easily undo it. The same behavior can be applied to any object such as blog posts, comments, and so on. Let's create a behavior that will allow marking models as deleted, restoring models, selecting not yet deleted models, deleted models, and all models. In this recipe we'll follow a test-driven development approach to plan the behavior and test if the implementation is correct. 
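Before diving into the recipe, here is a minimal sketch of what such a soft-delete behavior could look like in Yii 2, just to make the idea concrete. The class, method, and column names (SoftDeleteBehavior, softDelete(), restore(), is_deleted) are assumptions for illustration only; the step-by-step implementation that follows actually builds a Markdown-processing behavior using the same mechanics (extending yii\base\Behavior and working through the owner model):

<?php

namespace app\behaviors;

use yii\base\Behavior;

// Hypothetical sketch: the owner model is assumed to have a boolean
// is_deleted column that marks a row as soft-deleted.
class SoftDeleteBehavior extends Behavior
{
    public $attribute = 'is_deleted';

    // Mark the owning model as deleted without removing the row.
    public function softDelete()
    {
        $this->owner->{$this->attribute} = true;
        return $this->owner->save(false, [$this->attribute]);
    }

    // Undo a previous soft delete.
    public function restore()
    {
        $this->owner->{$this->attribute} = false;
        return $this->owner->save(false, [$this->attribute]);
    }
}

Because public methods of a behavior are mixed into the component it is attached to, a model with this behavior attached could be used as $post->softDelete() and $post->restore().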
Getting ready Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html. Create two databases for working and for tests. Configure Yii to use the first database in your primary application in config/db.php. Make sure the test application uses a second database in tests/codeception/config/config.php. Create a new migration: <?php use yiidbMigration; class m160427_103115_create_post_table extends Migration { public function up() { $this->createTable('{{%post}}', [ 'id' => $this->primaryKey(), 'title' => $this->string()->notNull(), 'content_markdown' => $this->text(), 'content_html' => $this->text(), ]); } public function down() { $this->dropTable('{{%post}}'); } } Apply the migration to both working and testing databases: ./yii migrate tests/codeception/bin/yii migrate Create a Post model: <?php namespace appmodels; use appbehaviorsMarkdownBehavior; use yiidbActiveRecord; /** * @property integer $id * @property string $title * @property string $content_markdown * @property string $content_html */ class Post extends ActiveRecord { public static function tableName() { return '{{%post}}'; } public function rules() { return [ [['title'], 'required'], [['content_markdown'], 'string'], [['title'], 'string', 'max' => 255], ]; } } How to do it… Let's prepare a test environment, starting with defining the fixtures for the Post model. Create the tests/codeception/unit/fixtures/PostFixture.php file: <?php namespace apptestscodeceptionunitfixtures; use yiitestActiveFixture; class PostFixture extends ActiveFixture { public $modelClass = 'appmodelsPost'; public $dataFile = '@tests/codeception/unit/fixtures/data/post.php'; } Add a fixture data file in tests/codeception/unit/fixtures/data/post.php: <?php return [ [ 'id' => 1, 'title' => 'Post 1', 'content_markdown' => 'Stored *markdown* text 1', 'content_html' => "<p>Stored <em>markdown</em> text 1</p>n", ], ]; Then, we need to create a test case tests/codeception/unit/MarkdownBehaviorTest: . .php: <?php namespace apptestscodeceptionunit; use appmodelsPost; use apptestscodeceptionunitfixturesPostFixture; use yiicodeceptionDbTestCase; class MarkdownBehaviorTest extends DbTestCase { public function testNewModelSave() { $post = new Post(); $post->title = 'Title'; $post->content_markdown = 'New *markdown* text'; $this->assertTrue($post->save()); $this->assertEquals("<p>New <em>markdown</em> text</p>n", $post->content_html); } public function testExistingModelSave() { $post = Post::findOne(1); $post->content_markdown = 'Other *markdown* text'; $this->assertTrue($post->save()); $this->assertEquals("<p>Other <em>markdown</em> text</p>n", $post->content_html); } public function fixtures() { return [ 'posts' => [ 'class' => PostFixture::className(), ] ]; } } Run unit tests: codecept run unit MarkdownBehaviorTest and ensure that tests have not passed Codeception PHP Testing Framework v2.0.9 Powered by PHPUnit 4.8.27 by Sebastian Bergmann and contributors. Unit Tests (2) --------------------------------------------------------------------------- Trying to test ... MarkdownBehaviorTest::testNewModelSave Error Trying to test ... MarkdownBehaviorTest::testExistingModelSave Error --------------------------------------------------------------------------- Time: 289 ms, Memory: 16.75MB Now we need to implement a behavior, attach it to the model, and make sure the test passes. Create a new directory, behaviors. 
Under this directory, create the MarkdownBehavior class: <?php namespace appbehaviors; use yiibaseBehavior; use yiibaseEvent; use yiibaseInvalidConfigException; use yiidbActiveRecord; use yiihelpersMarkdown; class MarkdownBehavior extends Behavior { public $sourceAttribute; public $targetAttribute; public function init() { if (empty($this->sourceAttribute) || empty($this->targetAttribute)) { throw new InvalidConfigException('Source and target must be set.'); } parent::init(); } public function events() { return [ ActiveRecord::EVENT_BEFORE_INSERT => 'onBeforeSave', ActiveRecord::EVENT_BEFORE_UPDATE => 'onBeforeSave', ]; } public function onBeforeSave(Event $event) { if ($this->owner->isAttributeChanged($this->sourceAttribute)) { $this->processContent(); } } private function processContent() { $model = $this->owner; $source = $model->{$this->sourceAttribute}; $model->{$this->targetAttribute} = Markdown::process($source); } } Let's attach the behavior to the Post model: class Post extends ActiveRecord { ... public function behaviors() { return [ 'markdown' => [ 'class' => MarkdownBehavior::className(), 'sourceAttribute' => 'content_markdown', 'targetAttribute' => 'content_html', ], ]; } } Run the test and make sure it passes: Codeception PHP Testing Framework v2.0.9 Powered by PHPUnit 4.8.27 by Sebastian Bergmann and contributors. Unit Tests (2) --------------------------------------------------------------------------- Trying to test ... MarkdownBehaviorTest::testNewModelSave Ok Trying to test ... MarkdownBehaviorTest::testExistingModelSave Ok --------------------------------------------------------------------------- Time: 329 ms, Memory: 17.00MB That's it. We've created a reusable behavior and can use it for all future projects by just connecting it to a model. How it works… Let's start with the test case. Since we want to use a set of models, we will define fixtures. A fixture set is put into the DB each time the test method is executed. We will prepare unit tests for specifying how the behavior works: First, we test processing new model content. The behavior must convert Markdown text from a source attribute to HTML and store the second one to target attribute. Second, we test updated content of an existing model. After changing Markdown content and saving the model, we must get updated HTML content. Now let's move to the interesting implementation details. In behavior, we can add our own methods that will be mixed into the model that the behavior is attached to. We can also subscribe to our own component events. We are using it to add our own listener: public function events() { return [ ActiveRecord::EVENT_BEFORE_INSERT => 'onBeforeSave', ActiveRecord::EVENT_BEFORE_UPDATE => 'onBeforeSave', ]; } And now we can implement this listener: public function onBeforeSave(Event $event) { if ($this->owner->isAttributeChanged($this->sourceAttribute)) { $this->processContent(); } } In all methods, we can use the owner property to get the object the behavior is attached to. In general we can attach any behavior to your models, controllers, application, and other components that extend the yiibaseComponent class. We can also attach one behavior again and again to model for the processing of different attributes: class Post extends ActiveRecord { ... 
public function behaviors() { return [ [ 'class' => MarkdownBehavior::className(), 'sourceAttribute' => 'description_markdown', 'targetAttribute' => 'description_html', ], [ 'class' => MarkdownBehavior::className(), 'sourceAttribute' => 'content_markdown', 'targetAttribute' => 'content_html', ], ]; } } Besides, we can also extend the yiibaseAttributeBehavior class, like yiibehaviorsTimestampBehavior, to update specified attributes for any event. See also To learn more about behaviors and events, refer to the following pages: http://www.yiiframework.com/doc-2.0/guide-concept-behaviors.html http://www.yiiframework.com/doc-2.0/guide-concept-events.html For more information about Markdown syntax, refer to http://daringfireball.net/projects/markdown/. Creating components If you have some code that looks like it can be reused but you don't know if it's a behavior, widget, or something else, it's most probably a component. The component should be inherited from the yiibaseComponent class. Later on, the component can be attached to the application and configured using the components section of a configuration file. That's the main benefit compared to using just a plain PHP class. We are also getting behaviors, events, getters, and setters support. For our example, we'll implement a simple Exchange application component that will be able to get currency rates from the http://fixer.io site, attach them to the application, and use them. Getting ready Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html. How to do it… To get a currency rate, our component should send an HTTP GET query to a service URL, like http://api.fixer.io/2016-05-14?base=USD. The service must return all supported rates on the nearest working day: { "base":"USD", "date":"2016-05-13", "rates": { "AUD":1.3728, "BGN":1.7235, ... "ZAR":15.168, "EUR":0.88121 } } The component should extract needle currency from the response in a JSON format and return a target rate. Create a components directory in your application structure. 
Create the component class example with the following interface: <?php namespace appcomponents; use yiibaseComponent; class Exchange extends Component { public function getRate($source, $destination, $date = null) { } } Implement the component functional: <?php namespace appcomponents; use yiibaseComponent; use yiibaseInvalidConfigException; use yiibaseInvalidParamException; use yiicachingCache; use yiidiInstance; use yiihelpersJson; class Exchange extends Component { /** * @var string remote host */ public $host = 'http://api.fixer.io'; /** * @var bool cache results or not */ public $enableCaching = false; /** * @var string|Cache component ID */ public $cache = 'cache'; public function init() { if (empty($this->host)) { throw new InvalidConfigException('Host must be set.'); } if ($this->enableCaching) { $this->cache = Instance::ensure($this->cache, Cache::className()); } parent::init(); } public function getRate($source, $destination, $date = null) { $this->validateCurrency($source); $this->validateCurrency($destination); $date = $this->validateDate($date); $cacheKey = $this->generateCacheKey($source, $destination, $date); if (!$this->enableCaching || ($result = $this->cache->get($cacheKey)) === false) { $result = $this->getRemoteRate($source, $destination, $date); if ($this->enableCaching) { $this->cache->set($cacheKey, $result); } } return $result; } private function getRemoteRate($source, $destination, $date) { $url = $this->host . '/' . $date . '?base=' . $source; $response = Json::decode(file_get_contents($url)); if (!isset($response['rates'][$destination])) { throw new RuntimeException('Rate not found.'); } return $response['rates'][$destination]; } private function validateCurrency($source) { if (!preg_match('#^[A-Z]{3}$#s', $source)) { throw new InvalidParamException('Invalid currency format.'); } } private function validateDate($date) { if (!empty($date) && !preg_match('#d{4}-d{2}-d{2}#s', $date)) { throw new InvalidParamException('Invalid date format.'); } if (empty($date)) { $date = date('Y-m-d'); } return $date; } private function generateCacheKey($source, $destination, $date) { return [__CLASS__, $source, $destination, $date]; } } Attach our component in the config/console.php or config/web.php configuration files: 'components' => [ 'cache' => [ 'class' => 'yiicachingFileCache', ], 'exchange' => [ 'class' => 'appcomponentsExchange', 'enableCaching' => true, ], // ... db' => $db, ], We can now use a new component directly or via a get method: echo Yii::$app->exchange->getRate('USD', 'EUR'); echo Yii::$app->get('exchange')->getRate('USD', 'EUR', '2014-04-12'); Create a demonstration console controller: <?phpnamespace appcommands;use yiiconsoleController;class ExchangeController extends Controller{ public function actionTest($currency, $date = null) { echo Yii::$app->exchange->getRate('USD', $currency, $date) . PHP_EOL; }} And try to run any commands: $ ./yii exchange/test EUR > 0.90196 $ ./yii exchange/test EUR 2015-11-24 > 0.93888 $ ./yii exchange/test OTHER > Exception 'yiibaseInvalidParamException' with message 'Invalid currency format.' $ ./yii exchange/test EUR 2015/24/11 Exception 'yiibaseInvalidParamException' with message 'Invalid date format.' $ ./yii exchange/test ASD > Exception 'RuntimeException' with message 'Rate not found.' As a result you must see rate values in success cases or specific exceptions in error ones. In addition to creating your own components, you can do more. 
Overriding existing application components Most of the time there will be no need to create your own application components, since other types of extensions, such as widgets or behaviors, cover almost all types of reusable code. However, overriding core framework components is a common practice and can be used to customize the framework's behavior for your specific needs without hacking into the core. For example, to be able to format numbers using the Yii::app()->formatter->asNumber($value) method instead of the NumberHelper::format method from the Helpers recipe, follow the next steps: Extend the yiii18nFormatter component like the following: <?php namespace appcomponents; class Formatter extends yiii18nFormatter { public function asNumber($value, $decimal = 2) { return number_format($value, $decimal, '.', ','); } } Override the class of the built-in formatter component: 'components' => [ // ... formatter => [ 'class' => 'appcomponentsFormatter, ], // … ], Right now, we can use this method directly: echo Yii::app()->formatter->asNumber(1534635.2, 3); or as a new format for GridView and DetailView widgets: <?= yiigridGridView::widget([ 'dataProvider' => $dataProvider, 'columns' => [ 'id', 'created_at:datetime', 'title', 'value:number', ], ]) ?> You can also extend every existing component without overwriting its source code. How it works… To be able to attach a component to an application it can be extended from the yiibaseComponent class. Attaching is as simple as adding a new array to the components’ section of configuration. There, a class value specifies the component's class and all other values are set to a component through the corresponding component's public properties and setter methods. Implementation itself is very straightforward; We are wrapping http://api.fixer.io calls into a comfortable API with validators and caching. We can access our class by its component name using Yii::$app. In our case, it will be Yii::$app->exchange. See also For official information about components, refer to http://www.yiiframework.com/doc-2.0/guide-concept-components.html. For the NumberHelper class sources, see Helpers recipe. Summary In this article we learnt about the Yii extensions—helpers, behavior, and components. Helpers contains sets of helpful static methods for manipulating strings, files, arrays, and other subjects. Behaviors allow you to enhance the functionality of an existing component class without needing to change the class's inheritance. Components are the main building blocks of Yii applications. A component is an instance of CComponent or its derived class. Using a component mainly involves accessing its properties and raising/handling its events. Resources for Article: Further resources on this subject: Creating an Extension in Yii 2 [article] Atmosfall – Managing Game Progress with Coroutines [article] Optimizing Games for Android [article]

Using model serializers to eliminate duplicate code

Packt
23 Sep 2016
12 min read
In this article by Gastón C. Hillar, author of, Building RESTful Python Web Services, we will cover the use of model serializers to eliminate duplicate code and use of default parsing and rendering options. (For more resources related to this topic, see here.) Using model serializers to eliminate duplicate code The GameSerializer class declares many attributes with the same names that we used in the Game model and repeats information such as the types and the max_length values. The GameSerializer class is a subclass of the rest_framework.serializers.Serializer, it declares attributes that we manually mapped to the appropriate types, and overrides the create and update methods. Now, we will create a new version of the GameSerializer class that will inherit from the rest_framework.serializers.ModelSerializer class. The ModelSerializer class automatically populates both a set of default fields and a set of default validators. In addition, the class provides default implementations for the create and update methods. In case you have any experience with Django Web Framework, you will notice that the Serializer and ModelSerializer classes are similar to the Form and ModelForm classes. Now, go to the gamesapi/games folder folder and open the serializers.py file. Replace the code in this file with the following code that declares the new version of the GameSerializer class. The code file for the sample is included in the restful_python_chapter_02_01 folder. from rest_framework import serializers from games.models import Game class GameSerializer(serializers.ModelSerializer): class Meta: model = Game fields = ('id', 'name', 'release_date', 'game_category', 'played') The new GameSerializer class declares a Meta inner class that declares two attributes: model and fields. The model attribute specifies the model related to the serializer, that is, the Game class. The fields attribute specifies a tuple of string whose values indicate the field names that we want to include in the serialization from the related model. There is no need to override either the create or update methods because the generic behavior will be enough in this case. The ModelSerializer superclass provides implementations for both methods. We have reduced boilerplate code that we didn’t require in the GameSerializer class. We just needed to specify the desired set of fields in a tuple. Now, the types related to the game fields is included only in the Game class. Press Ctrl + C to quit Django’s development server and execute the following command to start it again. python manage.py runserver Using the default parsing and rendering options and move beyond JSON The APIView class specifies default settings for each view that we can override by specifying appropriate values in the gamesapi/settings.py file or by overriding the class attributes in subclasses. As previously explained the usage of the APIView class under the hoods makes the decorator apply these default settings. Thus, whenever we use the decorator, the default parser classes and the default renderer classes will be associated with the function views. By default, the value for the DEFAULT_PARSER_CLASSES is the following tuple of classes: ( 'rest_framework.parsers.JSONParser', 'rest_framework.parsers.FormParser', 'rest_framework.parsers.MultiPartParser' ) When we use the decorator, the API will be able to handle any of the following content types through the appropriate parsers when accessing the request.data attribute. 
application/json application/x-www-form-urlencoded multipart/form-data When we access the request.data attribute in the functions, Django REST Framework examines the value for the Content-Type header in the incoming request and determines the appropriate parser to parse the request content. If we use the previously explained default values, Django REST Framework will be able to parse the previously listed content types. However, it is extremely important that the request specifies the appropriate value in the Content-Type header. We have to remove the usage of the rest_framework.parsers.JSONParser class in the functions to make it possible to be able to work with all the configured parsers and stop working with a parser that only works with JSON. The game_list function executes the following two lines when request.method is equal to 'POST': game_data = JSONParser().parse(request) game_serializer = GameSerializer(data=game_data) We will remove the first line that uses the JSONParser and we will pass request.data as the data argument for the GameSerializer. The following line will replace the previous lines: game_serializer = GameSerializer(data=request.data) The game_detail function executes the following two lines when request.method is equal to 'PUT': game_data = JSONParser().parse(request) game_serializer = GameSerializer(game, data=game_data) We will make the same edits done for the code in the game_list function. We will remove the first line that uses the JSONParser and we will pass request.data as the data argument for the GameSerializer. The following line will replace the previous lines: game_serializer = GameSerializer(game, data=request.data) By default, the value for the DEFAULT_RENDERER_CLASSES is the following tuple of classes: ( 'rest_framework.renderers.JSONRenderer', 'rest_framework.renderers.BrowsableAPIRenderer', ) When we use the decorator, the API will be able to render any of the following content types in the response through the appropriate renderers when working with the rest_framework.response.Response object. application/json text/html By default, the value for the DEFAULT_CONTENT_NEGOTIATION_CLASS is the rest_framework.negotiation.DefaultContentNegotiation class. When we use the decorator, the API will use this content negotiation class to select the appropriate renderer for the response based on the incoming request. This way, when a request specifies that it will accept text/html, the content negotiation class selects the rest_framework.renderers.BrowsableAPIRenderer to render the response and generate text/html instead of application/json. We have to replace the usages of both the JSONResponse and HttpResponse classes in the functions with the rest_framework.response.Response class. The Response class uses the previously explained content negotiation features, renders the received data into the appropriate content type and returns it to the client. Now, go to the gamesapi/games folder folder and open the views.py file. Replace the code in this file with the following code that removes the JSONResponse class, uses the @api_view decorator for the functions and the rest_framework.response.Response class. The modified lines are highlighted. The code file for the sample is included in the restful_python_chapter_02_02 folder. 
from rest_framework.parsers import JSONParser from rest_framework import status from rest_framework.decorators import api_view from rest_framework.response import Response from games.models import Game from games.serializers import GameSerializer @api_view(['GET', 'POST']) def game_list(request): if request.method == 'GET': games = Game.objects.all() games_serializer = GameSerializer(games, many=True) return Response(games_serializer.data) elif request.method == 'POST': game_serializer = GameSerializer(data=request.data) if game_serializer.is_valid(): game_serializer.save() return Response(game_serializer.data, status=status.HTTP_201_CREATED) return Response(game_serializer.errors, status=status.HTTP_400_BAD_REQUEST) @api_view(['GET', 'PUT', 'POST']) def game_detail(request, pk): try: game = Game.objects.get(pk=pk) except Game.DoesNotExist: return Response(status=status.HTTP_404_NOT_FOUND) if request.method == 'GET': game_serializer = GameSerializer(game) return Response(game_serializer.data) elif request.method == 'PUT': game_serializer = GameSerializer(game, data=request.data) if game_serializer.is_valid(): game_serializer.save() return Response(game_serializer.data) return Response(game_serializer.errors, status=status.HTTP_400_BAD_REQUEST) elif request.method == 'DELETE': game.delete() return Response(status=status.HTTP_204_NO_CONTENT) After you save the previous changes, run the following command: http OPTIONS :8000/games/ The following is the equivalent curl command: curl -iX OPTIONS :8000/games/ The previous command will compose and send the following HTTP request: OPTIONS http://localhost:8000/games/. The request will match and run the views.game_list function, that is, the game_list function declared within the games/views.py file. We added the @api_view decorator to this function, and therefore, it is capable of determining the supported HTTP verbs, parsing and rendering capabilities. The following lines show the output: HTTP/1.0 200 OK Allow: GET, POST, OPTIONS, PUT Content-Type: application/json Date: Thu, 09 Jun 2016 21:35:58 GMT Server: WSGIServer/0.2 CPython/3.5.1 Vary: Accept, Cookie X-Frame-Options: SAMEORIGIN { "description": "", "name": "Game Detail", "parses": [ "application/json", "application/x-www-form-urlencoded", "multipart/form-data" ], "renders": [ "application/json", "text/html" ] } The response header includes an Allow key with a comma-separated list of HTTP verbs supported by the resource collection as its value: GET, POST, OPTIONS. As our request didn’t specify the allowed content type, the function rendered the response with the default application/json content type. The response body specifies the Content-type that the resource collection parses and the Content-type that it renders. Run the following command to compose and send and HTTP request with the OPTIONS verb for a game resource. Don’t forget to replace 3 with a primary key value of an existing game in your configuration: http OPTIONS :8000/games/3/ The following is the equivalent curl command: curl -iX OPTIONS :8000/games/3/ The previous command will compose and send the following HTTP request: OPTIONS http://localhost:8000/games/3/. The request will match and run the views.game_detail function, that is, the game_detail function declared within the games/views.py file. We also added the @api_view decorator to this function, and therefore, it is capable of determining the supported HTTP verbs, parsing and rendering capabilities. 
The following lines show the output: HTTP/1.0 200 OK Allow: GET, POST, OPTIONS Content-Type: application/json Date: Thu, 09 Jun 2016 20:24:31 GMT Server: WSGIServer/0.2 CPython/3.5.1 Vary: Accept, Cookie X-Frame-Options: SAMEORIGIN { "description": "", "name": "Game List", "parses": [ "application/json", "application/x-www-form-urlencoded", "multipart/form-data" ], "renders": [ "application/json", "text/html" ] } The response header includes an Allow key with comma-separated list of HTTP verbs supported by the resource as its value: GET, POST, OPTIONS, PUT. The response body specifies the content-type that the resource parses and the content-type that it renders, with the same contents received in the previous OPTIONS request applied to a resource collection, that is, to a games collection. When we composed and sent POST and PUT commands, we had to use the use the -H "Content-Type: application/json" option to indicate curl to send the data specified after the -d option as application/json instead of the default application/x-www-form-urlencoded. Now, in addition to application/json, our API is capable of parsing application/x-www-form-urlencoded and multipart/form-data data specified in the POST and PUT requests. Thus, we can compose and send a POST command that sends the data as application/x-www-form-urlencoded with the changes made to our API. We will compose and send an HTTP request to create a new game. In this case, we will use the -f option for HTTPie that serializes data items from the command line as form fields and sets the Content-Type header key to the application/x-www-form-urlencoded value. http -f POST :8000/games/ name='Toy Story 4' game_category='3D RPG' played=false release_date='2016-05-18T03:02:00.776594Z' The following is the equivalent curl command. Notice that we don’t use the -H option and curl will send the data in the default application/x-www-form-urlencoded: curl -iX POST -d '{"name":"Toy Story 4", "game_category":"3D RPG", "played": "false", "release_date": "2016-05-18T03:02:00.776594Z"}' :8000/games/ The previous commands will compose and send the following HTTP request: POST http://localhost:8000/games/ with the Content-Type header key set to the application/x-www-form-urlencoded value and the following data: name=Toy+Story+4&game_category=3D+RPG&played=false&release_date=2016-05-18T03%3A02%3A00.776594Z The request specifies /games/, and therefore, it will match '^games/$' and run the views.game_list function, that is, the updated game_detail function declared within the games/views.py file. As the HTTP verb for the request is POST, the request.method property is equal to 'POST', and therefore, the function will execute the code that creates a GameSerializer instance and passes request.data as the data argument for its creation. The rest_framework.parsers.FormParser class will parse the data received in the request, the code creates a new Game and, if the data is valid, it saves the new Game. If the new Game was successfully persisted in the database, the function returns an HTTP 201 Created status code and the recently persisted Game serialized to JSON in the response body. 
The following lines show an example response for the HTTP request, with the new Game object in the JSON response: HTTP/1.0 201 Created Allow: OPTIONS, POST, GET Content-Type: application/json Date: Fri, 10 Jun 2016 20:38:40 GMT Server: WSGIServer/0.2 CPython/3.5.1 Vary: Accept, Cookie X-Frame-Options: SAMEORIGIN { "game_category": "3D RPG", "id": 20, "name": "Toy Story 4", "played": false, "release_date": "2016-05-18T03:02:00.776594Z" } After the changes we made in the code, we can run the following command to see what happens when we compose and send an HTTP request with an HTTP verb that is not supported: http PUT :8000/games/ The following is the equivalent curl command: curl -iX PUT :8000/games/ The previous command will compose and send the following HTTP request: PUT http://localhost:8000/games/. The request will match and try to run the views.game_list function, that is, the game_list function declared within the games/views.py file. The @api_view decorator we added to this function doesn’t include 'PUT' in the string list with the allowed HTTP verbs, and therefore, the default behavior returns a 405 Method Not Allowed status code. The following lines show the output with the response from the previous request. A JSON content provides a detail key with a string value that indicates the PUT method is not allowed. HTTP/1.0 405 Method Not Allowed Allow: GET, OPTIONS, POST Content-Type: application/json Date: Sat, 11 Jun 2016 00:49:30 GMT Server: WSGIServer/0.2 CPython/3.5.1 Vary: Accept, Cookie X-Frame-Options: SAMEORIGIN { "detail": "Method "PUT" not allowed." } Summary This article covers the use of model serializers and how it is effective in removing duplicate code. Resources for Article: Further resources on this subject: Making History with Event Sourcing [article] Implementing a WCF Service in the Real World [article] WCF – Windows Communication Foundation [article]

How to Build and Deploy a Node App with Docker

John Oerter
20 Sep 2016
7 min read
How many times have you deployed your app that was working perfectly in your local environment to production, only to see it break? Whether it was directly related to the bug or feature you were working on, or another random issue entirely, this happens all too often for most developers. Errors like this not only slow you down, but they're also embarrassing. Why does this happen? Usually, it's because your development environment on your local machine is different from the production environment you're deploying to. The tenth factor of the Twelve-Factor App is Dev/prod parity. This means that your development, staging, and production environments should be as similar as possible. The authors of the Twelve-Factor App spell out three "gaps" that can be present. They are:

The time gap: A developer may work on code that takes days, weeks, or even months to go into production.
The personnel gap: Developers write code, ops engineers deploy it.
The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deployment uses Apache, MySQL, and Linux. (Source)

In this post, we will mostly focus on the tools gap, and how to bridge that gap in a Node application with Docker.

The Tools Gap In the Node ecosystem, the tools gap usually manifests itself either in differences in Node and npm versions, or differences in package dependency versions. If a package author publishes a breaking change in one of your dependencies or your dependencies' dependencies, it is entirely possible that your app will break on the next deployment (assuming you reinstall dependencies with npm install on every deployment), while it runs perfectly on your local machine. Although you can work around this issue using tools like npm shrinkwrap, adding Docker to the mix will streamline your deployment life cycle and minimize broken deployments to production.

Why Docker? Docker is unique because it can be used the same way in development and production. When you enable the architecture of your app to run inside containers, you can easily scale out and create small containers that can be composed together to make one awesome system. Then, you can mimic this architecture in development so you never have to guess how your app will behave in production. In regards to the time gap and the personnel gap, Docker makes it easier for developers to automate deployments, thereby decreasing time to production and making it easier for full-stack teams to own deployments.

Tools and Concepts When developing inside Docker containers, the two most important concepts are docker-compose and volumes. docker-compose helps define multi-container environments and the ability to run them with one command. Here are some of the more often used docker-compose commands:

docker-compose build: Builds images for services defined in docker-compose.yml
docker-compose up: Creates and starts services. This is the same as running docker-compose create && docker-compose start
docker-compose run: Runs a one-off command inside a container

Volumes allow you to mount files from the host machine into the container. When the files on your host machine change, they change inside the container as well. This is important so that we don't have to constantly rebuild containers during development every time we make a change. You can also use a tool like nodemon to automatically restart the node app on changes. Let's walk through some tips and tricks with developing Node apps inside Docker containers.
Set up Dockerfile and docker-compose.yml When you start a new project with Docker, you'll first want to define a barebones Dockerfile and docker-compose.yml to get you started. Here's an example Dockerfile:

FROM node:6.2.1
RUN useradd --user-group --create-home --shell /bin/false app-user
ENV HOME=/home/app-user
USER app-user
WORKDIR $HOME/app

This Dockerfile displays two best practices:

Favor exact version tags over floating tags such as latest. Node releases come often these days, and you don't want to implicitly upgrade when building your container on another machine. By specifying a version such as 6.2.1, you ensure that anyone who builds the image will always be working from the same Node version.

Create a new user to run the app inside the container. Without this step, everything would run under root in the container. You certainly wouldn't do that on a physical machine, so don't do it in Docker containers either.

Here's an example starter docker-compose.yml:

web:
  build: .
  volumes:
    - .:/home/app-user/app

Pretty simple, right? Here we are telling Docker to build the web service based on our Dockerfile and create a volume from our current host directory to /home/app-user/app inside the container. This simple setup lets you build the container with docker-compose build and then run bash inside it with docker-compose run --rm web /bin/bash. Now, it's essentially the same as if you were SSH'd into a remote server or working off a VM, except that any file you create inside the container will be on your host machine and vice versa. With that in mind, you can bootstrap your Node app from inside your container using npm init -y and npm shrinkwrap. Then, you can install any modules you need such as Express.

Install node modules on build With that done, we need to update our Dockerfile to install dependencies from npm when the image is built. Here is the updated Dockerfile:

FROM node:6.2.1
RUN useradd --user-group --create-home --shell /bin/false app-user
ENV HOME=/home/app-user
COPY package.json npm-shrinkwrap.json $HOME/app/
RUN chown -R app-user:app-user $HOME/*
USER app-user
WORKDIR $HOME/app
RUN npm install

Notice that we had to change the ownership of the copied files to app-user. This is because files copied into a container are automatically owned by root.

Add a volume for the node_modules directory We also need to make an update to our docker-compose.yml to make sure that our modules are installed inside the container properly.

web:
  build: .
  volumes:
    - .:/home/app-user/app
    - /home/app-user/app/node_modules

Without adding a data volume to /home/app-user/app/node_modules, the node_modules wouldn't exist at runtime in the container because our host directory, which won't contain the node_modules directory, would be mounted and hide the node_modules directory that was created when the container was built. For more information, see this Stack Overflow post.

Running your app Once you've got an entry point to your app ready to go, simply add it as a CMD in your Dockerfile:

CMD ["node", "index.js"]

This will automatically start your app on docker-compose up. Running tests inside your container is easy as well:

docker-compose run --rm web npm test

You could easily hook this into CI.

Production Now going to production with your Docker-powered Node app is a breeze! Just use docker-compose again. You will probably want to define another docker-compose.yml that is especially written for production use. This means removing volumes, binding to different ports, setting NODE_ENV=production, and so on.
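As a rough sketch of what such a production-oriented file might contain (the port mapping and environment values below are assumptions for illustration, not taken from this article):

# docker-compose.production.yml (hypothetical example)
web:
  build: .
  ports:
    - '80:3000'
  environment:
    - NODE_ENV=production

Note that when compose files are merged with -f, list-style options such as ports and volumes from both files are combined rather than replaced, so if you truly want to drop the development volumes in production you may prefer a fully standalone production compose file instead of an override.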
Once you have a production config file, you can tell docker-compose to use it, like so: docker-compose -f docker-compose.yml -f docker-compose.production.yml up The -f lets you specify a list of files that are merged in the order specified. Here is a complete Dockerfile and docker-compose.yml for reference: # Dockerfile FROM node:6.2.1 RUN useradd --user-group --create-home --shell /bin/false app-user ENV HOME=/home/app-user COPY package.json npm-shrinkwrap.json $HOME/app/ RUN chown -R app-user:app-user $HOME/* USER app-user WORKDIR $HOME/app RUN npm install CMD ["node", "index.js"] # docker-compose.yml web: build: . ports: - '3000:3000' volumes: - .:/home/app-user/app - /home/app-user/app/node_modules About the author John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs here.

article-image-hello-tdd
Packt
14 Sep 2016
6 min read
Save for later

Hello TDD!

Packt
14 Sep 2016
6 min read
In this article by Gaurav Sood, the author of the book Scala Test-Driven Development, we cover the basics of Test-Driven Development. We will explore: What is Test-Driven Development? What is the need for Test-Driven Development? Brief introduction to Scala and SBT (For more resources related to this topic, see here.) What is Test-Driven Development? Test-Driven Development, or TDD (as it is commonly referred to), is the practice of writing your tests before writing any application code. This consists of an iterative cycle of writing a failing test, writing just enough code to make it pass, and then refactoring. This process is also referred to as Red-Green-Refactor-Repeat. TDD became more prevalent with the use of agile software development processes, though it can be used just as easily with any of agile's predecessors, such as Waterfall. Though TDD is not specifically mentioned in the agile manifesto (http://agilemanifesto.org), it has become a standard methodology used with agile. That said, you can still use agile without using TDD. Why TDD? The need for TDD arises from the fact that there can be constant changes to the application code. This becomes more of a problem when we are using an agile development process, as it is inherently iterative. Here are some of the advantages which underpin the need for TDD: Code quality: Tests by themselves make the programmer more confident of their code. Programmers can be sure of the syntactic and semantic correctness of their code. Evolving architecture: Purely test-driven application code gives way to an evolving architecture. This means that we do not have to predefine our architectural boundaries and design patterns. As the application grows, so does the architecture. This results in an application that is flexible towards future changes. Avoids over-engineering: Tests that are written before the application code define and document the boundaries. These tests also document the requirements and application code. Agile purists normally regard comments inside the code as a smell. According to them, your tests should document your code. Since all the boundaries are predefined in the tests, it is hard to write application code that breaches these boundaries. This, however, assumes that TDD is followed religiously. Paradigm shift: When I started with TDD, I noticed that the first question I asked myself after looking at a problem was, "How can I solve it?" This, however, is counterproductive. TDD forces the programmer to think about the testability of the solution before its implementation. Understanding how to test a problem means gaining a better understanding of the problem and its edge cases. This in turn can result in refinement of the requirements or discovery of new requirements. It has now become impossible for me not to think about the testability of a problem before the solution. Now the first question I ask myself is, "How can I test it?" Maintainable code: I have always found it easier to work on an application that has historically been test-driven rather than on one that is not. Why? Because when I make a change to the existing code, the existing tests make sure that I do not break any existing functionality. This results in highly maintainable code, where many programmers can collaborate simultaneously. Brief introduction to Scala and SBT Let us look at Scala and SBT briefly. It is assumed that the reader is familiar with Scala, so we will not go into much depth. What is Scala Scala is a general-purpose programming language. Scala is an acronym for Scalable Language.
This reflects the vision of its creators of making Scala a language that grows with the programmer's experience of it. The fact that Scala and Java objects can be freely mixed makes the transition from Java to Scala quite easy. Scala is also a full-blown functional language. Unlike Haskell, which is a purely functional language, Scala allows interoperability with Java and supports object-oriented programming. Scala also allows the use of both pure and impure functions. Impure functions have side effects like mutation, I/O, and exceptions. A purist approach to Scala programming encourages the use of pure functions only. Scala is a type-safe JVM language that incorporates both object-oriented and functional programming into an extremely concise, logical, and extraordinarily powerful language. Why Scala? Here are some advantages of using Scala: A functional solution to a problem is always better: This is my personal view and open to contention. Elimination of mutation from application code allows an application to be run in parallel across hosts and cores without any deadlocks. Better concurrency model: Scala has an actor model that is better than Java's model of locks on threads. Concise code: Scala code is more concise than its more verbose cousin, Java. Type safety/static typing: Scala does type checking at compile time. Pattern matching: Case statements in Scala are super powerful. Inheritance: Mixin traits are great, and they definitely reduce code repetition. There are other features of Scala, like closures and monads, which need a deeper understanding of functional language concepts to learn. Scala Build Tool Scala Build Tool (SBT) is a build tool that allows compiling, running, testing, packaging, and deployment of your code. SBT is mostly used with Scala projects, but it can just as easily be used for projects in other languages. Here, we will be using SBT as a build tool for managing our project and running our tests. SBT is written in Scala and can use many of the features of the Scala language. Build definitions for SBT are also written in Scala. These definitions are both flexible and powerful. SBT also allows the use of plugins and dependency management. If you have used a build tool like Maven or Gradle in any of your previous incarnations, you will find SBT a breeze. Why SBT? Better dependency management Ivy-based dependency management Only-update-on-request model Can launch REPL in project context Continuous command execution Scala language support for creating tasks Resources for learning Scala Here are a few of the resources for learning Scala: http://www.scala-lang.org/ https://www.coursera.org/course/progfun https://www.manning.com/books/functional-programming-in-scala http://www.tutorialspoint.com/scala/index.htm Resources for SBT Here are a few of the resources for learning SBT: http://www.scala-sbt.org/ https://twitter.github.io/scala_school/sbt.html Summary In this article we learned what TDD is and why to use it. We also learned about Scala and SBT. Resources for Article: Further resources on this subject: Overview of TDD [article] Understanding TDD [article] Android Application Testing: TDD and the Temperature Converter [article]

article-image-mapping-requirements-modular-web-shop-app
Packt
07 Sep 2016
11 min read
Save for later

Mapping Requirements for a Modular Web Shop App

Packt
07 Sep 2016
11 min read
In this article by Branko Ajzele, author of the book Modular Programming with PHP 7, we will see that building a software application from the ground up requires diverse skills, as it involves more than just writing code. Writing down functional requirements and sketching out a wireframe are often among the first steps in the process, especially if we are working on a client project. These steps are usually done by roles other than the developer, as they require certain insight into the client's business case, user behavior, and the like. Being part of a larger development team means that we, as developers, usually get requirements, designs, and wireframes, and then start coding against them. Delivering projects by oneself makes it tempting to skip these steps and get started with code alone. More often than not, this is an unproductive approach. Laying down functional requirements and a few wireframes is a skill worth knowing and following, even if one is just a developer. (For more resources related to this topic, see here.) Later in this article, we will go over a high-level application requirement, alongside a rough wireframe. In this article, we will be covering the following topics: Defining application requirements Wireframing Defining a technology stack Defining application requirements We need to build a simple, but responsive web shop application. In order to do so, we need to lay out some basic requirements. The types of requirements we are interested in at the moment are those that touch upon interactions between a user and a system. The two most common techniques to specify requirements in regards to user usage are use cases and user stories. User stories are a less formal, yet descriptive enough, way to outline these requirements. Using user stories, we encapsulate the customer and store manager actions as mentioned here. A customer should be able to do the following: Browse through static info pages (about us, customer service) Reach out to the store owner via a contact form Browse the shop categories See product details (price, description) See the product image with a large view (zoom) See items on sale See best sellers Add the product to the shopping cart Create a customer account Update customer account info Retrieve a lost password Check out See the total order cost Choose among several payment methods Choose among several shipment methods Get an email notification after an order has been placed Check order status Cancel an order See order history A store manager should be able to do the following: Create a product (with the minimum following attributes: title, price, sku, url-key, description, qty, category, and image) Upload a picture to the product Update and delete a product Create a category (with the minimum following attributes: title, url-key, description, and image) Upload a picture to a category Update and delete a category Be notified if a new sales order has been created Be notified if a new sales order has been canceled See existing sales orders by their statuses Update the status of the order Disable a customer account Delete a customer account User stories are a convenient, high-level way of writing down application requirements, and they are especially useful in an agile mode of development. Wireframing With user stories laid out, let's shift our focus to actual wireframing. For reasons we will get into later on, our wireframing efforts will be focused around the customer perspective. There are numerous wireframing tools out there, both free and commercial.
Some commercial tools like https://ninjamock.com, which we will use for our examples, still provide a free plan. This can be very handy for personal projects, as it saves us a lot of time. The starting point of every web application is its home page. The following wireframe illustrates our web shop app's homepage: Here we can see a few sections determining the page structure. The header is comprised of a logo, category menu, and user menu. The requirements don't say anything about category structure, and we are building a simple web shop app, so we are going to stick to a flat category structure, without any sub-categories. The user menu will initially show Register and Login links, until the user is actually logged in, in which case the menu will change as shown in following wireframes. The content area is filled with best sellers and on sale items, each of which have an image, title, price, and Add to Cart button defined. The footer area contains links to mostly static content pages, and a Contact Us page. The following wireframe illustrates our web shop app's category page: The header and footer areas remain conceptually the same across the entire site. The content area has now changed to list products within any given category. Individual product areas are rendered in the same manner as it is on the home page. Category names and images are rendered above the product list. The width of a category image gives some hints as to what type of images we should be preparing and uploading onto our categories. The following wireframe illustrates our web shop app's product page: The content area here now changes to list individual product information. We can see a large image placeholder, title, sku, stock status, price, quantity field, Add to Cart button, and product description being rendered. The IN STOCK message is to be displayed when an item is available for purchase and OUT OF STOCK when an item is no longer available. This is to be related to the product quantity attribute. We also need to keep in mind the "See the product image with a big view (zoom)" requirement, where clicking on an image would zoom into it. The following wireframe illustrates our web shop app's register page: The content area here now changes to render a registration form. There are many ways that we can implement the registration system. More often than not, the minimal amount of information is asked on a registration screen, as we want to get the user in as quickly as possible. However, let's proceed as if we are trying to get more complete user information right here on the registration screen. We ask not just for an e-mail and password, but for entire address information as well. The following wireframe illustrates our web shop app's login page: The content area here now changes to render a customer login and forgotten password form. We provide the user with Email and Password fields in case of login, or just an Email field in case of a password reset action. The following wireframe illustrates our web shop app's customer account page: The content area here now changes to render the customer account area, visible only to logged in customers. Here we see a screen with two main pieces of information. The customer information being one, and order history being the other. The customer can change their e-mail, password, and other address information from this screen. Furthermore, the customer can view, cancel, and print all of their previous orders. The My Orders table lists orders top to bottom, from newest to oldest. 
Though not specified by the user stories, the order cancelation should work only on pending orders. This is something that we will touch upon in more detail later on. This is also the first screen that shows the state of the user menu when the user is logged in. We can see a dropdown showing the user's full name, My Account, and Sign Out links. Right next to it, we have the Cart (%s) link, which is to list exact quantities in a cart. The following wireframe illustrates our web shop app's checkout cart page: The content area here now changes to render the cart in its current state. If the customer has added any products to the cart, they are to be listed here. Each item should list the product title, individual price, quantity added, and subtotal. The customer should be able to change quantities and press the Update Cart button to update the state of the cart. If 0 is provided as the quantity, clicking the Update Cart button will remove such an item from the cart. Cart quantities should at all time reflect the state of the header menu Cart (%s) link. The right-hand side of a screen shows a quick summary of current order total value, alongside a big, clear Go to Checkout button. The following wireframe illustrates our web shop app's checkout cart shipping page: The content area here now changes to render the first step of a checkout process, the shipping information collection. This screen should not be accessible for non-logged in customers. The customer can provide us with their address details here, alongside a shipping method selection. The shipping method area lists several shipping methods. On the right hand side, the collapsible order summary section is shown, listing current items in the cart. Below it, we have the cart subtotal value and a big clear Next button. The Next button should trigger only when all of the required information is provided, in which case it should take us to payment information on the checkout cart payment page. The following wireframe illustrates our web shop app's checkout cart payment page: The content area here now changes to render the second step of a checkout process, the payment information collection. This screen should not be accessible for non-logged in customers. The customer is presented with a list of available payment methods. For the simplicity of the application, we will focus only on flat/fixed payments, nothing robust such as PayPal or Stripe. On the right-hand side of the screen, we can see a collapsible Order summary section, listing current items in the cart. Below it, we have the order totals section, individually listing Cart Subtotal, Standard Delivery, Order Total, and a big clear Place Order button. The Place Order button should trigger only when all of the required information is provided, in which case it should take us to the checkout success page. The following wireframe illustrates our web shop app's checkout success page: The content area here now changes to output the checkout successful message. Clearly this page is only visible to logged in customers that just finished the checkout process. The order number is clickable and links to the My Account area, focusing on the exact order. By reaching this screen, both the customer and store manager should receive a notification email, as per the Get email notification after order has been placed and Be notified if the new sales order has been created requirements. With this, we conclude our customer facing wireframes. 
In regards to store manager user story requirements, we will simply define a landing administration interface for now, as shown in the following screenshot: Using the framework later on, we will get a complete auto-generated CRUD interface for the multiple Add New and List & Manage links. The access to this interface and its links will be controlled by the framework's security component, since this user will not be a customer or any user in the database as such. Defining a technology stack Once the requirements and wireframes are set, we can focus our attention to the selection of a technology stack. Choosing the right one in this case, is more of a matter of preference, as application requirements for the most part can be easily met by any one of those frameworks. Our choice however, falls onto Symfony. Aside from PHP frameworks, we still need a CSS framework to deliver some structure, styling, and responsiveness within the browser on the client side. Since the focus of this book is on PHP technologies, let's just say we choose the Foundation CSS framework for that task. Summary Creating web applications can be a tedious and time consuming task. Web shops probably being one of the most robust and intensive type of application out there, as they encompass a great deal of features. There are many components involved in delivering the final product; from database, server side (PHP) code to client side (HTML, CSS, and JavaScript) code. In this article, we started off by defining some basic user stories which in turn defined high-level application requirements for our small web shop. Adding wireframes to the mix helped us to visualize the customer facing interface, while the store manager interface is to be provided out of the box by the framework. We further glossed over two of the most popular frameworks that support modular application design. We turned our attention to Symfony as server side technology and Foundation as a client side responsive framework. Resources for Article: Further resources on this subject: Running Simpletest and PHPUnit [article] Understanding PHP basics [article] PHP Magic Features [article]
article-image-setting-mongodb
Packt
12 Aug 2016
10 min read
Save for later

Setting up MongoDB

Packt
12 Aug 2016
10 min read
In this article by Samer Buna, author of the book Learning GraphQL and Relay, we're mostly going to be talking about how an API is nothing without access to a database. Let's set up a local MongoDB instance, add some data in there, and make sure we can access that data through our GraphQL schema. (For more resources related to this topic, see here.) MongoDB can be locally installed on multiple platforms. Check the documentation site for instructions for your platform (https://docs.mongodb.com/manual/installation/). For Mac, the easiest way is probably Homebrew: ~ $ brew install mongodb Create a db folder inside a data folder. The default location is /data/db: ~ $ sudo mkdir -p /data/db Change the owner of the /data folder to be the current logged-in user: ~ $ sudo chown -R $USER /data Start the MongoDB server: ~ $ mongod If everything worked correctly, we should be able to open a new terminal and test the mongo CLI: ~/graphql-project $ mongo MongoDB shell version: 3.2.7 connecting to: test > db.getName() test > We're using MongoDB version 3.2.7 here. Make sure that you have this version or a newer version of MongoDB. Let's go ahead and create a new collection to hold some test data. Let's name that collection users: > db.createCollection("users") { "ok" : 1 } Now we can use the users collection to add documents that represent users. We can use the MongoDB insertOne() function for that: > db.users.insertOne({ firstName: "John", lastName: "Doe", email: "john@example.com" }) We should see an output like: { "acknowledged" : true, "insertedId" : ObjectId("56e729d36d87ae04333aa4e1") } Let's go ahead and add another user: > db.users.insertOne({ firstName: "Jane", lastName: "Doe", email: "jane@example.com" }) We can now verify that we have two user documents in the users collection using: > db.users.count() 2 MongoDB has a built-in unique object ID, which you can see in the output for insertOne(). Now that we have a running MongoDB and we have some test data in there, it's time to see how we can read this data using a GraphQL API. To communicate with MongoDB from a Node.js application, we need to install a driver. There are many options that we can choose from, but GraphQL requires a driver that supports promises. We will use the official MongoDB Node.js driver, which supports promises. Instructions on how to install and run the driver can be found at: https://docs.mongodb.com/ecosystem/drivers/node-js/. To install the MongoDB official Node.js driver under our graphql-project app, we do: ~/graphql-project $ npm install --save mongodb └─┬ mongodb@2.2.4 We can now use this mongodb npm package to connect to our local MongoDB server from within our Node application. In index.js: const mongodb = require('mongodb'); const assert = require('assert'); const MONGO_URL = 'mongodb://localhost:27017/test'; mongodb.MongoClient.connect(MONGO_URL, (err, db) => { assert.equal(null, err); console.log('Connected to MongoDB server'); // The readline interface code }); The MONGO_URL variable value should not be hardcoded in code like this. Instead, we can use a node process environment variable to set it to a certain value before executing the code. On a production machine, we would be able to use the same code and set the process environment variable to a different value.
Use the export command to set the environment variable value: export MONGO_URL=mongodb://localhost:27017/test Then in the Node code, we can read the exported value by using: process.env.MONGO_URL If we now execute the node index.js command, we should see the Connected to MongoDB server line right before we ask for the Client Request. At this point, the Node.js process will not exit after our interaction with it. We'll need to force exit the process with Ctrl + C to restart it. Let's start our database API with a simple field that can answer this question: How many total users do we have in the database? The query could be something like: { usersCount } To be able to use a MongoDB driver call inside our schema main.js file, we need access to the db object that the MongoClient.connect() function exposed for us in its callback. We can use the db object to count the user documents by simply running the promise: db.collection('users').count() .then(usersCount => console.log(usersCount)); Since we only have access to the db object in index.js within the connect() function's callback, we need to pass a reference to that db object to our graphql() function. We can do that using the fourth argument for the graphql() function, which accepts a contextValue object of globals, and the GraphQL engine will pass this context object to all the resolver functions as their third argument. Modify the graphql function call within the readline interface in index.js to be: graphql.graphql(mySchema, inputQuery, {}, { db }).then(result => { console.log('Server' Answer :', result.data); db.close(() => rli.close()); }); The third argument to the graphql() function is called the rootValue, which gets passed as the first argument to the resolver function on the top level type. We are not using that feature here. We passed the connected database object db as part of the global context object. This will enable us to use db within any resolver function. Note also how we're now closing the rli interface within the callback for the operation that closes the db. We should not leave any open db connections behind. Here's how we can now use the resolver third argument to resolve our usersCount top-level field with the db count() operation: fields: { // "hello" and "diceRoll"..." usersCount: { type: GraphQLInt, resolve: (_, args, { db }) => db.collection('users').count() } } A couple of things to notice about this code: We destructured the db object from the third argument for the resolve() function so that we can use it directly (instead of context.db). We returned the promise itself from the resolve() function. The GraphQL executor has native support for promises. Any resolve() function that returns a promise will be handled by the executor itself. The executor will either successfully resolve the promise and then resolve the query field with the promise-resolved value, or it will reject the promise and return an error to the user. We can test our query now: ~/graphql-project $ node index.js Connected to MongoDB server Client Request: { usersCount } Server Answer : { usersCount: 2 } *** #GitTag: chapter1-setting-up-mongodb *** Setting up an HTTP interface Let's now see how we can use the graphql() function under another interface, an HTTP one. We want our users to be able to send us a GraphQL request via HTTP. 
For example, to ask for the same usersCount field, we want the users to do something like: /graphql?query={usersCount} We can use the Express.js node framework to handle and parse HTTP requests, and within an Express.js route, we can use the graphql() function. For example (don't add these lines yet): const app = express(); app.use('/graphql', (req, res) => { // use graphql.graphql() to respond with JSON objects }); However, instead of manually handling the req/res objects, there is a GraphQL Express.js middleware that we can use, express-graphql. This middleware wraps the graphql() function and prepares it to be used by Express.js directly. Let's go ahead and bring in both the Express.js library and this middleware: ~/graphql-project $ npm install --save express express-graphql ├─┬ express@4.14.0 └─┬ express-graphql@0.5.3 In index.js, we can now import both express and the express-graphql middleware: const graphqlHTTP = require('express-graphql'); const express = require('express'); const app = express(); With these imports, the middleware main function will now be available as graphqlHTTP(). We can now use it in an Express route handler. Inside the MongoClient.connect() callback, we can do: app.use('/graphql', graphqlHTTP({ schema: mySchema, context: { db } })); app.listen(3000, () => console.log('Running Express.js on port 3000') ); Note that at this point we can remove the readline interface code as we are no longer using it. Our GraphQL interface from now on will be an HTTP endpoint. The app.use line defines a route at /graphql and delegates the handling of that route to the express-graphql middleware that we imported. We pass two objects to the middleware, the mySchema object, and the context object. We're not passing any input query here because this code just prepares the HTTP endpoint, and we will be able to read the input query directly from a URL field. The app.listen() function is the call we need to start our Express.js app. Its first argument is the port to use, and its second argument is a callback we can use after Express.js has started. We can now test our HTTP-mounted GraphQL executor with: ~/graphql-project $ node index.js Connected to MongoDB server Running Express.js on port 3000 In a browser window go to: http://localhost:3000/graphql?query={usersCount} *** #GitTag: chapter1-setting-up-an-http-interface *** The GraphiQL editor The graphqlHTTP() middleware function accepts another property on its parameter object graphiql, let's set it to true: app.use('/graphql', graphqlHTTP({ schema: mySchema, context: { db }, graphiql: true })); When we restart the server now and navigate to http://localhost:3000/graphql, we'll get an instance of the GraphiQL editor running locally on our GraphQL schema: GraphiQL is an interactive playground where we can explore our GraphQL queries and mutations before we officially use them. GraphiQL is written in React and GraphQL, and it runs completely within the browser. GraphiQL has many powerful editor features such as syntax highlighting, code folding, and error highlighting and reporting. Thanks to GraphQL introspective nature, GraphiQL also has intelligent type-ahead of fields, arguments, and types. Put the cursor in the left editor area, and type a selection set: { } Place the cursor inside that selection set and press Ctrl + space. 
You should see a list of all fields that our GraphQL schema support, which are the three fields that we have defined so far (hello, diceRoll, and usersCount): If Ctrl +space does not work, try Cmd + space, Alt + space, or Shift + space. The __schema and __type fields can be used to introspectively query the GraphQL schema about what fields and types it supports. When we start typing, this list starts to get filtered accordingly. The list respects the context of the cursor, if we place the cursor inside the arguments of diceRoll(), we'll get the only argument we defined for diceRoll, the count argument. Go ahead and read all the root fields that our schema support, and see how the data gets reported on the right side with the formatted JSON object: *** #GitTag: chapter1-the-graphiql-editor *** Summary In this article, we learned how to set up a local MongoDB instance, add some data in there, so that we can access that data through our GraphQL schema. Resources for Article: Further resources on this subject: Apache Solr and Big Data – integration with MongoDB [article] Getting Started with Java Driver for MongoDB [article] Documents and Collections in Data Modeling with MongoDB [article]

article-image-laravel-50-essentials
Packt
12 Aug 2016
9 min read
Save for later

Laravel 5.0 Essentials

Packt
12 Aug 2016
9 min read
In this article by Alfred Nutile from the book, Laravel 5.x Cookbook, we will learn the following topics: Setting up Travis to Auto Deploy when all is Passing Working with Your .env File Testing Your App on Production with Behat (For more resources related to this topic, see here.) Setting up Travis to Auto Deploy when all is Passing Level 0 of any work should be getting a deployment workflow setup. What that means in this case is that a push to GitHub will trigger our Continuous Integration (CI). And then from the CI, if the tests are passing, we trigger the deployment. In this example I am not going to hit the URL Forge gives you but I am going to send an Artifact to S3 and then have call CodeDeploy to deploy this Artifact. Getting ready… You really need to see the section before this, otherwise continue knowing this will make no sense. How to do it… Install the travis command line tool in Homestead as noted in their docs https://github.com/travis-ci/travis.rb#installation. Make sure to use Ruby 2.x: sudo apt-get install ruby2.0-dev sudo gem install travis -v 1.8.2 --no-rdoc --no-ri Then in the recipe folder I run the command > travis setup codedeploy I answer all the questions keeping in mind:     The KEY and SECRET are the ones we made of the I AM User in the Section before this     The S3 KEY is the filename not the KEY we used for a above. So in my case I just use the name again of the file latest.zip since it sits inside the recipe-artifact bucket. Finally I open the .travis.yml file, which the above modifies and I update the before-deploy area so the zip command ignores my .env file otherwise it would overwrite the file on the server. How it works… Well if you did the CodeDeploy section before this one you will know this is not as easy as it looks. After all the previous work we are able to, with the one command travis setup codedeploy punch in securely all the needed info to get this passing build to deploy. So after phpunit reports things are passing we are ready. With that said we had to have a lot of things in place, S3 bucket to put the artifact, permission with the KEY and SECRET to access the Artifact and CodeDeploy, and a CodeDeploy Group and Application to deploy to. All of this covered in the previous section. After that it is just the magic of Travis and CodeDeploy working together to make this look so easy. See also… Travis Docs: https://docs.travis-ci.com/user/deployment/codedeploy https://github.com/travis-ci/travis.rb https://github.com/travis-ci/travis.rb#installation Working with Your .env File The workflow around this can be tricky. Going from Local, to TravisCI, to CodeDeploy and then to AWS without storing your info in .env on GitHub can be a challenge. What I will show here are some tools and techniques to do this well. Getting ready…. A base install is fine I will use the existing install to show some tricks around this. How to do it… Minimize using Conventions as much as possible     config/queue.php I can do this to have one or more Queues     config/filesystems.php Use the Config file as much as possible. For example this is in my .env If I add config/marvel.php and then make it look like this My .env can be trimmed down by KEY=VALUES later on I can call to those:    Config::get('marvel.MARVEL_API_VERSION')    Config::get('marvel.MARVEL_API_BASE_URL') Now to easily send to Staging or Production using the EnvDeployer library >composer require alfred-nutile-inc/env-deployer:dev-master Follow the readme.md for that library. 
Then as it says in the docs setup your config file so that it matches the destination IP/URL and username and path for those services. I end up with this config file config/envdeployer.php Now the trick to this library is you start to enter KEY=VALUES into your .env file stacked on top of each other. For example, my database settings might look like this. so now I can type: >php artisan envdeployer:push production Then this will push over SSH your .env to production and swap out the related @production values for each KEY they are placed above. How it works… The first mindset to follow is conventions before you put a new KEY=VALUE into the .env file set back and figure out defaults and conventions around what you already must have in this file. For example must haves, APP_ENV, and then I always have APP_NAME so those two together do a lot to make databases, queues, buckets and so on. all around those existing KEYs. It really does add up, whether you are working alone or on a team focus on these conventions and then using the config/some.php file workflow to setup defaults. Then libraries like the one I use above that push this info around with ease. Kind of like Heroku you can command line these settings up to the servers as needed. See also… Laravel Validator for the .env file: https://packagist.org/packages/mathiasgrimm/laravel-env-validator Laravel 5 Fundamentals: Environments and Configuration: https://laracasts.com/series/laravel-5-fundamentals/episodes/6 Testing Your App on Production with Behat So your app is now on Production! Start clicking away at hundreds of little and big features so you can make sure everything went okay or better yet run Behat! Behat on production? Sounds crazy but I will cover some tips on how to do this including how to setup some remote conditions and clean up when you are done. Getting ready… Any app will do. In my case I am going to hit production with some tests I made earlier. How to do it… Tag a Behat test @smoke or just a Scenario that you know it is safe to run on Production for example features/home/search.feature. Update behat.yml adding a profile call production. Then run > vendor/bin/behat -shome_ui --tags=@smoke --profile=production I run an Artisan command to run all these Then you will see it hit the production url and only the Scenarios you feel are safe for Behat. Another method is to login as a demo user. And after logging in as that user you can see data that is related to that user only so you can test authenticated level of data and interactions. For example database/seeds/UserTableSeeder.php add the demo user to the run method Then update your .env. Now push that .env setting up to Production.  >php artisan envdeploy:push production Then we update our behat.yml file to run this test even on Production features/auth/login.feature. Now we need to commit our work and push to GitHub so TravisCI can deploy and changes: Since this is a seed and not a migration I need to rerun seeds on production. Since this is a new site, and no one has used it this is fine BUT of course this would have been a migration if I had to do this later in the applications life. Now let's run this test, from our vagrant box > vendor/bin/behat -slogin_ui --profile=production But it fails because I am setting up the start of this test for my local database not the remote database features/bootstrap/LoginPageUIContext.php. So I can basically begin to create a way to setup the state of the world on the remote server. 
> php artisan make:controller SetupBehatController And update that controller to do the setup. And make the route app/Http/routes.php Then update the behat test features/bootstrap/LoginPageUIContext.php And we should do some cleanup! First add a new method to features/bootstrap/LoginPageUIContext.php. Then add that tag to the Scenarios this is related to features/auth/login.feature Then add the controller like before and route app/Http/Controllers/CleanupBehatController.php Then Push and we are ready test this user with fresh state and then clean up when they are done! In this case I could test editing the Profile from one state to another. How it works… Not to hard! Now we have a workflow that can save us a ton of clicking around Production after every deployment. To begin with I add the tag @smoke to tests I considered safe for production. What does safe mean? Basically read only tests that I know will not effect that site's data. Using the @smoke tag I have a consistent way to make Suites or Scenarios as safe to run on Production. But then I take it a step further and create a way to test authenticated related state. Like make a Favorite or updating a Profile! By using some simple routes and a user I can begin to tests many other things on my long list of features I need to consider after every deploy. All of this happens with the configurability of Behat and how it allows me to manage different Profiles and Suites in the behat.yml file! Lastly I tie into the fact that Behat has hooks. I this case I tie in to the @AfterScenario by adding that to my Annotation. And I add another hooks @profile so it only runs if the Scenario has that Tag. That is it, thanks to Behat, Hooks and how easy it is to make Routes in Laravel I can easily take care of a large percentage of what otherwise would be a tedious process after every deployment! See also… Behat Docus on Hooks—http://docs.behat.org/en/v3.0/guides/3.hooks.html Saucelabs—on behat.yml setting later and you can test your site on numerous devices: https://saucelabs.com/. Summary This article gives a summary of Setting up Travis, working with .env files and Behat.  Resources for Article: Further resources on this subject: CRUD Applications using Laravel 4 [article] Laravel Tech Page [article] Eloquent… without Laravel! [article]

article-image-building-grid-system-susy
Packt
09 Aug 2016
14 min read
Save for later

Building a Grid System with Susy

Packt
09 Aug 2016
14 min read
In this article by Luke Watts, author of the book Mastering Sass, we will build a responsive grid system using the Susy library and a few custom mixins and functions. We will set a configuration map with our breakpoints which we will then loop over to automatically create our entire grid, using interpolation to create our class names. (For more resources related to this topic, see here.) Detailing the project requirements For this example, we will need bower to download Susy. After Susy has been downloaded we will only need two files. We'll place them all in the same directory for simplicity. These files will be style.scss and _helpers.scss. We'll place the majority of our SCSS code in style.scss. First, we'll import susy and our _helpers.scss at the beginning of this file. After that we will place our variables and finally our code which will create our grid system. Bower and Susy To check if you have bower installed open your command line (Terminal on Unix or CMD on Windows) and run: bower -v If you see a number like "1.7.9" you have bower. If not you will need to install bower using npm, a package manager for NodeJS. If you don't already have NodeJS installed, you can download it from: https://nodejs.org/en/. To install bower from your command line using npm you will need to run: npm install -g bower Once bower is installed cd into the root of your project and run: bower install susy This will create a directory called bower_components. Inside that you will find a folder called susy. The full path to file we will be importing in style.scss is bower_components/susy/sass/_susy.scss. However we can leave off the underscore (_) and also the extension (.scss). Sass will still load import the file just fine. In style.scss add the following at the beginning of our file: // style.scss @import 'bower_components/susy/sass/susy'; Helpers (mixins and functions) Next, we'll need to import our _helpers.scss file in style.scss. Our _helpers.scss file will contain any custom mixins or functions we'll create to help us in building our grid. In style.scss import _helpers.scss just below where we imported Susy: // style.scss @import 'bower_components/susy/sass/susy'; @import 'helpers'; Mixin: bp (breakpoint) I don't know about you, but writing media queries always seems like bit of a chore to me. I just don't like to write (min-width: 768px) all the time. So for that reason I'm going to include the bp mixin, which means instead of writing: @media(min-width: 768px) { // ... } We can simply use: @include bp(md) { // ... } First we are going to create a map of our breakpoints. Add the $breakpoints map to style.scss just below our imports: // style.scss @import 'bower_components/susy/sass/susy'; @import 'helpers'; $breakpoints: ( sm: 480px, md: 768px, lg: 980px ); Then, inside _helpers.scss we're going to create our bp mixin which will handle creating our media queries from the $breakpoints map. Here's the breakpoint (bp) mixin: @mixin bp($size: md) { @media (min-width: map-get($breakpoints, $size)) { @content; } } Here we are setting the default breakpoint to be md (768px). We then use the built in Sass function map-get to get the relevant value using the key ($size). Inside our @media rule we use the @content directive which will allows us pass any Sass or CSS directly into our bp mixin to our @media rule. The container mixin The container mixin sets the max-width of the containing element, which will be the .container element for now. 
However, it is best to use the container mixin to semantically restrict certain parts of the design to your max width instead of using presentational classes like container or row. The container mixin takes a width argument, which will be the max-width. It also automatically applies the micro-clearfix hack. This prevents the containers height from collapsing when the elements inside it are floated. I prefer the overflow: hidden method myself, but they do the same thing essentially. By default, the container will be set to max-width: 100%. However, you can set it to be any valid unit of dimension, such as 60em, 1160px, 50%, 90vw, or whatever. As long as it's a valid CSS unit it will work. In style.scss let's create our .container element using the container mixin: // style.scss .container { @include container(1160px); } The preceding code will give the following CSS output: .container { max-width: 1160px; margin-left: auto; margin-right: auto; } .container:after { content: " "; display: block; clear: both; } Due to the fact the container uses a max-width we don't need to specify different dimensions for various screen sizes. It will be 100% until the screen is above 1160px and then the max-width value will kick in. The .container:after rule is the micro-clearfix hack. The span mixin To create columns in Susy we use the span mixin. The span mixin sets the width of that element and applies a padding or margin depending on how Susy is set up. By default, Susy will apply a margin to the right of each column, but you can set it to be on the left, or to be padding on the left or right or padding or margin on both sides. Susy will do the necessary work to make everything work behind the scenes. To create a half width column in a 12 column grid you would use: .col-6 { @include span(6 of 12); } The of 12 let's Susy know this is a 12 column grid. When we define our $susy map later we can tell Susy how many columns we are using via the columns property. This means we can drop the of 12 part and simply use span(6) instead. Susy will then know we are using 12 columns unless we explicitly pass another value. The preceding SCSS will output: .col-6 { width: 49.15254%; float: left; margin-right: 1.69492%; } Notice the width and margin together would actually be 50.84746%, not 50% as you might expect. Therefor two of these column would actually be 101.69492%. That will cause the last column to wrap to the next row. To prevent this, you would need to remove the margin from the last column. The last keyword To address this, Susy uses the last keyword. When you pass this to the span mixin it lets Susy know this is the last column in a row. This removes the margin right and also floats the element in question to the right to ensure it's at the very end of the row. Let's take the previous example where we would have two col-6 elements. We could create a class of col-6-last and apply the last keyword to that span mixin: .col-6 { @include span(6 of 12); &-last { @include span(last 6 of 12) } } The preceding SCSS will output: .col-6 { width: 49.15254%; float: left; margin-right: 1.69492%; } .col-6-last { width: 49.15254%; float: right; margin-right: 0; } You can also place the last keyword at the end. This will also work: .col-6 { @include span(6 of 12); &-last { @include span(6 of 12 last) } } The $susy configuration map Susy allows for a lot of configuration through its configuration map which is defined as $susy. 
The settings in the $susy map allow us to set how wide the container should be, how many columns our grid should have, how wide the gutters are, whether those gutters should be margins or padding, and whether the gutters should be on the left, right or both sides of each column. Actually, there are even more settings available depending what type of grid you'd like to build. Let's, define our $susy map with the container set to 1160px just after our $breakpoints map: // style.scss $susy: ( container: 1160px, columns: 12, gutters: 1/3 ); Here we've set our containers max-width to be 1160px. This is used when we use the container mixin without entering a value. We've also set our grid to be 12 columns with the gutters, (padding or margin) to be 1/3 the width of a column. That's about all we need to set for our purposes, however, Susy has a lot more to offer. In fact, to cover everything in Susy would need an entirely book of its own. If you want to explore more of what Susy can do you should read the documentation at http://susydocs.oddbird.net/en/latest/. Setting up a grid system We've all used a 12 column grid which has various sizes (small, medium, large) or a set breakpoint (or breakpoints). These are the most popular methods for two reasons...it works, and it's easy to understand. Furthermore, with the help of Susy we can achieve this with less than 30 lines of Sass! Don't believe me? Let's begin. The concept of our grid system Our grid system will be similar to that of Foundation and Bootstrap. It will have 3 breakpoints and will be mobile-first. It will have a container, which will act as both .container and .row, therefore removing the need for a .row class. The breakpoints Earlier we defined three sizes in our $breakpoints map. These were: $breakpoints: ( sm: 480px, md: 768px, lg: 980px ); So our grid will have small, medium and large breakpoints. The columns naming convention Our columns will use a similar naming convention to that of Bootstrap. There will be four available sets of columns. The first will start from 0px up to the 399px (example: .col-12) The next will start from 480px up to 767px (example: .col-12-sm) The medium will start from 768px up to 979px (example: .col-12-md) The large will start from 980px (example: .col-12-lg) Having four options will give us the most flexibility. Building the grid From here we can use an @for loop and our bp mixin to create our four sets of classes. Each will go from 1 through 12 (or whatever our Susy columns property is set to) and will use the breakpoints we defined for small (sm), medium (md) and large (lg). In style.scss add the following: // style.scss @for $i from 1 through map-get($susy, columns) { .col-#{$i} { @include span($i); &-last { @include span($i last); } } } These 9 lines of code are responsible for our mobile-first set of column classes. This loops from one through 12 (which is currently the value of the $susy columns property) and creates a class for each. It also adds a class which handles removing the final columns right margin so our last column doesn't wrap onto a new line. Having control of when this happens will give us the most control. The preceding code would create: .col-1 { width: 6.38298%; float: left; margin-right: 2.12766%; } .col-1-last { width: 6.38298%; float: right; margin-right: 0; } /* 2, 3, 4, and so on up to col-12 */ That means our loop which is only 9 lines of Sass will generate 144 lines of CSS! Now let's create our 3 breakpoints. 
We'll use an @each loop to get the sizes from our $breakpoints map. This will mean if we add another breakpoint, such as extra-large (xl) it will automatically create the correct set of classes for that size. @each $size, $value in $breakpoints { // Breakpoint will go here and will use $size } Here we're looping over the $breakpoints map and setting a $size variable and a $value variable. The $value variable will not be used, however the $size variable will be set to small, medium and large for each respective loop. We can then use that to set our bp mixin accordingly: @each $size, $value in $breakpoints { @include bp($size) { // The @for loop will go here similar to the above @for loop... } } Now, each loop will set a breakpoint for small, medium and large, and any additional sizes we might add in the future will be generated automatically. Now we can use the same @for loop inside the bp mixin with one small change, we'll add a size to the class name: @each $size, $value in $breakpoints { @include bp($size) { @for $i from 1 through map-get($susy, columns) { .col-#{$i}-#{$size} { @include span($i); &-last { @include span($i last); } } } } } That's everything we need for our grid system. Here's the full stye.scss file: / /style.scss @import 'bower_components/susy/sass/susy'; @import 'helpers'; $breakpoints: ( sm: 480px, md: 768px, lg: 980px ); $susy: ( container: 1160px, columns: 12, gutters: 1/3 ); .container { @include container; } @for $i from 1 through map-get($susy, columns) { .col-#{$i} { @include span($i); &-last { @include span($i last); } } } @each $size, $value in $breakpoints { @include bp($size) { @for $i from 1 through map-get($susy, columns) { .col-#{$i}-#{$size} { @include span($i); &-last { @include span($i last); } } } } } With our bp mixin that's 45 lines of SCSS. And how many lines of CSS does that generate? Nearly 600 lines of CSS! Also, like I've said, if we wanted to create another breakpoint it would only require a change to the $breakpoint map. Then, if we wanted to have 16 columns instead we would only need to the $susy columns property. The above code would then automatically loop over each and create the correct amount of columns for each breakpoint. Testing our grid Next we need to check our grid works. We mainly want to check a few column sizes for each breakpoint and we want to be sure our last keyword is doing what we expect. I've created a simple piece of HTML to do this. I've also add a small bit of CSS to the file to correct box-sizing issues which will happen because of the additional 1px border. I've also restricted the height so text which wraps to a second line won't affect the heights. This is simply so everything remains in line so it's easy to see our widths are working. I don't recommend setting heights on elements. EVER. Instead using padding or line-height if you can to give an element more height and let the content dictate the size of the element. 
Create a file called index.html in the root of the project and inside add the following: <!doctype html> <html lang="en-GB"> <head> <meta charset="UTF-8"> <title>Susy Grid Test</title> <link rel="stylesheet" type="text/css" href="style.css" /> <style type="text/css"> *, *::before, *::after { box-sizing: border-box; } [class^="col"] { height: 1.5em; background-color: grey; border: 1px solid black; } </style> </head> <body> <div class="container"> <h1>Grid</h1> <div class="col-12 col-10-sm col-2-md col-10-lg">.col-sm-10.col-2-md.col-10-lg</div> <div class="col-12 col-2-sm-last col-10-md-last col-2-lg-last">.col-sm-2-last.col-10-md-last.col-2-lg-last</div> <div class="col-12 col-9-sm col-3-md col-9-lg">.col-sm-9.col-3-md.col-9-lg</div> <div class="col-12 col-3-sm-last col-9-md-last col-3-lg-last">.col-sm-3-last.col-9-md-last.col-3-lg-last</div> <div class="col-12 col-8-sm col-4-md col-8-lg">.col-sm-8.col-4-md.col-8-lg</div> <div class="col-12 col-4-sm-last col-8-md-last col-4-lg-last">.col-sm-4-last.col-8-md-last.col-4-lg-last</div> <div class="col-12 col-7-sm col-md-5 col-7-lg">.col-sm-7.col-md-5.col-7-lg</div> <div class="col-12 col-5-sm-last col-7-md-last col-5-lg-last">.col-sm-5-last.col-7-md-last.col-5-lg-last</div> <div class="col-12 col-6-sm col-6-md col-6-lg">.col-sm-6.col-6-md.col-6-lg</div> <div class="col-12 col-6-sm-last col-6-md-last col-6-lg-last">.col-sm-6-last.col-6-md-last.col-6-lg-last</div> </div> </body> </html> Use your dev tools responsive tools or simply resize the browser from full size down to around 320px and you'll see our grid works as expected. Summary In this article we used Susy grids as well as a simple breakpoint mixin (bp) to create a solid, flexible grid system. With just under 50 lines of Sass we generated our grid system which consists of almost 600 lines of CSS.  Resources for Article: Further resources on this subject: Implementation of SASS [article] Use of Stylesheets for Report Designing using BIRT [article] CSS Grids for RWD [article]

article-image-basic-website-using-nodejs-and-mysql-database
Packt
14 Jul 2016
5 min read
Save for later

Basic Website using Node.js and MySQL database

In this article by Fernando Monteiro author of the book Node.JS 6.x Blueprints we will understand some basic concepts of a Node.js application using a relational database (Mysql) and also try to look at some differences between Object Document Mapper (ODM) from MongoDB and Object Relational Mapper (ORM) used by Sequelize and Mysql. For this we will create a simple application and use the resources we have available as sequelize is a powerful middleware for creation of models and mapping database. We will also use another engine template called Swig and demonstrate how we can add the template engine manually. (For more resources related to this topic, see here.) Creating the baseline applications The first step is to create another directory, I'll use the root folder. Create a folder called chapter-02. Open your terminal/shell on this folder and type the express command: express –-git Note that we are using only the –-git flag this time, we will use another template engine but we will install it manually. Installing Swig template Engine The first step to do is change the default express template engine to use Swig, a pretty simple template engine very flexible and stable, also offers us a syntax very similar to Angular which is denoting expressions just by using double curly brackets {{ variableName }}. More information about Swig can be found on the official website at: http://paularmstrong.github.io/swig/docs/ Open the package.json file and replace the jade line for the following: "swig": "^1.4.2" Open your terminal/shell on project folder and type: npm install Before we proceed let's make some adjust to app.js, we need to add the swig module. Open app.js and add the following code, right after the var bodyParser = require('body-parser'); line: var swig = require('swig'); Replace the default jade template engine line for the following code: var swig = new swig.Swig(); app.engine('html', swig.renderFile); app.set('view engine', 'html'); Refactoring the views folder Let's change the views folder to the following new structure: views pages/ partials/ Remove the default jade files form views. Create a file called layout.html inside pages folder and place the following code: <!DOCTYPE html> <html> <head> </head> <body> {% block content %} {% endblock %} </body> </html> Create a index.html inside the views/pages folder and place the following code: {% extends 'layout.html' %} {% block title %}{% endblock %} {% block content %} <h1>{{ title }}</h1> Welcome to {{ title }} {% endblock %} Create a error.html page inside the views/pages folder and place the following code: {% extends 'layout.html' %} {% block title %}{% endblock %} {% block content %} <div class="container"> <h1>{{ message }}</h1> <h2>{{ error.status }}</h2> <pre>{{ error.stack }}</pre> </div> {% endblock %} We need to adjust the views path on app.js, replace the code on line 14 for the following code: // view engine setup app.set('views', path.join(__dirname, 'views/pages')); At this time we completed the first step to start our MVC application. In this example we will use the MVC pattern in its full meaning, Model, View, Controller. Creating controllers folder Create a folder called controllers inside the root project folder. 
Create a index.js inside the controllers folder and place the following code: // Index controller exports.show = function(req, res) { // Show index content res.render('index', { title: 'Express' }); }; Edit the app.js file and replace the original index route app.use('/', routes); with the following code: app.get('/', index.show); Add the controller path to app.js on line 9, replace the original code, with the following code: // Inject index controller var index = require('./controllers/index'); Now it's time to get if all goes as expected, we run the application and check the result. Type on your terminal/shell the following command: npm start Check with the following URL: http://localhost:3000, you'll see the welcome message of express framework. Removing the default routes folder Remove the routes folder and its content. Remove the user route from the app.js, after the index controller and on line 31. Adding partials files for head and footer Inside views/partials create a new file called head.html and place the following code: <meta charset="utf-8"> <title>{{ title }}</title> <link rel='stylesheet' href='https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0-alpha.2/css/bootstrap.min.css'> <link rel="stylesheet" href="/stylesheets/style.css"> Inside views/partials create a file called footer.html and place the following code: <script src='https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.1/jquery.min.js'></script> <script src='https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0-alpha.2/js/bootstrap.min.js'></script> Now is time to add the partials file to layout.html page using the include tag. Open layout.html and add the following highlighted code: <!DOCTYPE html> <html> <head> {% include "../partials/head.html" %} </head> <body> {% block content %} {% endblock %} {% include "../partials/footer.html" %} </body> </html> Finally we are prepared to continue with our project, this time our directories structure looks like the following image: Folder structure Summaray In this article, we are discussing the basic concept of Node.js and Mysql database and we also saw how to refactor express engine template and use another resource like Swig template library to build a basic website. Resources for Article: Further resources on this subject: Exception Handling in MySQL for Python [article] Python Scripting Essentials [article] Splunk's Input Methods and Data Feeds [article]
article-image-you-begin
Packt
13 Jul 2016
14 min read
Save for later

Before You Begin

In this article by Ashley Chiasson, the author of the book Mastering Articulate Storyline, provides you with an introduction to the purpose of this book, best practices related to e-learning product development. In this article, we will cover the following topics: Pushing Articulate Storyline to the limit Best practices How to be mindful of reusability Methods for organizing your project The differences between storyboarding and rapid development Ways of streamlining your development (For more resources related to this topic, see here.) Pushing Articulate Storyline to the limit The purpose of this book is really to get you comfortable with pushing Articulate Storyline to its limits. Doing this may also broaden your imagination, allowing you to push your creativity to its limits. There are so many things you can do within Storyline, and a lot of those features, interactions, or functions are overlooked because they just aren't used all that often. Often times, the basic functionality overshadows the more advanced functions because they're easier, they often address the need, and they take less time to learn. That's understandable, but this book is going to open your mind to many more things possible within this tool. You'll get excited, frustrated, excited again, and probably frustrated a few more times, but with all of the practical activities for you to follow along with (and or reverse engineer), you'll be mastering Articulate Storyline and pushing it to its limits within no time! If you don't quite get one of the concepts explained, don't worry. You'll always have access to this book and the activity downloads as a handy reference or refresher. Best practices Before you get too far into your development, it's important to take some steps to streamline your approach by establishing best practices—doing this will help you become more organized and efficient. Everyone has their own process, so this is by no means a prescribed format for the proper way of doing things. These are just some recommendations, from personal experience, that have proven effective as an e-learning developer. Please note that these best practices are not necessarily Storyline-related, but are best practices to consider ahead of development within any e-learning project. Your best practices will likely be project-specific in terms of how your clients or how your organization's internal processes work. Sometimes you'll be provided with storyboard ahead of development and sometimes you'll be expected to rapidly develop. Sometimes you'll be provided with all multimedia ahead of development and sometimes you'll be provided with multimedia after an alpha review. You may want to do a content dump at the beginning of your development process or you may want to work through each slide from start until finish before moving on. Through experience and observation of what other developers are doing, you will learn how to define and adapt your best practices. When a new project comes along, it's always a good idea to employ some form of organization. There are many great reasons for this, some of which include being mindful of reusability, maintaining and organizing project and file structure, and streamlining your development process. This article aims to provide you with as much information as necessary to ensure that you are effectively organizing your projects for enhanced efficiency and an understanding of why these methods should always be considered best practices. 
How to be mindful of reusability When I think about reusability in e-learning, I think about objects and content that can be reused in a variety of contexts. Developers often run into this when working on large projects or in industries that involve trade-specific content. When working on multiple projects within one sector, you may come across assets used previously in one course (for example, a 3D model of an aircraft) that may be reused in another course of the same content base. Being able to reuse content and/or assets can come in handy as it can save you resources in the long run. Reusing previously established assets (if permitted to do so, of course) would reduce the amount of development time various departments and/or individuals need to spend. Best practices for reusability might include creating your own content repository and defining a file naming convention that will make it easy for you to quickly find what you're looking for. If you're extra savvy, you can create a metadata-coded database, but that might require a lot more effort than you have available. While it does take extra time to either come up with a file naming convention or apply metadata tagging to all assets within your repository, the goal is to make your life easier in the long run. Much like the dreaded administrative tasks required of small business owners, it's not the most sought-after task, but it's a necessary one, especially if you truly want to optimize efficiency! Within Articulate Storyline, you may want to maintain a repository of themes and interactions as you can use elements of these assets for future development and they can save you a lot of time. Most projects, in the early stages, require an initial prototype for the client to sign off on the general look and feel. In this prototyping phase, having a repository of themes and interactions can really make the process a lot smoother because you can call on previous work in order to easily facilitate the elemental design of a new project. Storyline allows you to import content from many sources (for example, PowerPoint, Articulate Engage, Articulate Quizmaker, and more), so you don't feel limited to just reusing Storyline interactions and/or themes. Just structure your repository in an organized manner and you will be able to easily locate the files and file types that you're looking to use at a later date. Another great thing Articulate Storyline is good for when it comes to reusability is Question Banks! Most courses contain questions, knowledge checks, assessments, or whatever you want to call them, but all too seldom do people think about compiling these questions in one neat area for reuse later on. Instead, people often add new question slides, add the question, and go on their merry development way. If you're one of those people, you need to STOP. Your life will be entirely changed by the concept of question banks—if not entirely, at least a little bit, or at least the part of your life that dabbles in development will be changed in some small way. Question banks allow you to create a bank of questions (who would have thought) and call on these questions at any time for placement within your story—reusability at its finest, at least in Storyline. Methods for organizing your project Organizing your project is a necessary evil. Surely there is someone out there who loves this process, but for others who just want to develop all day and all night, there may be a smaller emphasis placed on organization. 
However, you can take some simple steps to organize your project that can be reused for future projects. Within Storyline, the organizational emphasis of this article will be placed on using Story View and optimizing the use of scenes. These are two elements of Storyline that, depending on the size of your project, can make a world of difference when it comes to making sense of all the content you've authored in terms of making the structure of your content more palatable. Using the Story View Story View is such a great feature of Storyline! It provides you with a bird's eye view of your project, or story, and essentially shows you a visual blueprint of all scenes and slides. This is particularly helpful in projects that involve a lot of branching. Instead of seeing the individual parts, you're seeing the parts as they represent the whole—the Gestalt psychology would be proud! You can also use Story View to plan out the movement of existing scenes or slides if content isn't lining up quite the way you want it to: Optimizing scene use Scenes play a very big role in maintaining organization within your story. They serve to group slides into smaller segments of the entire story and are typically defined using logical breaks. However, it's all up to you how you decide to group your slides. If the story you're working on consists of multiple topics or modules, each topic or module would logically become a new scene. Visually, scenes work in tandem with Story View in that while you're in Story View, you can clearly see the various scenes and move things around appropriately. Functionally, scenes serve to create submenus in the main Storyline menu, but you can change this if you don't want to see each scene delineated in the menu. From an organization and control perspective, scenes can help you reel in unwieldy and overwhelming content. This particularly comes in handy with large courses, where you can easily lose your place when trying to track down a specific slide of a scene, for example, in a sea of 150 slides. In this sense, scenes allow you to chunk content into more manageable scenes within your story and will likely allow you to save on development and revision time. Using scenes will also help when it comes to previewing your story. Instead of having to wait to load 150 slides each time you preview, you can choose to preview a scene and will only have to wait for the slides in that scene to load—perhaps 15 slides of the entire course instead of 150. Scenes really are a magical thing! Asset management Asset management is just what it sounds like—managing your assets. Now, your assets may come in many forms, for example, media assets (your draft and/or completed images/video/audio), customer furnished assets (files provided by the client, which could be raw images/video/audio/PowerPoint/Word documents, and so on.), or content output (outputs from whichever authoring tool you're using). If you've worked on large projects, you will likely relate to how unwieldy these assets can become if you don't have a system in place for keeping everything organized. This is where the management element comes into play. Structuring your folders Setting up a consistent folder structure is really important when it comes to managing your assets. Structuring your folders may seem like a daunting administrative task, but once you determine a structure that works well for you and your projects, you can copy the structure for each project. 
So yeah, there is a little bit of up front effort, but the headache it will save you in the long run when it comes to tracking down assets for reuse is worth the effort! Again, this folder structure is in no way prescribed, but it is a recommendation, and one that has worked well. It looks something like the following: It may look overwhelming, but it's really not that bad. There are likely more elements accounted here than you may need for your project, but all main elements are included, and you can customize it as you see fit. This is how the folder structure breaks down: Project Folder: 100 Project Management Depending on how large the project is, this folder may have subfolders, for example: Meeting Minutes Action Tracking Risk Management Contracts Invoices 200 Development This folder typically contains subfolders related to my development, for example: Client-Furnished Information (CFI) Scripts and Storyboards Scripts Audio Narration Storyboards Media Video Audio Draft Audio Final Audio Images Flash Output Quality Assurance 300 Client This folder will include anything sent to the client for review, for example: Delivered Review Comments Final Within these folders, there may be other subfolders, but this is the general structure that has proven effective for me. When it comes to filenames, you may wish to follow a file naming convention dictated by the client or follow an internal file naming convention, which indicates the project, type of media, asset number, and version number, for example, PROJECT_A_001_01. If there are multiple courses for one project, you may also want to add an arbitrary course number to keep tabs on which asset belongs to which course. Once a file naming convention has been determined, these filenames will be managed within a spreadsheet, housed within the main 200>Media folder. The basic goal of this recommended folder structure is to organize your course assets and break them into three groups to further help with the organization. If this folder structure sounds like it might be functional for your purposes, go ahead and download a ready-made version of the folder structure. Storyboarding and rapid prototyping Storyboarding and rapid prototyping will likely make their way into your development glossary, if they haven't already, so they're important concepts to discuss when it comes to streamlining your development. Through experience, you'll learn how each of these concepts can help you become more efficient, and this section will discuss some benefits and detriments of both. Storyboarding is a process wherein the sequence of an e-learning project is laid out visually or textually. This process allows instructional designers to layout the e-learning project to indicate screens, topics, teaching points, onscreen text, and media descriptions. However, storyboards may not be limited to just those elements. There are many variations. However, the previously mentioned elements are most commonly represented within a storyboard. Other elements may include audio narration script, assessment items, high-level learning objectives, filenames, source/reference images, or screenshots illustrating the anticipated media asset or screen to be developed. The good thing about storyboarding is that it allows you to organize the content and provides documentation that may be reviewed prior to entry into an authoring environment. 
Storyboarding provides subject matter experts with a great opportunity for ironing out textual content to ensure accuracy, and can help developers in terms of reducing small text changes once in the authoring environment. These small changes are just that, small, but they also add up quickly and can quickly throw a wrench into your well-oiled, efficient, development machine. Storyboarding also has its downsides. It is an extra step in the development process and may be perceived, by potential clients, as an additional and unnecessary expense. Because storyboards do not depict the final product, reviewers may have difficulty in reviewing content as they cannot contextualize without being able to see the final product. This can be especially true when it comes to reviewing a storyboard involving complex branching scenarios. Rapid prototyping on the other hand involves working within the authoring environment, in this case Articulate Storyline, to develop your e-learning project, slide by slide. This may occur in developing an initial prototype, but may also occur throughout the lifecycle of the project as a means for eliminating the step of storyboarding from the development process. With rapid prototyping, reviewers have added context of visuals and functionality. They are able to review a proposed version of the end product, and as such, their review comments may become more streamlined and their review may take less time to conduct. However, reviewers may also get overloaded by visual stimuli, which may hamper their ability to review for content accuracy. Additionally, rapid prototyping may become less rapid when it comes to revising complex interactions. In both situations, there are clear advantages and disadvantages, so a best practice should be to determine an appropriate way ahead with regard to development and understand which process may best suit the project for which you are authoring. Streamlining your development Storyline provides you with so many ways to streamline your development. A sampling of topics discussed includes the following: Setting up auto-save Setting up defaults Keyboard shortcuts Dockable panels Using the format painter Using the eyedropper Cue points Duplicating objects Naming objects Summary This article introduced you to the concept of pushing Articulate Storyline 2 to its limits, provided you with some tips and tricks when it comes to best practices and being mindful of reusability, identified a functional folder structure and explained the importance that organization will play in your Storyline development, explained the difference between storyboarding and rapid prototyping, and gave you a taste of some topics that may help you streamline your development process. You are now armed with all of my best advice for staying productive and organized, and you should be ready to start a new Storyline project! Resources for Article: Further resources on this subject: Data Science with R [article] Sizing and Configuring your Hadoop Cluster [article] Creating Your Own Theme—A Wordpress Tutorial [article]

article-image-working-spring-tag-libraries
Packt
13 Jul 2016
26 min read
Save for later

Working with Spring Tag Libraries

In this article by Amuthan G, the author of the book Spring MVC Beginners Guide - Second Edition, you are going to learn more about the various tags that are available as part of the Spring tag libraries. (For more resources related to this topic, see here.) After reading this article, you will have a good idea about the following topics: JavaServer Pages Standard Tag Library (JSTL) Serving and processing web forms Form-binding and whitelisting Spring tag libraries JavaServer Pages Standard Tag Library JavaServer Pages (JSP) is a technology that lets you embed Java code inside HTML pages. This code can be inserted by means of <% %> blocks or by means of JSTL tags. To insert Java code into JSP, the JSTL tags are generally preferred, since tags adapt better to their own tag representation of HTML, so your JSP pages will look more readable. JSP even lets you  define your own tags; you must write the code that actually implements the logic of your own tags in Java. JSTL is just a standard tag library provided by Oracle. We can add a reference to the JSTL tag library in our JSP pages as follows: <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%> Similarly, Spring MVC also provides its own tag library to develop Spring JSP views easily and effectively. These tags provide a lot of useful common functionality such as form binding, evaluating errors and outputting messages, and more when we work with Spring MVC. In order to use these, Spring MVC has provided tags in our JSP pages. We must add a reference to that tag library in our JSP pages as follows: <%@taglib prefix="form" uri="http://www.springframework.org/tags/form" %> <%@taglib prefix="spring" uri="http://www.springframework.org/tags" %> These taglib directives declare that our JSP page uses a set of custom tags related to Spring and identify the location of the library. It also provides a means to identify the custom tags in our JSP page. In the taglib directive, the uri attribute value resolves to a location that the servlet container understands and the prefix attribute informs which bits of markup are custom actions. Serving and processing forms In Spring MVC, the process of putting a HTML form element's values into model data is called form binding. The following line is a typical example of how we put data into the Model from the Controller: model.addAttribute(greeting,"Welcome") Similarly, the next line shows how we retrieve that data in the View using a JSTL expression: <p> ${greeting} </p> But what if we want to put data into the Model from the View? How do we retrieve that data in the Controller? For example, consider a scenario where an admin of our store wants to add new product information to our store by filling out and submitting a HTML form. How can we collect the values filled out in the HTML form elements and process them in the Controller? This is where the Spring tag library tags help us to bind the HTML tag element's values to a form backing bean in the Model. Later, the Controller can retrieve the formbacking bean from the Model using the @ModelAttribute (org.springframework.web.bind.annotation.ModelAttribute) annotation. The form backing bean (sometimes called the form bean) is used to store form data. We can even use our domain objects as form beans; this works well when there's a close match between the fields in the form and the properties in our domain object. Another approach is creating separate classes for form beans, which is sometimes called Data Transfer Objects (DTO). 
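As a quick, hedged illustration of that second approach, a separate form bean for the product form might look something like the following sketch. This ProductForm class is hypothetical and is not part of the book's code; the exercise that follows binds the form directly to the Product domain object instead:

import java.math.BigDecimal;

// Hypothetical DTO that only carries form data; a controller or service
// would copy these values onto the Product domain object before saving
public class ProductForm {

    private String productId;
    private String name;
    private BigDecimal unitPrice;
    private String description;

    public String getProductId() { return productId; }
    public void setProductId(String productId) { this.productId = productId; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public BigDecimal getUnitPrice() { return unitPrice; }
    public void setUnitPrice(BigDecimal unitPrice) { this.unitPrice = unitPrice; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
}

Keeping a DTO like this separate from the domain object is slightly more work, but it gives you a natural place to control exactly which request parameters a form may populate.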
Time for action – serving and processing forms The Spring tag library provides some special <form> and <input> tags, which are more or less similar to HTML form and input tags, but have some special attributes to bind form elements’ data with the form backed bean. Let's create a Spring web form in our application to add new products to our product list: Open our ProductRepository interface and add one more method declaration to it as follows: void addProduct(Product product); Add an implementation for this method in the InMemoryProductRepository class as follows: @Override public void addProduct(Product product) { String SQL = "INSERT INTO PRODUCTS (ID, " + "NAME," + "DESCRIPTION," + "UNIT_PRICE," + "MANUFACTURER," + "CATEGORY," + "CONDITION," + "UNITS_IN_STOCK," + "UNITS_IN_ORDER," + "DISCONTINUED) " + "VALUES (:id, :name, :desc, :price, :manufacturer, :category, :condition, :inStock, :inOrder, :discontinued)"; Map<String, Object> params = new HashMap<>(); params.put("id", product.getProductId()); params.put("name", product.getName()); params.put("desc", product.getDescription()); params.put("price", product.getUnitPrice()); params.put("manufacturer", product.getManufacturer()); params.put("category", product.getCategory()); params.put("condition", product.getCondition()); params.put("inStock", product.getUnitsInStock()); params.put("inOrder", product.getUnitsInOrder()); params.put("discontinued", product.isDiscontinued()); jdbcTempleate.update(SQL, params); } Open our ProductService interface and add one more method declaration to it as follows: void addProduct(Product product); And add an implementation for this method in the ProductServiceImpl class as follows: @Override public void addProduct(Product product) { productRepository.addProduct(product); } Open our ProductController class and add two more request mapping methods as follows: @RequestMapping(value = "/products/add", method = RequestMethod.GET) public String getAddNewProductForm(Model model) { Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); return "addProduct"; } @RequestMapping(value = "/products/add", method = RequestMethod.POST) public String processAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) { productService.addProduct(newProduct); return "redirect:/market/products"; } Finally, add one more JSP View file called addProduct.jsp under the  src/main/webapp/WEB-INF/views/ directory and add the following tag reference declaration as the very first line in it: <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%> <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> Now add the following code snippet under the tag declaration line and save addProduct.jsp. 
Note that I skipped some <form:input> binding tags for some of the fields of the product domain object, but I strongly encourage you to add binding tags for the skipped fields while you are trying out this exercise: <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css"> <title>Products</title> </head> <body> <section> <div class="jumbotron"> <div class="container"> <h1>Products</h1> <p>Add products</p> </div> </div> </section> <section class="container"> <form:form method="POST" modelAttribute="newProduct" class="form-horizontal"> <fieldset> <legend>Add new product</legend> <div class="form-group"> <label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label> <div class="col-lg-10"> <form:input id="productId" path="productId" type="text" class="form:input-large"/> </div> </div> <!-- Similarly bind <form:input> tag for name,unitPrice,manufacturer,category,unitsInStock and unitsInOrder fields--> <div class="form-group"> <label class="control-label col-lg-2" for="description">Description</label> <div class="col-lg-10"> <form:textarea id="description" path="description" rows = "2"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="discontinued">Discontinued</label> <div class="col-lg-10"> <form:checkbox id="discontinued" path="discontinued"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="condition">Condition</label> <div class="col-lg-10"> <form:radiobutton path="condition" value="New" />New <form:radiobutton path="condition" value="Old" />Old <form:radiobutton path="condition" value="Refurbished" />Refurbished </div> </div> <div class="form-group"> <div class="col-lg-offset-2 col-lg-10"> <input type="submit" id="btnAdd" class="btn btn-primary" value ="Add"/> </div> </div> </fieldset> </form:form> </section> </body> </html> Now run our application and enter the URL: http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add product information as shown in the following screenshot:Add a products web form Now enter all the information related to the new product that you want to add and click on the Add button. You will see the new product added in the product listing page under the URL http://localhost:8080/webstore/market/products. What just happened? In the whole sequence, steps 5 and 6 are very important steps that need to be observed carefully. Whatever was mentioned prior to step 5 was very familiar to you I guess. Anyhow, I will give you a brief note on what we did in steps 1 to 4. In step 1, we just created an addProduct method declaration in our ProductRepository interface to add new products. And in step 2, we just implemented the addProduct method in our InMemoryProductRepository class. Steps 3 and 4 are just a Service layer extension for ProductRepository. In step 3, we declared a similar method addProduct in our ProductService and implemented it in step 4 to add products to the repository via the productRepository reference. 
Okay, coming back to the important step; what we did in step 5 was nothing but adding two request mapping methods, namely getAddNewProductForm and processAddNewProductForm: @RequestMapping(value = "/products/add", method = RequestMethod.GET) public String getAddNewProductForm(Model model) { Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); return "addProduct"; } @RequestMapping(value = "/products/add", method = RequestMethod.POST) public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) { productService.addProduct(productToBeAdded); return "redirect:/market/products"; } If you observe those methods carefully, you will notice a peculiar thing, that is, both the methods have the same URL mapping value in their @RequestMapping annotations (value = "/products/add"). So if we enter the URL http://localhost:8080/webstore/market/products/add in the browser, which method will Spring MVC  map that request to? The answer lies in the second attribute of the @RequestMapping annotation (method = RequestMethod.GET and method = RequestMethod.POST). Yes if you look again, even though both methods have the same URL mapping, they differ in the request method. So what is happening behind the screen is when we enter the URL http://localhost:8080/webstore/market/products/add in the browser, it is considered as a GET request, so Spring MVC will map that request to the getAddNewProductForm method. Within that method, we simply attach a new empty Product domain object with the model, under the attribute name newProduct. So in the  addproduct.jsp View, we can access that newProduct Model object: Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); Before jumping into the processAddNewProductForm method, let's review the addproduct.jsp View file for some time, so that you understand the form processing flow without confusion. In addproduct.jsp, we just added a <form:form> tag from Spring's tag library: <form:form modelAttribute="newProduct" class="form-horizontal"> Since this special <form:form> tag is coming from a Spring tag library, we need to add a reference to that tag library in our JSP file; that's why we added the following line at the top of the addProducts.jsp file in step 6: <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> In the Spring <form:form> tag, one of the important attributes is modelAttribute. In our case, we assigned the value newProduct as the value of the modelAttribute in the <form:form> tag. If you remember correctly, you can see that this value of the modelAttribute and the attribute name we used to store the newProduct object in the Model from our getAddNewProductForm method are the same. So the newProduct object that we attached to the model from the Controller method (getAddNewProductForm) is now bound to the form. This object is called the form backing bean in Spring MVC. Okay now you should look at every <form:input> tag inside the <form:form>tag. You can observe a common attribute in every tag. That attribute is path: <form:input id="productId" path="productId" type="text" class="form:input-large"/> The path attribute just indicates the field name that is relative to form backing bean. So the value that is entered in this input box at runtime will be bound to the corresponding field of the form bean. Okay, now it’s time to come back and review our processAddNewProductForm method. When will this method be invoked? 
This method will be invoked once we press the submit button on our form. Yes, since every form submission is considered a POST request, this time the browser will send a POST request to the same URL http://localhost:8080/webstore/products/add. So this time the processAddNewProductForm method will get invoked since it is a POST request. Inside the processAddNewProductForm method, we simply are calling the addProduct service method to add the new product to the repository: productService.addProduct(productToBeAdded); But the interesting question here is how come the productToBeAdded object is populated with the data that we entered in the form? The answer lies in the @ModelAttribute (org.springframework.web.bind.annotation.ModelAttribute) annotation. Notice the method signature of the processAddNewProductForm method: public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) Here if you look at the value attribute of the @ModelAttribute annotation, you can observe a pattern. Yes, the @ModelAttribute annotation's value and the value of the modelAttribute from the <form:form> tag are the same. So Spring MVC knows that it should assign the form bounded newProduct object to the processAddNewProductForm method's parameter productToBeAdded. The @ModelAttribute annotation is not only used to retrieve a object from the Model, but if we want we can even use the @ModelAttribute annotation to add objects to the Model. For instance, we can even rewrite our getAddNewProductForm method to something like the following with using the @ModelAttribute annotation: @RequestMapping(value = "/products/add", method = RequestMethod.GET) public String getAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) { return "addProduct"; } You can see that we haven't created a new empty Product domain object and attached it to the model. All we did was added a parameter of the type Product and annotated it with the @ModelAttribute annotation, so Spring MVC will know that it should create an object of Product and attach it to the model under the name newProduct. One more thing that needs to be observed in the processAddNewProductForm method is the logical View name it is returning: redirect:/market/products. So what we are trying to tell Spring MVC by returning the string redirect:/market/products? To get the answer, observe the logical View name string carefully; if we split this string with the ":" (colon) symbol, we will get two parts. The first part is the prefix redirect and the second part is something that looks like a request path: /market/products. So, instead of returning a View name, we are simply instructing Spring to issue a redirect request to the request path /market/products, which is the request path for the list method of our ProductController. So after submitting the form, we list the products using the list method of ProductController. As a matter of fact, when we return any request path with the redirect: prefix from a request mapping method, Spring will use a special View object called RedirectView (org.springframework.web.servlet.view.RedirectView) to issue the redirect command behind the screen. Instead of landing on a web page after the successful submission of a web form, we are spawning a new request to the request path /market/products with the help of RedirectView. This pattern is called redirect-after-post, which is a common pattern to use with web-based forms. We are using this pattern to avoid double submission of the same form. 
Sometimes after submitting the form, if we press the browser's refresh button or back button, there are chances to resubmit the same form. This behavior is called double submission. Have a go hero – customer registration form It is great that we created a web form to add new products to our web application under the URL http://localhost:8080/webstore/market/products/add. Why don't you create a customer registration form in our application to register a new customer in our application? Try to create a customer registration form under the URL http://localhost:8080/webstore/customers/add. Customizing data binding In the last section, you saw how to bind data submitted by a HTML form to a form backing bean. In order to do the binding, Spring MVC internally uses a special binding object called WebDataBinder (org.springframework.web.bind.WebDataBinder). WebDataBinder extracts the data out of the HttpServletRequest object and converts it to a proper data format, loads it into a form backing bean, and validates it. To customize the behavior of data binding, we can initialize and configure the WebDataBinder object in our Controller. The @InitBinder (org.springframework.web.bind.annotation.InitBinder) annotation helps us to do that. The @InitBinder annotation designates a method to initialize WebDataBinder. Let's look at a practical use of customizing WebDataBinder. Since we are using the actual domain object itself as form backing bean, during the form submission there is a chance for security vulnerabilities. Because Spring automatically binds HTTP parameters to form bean properties, an attacker could bind a suitably-named HTTP parameter with form properties that weren't intended for binding. To address this problem, we can explicitly tell Spring which fields are allowed for form binding. Technically speaking, the process of explicitly telling which fields are allowed for binding is called whitelisting binding in Spring MVC; we can do whitelisting binding using WebDataBinder. Time for action – whitelisting form fields for binding In the previous exercise while adding a new product, we bound every field of the Product domain in the form, but it is meaningless to specify unitsInOrder and discontinued values during the addition of a new product because nobody can make an order before adding the product to the store and similarly discontinued products need not be added in our product list. So we should not allow these fields to be bounded with the form bean while adding a new product to our store. However all the other fields of the Product domain object to be bound. 
Let's see how to this with the following steps: Open our ProductController class and add a method as follows: @InitBinder public void initialiseBinder(WebDataBinder binder) { binder.setAllowedFields("productId", "name", "unitPrice", "description", "manufacturer", "category", "unitsInStock", "condition"); } Add an extra parameter of the type BindingResult (org.springframework.validation.BindingResult) to the processAddNewProductForm method as follows: public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded, BindingResult result) In the same processAddNewProductForm method, add the following condition just before the line saving the productToBeAdded object: String[] suppressedFields = result.getSuppressedFields(); if (suppressedFields.length > 0) { throw new RuntimeException("Attempting to bind disallowed fields: " + StringUtils.arrayToCommaDelimitedString(suppressedFields)); } Now run our application and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add new product information. Fill out all the fields, particularly Units in order and discontinued. Now press the Add button and you will see a HTTP status 500 error on the web page as shown in the following image: The add product page showing an error for disallowed fields Now open addProduct.jsp from /Webshop/src/main/webapp/WEB-INF/views/ in your project and remove the input tags that are related to the Units in order and discontinued fields. Basically, you need to remove the following block of code: <div class="form-group"> <label class="control-label col-lg-2" for="unitsInOrder">Units In Order</label> <div class="col-lg-10"> <form:input id="unitsInOrder" path="unitsInOrder" type="text" class="form:input-large"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="discontinued">Discontinued</label> <div class="col-lg-10"> <form:checkbox id="discontinued" path="discontinued"/> </div> </div> Now run our application again and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add a new product, but this time without the Units in order and Discontinued fields. Now enter all information related to the new product and click on the Add button. You will see the new product added in the product listing page under the URL http://localhost:8080/webstore/market/products. What just happened? Our intention was to put some restrictions on binding HTTP parameters with the form baking bean. As we already discussed, the automatic binding feature of Spring could lead to a potential security vulnerability if we used a domain object itself as form bean. So we have to explicitly tell Spring MVC which are fields are allowed. That's what we are doing in step 1. The @InitBinder annotation designates a Controller method as a hook method to do some custom configuration regarding data binding on the WebDataBinder. And WebDataBinder is the thing that is doing the data binding at runtime, so we need to tell which fields are allowed to bind to WebDataBinder. If you observe our initialiseBinder method from ProductController, it has a parameter called binder, which is of the type WebDataBinder. We are simply calling the setAllowedFields method on the binder object and passing the field names that are allowed for binding. Spring MVC will call this method to initialize WebDataBinder before doing the binding since it has the @InitBinder annotation. 
WebDataBinder also has a method called setDisallowedFields to strictly specify which fields are disallowed for binding . If you use this method, Spring MVC allows any HTTP request parameters to be bound except those fields names specified in the setDisallowedFields method. This is called blacklisting binding. Okay, we configured which the allowed fields are for binding, but we need to verify whether any fields other than those allowed are bound with the form baking bean. That's what we are doing in steps 2 and 3. We changed processAddNewProductForm by adding one extra parameter called result, which is of the type BindingResult. Spring MVC will fill this object with the result of the binding. If any attempt is made to bind any fields other than the allowed fields, the BindingResult object will have a getSuppressedFields count greater than zero. That's why we were checking the suppressed field count and throwing a RuntimeException exception: if (suppressedFields.length > 0) { throw new RuntimeException("Attempting to bind disallowed fields: " + StringUtils.arrayToCommaDelimitedString(suppressedFields)); } Here the static class StringUtils comes from org.springframework.util.StringUtils. We want to ensure that our binding configuration is working—that's why we run our application without changing the View file addProduct.jsp in step 4. And as expected, we got the HTTP status 500 error saying Attempting to bind disallowed fields when we submit the Add products form with the unitsInOrder and discontinued fields filled out. Now we know our binder configuration is working, we could change our View file so not to bind the disallowed fields—that's what we were doing in step 6; just removing the input field elements that are related to the disallowed fields from the addProduct.jsp file. After that, our added new products page just works fine, as expected. If any of the outside attackers try to tamper with the POST request and attach a HTTP parameter with the same field name as the form baking bean, they will get a RuntimeException. The whitelisting is just an example of how can we customize the binding with the help of WebDataBinder. But by using WebDataBinder, we can perform many more types of binding customization as well. For example, WebDataBinder internally uses many PropertyEditor (java.beans.PropertyEditor) implementations to convert the HTTP request parameters to the target field of the form backing bean. We can even register custom PropertyEditor objects with WebDataBinder to convert more complex data types. For instance, look at the following code snippet that shows how to register the custom PropertyEditor to convert a Date class: @InitBinder public void initialiseBinder (WebDataBinder binder) { DateFormat dateFormat = new SimpleDateFormat("MMM d, YYYY"); CustomDateEditor orderDateEditor = new CustomDateEditor(dateFormat, true); binder.registerCustomEditor(Date.class, orderDateEditor); } There are many advanced configurations we can make with WebDataBinder in terms of data binding, but for a beginner level, we don’t need to go that deep. 
Pop quiz – data binding Considering the following data binding customization and identify the possible matching field bindings: @InitBinder public void initialiseBinder(WebDataBinder binder) { binder.setAllowedFields("unit*"); } NoOfUnit unitPrice priceUnit united Externalizing text messages So far in all our View files, we hardcoded text values for all the labels; for instance, take our addProduct.jsp file—for the productId input tag, we have a label tag with the hardcoded text value as Product id: <label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label> Externalizing these texts from a View file into a properties file will help us to have a single centralized control for all label messages. Moreover, it will help us to make our web pages ready for internationalization. But in order to perform internalization, we need to externalize the label messages first. So now you are going to see how to externalize locale-sensitive text messages from a web page to a property file. Time for action – externalizing messages Let's externalize the labels texts in our addProduct.jsp: Open our addProduct.jsp file and add the following tag lib reference at the top: <%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %> Change the product ID <label> tag's value ID to <spring:message code="addProdcut.form.productId.label"/>. After changing your product ID <label> tag's value, it should look as follows: <label class="control-label col-lg-2 col-lg-2" for="productId"> <spring:message code="addProduct.form.productId.label"/> </label> Create a file called messages.properties under /src/main/resources in your project and add the following line to it: addProduct.form.productId.label = New Product ID Now open our web application context configuration file WebApplicationContextConfig.java and add the following bean definition to it: @Bean public MessageSource messageSource() { ResourceBundleMessageSource resource = new ResourceBundleMessageSource(); resource.setBasename("messages"); return resource; } Now run our application again and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see the added product page with the product ID label showing as New Product ID. What just happened? Spring MVC has a special a tag called <spring:message> to externalize texts from JSP files. In order to use this tag, we need to add a reference to a Spring tag library—that's what we did in step 1. We just added a reference to the Spring tag library in our addProduct.jsp file: <%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %> In step 2, we just used that tag to externalize the label text of the product ID input tag: <label class="control-label col-lg-2 col-lg-2" for="productId"> <spring:message code="addProduct.form.productId.label"/> </label> Here, an important thing you need to remember is the code attribute of <spring:message> tag, we have assigned the value addProduct.form.productId.label as the code for this <spring:message> tag. This code attribute is a kind of key; at runtime Spring will try to read the corresponding value for the given key (code) from a message source property file. We said that Spring will read the message’s value from a message source property file, so we need to create that file property file. That's what we did in step 3. We just created a property file with the name messages.properties under the resource directory. 
Inside that file, we just assigned the label text value to the message tag code: addProduct.form.productId.label = New Product ID Remember for demonstration purposes I just externalized a single label, but a typical web application will have externalized messages  for almost all tags; in that case messages messages.properties file will have many code-value pair entries. Okay, we created a message source property file and added the <spring:message> tag in our JSP file, but to connect these two, we need to create one more Spring bean in our web application context for the org.springframework.context.support.ResourceBundleMessageSource class with the name messageSource—we did that in step 4: @Bean public MessageSource messageSource() { ResourceBundleMessageSource resource = new ResourceBundleMessageSource(); resource.setBasename("messages"); return resource; } One important property you need to notice here is the basename property; we assigned the value messages for that property. If you remember, this is the name of the property file that we created in step 3. That is all we did to enable the externalizing of messages in a JSP file. Now if we run the application and open up the Add products page, you can see that the product ID label will have the same text as we assigned to the  addProdcut.form.productId.label code in the messages.properties file. Have a go hero – externalize all the labels from all the pages I just showed you how to externalize the message for a single label; you can now do that for every single label available in all the pages. Summary At the start of this article, you saw how to serve and process forms, and you learned how to bind form data with a form backing bean. You also learned how to read a bean in the Controller. After that, we went a little deeper into the form bean binding and configured the binder in our Controller to whitelist some of the POST parameters from being bound to the form bean. Finally, you saw how to use one more Spring special tag <spring:message> to externalize the messages in a JSP file. Resources for Article: Further resources on this subject: Designing your very own ASP.NET MVC Application[article] Mixing ASP.NET Webforms and ASP.NET MVC[article] ASP.NET MVC Framework[article]

article-image-animating-elements
Packt
05 Jul 2016
17 min read
Save for later

Animating Elements

In this article by Alex Libby, author of the book Mastering PostCSS for Web Design, you will study about animating elements. A question if you had the choice of three websites: one static, one with badly done animation, and one that has been enhanced with subtle use of animation. Which would you choose? Well, my hope is the answer to that question should be number three: animation can really make a website stand out if done well, or fail miserably if done badly! So far, our content has been relatively static, save for the use of media queries. It's time though to take a look at how PostCSS can help make animating content a little easier. We'll begin with a quick recap on the basics of animation before exploring the route to moving away from pure animation through to SASS and finally across to PostCSS. We will cover a number of topics throughout this article, which will include: A recap on the use of jQuery to animate content Switching to CSS-based animation Exploring the use of prebuilt libraries, such as Animate.css (For more resources related to this topic, see here.) Let's make a start! Revisiting basic animations Animation is quickly becoming a king in web development; more and more websites are using animations to help bring life and keep content fresh. If done correctly, they add an extra layer of experience for the end user; if done badly, the website will soon lose more custom than water through a sieve! Throughout the course of the article, we'll take a look at making the change from writing standard animation through to using processors, such as SASS, and finally, switching to using PostCSS. I can't promise you that we'll be creating complex JavaScript-based demos, such as the Caaaat animation (http://roxik.com/cat/ try resizing the window!), but we will see that using PostCSS is really easy when creating animations for the browser. To kick off our journey, we'll start with a quick look at the traditional animation. How many times have you had to use .animate() in jQuery over the years? Thankfully, we have the power of CSS3 to help with simple animations, but there was a time when we had to animate content using jQuery. As a quick reminder, try running animate.html from the T34 - Basic animation using jQuery animate() folder. It's not going to set the world on fire, but is a nice reminder of the times gone by, when we didn't know any better: If we take a look at a profile of this animation from within a DOM inspector from within a browser, such as Firefox, it would look something like this screenshot: While the numbers aren't critical, the key point here are the two dotted green lines and that the results show a high degree of inconsistent activity. This is a good indicator that activity is erratic, with a low frame count, resulting in animations that are jumpy and less than 100% smooth. The great thing though is that there are options available to help provide smoother animations; we'll take a brief look at some of the options available before making the change to using PostCSS. For now though, let's make that first step to moving away from using jQuery, beginning with a look at the options available for reducing dependency on the use of .animate() or jQuery. Moving away from jQuery Animating content can be a contentious subject, particularly if jQuery or JavaScript is used. If we were to take a straw poll of 100 people and ask which they used, it is very likely that we would get mixed answers! 
A key answer of "it depends" is likely to feature at or near the top of the list of responses; many will argue that animating content should be done using CSS, while others will affirm that JavaScript-based solutions still have value. Leaving this aside, shall we say lively debate? If we're looking to move away from using jQuery and in particular .animate(), then we have some options available to us: Upgrade your version of jQuery! Yes, this might sound at odds with the theme of this article, but the most recent versions of jQuery introduced the use of requestAnimationFrame, which improved performance, particularly on mobile devices. A quick and dirty route is to use the jQuery Animate Enhanced plugin, available from http://playground.benbarnett.net/jquery-animate-enhanced/ - although a little old, it still serves a useful purpose. It will (where possible) convert .animate() calls into CSS3 equivalents; it isn't able to convert all, so any that are not converted will remain as .animate() calls. Using the same principle, we can even take advantage of the JavaScript animation library, GSAP. The Greensock team have made available a plugin (from https://greensock.com/jquery-gsap-plugin) that replaces jQuery.animate() with their own GSAP library. The latter is reputed to be 20 times faster than standard jQuery! With a little effort, we can look to rework our existing code. In place of using .animate(), we can add the equivalent CSS3 style(s) into our stylesheet and replace existing calls to .animate() with either .removeClass() or .addClass(), as appropriate. We can switch to using libraries, such as Transit (http://ricostacruz.com/jquery.transit/). It still requires the use of jQuery, but gives better performance than using the standard .animate() command. Another alternative is Velocity JS by Jonathan Shapiro, available from http://julian.com/research/velocity/; this has the benefit of not having jQuery as a dependency. There is even talk of incorporating all or part of the library into jQuery, as a replacement for .animate(). For more details, check out the issue log at https://github.com/jquery/jquery/issues/2053. Many people automatically assume that CSS animations are faster than JavaScript (or even jQuery). After all, we don't need to call an external library (jQuery); we can use styles that are already baked into the browser, right? The truth is not as straightforward as this. In short, the right use of either will depend on your requirements and the limits of each method. For example, CSS animations are great for simple state changes, but if sequencing is required, then you may have to resort to using the JavaScript route. The key, however, is less in the method used, but more in how many frames per second are displayed on the screen. Most people cannot distinguish above 60fps. This produces a very smooth experience. Anything less than around 25FPS will produce blur and occasionally appear jerky – it's up to us to select the best method available, that produces the most effective solution. To see the difference in frame rate, take a look at https://frames-per-second.appspot.com/ the animations on this page can be controlled; it's easy to see why 60FPS produces a superior experience! So, which route should we take? Well, over the next few pages, we'll take a brief look at each of these options. In a nutshell, they are all methods that either improve how animations run or allow us to remove the dependency on .animate(), which we know is not very efficient! 
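Since the deciding factor here is frame rate rather than any particular library, it can be useful to log a rough frames-per-second figure while trying out each of these options. The following is only a minimal sketch using requestAnimationFrame; the once-per-second sampling interval is an arbitrary choice and not part of any of the demos:

var frames = 0;
var lastSample = performance.now();

function sampleFps(now) {
  frames++;
  // report roughly once per second
  if (now - lastSample >= 1000) {
    console.log('~' + frames + ' fps');
    frames = 0;
    lastSample = now;
  }
  requestAnimationFrame(sampleFps);
}

requestAnimationFrame(sampleFps);

Run this in the browser console while an animation plays, and it gives a rough figure to set alongside the profiles we look at in the DOM Inspector.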
True, some of these alternatives still use jQuery, but the key here is that your existing code could be using any or a mix of these methods. All of the demos over the next few pages were run at the same time as a YouTube video was being run; this was to help simulate a little load and get a more realistic comparison. Running animations under load means less graphics processing power is available, which results in a lower FPS count. Let's kick off with a look at our first option—the Transit JS library. Animating content with Transit.js In an ideal world, any project we build will have as few dependencies as possible; this applies equally to JavaScript or jQuery-based content as CSS styling. To help with reducing dependencies, we can use libraries such as TransitJS or Velocity to construct our animations. The key here is to make use of the animations that these libraries create as a basis for applying styles that we can then manipulate using .addClass() or .removeClass(). To see what I mean, let's explore this concept with a simple demo: We'll start by opening up a copy of animate.html. To make it easier, we need to change the reference to square-small from a class to a selector: <div id="square-small"></div> Next, go ahead and add in a reference to the Transit library immediately before the closing </head> tag: <script src="js/jquery.transit.min.js"></script> The Transit library uses a slightly different syntax, so go ahead and update the call to .animate() as indicated: smallsquare.transition({x: 280}, 'slow'); Save the file and then try previewing the results in a browser. If all is well, we should see no material change in the demo. But the animation will be significantly smoother—the frame count is higher, at 44.28fps, with less dips. Let's compare this with the same profile screenshot taken for revisiting basic animations earlier in this article. Notice anything? Profiling browser activity can be complex, but there are only two things we need to concern ourselves with here: the fps value and the state of the green line. The fps value, or frames per second, is over three times higher, and for a large part, the green line is more consistent with fewer more short-lived dips. This means that we have a smoother, more consistent performance; at approximately 44fps, the average frame rate is significantly better than using standard jQuery. But we're still using jQuery! There is a difference though. Libraries such as Transit or Velocity convert animations where possible to CSS3 equivalents. If we take a peek under the covers, we can see this in the flesh: We can use this to our advantage by removing the need to use .animate() and simply use .addClass() or .removeClass(). If you would like to compare our simple animation when using Transit or Velocity, there are examples available in the code download, as demos T35A and T35B, respectively. To take it to the next step, we can use the Velocity library to create a version of our demo using plain JavaScript. We'll see how as part of the next demo. Beware though this isn't an excuse to still use JavaScript; as we'll see, there is little difference in the frame count! Animating with plain JavaScript Many developers are used to working with jQuery. After all, it makes it a cinch to reference just about any element on a page! Sometimes though, it is preferable to work in native JavaScript; this could be for speed. 
If we only need to support newer browsers (such as IE11 or Edge, and recent versions of Chrome or Firefox), then adding jQuery as a dependency isn't always necessary. The beauty of libraries such as Transit (or Velocity) is that we don't always have to use jQuery to achieve the same effect; as we'll see shortly, removing jQuery can help improve matters! Let's put this to the test and adapt our earlier demo to work without using jQuery: We'll start by extracting a copy of the T35B folder from the code download bundle. Save this to the root of our project area. Next, we need to edit a copy of animate.html within this folder. Go ahead and remove the link to jQuery and then remove the link to velocity.ui.min.js; we should be left with this in the <head> of our file:

  <link rel="stylesheet" type="text/css" href="css/style.css">
  <script src="js/velocity.min.js"></script>
</head>

A little further down, alter the <script> block as shown:

<script>
  var smallsquare = document.getElementById('square-small');
  var animbutton = document.getElementById('animation-button');
  animbutton.addEventListener("click", function() {
    Velocity(document.getElementById('square-small'), {left: 280}, {duration: 'slow'});
  });
</script>

Save the file and then preview the results in a browser. If we monitor the performance of our demo using a DOM Inspector, we can see a similar frame rate being recorded in our demo. With jQuery as a dependency no longer in the picture, we can clearly see that the frame rate is improved. The downside though is that support is reduced for some browsers, such as IE8 or 9. This may not be an issue for your website; both Microsoft and the jQuery Core Team have announced changes to drop support for IE8-10 and IE8 respectively, which will help encourage users to upgrade to newer browsers. It has to be said though that while using CSS3 is preferable for speed and keeping our pages as lightweight as possible, using Velocity does provide a raft of extra opportunities that may be of use to your projects. The key though is to carefully consider whether you really do need them, or whether CSS3 will suffice and allow you to use PostCSS.

Switching classes using jQuery

At this point, there is one question that comes to mind: what about using class-based animation? By this, I mean dropping any dependency on external animation libraries and switching to plain jQuery with either the .addClass() or .removeClass() method. In theory, it sounds like a great idea; we can remove the need to use .animate() and simply swap classes as needed, right? Well, it's an improvement, but it is still slower than using a combination of pure JavaScript and switching classes. It will all boil down to a trade-off between the ease of using jQuery to reference elements and the speed of pure JavaScript, as follows:

1. We'll start by opening a copy of animate.html from the previous exercise. First, go ahead and replace the call to VelocityJS with this line within the <head> of our document:

   <script src="js/jquery.min.js"></script>

2. Next, remove the code between the <script> tags and replace it with this (the move class itself comes from the project's stylesheet; we'll sketch it after these steps):

   var smallsquare = $('.rectangle').find('.square-small');
   $('#animation-button').on("click", function() {
     smallsquare.addClass("move");
     smallsquare.one('transitionend', function(e) {
       $('.rectangle').find('.square-small').removeClass("move");
     });
   });

3. Save the file.
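The steps above assume that a move class already exists in style.css for this demo; the exact rule isn't shown here, so the following is only a sketch of the kind of declaration it needs, with the distance and timing values chosen to match the earlier 280-pixel movement rather than taken from the book's code:

.square-small {
  /* the transitionend handler above only fires if a transition is declared */
  transition: transform 2s ease-in-out;
}

.square-small.move {
  transform: translateX(280px);
}

Because the class-switching approach relies on the transitionend event, whichever property you animate must be covered by a transition; otherwise the event never fires and the move class is never removed.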
If we preview the results in a browser, we should see no apparent change in how the demo appears, but the transition is marginally more performant than using a combination of jQuery and Transit. The real change in our code though, will be apparent if we take a peek under the covers using a DOM Inspector. Instead of using .animate(), we are using CSS3 animation styles to move our square-small <div>. Most browsers will accept the use of transition and transform, but it is worth running our code through a process, such as Autocomplete, to ensure we apply the right vendor prefixes to our code. The beauty about using CSS3 here is that while it might not suit large, complex animations, we can at least begin to incorporate the use of external stylesheets, such as Animate.css, or even use a preprocessor, such as SASS to create our styles. It's an easy change to make, so without further ado and as the next step on our journey to using PostCSS, let's take a look at this in more detail. If you would like to create custom keyframe-based animations, then take a look at http://cssanimate.com/, which provides a GUI-based interface for designing them and will pipe out the appropriate code when requested! Making use of prebuilt libraries Up to this point, all of our animations have had one thing in common; they are individually created and stored within the same stylesheet as other styles for each project. This will work perfectly well, but we can do better. After all, it's possible that we may well create animations that others have already built! Over time, we may also build up a series of animations that can form the basis of a library that can be reused for future projects. A number of developers have already done this. One example of note is the Animate.css library created by Dan Eden. In the meantime, let's run through a quick demo of how it works as a precursor to working with it in PostCSS. The images used in this demo are referenced directly from the LoremPixem website as placeholder images. Let's make a start: We'll start by extracting a copy of the T37 folder from the code download bundle. Save the folder to our project area. Next, open a new file and add the following code: body { background: #eee; } #gallery {   width: 745px;   height: 500px;   margin-left: auto;   margin-right: auto; }   #gallery img {   border: 0.25rem solid #fff;   margin: 20px;   box-shadow: 0.25rem 0.25rem 0.3125rem #999;   float: left; } .animated {   animation-duration: 1s; animation-fill-mode: both; } .animated:hover {   animation-duration: 1s;   animation-fill-mode: both; }  Save this as style.css in the css subfolder within the T37 folder. Go ahead and preview the results in a browser. If all is well, then we should see something akin to this screenshot: If we run the demo, we should see images run through different types of animation; there is nothing special or complicated here. The question is though, how does it all fit in with PostCSS? Well, there's a good reason for this; there will be some developers who have used Animate.css in the past and will be familiar with how it works; we will also be using a the postcss-animation plugin later in Updating code to use PostCSS, which is based on the Animate.css stylesheet library. For those of you who are not familiar with the stylesheet library though, let's quickly run through how it works within the context of our demo. Dissecting the code to our demo The effects used in our demo are quite striking. 
Indeed, one might be forgiven for thinking that they required a lot of complex JavaScript! This, however, could not be further from the truth. The Animate.css file contains a number of animations based on @keyframe similar to this: @keyframes bounce {   0%, 20%, 50%, 80%, 100% {transform: translateY(0);}   40% {transform: translateY(-1.875rem);}   60% {transform: translateY(-0.9375rem);} } We pull in the animations using the usual call to the library within the <head> section of our code. We can then call any animation by name from within our code:   <div id="gallery">     <a href="#"><img class="animated bounce" src="http://lorempixum.com/200/200/city/1" alt="" /></a> ...   </div>   </body> You will notice the addition of the .animated class in our code. This controls the duration and timing of the animation, which are set according to which animation name has been added to the code. The downside of not using JavaScript (or jQuery for that matter) means that the animation will only run once when the demo is loaded; we can set it to run continuously by adding the .infinite class to the element being animated (this is part of the Animate library). We can fake a click option in CSS, but it is an experimental hack that is not supported across all the browsers. To affect any form of control, we really need to use JavaScript (or even jQuery)! If you are interested in details of the hack, then take a look at this response on Stack Overflow at http://stackoverflow.com/questions/13630229/can-i-have-an-onclick-effect-in-css/32721572#32721572. Okay! Onward we go. We've covered the basic use of prebuilt libraries, such as Animate. It's time to step up a gear and make the transition to PostCSS. Summary In this article, we studied about recap on the use of jQuery to animate content. We also looked into switching to CSS-based animation. At last, we saw how to make use of prebuilt libraries in short.  Resources for Article:   Further resources on this subject: Responsive Web Design with HTML5 and CSS3 - Second Edition [article] Professional CSS3 [article] Instant LESS CSS Preprocessor How-to [article]

Web Components

Packt
17 Jun 2016
12 min read
In this article by Arshak Khachatryan, the author of Getting Started with Polymer, we will discuss web components. Web technologies are currently growing rapidly. Though most websites use these technologies nowadays, we come across many with bad, unresponsive UI designs and awful performance. One of the main reasons we should think about responsive websites is that users are now moving to the mobile web; 55% of web users use mobile phones because they are faster and more convenient. This is why we need to provide mobile content in the simplest way possible. Everything is moving toward minimalism, even the Web. The new web standards are changing rapidly too. In this article, we will cover one of these new technologies, web components, and what they do. We will discuss the following specifications of web components in this article:

Templates
Shadow DOM

(For more resources related to this topic, see here.)

Templates

In this section, we will discuss what we can do with templates. However, let's answer a few questions before this. What are templates, and why should we use them?

Templates are basically fragments of HTML, but let's call these fragments the "zombie" fragments of HTML, as they are neither alive nor dead. What is meant by "neither alive nor dead"? Let me explain this with a real-life example. Once, when I was working on the ucraft.me project (it's a website built with a lot of cool stuff in it), we faced some rather new challenges with templates. We had a lot of form elements, but we didn't know where to save the form elements' content. We didn't want to load the DOM of each form element, but what could we do? As always, we did some magic: we created a lot of div elements containing the form elements and hid them with CSS. However, the CSS display: none property stops an element from being rendered, but the browser still loads it. This was a problem because there were a lot of form element templates, and it affected the performance of the website. I recommended to my team that they work with templates. Templates can contain HTML content, but that content is neither loaded nor rendered until you access it with JavaScript. In that sense, template elements are "dead elements": their content stays inert until you request it. Let's move ahead, and let me show you some examples of how you can create templates and do some stuff with their content.

Imagine that you are working on a big project where you need to load some dynamic content without AJAX. If I had a task such as this in the past, I would have created a PHP file and fetched its content by calling the jQuery .load() function. Now, however, you can save your content inside a <template> element and get the content without any jQuery or AJAX, with just a single line of JavaScript code.

Let's create a template. In index.html, we have <template> and some content we want to get in the future, as shown in the following code block:

<template class="superman">
  <div>
    <img src="assets/img/superman.png" class="animated_superman" />
  </div>
</template>

The time has now come for JavaScript! Execute the following code:

<script>
  // selecting the template element with querySelector()
  var tmpl = document.querySelector('.superman');
  // getting the <template> content
  var content = tmpl.content;
  // making some changes in the content
  content.querySelector('.animated_superman').width = 200;
  // appending the template to the body
  document.body.appendChild(content);
</script>

So, that's it! Cool, right? The content will load only after you append it to the document.
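One detail worth noting: appendChild() moves the template's content fragment into the document, so after running the preceding code the <template> itself is left empty. If you expect to stamp the same template several times, clone the content instead of appending it directly. Here is a small sketch; the loop count is purely for illustration:

<script>
  var tmpl = document.querySelector('.superman');
  for (var i = 0; i < 3; i++) {
    // document.importNode(tmpl.content, true) also works here
    var copy = tmpl.content.cloneNode(true);
    document.body.appendChild(copy);
  }
</script>

The original template keeps its content, and each appended copy becomes live only when it enters the document.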
So, do you realize that templates are a part of the future web? If you are using Chrome Canary, just turn on the flags of experimental web platform features and enable HTML imports and experimental JavaScript. There are four ways to use templates, which are: Add templates with hidden elements in the document and just copy and paste the data when you need it, as follows: <div hidden data-template="superman"> <div> <p>SuperMan Head</p> <img src="assets/img/superman.png" class="animated_superman" /> </div> </div> However, the problem is that a browser will load all the content. It means that the browser will load but not render images, video, audio, and so on. Get the content of the template as a string (by requesting with AJAX or from <script type="x-template">). However, we might have some problems in working with the string. This can be dangerous for XSS attacks; we just need to pay some more attention to this: <script data-template="batman" type="x-template"> <div> <p>Batman Head this time!</p> <img src="assets/img/superman.png" class="animated_superman" /> </div> </div> Compiled templates such as Hogan.js (http://twitter.github.io/hogan.js/) work with strings. So, they have the same flaw as the patterns of the second type. Templates do not have these disadvantages. We will work with DOM and not with the strings. We will then decide when to run the code. In conclusion: The <template> tag is not intended to replace the system of standardization. There are no tricky iteration operators or data bindings. Its main feature is to be able to insert "live" content along with scripts. Lastly, it does not require any libraries. Shadow DOM The Shadow DOM specification is a separate standard. A part of it is used for standard DOM elements, but it is also used to create with web components. In this section, you will learn what Shadow DOM is and how to use it. Shadow DOM is an internal DOM element that is separated from an external document. It can store your ID, styles, and so on. Most importantly, Shadow DOM is not visible outside of its scope without the use of special techniques. Hence, there are no conflicts with the external world; it's like an iframe. Inside the browser The Shadow DOM concept has been used for a long time inside browsers themselves. When the browser shows complex controls, such as a <input type = "range"> slider or a <input type = "date"> calendar within itself, it constructs them out of the most ordinary styled <div>, <span>, and other elements. They are invisible at the first glance, but they can be easily seen if the checkbox in Chrome DevTools is set to display Shadow DOM: In the preceding code, #shadow-root is the Shadow DOM. Getting items from the Shadow DOM can only be done using special JavaScript calls or selectors. They are not children but a more powerful separation of content from the parent. In the preceding Shadow DOM, you can see a useful pseudo attribute. It is nonstandard and is present for solely historical reasons. It can be styled via CSS with the help of subelements—for example, let's change the form input dates to red via the following code: <style> input::-webkit-datetime-edit { background: red; } </style> <input type="date" /> Once again, make a note of the pseudo custom attribute. Speaking chronologically, in the beginning, the browsers started to experiment with encapsulated DOM structure inside their scopes, then Shadow DOM appeared which allowed developers to do the same. Now, let's work with the Shadow DOM from JavaScript or the standard Shadow DOM. 
Creating a Shadow DOM

A Shadow DOM can be created inside any element with the elem.createShadowRoot() call, as shown in the following code:

<div id="container">You know why?</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = "Because I'm Batman!";
</script>

If you run this example, you will see that the contents of the #container element have disappeared, and it only shows "Because I'm Batman!". This is because the element now has a Shadow DOM, which hides the element's previous content; instead of that content, the browser shows only the Shadow DOM.

If you wish, you can place the element's ordinary content inside this Shadow DOM. To do this, you need to specify where it should go. This is done through an "insertion point", which is declared using the <content> tag; here's an example:

<div id="container">You know why?</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = '<h1><content></content></h1><p>Winter is coming!</p>';
</script>

Now, you will see "You know why?" in the title, followed by "Winter is coming!". In Chrome DevTools, you would see the resulting #shadow-root nested inside div#container.

The following are some important details about the Shadow DOM:

The <content> tag affects only the display, and it does not move the nodes physically. As you can see in the DevTools view, the node "You know why?" remains inside div#container. It can even be obtained using container.firstElementChild.

Inside the <content> tag, we have the content of the element itself; in this example, the string "You know why?".

With the select attribute of the <content> element, you can specify a particular selector for the content you want to transfer; for example, <content select="p"></content> will transfer only paragraphs.

Inside the Shadow DOM, you can use the <content> tag multiple times with different values of select, thus indicating where to place which part of the original content. However, it is impossible to duplicate nodes: if a node has already been shown in one <content> tag, it will be skipped by the next. For example, if there is a <content select="h3.title"> tag and then <content select="h3">, the first <content> will show the <h3> headers with the class title, while the second will show all the others, except for the ones already shown.

In the preceding example, the <content></content> tag is empty. If we add some content inside the <content> tag, it will be shown only when there are no nodes to distribute into it, acting as default content. Check out the following code:

<div id="container">
  <h3>Once upon a time, in Westeros</h3>
  <strong>Ruled a king by name Joffrey and he's dead!</strong>
</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = '<content select="h3"></content> <content select=".writer">Jon Snow</content> <content></content>';
</script>

When you run the JS code, you will see the following:

The first <content select="h3"> tag will display the title.

The second <content select=".writer"> tag would show the writer's name, but as there is no element with this selector, it takes the default value: Jon Snow.

The third <content> tag displays the rest of the original content of the element, without the <h3> header, which has already been shown.

Once again, note that <content> does not physically move nodes in the DOM.

Root shadowRoot

After the creation of a root in the internal DOM, the tree will be available as container.shadowRoot.
It is a special object that supports the basic methods of CSS requests and is described in detail in ShadowRoot. You need to go through container.shadowRoot if you need to work with content in the Shadow DOM. You can create a new Shadow DOM tree of JavaScript; here's an example: <div id="container">Polycasts</div> <script> // create a new Shadow DOM tree for element var root = container.createShadowRoot(); root.innerHTML = '<h1><content></content></h1> <strong>Hey googlers! Let's code today.</strong>'; </script> <script> // read data from Shadow DOM for elem var root = container.shadowRoot; // Hey googlers! Let's code today. document.write('<br/><em>container: ' + root. querySelector('strong').innerHTML); // empty as physical nodes - is content document.write('<br/><em>content: ' + root. querySelector('content').innerHTML); </script> To finish up, Shadow DOM is a tool to create a separate DOM tree inside the cell, which is not visible from outside without using special techniques: A lot of browser components with complex structures have Shadow DOM already. You can create Shadow DOM inside every element by calling elem.createShadowRoot(). In the future, it will be available as a elem.shadowRoot root, and you can access it inside the Shadow DOM. It is not available for custom elements. Once the Shadow DOM appears in the element, the content of it is hidden. You can see just the Shadow DOM. The <content> element moves the contents of the original item in the Shadow DOM only visually. However, it remains in the same place in the DOM structure. Detailed specifications are given at http://w3c.github.io/webcomponents/spec/shadow/. Summary Using web components, you can easily create your web application by splitting it into parts/components. Resources for Article: Further resources on this subject: Handling the DOM in Dart [article] Manipulation of DOM Objects using Firebug [article] jQuery 1.4 DOM Manipulation Methods for Style Properties and Class Attributes [article]

Fine Tune Your Web Application by Profiling and Automation

Packt
07 Jun 2016
17 min read
This article by James Singleton, author of the book ASP.NET Core 1.0 High Performance, sheds some light on how to improve the performance of your web application by profiling and testing it. In this article, we will cover writing automated tests to monitor performance, along with adding these to a Continuous Integration (CI) and deployment system so that we are constantly checking for regressions.

(For more resources related to this topic, see here.)

Profiling and measurement

It's impossible to overstate how important profiling, measuring, and analyzing reliable evidence is, especially when dealing with web application performance. Maybe you have used Glimpse or MiniProfiler to provide insights into the running of your web application, or perhaps you are familiar with the Visual Studio diagnostics tools and the Application Insights Software Development Kit (SDK).

There's another tool that's worth mentioning, and that's the Prefix profiler, which you can get at prefix.io. Prefix is a free, web-based ASP.NET profiler that supports ASP.NET Core. However, it doesn't yet support .NET Core (although this is planned), so you'll need to run ASP.NET Core on .NET Framework 4.6 for now. There's a live demo on their website (at demo.prefix.io) if you want to quickly check it out.

You may also want to look at the PerfView performance analysis tool from Microsoft, which is used in the development of .NET Core. You can download PerfView from https://www.microsoft.com/en-us/download/details.aspx?id=28567, as a ZIP file that you can just extract and run. It is useful to analyze the memory of .NET applications, among other things. You can use PerfView for many debugging activities, for example, to snapshot the heap or force GC runs. We don't have space for a detailed walkthrough here, but the included instructions are good, and there are blogs on MSDN with guides and many video tutorials on Channel 9 at channel9.msdn.com/Series/PerfView-Tutorial if you need more information. Sysinternals tools (technet.microsoft.com/sysinternals) can also be helpful, but as they are not focused on .NET, they are less useful in this context.

While tools such as these are great, what would be even better is building performance monitoring into your development workflow. Automate everything that you can and make performance checks transparent, routine, and run by default. Manual processes are bad because steps can be skipped and errors can easily be made. You wouldn't dream of developing software by e-mailing files around or editing code directly on a production server, so why not automate your performance tests too?

Change control processes exist to ensure consistency and reduce errors. This is why using a Source Control Management (SCM) system, such as git or Team Foundation Server (TFS), is essential. It's also extremely useful to have a build server and perform Continuous Integration (CI) or even fully automated deployments.

If the code that is deployed in production differs from what you have on your local workstation, then you have very little chance of success. This is one of the reasons why SQL Stored Procedures (SPs/sprocs) are difficult to work with, at least without rigorous version control. It's far too easy to modify an old version of an SP on a development database, accidentally revert a bug fix, and end up with a regression. If you must use sprocs, then you will need a versioning system, such as ReadyRoll (which Redgate has now acquired).
If you practice Continuous Delivery (CD),then you'll have a build server, such as JetBrains TeamCity, ThoughtWorksGoCD, orCruiseControl.NET,or a cloud service, such as AppVeyor. Perhaps, you even automating your deployments using a tool, such as Octopus Deploy, and have your own internal NuGet feeds using software such as TheMotleyFool's Klondike or a cloud service such as MyGet (which also supports npm, bower, and VSIX packages). Bypassing processes and doing things manually will cause problems, even if you follow a script. If it can be automated, then it probably should be, and this includes testing. Automated testing As previously mentioned, the key to improving almost everything is automation. Tests thatare only run manually on developer workstations add very little value. It should of course be possible to run the tests on desktops, but this shouldn't be the official result because there's no guarantee that they will pass on a server (where the correct functioning matters more). Although automation usually occurs on servers, it can be useful to automate tests running on developer workstations too. One way of doing this in Visual Studio is to use a plugin, such as NCrunch. This runs your tests as you work, which can be very useful if you practice Test-Driven Development (TDD) and write your tests before your implementations. You can read more about NCrunch and see the pricing at ncrunch.net, or there's a similar open source project at continuoustests.com. One way of enforcing testing is to use gated check-ins in TFS, but this can be a little draconian, and if you use an SCM-like git, then it's easier to work on branches and simply block merges until all of the tests pass. You want to encourage developers to check-in early and often because this makes merges easier.Therefore, it's a bad idea to have features in progress sitting on workstations for a long time (generally no longer than a day). Continuous integration CI systems automatically build and test all of your branches, and they feed this information back to your version control system. For example, using the GitHubAPI,you can block the merging of pull requests until the build server has reported success of the merge result. Both Bitbucket and GitLab offer free CI systems called pipelines, so you may not need any extra systems in addition to one for source control because everything is in one place. GitLab also offers an integrated Docker container registry, and there is an open source version that you can install locally. Docker is well supported by .NET Core, and the new version of Visual Studio.You cando something similar with Visual Studio Team Services for CI builds and unit testing. Visual Studioalso has git services built into it. This process works well for unit testing because unit tests must be quick so that you get feedback early.Shortening the iteration cycle is a good way of increasing productivity,and you'll want the lag to be as small as possible. However, running tests on each build isn't suitable for all types of testing because not all tests can be quick. In this case, you'll need an additional strategy so as not to slow down your feedback loop. There are many unit testing frameworks available for .NET, for example NUnit, xUnit, and MSTest (Microsoft's unit test framework), along with multiple graphical ways of running tests locally, such as the Visual Studio Test Explorer and the ReSharper plugin. 
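As a concrete illustration of the GitHub API approach mentioned earlier, a build step can report a commit status that a protected branch then requires before a pull request can be merged. The following Node.js sketch posts a status using the statuses endpoint; the repository name, the commit SHA taken from a GIT_COMMIT variable, and the GITHUB_TOKEN environment variable are assumptions for this example, not part of any particular CI product:

var https = require('https');

var body = JSON.stringify({
  state: 'success',            // or 'pending', 'failure', 'error'
  context: 'ci/perf-tests',
  description: 'Performance tests passed'
});

var req = https.request({
  hostname: 'api.github.com',
  path: '/repos/myorg/myrepo/statuses/' + process.env.GIT_COMMIT,
  method: 'POST',
  headers: {
    'User-Agent': 'perf-ci-bot',
    'Authorization': 'token ' + process.env.GITHUB_TOKEN,
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body)
  }
}, function (res) {
  console.log('GitHub responded with ' + res.statusCode);
});

req.end(body);

Most CI servers can run a script like this as a post-build step, so the merge button in GitHub stays blocked until the status turns green.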
People have their favorites, but it doesn't really matter what you choose because most CI systems will support all of them. Slow testing Some tests are slow,but even if each test is fast they can easily add up to a lengthy time if you have a lot of them. This is especially true if they can't be parallelized and need to be run in sequence.Therefore, you should always aim to have each test stand on its own, without any dependencies on others. It's good practice to divide your tests into rings of importance so that you can at least run a subset of the most crucial on every CI build. However, if you have a large test suite or some tests thatare unavoidably slow, then you may choose to only run these once a day (perhaps overnight) or every week (maybe over the weekend). Some testing is simply slow by nature, and performance testing can often fall into this category, for example, load testing or User Interface (UI) testing. These are usually classed as integration testing, rather than unit testing, because they require your code to be deployed to an environment for testing, and the tests can't simply exercise the binaries. To make use of such automated testing, you will need to have an automated deployment system in addition to your CI system. If you have enough confidence in your test system, then you caneven have live deployments happen automatically. This works well if you also use feature switching to control the rollout of new features. Realistic environments Using a test environment that is as close to production (or as live-like) as possible is a good step toward ensuring reliable results. You cantry and use a smaller set of servers, and then scale your results up to get an estimate of live performance, but this assumes that you have an intimate knowledge of how your application scales, and what hardware constraints will be the bottlenecks. A better option is to use your live environment or rather what will become your production stack. You first create a staging environment that is identical to live, then you deploy your code to it, and run your full test suite, including a comprehensive performance test, ensuring that it behaves correctly. Once you are happy, then you simply swap staging and production, perhaps using DNS or Azure staging slots. Your old live environment now either becomes your test environment or if you use immutable cloud instances, then you can simply terminate it and spin up a new staging system. This concept is known as blue‑green deployment. You don't necessarily have to move all users across at once in a big bang. You canmove a few over first to test whether everything is correct. Web UI testing tools One of the most popular web testing tools is Selenium, which allows you to easily write tests and automate web browsers using WebDriver. Selenium is useful for many other tasks apart from testing, and you can read more about it at docs.seleniumhq.org. WebDriver is a protocol for remote controlling web browsers, and you can read about it at w3c.github.io/webdriver/webdriver-spec.html. Selenium uses real browsers, the same versions your users will access your web application with. This makes it excellent to get representative results, but it can cause issues if itrunsfrom the command line in an unattended fashion. For example, you may find your test server's memory full of dead browser processes, which have timed out. 
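To make the WebDriver idea concrete, here is a minimal sketch using the selenium-webdriver package for Node.js; the URL and the expected title are placeholders, and it assumes the relevant browser driver is installed on the test machine:

var webdriver = require('selenium-webdriver');
var until = webdriver.until;

var driver = new webdriver.Builder()
    .forBrowser('firefox')
    .build();

// open the page and wait for the expected title before quitting
driver.get('http://localhost:5000/');
driver.wait(until.titleContains('Home'), 5000);
driver.quit();

The same test can later be pointed at a different browser simply by changing the forBrowser() argument, which is what makes WebDriver so useful for cross-browser runs.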
You may find it easier to use a dedicated headless test browser, which, while not exactly the same as what your users will see, is more suitable for automation. The best approach is of course to use a combination of both, perhaps running headless tests first and then running the same tests on real browsers with WebDriver.

One of the most well-known headless test browsers is PhantomJS. This is based on the WebKit engine, so it should give similar results to Chrome and Safari. PhantomJS is useful for many things apart from testing, such as capturing screenshots, and many different testing frameworks can drive it. As the name suggests, JavaScript can control PhantomJS, and you can read more about it at phantomjs.org.

WebKit is an open source engine for web browsers, which was originally part of the KDE Linux desktop environment. It is mainly used in Apple's Safari browser, but a fork called Blink is used in Google Chrome, Chromium, and Opera. You can read more at webkit.org.

Other automatable testing browsers based on different engines are available, but they have some limitations. For example, SlimerJS (slimerjs.org) is based on the Gecko engine used by Firefox, but is not fully headless.

You probably want to use a higher-level testing utility rather than scripting browser engines directly. One such utility that provides many useful abstractions is CasperJS (casperjs.org), which supports running on both PhantomJS and SlimerJS. Another library is Capybara, which allows you to easily simulate user interactions in Ruby. It supports Selenium, WebKit, Rack, and PhantomJS (via Poltergeist), although it's more suitable for Rails apps. You can read more at jnicklas.github.io/capybara.

There is also TrifleJS (triflejs.org), which uses the .NET WebBrowser class (the Internet Explorer Trident engine), but this is a work in progress. Additionally, there's Watir (watir.com), which is a set of Ruby libraries that target Internet Explorer and WebDriver. However, neither has been updated in a while, and IE has changed a lot recently.

Microsoft Edge (codenamed Spartan) is the new version of IE, and the Trident engine has been forked to EdgeHTML. The JavaScript engine (Chakra) has been open sourced as ChakraCore (github.com/Microsoft/ChakraCore).

It shouldn't matter too much what browser engine you use, and PhantomJS will work fine as a first pass for automated tests. You can always test with real browsers after using a headless one, perhaps with Selenium or with PhantomJS using WebDriver. When we refer to browser engines (WebKit/Blink, Gecko, and Trident/EdgeHTML), we generally mean only the rendering and layout engine, not the JavaScript engine (SFX/Nitro/FTL/B3, V8, SpiderMonkey, and Chakra/ChakraCore).

You'll probably still want to use a utility such as CasperJS to make writing tests easier, and you'll likely need a test framework, such as Jasmine (jasmine.github.io) or QUnit (qunitjs.com), too. You can also use a test runner that supports both Jasmine and QUnit, such as Chutzpah (mmanela.github.io/chutzpah).

You can integrate your automated tests with many different CI systems, for example, Jenkins or JetBrains TeamCity. If you prefer a cloud-hosted option, then there's Travis CI (travis-ci.org) and AppVeyor (appveyor.com), which is also suitable for building .NET apps. You may prefer to run your integration and UI tests from your deployment system, for example, to verify a successful deployment in Octopus Deploy.
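To give a flavour of what a CasperJS test looks like, here is a minimal sketch that checks a page responds successfully; the URL and the test name are assumptions, and it would be run from the command line with casperjs test:

casper.test.begin('Home page responds', 1, function suite(test) {
    casper.start('http://localhost:5000/', function () {
        // one planned assertion, declared above
        test.assertHttpStatus(200);
    });

    casper.run(function () {
        test.done();
    });
});

Because this runs on PhantomJS (or SlimerJS) from the command line, it slots neatly into a CI build step, and the same scenario can be re-run against real browsers later via WebDriver if needed.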
There are also dedicated, cloud-based, web-application UI testing services available, such as BrowserStack (browserstack.com).

Automating UI performance tests

Automated UI tests are clearly great to check functional regressions, but they are also useful to test performance. You have programmatic access to the same information provided by the network inspector in the browser developer tools.

You can integrate the YSlow (yslow.org) performance analyzer with PhantomJS, enabling your CI system to check for common web performance mistakes on every commit. YSlow came out of Yahoo!, and it provides rules used to identify bad practices, which can slow down web applications for users. It's a similar idea to Google's PageSpeed Insights service (which can be automated via its API).

However, YSlow is pretty old, and things have moved on in web development recently, for example, HTTP/2. A modern alternative is "the coach" from sitespeed.io, and you can read more at github.com/sitespeedio/coach. You should check out their other open source tools too, such as the dashboard at dashboard.sitespeed.io, which uses Graphite and Grafana.

You can also export the network results (in the industry standard HAR format) and analyze them however you like, for example, by visualizing them graphically in waterfall format, as you might do manually with your browser developer tools.

The HTTP Archive (HAR) format is a standard way of representing the content of monitored network data so that it can be exported to other software. You can copy or save as HAR in some browser developer tools by right-clicking on a network request.
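If a full HAR export is more than you need, PhantomJS can log basic network information directly through its resource callbacks. This is only a simplified sketch along the lines of the netsniff.js example that ships with PhantomJS; the URL is supplied as a command-line argument:

var page = require('webpage').create();
var system = require('system');
var url = system.args[1];

page.onResourceReceived = function (response) {
    // each resource fires 'start' and 'end' stages; log the completed ones
    if (response.stage === 'end') {
        console.log(response.status + ' ' + response.bodySize + ' bytes ' + response.url);
    }
};

page.open(url, function (status) {
    console.log('Page load finished with status: ' + status);
    phantom.exit();
});

Piping this output into your CI logs on every build gives a crude but useful early warning when a page suddenly starts pulling down far more requests or bytes than before.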
It allows you to easily create, spin up, and share developer environments. The successor to Vagrant, Otto (ottoproject.io), takes this a step further and abstracts deployment too. Therefore, you can push to multiple cloud providers without worrying about the intricacies of CloudFormation, OpsWorks, or anything else. If you create your infrastructure as code, then your scripts can be versioned and tested, just like your application code. We'll stop before we get too far off-topic, but the point is that if you have reliable environments, which you can easily verify, instantiate, and perform testing on, then CI is a lot easier.

Monitoring

Monitoring is essential, especially for web applications, and there are many tools available to help with it. A popular open source infrastructure monitoring system is Nagios (nagios.org). Another, more modern, open source alerting and metrics tool is Prometheus (prometheus.io). If you use a cloud platform, then there will be monitoring built in, for example, AWS CloudWatch or Azure Diagnostics. There are also cloud services to directly monitor your website, such as Pingdom (pingdom.com), UptimeRobot (uptimerobot.com), Datadog (datadoghq.com), and PagerDuty (pagerduty.com).

You probably already have a system in place to measure availability, but you can also use the same systems to monitor performance. This is not only helpful to ensure a responsive user experience, but it can also provide early warning signs that a failure is imminent. If you are proactive and take preventative action, then you can save yourself a lot of trouble reactively fighting fires. It helps to consider application support requirements at design time. Development, testing, and operations aren't competing disciplines, and you will succeed more often if you work as one team rather than simply throwing an application over the fence and saying it "worked in test, ops problem now".

Summary In this article, we saw how we can integrate automated testing into a CI system in order to monitor for performance regressions. We also learned some strategies to roll out changes and ensure that tests accurately reflect real life. We also briefly covered some options for DevOps practices and cloud-hosting providers, which together make continuous performance testing much easier. Resources for Article:   Further resources on this subject: Designing your very own ASP.NET MVC Application [article] Creating a NHibernate session to access database within ASP.NET [article] Working With ASP.NET DataList Control [article]