
How-To Tutorials - Web Development


Testing and Quality Control

Packt
04 Jan 2017
19 min read
In this article by Pablo Solar Vilariño and Carlos Pérez Sánchez, the authors of the book PHP Microservices, we will cover the following topics:

Test-driven development
Behavior-driven development
Acceptance test-driven development
Tools

Test-driven development

Test-Driven Development (TDD) is part of the Agile philosophy, and it appears to solve a common developer problem: as an application evolves and grows, the code base starts to decay. Developers fix problems to keep it running, but every single line added can introduce a new bug or even break other functions.

Test-driven development is a learning technique that helps the developer learn about the domain problem of the application they are going to build, doing it in an iterative, incremental, and constructivist way:

Iterative because the technique always repeats the same process to deliver value
Incremental because with each iteration, we have more unit tests available
Constructivist because everything we develop can be tested straight away during the process, giving us immediate feedback

Also, once we finish a unit test or iteration, we can forget about it, because it will be kept from then on throughout the entire development process, reminding us of the domain problem through the unit test; this is a good approach for forgetful developers.

It is very important to understand that TDD includes four things: analysis, design, development, and testing. In other words, doing TDD means understanding the domain problem and analyzing it correctly, designing the application well, developing well, and testing it. To be clear: TDD is not just about implementing unit tests; it is the whole process of software development.

TDD matches projects based on microservices perfectly, because using microservices in a large project means dividing it into small microservices or functionalities, which is like an aggregation of small projects connected by a communication channel. The project size is independent of using TDD, because the technique divides each functionality into small examples, and for this it does not matter whether the project is big or small, and even less when the project is divided into microservices. Also, microservices are better than a monolithic project in this respect, because the functionalities to cover with unit tests are organized by microservice, which helps developers know where to begin using TDD.

How to do TDD?

Doing TDD is not difficult; we just need to follow some steps and repeat them, improving our code and checking that we did not break anything. TDD involves the following steps:

1. Write the unit test: It needs to be the simplest and clearest test possible, and once it is done, it must fail; this is mandatory. If it does not fail, there is something we are not doing properly (see the sketch after these steps).
2. Run the tests: If the test fails, this is the moment to develop the minimum code needed to pass the test, just what is necessary; do not code anything extra. Once you have developed the minimum code to pass the test, run the tests again (step two); if they pass, go to the next step, if not, fix the code and run the tests again.
3. Improve the code: If you think it is possible to improve the code you wrote, do it and run the tests again (step two). If you think it is perfect, write a new unit test (step one).
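To make the first step concrete, here is a minimal sketch of what such a first, deliberately failing test could look like with PHPUnit. The Calculator class and its add() method are invented for illustration; they do not exist yet, which is exactly why the test starts out red:

<?php
use PHPUnit\Framework\TestCase;
// Depending on your PHPUnit version, the base class may instead be
// the older PHPUnit_Framework_TestCase.

// Hypothetical first test: Calculator is not implemented yet, so this fails (red).
class CalculatorTest extends TestCase
{
    public function testAddReturnsTheSumOfTwoNumbers()
    {
        $calculator = new Calculator();
        $this->assertSame(5, $calculator->add(2, 3));
    }
}

Running phpunit at this point fails, which confirms the test is actually exercising something that has not been implemented yet.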
To do TDD, it is necessary to write the tests before implementing the function; if the tests are written after the implementation has started, it is not TDD, it is just testing. If we implement the application without testing and only create unit tests when it is finished, or start creating them along the way, we are doing classic testing and we are not getting the benefits of TDD. When we develop functions without prior testing, our abstract idea of the domain problem may be wrong, or it may be clear at the start but change during the development process as concepts get mixed up. If we write the tests afterwards, we are merely checking whether the ideas in our mind were correct once the implementation is finished, and we will probably have to change some methods, or even whole functionalities, after having spent time coding them. Obviously, testing is always better than not testing, but doing TDD is still better than just classic testing.

Why should I use TDD?

TDD is the answer to questions such as: Where shall I begin? How can I do it? How can I write code that can be modified without breaking anything? How can I know what I have to implement?

The goal is not to write many unit tests without purpose, but to design the application properly, following the requirements. In TDD, we do not think about implementing functions; we think about good examples of functions related to the domain problem, in order to remove the ambiguity the domain problem creates. In other words, by doing TDD we reproduce a specific function or use case in as many examples as it takes to describe the function or task without ambiguity or misinterpretation. TDD can be the best way to document your application.

With other software development methodologies, we start by thinking about what the architecture is going to be, which pattern is going to be used, how communication between microservices is going to work, and so on. But what happens if, once we have all of this planned, we realize it is not necessary? How much time will pass until we realize that? How much effort and money will we have spent?

TDD defines the architecture of our application by creating small examples in many iterations until we discover what the architecture is; the examples slowly show us the steps to follow in order to determine the best structures, patterns, or tools to use, avoiding wasted resources during the first stages of our application. This does not mean that we work without an architecture; obviously, we have to know whether our application is going to be a website or a mobile app and use a proper framework, and we have to know what the interoperability in the application is going to be. In our case, it will be an application based on microservices, and that gives us enough support to start creating the first unit tests. The architecture we remove is the architecture on top of the architecture; in other words, the usual up-front guidelines for developing an application. TDD produces an architecture without ambiguity from unit testing.

TDD is not a cure-all. In other words, it does not give the same results to a senior developer as to a junior developer, but it is useful for the entire team.
Let's look at some advantages of using TDD:

Code reuse: Every functionality is created with only the code necessary to pass the tests in the second stage (green), which lets you see whether other functions use the same code structure or parts of a specific function, helping you reuse code you wrote previously.
Teamwork is easier: It allows you to be confident in your team colleagues. Some architects or senior developers do not trust less experienced developers and need to check their code before it is committed, creating a bottleneck; TDD helps build trust in developers with less experience.
Increased communication between team colleagues: Communication is more fluent, and the team shares its knowledge about the project as reflected in the unit tests.
Avoids overdesigning the application in the first stages: As we said before, TDD gives you an overview of the application little by little, avoiding the creation of useless structures or patterns in your project that you might throw away in later stages.
Unit tests are the best documentation: The best way to get a good view of a specific functionality is to read its unit test; it helps you understand how the functionality works better than a prose description would.
Discovering more use cases at the design stage: With every test you create, you understand better how the functionality should work and all the possible states it can have.
Increases the feeling of a job well done: With every commit of your code, you will feel it was done properly, because the rest of the unit tests pass without errors, so you will not worry about having broken other functionalities.
Increases software quality: During the refactoring step, we spend our effort on making the code more efficient and maintainable, checking that the whole project still works properly after the changes.

TDD algorithm

The technical concepts and steps of the TDD algorithm are easy and clear, and executing it properly improves with practice. There are only three steps, called red, green, and refactor:

Red – Writing the unit tests

It is possible to write a test even when the code has not been written yet; you just need to think about whether it is possible to write a specification before implementing it. So, in this first step, you should consider that what you start writing is not so much a unit test as an example or specification of the functionality. In TDD, this first example or specification is not immovable; in other words, the unit test can be modified in the future. Before starting to write the first unit test, it is necessary to think about how the Software Under Test (SUT) is going to be. We need to think about what the SUT code will look like and how we would check that it works the way we want it to. The way TDD works drives us to first design whatever is most comfortable and clear, as long as it fits the requirements.

Green – Make the code work

Once the example is written, we have to code the minimum needed to make it pass the test; in other words, to set the unit test to green. It does not matter if the code is ugly and not optimized; that will be our task in the next step and in later iterations. In this step, the important thing is to write only the code necessary for the requirements, without anything unnecessary. This does not mean writing without thinking about the functionality, but thinking about it so as to be efficient.
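Continuing the hypothetical Calculator example from above, the green step could be as small as this; the class is again invented purely for illustration:

<?php
// Minimal, deliberately unpolished code: just enough to turn the test green.
class Calculator
{
    public function add($a, $b)
    {
        return $a + $b;
    }
}

No input validation, no extra operations: anything beyond making the current test pass belongs to a later test and a later iteration.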
It looks easy, but you will realize that you write extra code the first few times. If you concentrate on this step, new questions will appear about the SUT's behavior with different inputs, but you should be strong and avoid writing extra code for other functionalities related to the current one. Instead of coding them, take notes and convert them into functionalities in the next iterations.

Refactor – Eliminate redundancy

Refactoring is not the same as rewriting code. You should be able to change the design without changing the behavior. In this step, you should remove duplication from your code and check whether the code follows the principles of good practice, thinking about the efficiency, clarity, and future maintainability of the code. This part depends on the experience of each developer. The key to good refactoring is making it in small steps: to refactor a functionality, the best way is to change one small part and then execute all the available tests; if they pass, continue with the next small part, until you are happy with the result.

Behavior-driven development

Behavior-Driven Development (BDD) is a process that broadens the TDD technique and mixes it with other design ideas and business analysis provided to the developers, in order to improve software development. In BDD, we test scenarios and the behavior of the classes needed to meet those scenarios, which can be composed of many classes. It is very useful to use a DSL so that the customer, project owner, business analyst, and developers share a common language; the goal is to have a ubiquitous language.

What is BDD?

As we said before, BDD is an Agile technique based on TDD and ATDD that promotes collaboration across the entire project team. The goal of BDD is for the entire team to understand what the customer wants, and for the customer to know what the rest of the team understood from their specifications. Most of the time, when a project starts, the developers do not have the same point of view as the customer, and during the development process the customer realizes that maybe they did not explain the requirements well, or the developer did not understand them properly, which adds time for changing the code to meet the customer's needs. So, BDD means writing test cases in human language, using rules, or in a ubiquitous language, so that the customer and developers can understand them. It also defines a DSL for the tests.

How does it work?

It is necessary to define the features as user stories (we will explain what these are in the ATDD section of this article) and their acceptance criteria. Once the user story is defined, we have to focus on the possible scenarios, which describe the project's behavior for a concrete user or situation, using the DSL. The steps are: Given [context], When [event occurs], Then [outcome]. To sum up, the scenarios defined for a user story give us the acceptance criteria used to check whether the feature is done.

Acceptance test-driven development

Perhaps the most important methodology in a project is Acceptance Test-Driven Development (ATDD), also called Story Test-Driven Development (STDD); it is TDD, but on a different level. The acceptance (or customer) tests are the written criteria for the project to meet the business requirements that the customer demands. They are examples (like the examples in TDD) written by the project owner. They are the starting point of development for each iteration, the bridge between Scrum and agile development.
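To make the Given/When/Then form tangible, here is an invented scenario written in that style; the feature, items, and numbers are purely illustrative, and in the PHP ecosystem a BDD tool such as Behat can execute scenarios written this way:

Feature: Basket checkout
  Scenario: Discount applied to a large order
    Given a basket containing 5 items
    When the customer proceeds to checkout
    Then a 10% discount is applied to the total

Because the scenario reads as plain language, the customer, the analyst, and the developer can all agree on it before any code is written.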
In ATDD, we start the implementation of our project in a different way from traditional methodologies. The business requirements written in human language are replaced by executable tests agreed upon by team members and the customer. This is not about replacing the whole documentation, only a part of the requirements.

The advantages of using ATDD are the following:

Real examples and a common language that lets the entire team understand the domain
It allows identifying the domain rules properly
It is possible to know whether a user story is finished in each iteration
The workflow is established from the first steps
Development does not start until the tests have been defined and accepted by the team

ATDD algorithm

The algorithm of ATDD is like that of TDD, but it involves more people than just the developers. In ATDD, the tests for each story are written in a meeting that includes the project owners, developers, and QA technicians, because the entire team must understand what needs to be done and why, so they can all judge whether the code does what it should. The ATDD cycle consists of four steps: discuss, distill, develop, and demo.

Discuss

The starting point of the ATDD algorithm is the discussion. In this first step, the business has a meeting with the customer to clarify how the application should work, and the analyst creates the user stories from that conversation. The participants should also be able to explain the conditions of satisfaction of every user story, so they can be translated into examples. By the end of the meeting, the examples should be clear and concise, so that we get a list of examples for the user stories that covers all of the customer's needs, reviewed and understood by them. The entire team will also have a project overview, needed to understand the business value of each user story, and if a user story is too big, it can be divided into smaller ones, taking the first of them for the first iteration of this process.

Distill

High-level acceptance tests are written by the customer and the development team. In this step, the writing of the test cases derived from the examples of the discussion step begins, and the entire team can take part in the conversation, helping to clarify the information or specify the real needs behind it. The tests should cover all the examples discovered in the discussion step, and extra tests can be added bit by bit during this process as the functionality is understood better. At the end of this step, we obtain the necessary tests written in human language, so that the entire team (including the customer) understands what will be done in the next step. These tests can also be used as documentation.

Develop

In this step, the development of the acceptance test cases is begun by the development team and the project owner. The methodology followed in this step is the same as in TDD: the developers create a test and watch it fail (red), then write the minimum amount of code needed to pass it (green). Once the acceptance tests are green, the result should be verified and tested so it is ready to be delivered. During this process, the developers may find new scenarios that need to be added to the tests; if one requires a large amount of work, it can be pushed back to the user story. At the end of this step, we will have software that passes the acceptance tests, and perhaps more comprehensive tests as well.
Demo

The created functionality is shown by running the acceptance test cases and by manually exploring the features of the new functionality. After the demonstration, the team discusses whether the user story was done properly and meets the product owner's needs, and decides whether it can continue with the next story.

Tools

Now that we know more about TDD and BDD, it is time to look at a few tools you can use in your development workflow. There are a lot of tools available, but we will only explain the most commonly used ones.

Composer

Composer is a PHP tool used to manage software dependencies. You only need to declare the libraries your project needs, and Composer will manage them, installing and updating them when necessary. The tool has only a few requirements: if you have PHP 5.3.2+, you are ready to go. In the case of a missing requirement, Composer will warn you.

You could install this dependency manager on your development machine, but since we are using Docker, we are going to install it directly in our PHP-FPM containers. Installing Composer in Docker is very easy; you only need to add the following instruction to the Dockerfile:

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin --filename=composer

PHPUnit

Another tool we need for our project is PHPUnit, a unit testing framework. As before, we will add this tool to our PHP-FPM containers to keep our development machine clean. If you are wondering why we are not installing anything on our development machine except Docker, the answer is simple: keeping everything in the containers helps you avoid conflicts with other projects and gives you the flexibility to change versions without worrying too much.

Add the following RUN instruction to your PHP-FPM Dockerfile, and you will have the latest PHPUnit version installed and ready to use:

RUN curl -sSL https://phar.phpunit.de/phpunit.phar -o /usr/bin/phpunit && chmod +x /usr/bin/phpunit

Now that we have all our requirements, it is time to install our PHP framework and start doing some TDD. Later, we will continue updating our Docker environment with new tools. We chose Lumen for our example; please feel free to adapt all the examples to your favorite framework.

Our source code will live inside our containers, but at this point of development we do not want immutable containers. We want every change we make to our code to be instantly available in our containers, so we will use a container as a storage volume. To create a container that holds our source code and can be used as a storage volume, we only need to edit our docker-compose.yml and create one source container per microservice, as follows:

source_battle:
  image: nginx:stable
  volumes:
    - ../source/battle:/var/www/html
  command: "true"

The preceding piece of code defines a container named source_battle, which stores our battle source code (located at ../source/battle, relative to the docker-compose.yml path). Once our source container is available, we can edit each of our services and assign it the volume. For instance, we can add the following lines to the microservice_battle_fpm and microservice_battle_nginx container descriptions:

volumes_from:
  - source_battle

Our battle source code will be available in the source container at the path /var/www/html, and the remaining step to install Lumen is a simple Composer execution.
First, you need to make sure your infrastructure is up, with a simple command:

$ docker-compose up

The preceding command spins up our containers and outputs the log to standard IO. Now that we are sure everything is up and running, we need to enter our PHP-FPM containers and install Lumen. If you need to know the name assigned to each of your containers, you can run $ docker ps and copy the container name. As an example, we are going to enter the battle PHP-FPM container with the following command:

$ docker exec -it docker_microservice_battle_fpm_1 /bin/bash

The preceding command opens an interactive shell in your container, so you can do anything you want; let's install Lumen with a single command:

# cd /var/www/html && composer create-project --prefer-dist laravel/lumen .

Repeat the preceding commands for each of your microservices. Now you have everything ready to start writing unit tests and coding your application.

Summary

In this article, you learned about test-driven development, behavior-driven development, acceptance test-driven development, and PHPUnit.

Further resources on this subject:

Running Simpletest and PHPUnit
Understanding PHP basics
The Multi-Table Query Generator using phpMyAdmin and MySQL


Setting Up the Environment

Packt
04 Jan 2017
14 min read
In this article by Sohail Salehi, the author of the book Angular 2 Services, we start from the two fundamental questions you can ask when a new development tool is announced or launched: how different is the new tool from competing tools, and how much has it improved compared to its own previous versions? If we are going to invest our time in learning a new framework, common sense says we need to make sure we get a good return on our investment. There are many good articles out there about the pros and cons of each framework. To me, choosing Angular 2 boils down to three aspects:

The foundation: Angular is introduced and supported by Google and targeted at "evergreen" modern browsers. This means we, as developers, no longer need to look out for hacky solutions with each browser upgrade. The browser will always be updated to the latest version available, letting Angular worry about the new changes and leaving us out of it. This way we can focus more on our development tasks.

The community: Think about the community as an asset. The bigger the community, the wider the range of solutions to a particular problem. Looking at the statistics, the Angular community is still way ahead of the others, and the good news is that this community is leaning towards being more involved and contributing more on all levels.

The solution: If you look at previous JS frameworks, you will see most of them focusing on solving a problem for the browser first, and then for mobile devices. The argument for that could be simple: JS wasn't meant to be a language for mobile development. But things have changed to a great extent over recent years, and people now use mobile devices more than before. I personally believe a complex native mobile application, implemented in Java or C, is more performant than its equivalent implemented in JS. But not every mobile application needs to be complex, so business owners have started asking questions like: why do I need a machine gun to kill a fly?

With that question in mind, Angular 2 chose a different approach. It solves the performance challenges faced by mobile devices first. In other words, if your Angular 2 application is fast enough in mobile environments, then it is lightning fast inside the "evergreen" browsers. So that is what we are going to do in this article. First, we are going to learn about Angular 2 and the main problem it sets out to solve. Then we will talk a little bit about JavaScript history and the differences between Angular 2 and AngularJS 1. Introducing "The Sherlock Project" comes next, and finally we install the tools and libraries we need to implement this project.

Introducing Angular 2

The previous JS frameworks we have used already offer a fluid and easy workflow for building web applications rapidly. But what we struggle with as developers is the technical debt. In simple words, we can quickly build a web application with an impressive UI, but as the product keeps growing and the change requests start kicking in, we have to deal with maintenance nightmares that force a long list of maintenance tasks and heavy costs onto the business. Basically, the framework that used to be an amazing asset turns into a hairy liability (or technical debt, if you like). One of the major revamps in Angular 2 is the removal of a lot of modules, resulting in a lighter and faster core.
For example, if you are coming from an Angular 1.x background and don't see $scope or $log in the new version, don't panic: they are still available to you via other means, but there is no need to add overhead to the loading time if we are not going to use all the modules. Taking the modules out of the core results in better performance. So, to answer the question: one of the main issues Angular 2 addresses is performance, and this is done through a lot of structural changes.

There is no backward compatibility

We don't have backward compatibility. If you have Angular projects implemented with the previous version (v1.x), then depending on the complexity of the project, I wouldn't recommend migrating to the new version. Most likely, you would end up hammering a lot of changes into your migrated Angular 2 project, and in the end you would realize it was more cost effective to create a new project based on Angular 2 from scratch. Please keep in mind that the previous versions of AngularJS and Angular 2 share just a name; they have huge differences in their nature, and that is the price we pay for better performance.

Previous knowledge of AngularJS 1.x is not necessary

You might be wondering whether you need to know AngularJS 1.x before diving into Angular 2. Absolutely not. To be more specific, it might even be better if you have no experience using Angular at all, because your mind won't be preoccupied with obsolete rules and syntax. For example, we will see a lot of annotations in Angular 2, which might make you feel uncomfortable if you come from an Angular 1 background. Also, there are different types of dependency injection and child injectors, which are totally new concepts introduced in Angular 2. Moreover, there are new features for templating and data-binding that help improve loading time through asynchronous processing.

The relationship between ECMAScript, AtScript and TypeScript

The current edition of ECMAScript (ES5) is the one widely accepted by all well-known browsers; you can think of it as traditional JavaScript. Whatever code is written in ES5 can be executed directly in the browsers. The problem is that most modern JavaScript frameworks contain features which require more than traditional JavaScript capabilities. That is why ES6 was introduced. With this edition, and any future ECMAScript editions, we will be able to empower JavaScript with the features we need.

Now, the challenge is running the new code in current browsers. Most browsers nowadays recognize standard JavaScript code only, so we need a mediator to transform ES6 to ES5. That mediator is called a transpiler, and the technical term for the transformation is transpiling. There are many good transpilers out there, and you are free to choose whichever you feel comfortable with. Apart from TypeScript, you might want to consider Babel (babeljs.io) as your main transpiler. Google originally planned to use AtScript to implement Angular 2, but later they joined forces with Microsoft and introduced TypeScript as the official transpiler for Angular 2. For more details about JavaScript, ECMAScript, and how they evolved during the past decade, visit the following link: https://en.wikipedia.org/wiki/ECMAScript
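To make the idea of transpiling concrete, here is a rough, hand-written illustration (not the exact output of any particular transpiler) of an ES6 class rewritten in ES5 terms:

// ES6 source: a class with an arrow function
class Greeter {
  constructor(name) {
    this.name = name;
  }
  greet() {
    return ['hello', this.name].map(word => word.toUpperCase()).join(' ');
  }
}

// Roughly what a transpiler emits in ES5: a constructor function and prototype methods
function Greeter(name) {
  this.name = name;
}
Greeter.prototype.greet = function () {
  return ['hello', this.name].map(function (word) {
    return word.toUpperCase();
  }).join(' ');
};

Both versions behave the same way; the transpiler simply rewrites the newer syntax into constructs that older engines already understand.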
Setting up tools and getting started!

It is important to get the foundation right before installing anything else. Depending on your operating system, install Node.js and its package manager, npm. You can find a detailed installation manual on the official Node.js website: https://nodejs.org/en/

Make sure both Node.js and npm are installed globally (so they are accessible system-wide) and have the right permissions. At the time of writing, npm comes with Node.js out of the box, but in case that policy changes in the future, you can always download npm and follow the installation instructions at the following link: https://npmjs.com

The next stop is the IDE. Feel free to choose anything you are comfortable with; even a simple text editor will do. I am going to use WebStorm because of its embedded TypeScript syntax support and Angular 2 features, which speed up the development process. Moreover, it is lightweight enough to handle the project we are about to develop. You can download it from here: https://jetbrains.com/webstorm/download

We are going to use simple objects and arrays as placeholders for the data, but at some stage we will need to persist the data in our application. That means we need a database. We will use Google's Firebase Realtime Database for our needs. It is fast, it doesn't require downloading or setting up anything locally, and moreover it responds instantly to your requests and aligns perfectly with Angular's two-way data-binding. For now, just leave the database as it is; you don't need to create any connections or database objects.

Setting up the seed project

The final requirement to get started is an Angular 2 seed project to shape the initial structure of our project. If you look at public source code repositories, you can find several versions of these seeds, but I prefer the official one, for two reasons:

Custom-made seeds usually come with a personal twist, depending on the developer's taste. Although that can sometimes be a great help, since we are going to build everything from scratch and learn the fundamental concepts, they are not favorable for our project.
The official seeds are usually minimal. They are very slim and don't contain an overwhelming amount of third-party packages and environmental configurations.

Speaking of packages, you might be wondering what happened to the other JavaScript packages we need for this application, since we didn't install anything other than Node.js and npm. The next section will answer this question.

Setting up an Angular 2 project in WebStorm

Assuming you have installed WebStorm, fire up the IDE and check out a new project from a git repository. Set the repository URL to https://github.com/angular/angular2-seed.git and save the project in a folder called "the sherlock project". Hit the Clone button and open the project in WebStorm. Next, click on the package.json file and observe the dependencies. As you can see, this is a very lean seed with minimal configuration; it contains Angular 2 plus the required modules to get the project up and running.

The first thing we need to do is install all the required dependencies defined inside the package.json file. Right-click on the file and select the "run npm install" option. Installing the packages for the first time will take a while; in the meantime, explore the devDependencies section of package.json in the editor.
As you can see, we have all the required bells and whistles to start development and run the project, including TypeScript, a web server, and so on:

"devDependencies": {
  "@types/core-js": "^0.9.32",
  "@types/node": "^6.0.38",
  "angular2-template-loader": "^0.4.0",
  "awesome-typescript-loader": "^1.1.1",
  "css-loader": "^0.23.1",
  "raw-loader": "^0.5.1",
  "to-string-loader": "^1.1.4",
  "typescript": "^2.0.2",
  "webpack": "^1.12.9",
  "webpack-dev-server": "^1.14.0",
  "webpack-merge": "^0.8.4"
},

We also have some nifty scripts defined inside package.json that automate useful processes. For example, to start a web server and see the seed in action, we can simply execute the following command in a terminal:

$ npm run server

or we can right-click on the package.json file and select "Show npm Scripts". This opens another side tab in WebStorm showing all the scripts available inside the current file. Basically, all npm-related commands (which you can run from the command line) are located inside the package.json file under the scripts key. That means if you have a special need, you can create your own script and add it there; you can also modify the current ones (we will sketch an example of this in a moment).

Double-click on the start script and it will run the web server and load the seed application on port 3000. That means if you visit http://localhost:3000 you will see the seed application in your browser. If you are wondering where the port number comes from, look into the package.json file and examine the server key under the scripts section:

"server": "webpack-dev-server --inline --colors --progress --display-error-details --display-cached --port 3000 --content-base src",
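As a quick illustration of adding your own script, here is what a custom entry under the scripts key could look like; the script name and command are invented, just to show the shape of an entry:

"scripts": {
  "server": "webpack-dev-server --inline --colors --progress --display-error-details --display-cached --port 3000 --content-base src",
  "hello": "echo Hello from a custom npm script"
},

After saving the file, $ npm run hello would execute the new command just like the built-in ones.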
There is one more thing before we move on to the next topic. If you open any .ts file, WebStorm will ask whether you want it to transpile the code to JavaScript. Say No once and it will never ask again. We don't need WebStorm to transpile for us, because the start script already contains a transpiler that takes care of all the transformations.

Front-end developers versus back-end developers

Recently, I had an interesting conversation with a couple of colleagues of mine that is worth sharing here. One of them is an avid front-end developer and the other is a seasoned back-end developer. You guessed what I'm going to talk about: the debate between back-end and front-end developers, and which is the better half. We have seen these kinds of debates in development communities long enough, but the interesting thing, which in my opinion will show up more often in the months and years to come, is a fading border between the two ends (front-end and back-end).

It feels like the reason some traditional front-end developers are holding up their guard against the new changes in Angular 2 is not just that the syntax has changed, causing a steep learning curve, but mostly that they now have to deal with concepts which have existed natively in back-end development for many years. Conversely, the reason back-end developers are more open to the changes introduced in Angular 2 is mostly that these changes seem natural to them; annotations or child dependency injectors, for example, are not as big a deal to back-enders as they are to front-enders. I won't be surprised to see a noticeable shift in both camps in the years to come. We will probably see more back-enders who are willing to use Angular as a good candidate for some, if not all, of their back-end projects, and we will probably see more front-enders taking object-oriented concepts and best practices more seriously. Given that JavaScript was originally a functional scripting language, they will probably try to catch up with the other camp as fast as they can.

There is no comparison here, and I am not saying one camp has an advantage over the other. My point is that before modern front-end frameworks, JavaScript was open to being used in quick-and-dirty inline scripts to solve problems fast. While this is a very convenient approach, it causes serious problems when you want to scale a web application. Imagine the time and effort you might have to spend finding all of those dependent pieces of code and refactoring them to reflect a required change. When it comes to scalability, we need a full separation between layers, and that requires developers to move away from traditional JavaScript and embrace more OOP best practices in their day-to-day development tasks. That is what has been practiced in all modern front-end frameworks, and Angular 2 takes it to the next level by completely restructuring the model-view-* concept and opening doors to future features which will eventually be a native part of any web browser.

Introducing "The Sherlock Project"

During the course of this journey, we are going to dive into all the new Angular 2 concepts by implementing a semi-AI project called "The Sherlock Project". This project will basically be about collecting facts, evaluating and ranking them, and making a decision about how truthful they are. To achieve this goal, we will implement a couple of services and inject them into the project wherever they are needed. We will discuss one aspect of the project at a time and focus on one related service; at the end, all the services will come together to act as one big project.

Summary

This article covered a brief introduction to Angular 2. We saw where Angular 2 comes from, what problems it solves, and how we can benefit from it. We talked about the project we will be creating with Angular 2. We also saw which other tools are needed for our project and how to install and configure them. Finally, we cloned an Angular 2 seed project, installed all its dependencies, and started a web server to examine the application output in a browser.

Further resources on this subject:

Introduction to JavaScript
Third Party Libraries
API with MongoDB and Node.js


Getting Started with Aurelia

Packt
03 Jan 2017
28 min read
In this article by Manuel Guilbault, the author of the book Learning Aurelia, we will see how modern a framework Aurelia is. The brainchild of Rob Eisenberg, father of Durandal, it is based on cutting-edge web standards and built on modern software architecture concepts and ideas, offering a powerful toolset and an awesome developer experience.

This article will teach you how Aurelia works and how you can use it to build real-world applications from A to Z. In fact, while reading the article and following the examples, that's exactly what you will do. You will start by setting up your development environment and creating the project; then I will walk you through concepts such as routing, templating, data-binding, automated testing, internationalization, and bundling. We will discuss application design, communication between components, and the integration of third parties. We will cover every topic most modern, real-world single-page applications require.

In this first article, we will start by defining some terms that will be used throughout the article. We will quickly cover some core Aurelia concepts. Then we will take a look at the core Aurelia libraries and see how they interact with each other to form a complete, full-featured framework. We will also see what tools are needed to develop an Aurelia application and how to install them. Finally, we will start creating our application and explore its global structure.

Terminology

As this article is about a JavaScript framework, JavaScript plays a central role in it. If you are not completely up to date with the terminology, which has changed a lot in the last few years, let me clear things up.

JavaScript (or JS) is a dialect, or implementation, of the ECMAScript (ES) standard. It is not the only implementation, but it definitely is the most popular. In this article, I will use the JS acronym when talking about actual JavaScript code or code files, and the ES acronym when talking about an actual version of the ECMAScript standard.

Like everything in computer programming, the ECMAScript standard evolves over time. At the moment of writing, the latest version is ES2016, published in June 2016. It was originally called ES7, but TC39, the committee drafting the specification, decided to change its approval and naming model, hence the new name. The previous version, named ES2015 (ES6) before the naming model changed, was published in June 2015 and was a big step forward compared to the version before it. That older version, named ES5, was published in 2009 and was the most recent version for six years, so it is now widely supported by all modern browsers. If you have been writing JavaScript in the last five years, you should be familiar with ES5.

When they decided to change the ES naming model, the TC39 committee also chose to change the specification's approval model. This decision was made in an effort to publish new versions of the language at a quicker pace. As such, new features are being drafted and discussed by the community, and must pass through an approval process. Each year, a new version of the specification is released, comprising the features that were approved during the year. Those upcoming features are often referred to as ESNext. This term encompasses language features that are approved, or at least pretty close to approval, but not yet published. It is reasonable to expect that most, or at least some, of those features will be published in the next language version.
As ES2015 and ES2016 are still recent, they are not fully supported by most browsers. Moreover, ESNext features typically have no browser support at all. These multiple names can be pretty confusing. To make things simpler, I will stick with the official names: ES5 for the previous version, ES2016 for the current version, and ESNext for the next version.

Before going any further, you should make yourself familiar with the features introduced by ES2016 and with ESNext decorators, if you are not already; we will use these features throughout the article. If you don't know where to start with ES2015 and ES2016, you can find a great overview of the new features on Babel's website: https://babeljs.io/docs/learn-es2015/

As for ESNext decorators, Addy Osmani, a Google engineer, explained them pretty well: https://medium.com/google-developers/exploring-es7-decorators-76ecb65fb841

For further reading, you can take a look at the feature proposals (decorators, class property declarations, async functions, and so on) for future ES versions: https://github.com/tc39/proposals

Core concepts

Before we start getting our hands dirty, there are a couple of core concepts that need to be explained.

Conventions

First, Aurelia relies a lot on conventions. Most of those conventions are configurable and can be changed if they don't suit your needs. Each time we encounter a convention throughout the article, we will see how to change it whenever possible.

Components

Components are first-class citizens of Aurelia. What is an Aurelia component? It is a pair made of an HTML template, called the view, and a JavaScript class, called the view-model. The view is responsible for displaying the component, while the view-model controls its data and behavior. Typically, the view sits in an .html file and the view-model in a .js file. By convention, those two files are bound through a naming rule: they must be in the same directory and have the same name (except for their extension, of course). Here's an example of an empty component with no data, no behavior, and a static template:

component.js
export class MyComponent {}

component.html
<template>
  <p>My component</p>
</template>

A component must comply with two constraints: a view's root HTML element must be the template element, and the view-model class must be exported from the .js file. As a rule of thumb, the only function that should be exported by a component's JS file is the view-model class. If multiple classes or functions are exported, Aurelia iterates over the file's exported functions and classes and uses the first it finds as the view-model. However, since the enumeration order of an object's keys is not deterministic as per the ES specification, nothing guarantees that the exports will be iterated in the order they were declared, so Aurelia may pick the wrong class as the component's view-model.

The only exception to that rule is some view resources. In addition to its view-model class, a component's JS file can export things like value converters, binding behaviors, and custom attributes; basically, any view resource that can't have a view, which excludes custom elements.

Components are the main building blocks of an Aurelia application. Components can use other components; they can be composed to form bigger or more complex components. Thanks to the slot mechanism, you can design a component's template so parts of it can be replaced or customized.
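To show the view/view-model pairing with some actual data and behavior, here is a slightly fuller, hypothetical component; the file names, the name property, and the greet method are invented for illustration, using Aurelia's standard binding syntax:

greeting.js
export class Greeting {
  constructor() {
    this.name = 'Aurelia'; // data exposed to the view
  }

  greet() {
    // behavior triggered from the view
    console.log(`Hello, ${this.name}!`);
  }
}

greeting.html
<template>
  <input type="text" value.bind="name">
  <button click.delegate="greet()">Greet ${name}</button>
</template>

Here, value.bind keeps the input and the name property in sync, while click.delegate wires the button to the view-model's method and ${name} interpolates the property into the text.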
Architecture

Aurelia is not your average monolithic framework. It is a set of loosely coupled libraries with well-defined abstractions. Each of its core libraries solves a specific and well-defined problem common to single-page applications. Aurelia leverages dependency injection and a plugin architecture, so you can discard parts of the framework and replace them with third-party or even your own implementations. Or you can just throw away features you don't need, so your application is lighter and faster to load. The core Aurelia libraries can be divided into multiple categories. Let's have a quick glance.

Core features

The following libraries are mostly independent and can be used by themselves if needed. They each provide a focused set of features and are at the core of Aurelia:

aurelia-dependency-injection: A lightweight yet powerful dependency injection container. It supports multiple lifetime management strategies and child containers (a short usage sketch follows these lists).
aurelia-logging: A simple logger, supporting log levels and pluggable consumers.
aurelia-event-aggregator: A lightweight message bus, used for decoupled communication.
aurelia-router: A client-side router, supporting static, parameterized, or wildcard routes, and child routers.
aurelia-binding: An adaptive and pluggable data-binding library.
aurelia-templating: An extensible HTML templating engine.

Abstraction layers

The following libraries mostly define interfaces and abstractions in order to decouple concerns and enable extensibility and pluggable behaviors. This does not mean that the libraries in the previous section do not expose their own abstractions besides their features; some of them do. But the libraries described in this section have almost no purpose other than defining abstractions:

aurelia-loader: An abstraction defining an interface for loading JS modules, views, and other resources.
aurelia-history: An abstraction defining an interface for the history management used by routing.
aurelia-pal: An abstraction for platform-specific capabilities. It is used to abstract away the platform the code is running on, such as a browser or Node.js. Indeed, this means that some Aurelia libraries can be used on the server side.

Default implementations

The following libraries are the default implementations of abstractions exposed by libraries from the two previous sections:

aurelia-loader-default: An implementation of the aurelia-loader abstraction for SystemJS and require-based loaders.
aurelia-history-browser: An implementation of the aurelia-history abstraction based on standard browser hash change and push state mechanisms.
aurelia-pal-browser: An implementation of the aurelia-pal abstraction for the browser.
aurelia-logging-console: An implementation of the aurelia-logging abstraction for the browser console.

Integration layers

The following libraries' purpose is to integrate some of the core libraries together. They provide interface implementations and adapters, along with default configuration or behaviors:

aurelia-templating-router: An integration layer between the aurelia-router and aurelia-templating libraries.
aurelia-templating-binding: An integration layer between the aurelia-templating and aurelia-binding libraries.
aurelia-framework: An integration layer that brings together all of the core Aurelia libraries into a full-featured framework.
aurelia-bootstrapper: An integration layer that brings default configuration for aurelia-framework and handles application startup.
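As a small taste of how these libraries feel in practice, here is a hedged sketch of constructor injection with the dependency injection container; the UserService and ApiClient classes are invented for illustration:

import {inject} from 'aurelia-dependency-injection';

// A hypothetical low-level dependency.
export class ApiClient {
  fetchUsers() {
    return Promise.resolve([{ name: 'Ada' }, { name: 'Grace' }]);
  }
}

// The @inject decorator tells the container which dependencies
// to resolve and pass to the constructor.
@inject(ApiClient)
export class UserService {
  constructor(api) {
    this.api = api;
  }

  getUsers() {
    return this.api.fetchUsers();
  }
}

When Aurelia instantiates UserService, the container resolves an ApiClient instance (creating it if needed) and injects it automatically.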
Additional tools and plugins

If you take a look at Aurelia's organization page on GitHub at https://github.com/aurelia, you will see many more repositories. The libraries listed in the previous sections are just the core of Aurelia, the tip of the iceberg, if I may. Many other libraries exposing additional features or integrating third-party libraries are available on GitHub, some of them developed and maintained by the Aurelia team, many others by the community. I strongly suggest that you explore the Aurelia ecosystem by yourself after reading this article, as it is rapidly growing, and the Aurelia community is doing some very exciting things.

Tooling

In the following section, we will go over the tools needed to develop our Aurelia application.

Node.js and NPM

Aurelia being a JavaScript framework, it just makes sense that its development tools are also in JavaScript. This means that the first thing you need to do when getting started with Aurelia is to install Node.js and NPM in your development environment.

Node.js is a server-side runtime environment based on Google's V8 JavaScript engine. It can be used to build complete websites or web APIs, but it is also used by a lot of front-end projects for development and build tasks, such as transpiling, linting, and minimizing. NPM is the de facto package manager for Node.js. It uses http://www.npmjs.com as its main repository, where all available packages are stored. It is bundled with Node.js, so if you install Node.js on your computer, NPM will also be installed.

To install Node.js and NPM in your development environment, simply go to https://nodejs.org/ and download the proper installer for your environment. If Node.js and NPM are already installed, I strongly recommend making sure you use at least version 3 of NPM, as older versions may have issues collaborating with some of the other tools we'll use. If you are not sure which version you have, you can check it by running the following command in a console:

> npm -v

If Node.js and NPM are already installed but you need to upgrade NPM, you can do so by running the following command:

> npm install npm -g

The Aurelia CLI

Even though an Aurelia application can be built using any package manager, build system, or bundler you want, the preferred tool for managing an Aurelia project is the command line interface, a.k.a. the CLI. At the moment of writing, the CLI only supports NPM as its package manager and requirejs as its module loader and bundler, probably because they are both the most mature and stable. It also uses Gulp 4 behind the scenes as its build system. CLI-based applications are always bundled when running, even in development environments. This means that the performance of an application during development will be very close to what it should be like in production. This also means that bundling is a recurring concern, as new external libraries must be added to a bundle in order to be available at runtime. In this article, we'll stick with the preferred solution and use the CLI. There are, however, two appendices at the end of the article covering alternatives: the first for Webpack, the second for SystemJS with JSPM.

Installing the CLI

The CLI being a command line tool, it should be installed globally, by opening a console and executing the following command:

> npm install -g aurelia-cli

You may have to run this command with administrator privileges, depending on your environment.
If you already have it installed, make sure you have the latest version by running the following command:

> au -v

You can then compare the version this command outputs with the latest version number tagged on GitHub at https://github.com/aurelia/cli/releases/latest. If you don't have the latest version, you can simply update it by running the following command:

> npm install -g aurelia-cli

If for some reason the command to update the CLI fails, simply uninstall and reinstall it:

> npm uninstall aurelia-cli -g
> npm install aurelia-cli -g

This will reinstall the latest version.

The project skeletons

As an alternative to the CLI, project skeletons are available at https://github.com/aurelia/skeleton-navigation. This repository contains multiple sample projects based on different technologies, such as SystemJS with JSPM, Webpack, ASP.NET Core, or TypeScript. Prepping a skeleton is easy: you simply need to download and unzip the archive from GitHub or clone the repository locally. Each directory contains a distinct skeleton. Depending on which one you choose, you'll need to install different tools and run setup commands; generally, the instructions in the skeleton's README.md file are pretty clear.

Our application

Creating an Aurelia application using the CLI is extremely simple. You just need to open a console in the directory where you want to create your project and run the following command:

> au new

The CLI's project creation process will start. The first thing the CLI asks for is the name you want to give to your project. This name will be used both to create the directory the project will live in and to set some values, such as the name property in the package.json file it creates. Let's name our application learning-aurelia.

Next, the CLI asks what technologies we want to use to develop our application. Here, you can select a custom transpiler such as TypeScript and a CSS preprocessor such as LESS or SASS. A transpiler, the little cousin of the compiler, translates one programming language into another; in our case, it will be used to transform ESNext code, which may not be supported by all browsers, into ES5, which is understood by all modern browsers. The default choice is to use ESNext and plain CSS, and this is what we will choose.

The following steps simply recap the choices we made and ask for confirmation to create the project, then ask whether we want to install the project's dependencies, which it does by default. At this point, the CLI creates the project and runs an npm install behind the scenes. Once it completes, our application is ready to roll.

The directory you ran au new in will now contain a new directory named learning-aurelia; this subdirectory contains the Aurelia project. We'll explore it a bit in the following section.

The CLI is likely to change and offer more options in the future, as there are plans to support additional tools and technologies; don't be surprised if you see different or new options when you run it. The path we followed to create our project uses Visual Studio Code as the default code editor. If you want to use another editor, such as Atom, Sublime, or WebStorm, which are the other supported options at the moment of writing, you simply need to select option #3, "custom transpilers, CSS pre-processors and more", at the beginning of the creation process, then select the default answer for each question until asked to select your default code editor.
The rest of the creation process stays pretty much the same. Note that if you select a different code editor, your own experience may differ from the examples and screenshots you'll find in this article, as Visual Studio Code is the editor that was used during writing.

If you are a TypeScript developer, you may want to create a TypeScript project. I however recommend that you stick with plain ESNext, as every example and code sample in this article has been written in JS. Trying to follow along with TypeScript may prove cumbersome, although you can try if you like the challenge.

The structure of a CLI-based project

If you open the newly created project in a code editor, you should see the following file structure:

node_modules: The standard NPM directory containing the project's dependencies
src: The directory containing the application's source code
test: The directory containing the application's automated test suites
.babelrc: The configuration file for Babel, which is used by the CLI to transpile our application's ESNext code into ES5 so most browsers can run it
index.html: The HTML page that loads and launches the application
karma.conf.js: The configuration file for Karma, which is used by the CLI to run unit tests
package.json: The standard Node.js project file

The directory contains other files, such as .editorconfig, .eslintrc.json, and .gitignore, that are of little interest for learning Aurelia, so we won't cover them. In addition to all of this, you should see a directory named aurelia_project. This directory contains things related to building and bundling the application using the CLI. Let's see what it is made of.

The aurelia.json file

The first thing of importance in this directory is a file named aurelia.json. This file contains the configuration used by the CLI to test, build, and bundle the application. It can change drastically depending on the choices you make during the project creation process. There are very few scenarios where this file needs to be modified by hand; adding an external library to the application is one of them. Apart from that, this file should almost never be updated manually.

The first interesting section in this file is the platform:

"platform": {
  "id": "web",
  "displayName": "Web",
  "output": "scripts",
  "index": "index.html"
},

This section tells the CLI that the output directory where the bundles are written is named scripts, and that the HTML index page, which loads and launches the application, is the index.html file.

The next interesting part is the transpiler section:

"transpiler": {
  "id": "babel",
  "displayName": "Babel",
  "fileExtension": ".js",
  "options": {
    "plugins": [
      "transform-es2015-modules-amd"
    ]
  },
  "source": "src/**/*.js"
},

This section tells the CLI to transpile the application's source code using Babel. It also defines additional plugins, besides those already configured in .babelrc, to be used when transpiling the source code. In this case, it adds a plugin that outputs transpiled files as AMD-compliant modules, for requirejs compatibility.

Tasks

The aurelia_project directory contains a subdirectory named tasks, which holds various Gulp tasks to build, run, and test the application. These tasks can be executed using the CLI. The first thing you can try is to run au without any argument:

> au

This lists all available commands, along with their available arguments.
This list includes built-in commands, such as new, which we used already, or generate, which we'll see in the next section, along with the Gulp tasks declared in the tasks directory. To run one of those tasks, simply execute au with the name of the task as its first argument:

> au build

This command will run the build task, which is defined in aurelia_project/tasks/build.js. This task transpiles the application code using Babel, executes the CSS and markup preprocessors if any, and bundles the code in the scripts directory. After running it, you should see two new files in scripts: app-bundle.js and vendor-bundle.js. Those are the actual files that will be loaded by index.html when the application is launched. The former contains all application code (both JS files and templates), while the latter contains all external libraries used by the application, including the Aurelia libraries.

You may have noticed a command named run in the list of available commands. This task is defined in aurelia_project/tasks/run.js, and executes the build task internally before spawning a local HTTP server to serve the application:

> au run

By default, the HTTP server will listen for requests on port 9000, so you can open your favorite browser and go to http://localhost:9000/ to see the default, demo application in action.

If you ever need to change the port number on which the development HTTP server runs, you just need to open aurelia_project/tasks/run.js and locate the call to the browserSync function. The object passed to this function contains a property named port. You can change its value accordingly.

The run task can accept a --watch switch:

> au run --watch

If this switch is present, the task will keep monitoring the source code and, when any code file changes, will rebuild the application and automatically refresh the browser. This can be pretty useful during development.

Generators

The CLI also offers a way to generate code, using classes defined in the aurelia_project/generators directory. At the moment of writing, there are generators to create custom attributes, custom elements, binding behaviors, value converters, and even tasks and generators (yes, there is a generator to generate generators). If you are not familiar with Aurelia at all, most of those concepts (value converters, binding behaviors, and custom attributes and elements) probably mean nothing to you. Don't worry.

A generator can be executed using the built-in generate command:

> au generate attribute

This command will run the custom attribute generator. It will ask for the name of the attribute to generate, then create it in the src/resources/attributes directory.

If you take a look at this generator, which is found in aurelia_project/generators/attribute.js, you'll see that the file exports a single class named AttributeGenerator. This class uses the @inject decorator to declare various classes from the aurelia-cli library as dependencies and has instances of them injected in its constructor. It also defines an execute method, which is called by the CLI when running the generator. This method leverages the services provided by aurelia-cli to interact with the user and generate code files.

The exact generator names available by default are attribute, element, binding-behavior, value-converter, task, and generator.

Environments

CLI-based applications support environment-specific configuration values. By default, the CLI supports three environments—development, staging, and production.
The configuration object for each of these environments can be found in the dev.js, stage.js, and prod.js files located in the aurelia_project/environments directory. A typical environment file looks like this:

aurelia_project/environments/dev.js

export default {
  debug: true,
  testing: true
};

By default, the environment files are used to enable debug logging and test-only templating features in the Aurelia framework, depending on the environment (we'll see this in a later section). The environment objects can, however, be enhanced with whatever properties you may need. Typically, they could be used to configure different URLs for a backend, depending on the environment.

Adding a new environment is simply a matter of adding a file for it in the aurelia_project/environments directory. For example, you can add a local environment by creating a local.js file in the directory.

Many tasks (basically build and all tasks using it, such as run and test) expect an environment to be specified using the env argument:

> au build --env prod

Here, the application will be built using the prod.js environment file. If no env argument is provided, dev will be used by default.

When executed, the build task just copies the proper environment file to src/environment.js before running the transpiler and bundling the output. This means that src/environment.js should never be modified by hand, as it will be automatically overwritten by the build task.

The structure of an Aurelia application

The previous section described the files and folders that are specific to a CLI-based project. However, some parts of the project are pretty much the same whatever the build system and package manager are. These are the more global topics we will see in this section.

The hosting page

The first entry point of an Aurelia application is the HTML page loading and hosting it. By default, this page is named index.html and is located at the root of the project. The default hosting page looks like this:

index.html

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Aurelia</title>
  </head>
  <body aurelia-app="main">
    <script src="scripts/vendor-bundle.js" data-main="aurelia-bootstrapper"></script>
  </body>
</html>

When this page loads, the script element inside the body element loads the scripts/vendor-bundle.js file, which contains requirejs itself along with definitions for all external libraries and references to app-bundle.js. When loading, requirejs checks the data-main attribute and uses its value as the entry point module. Here, aurelia-bootstrapper kicks in.

The bootstrapper first looks in the DOM for elements with the aurelia-app attribute; we can find such an attribute on the body element in the default index.html file. This attribute identifies elements acting as application viewports. The bootstrapper uses the attribute's value as the name of the application's main module, then locates the module, loads it, and renders the resulting DOM inside the element, overwriting any previous content. The application is now running.

Even though the default application doesn't illustrate this scenario, it is possible for an HTML file to host multiple Aurelia applications. It just needs to contain multiple elements with an aurelia-app attribute, each element referring to its own main module.

The main module

By convention, the main module referred to by the aurelia-app attribute is named main, and as such is located under src/main.js.
This file is expected to export a configure function, which will be called by the Aurelia bootstrapping process and will be passed a configuration object used to configure and boot the framework. By default, the main configure function looks like this:

src/main.js

import environment from './environment';

export function configure(aurelia) {
  aurelia.use
    .standardConfiguration()
    .feature('resources');

  if (environment.debug) {
    aurelia.use.developmentLogging();
  }

  if (environment.testing) {
    aurelia.use.plugin('aurelia-testing');
  }

  aurelia.start().then(() => aurelia.setRoot());
}

The configure function starts by telling Aurelia to use its default configuration, and to load the resources feature. It also conditionally loads the development logging plugin based on the environment's debug property, and the testing plugin based on the environment's testing property. This means that, by default, both plugins will be loaded in development, while none will be loaded in production. Lastly, the function starts the framework, then attaches the root component to the DOM.

The start method returns a Promise, whose resolution triggers the call to setRoot. If you are not familiar with Promises in JavaScript, I strongly suggest that you look them up before going any further, as they are a core concept in Aurelia.

The root component

At the root of any Aurelia application is a single component, which contains everything within the application. By convention, this root component is named app. It is composed of two files—app.html, which contains the template to render the component, and app.js, which contains its view-model class. In the default application, the template is extremely simple:

src/app.html

<template>
  <h1>${message}</h1>
</template>

This template is made of a single h1 element, which will contain the value of the view-model's message property as text, thanks to string interpolation.

The app view-model looks like this:

src/app.js

export class App {
  constructor() {
    this.message = 'Hello World!';
  }
}

This file simply exports a class having a message property containing the string "Hello World!".

This component will be rendered when the application starts. If you run the application and navigate to it in your favorite browser, you'll see an h1 element containing "Hello World!".

You may notice that there is no reference to Aurelia in this component's code. In fact, the view-model is just plain ESNext, and it can be used by Aurelia as is. Of course, we're going to leverage many Aurelia features in many of our view-models later on, so most of our view-models will in fact have dependencies on Aurelia libraries, but the key point here is that you don't have to use any Aurelia library in your view-models if you don't want to, because Aurelia is designed to be as unintrusive as possible.

Conventional bootstrapping

It is possible to leave the aurelia-app attribute empty in the hosting page:

<body aurelia-app>

In such a case, the bootstrapping process is much simpler. Instead of loading a main module containing a configure function, the bootstrapper will simply use the framework's default configuration and load the app component as the application root. This can be a simpler way to get started for a very simple application, as it negates the need for the src/main.js file (you can simply delete it). However, it means that you are stuck with the default framework configuration. You cannot load features or plugins.
For most real-life applications, you'll need to keep the main module, which means specifying it as the aurelia-app attribute's value.

Customizing Aurelia configuration

The configure function of the main module receives a configuration object, which is used to configure the framework:

src/main.js

//Omitted snippet…
aurelia.use
  .standardConfiguration()
  .feature('resources');

if (environment.debug) {
  aurelia.use.developmentLogging();
}

if (environment.testing) {
  aurelia.use.plugin('aurelia-testing');
}
//Omitted snippet…

Here, the standardConfiguration() method is a simple helper that encapsulates the following:

aurelia.use
  .defaultBindingLanguage()
  .defaultResources()
  .history()
  .router()
  .eventAggregator();

This is the default Aurelia configuration. It loads the default binding language, the default templating resources, the browser history plugin, the router plugin, and the event aggregator. This is the default set of features that a typical Aurelia application uses. All those plugins will be covered at one point or another throughout this article. All those plugins are optional, except the binding language, which is needed by the templating engine. If you don't need one, just don't load it.

In addition to the standard configuration, some plugins are loaded depending on the environment's settings. When the environment's debug property is true, Aurelia's console logger is loaded using the developmentLogging() method, so traces and errors can be seen in the browser console. When the environment's testing property is true, the aurelia-testing plugin is loaded using the plugin method. This plugin registers some resources that are useful when debugging components.

The last line in the configure function starts the application and displays its root component, which is named app by convention. You may, however, bypass the convention and pass the name of your root component as the first argument to setRoot, if you named it otherwise:

aurelia.start().then(() => aurelia.setRoot('root'));

Here, the root component is expected to sit in the src/root.html and src/root.js files.

Summary

Getting started with Aurelia is very easy, thanks to the CLI. Installing the tooling and creating an empty project is simply a matter of running a couple of commands, and it typically takes more time waiting for the initial npm install to complete than doing the actual setup. We'll go over dependency injection and logging, and we'll start building our application by adding components and configuring routes to navigate between them.

Resources for Article:

Further resources on this subject:
Introduction to JavaScript [article]
Breaking into Microservices Architecture [article]
Create Your First React Element [article]

Building an extensible Chat Bot using JavaScript & YAML

Andrea Falzetti
03 Jan 2017
6 min read
In this post, I will share with you my experience in designing and coding an extensible chat bot using YAML and JavaScript. I will show you that it's not always required to use AI, ML, or NLP to make a great bot. This won't be a step-by-step guide. My goal is to give you an overview of a project like this and inspire you to build your own.

Getting Started

The concept I will offer you consists of creating a collection of YAML scripts that will represent all the possible conversations that the bot can handle. When a new conversation starts, the YAML code gets converted to a JavaScript object that we can easily manage in Node.js. I have used WebSockets, implemented with socket.io, to transmit the messages back and forth.

First, we need to agree on a set of commands that can be used to create the YAML scripts; let's start with some essentials:

messages: To send an array of messages from the bot to the user
input: To ask the user for some data
next: The action we want to perform next within the script
script: The next script that will follow in the conversation with the user
when, empty, not-empty: conditional statements

YAML script example

A sample script will look like the following:

Link to the gist

Then we need to implement those commands in our bot engine.

The Bot engine

Using the WebSockets, I send the bot the name of the script that I want to play; the bot engine loads it and converts it to a JavaScript object. If the client doesn't know which script to play first, it can call a default script called "hello" that will introduce the bot. In each script, the first action to run will be the one with index 0. As per the example above, the first thing the bot will do is send two messages to the user.

With the command next, we jump to the next block, index = 1. Again, we send another message, immediately followed by an input request.

At this point, the client receives the input request and allows the user to type the answer. Once the value is submitted, we send the data back to the bot, which appends the information to a key-value store, where all data received from the user is accessible via a key (for example, user_name).

Using the when statement, we define conditional branches of the conversation. When dealing with data validation, this is often the case. In the example, we want to make sure to get a valid name from the user, so if the value received is empty, we jump back in the script and ask for it again, continuing with the following step only when the name is valid. Finally, the bot will send a message, this time containing a variable previously received and stored in the key-value store.

In the next script, we will see how to handle multiple choices and buttons, to allow the user to make a decision.

Link to the gist

The conversation will start with a message from the bot, followed by an input request with the input type set as buttons_single_select, which in the client translates to "display multiple buttons" using an array of options received with the input request.

When the user clicks on one of the options, the UI sends the user's choice back to the bot, which will match it with one of the existing branches. Having found the correct branch, the bot engine will look for the next command to run; in this case, it is another input request, this time expecting to receive an image from the client.

Once the file has been sent, the conversation ends just after the bot sends a last message confirming the file got uploaded successfully.
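Since the original gists are linked rather than inlined, here is a minimal sketch of what the first script described above could look like. The block indices, the key name, and the {user_name} interpolation syntax are illustrative assumptions based on the command set defined earlier, not the author's original code:

# block 0: greet the user, then move on
0:
  messages:
    - "Hi, I'm a demo bot!"
    - "Let's get to know each other."
  next: 1
# block 1: ask for a name and validate it
1:
  messages:
    - "What's your name?"
  input:
    key: user_name
  when:
    empty:
      next: 1
    not-empty:
      next: 2
# block 2: reuse the stored value from the key-value store
2:
  messages:
    - "Nice to meet you, {user_name}!"

Each top-level index is one block of the conversation; next points at the block to run once the current one completes, and the when branches decide where to go based on the validated input.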
Using YAML gives you the flexibility to build many different conversations, and also allows you to easily implement A/B testing of your conversation.

Implementing the bot engine with JavaScript / Node.js

To build something able to play the YAML scripts above, you need to iterate over the script commands until you find an explicit end command. It's very important to keep in memory the index of the current command in progress, so that you can move on as soon as the current task completes. When you meet a new command, you should pass it to a resolver that knows what each command does and is able to run the specific portion of code or function. Additionally, you will need a function that listens to the input received from the clients, validating and saving it into a key-value store.

Extending the command set

This approach allows you to create a large set of commands that do different things, including queries, Ajax requests, API calls to external services, and so on. You can combine your command with a when statement so that a callback or promise can continue down the appropriate branch depending on the result you get.

Conclusion

If you are wondering where the demo images come from, they are screenshots of a React view built with the help of devices.css, a CSS package that provides the flat shape of iPhone, Android, and Windows phones in different colors using pure CSS only. I have built this view to test the bot, using socket.io-client for the WebSockets and React for the UI. This is not just a proof of concept; I am working on a project where we have implemented this logic. I invite you to review it, think about it, and leave feedback. Thanks!

About the author

Andrea Falzetti is an enthusiastic full-stack developer based in London. He has been designing and developing web applications for over 5 years. He is currently focused on Node.js, React, microservices architecture, serverless, conversational UI, chat bots, and machine learning. He is currently working at Activate Media, where his role is to estimate, design, and lead the development of web and mobile platforms.

Getting Started with Spring Boot

Packt
03 Jan 2017
17 min read
In this article by Greg Turnquist, author of the book Learning Spring Boot – Second Edition, we will cover the following topics:

Introduction
Creating a bare project using http://start.spring.io
Seeing how to run our app straight inside our IDE with no standalone containers

(For more resources related to this topic, see here.)

Perhaps you've heard about Spring Boot? It has only cultivated the most popular explosion in software development in years. Clocking millions of downloads per month, the community has exploded since its debut in 2013. I hope you're ready for some fun, because we are going to take things to the next level as we use Spring Boot to build a social media platform. We'll explore its many valuable features, all the way from tools designed to speed up development efforts to production-ready support as well as cloud-native features. Despite some rapid-fire demos you might have caught on YouTube, Spring Boot isn't just for quick demos. Built atop the de facto standard toolkit for Java, the Spring Framework, Spring Boot will help us build this social media platform with lightning speed AND stability.

In this article, we'll get a quick kick-off with Spring Boot using the Java programming language. Maybe that makes you chuckle? People have been dinging Java for years as being slow, bulky, and not the means for agile shops. Together, we'll see how that is not the case. At any time, if you're interested in a more visual medium, feel free to check out my Learning Spring Boot [Video] at https://www.packtpub.com/application-development/learning-spring-boot-video.

What is step #1 when we get underway with a project? We visit Stack Overflow and look for an example project to build a project! Seriously, the amount of time spent adapting another project's build file, picking dependencies, and filling in other details about our project adds up. At the Spring Initializr (http://start.spring.io), we can enter minimal details about our app, pick our favorite build system, the version of Spring Boot we wish to use, and then choose our dependencies off a menu. Click the Download button, and we have a free-standing, ready-to-run application.

In this article, let's take a quick test drive and build a small web app. We can start by picking Gradle from the dropdown. Then, select 1.4.1.BUILD-SNAPSHOT as the version of Spring Boot we wish to use. Next, we need to pick our application's coordinates:

Group - com.greglturnquist.learningspringboot
Artifact - learning-spring-boot

Now comes the fun part. We get to pick the ingredients for our application like picking off a delicious menu. If we start typing, for example, "Web", into the Dependencies box, we'll see several options appear. To see all the available options, click on the Switch to the full version link toward the bottom. There are lots of overrides, such as switching from JAR to WAR, or using an older version of Java. You can also pick Kotlin or Groovy as the primary language for your application. For starters, in this day and age, there is no reason to use anything older than Java 8. And JAR files are the way to go; WAR files are only needed when applying Spring Boot to an old container. To build our social media platform, we need a few ingredients, as shown:

Web (embedded Tomcat + Spring MVC)
WebSocket
JPA (Spring Data JPA)
H2 (embedded SQL data store)
Thymeleaf template engine
Lombok (to simplify writing POJOs)

The following diagram shows an overview of these ingredients:

With these items selected, click on Generate Project.
There are LOTS of other tools that leverage this site. For example, IntelliJ IDEA lets you create a new project inside the IDE, giving you the same options shown here. It invokes the website's REST API and imports your new project. You can also interact with the site via cURL or any other REST-based tool.

Now let's unpack that ZIP file and see what we've got:

a build.gradle build file
a Gradle wrapper, so there's no need to install Gradle
a LearningSpringBootApplication.java application class
an application.properties file
a LearningSpringBootApplicationTests.java test class

We built an empty Spring Boot project. Now what? Before we sink our teeth into writing code, let's take a peek at the build file. It's quite terse, but carries some key bits. Starting from the top:

buildscript {
  ext {
    springBootVersion = '1.4.1.BUILD-SNAPSHOT'
  }
  repositories {
    mavenCentral()
    maven { url "https://repo.spring.io/snapshot" }
    maven { url "https://repo.spring.io/milestone" }
  }
  dependencies {
    classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
  }
}

This contains the basis for our project:

springBootVersion shows us we are using Spring Boot 1.4.1.BUILD-SNAPSHOT
The Maven repositories it will pull from are listed next
Finally, we see the spring-boot-gradle-plugin, a critical tool for any Spring Boot project

The first piece, the version of Spring Boot, is important. That's because Spring Boot comes with a curated list of 140 third-party library versions extending well beyond the Spring portfolio and into some of the most commonly used libraries in the Java ecosystem. By simply changing the version of Spring Boot, we can upgrade all these libraries to newer versions known to work together.

There is an extra project, the Spring IO Platform (https://spring.io/platform), which includes an additional 134 curated versions, bringing the total to 274.

The repositories aren't as critical, but it's important to add milestones and snapshots if fetching a library not released to Maven central or hosted on some vendor's local repository. Thankfully, the Spring Initializr does this for us based on the version of Spring Boot selected on the site.

Finally, we have the spring-boot-gradle-plugin (and there is a corresponding spring-boot-maven-plugin for Maven users). This plugin is responsible for linking Spring Boot's curated list of versions with the libraries we select in the build file. That way, we don't have to specify the version number. Additionally, this plugin hooks into the build phase and bundles our application into a runnable über JAR, also known as a shaded or fat JAR.

Java doesn't provide a standardized way to load nested JAR files into the classpath. Spring Boot provides the means to bundle up third-party JARs inside an enclosing JAR file and properly load them at runtime. Read more at http://docs.spring.io/spring-boot/docs/1.4.1.BUILD-SNAPSHOT/reference/htmlsingle/#executable-jar.

With an über JAR in hand, we only need to put it on a thumb drive, and carry it to another machine, to a hundred virtual machines in the cloud or your data center, or anywhere else, and it simply runs wherever we can find a JVM.
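For example, building and running that über JAR from the command line might look like this (a sketch assuming the Gradle wrapper and Gradle's default output location; the exact JAR name depends on the version property in build.gradle, and on macOS/Linux the wrapper is invoked as ./gradlew):

> gradlew build
> java -jar build/libs/learning-spring-boot-0.0.1-SNAPSHOT.jar

The second command works on any machine with a JVM; no container installation is needed.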
Peeking a little further down in build.gradle, we can see the plugins that are enabled by default:

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'spring-boot'

The java plugin indicates the various tasks expected for a Java project
The eclipse plugin helps generate project metadata for Eclipse users
The spring-boot plugin is where the actual spring-boot-gradle-plugin is activated

An up-to-date copy of IntelliJ IDEA can read a plain old Gradle build file just fine, without extra plugins.

Which brings us to the final ingredient used to build our application: dependencies.

Spring Boot starters

No application is complete without specifying dependencies. A valuable facet of Spring Boot is its virtual packages. These are published packages that don't contain any code, but instead simply list other dependencies. The following list shows all the dependencies we selected on the Spring Initializr site:

dependencies {
  compile('org.springframework.boot:spring-boot-starter-data-jpa')
  compile('org.springframework.boot:spring-boot-starter-thymeleaf')
  compile('org.springframework.boot:spring-boot-starter-web')
  compile('org.springframework.boot:spring-boot-starter-websocket')
  compile('org.projectlombok:lombok')
  runtime('com.h2database:h2')
  testCompile('org.springframework.boot:spring-boot-starter-test')
}

If you'll notice, most of these packages are Spring Boot starters:

spring-boot-starter-data-jpa pulls in Spring Data JPA, Spring JDBC, Spring ORM, and Hibernate
spring-boot-starter-thymeleaf pulls in the Thymeleaf template engine along with Spring Web and the Spring dialect of Thymeleaf
spring-boot-starter-web pulls in Spring MVC, Jackson JSON support, embedded Tomcat, and Hibernate's JSR-303 validators
spring-boot-starter-websocket pulls in Spring WebSocket and Spring Messaging

These starter packages allow us to quickly grab the bits we need to get up and running. Spring Boot starters have gotten so popular that lots of other third-party library developers are crafting their own.

In addition to starters, we have three extra libraries:

Project Lombok makes it dead simple to define POJOs without getting bogged down in getters, setters, and other details.
H2 is an embedded database allowing us to write tests, noodle out solutions, and get things moving before getting involved with an external database.
spring-boot-starter-test pulls in Spring Boot Test, JSON Path, JUnit, AssertJ, Mockito, Hamcrest, JSON Assert, and Spring Test, all within test scope.

The value of this last starter, spring-boot-starter-test, cannot be overstated. With a single line, the most powerful test utilities are at our fingertips, allowing us to write unit tests, slice tests, and full-blown our-app-inside-embedded-Tomcat tests. It's why this starter is included in all projects without checking a box on the Spring Initializr site.

Now to get things off the ground, we need to shift focus to the tiny bit of code written for us by the Spring Initializr.

Running a Spring Boot application

The fabulous http://start.spring.io website created a tiny class, LearningSpringBootApplication, as shown in the following code:

package com.greglturnquist.learningspringboot;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class LearningSpringBootApplication {

  public static void main(String[] args) {
    SpringApplication.run(
      LearningSpringBootApplication.class, args);
  }
}

This tiny class is actually a fully operational web application!
The @SpringBootApplication annotation tells Spring Boot, when launched, to scan recursively for Spring components inside this package and register them. It also tells Spring Boot to enable autoconfiguration, a process where beans are automatically created based on classpath settings, property settings, and other factors. Finally, it indicates that this class itself can be a source for Spring bean definitions.

It holds a public static void main(), a simple method to run the application. There is no need to drop this code into an application server or servlet container. We can just run it straight up, inside our IDE. The amount of time saved by this feature, over the long haul, adds up fast.

SpringApplication.run() points Spring Boot at the leap-off point. In this case, this very class. But it's possible to run other classes.

This little class is runnable. Right now! In fact, let's give it a shot.

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::  (v1.4.1.BUILD-SNAPSHOT)

2016-09-18 19:52:44.214: Starting LearningSpringBootApplication on ret...
2016-09-18 19:52:44.217: No active profile set, falling back to defaul...
2016-09-18 19:52:44.513: Refreshing org.springframework.boot.context.e...
2016-09-18 19:52:45.785: Bean 'org.springframework.transaction.annotat...
2016-09-18 19:52:46.188: Tomcat initialized with port(s): 8080 (http)
2016-09-18 19:52:46.201: Starting service Tomcat
2016-09-18 19:52:46.202: Starting Servlet Engine: Apache Tomcat/8.5.5
2016-09-18 19:52:46.323: Initializing Spring embedded WebApplicationCo...
2016-09-18 19:52:46.324: Root WebApplicationContext: initialization co...
2016-09-18 19:52:46.466: Mapping servlet: 'dispatcherServlet' to [/]
2016-09-18 19:52:46.469: Mapping filter: 'characterEncodingFilter' to:...
2016-09-18 19:52:46.470: Mapping filter: 'hiddenHttpMethodFilter' to: ...
2016-09-18 19:52:46.470: Mapping filter: 'httpPutFormContentFilter' to...
2016-09-18 19:52:46.470: Mapping filter: 'requestContextFilter' to: [/*]
2016-09-18 19:52:46.794: Building JPA container EntityManagerFactory f...
2016-09-18 19:52:46.809: HHH000204: Processing PersistenceUnitInfo [ name: default ...]
2016-09-18 19:52:46.882: HHH000412: Hibernate Core {5.0.9.Final}
2016-09-18 19:52:46.883: HHH000206: hibernate.properties not found
2016-09-18 19:52:46.884: javassist
2016-09-18 19:52:46.976: HCANN000001: Hibernate Commons Annotations {5...
2016-09-18 19:52:47.169: Using dialect: org.hibernate.dialect.H2Dialect
2016-09-18 19:52:47.358: HHH000227: Running hbm2ddl schema export
2016-09-18 19:52:47.359: HHH000230: Schema export complete
2016-09-18 19:52:47.390: Initialized JPA EntityManagerFactory for pers...
2016-09-18 19:52:47.628: Looking for @ControllerAdvice: org.springfram...
2016-09-18 19:52:47.702: Mapped "{[/error]}" onto public org.springfra...
2016-09-18 19:52:47.703: Mapped "{[/error],produces=[text/html]}" onto...
2016-09-18 19:52:47.724: Mapped URL path [/webjars/**] onto handler of...
2016-09-18 19:52:47.724: Mapped URL path [/**] onto handler of type [c...
2016-09-18 19:52:47.752: Mapped URL path [/**/favicon.ico] onto handle...
2016-09-18 19:52:47.778: Cannot find template location: classpath:/tem...
2016-09-18 19:52:48.229: Registering beans for JMX exposure on startup
2016-09-18 19:52:48.278: Tomcat started on port(s): 8080 (http)
2016-09-18 19:52:48.282: Started LearningSpringBootApplication in 4.57...
Scrolling through the output, we can see several things:

The banner at the top gives us a readout of the version of Spring Boot. (BTW, you can create your own ASCII art banner by placing either banner.txt or banner.png in src/main/resources/)
Embedded Tomcat is initialized on port 8080, indicating it's ready for web requests
Hibernate is online with the H2 dialect enabled
A few Spring MVC routes are registered, such as /error, /webjars, a favicon.ico, and a static resource handler
And the wonderful Started LearningSpringBootApplication in 4.571 seconds message

Spring Boot uses embedded Tomcat, so there's no need to install a container on our target machine. Non-web apps don't even require Apache Tomcat. The JAR itself is the new container that allows us to stop thinking in terms of old-fashioned servlet containers. Instead, we can think in terms of apps. All these factors add up to maximum flexibility in application deployment.

How does Spring Boot use embedded Tomcat, among other things? As mentioned earlier, it has autoconfiguration, meaning it has Spring beans that are created based on different conditions. When Spring Boot sees Apache Tomcat on the classpath, it creates an embedded Tomcat instance along with several beans to support that. When it spots Spring MVC on the classpath, it creates view resolution engines, handler mappers, and a whole host of other beans needed to support that, letting us focus on coding custom routes. With H2 on the classpath, it spins up an in-memory, embedded SQL data store. Spring Data JPA will cause Spring Boot to craft an EntityManager along with everything else needed to start speaking JPA, letting us focus on defining repositories.

At this stage, we have a running web application, albeit an empty one. There are no custom routes and no means to handle data. But we can add some real fast. Let's draft a simple REST controller:

package com.greglturnquist.learningspringboot;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HomeController {

  @GetMapping
  public String greeting(@RequestParam(required = false,
      defaultValue = "") String name) {
    return name.equals("") ? "Hey!" : "Hey, " + name + "!";
  }
}

Let's examine this tiny REST controller in detail:

The @RestController annotation indicates that we don't want to render views, but instead write the results straight into the response body.
@GetMapping is Spring's shorthand annotation for @RequestMapping(method = RequestMethod.GET, ...). In this case, it defaults the route to "/".
Our greeting() method has one argument: @RequestParam(required = false, defaultValue = "") String name. It indicates that this value can be requested via an HTTP query (?name=Greg), the query isn't required, and in case it's missing, supply an empty string.
Finally, we are returning one of two messages, depending on whether or not name is empty, using Java's classic ternary operator.

If we re-launch the LearningSpringBootApplication in our IDE, we'll see a new entry in the console.

2016-09-18 20:13:08.149: Mapped "{[],methods=[GET]}" onto public java....

We can then ping our new route in the browser at http://localhost:8080 and http://localhost:8080?name=Greg. Try it out!

That's nice, but since we picked Spring Data JPA, how hard would it be to load some sample data and retrieve it from another route? (Spoiler alert: not hard at all.)
We can start out by defining a simple Chapter entity to capture book details, as shown in the following code:

package com.greglturnquist.learningspringboot;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import lombok.Data;

@Data
@Entity
public class Chapter {

  @Id @GeneratedValue
  private Long id;

  private String name;

  private Chapter() {
    // No one but JPA uses this.
  }

  public Chapter(String name) {
    this.name = name;
  }
}

This little POJO lets us capture the details about the chapter of a book, as follows:

The @Data annotation from Lombok will generate getters, setters, a toString() method, a constructor for all required fields (those marked final), an equals() method, and a hashCode() method.
The @Entity annotation flags this class as suitable for storing in a JPA data store.
The id field is marked with JPA's @Id and @GeneratedValue annotations, indicating this is the primary key, and that writing new rows into the corresponding table will create a PK automatically.
Spring Data JPA will by default create a table named CHAPTER with two columns, ID and NAME.
The key field is name, which is populated by the publicly visible constructor.
JPA requires a no-arg constructor, so we have included one, but marked it private so no one but JPA may access it.

To interact with this entity and its corresponding table in H2, we could dig in and start using the autoconfigured EntityManager supplied by Spring Boot. But why do that, when we can declare a repository-based solution? To do so, we'll create an interface defining the operations we need. Check out this simple interface:

package com.greglturnquist.learningspringboot;

import org.springframework.data.repository.CrudRepository;

public interface ChapterRepository
    extends CrudRepository<Chapter, Long> {
}

This declarative interface creates a Spring Data repository as follows:

CrudRepository extends Repository, a Spring Data Commons marker interface that signals Spring Data to create a concrete implementation while also capturing domain information.
CrudRepository, also from Spring Data Commons, has some pre-defined CRUD operations (save, delete, deleteAll, findOne, findAll).
It specifies the entity type (Chapter) and the type of the primary key (Long).
Spring Data JPA will automatically wire up a concrete implementation of our interface.

Spring Data doesn't engage in code generation. Code generation has a sordid history of being out of date at some of the worst times. Instead, Spring Data uses proxies and other mechanisms to support all these operations. Never forget - the code you don't write has no bugs.

With Chapter and ChapterRepository defined, we can now pre-load the database, as shown in the following code:

package com.greglturnquist.learningspringboot;

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LoadDatabase {

  @Bean
  CommandLineRunner init(ChapterRepository repository) {
    return args -> {
      repository.save(
        new Chapter("Quick start with Java"));
      repository.save(
        new Chapter("Reactive Web with Spring Boot"));
      repository.save(
        new Chapter("...and more!"));
    };
  }
}

This class will be automatically scanned in by Spring Boot and run in the following way:

@Configuration marks this class as a source of beans.
@Bean indicates that the return value of init() is a Spring Bean. In this case, a CommandLineRunner.
Spring Boot runs all CommandLineRunner beans after the entire application is up and running. This bean definition is requesting a copy of the ChapterRepository.
Using Java 8's ability to coerce the args -> {} lambda function into a CommandLineRunner, we are able to write several save() operations, pre-loading our data.

With this in place, all that's left is to write a REST controller to serve up the data!

package com.greglturnquist.learningspringboot;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ChapterController {

  private final ChapterRepository repository;

  public ChapterController(ChapterRepository repository) {
    this.repository = repository;
  }

  @GetMapping("/chapters")
  public Iterable<Chapter> listing() {
    return repository.findAll();
  }
}

This controller is able to serve up our data as follows:

@RestController indicates this is another REST controller.
Constructor injection is used to automatically load it with a copy of the ChapterRepository. With Spring, if there is only one constructor, there is no need to include an @Autowired annotation.
@GetMapping tells Spring that this is the place to route /chapters calls. In this case, it returns the results of the findAll() call found in CrudRepository.

If we re-launch our application and visit http://localhost:8080/chapters, we can see our pre-loaded data served up as a nicely formatted JSON document:

It's not very elaborate, but this small collection of classes has helped us quickly define a slice of functionality. And if you'll notice, we spent zero effort configuring JSON converters, route handlers, embedded settings, or any other infrastructure. Spring Boot is designed to let us focus on functional needs, not low-level plumbing.

Summary

In this article, we introduced the Spring Boot concept in brief, and rapidly crafted a Spring MVC application using the Spring stack on top of Apache Tomcat, with little configuration from our end.

Resources for Article:

Further resources on this subject:
Writing Custom Spring Boot Starters [article]
Modernizing our Spring Boot app [article]
Introduction to Spring Framework [article]

Tools in TypeScript

Packt
02 Jan 2017
14 min read
In this article by Nathan Rozentals, author of the book Mastering TypeScript, Second Edition, you will learn how to build enterprise-ready, industrial web applications using TypeScript and leading JavaScript frameworks. In this article, we will cover the following topics:

What is TypeScript?
The benefits of TypeScript

(For more resources related to this topic, see here.)

What is TypeScript?

TypeScript is both a language and a set of tools to generate JavaScript. It was designed by Anders Hejlsberg at Microsoft (the designer of C#) as an open source project to help developers write enterprise-scale JavaScript. TypeScript generates JavaScript – it's as simple as that. Instead of requiring a completely new runtime environment, TypeScript-generated JavaScript can reuse all of the existing JavaScript tools, frameworks, and wealth of libraries that are available for JavaScript. The TypeScript language and compiler, however, bring the development of JavaScript closer to a more traditional object-oriented experience.

EcmaScript

JavaScript as a language has been around for a long time, and is also governed by a language feature standard. The language defined in this standard is called ECMAScript, and each JavaScript interpreter must deliver functions and features that conform to this standard. The definition of this standard helped the growth of JavaScript and the web in general, and allowed websites to render correctly on many different browsers on many different operating systems. The ECMAScript standard was published in 1999 and is known as ECMA-262, third edition.

With the popularity of the language, and the explosive growth of internet applications, the ECMAScript standard needed to be revised and updated. This process resulted in a draft specification for ECMAScript, called the fourth edition. Unfortunately, this draft suggested a complete overhaul of the language, and was not well received. Eventually, leaders from Yahoo, Google, and Microsoft tabled an alternate proposal which they called ECMAScript 3.1. This proposal was numbered 3.1, as it was a smaller feature set of the third edition, and sat between editions three and four of the standard.

The proposal for language changes tabled earlier was eventually adopted as the fifth edition of the standard, and was called ECMAScript 5. The ECMAScript fourth edition was never published, but it was decided to merge the best features of both the fourth edition and the 3.1 feature set into a sixth edition named ECMAScript Harmony.

The TypeScript compiler has a parameter that can switch between different versions of the ECMAScript standard. TypeScript currently supports ECMAScript 3, ECMAScript 5, and ECMAScript 6. When the compiler runs over your TypeScript, it will generate compile errors if the code you are attempting to compile is not valid for that standard. The team at Microsoft has committed to follow the ECMAScript standards in any new versions of the TypeScript compiler, so as new editions are adopted, the TypeScript language and compiler will follow suit.

The benefits of TypeScript

To give you a flavor of the benefits of TypeScript (and this is by no means the full list), let's have a very quick look at some of the things that TypeScript brings to the table:

A compilation step
Strong or static typing
Type definitions for popular JavaScript libraries
Encapsulation
Private and public member variable decorators

Compiling

One of the most frustrating things about JavaScript development is the lack of a compilation step.
JavaScript is an interpreted language, and therefore needs to be run in order to test that it is valid. Every JavaScript developer will tell horror stories of hours spent trying to find bugs in their code, only to find that they have missed a stray closing brace }, or a simple comma, or even a double quote " where there should have been a single quote '. Even worse, the real headaches arrive when you misspell a property name, or unwittingly reassign a global variable.

TypeScript will compile your code, and generate compilation errors where it finds these sorts of syntax errors. This is obviously very useful, and can help to highlight errors before the JavaScript is run. In large projects, programmers will often need to do large code merges, and with today's tools doing automatic merges, it is surprising how often the compiler will pick up these types of errors.

While tools to do this sort of syntax checking, such as JSLint, have been around for years, it is obviously beneficial to have these tools integrated into your IDE. Using TypeScript in a continuous integration environment will also fail a build completely when compilation errors are found, further protecting your programmers against these types of bugs.

Strong typing

JavaScript is not strongly typed. It is a language that is very dynamic, as it allows objects to change their properties and behavior on the fly. As an example of this, consider the following code:

var test = "this is a string";
test = 1;
test = function(a, b) {
  return a + b;
}

On the first line of the preceding code, the variable test is bound to a string. It is then assigned a number, and finally is redefined to be a function that expects two parameters. Traditional object-oriented languages, however, will not allow the type of a variable to change, hence they are called strongly typed languages.

While all of the preceding code is valid JavaScript and could be justified, it is quite easy to see how this could cause runtime errors during execution. Imagine that you were responsible for writing a library function to add two numbers, and then another developer inadvertently reassigned your function to subtract these numbers instead. These sorts of errors may be easy to spot in a few lines of code, but it becomes increasingly difficult to find and fix these as your code base and your development team grows.

Another feature of strong typing is that the IDE you are working in understands what type of variable you are working with, and can bring better autocomplete or Intellisense options to the fore.

TypeScript's syntactic sugar

TypeScript introduces a very simple syntax to check the type of an object at compile time. This syntax has been referred to as syntactic sugar, or more formally, type annotations. Consider the following TypeScript code:

var test: string = "this is a string";
test = 1;
test = function(a, b) {
  return a + b;
}

Note that on the first line of this code snippet, we have introduced a colon : and a string keyword between our variable and its assignment. This type annotation syntax means that we are setting the type of our variable to be of type string, and that any code that does not adhere to these rules will generate a compile error. Running the preceding code through the TypeScript compiler will generate two errors:

hello.ts(3,1): error TS2322: Type 'number' is not assignable to type 'string'.
hello.ts(4,1): error TS2322: Type '(a: any, b: any) => any' is not assignable to type 'string'.

The first error is fairly obvious.
We have specified that the variable test is a string, and therefore attempting to assign a number to it will generate a compile error. The second error is similar to the first, and is, in essence, saying that we cannot assign a function to a string. In this way, the TypeScript compiler introduces strong or static typing to your JavaScript code, giving you all of the benefits of a strongly typed language. TypeScript is therefore described as a superset of JavaScript.

Type definitions for popular JavaScript libraries

As we have seen, TypeScript has the ability to annotate JavaScript, and bring strong typing to the JavaScript development experience. But how do we strongly type existing JavaScript libraries? The answer is surprisingly simple—by creating a definition file. TypeScript uses files with a .d.ts extension as a sort of header file, similar to languages such as C++, to superimpose strong typing on existing JavaScript libraries. These definition files hold information that describes each available function and/or variable, along with their associated type annotations.

Let's have a quick look at what a definition would look like. As an example, I have lifted a function from the popular Jasmine unit testing framework, called describe:

var describe = function(description, specDefinitions) {
  return jasmine.getEnv().describe(description, specDefinitions);
};

Note that this function has two parameters—description and specDefinitions. But JavaScript does not tell us what sort of variables these are. We would need to have a look at the Jasmine documentation to figure out how to call this function. If we head over to http://jasmine.github.io/2.0/introduction.html, we will see an example of how to use this function:

describe("A suite", function () {
  it("contains spec with an expectation", function () {
    expect(true).toBe(true);
  });
});

From the documentation, then, we can easily see that the first parameter is a string, and the second parameter is a function. But there is nothing in JavaScript that forces us to conform to this API. As mentioned before, we could easily call this function with two numbers, or inadvertently switch the parameters around, sending a function first, and a string second. We will obviously start getting runtime errors if we do this, but TypeScript, using a definition file, can generate compile-time errors before we even attempt to run this code.

Let's have a look at a piece of the jasmine.d.ts definition file:

declare function describe(
  description: string,
  specDefinitions: () => void
): void;

This is the TypeScript definition for the describe function. Firstly, declare function describe tells us that we can use a function called describe, but that the implementation of this function will be provided at runtime. Clearly, the description parameter is strongly typed to a string, and the specDefinitions parameter is strongly typed to be a function that returns void. TypeScript uses the parentheses () syntax to declare functions, and the arrow syntax to show the return type of the function. So () => void is a function that does not return anything. Finally, the describe function itself will return void.
If our code were to try and pass in a function as the first parameter, and a string as the second parameter (clearly breaking the definition of this function), as shown in the following example:

describe(() => { /* function body */ }, "description");

TypeScript will generate the following error:

hello.ts(11,11): error TS2345: Argument of type '() => void' is not assignable to parameter of type 'string'.

This error is telling us that we are attempting to call the describe function with invalid parameters, and it clearly shows that TypeScript will generate errors if we attempt to use external JavaScript libraries incorrectly.

DefinitelyTyped

Soon after TypeScript was released, Boris Yankov started a GitHub repository to house definition files, called DefinitelyTyped (http://definitelytyped.org). This repository has now become the first port of call for integrating external libraries into TypeScript, and it currently holds definitions for over 1,600 JavaScript libraries.

Encapsulation

One of the fundamental principles of object-oriented programming is encapsulation—the ability to define data, as well as a set of functions that can operate on that data, into a single component. Most programming languages have the concept of a class for this purpose, providing a way to define a template for data and related functions.

Let's first take a look at a simple TypeScript class definition:

class MyClass {
  add(x, y) {
    return x + y;
  }
}

var classInstance = new MyClass();
var result = classInstance.add(1, 2);
console.log(`add(1,2) returns ${result}`);

This code is pretty simple to read and understand. We have created a class, named MyClass, with a simple add function. To use this class, we simply create an instance of it, and call the add function with two arguments.

JavaScript, unfortunately, does not have a class statement, but instead uses functions to reproduce the functionality of classes. Encapsulation through classes is accomplished by either using the prototype pattern, or by using the closure pattern. Understanding prototypes and the closure pattern, and using them correctly, is considered a fundamental skill when writing enterprise-scale JavaScript.

A closure is essentially a function that refers to independent variables. This means that variables defined within a closure function remember the environment in which they were created. This provides JavaScript with a way to define local variables, and provide encapsulation. Writing the MyClass definition in the preceding code, using a closure in JavaScript, would look something like the following:

var MyClass = (function () {
  // the self-invoking function is the
  // environment that will be remembered
  // by the closure
  function MyClass() {
    // MyClass is the inner function,
    // the closure
  }
  MyClass.prototype.add = function (x, y) {
    return x + y;
  };
  return MyClass;
}());

var classInstance = new MyClass();
var result = classInstance.add(1, 2);
console.log("add(1,2) returns " + result);

We start with a variable called MyClass, and assign it to a function that is executed immediately (note the }()); syntax near the bottom of the closure definition). This syntax is a common way to write JavaScript in order to avoid leaking variables into the global namespace. We then define a new function named MyClass, and return this new function to the outer calling function. We then use the prototype keyword to inject a new function into the MyClass definition. This function is named add and takes two parameters, returning their sum.
The last few lines of the code show how to use this closure in JavaScript. Create an instance of the closure type, and then execute the add function. Running this code will log add(1,2) returns 3 to the console, as expected.

Looking at the JavaScript code versus the TypeScript code, we can easily see how simple the TypeScript looks compared to the equivalent JavaScript. Remember how we mentioned that JavaScript programmers can easily misplace a brace {, or a bracket (? Have a look at the last line in the closure definition—}());. Getting one of these brackets or braces wrong can take hours of debugging to find.

Public and private accessors

A further object-oriented principle that is used in encapsulation is the concept of data hiding, that is, the ability to have public and private variables. Private variables are meant to be hidden from the user of a particular class, as these variables should only be used by the class itself. Inadvertently exposing these variables can easily cause runtime errors.

Unfortunately, JavaScript does not have a native way of declaring variables private. While this functionality can be emulated using closures, a lot of JavaScript programmers simply use the underscore character _ to denote a private variable. At runtime though, if you know the name of a private variable, you can easily assign a value to it. Consider the following JavaScript code:

var MyClass = (function() {
  function MyClass() {
    this._count = 0;
  }
  MyClass.prototype.countUp = function() {
    this._count++;
  }
  MyClass.prototype.getCountUp = function() {
    return this._count;
  }
  return MyClass;
}());

var test = new MyClass();
test._count = 17;
console.log("countUp : " + test.getCountUp());

The MyClass variable is actually a closure with a constructor function, a countUp function, and a getCountUp function. The variable _count is supposed to be a private member variable that is used only within the scope of the closure. Using the underscore naming convention gives the user of this class some indication that the variable is private, but JavaScript will still allow you to manipulate the variable _count. Take a look at the second last line of the code snippet. We are explicitly setting the value of _count to 17, which is allowed by JavaScript, but not desired by the original creator of the class. The output of this code would be countUp : 17.

TypeScript, however, introduces the public and private keywords that can be used on class member variables. Trying to access a class member variable that has been marked as private will generate a compile-time error. As an example of this, the JavaScript code above can be written in TypeScript, as follows:

class CountClass {
  private _count: number;
  constructor() {
    this._count = 0;
  }
  countUp() {
    this._count++;
  }
  getCount() {
    return this._count;
  }
}

var countInstance = new CountClass();
countInstance._count = 17;

On the second line of our code snippet, we have declared a private member variable named _count. Again, we have a constructor, a countUp, and a getCount function. If we compile this file, the compiler will generate an error:

hello.ts(39,15): error TS2341: Property '_count' is private and only accessible within class 'CountClass'.

This error is generated because we are trying to access the private variable _count in the last line of the code. The TypeScript compiler, therefore, is helping us to adhere to public and private accessors by generating a compile error when we inadvertently break this rule.
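As a quick sketch of the fix, using the CountClass defined above, external code should go through the class's public methods rather than touching _count directly:

var countInstance = new CountClass();
countInstance.countUp();  // increments _count through the public API
console.log("countUp : " + countInstance.getCount());  // logs "countUp : 1"

This compiles cleanly, because only code inside CountClass ever reads or writes the private _count variable.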
Summary

In this article, we took a quick look at what TypeScript is and what benefits it can bring to the JavaScript development experience.

Resources for Article: Further resources on this subject: Introducing Object Oriented Programming with TypeScript [article] Understanding Patterns and Architectures in TypeScript [article] Writing SOLID JavaScript code with TypeScript [article]
Setting up Microsoft Bot Framework Dev Environment

Packt
30 Dec 2016
8 min read
In this article by Kishore Gaddam, author of the book Building Bots with Microsoft Bot Framework, we introduced what the Microsoft Bot Framework is and how it helps in the development of bots. (For more resources related to this topic, see here.) Over the past several decades, the corporate, government, and business world has experienced several waves of IT architecture foundations, moving from mainframes, to minicomputers, to distributed PCs, to the Internet, to social/mobile, and now to the Cloud/Internet of Things (IoT) stack. We call this the sixth wave of corporate IT, and like its predecessors, Cloud and IoT technologies are causing significant disruption and displacement, even while driving new levels of productivity. Each architecture focused on key business processes and supported killer technology applications to drive new levels of value. Very soon we will be looking at an enormous networked interconnection of everyday machines to one another, as well as to humans. Machine-to-machine-to-human connectivity will have a profound impact on the consumer and corporate IT experience. As these machines become social and talk to us, we have an enormous opportunity to greatly enhance their value proposition through improved product quality, customer experience, and lowered cost of operations. A heightened consumer expectation for more personal and real-time interactions is driving business to holistically embrace the next wave of technology innovation, such as Cloud, IoT, and bots, to boost business performance. In this age of billions of connected devices, there is a need for a technology where our apps can talk back: bots. Bots that have specific purposes and talk to any device, any app, or anyone; bots that live in the cloud; bots that we can talk to via any communication channel, such as email, text, voice, chat, and others. Bots can go where no apps have gone before when it comes to machine-to-machine-to-human connectivity. And to make this happen, we will need a whole new platform: a platform for conversations. Conversation as a Service (CaaS) Messaging apps in general are becoming a second home screen for many people, acting as their entry point into the internet. And where the youngins are, the brands will follow. Companies are coming to messaging apps as bots and apps, to offer everything from customer service to online shopping and banking. Conversations are shaping up to be the next major human-computer interface. Thanks to advances in natural language processing and machine learning, the tech is finally getting fast and accurate enough to be viable. Imagine a platform where language is the new UI layer. When we talk about conversation as a platform, there are three parts: There are people talking to people, with Skype Translator as an example, where people can communicate across languages. There is presence, that is, being able to enhance a conversation through the ability to be present and interact remotely. And there are personal assistants and the bots. Think of bots as the new mechanism that you can converse with. Instead of looking through multiple mobile apps or pages and pages of websites, you can call on any application as a bot within the conversational canvas. Bots are the new apps, and digital assistants are the meta apps. This way, intelligence is infused into all our interactions. This leads us to the Microsoft Bot Framework, which is a comprehensive offering from Microsoft for building and deploying high-quality bots that your users can interact with using Conversation as a Platform (CaaP). 
This is a framework that lets you build and connect intelligent bots. The idea is that they interact naturally wherever your users are talking, such as Skype, Slack, Facebook Messenger, text/SMS, and others. Basically, any kind of channel that you use today as a human being to talk to other people can be used to talk to bots, all using natural language. The Microsoft Bot Framework is a Microsoft-operated CaaP service and an open source SDK. The Bot Framework is one of the many tools Microsoft is offering for building a complete bot. Other tools include the Language Understanding Intelligent Service (LUIS), the Speech APIs, Microsoft Azure, the Cortana Intelligence Suite, and many more. Your Bot The Microsoft Bot Builder SDK is one of three main components of the Microsoft Bot Framework. First, you have to build your bot. Your bot lives in the cloud, and you host it yourself. You write it just like a web service component, using Node.js or C#, such as an ASP.NET Web API component. The Bot Builder SDK is open source, so more languages and web stacks will be supported over time. Your bot will have its own logic, but you also need conversation logic, using dialogs to model a conversation. The Bot Builder SDK gives you facilities for this, and there are many types of dialogs included, from simple yes/no questions to full natural language understanding, or LUIS, which is one of the APIs provided in Microsoft Cognitive Services: Bot Connector The Bot Connector is hosted and operated by Microsoft. Think of it as a central router between your bots and the many channels used to communicate with them. Apart from routing messages, it also manages state within the conversation. The Bot Connector is an easy way to create a single back-end and then publish to a bunch of different platforms called channels. Bot Directory The Bot Directory is where users will be able to find bots. It's like an app store for mobile apps. The Bot Directory is a public directory of all reviewed bots registered through the developer portal. Users will be able to discover, try, and add bots to their favorite conversation experiences from the Bot Directory. Anyone can access it, and anyone can submit bots to the directory. As you begin your development with the Microsoft Bot Framework, you might be wondering how best to get started. Bots can be built in C#; however, Microsoft's Bot Framework can also be used to build bots using Node.js. Before developing any bot, we need to first set up the development environment and have the right tools installed for successfully developing and deploying it. Let's see how we can set up a development environment using Visual Studio. Setting up the development environment Let's first look at the prerequisites required to set up the development environment: Prerequisites To use the Microsoft Bot Framework Connector, you must have: A Microsoft account (Hotmail, Live, or Outlook) to log into the Bot Framework developer portal, which you will use to register your bot. An Azure subscription (free trial: https://azure.microsoft.com/en-us/). This Azure subscription is essential for having an Azure-accessible REST endpoint exposing a callback for the Connector service. Developer accounts on one or more communication services (such as Skype, Slack, or Facebook) where your bot will communicate. In addition, you may wish to have an Azure App Insights account so that you can capture telemetry from your bot. 
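To give a sense of the code involved before walking through the setup, the following is a minimal sketch of the kind of message handler the Bot Application template scaffolds for you. It follows the Bot Builder .NET SDK conventions (ApiController, Activity, ConnectorClient), but treat it as an illustrative approximation rather than the exact template source:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.Bot.Connector;

[BotAuthentication]
public class MessagesController : ApiController
{
    // POST api/messages: the Bot Connector calls this for each incoming activity
    public async Task<HttpResponseMessage> Post([FromBody] Activity activity)
    {
        if (activity.Type == ActivityTypes.Message)
        {
            var connector = new ConnectorClient(new Uri(activity.ServiceUrl));

            // echo the user's utterance back, which is what the Echo Bot template does
            var reply = activity.CreateReply($"You said: {activity.Text}");
            await connector.Conversations.ReplyToActivityAsync(reply);
        }

        return Request.CreateResponse(HttpStatusCode.OK);
    }
}

Registering the bot and recording the AppId and AppPassword, as described below, is what allows the [BotAuthentication] attribute to validate incoming requests.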
There are also different ways to go about building a bot: from scratch, coding directly against the Bot Connector REST API; using the Bot Builder SDKs for Node.js and .NET; or using the Bot Connector .NET template, which is what this quick-start guide demonstrates. Setting up the Bot Framework Connector SDK .NET This is a step-by-step guide to setting up a dev environment to develop a bot in C# using the Bot Framework Connector SDK .NET template: Install the prerequisite software: Visual Studio 2015 (latest update); you can download the Community edition for free from www.visualstudio.com. Important: please update all Visual Studio extensions to their latest versions. To do so, navigate to Tools | Extensions and Updates | Updates. Download and install the Bot Application template: download the file from the direct download link at http://aka.ms/bf-bc-vstemplate and save the zip file to your Visual Studio 2015 templates directory, which is traditionally %USERPROFILE%\Documents\Visual Studio 2015\Templates\ProjectTemplates\Visual C#. Open Visual Studio. Create a new C# project using the new Bot Application template. The template is a fully functional Echo Bot that takes the user's text utterance as input and returns it as output. In order to run, however: The bot has to be registered with the Bot Connector. The AppId and AppPassword from the Bot Framework registration page have to be recorded in the project's web.config. The project needs to be published to the web. Emulator Use the Bot Framework Emulator to test your bot application. The Bot Framework provides a channel emulator that lets you test calls to your bot as if it were being called by the Bot Framework cloud service. To install the Bot Framework Emulator, download it from https://download.botframework.com/bf-v3/tools/emulator/publish.html. Once installed, you're ready to test. First, start your bot in Visual Studio using a browser as the application host. The following screenshot uses Microsoft Edge: Summary In this article, we introduced what the Microsoft Bot Framework is and how it helps in the development of bots. We have also seen how to set up the development environment, the emulator, and the tools needed for programming. This article is based on the thought that programming knowledge and experience grow best when they grow together.  Resources for Article: Further resources on this subject: Talking to Bot using Browser [article] Webhooks in Slack [article] Creating our first bot, WebBot [article]

MVVM and Data Binding

Packt
28 Dec 2016
9 min read
In this article by Steven F. Daniel, author of the book Mastering Xamarin UI Development, you will learn how to build stunning, maintainable, cross-platform mobile application user interfaces with the power of Xamarin. In this article, we will cover the following topics: Understanding the MVVM pattern architecture Implementing the MVVM ViewModels within the app (For more resources related to this topic, see here.) Understanding the MVVM pattern architecture In this section, we will be taking a look at the MVVM pattern architecture and the communication between the components that make up the architecture. The MVVM design pattern is designed to control the separation between the user interfaces (Views), the ViewModels that contain the actual binding to the Model, and the Models that contain the actual structure of the entities representing information stored in a database or from a web service. The following screenshot shows the communication between each of the components contained within the MVVM design pattern architecture:

The MVVM design pattern is divided into three main areas, as you can see from the preceding screenshot, and these are explained in the following table:

Model: The Model is basically a representation of business-related entities used by an application, and is responsible for fetching data from either a database or a web service, which is then de-serialized into the entities contained within the Model.

View: The View component of the MVVM model basically represents the actual screens that make up the application, along with any custom control components and control elements, such as buttons, labels, and text fields. The Views contained within the MVVM pattern are platform-specific and are dependent on the platform APIs that are used to render the information that is contained within the application's user interface.

ViewModel: The ViewModel essentially controls and manipulates the Views by acting as their main data context. The ViewModel contains a series of properties that are bound to the information contained within each Model, and those properties are then bound to each of the Views to represent this information within the user interface. ViewModels can also contain command objects that provide action-based events that can trigger the execution of event methods that occur within the View, for example, when the user taps on a toolbar item or a button. ViewModels generally implement the INotifyPropertyChanged interface. Such a class fires a PropertyChanged event whenever one of its properties changes. The data binding mechanism in Xamarin.Forms attaches a handler to this PropertyChanged event so it can be notified when a property changes and keep the target updated with the new value.

Now that you have a good understanding of the components that are contained within the MVVM design pattern architecture, we can begin to create our entity models and update our user interface files. In Xamarin.Forms, the term View is used to describe form controls, such as buttons and labels, and the term Page is used to describe the user interface or screen, whereas in MVVM, Views are used to describe the user interface, or screen. Implementing the MVVM ViewModels within your app In this section, we will begin by setting up the basic structure for our TrackMyWalks solution to include the folder that will be used to represent our ViewModels. 
Let's take a look at how we can achieve this by following these steps: Launch the Xamarin Studio application and ensure that the TrackMyWalks solution is loaded within the Xamarin Studio IDE. Next, create a new folder within the TrackMyWalks PCL project, called ViewModels, as shown in the following screenshot: Creating the WalkBaseViewModel for the TrackMyWalks app In this section, we will begin by creating a base MVVM ViewModel that will be used by each of the ViewModels we create; the Views (pages) will then implement those ViewModels and use them as their BindingContext. Let's take a look at how we can achieve this by following these steps: Create an empty class within the ViewModels folder, as shown in the following screenshot: Next, choose the Empty Class option located within the General section, and enter WalkBaseViewModel as the name of the new class file to create, as shown in the following screenshot: Next, click on the New button to allow the wizard to proceed and create the new empty class file, as shown in the preceding screenshot. Up until this point, all we have done is create our WalkBaseViewModel class file. This abstract class will act as the base ViewModel class containing the basic functionality that each of our ViewModels will inherit from. As we start to build the base class, you will see that it contains a couple of members and implements the INotifyPropertyChanged interface. As we progress through this article, we will build on this class, which will be used by the TrackMyWalks application. To proceed with creating the base ViewModel class, perform the following step: Ensure that the WalkBaseViewModel.cs file is displayed within the code editor, and enter the following code snippet:

//
// WalkBaseViewModel.cs
// TrackMyWalks Base ViewModel
//
// Created by Steven F. Daniel on 22/08/2016.
// Copyright © 2016 GENIESOFT STUDIOS. All rights reserved.
//
using System.ComponentModel;
using System.Runtime.CompilerServices;

namespace TrackMyWalks.ViewModels
{
    public abstract class WalkBaseViewModel : INotifyPropertyChanged
    {
        protected WalkBaseViewModel()
        {
        }

        public event PropertyChangedEventHandler PropertyChanged;

        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
        {
            var handler = PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }
    }
}

In the preceding code snippet, we begin by creating a new abstract class for our WalkBaseViewModel that implements the INotifyPropertyChanged interface, which allows the View or page to be notified whenever properties contained within the ViewModel have changed. Next, we declare a PropertyChanged event, of type PropertyChangedEventHandler, that will be used to indicate whenever properties on the object have changed. Finally, the OnPropertyChanged method will be called from a child class whenever it has determined that a change has occurred on a property within the ViewModel. The INotifyPropertyChanged interface is used to notify clients, typically binding clients, that the value of a property has changed. Implementing the WalksPageViewModel In the previous section, we built the base class ViewModel for our TrackMyWalks application, which will act as the main class that allows our Views or pages to be notified whenever changes to properties within the ViewModel occur. In this section, we will begin building the ViewModel for our WalksPage. 
This ViewModel will be used to store the WalkEntries, which will later be used and displayed within the ListView on the WalksPage content page. Let's take a look at how we can achieve this by following these steps: First, create a new class file within the ViewModels folder called WalksPageViewModel, as you did in the previous section, entitled Creating the WalkBaseViewModel, located within this article. Next, ensure that the WalksPageViewModel.cs file is displayed within the code editor, and enter the following code snippet:

//
// WalksPageViewModel.cs
// TrackMyWalks ViewModels
//
// Created by Steven F. Daniel on 22/08/2016.
// Copyright © 2016 GENIESOFT STUDIOS. All rights reserved.
//
using System.Collections.ObjectModel;
using TrackMyWalks.Models;

namespace TrackMyWalks.ViewModels
{
    public class WalksPageViewModel : WalkBaseViewModel
    {
        ObservableCollection<WalkEntries> _walkEntries;

        public ObservableCollection<WalkEntries> walkEntries
        {
            get { return _walkEntries; }
            set
            {
                _walkEntries = value;
                OnPropertyChanged();
            }
        }

In the preceding code snippet, we begin by ensuring that our ViewModel inherits from the WalkBaseViewModel class. Next, we create an ObservableCollection variable, _walkEntries, which is very useful when you want to know when the collection has changed: an event is triggered that will tell the user what entries have been added to or removed from the WalkEntries model. We then create the walkEntries property of type ObservableCollection<WalkEntries>, which is defined within the System.Collections.ObjectModel namespace and accepts a List parameter containing our WalkEntries model. The walkEntries property will be used to bind to the ItemsSource property of the ListView within the WalksMainPage. Finally, we define the getter (get) and setter (set) methods that return and set the contents of _walkEntries, raising OnPropertyChanged in the setter whenever the property has been modified. Next, locate the WalksPageViewModel class constructor, and enter the following highlighted code sections:

        public WalksPageViewModel()
        {
            walkEntries = new ObservableCollection<WalkEntries>() {
                new WalkEntries {
                    Title = "10 Mile Brook Trail, Margaret River",
                    Notes = "The 10 Mile Brook Trail starts in the Rotary Park near Old Kate, a preserved steam " +
                            "engine at the northern edge of Margaret River. ",
                    Latitude = -33.9727604,
                    Longitude = 115.0861599,
                    Kilometers = 7.5,
                    Distance = 0,
                    Difficulty = "Medium",
                    ImageUrl = "http://trailswa.com.au/media/cache/media/images/trails/_mid/" +
                               "FullSizeRender1_600_480_c1.jpg"
                },
                new WalkEntries {
                    Title = "Ancient Empire Walk, Valley of the Giants",
                    Notes = "The Ancient Empire is a 450 metre walk trail that takes you around and through some of " +
                            "the giant tingle trees including the most popular of the gnarled veterans, known as " +
                            "Grandma Tingle.",
                    Latitude = -34.9749188,
                    Longitude = 117.3560796,
                    Kilometers = 450,
                    Distance = 0,
                    Difficulty = "Hard",
                    ImageUrl = "http://trailswa.com.au/media/cache/media/images/trails/_mid/" +
                               "Ancient_Empire_534_480_c1.jpg"
                },
            };
        }
    }
}

In the preceding code snippet, we began by creating a new ObservableCollection for our walkEntries property and then added each of the walk list items that we would like to store within our model. When the collection is assigned, the setter (set) method is invoked, and the PropertyChanged event is then triggered to notify that a change has occurred. 
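To see how a page-specific ViewModel typically raises change notifications for a simple scalar property, consider the following short sketch. It is not part of the TrackMyWalks code; the FilterText property is hypothetical and exists purely to illustrate the pattern:

namespace TrackMyWalks.ViewModels
{
    public class WalksFilterViewModel : WalkBaseViewModel
    {
        string _filterText;

        // a bindable property: the setter raises PropertyChanged via the base class
        public string FilterText
        {
            get { return _filterText; }
            set
            {
                if (_filterText == value)
                    return;
                _filterText = value;
                // [CallerMemberName] on OnPropertyChanged fills in "FilterText" for us
                OnPropertyChanged();
            }
        }
    }
}

Because OnPropertyChanged uses the CallerMemberName attribute, the property name never has to be passed as a string, which avoids a common source of data-binding bugs when properties are renamed.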
Summary In this article, you learned about the MVVM pattern architecture; we also implemented the MVVM ViewModels within the app. Additionally, we created and implemented the WalkBaseViewModel for the TrackMyWalks application. Resources for Article: Further resources on this subject: A cross-platform solution with Xamarin.Forms and MVVM architecture [article] Building a Gallery Application [article] Heads up to MvvmCross [article]

Examining the encoding/json Package with Go

Packt
28 Dec 2016
13 min read
In this article by Nic Jackson, author of the book Building Microservices with Go, we will examine the encoding/json package to see just how easy Go makes it for us to use JSON objects for our requests and responses. (For more resources related to this topic, see here.) Reading and writing JSON Thanks to the encoding/json package, which is built into the standard library, encoding and decoding JSON to and from Go types is both fast and easy. It implements the simple Marshal and Unmarshal functions; however, if we need them, the package also provides Encoder and Decoder types, which allow us greater control when reading and writing streams of JSON data. In this section, we are going to examine both of these approaches, but first let's take a look at how simple it is to convert a standard Go struct into its corresponding JSON string. Marshalling Go structs to JSON To encode JSON data, the encoding/json package provides the Marshal function, which has the following signature:

func Marshal(v interface{}) ([]byte, error)

This function takes one parameter, which is of the interface type, so that's pretty much any object you can think of, since interface represents any type in Go. It returns a tuple of ([]byte, error). You will see this return style quite frequently in Go. Some languages implement a try...catch approach, which encourages an error to be thrown when an operation cannot be performed. Go suggests the (return type, error) pattern, where the error is nil when an operation succeeds. In Go, unhandled errors are a bad thing, and while the language does implement the panic and recover functions, which resemble exception handling in other languages, the situations in which you should use them are quite different (The Go Programming Language, Donovan and Kernighan). In Go, panic causes normal execution to stop, and all deferred function calls in the Go routine are executed; the program will then crash with a log message. It is generally used for unexpected errors that indicate a bug in the code, and good, robust Go code will attempt to handle these runtime exceptions and return a detailed error object back to the calling function. This pattern is exactly what is implemented with the Marshal function. If Marshal cannot create a JSON-encoded byte array from the given object, which could be due to a runtime panic, then this is captured and an error object detailing the problem is returned to the caller. Let's try this out, expanding on our existing example. Instead of simply printing a string from our handler, let's create a simple struct for the response and return that:

type helloWorldResponse struct {
    Message string
}

In our handler, we will create an instance of this object, set the message, and then use the Marshal function to encode it to a string before returning. Let's see what that will look like:

func helloWorldHandler(w http.ResponseWriter, r *http.Request) {
    response := helloWorldResponse{Message: "Hello World"}
    data, err := json.Marshal(response)
    if err != nil {
        panic("Ooops")
    }

    fmt.Fprint(w, string(data))
}

Now when we rerun our program and refresh our browser, we'll see the following output rendered in valid JSON:

{"Message":"Hello World"}

This is awesome, but the default behavior of Marshal is to take the literal name of the field and use that as the field in the JSON output. What if I prefer to use camel case and would rather see message? Could we just rename the field in our struct to message? 
Unfortunately, we can't, because in Go, lowercase properties are not exported. Marshal will ignore these and will not include them in the output. All is not lost: the encoding/json package implements struct field attributes, which allow us to change the output for the property to anything we choose. The example code is as follows:

type helloWorldResponse struct {
    Message string `json:"message"`
}

Using the struct field's tags, we can have greater control over how the output will look. In the preceding example, when we marshal this struct, the output from our server would be the following:

{"message":"Hello World"}

This is exactly what we want, but we can use field tags to control the output even further. We can convert object types and even ignore a field altogether if we need to:

type helloWorldResponse struct {
    // change the output field to be "message"
    Message string `json:"message"`
    // do not output this field
    Author string `json:"-"`
    // do not output the field if the value is empty
    Date string `json:",omitempty"`
    // convert output to a string and rename "id"
    Id int `json:"id,string"`
}

Channel, complex, and function types cannot be encoded in JSON. Attempting to encode these types will result in an UnsupportedTypeError being returned by the Marshal function. It also can't represent cyclic data structures, so if your struct contains a circular reference, then Marshal will result in an infinite recursion, which is never a good thing for a web request. If we want to export our JSON pretty-formatted with indentation, we can use the MarshalIndent function, which allows you to pass additional string parameters to specify what you would like the indent to be (two spaces, not a tab, right?):

func MarshalIndent(v interface{}, prefix, indent string) ([]byte, error)

The astute reader might have noticed that we are encoding our struct into a byte array and then writing that to the response stream. This does not seem to be particularly efficient, and in fact, it is not. Go provides encoders and decoders, which can write directly to a stream. Since we already have a stream with the ResponseWriter interface, let's do just that. Before we do so, I think we need to look at the ResponseWriter interface a little to see what is going on there. ResponseWriter is an interface that defines three methods:

// Header returns the map of headers which will be sent by the
// WriteHeader method.
Header() Header

// Write writes the data to the connection. If WriteHeader has not
// already been called then Write will call
// WriteHeader(http.StatusOK).
Write([]byte) (int, error)

// WriteHeader sends an HTTP response header with the status code.
WriteHeader(int)

If we have a ResponseWriter, how can we use this with fmt.Fprint(w io.Writer, a ...interface{})? This method requires a Writer interface as a parameter, and we have a ResponseWriter. If we look at the signature for Writer, we can see that it is the following:

Write(p []byte) (n int, err error)

Because the ResponseWriter interface implements this method, it also satisfies the Writer interface; therefore, any object that implements ResponseWriter can be passed to any function that expects Writer. Amazing! Go rocks. But we still don't have an answer to our question: is there any better way to send our data to the output stream without marshalling to a temporary string before we return it? The encoding/json package has a function called NewEncoder. 
This returns an Encoder object, which can be used to write JSON straight to an open writer, and guess what: we have one of those:

func NewEncoder(w io.Writer) *Encoder

So instead of storing the output of Marshal in a byte array, we can write it straight to the HTTP response, as shown in the following code:

func helloWorldHandler(w http.ResponseWriter, r *http.Request) {
    response := helloWorldResponse{Message: "Hello World"}
    encoder := json.NewEncoder(w)
    encoder.Encode(&response)
}

We will look at benchmarking in a later chapter, but to see why this is important, here's a simple benchmark to check the two methods against each other; have a look at the output:

go test -v -run="none" -bench=. -benchtime="5s" -benchmem
testing: warning: no tests to run
PASS
BenchmarkHelloHandlerVariable    10000000    1211 ns/op    248 B/op    5 allocs/op
BenchmarkHelloHandlerEncoder     10000000     662 ns/op      8 B/op    1 allocs/op
ok github.com/nicholasjackson/building-microservices-in-go/chapter1/bench 20.650s

Using the Encoder rather than marshalling to a byte array is nearly 50% faster. We are dealing with nanoseconds here, so that time may seem irrelevant, but it isn't; this was two lines of code. If you have that level of inefficiency throughout the rest of your code, your application will run slower, you will need more hardware to satisfy the load, and that will cost you money. There is nothing clever in the differences between the two methods; all we have done is understood how the standard packages work and chosen the correct option for our requirements. That is not performance tuning, that is understanding the framework. Unmarshalling JSON to Go structs Now that we have learned how we can send JSON back to the client, what if we need to read input before returning the output? We could use URL parameters, and we will see what that is all about in the next chapter, but usually, you will need more complex data structures, which include the service accepting JSON as part of an HTTP POST request. If we apply techniques similar to those we learned in the previous section (to write JSON), reading JSON is just as easy. To decode JSON into a struct, the encoding/json package provides us with the Unmarshal function:

func Unmarshal(data []byte, v interface{}) error

The Unmarshal function works in the opposite way to Marshal: it allocates maps, slices, and pointers as required. Incoming object keys are matched using either the struct field name or its tag, and will work with a case-insensitive match; however, an exact match is preferred. Like Marshal, Unmarshal will only set exported struct fields: those that start with an uppercase letter. We start by adding a new struct to represent the request. Unmarshal can also decode the JSON into an interface{}, which would be one of the following types:

map[string]interface{} // for JSON objects
[]interface{} // for JSON arrays

Which type it is depends on whether our JSON is an object or an array. In my opinion, it is much clearer to the readers of our code if we explicitly state what we are expecting as a request. We can also save ourselves work by not having to manually cast the data when we come to use it. 
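To make this concrete, the following is a small self-contained sketch, not taken from the original text, comparing the two approaches; the pingRequest type and the payload are purely illustrative:

package main

import (
    "encoding/json"
    "fmt"
)

type pingRequest struct {
    Name string `json:"name"`
}

func main() {
    data := []byte(`{"name": "Nic"}`)

    // Decoding into interface{} works, but every field needs a manual type assertion
    var generic map[string]interface{}
    if err := json.Unmarshal(data, &generic); err != nil {
        panic(err)
    }
    name, _ := generic["name"].(string) // manual cast before use
    fmt.Println("generic:", name)

    // Decoding into a typed struct needs no casting and documents the expected shape
    var request pingRequest
    if err := json.Unmarshal(data, &request); err != nil {
        panic(err)
    }
    fmt.Println("typed:", request.Name)
}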
Remember two things:

You do not write code for the compiler; you write code for humans to understand
You will spend more time reading code than you do writing it

We are going to do ourselves a favor by taking into account these two points and creating a simple struct to represent our request, which will look like this:

type helloWorldRequest struct {
    Name string `json:"name"`
}

Again, we are going to use struct field tags, because while we could let Unmarshal do case-insensitive matching so that {"name": "World"} would correctly unmarshal into the struct the same as {"Name": "World"}, when we specify a tag, we are being explicit about the request form, and that is a good thing. In terms of speed and performance, it is also about 10% faster, and remember: performance matters. To access the JSON sent with the request, we need to take a look at the http.Request object passed to our handler. The following listing does not show all the methods in the request, just the ones we are going to be immediately dealing with. For the full documentation, I recommend checking out the docs at https://godoc.org/net/http#Request.

type Request struct {
    ...
    // Method specifies the HTTP method (GET, POST, PUT, etc.).
    Method string
    // Header contains the request header fields received by the server.
    // The Header type is a map[string][]string.
    Header Header
    // Body is the request's body.
    Body io.ReadCloser
    ...
}

The JSON that has been sent with the request is accessible in the Body field. The Body field implements the io.ReadCloser interface as a stream and does not return []byte or string data. If we need the data contained in the body, we can simply read it into a byte array, like the following example:

body, err := ioutil.ReadAll(r.Body)
if err != nil {
    http.Error(w, "Bad request", http.StatusBadRequest)
    return
}

Here is something we'll need to remember: we are not calling Body.Close(); if we were making a call with a client, we would need to do this, as it is not automatically closed; however, when used in a ServeHTTP handler, the server automatically closes the request stream. To see how this all works inside our handler, we can look at the following handler:

func helloWorldHandler(w http.ResponseWriter, r *http.Request) {
    body, err := ioutil.ReadAll(r.Body)
    if err != nil {
        http.Error(w, "Bad request", http.StatusBadRequest)
        return
    }

    var request helloWorldRequest
    err = json.Unmarshal(body, &request)
    if err != nil {
        http.Error(w, "Bad request", http.StatusBadRequest)
        return
    }

    response := helloWorldResponse{Message: "Hello " + request.Name}

    encoder := json.NewEncoder(w)
    encoder.Encode(response)
}

Let's run this example and see how it works. To test it, we can simply use curl to send a request to the running server. If you feel more comfortable using a GUI tool, then Postman, which is available for the Google Chrome browser, will work just fine. Otherwise, feel free to use your preferred tool.

$ curl localhost:8080/helloworld -d '{"name":"Nic"}'

You should see the following response:

{"message":"Hello Nic"}

What do you think will happen if you do not include a body with your request?

$ curl localhost:8080/helloworld

If you guessed correctly that you would get an "HTTP status 400 Bad Request" error, then you win a prize. 
The following Error function replies to the request with the given message and status code:

func Error(w ResponseWriter, error string, code int)

Once we have sent this, we need to return, stopping further execution of the function, as Error does not close the ResponseWriter and return flow to the calling function automatically. You might think you are done, but have a go and see whether you can improve the performance of the handler. Think about the things we were talking about when marshalling JSON. Got it? Well, if not, here is the answer: again, all we are doing is using Decoder, which is the opposite of the Encoder we used when writing JSON, as shown in the following code example. This nets an instant 33% performance increase, and with less code, too:

func helloWorldHandler(w http.ResponseWriter, r *http.Request) {
    var request helloWorldRequest
    decoder := json.NewDecoder(r.Body)

    err := decoder.Decode(&request)
    if err != nil {
        http.Error(w, "Bad request", http.StatusBadRequest)
        return
    }

    response := helloWorldResponse{Message: "Hello " + request.Name}

    encoder := json.NewEncoder(w)
    encoder.Encode(response)
}

Now that you can see just how easy it is to encode and decode JSON with Go, I would recommend taking 5 minutes to dig through the documentation for the encoding/json package, as there is a whole lot more that you can do with it: https://golang.org/pkg/encoding/json/ Summary In this article, we looked at encoding and decoding data using the encoding/json package. Resources for Article: Further resources on this subject: Microservices – Brave New World [article] A capability model for microservices [article] Breaking into Microservices Architecture [article]

Web Scraping with Electron

Dylan Frankland
23 Dec 2016
12 min read
Web scraping is a necessary evil that takes unordered information from the Web and transforms it into something that can be used programmatically. The first big use of this was indexing websites for use in search engines like Google. Whether you like it or not, there will likely be a time in your life, career, academia, or personal projects where you will need to pull information that has no API or easy way to digest programmatically. The simplest way to accomplish this is to do something along the lines of a curl request and then RegEx for the necessary information. That was the standard procedure for much of the history of web scraping, until the single-page application came along. With the advent of Ajax, JavaScript became the mainstay of the Web and prevented much of it from being scraped with traditional methods such as curl, which could only get static server-rendered content. This is where Electron truly comes into play, because it is a full-fledged browser that can be run programmatically. Electron's Advantages Electron goes beyond the abilities of other programmatic browser solutions: PhantomJS, SlimerJS, or Selenium. This is because Electron is more than just a headless browser or automation framework. PhantomJS (and by extension SlimerJS) runs headlessly only, which prevents most debugging, and because these tools have their own JavaScript engines, they are not 100% compatible with Node and npm. Selenium, used for automating other browsers, can be a bit of a pain to set up because it requires setting up a central server and separately installing a browser you would like to scrape with. On the other hand, Electron can do all of the above and more, which makes it much easier to set up, run, and debug as you start to create your scraper. Applications of Electron Considering the flexibility of Electron and all the features it provides, it is poised to be used in a variety of development, testing, and data collection situations. As far as development goes, Electron provides all the features that Chrome does with its DevTools, making it easy to inspect, run arbitrary JavaScript, and put in debugger statements. Testing can easily be done on websites that may have issues with bugs in browsers, but work perfectly in a different environment. These tests can easily be hooked up to a continuous integration server, too. Data collection, the real reason we are all here, can be done on any type of website, including those with logins, using a familiar API with DevTools for inspection, and optionally run headlessly for automation. What more could one want? How to Scrape with Electron Because of Electron's ability to integrate with Node, there are many different npm modules at your disposal. This takes most of the legwork out of automating the web scraping process and makes development a breeze. My personal favorite, nightmare, has never let me down. I have used it for tons of different projects, from collecting info and automating OAuth processes to grabbing console errors, and also taking screenshots for visual inspection of entire websites and services. Packt's blog actually gives a perfect situation to use a web scraper. On the blog page, you will find a small single-page application that dynamically loads different blog posts using JavaScript. Using Electron, let's collect the latest blog posts so that we can use that information elsewhere. Full disclosure: Packt's blog server-side renders the first page, so it could possibly be scraped using traditional methods. 
This won't be the case for all single-page applications. Prerequisites and Assumptions Following this guide, I assume that you already have Node (version 6+) and npm (version 3+) installed and that you are using some form of a Unix shell. I also assume that you are already inside a directory to do some coding in. Setup First of all, you will need to install some npm modules, so create a package.json like this:

{
  "scripts": {
    "start": "DEBUG=nightmare babel-node ./main.js"
  },
  "devDependencies": {
    "babel-cli": "6.18.0",
    "babel-preset-modern-node": "3.2.0",
    "babel-preset-stage-0": "6.16.0"
  },
  "dependencies": {
    "nightmare": "2.8.1"
  },
  "babel": {
    "presets": [
      [
        "modern-node",
        {
          "version": "6.0"
        }
      ],
      "stage-0"
    ]
  }
}

Using Babel, we can make use of some of the newer JavaScript utilities like async/await, making our code much more readable. nightmare has electron as a dependency, so there is no need to list that in our package.json (so easy). After this file is made, run npm install as usual. Investigating the Content to be Scraped First, we will need to identify the information we want to collect and how to get it: We go to the blog page, where we can see a page of different blog posts, but we are not sure of the order in which they appear. We only want the latest and greatest, so let's change the sort order to "Date of Publication (New to Older)". Now we can see that we have only the blog posts that we want. Changing the sort order Next, to make it easier to collect more blog posts, change the number of blog posts shown from 12 to 48. Changing post view Last, but not least, are the blog posts themselves. Packt blog posts Next, we will figure out the selectors that we will be using with nightmare to change the page and collect data: Inspecting the sort order selector, we find something like the following HTML:

<div class="solr-pager-sort-selector">
  <span>Sort By: &nbsp;</span>
  <select class="solr-page-sort-selector-select selectBox" style="display: none;">
    <option value="" selected="">Relevance</option>
    <option value="ds_blog_release_date asc">Date of Publication (Older - Newer)</option>
    <option value="ds_blog_release_date desc">Date of Publication (Newer - Older)</option>
    <option value="sort_title asc">Title (A - Z)</option>
    <option value="sort_title desc">Title (Z - A)</option>
  </select>
  <a class="selectBox solr-page-sort-selector-select selectBox-dropdown" title="" tabindex="0" style="width: 243px; display: inline-block;">
    <span class="selectBox-label" style="width: 220.2px;">Relevance</span>
    <span class="selectBox-arrow">▼</span>
  </a>
</div>

Checking out the "View" options, we see the following HTML:

<div class="solr-pager-rows-selector">
  <span>View: &nbsp;</span>
  <strong>12</strong> &nbsp;
  <a class="solr-page-rows-selector-option" data-rows="24">24</a> &nbsp;
  <a class="solr-page-rows-selector-option" data-rows="48">48</a> &nbsp;
</div>

Finally, the info we need from the post should look similar to this:

<div class="packt-article-line-view float-left">
  <div class="packt-article-line-view-image">
    <a href="/books/content/mapt-v030-release-notes">
      <noscript>
        &lt;img src="//d1ldz4te4covpm.cloudfront.net/sites/default/files/imagecache/ppv4_article_thumb/Mapt FB PPC3_0.png" alt="" title="" class="bookimage imagecache imagecache-ppv4_article_thumb"/&gt;
      </noscript>
      <img src="//d1ldz4te4covpm.cloudfront.net/sites/default/files/imagecache/ppv4_article_thumb/Mapt FB PPC3_0.png" alt="" title="" data-original="//d1ldz4te4covpm.cloudfront.net/sites/default/files/imagecache/ppv4_article_thumb/Mapt FB PPC3_0.png"
class="bookimage imagecache imagecache-ppv4_article_thumb" style="opacity: 1;"> <div class="packt-article-line-view-bg"></div> <div class="packt-article-line-view-title">Mapt v.0.3.0 Release Notes</div> </a> </div> <a href="/books/content/mapt-v030-release-notes"> <div class="packt-article-line-view-text ellipsis"> <div> <p>New features and fixes for the Mapt platform</p> </div> </div> </a> </div> Great! Now we have all the information necessary to start collecting data from the page! Creating the Scraper First things first, create a main.js file in the same directory as the package.json. Now let's put some initial code to get the ball rolling on the first step: import Nightmare from 'nightmare'; const URL_BLOG = 'https://www.packtpub.com/books/content/blogs'; const SELECTOR_SORT = '.selectBox.solr-page-sort-selector-select.selectBox-dropdown'; // Viewport must have a width at least 1040 for the desktop version of Packt's blog const nightmare = new Nightmare({ show: true }).viewport(1041, 800); (async() => { await nightmare .goto(URL_BLOG) .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element .click(SELECTOR_SORT); })(); If you try to run this code, you will notice a window pop up with Packt's blog, then resize the window, and then nothing. This is because the sort selector only listens for mousedown events, so we will need to modify our code a little, by changing click to mousedown: await nightmare .goto(BLOG_URL) .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element .mousedown(SELECTOR_SORT); Running the code again, after the mousedown, we get a small HTML dropdown, which we can inspect and see the following: <ul class="selectBox-dropdown-menu selectBox-options solr-page-sort-selector-select-selectBox-dropdown-menu selectBox-options-bottom" style="width: 274px; top: 354.109px; left: 746.188px; display: block;"> <li class="selectBox-selected"><a rel="">Relevance</a></li> <li class=""><a rel="ds_blog_release_date asc">Date of Publication (Older - Newer)</a></li> <li class=""><a rel="ds_blog_release_date desc">Date of Publication (Newer - Older)</a></li> <li class=""><a rel="sort_title asc">Title (A - Z)</a></li> <li class=""><a rel="sort_title desc">Title (Z - A)</a></li> </ul> So, to select the option to sort the blog posts by date, we will need to alter our code a little further (again, it does not work with click, so use mousedown plus mouseup): import Nightmare from 'nightmare'; const URL_BLOG = 'https://www.packtpub.com/books/content/blogs'; const SELECTOR_SORT = '.selectBox.solr-page-sort-selector-select.selectBox-dropdown'; const SELECTOR_SORT_OPTION = '.solr-page-sort-selector-select-selectBox-dropdown-menu > li > a[rel="ds_blog_release_date desc"]'; // Viewport must have a width at least 1040 for the desktop version of Packt's blog const nightmare = new Nightmare({ show: true }).viewport(1041, 800); (async() => { await nightmare .goto(URL_BLOG) .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element .mousedown(SELECTOR_SORT) .wait(SELECTOR_SORT_OPTION) .mousedown(SELECTOR_SORT_OPTION) // Drop down menu doesn't listen for `click` events like normal... .mouseup(SELECTOR_SORT_OPTION); })(); Awesome! We can clearly see the page reload, with posts from newest to oldest. 
After we have sorted everything, we need to change the number of blog posts shown (finally, we can use click; lol!):

import Nightmare from 'nightmare';

const URL_BLOG = 'https://www.packtpub.com/books/content/blogs';
const SELECTOR_SORT = '.selectBox.solr-page-sort-selector-select.selectBox-dropdown';
const SELECTOR_SORT_OPTION = '.solr-page-sort-selector-select-selectBox-dropdown-menu > li > a[rel="ds_blog_release_date desc"]';
const SELECTOR_VIEW = '.solr-page-rows-selector-option[data-rows="48"]';

// Viewport must have a width of at least 1040 for the desktop version of Packt's blog
const nightmare = new Nightmare({ show: true }).viewport(1041, 800);

(async () => {
  await nightmare
    .goto(URL_BLOG)
    .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element
    .mousedown(SELECTOR_SORT)
    .wait(SELECTOR_SORT_OPTION)
    .mousedown(SELECTOR_SORT_OPTION) // Drop-down menu doesn't listen for `click` events like normal...
    .mouseup(SELECTOR_SORT_OPTION)
    .wait(SELECTOR_VIEW)
    .click(SELECTOR_VIEW);
})();

Collecting the data is next; all we need to do is evaluate parts of the HTML and return the formatted information. The way the evaluate function works is by stringifying the function provided to it and injecting it into Electron's event loop. Because the function gets stringified, only a function that does not rely on outside variables or functions will execute properly. The value that is returned by this function must also be able to be stringified, so only JSON, strings, numbers, and booleans are applicable.

import Nightmare from 'nightmare';

const URL_BLOG = 'https://www.packtpub.com/books/content/blogs';
const SELECTOR_SORT = '.selectBox.solr-page-sort-selector-select.selectBox-dropdown';
const SELECTOR_SORT_OPTION = '.solr-page-sort-selector-select-selectBox-dropdown-menu > li > a[rel="ds_blog_release_date desc"]';
const SELECTOR_VIEW = '.solr-page-rows-selector-option[data-rows="48"]';

// Viewport must have a width of at least 1040 for the desktop version of Packt's blog
const nightmare = new Nightmare({ show: true }).viewport(1041, 800);

(async () => {
  await nightmare
    .goto(URL_BLOG)
    .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element
    .mousedown(SELECTOR_SORT)
    .wait(SELECTOR_SORT_OPTION)
    .mousedown(SELECTOR_SORT_OPTION) // Drop-down menu doesn't listen for `click` events like normal...
    .mouseup(SELECTOR_SORT_OPTION)
    .wait(SELECTOR_VIEW)
    .click(SELECTOR_VIEW)
    .evaluate(() => {
      let posts = [];
      document.querySelectorAll('.packt-article-line-view').forEach(post => {
        posts = posts.concat({
          image: post.querySelector('img').src,
          title: post.querySelector('.packt-article-line-view-title').textContent.trim(),
          link: post.querySelector('a').href,
          description: post.querySelector('p').textContent.trim(),
        });
      });
      return posts;
    })
    .end()
    .then(result => console.log(result));
})();

Once you run your function again, you should get a very long array of objects containing information about each post. There you have it! You have successfully scraped a page using Electron! There are a bunch more possibilities and tricks you can use with this, but hopefully this small example gives you a good idea of what can be done. About the author Dylan Frankland is a frontend engineer at Narvar. He is an agile web developer with over 4 years of experience developing and designing for start-ups and medium-sized businesses to create functional, fast, and beautiful web experiences.

Object-Oriented JavaScript

Packt
21 Dec 2016
9 min read
In this article by Ved Antani, author of the book Object Oriented JavaScript - Third Edition, we will learn what you need to know about object-oriented JavaScript. In this article, we will cover the following topics: ECMAScript 5 ECMAScript 6 Object-oriented programming (For more resources related to this topic, see here.) ECMAScript 5 One of the most important milestones in ECMAScript revisions was ECMAScript 5 (ES5), officially accepted in December 2009. The ECMAScript 5 standard is implemented and supported on all major browsers and server-side technologies. ES5 was a major revision because, apart from several important syntactic changes and additions to the standard libraries, ES5 also introduced several new constructs in the language. For instance, ES5 introduced some new objects and properties, and also the so-called strict mode. Strict mode is a subset of the language that excludes deprecated features. The strict mode is opt-in and not required, meaning that if you want your code to run in the strict mode, you will declare your intention using (once per function, or once for the whole program) the following string: "use strict"; This is just a JavaScript string, and it's ok to have strings floating around unassigned to any variable. As a result, older browsers that don't speak ES5 will simply ignore it, so this strict mode is backwards compatible and won't break older browsers. For backwards compatibility, all the examples in this book work in ES3, but at the same time, all the code in the book is written so that it will run without warnings in ES5's strict mode. Strict mode in ES6 While strict mode is optional in ES5, all ES6 modules and classes are strict by default. As you will see soon, most of the code we write in ES6 resides in a module; hence, strict mode is enforced by default. However, it is important to understand that all other constructs do not have implicit strict mode enforced. There were efforts to make newer constructs, such as arrow and generator functions, also enforce strict mode, but it was later decided that doing so would result in very fragmented language rules and code. ECMAScript 6 The ECMAScript 6 revision took a long time to finish and was finally accepted on 17th June, 2015. ES6 features are slowly becoming part of major browsers and server technologies. It is possible to use transpilers to compile ES6 to ES5 and use the code on environments that do not yet support ES6 completely. ES6 substantially upgrades JavaScript as a language and brings in very exciting syntactical changes and language constructs. Broadly, there are two kinds of fundamental changes in this revision of ECMAScript, which are as follows: Improved syntax for existing features and additions to the standard library; for example, classes and promises New language features; for example, generators ES6 allows you to think differently about your code. New syntax changes can let you write code that is cleaner, easier to maintain, and does not require special tricks. The language itself now supports several constructs that required third-party modules earlier. Language changes introduced in ES6 need a serious rethink in the way we have been coding in JavaScript. A note on the nomenclature: ECMAScript 6, ES6, and ECMAScript 2015 are the same, and the names are used interchangeably. Browser support for ES6 The majority of browsers and server frameworks are on their way toward implementing ES6 features. 
You can check out what is supported and what is not at http://kangax.github.io/compat-table/es6/. Though ES6 is not fully supported on all browsers and server frameworks, we can start using almost all of its features with the help of transpilers. Transpilers are source-to-source compilers. ES6 transpilers allow you to write code in ES6 syntax and compile/transform it into equivalent ES5 syntax, which can then be run on browsers that do not support the entire range of ES6 features. The de facto ES6 transpiler at the moment is Babel. Object-oriented programming Let's take a moment to review what people mean when they say object-oriented, and what the main features of this programming style are. Here's a list of some concepts that are most often used when talking about object-oriented programming (OOP): Object, method, and property Class Encapsulation Inheritance Polymorphism Let's take a closer look into each one of these concepts. If you're new to the object-oriented programming lingo, these concepts might sound too theoretical, and you might have trouble grasping or remembering them from one reading. Don't worry, it does take a few tries, and the subject can be a little dry at a conceptual level. Objects As the name object-oriented suggests, objects are important. An object is a representation of a thing (someone or something), and this representation is expressed with the help of a programming language. The thing can be anything: a real-life object, or a more convoluted concept. Taking a common object, a cat, for example, you can see that it has certain characteristics (color, name, weight, and so on) and can perform some actions (meow, sleep, hide, escape, and so on). The characteristics of the object are called properties in OOP-speak, and the actions are called methods. Classes In real life, similar objects can be grouped based on some criteria. A hummingbird and an eagle are both birds, so they can be classified as belonging to some made-up Birds class. In OOP, a class is a blueprint or a recipe for an object. Another name for object is instance, so we can say that the eagle is one concrete instance of the general Birds class. You can create different objects using the same class, because a class is just a template, while the objects are concrete instances based on the template. There's a difference between JavaScript and the classic OO languages such as C++ and Java. You should be aware right from the start that in JavaScript, there are no classes; everything is based on objects. JavaScript has the notion of prototypes, which are also objects. In a classic OO language, you'd say something like: create a new object for me called Bob, which is of class Person. In a prototypal OO language, you'd say: I'm going to take this object called Bob's dad that I have lying around (on the couch in front of the TV?) and reuse it as a prototype for a new object that I'll call Bob. Encapsulation Encapsulation is another OOP-related concept, which illustrates the fact that an object contains (encapsulates) the following: Data (stored in properties) The means to do something with the data (using methods) One other term that goes together with encapsulation is information hiding. This is a rather broad term and can mean different things, but let's see what people usually mean when they use it in the context of OOP. Imagine an object, say, an MP3 player. You, as the user of the object, are given some interface to work with, such as buttons, a display, and so on. 
Encapsulation

Encapsulation is another OOP-related concept, which illustrates the fact that an object contains (encapsulates) the following:

Data (stored in properties)
The means to do something with the data (using methods)

One other term that goes together with encapsulation is information hiding. This is a rather broad term and can mean different things, but let's see what people usually mean when they use it in the context of OOP.

Imagine an object, say, an MP3 player. You, as the user of the object, are given some interface to work with, such as buttons, a display, and so on. You use the interface in order to get the object to do something useful for you, like play a song. How exactly the device is working on the inside, you don't know, and, most often, don't care. In other words, the implementation of the interface is hidden from you. The same thing happens in OOP when your code uses an object by calling its methods. It doesn't matter if you coded the object yourself or it came from some third-party library; your code doesn't need to know how the methods work internally. In compiled languages, you can't actually read the code that makes an object work. In JavaScript, because it's an interpreted language, you can see the source code, but the concept is still the same: you work with the object's interface without worrying about its implementation.

Another aspect of information hiding is the visibility of methods and properties. In some languages, objects can have public, private, and protected methods and properties. This categorization defines the level of access the users of the object have. For example, only the methods of the same object have access to the private methods, while anyone has access to the public ones. In JavaScript, all methods and properties are public, but we'll see that there are ways to protect the data inside an object and achieve privacy.

Inheritance

Inheritance is an elegant way to reuse existing code. For example, you can have a generic object, Person, which has properties such as name and date_of_birth, and which also implements the walk, talk, sleep, and eat functionality. Then, you figure out that you need another object called Programmer. You can reimplement all the methods and properties that a Person object has, but it will be smarter to just say that the Programmer object inherits a Person object, and save yourself some work. The Programmer object only needs to implement more specific functionality, such as the writeCode method, while reusing all of the Person object's functionality.

In classical OOP, classes inherit from other classes, but in JavaScript, as there are no classes, objects inherit from other objects. When an object inherits from another object, it usually adds new methods to the inherited ones, thus extending the old object. Often, the following phrases can be used interchangeably: B inherits from A, and B extends A. Also, the object that inherits can pick one or more methods and redefine them, customizing them for its own needs. This way, the interface stays the same and the method name is the same, but when called on the new object, the method behaves differently. This way of redefining how an inherited method works is known as overriding.

Polymorphism

In the preceding example, a Programmer object inherited all of the methods of the parent Person object. This means that both objects provide a talk method, among others. Now imagine that somewhere in your code, there's a variable called Bob, and it just so happens that you don't know if Bob is a Person object or a Programmer object. You can still call the talk method on the Bob object and the code will work. This ability to call the same method on different objects, and have each of them respond in their own way, is called polymorphism.
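As a rough sketch of both ideas (the names are ours, not the book's): a programmer object inherits from a person object, overrides talk, and both respond to the same call in their own way:

var person = {
    talk: function () {
        return 'Hello!';
    }
};

var programmer = Object.create(person); // objects inherit from objects
programmer.talk = function () {         // overriding the inherited method
    return 'Hello, world!';
};
programmer.writeCode = function () {
    return 'var x = 1;';
};

// Polymorphism: the same method name, a different response per object
[person, programmer].forEach(function (bob) {
    console.log(bob.talk()); // "Hello!" and then "Hello, world!"
});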
Summary

In this article, you learned about how JavaScript came to be and where it is today. You were also introduced to ECMAScript 5 and ECMAScript 6, and we discussed some of the object-oriented programming concepts.

How to build a JavaScript Microservices platform

Andrea Falzetti
20 Dec 2016
6 min read
Microservices is one of the most popular topics in software development these days, as are JavaScript and chat bots. In this post, I share my experience in designing and implementing a platform using a microservices architecture. I will also talk about which tools were picked for this platform and how they work together. Let's start by giving you some context about the project.

The challenge

The startup I am working with had the idea of building a platform to help people with depression by tracking and managing the habits that keep them healthy, positive, and productive. The final product is an iOS app with a conversational UI, similar to a chat bot, and probably not very intelligent in version 1.0, but still with some feelings!

Technology stack

For this project, we decided to use Node.js for building the microservices, React Native for the mobile app, ReactJS for the Admin Dashboard, and ElasticSearch and Kibana for logging and monitoring the applications. And yes, we do like JavaScript!

Node.js Microservices Toolbox

There are many definitions of a microservice, but I am assuming we agree on a common statement that describes a microservice as an independent component that performs certain actions within your systems, or in a nutshell, a part of your software that solves a specific problem that you have. I got interested in microservices this year, especially when I found out there was a Node.js toolkit called Seneca that helps you organize and scale your code. Unfortunately, my enthusiasm didn't last long, as I faced the first issue: the learning curve to approach Seneca was too high for this project. However, even though I ended up not using it, I wanted to include it here, because many people are successfully using it, and I think you should be aware of it, and at least consider looking at it.

Instead, we decided to go for a simpler way. We split our project into small Node applications, and using pm2, we deploy our microservices using a pm2 configuration file called ecosystem.json (a minimal sketch of such a file follows below). As explained in the documentation, this is a good way of keeping your deployment simple, organized, and monitored. If you like control dashboards, graphs, and colored progress bars, you should look at pm2 Keymetrics; it offers a nice overview of all your processes.

We also found it extremely useful to create a GitHub machine user, which essentially is a normal GitHub account, with its own attached ssh-key, which grants access to the repositories that contain the project's code. Additionally, we created a node user on our virtual machine with that ssh-key loaded in. All of the microservices run under this node user, which has access to the code base through the machine user's ssh-key. In this way, we can easily pull the latest code and deploy new versions. We finally attached our own ssh-keys to the node user so each developer can log in as the node user via ssh:

ssh node@<IP>
cd ./project/frontend-api
git pull

This works without being prompted for a password or authorization token from GitHub. Then, using pm2, we restart the microservice:

pm2 restart frontend-api
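For reference, here is a minimal sketch of what such a pm2 configuration could look like. The service names, paths, and ports below are invented for illustration and are not the actual project's values; pm2 also accepts the JavaScript flavor of the file (commonly named ecosystem.config.js), which is what is shown here:

// ecosystem.config.js: hypothetical services and paths
module.exports = {
    apps: [
        {
            name: 'frontend-api',
            script: './frontend-api/index.js',
            env: { NODE_ENV: 'production', PORT: 3000 }
        },
        {
            name: 'chat-bot',
            script: './chat-bot/index.js',
            env: { NODE_ENV: 'production', PORT: 3001 }
        }
    ]
};

// Start every service with: pm2 start ecosystem.config.js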
Another very important element that makes up our microservices architecture is the API Gateway. After evaluating different options, including AWS API Gateway, HAProxy, Varnish, and Tyk, we decided to use Kong, an open source API layer that offers modularity via plugins (everybody loves plugins!) and scalability features. We picked Kong because it supports WebSockets as well as HTTP; unfortunately, not all of the alternatives did.

Using Kong in combination with nginx, we mapped our microservices under a single domain name, http://api.example.com, and exposed each microservice under a specific path:

api.example.com/chat-bot
api.example.com/admin-backend
api.example.com/frontend-api
…

This allows us to run the microservices on separate servers on different ports, while having one single gate for the clients consuming our APIs. Finally, the API Gateway is responsible for allowing only authenticated requests to pass through, so this is a great way of protecting the microservices, because the gateway is the only public component, and all of the microservices run in a private network.

What does a microservice look like, and how do they talk to each other?

We started by creating a microservice-boilerplate package that includes Express to expose some APIs, Passport to allow only authorized clients to use them, winston for logging their activities, and mocha and chai for testing the microservice. We then created an index.js file that initializes the Express app with a default route, /api/ping. This returns a simple JSON containing the message 'pong', which we use to know if the service is down; a sketch of such a file appears at the end of this section. One alternative is to get the status of the process using pm2:

pm2 list
pm2 status <microservice-pid>

Whenever we want to create a new microservice, we start from this boilerplate code. It saves a lot of time, especially if the microservices have a similar shape. The main communication channel between microservices is HTTP via API calls. We are also using web sockets to allow faster communication between some parts of the platform. We decided to use socket.io, a very simple and efficient way of implementing web sockets.

I recommend creating a Node package that contains the business logic, including the objects, models, prototypes, and common functions, such as read and write methods for the database. Using this approach allows you to include the package in each microservice, with the benefit of having just one place to update if something needs to change.
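To make the boilerplate description above concrete, here is a minimal sketch of the health-check part of such an index.js. It is our own reconstruction, not the project's actual code, and it leaves out Passport, winston, and the tests:

var express = require('express');
var app = express();

// Default route used to know whether the service is down
app.get('/api/ping', function (req, res) {
    res.json({ message: 'pong' });
});

var port = process.env.PORT || 3000; // each microservice runs on its own port
app.listen(port, function () {
    console.log('microservice listening on port ' + port);
});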
Conclusions

In this post, I covered the tools used for building a microservice architecture in Node.js. I hope you have found this useful.

About the author

Andrea Falzetti is an enthusiastic full stack developer based in London. He has been designing and developing web applications for over 5 years. He is currently focused on Node.js, React, microservices architecture, serverless, conversational UI, chat bots, and machine learning. He is currently working at Activate Media, where his role is to estimate, design, and lead the development of web and mobile platforms.

Getting Started with React and Bootstrap

Packt
13 Dec 2016
18 min read
In this article by Mehul Bhat and Harmeet Singh, the authors of the book Learning Web Development with React and Bootstrap, you will learn to build two simple real-world examples:

Hello World! with ReactJS
A simple static form application with React and Bootstrap

There are many different ways to build modern web applications with JavaScript and CSS, including a lot of different tool choices, and there is a lot of new theory to learn. This article will introduce you to ReactJS and Bootstrap, which you will likely come across as you learn about modern web application development. It will then take you through the theory behind the Virtual DOM and managing state to create interactive, reusable, and stateful components ourselves, and then push it to a Redux architecture. You will have to learn very carefully about the concepts of props and state management and where each belongs.

If you want to learn to code, then you have to write code, in whichever way you feel comfortable. Try to create small components/code samples, which will give you more clarity/understanding of any technology. Now, let's see how this article is going to make your life easier when it comes to Bootstrap and ReactJS.

Facebook has really changed the way we think about frontend UI development with the introduction of React. One of the main advantages of this component-based approach is that it is easy to understand, as the view is just a function of the properties and state.

We're going to cover the following topics:

Setting up the environment
ReactJS setup
Bootstrap setup
Why Bootstrap?
Static form example with React and Bootstrap

(For more resources related to this topic, see here.)

ReactJS

React (sometimes called React.js or ReactJS) is an open-source JavaScript library that provides a view for data rendered as HTML. Components have typically been used to render React views that contain additional components specified as custom HTML tags. React gives you a trivial virtual DOM, powerful views without templates, unidirectional data flow, and explicit mutation. It is very methodical in updating the HTML document when the data changes, and provides a clean separation of components on a modern single-page application. The example below gives a clear idea of plain HTML encapsulation versus a ReactJS custom HTML tag.

JavaScript code:

<section>
  <h2>Add your Ticket</h2>
</section>
<script>
  var root = document.querySelector('section').createShadowRoot();
  root.innerHTML = '<style>h2{ color: red; }</style>' +
    '<h2>Hello World!</h2>';
</script>

ReactJS code:

var sectionStyle = {
  color: 'red'
};

var AddTicket = React.createClass({
  render: function() {
    return (
      <section>
        <h2 style={sectionStyle}>Hello World!</h2>
      </section>
    );
  }
});

ReactDOM.render(<AddTicket/>, mountNode);

As your app comes into existence and develops, it's advantageous to ensure that your components are used in the right manner. The React app consists of reusable components, which makes code reuse, testing, and separation of concerns easy. React is not only the V in MVC; it has stateful components (stateful components remember everything within this.state; a small sketch follows below). It handles mapping from input to state changes, and it renders components. In this sense, it does everything that an MVC does.

Let's look at React's component life cycle and its different levels. Observe the following screenshot:

React isn't an MVC framework; it's a library for building a composable user interface and reusable components.
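To illustrate the stateful part, here is a small hypothetical component written in the same ES5/createClass style the article uses; the component name, the counter logic, and the mount node id are ours. Everything the component remembers lives in this.state, and calling this.setState re-renders the view:

var ClickCounter = React.createClass({
  getInitialState: function () {
    return { clicks: 0 }; // the component's memory
  },
  handleClick: function () {
    // setState updates the state and triggers a re-render
    this.setState({ clicks: this.state.clicks + 1 });
  },
  render: function () {
    return (
      <button onClick={this.handleClick}>
        Clicked {this.state.clicks} times
      </button>
    );
  }
});

// Assumes a <div id="counter"></div> mount node in the page
ReactDOM.render(<ClickCounter/>, document.getElementById('counter'));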
React is used at Facebook in production, and https://www.instagram.com/ is entirely built on React.

Setting up the environment

When we start to make an application with ReactJS, we need to do some setup, which just involves an HTML page and including a few files. First, we create a directory (folder) called Chapter 1. Open it up in any code editor of your choice. Create a new file called index.html directly inside it and add the following HTML5 boilerplate code:

<!doctype html>
<html class="no-js" lang="">
<head>
  <meta charset="utf-8">
  <title>ReactJS Chapter 1</title>
</head>
<body>
  <!--[if lt IE 8]>
  <p class="browserupgrade">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
  <![endif]-->
  <!-- Add your site or application content here -->
  <p>Hello world! This is HTML5 Boilerplate.</p>
</body>
</html>

This is a standard HTML page that we can update once we have included the React and Bootstrap libraries. Now we need to create a couple of folders inside the Chapter 1 folder, named images, css, and js (JavaScript), to make your application manageable. Once you have completed the folder structure it will look like this:

Installing ReactJS and Bootstrap

Once we have finished creating the folder structure, we need to install both of our frameworks, ReactJS and Bootstrap. It's as simple as including JavaScript and CSS files in your page. We can do this via a content delivery network (CDN), such as Google or Microsoft, but we are going to fetch the files manually in our application so that we don't have to be dependent on the Internet while working offline.

Installing React

First, we have to go to the URL https://facebook.github.io/react/ and hit the download button. This will give you a ZIP file of the latest version of ReactJS that includes the ReactJS library files and some sample code for ReactJS. For now, we will only need two files in our application: react.min.js and react-dom.min.js from the build directory of the extracted folder. Here are a few steps we need to follow:

Copy react.min.js and react-dom.min.js to your project directory, the Chapter 1/js folder, and open up your index.html file in your editor. Now you just need to add the following scripts in your page's head tag section:

<script type="text/javascript" src="js/react.min.js"></script>
<script type="text/javascript" src="js/react-dom.min.js"></script>

Now we need to include the compiler in our project to build the code, because right now we are not using tools such as npm. We will download the file from the following CDN path, https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js, or you can give the CDN path directly.

The head tag section will look like this:

<script type="text/javascript" src="js/react.min.js"></script>
<script type="text/javascript" src="js/react-dom.min.js"></script>
<script type="text/javascript" src="js/browser.min.js"></script>

Here is what the final structure of your js folder will look like:

Bootstrap

Bootstrap is an open source frontend framework maintained by Twitter for developing responsive websites and web applications. It includes HTML, CSS, and JavaScript code to build user interface components. It's a fast and easy way to develop a powerful mobile-first user interface. The Bootstrap grid system allows you to create responsive 12-column grids, layouts, and components. It includes predefined classes for easy layout options (fixed-width and full-width).
Bootstrap has a dozen pre-styled reusable components and custom jQuery plugins, such as button, alerts, dropdown, modal, tooltip, tab, pagination, carousel, badges, icons, and much more.

Installing Bootstrap

Now, we need to install Bootstrap. Visit http://getbootstrap.com/getting-started/#download and hit the Download Bootstrap button. This includes the compiled and minified versions of the css and js for our app; we just need the CSS bootstrap.min.css and the fonts folder. This style sheet will provide you with the look and feel of all of the components, and the responsive layout structure for our application. Previous versions of Bootstrap included icons as images but, in version 3, icons have been replaced by fonts. We can also customize the Bootstrap CSS style sheet as per the components used in your application:

Extract the ZIP folder and copy the Bootstrap CSS from the css folder to your project's css folder. Now copy the fonts folder of Bootstrap into your project root directory. Open your index.html in your editor and add this link tag in your head section:

<link rel="stylesheet" href="css/bootstrap.min.css">

That's it. Now we can open up index.html again, but this time in your browser, to see what we are working with. Here is the code that we have written so far:

<!doctype html>
<html class="no-js" lang="">
<head>
  <meta charset="utf-8">
  <title>ReactJS Chapter 1</title>
  <link rel="stylesheet" href="css/bootstrap.min.css">
  <script type="text/javascript" src="js/react.min.js"></script>
  <script type="text/javascript" src="js/react-dom.min.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script>
</head>
<body>
  <!--[if lt IE 8]>
  <p class="browserupgrade">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
  <![endif]-->
  <!-- Add your site or application content here -->
</body>
</html>

Using React

So now we've included ReactJS and the Bootstrap style sheet, and we've initialized our app. Now let's start to write our first Hello World app using ReactDOM.render(). The first argument of the ReactDOM.render method is the component we want to render, and the second is the DOM node to which it should mount (append):

ReactDOM.render( ReactElement element, DOMElement container, [function callback])

In order to translate the JSX to vanilla JavaScript, we wrap our React code in a <script type="text/babel"> tag, which actually performs the transformation in the browser. Let's start out by putting one div tag in our body tag:

<div id="hello"></div>

Now, add the script tag with the React code:

<script type="text/babel">
  ReactDOM.render(
    <h1>Hello, world!</h1>,
    document.getElementById('hello')
  );
</script>

Let's open the HTML page in your browser. If you see Hello, world! in your browser, then we are on the right track. In the preceding screenshot, you can see Hello, world! shown in your browser. That's great. We have successfully completed our setup and built our first Hello World app.
Here is the full code that we have written so far:

<!doctype html>
<html class="no-js" lang="">
<head>
  <meta charset="utf-8">
  <title>ReactJS Chapter 1</title>
  <link rel="stylesheet" href="css/bootstrap.min.css">
  <script type="text/javascript" src="js/react.min.js"></script>
  <script type="text/javascript" src="js/react-dom.min.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script>
</head>
<body>
  <!--[if lt IE 8]>
  <p class="browserupgrade">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
  <![endif]-->
  <!-- Add your site or application content here -->
  <div id="hello"></div>
  <script type="text/babel">
    ReactDOM.render(
      <h1>Hello, world!</h1>,
      document.getElementById('hello')
    );
  </script>
</body>
</html>

Static form with React and Bootstrap

We have completed our first Hello World app with React and Bootstrap, and everything looks good and as expected. Now it's time to do more and create a static login form, applying the Bootstrap look and feel to it. Bootstrap is a great way to give your app a responsive grid system for different mobile devices and apply fundamental styles to HTML elements with the inclusion of a few classes and divs. The responsive grid system is an easy, flexible, and quick way to make your web application responsive and mobile-first, and it appropriately scales up to 12 columns per device and viewport size.

First, let's start making an HTML structure that follows the Bootstrap grid system. Create a div and add a className of .container (for fixed width) or .container-fluid (for full width). Use the className attribute instead of class:

<div className="container-fluid"></div>

As we know, class and for are discouraged as XML attribute names. Moreover, these are reserved words in many JavaScript libraries, so, to have a clear difference and identical understanding, we use className and htmlFor instead. Next, create a div and add the className row. The row must be placed within .container-fluid:

<div className="container-fluid">
  <div className="row"></div>
</div>

Now create columns, which must be immediate children of a row:

<div className="container-fluid">
  <div className="row">
    <div className="col-lg-6"></div>
  </div>
</div>

.row and .col-xs-4 are predefined classes that are available for quickly making grid layouts.

Add the h1 tag for the title of the page:

<div className="container-fluid">
  <div className="row">
    <div className="col-sm-6">
      <h1>Login Form</h1>
    </div>
  </div>
</div>

Grid columns are created by specifying how many of the 12 available columns you wish to span; for example, if we are using a four-column layout, we need to specify col-sm-3 to get four equal columns:

col-sm-* Small devices
col-md-* Medium devices
col-lg-* Large devices

We are using the col-sm-* prefix to resize our columns for small devices. Inside the columns, we need to wrap our form elements, label and input tags, in a div tag with the form-group class:

<div className="form-group">
  <label for="emailInput">Email address</label>
  <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
</div>

To pick up Bootstrap's styling, we need to add the form-control class to our input elements. If we need extra padding in our label tag, then we can add the control-label class to the label. Let's quickly add the rest of the elements. I am going to add a password field and a submit button.
In previous versions of Bootstrap, form elements were usually wrapped in an element with the form-actions class. However, in Bootstrap 3, we just need to use the same form-group instead of form-actions. Here is our complete HTML code:

<div className="container-fluid">
  <div className="row">
    <div className="col-lg-6">
      <form>
        <h1>Login Form</h1>
        <hr/>
        <div className="form-group">
          <label for="emailInput">Email address</label>
          <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
        </div>
        <div className="form-group">
          <label for="passwordInput">Password</label>
          <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
        </div>
        <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
      </form>
    </div>
  </div>
</div>

Now create an object named LoginformHTML inside the script tag and assign this HTML to it:

var LoginformHTML = <div className="container-fluid">
  <div className="row">
    <div className="col-lg-6">
      <form>
        <h1>Login Form</h1>
        <hr/>
        <div className="form-group">
          <label for="emailInput">Email address</label>
          <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
        </div>
        <div className="form-group">
          <label for="passwordInput">Password</label>
          <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
        </div>
        <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
      </form>
    </div>
  </div>

We will pass this object to the ReactDOM.render() method instead of directly passing the HTML:

ReactDOM.render(LoginformHTML, document.getElementById('hello'));

Our form is ready. Now let's see how it looks in the browser:

The compiler is unable to parse our HTML because we have not enclosed one of the div tags properly. You can see in our HTML that we have not closed the wrapper container-fluid at the end. Now close the wrapper tag at the end and open the file again in your browser.

Note: Whenever you hand-code (write) your HTML, please double-check your start and end tags. They should be written and closed properly; otherwise it will break your UI/frontend look and feel.

Here is the HTML after closing the div tag:

<!doctype html>
<html class="no-js" lang="">
<head>
  <meta charset="utf-8">
  <title>ReactJS Chapter 1</title>
  <link rel="stylesheet" href="css/bootstrap.min.css">
  <script type="text/javascript" src="js/react.min.js"></script>
  <script type="text/javascript" src="js/react-dom.min.js"></script>
  <script src="js/browser.min.js"></script>
</head>
<body>
  <!-- Add your site or application content here -->
  <div id="loginForm"></div>
  <script type="text/babel">
    var LoginformHTML = <div className="container-fluid">
      <div className="row">
        <div className="col-lg-6">
          <form>
            <h1>Login Form</h1>
            <hr/>
            <div className="form-group">
              <label for="emailInput">Email address</label>
              <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
            </div>
            <div className="form-group">
              <label for="passwordInput">Password</label>
              <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
            </div>
            <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
          </form>
        </div>
      </div>
    </div>

    ReactDOM.render(LoginformHTML, document.getElementById('loginForm'));
  </script>
</body>
</html>

Now, you can check your page in the browser, and you will be able to see the form with the look and feel shown below. Now it's working fine and looks good.
Bootstrap also provides two additional classes to make your elements smaller and larger: input-lg and input-sm. You can also check the responsive behavior by resizing the browser. That looks great. Our small static login form application is ready with responsive behavior.

Some of the benefits are:

Rendering your component is very easy
Reading a component's code is very easy with the help of JSX
JSX will also help you to check your layout as well as how components plug in with each other
You can test your code easily, and it also allows integration of other tools for enhancement
As we know, React is a view layer; you can also use it with other JavaScript frameworks

Summary

Our simple static login form application and Hello World examples are looking great and working exactly how they should, so let's recap what we've learned in this article.

To begin with, we saw just how easy it is to get ReactJS and Bootstrap installed with the inclusion of JavaScript files and a style sheet. We also looked at how a React application is initialized, and started building our first form application.

The Hello World app and the form application which we have created demonstrate some of React's and Bootstrap's basic features, such as the following:

ReactDOM
Render
Browserify
Bootstrap

With Bootstrap, we worked towards having a responsive grid system for different mobile devices and applied the fundamental styles of HTML elements with the inclusion of a few classes and divs. We also saw the framework's new mobile-first responsive design in action, without cluttering up our markup with unnecessary classes or elements.

Structuring Your Projects

Packt
24 Nov 2016
20 min read
In this article written by Serghei Iakovlev and David Schissler, authors of the book Phalcon Cookbook, we will cover:

Choosing the best place for an implementation
Automation of routine tasks
Creating the application structure by using code generation tools

(For more resources related to this topic, see here.)

Introduction

In this article you will learn that, often, when creating new projects, developers face issues such as what components they should create, where to place them in the application structure, what each component should implement, what naming convention to follow, and so on. Actually, creating custom components isn't a difficult matter; we will sort it out in this article. We will create our own component, which will display different menus on your site depending on where we are in the application.

From one project to another, the developer's work is usually repeated. This holds true for tasks such as creating the project structure, configuration, creating data models, controllers, views, and so on. For those tasks, we will discover the power of Phalcon Developer Tools and how to use them. You will learn how to create an application skeleton by using one command, and even how to create a fully functional application prototype in less than 10 minutes without writing a single line of code.

Developers often come up against a situation where they need to create a lot of predefined code templates. Until you are really familiar with the framework, it can be useful for you to do everything manually. But anyway, all of us would like to reduce repetitive tasks. Phalcon tries to help you by providing an easy and at the same time flexible code generation tool named Phalcon Developer Tools. These tools help you simplify the creation of CRUD components for a regular application. Therefore, you can create working code in a matter of seconds without creating the code yourself.

Often, when creating an application using a framework, we need to extend or add functionality to the framework components. We don't have to reinvent the wheel by rewriting those components. We can use class inheritance and extensibility, but often this approach does not work. In such cases, it is better to use additional layers between the main application and the framework by creating a middleware layer. The term middleware has a wide range of meanings, but in the context of PHP web applications it means code which will be called in turn by each request. We will look into the main principles of creating and using middleware in your application.

We will not get into each solution in depth, but instead we will work with tasks that are common for most projects, and implementations extending Phalcon.

Choosing the best place for an implementation

Let's pretend you want to add a custom component. As the case may be, this component allows you to change your site's navigation menu. For example, when you have a Sign In link on your navigation menu and you are logged in, that link needs to change to Sign Out.

Getting ready…

For successful implementation of this recipe you must have your application deployed.
By this we mean that you need to have a web server installed and configured for handling requests to your application, the application must be able to receive requests, and you must have implemented the necessary components such as Controllers, Views, and a bootstrap file. For this recipe, we assume that our application is located in the apps directory. If this is not the case, you should change this part of the path in the examples shown in this article.

How to do it…

Follow these steps to complete this recipe:

Create the app/library/ directory, if you haven't got one, where user components will be stored. Next, create the Elements (app/library/Elements.php) component. This class extends Phalcon\Mvc\User\Component. Generally, it is not necessary, but it helps get access to application services quickly. The contents of Elements should be:

<?php
namespace Library;

use Phalcon\Mvc\User\Component;
use Phalcon\Mvc\View\Simple as View;

class Elements extends Component
{
    public function __construct()
    {
        // ...
    }

    public function getMenu()
    {
        // ...
    }
}

Now we register this class in the Dependency Injection Container. We use a shared instance in order to prevent creating new instances on each service resolution:

$di->setShared('elements', function () {
    return new \Library\Elements();
});

If your Session service is not initialized yet, it's time to do it in your bootstrap file. We use a shared instance for the same reasons:

$di->setShared('session', function () {
    $session = new \Phalcon\Session\Adapter\Files();
    $session->start();
    return $session;
});

Create the templates directory within your views directory (views/templates). Then you need to tell the class autoloader about the new namespace, which we have just introduced. Let's do it in the following way:

$loader->registerNamespaces([
    // The APP_PATH constant should point
    // to the project's root
    'Library' => APP_PATH . '/apps/library/',
    // ...
]);

Add the following code right after the opening body tag in the main layout of your application:

<div class="container">
  <div class="navbar navbar-inverse">
    <div class="container-fluid">
      <div class="navbar-header">
        <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#blog-top-menu" aria-expanded="false">
          <span class="sr-only">Toggle navigation</span>
          <span class="icon-bar"></span>
          <span class="icon-bar"></span>
          <span class="icon-bar"></span>
        </button>
        <a class="navbar-brand" href="#">Blog 24</a>
      </div>
      <?php echo $this->elements->getMenu(); ?>
    </div>
  </div>
</div>

Next, we need to create a template for displaying your top menu. Let's create it in views/templates/topMenu.phtml:

<div class="collapse navbar-collapse" id="blog-top-menu">
  <ul class="nav navbar-nav">
    <li class="active">
      <a href="#">Home</a>
    </li>
  </ul>
  <ul class="nav navbar-nav navbar-right">
    <li>
      <?php if ($this->session->get('identity')): ?>
        <a href="#">Sign Out</a>
      <?php else: ?>
        <a href="#">Sign In</a>
      <?php endif; ?>
    </li>
  </ul>
</div>

Now, let's put the component to work. First, create the protected field $simpleView and initialize it in the constructor:

public function __construct()
{
    $this->simpleView = new View();
    $this->simpleView->setDI($this->getDI());
}

And finally, implement the getMenu method as follows:

public function getMenu()
{
    $this->simpleView->setViewsDir($this->view->getViewsDir());
    return $this->simpleView->render('templates/topMenu');
}

Open the main page of your site to ensure that your top menu is rendered.
How it works…

The main idea of our component is to generate a top menu, and to display the correct menu option depending on the situation, meaning whether the user is authorized or not. We create the user component, Elements, putting it in a place specially designed for the purpose. Of course, when creating the library directory and placing a new class there, we should tell the autoloader about the new namespace. This is exactly what we have done.

However, we should take note of one important peculiarity. We should note that if you want to get access to your components quickly, even in HTML templates like $this->elements, then you should put the components in the DI container. Therefore, we put our component, Library\Elements, in the container named elements. Since our component inherits Phalcon\Mvc\User\Component, we are able to access all registered application services just by their names. For example, the instruction $this->view can be written in a longer form, $this->getDi()->getShared('view'), but the first one is obviously more concise.

Although not necessary, for application structure purposes, it is better to use a separate directory for different views not connected directly to specific controllers and actions. In our case, the directory views/templates serves this purpose. We create an HTML template for menu rendering and place it in views/templates/topMenu.phtml. When using the getMenu method, our component will render the view topMenu.phtml and return its HTML.

In the getMenu method, we get the current path for all our views and set it for the Phalcon\Mvc\View\Simple component, created earlier in the constructor. In the topMenu view, we access the session component, which we placed in the DI container earlier. By generating the menu, we check whether the user is authorized or not. In the former case, we use the Sign Out menu item; in the latter case, we display the menu item with an invitation to Sign In.

Automation of routine tasks

The Phalcon project provides you with a great tool named Developer Tools. It helps automate repetitive tasks by means of code generation of components as well as a project skeleton. Most of the components of your application can be created with just one command. In this recipe, we will consider the Developer Tools installation and configuration in depth.

Getting Ready…

Before you begin to work on this recipe, you should have a DBMS configured, and a web server installed and configured for handling requests from your application. You may need to configure a virtual host (this is optional) for your application, which will receive and handle requests. You should be able to open your newly-created project in a browser at http://{your-host-here}/appname or http://{your-host-here}/, where your-host-here is the name of your project. You should have Git installed, too.

In this recipe, we assume that your operating system is Linux. Developer Tools installation instructions for Mac OS X and Windows will be similar. You can find the link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables, and chose MySQL as our RDBMS. Your setup might vary. The choice of a tool for creating a table in your database, as well as of a particular DBMS, is yours. Note that the syntax for creating a table by using DBMSs other than MySQL may vary.
How to do it…

Follow these steps to complete this recipe:

Clone Developer Tools into your home directory:

git clone git@github.com:phalcon/phalcon-devtools.git devtools

Go to the newly created devtools directory, run the ./phalcon.sh command, and wait for a message about successful installation completion:

$ ./phalcon.sh
Phalcon Developer Tools Installer
Make sure phalcon.sh is in the same dir as phalcon.php and that you are running this with sudo or as root.
Installing Devtools...
Working dir is: /home/user/devtools
Generating symlink...
Done. Devtools installed!

Run the phalcon command without arguments to see the available command list and your current Phalcon version:

$ phalcon
Phalcon DevTools (3.0.0)
Available commands:
  commands (alias of: list, enumerate)
  controller (alias of: create-controller)
  model (alias of: create-model)
  module (alias of: create-module)
  all-models (alias of: create-all-models)
  project (alias of: create-project)
  scaffold (alias of: create-scaffold)
  migration (alias of: create-migration)
  webtools (alias of: create-webtools)

Now, let's create our project. Go to the folder where you plan to create the project and run the following command:

$ phalcon project myapp simple

Open the website which you have just created with the previous command in your browser. You should see a message about the successful installation.

Create a database for your project:

mysql -e 'CREATE DATABASE myapp' -u root -p

You will need to configure your application to connect to the database. Open the file app/config/config.php and correct the database connection configuration. Pay attention to the baseUri parameter if you have not configured your virtual host according to your project. The value of this parameter must be / or /myapp/. As a result, your configuration file must look like this:

<?php
use Phalcon\Config;

defined('APP_PATH') || define('APP_PATH', realpath('.'));

return new Config([
    'database' => [
        'adapter'  => 'Mysql',
        'host'     => 'localhost',
        'username' => 'root',
        'password' => '',
        'dbname'   => 'myapp',
        'charset'  => 'utf8',
    ],
    'application' => [
        'controllersDir' => APP_PATH . '/app/controllers/',
        'modelsDir'      => APP_PATH . '/app/models/',
        'migrationsDir'  => APP_PATH . '/app/migrations/',
        'viewsDir'       => APP_PATH . '/app/views/',
        'pluginsDir'     => APP_PATH . '/app/plugins/',
        'libraryDir'     => APP_PATH . '/app/library/',
        'cacheDir'       => APP_PATH . '/app/cache/',
        'baseUri'        => '/myapp/',
    ]
]);

Now, after you have configured the database access, let's create a users table in your database. Create the users table and fill it with the primary data:

CREATE TABLE `users` (
  `id` INT(11) unsigned NOT NULL AUTO_INCREMENT,
  `email` VARCHAR(128) NOT NULL,
  `first_name` VARCHAR(64) DEFAULT NULL,
  `last_name` VARCHAR(64) DEFAULT NULL,
  `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `users_email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES
('john@doe.com', 'John', 'Doe'),
('janie@doe.com', 'Janie', 'Doe');

After that, we need to create a new controller, UsersController. This controller must provide us with the main CRUD actions on the Users model and, if necessary, display data with the appropriate views.
Let's do it with just one command:

$ phalcon scaffold users

In your web browser, open the URL associated with your newly created resource User and try to find one of the users of our database table at http://{your-host-here}/appname/users (or http://{your-host-here}/users, depending on how you have configured your server for application request handling).

Finally, open your project in your file manager to see the whole project structure created with Developer Tools:

+-- app
|   +-- cache
|   +-- config
|   +-- controllers
|   +-- library
|   +-- migrations
|   +-- models
|   +-- plugins
|   +-- schemas
|   +-- views
|       +-- index
|       +-- layouts
|       +-- users
+-- public
    +-- css
    +-- files
    +-- img
    +-- js
    +-- temp

How it works…

We installed Developer Tools with only two commands, git clone and ./phalcon.sh. This is all we need to start using this powerful code generation tool.

Next, using only one command, we created a fully functional application environment. At this stage, the application doesn't represent something outstanding in terms of features, but we have saved the time needed to manually create the application structure. Developer Tools did that for you! If you examine your newly created project after this command completes, you may notice that the primary application configuration has been generated as well, including the bootstrap file. Actually, the phalcon project command has additional options that we have not demonstrated in this recipe; we are focusing on the main commands. Enter the command help to see all available project creation options:

$ phalcon project help

In the modern world, you can hardly find a web application which works without access to a database. Our application isn't an exception. We created a database for our application, and then we created a users table and filled it with primary data. Of course, we need to supply our application with the database access parameters as well as the database name, which is what we have done in the app/config/config.php file.

After the successful database and table creation, we used the scaffold command for predefined code template generation, particularly the Users controller with all the main CRUD actions, all the necessary views, and the Users model. As before, we used only one command to generate all those files.

Phalcon Developer Tools is equipped with a good number of different useful tools. To see all the available options, you can use the command help. We have taken only a few minutes to create the first version of our application. Instead of spending time on repetitive tasks (such as the creation of the application skeleton), we can now use that time to do more exciting tasks. Phalcon Developer Tools helps us save time where possible. But wait, there is more! The project is evolving, and it becomes more featureful day by day. If you have any problems, you can always visit the project on GitHub https://github.com/phalcon/phalcon-devtools and search for a solution.

There's more…

You can find more information on Phalcon Developer Tools installation for Windows and OS X at: https://docs.phalconphp.com/en/latest/reference/tools.html. More detailed information on web server configuration can be found at: https://docs.phalconphp.com/en/latest/reference/install.html

Creating the application structure by using code generation tools

In the following recipe, we will discuss the available code generation tools that can be used for creating a multi-module application. We don't need to create the application structure and main components manually.
Getting Ready…

Before you begin, you need to have Git installed, as well as any DBMS (for example, MySQL, PostgreSQL, SQLite, and the like), the Phalcon PHP extension (usually it is named php5-phalcon), and a PHP extension which offers database connectivity support using PDO (for example, php5-mysql, php5-pgsql, php5-sqlite, and the like). You also need to be able to create tables in your database.

To accomplish the following recipe, you will require Phalcon Developer Tools. If you already have it installed, you may skip the first three steps related to the installation and go to the fourth step. In this recipe, we assume that your operating system is Linux. Developer Tools installation instructions for Mac OS X and Windows will be similar. You can find the link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables, and chose MySQL as our RDBMS. Your setup might vary. The choice of a tool for creating a table in your database, as well as of a particular DBMS, is yours. Note that the syntax for creating a table by using DBMSs other than MySQL may vary.

How to do it…

Follow these steps to complete this recipe:

First you need to decide where you will install Developer Tools. Suppose you are going to place Developer Tools in your home directory. Then, go to your home directory and run the following command:

git clone git@github.com:phalcon/phalcon-devtools.git

Now browse to the newly created phalcon-devtools directory and run the following command to ensure that there are no problems:

./phalcon.sh

Now, once you have Developer Tools installed, browse to the directory where you intend to create your project, and run the following command:

phalcon project blog modules

If there were no errors during the previous step, create a Help controller by running the following command:

phalcon controller Help --base-class=ControllerBase --namespace=Blog\Frontend\Controllers

Open the newly generated HelpController in the apps/frontend/controllers/HelpController.php file to ensure that you have the needed controller, as well as the initial indexAction.

Open the database configuration in the Frontend module, blog/apps/frontend/config/config.php, and edit the database configuration according to your current environment. Enter the name of an existing database user and a password that has access to that database, and the application database name. You can also change the database adapter that your application needs. If you do not have a database ready, you can create one now.

Now, after you have configured the database access, let's create a users table in your database.
Create the users table and fill it with the primary data:

CREATE TABLE `users` (
  `id` INT(11) unsigned NOT NULL AUTO_INCREMENT,
  `email` VARCHAR(128) NOT NULL,
  `first_name` VARCHAR(64) DEFAULT NULL,
  `last_name` VARCHAR(64) DEFAULT NULL,
  `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `users_email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES
('john@doe.com', 'John', 'Doe'),
('janie@doe.com', 'Janie', 'Doe');

Next, let's create the Controller, Views, Layout, and Model by using the scaffold command:

phalcon scaffold users --ns-controllers=Blog\Frontend\Controllers --ns-models=Blog\Frontend\Models

Open the newly generated UsersController located in the apps/frontend/controllers/UsersController.php file to ensure you have generated all the actions needed for user search, editing, creating, displaying, and deleting.

To check that all actions work as designed, if you have a web server installed and configured for this recipe, you can go to http://{your-server}/users/index. In so doing, you can make sure that the required Users model is created in the apps/frontend/models/Users.php file, all the required views are created in the apps/frontend/views/users folder, and the users layout is created in the apps/frontend/views/layouts folder. If you have a web server installed and configured for displaying the newly created site, go to http://{your-server}/users/search to ensure that the users from our table are shown.

How it works…

In the world of programming, code generation is designed to lessen the burden of manually creating repeated code by using predefined code templates. The Phalcon framework provides perfect code generation tools, which come with Phalcon Developer Tools.

We start with the installation of Phalcon Developer Tools. Note that if you already have Developer Tools installed, you should skip the steps involving their installation. Next, we generate a fully functional MVC application, which implements the multi-module principle. One command is enough to get a working application at once. We save ourselves the trouble of creating the application directory structure, creating the bootstrap file, creating all the required files, and setting up the initial application structure. To that end, we use only one command. It's really great, isn't it?

Our next step is creating a controller. In our example, we use HelpController, which demonstrates this approach to creating controllers. Next, we create the users table in our database and fill it with data. With that done, we use a powerful tool for generating predefined code templates, which is called Scaffold. Using only one command in the Terminal, we generate the controller UsersController with all the necessary actions and appropriate views. Besides this, we get the Users model and the required layout. If you have a web server configured, you can check out the work of Developer Tools at http://{your-server}/users/index.

When we use the Scaffold command, the generator determines the presence and names of our table fields. Based on these data, the tool generates a model, as well as views with the required fields. The generator provides you with ready-to-use code in the controller, and you can change this code according to your needs. However, even if you don't change anything, you can use your controller safely. You can search for users, edit and delete them, create new users, and view them.
And all of this was made possible with one command. We have discussed only some of the features of code generation. Actually, Phalcon Developer Tools has many more features. For help on the available commands, you can use the command phalcon (without arguments).

There's more…

For more detailed information on the installation and configuration of PDO in PHP, visit http://php.net/manual/en/pdo.installation.php. You can find detailed Phalcon Developer Tools installation instructions at https://docs.phalconphp.com/en/latest/reference/tools.html. For more information on Scaffold, refer to https://en.wikipedia.org/wiki/Scaffold_(programming).

Summary

In this article, you learned about the automation of routine tasks and creating the application structure.

Building Our First App – 7 Minute Workout

Packt
14 Nov 2016
27 min read
In this article by Chandermani Arora and Kevin Hennessy, the authors of the book Angular 2 By Example, we will build a new app in Angular, and in the process, develop a better understanding of the framework. This app will also help us explore some new capabilities of the framework.

(For more resources related to this topic, see here.)

The topics that we will cover in this article include the following:

7 Minute Workout problem description: We detail the functionality of the app that we build.
Code organization: For our first real app, we will try to explain how to organize code, specifically Angular code.
Designing the model: One of the building blocks of our app is its model. We design the app model based on the app's requirements.
Understanding the data binding infrastructure: While building the 7 Minute Workout view, we will look at the data binding capabilities of the framework, which include property, attribute, class, style, and event bindings.

Let's get started! The first thing we will do is define the scope of our 7 Minute Workout app.

What is 7 Minute Workout?

We want everyone reading this article to be physically fit. Our purpose is to stimulate your grey matter. What better way to do it than to build an app that targets physical fitness! 7 Minute Workout is an exercise/workout plan that requires us to perform a set of twelve exercises in quick succession within the seven-minute time span. 7 Minute Workout has become quite popular due to its benefits and the short duration of the workout. We cannot confirm or refute the claims, but doing any form of strenuous physical activity is better than doing nothing at all. If you are interested to know more about the workout, then check out http://well.blogs.nytimes.com/2013/05/09/the-scientific-7-minute-workout/.

The technicalities of the app include performing a set of 12 exercises, dedicating 30 seconds to each of the exercises. This is followed by a brief rest period before starting the next exercise. For the app that we are building, we will be taking rest periods of 10 seconds each. So, the total duration comes out to be a little more than 7 minutes. Once the 7 Minute Workout app is ready, it will look something like this:

Downloading the code base

The code for this app can be downloaded from the GitHub site https://github.com/chandermani/angular2byexample dedicated to this article. Since we are building the app incrementally, we have created multiple checkpoints that map to GitHub branches such as checkpoint2.1, checkpoint2.2, and so on. During the narration, we will highlight the branch for reference. These branches will contain the work done on the app up to that point in time. The 7 Minute Workout code is available inside the repository folder named trainer. So let's get started!

Setting up the build

Remember that we are building on a modern platform for which browsers still lack support. Therefore, directly referencing script files in HTML is out of the question (while common, it's a dated approach that we should avoid anyway). The current browsers do not understand TypeScript; as a matter of fact, even ES 2015 (also known as ES6) is not supported. This implies that there has to be a process that converts code written in TypeScript into standard JavaScript (ES5), which browsers can work with. Hence, having a build setup for almost any Angular 2 app becomes imperative. Having a build process may seem like overkill for a small application, but it has some other advantages as well.
If you are a frontend developer working on the web stack, you cannot avoid Node.js. This is the most widely used platform for Web/JavaScript development. So, no prizes for guessing that the Angular 2 build setup too is supported over Node.js, with tools such as Grunt, Gulp, JSPM, and webpack. Since we are building on the Node.js platform, install Node.js before starting.

While there are quite elaborate build setup options available online, we go for a minimal setup using Gulp. The reason is that there is no one-size-fits-all solution out there. Also, the primary aim here is to learn about Angular 2 and not to worry too much about the intricacies of setting up and running a build. Some of the notable starter sites plus build setups created by the community are as follows:

angular2-webpack-starter: http://bit.ly/ng2webpack
angular2-seed: http://bit.ly/ng2seed
angular-cli (it allows us to generate the initial code setup, including the build configurations, and has good scaffolding capabilities too): http://bit.ly/ng2-cli

A natural question arises if you are very new to Node.js or the overall build process: what does a typical Angular build involve? It depends! To get an idea about this process, it would be beneficial if we look at the build setup defined for our app. Let's set up the app's build locally then. Follow these steps to have the boilerplate Angular 2 app up and running:

Download the base version of this app from http://bit.ly/ng2be-base and unzip it to a location on your machine. If you are familiar with how Git works, you can just check out the branch base:

git checkout base

This code serves as the starting point for our app. Navigate to the trainer folder from the command line and execute these commands:

npm i -g gulp
npm install

The first command installs Gulp globally so that you can invoke the Gulp command-line tool from anywhere and execute Gulp tasks. A Gulp task is an activity that Gulp performs during the build execution. If we look at the Gulp build script (which we will do shortly), we realize that it is nothing but a sequence of tasks performed whenever a build occurs. The second command installs the app's dependencies (in the form of npm packages). Packages in the Node.js world are third-party libraries that are either used by the app or support the app's building process. For example, Gulp itself is a Node.js package. The npm is a command-line tool for pulling these packages from a central repository.

Once Gulp is installed and npm pulls dependencies from the npm store, we are ready to build and run the application. From the command line, enter the following command:

gulp play

This compiles and runs the app. If the build process goes fine, the default browser window/tab will open with a rudimentary "Hello World" page (http://localhost:9000/index.html). We are all set to begin developing our app in Angular 2! But before we do that, it would be interesting to know what has happened under the hood.

The build internals
Even if you are new to Gulp, looking at gulpfile.js gives you a fair idea about what the build process is doing. A Gulp build is a set of tasks performed in a predefined order. The end result of such a process is some form of packaged code that is ready to be run. And if we are building our apps using TypeScript/ES2015 or some other similar language that browsers do not understand natively, then we need an additional build step, called transpilation.

Code transpiling
As it stands in 2016, browsers still cannot run ES2015 code.
While we are quick to embrace languages that hide the not-so-good parts of JavaScript (ES5), we are still limited by the browser's capabilities. When it comes to language features, ES5 is still the safest bet, as all browsers support it. Clearly, we need a mechanism to convert our TypeScript code into plain JavaScript (ES5). Microsoft has a TypeScript compiler that does this job. The TypeScript compiler takes the TypeScript code and converts it into ES5-format code that can run in all browsers. This process is commonly referred to as transpiling, and since the TypeScript compiler does it, it's called a transpiler.

Interestingly, transpilation can happen at both build/compile time and runtime:

Build-time transpilation: Transpilation as part of the build process takes the script files (in our case, TypeScript .ts files) and compiles them into plain JavaScript. Our build setup uses build-time transpilation.
Runtime transpilation: This happens in the browser at runtime. We include the raw language-specific script files (.ts in our case), and the TypeScript compiler—which is loaded in the browser beforehand—compiles these script files on the fly. While runtime transpilation simplifies the build setup process, as a recommendation, it should be limited to development workflows only, considering the additional performance overhead involved in loading the transpiler and transpiling the code on the fly.

The process of transpiling is not limited to TypeScript. Every language targeted towards the Web, such as CoffeeScript, ES2015, or any other language that is not inherently understood by a browser, needs transpilation. There are transpilers for most languages, and the prominent ones (other than TypeScript) are Traceur and Babel.

To compile TypeScript files, we can install the TypeScript compiler manually from the command line using this:

npm install -g typescript

Once installed, we can compile any TypeScript file into ES5 format using the compiler (tsc.exe). But for our build setup, this process is automated using the ts2js Gulp task (check out gulpfile.js). And if you are wondering when we installed TypeScript… well, we did it as part of the npm install step, when setting up the code for the first time. The gulp-typescript package downloads the TypeScript compiler as a dependency.

With this basic understanding of transpilation, we can summarize what happens with our build setup:

The gulp play command kicks off the build process. This command tells Gulp to start the build process by invoking the play task.
Since the play task has a dependency on the ts2js task, ts2js is executed first. The ts2js task compiles the TypeScript files (.ts) located in the src folder and outputs them to the dist folder at the root.
Post build, a static file server is started that serves all the app files, including static files (images, videos, and HTML) and script files (check the gulp.play task).
Thenceforth, the build process keeps a watch on any script file changes (the gulp.watch task) you make and recompiles the code on the fly.
livereload has also been set up for the app. Any changes to the code refresh the browser running the app automatically. In case the browser refresh fails, we can always do a manual refresh.

This is a rudimentary build setup required to run an Angular app. For complex build requirements, we can always look at the starter/seed projects that have a more complete and robust build setup, or build something of our own. Next let's look at the boilerplate app code already there and the overall code organization, right after a quick peek at what a task such as ts2js could look like.
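As a rough guide only, here is a minimal sketch of a ts2js-style Gulp task, reconstructed from the compiler options (noImplicitAny, module, target) and the tsResult excerpt quoted later in this article; the actual gulpfile.js in the repository contains more tasks and may differ in detail:

var gulp = require('gulp');
var ts = require('gulp-typescript');
var sourcemaps = require('gulp-sourcemaps');

// Compile every TypeScript file under src into plain ES5 JavaScript
// (as SystemJS modules, with inline source maps) and emit it to dist.
gulp.task('ts2js', function () {
    var tsResult = gulp.src('src/**/*.ts')
        .pipe(sourcemaps.init())
        .pipe(ts({
            noImplicitAny: true,
            module: 'system',
            target: 'ES5'
        }));

    return tsResult.js
        .pipe(sourcemaps.write())
        .pipe(gulp.dest('dist'));
});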
Organizing code
This is how we are going to organize our code and other assets for the app:

The trainer folder is the root folder for the app, and it has a folder (static) for the static content (such as images, CSS, audio files, and others) and a folder (src) for the app's source code. The organization of the app's source code is heavily influenced by the design of Angular and the Angular style guide (http://bit.ly/ng2-style-guide) released by the Angular team. The components folder hosts all the components that we create. We will be creating subfolders in this folder for every major component of the application. Each component folder will contain artifacts related to that component, which include its template, its implementation, and other related items. We will also keep adding more top-level folders (inside the src folder) as we build the application.

If we look at the code now, the components/app folder defines a root-level component, TrainerAppComponent, and a root-level module, AppModule. bootstrap.ts contains code to bootstrap/load the application module (AppModule).

7 Minute Workout uses Just In Time (JIT) compilation to compile Angular views. This implies that views are compiled just before they are rendered in the browser. Angular has a compiler running in the browser that compiles these views. Angular also supports the Ahead Of Time (AoT) compilation model. With AoT, the views are compiled on the server side using a server version of the Angular compiler. The views returned to the browser are precompiled and ready to be used. For 7 Minute Workout, we stick to the JIT compilation model just because it is easy to set up as compared to AoT, which requires server-side tweaks and package installation. We highly recommend that you use AoT compilation for production apps due to the numerous benefits it offers. AoT can improve the application's initial load time and reduce its size too. Look at the AoT platform documentation (cookbook) at http://bit.ly/ng2-aot to understand how AoT compilation can benefit you.

Time to start working on our first focus area, which is the app's model!

The 7 Minute Workout model
Designing the model for this app requires us to first detail the functional aspects of the 7 Minute Workout app and then derive a model that satisfies those requirements. Based on the problem statement defined earlier, some of the obvious requirements are as follows:

Being able to start the workout.
Providing a visual clue about the current exercise and its progress. This includes the following:
  Providing a visual depiction of the current exercise
  Providing step-by-step instructions on how to do a specific exercise
  The time left for the current exercise
Notifying the user when the workout ends.

Some other valuable features that we will add to this app are as follows:

The ability to pause the current workout.
Providing information about the next exercise to follow.
Providing audio clues so that the user can perform the workout without constantly looking at the screen. This includes:
  A timer click sound
  Details about the next exercise
  Signaling that the exercise is about to start
Showing related videos for the exercise in progress and the ability to play them.

As we can see, the central theme for this app is workout and exercise. Here, a workout is a set of exercises performed in a specific order for a particular duration. So, let's go ahead and define the model for our workout and exercise.
Based on the requirements just mentioned, we will need the following details about an exercise:

The name. This should be unique.
The title. This is shown to the user.
The description of the exercise.
Instructions on how to perform the exercise.
Images for the exercise.
The name of the audio clip for the exercise.
Related videos.

With TypeScript, we can define the classes for our model. Create a folder called workout-runner inside the src/components folder and copy the model.ts file from the checkpoint2.1 branch folder workout-runner (http://bit.ly/ng2be-2-1-model-ts) to the corresponding local folder. model.ts contains the model definition for our app. The Exercise class looks like this:

export class Exercise {
    constructor(
        public name: string,
        public title: string,
        public description: string,
        public image: string,
        public nameSound?: string,
        public procedure?: string,
        public videos?: Array<string>) { }
}

TypeScript Tips
Passing constructor parameters with public or private is a shorthand for creating and initializing class members in one go. The ? suffix after nameSound, procedure, and videos implies that these are optional parameters.

For the workout, we need to track the following properties:

The name. This should be unique.
The title. This is shown to the user.
The exercises that are part of the workout.
The duration of each exercise.
The rest duration between two exercises.

So, the model class (WorkoutPlan) looks like this:

export class WorkoutPlan {
    constructor(
        public name: string,
        public title: string,
        public restBetweenExercise: number,
        public exercises: ExercisePlan[],
        public description?: string) { }

    totalWorkoutDuration(): number { … }
}

The totalWorkoutDuration function returns the total duration of the workout in seconds. WorkoutPlan has a reference to another class in the preceding definition—ExercisePlan. It tracks the exercise and the duration of the exercise in a workout, which is quite apparent once we look at the definition of ExercisePlan:

export class ExercisePlan {
    constructor(
        public exercise: Exercise,
        public duration: number) { }
}

These three classes constitute our base model, and we will decide in the future whether or not we need to extend this model as we start implementing the app's functionality. Since we have started with a preconfigured and basic Angular app, you just need to understand how the app bootstrapping occurs.

App bootstrapping
Look at the src folder. There is a bootstrap.ts file with only the execution bit (other than imports):

platformBrowserDynamic().bootstrapModule(AppModule);

The bootstrapModule function call actually bootstraps the application by loading the root module, AppModule. The process is triggered by this call in index.html:

System.import('app').catch(console.log.bind(console));

The System.import statement sets off the app bootstrapping process by loading the first module from bootstrap.ts. Modules defined in the context of Angular 2 (using the @NgModule decorator) are different from the modules SystemJS loads. SystemJS modules are JavaScript modules, which can be in different formats adhering to the CommonJS, AMD, or ES2015 specifications. Angular modules are constructs used by Angular to segregate and organize its artifacts. Unless the context of discussion is SystemJS, any reference to module implies an Angular module. Now we have details on how SystemJS loads our Angular app.

App loading with SystemJS
SystemJS starts loading the JavaScript module with the call to System.import('app') in index.html.
SystemJS starts by loading bootstrap.ts first. The imports defined inside bootstrap.ts cause SystemJS to then load the imported modules. If these module imports have further import statements, SystemJS loads them too, recursively. And finally, the platformBrowserDynamic().bootstrapModule(AppModule); function gets executed once all the imported modules are loaded.

For the SystemJS import function to work, it needs to know where the module is located. We define this in the file systemjs.config.js and reference it in index.html, before the System.import script:

<script src="systemjs.config.js"></script>

This configuration file contains all of the necessary configuration for SystemJS to work correctly. Open systemjs.config.js; the app parameter to the import function points to a folder dist, as defined on the map object:

var map = {
    'app': 'dist',
    ...
}

And the next variable, packages, contains settings that hint to SystemJS how to load a module from a package when no filename/extension is specified. For app, the default module is bootstrap.js:

var packages = {
    'app': { main: 'bootstrap.js', defaultExtension: 'js' },
    ...
};

Are you wondering what the dist folder has to do with our application? Well, this is where our transpiled scripts end up. As we build our app in TypeScript, the TypeScript compiler converts the .ts script files in the src folder to JavaScript modules and deposits them into the dist folder. SystemJS then loads these compiled JavaScript modules. The transpiled code location has been configured as part of the build definition in gulpfile.js. Look for this excerpt in gulpfile.js:

return tsResult.js
    .pipe(sourcemaps.write())
    .pipe(gulp.dest('dist'))

The module specification used by our app can again be verified in gulpfile.js. Take a look at this line:

noImplicitAny: true, module: 'system', target: 'ES5',

These are TypeScript compiler options, with one being module, that is, the target module definition format. The system module type is a new module format designed to support the exact semantics of ES2015 modules within ES5. Once the scripts are transpiled and the module definitions created (in the target format), SystemJS can load these modules and their dependencies.

It's time to get into the thick of the action; let's build our first component.

Our first component – WorkoutRunnerComponent
To implement WorkoutRunnerComponent, we need to outline the behavior of the application. What we are going to do in the WorkoutRunnerComponent implementation is as follows:

Start the workout.
Show the workout in progress and show the progress indicator.
After the time elapses for an exercise, show the next exercise.
Repeat this process until all the exercises are over.

Let's start with the implementation. The first thing that we will create is the WorkoutRunnerComponent implementation. Open the workout-runner folder in the src/components folder and add a new code file called workout-runner.component.ts to it. Add this chunk of code to the file:

import {WorkoutPlan, ExercisePlan, Exercise} from './model'

export class WorkoutRunnerComponent {
}

The import module declaration allows us to reference the classes defined in the model.ts file in WorkoutRunnerComponent. We first need to set up the workout data.
Let's do that by adding a constructor and related class properties to the WorkoutRunnerComponent class:

workoutPlan: WorkoutPlan;
restExercise: ExercisePlan;
constructor() {
    this.workoutPlan = this.buildWorkout();
    this.restExercise = new ExercisePlan(
        new Exercise("rest", "Relax!", "Relax a bit", "rest.png"),
        this.workoutPlan.restBetweenExercise);
}

The buildWorkout method on WorkoutRunnerComponent sets up the complete workout, as we will see shortly. We also initialize a restExercise variable to track even the rest periods as exercise (note that restExercise is an object of type ExercisePlan). The buildWorkout function is a lengthy function, so it's better if we copy the implementation from the workout runner's implementation available in the Git branch checkpoint2.1 (http://bit.ly/ng2be-2-1-workout-runner-component-ts). The buildWorkout code looks like this:

buildWorkout(): WorkoutPlan {
    let workout = new WorkoutPlan("7MinWorkout", "7 Minute Workout", 10, []);
    workout.exercises.push(
        new ExercisePlan(
            new Exercise(
                "jumpingJacks",
                "Jumping Jacks",
                "A jumping jack or star jump, also called side-straddle hop is a physical jumping exercise.",
                "JumpingJacks.png",
                "jumpingjacks.wav",
                `Assume an erect position, with feet together and arms at your side. …`,
                ["dmYwZH_BNd0", "BABOdJ-2Z6o", "c4DAnQ6DtF8"]),
            30));
    // (TRUNCATED) Other 11 workout exercise data.
    return workout;
}

This code builds the WorkoutPlan object and pushes the exercise data into the exercises array (an array of ExercisePlan objects), returning the newly built workout.

The initialization is complete; now it's time to actually implement the start of the workout. Add a start function to the WorkoutRunnerComponent implementation, as follows:

start() {
    this.workoutTimeRemaining = this.workoutPlan.totalWorkoutDuration();
    this.currentExerciseIndex = 0;
    this.startExercise(this.workoutPlan.exercises[this.currentExerciseIndex]);
}

Then declare the new variables used in the function at the top, with the other variable declarations:

workoutTimeRemaining: number;
currentExerciseIndex: number;

The workoutTimeRemaining variable tracks the total time remaining for the workout, and currentExerciseIndex tracks the currently executing exercise index. The call to startExercise actually starts an exercise. This is how the code for startExercise looks:

startExercise(exercisePlan: ExercisePlan) {
    this.currentExercise = exercisePlan;
    this.exerciseRunningDuration = 0;
    let intervalId = setInterval(() => {
        if (this.exerciseRunningDuration >= this.currentExercise.duration) {
            clearInterval(intervalId);
        }
        else { this.exerciseRunningDuration++; }
    }, 1000);
}

We start by initializing currentExercise and exerciseRunningDuration. The currentExercise variable tracks the exercise in progress, and exerciseRunningDuration tracks its duration. These two variables also need to be declared at the top:

currentExercise: ExercisePlan;
exerciseRunningDuration: number;

We use the setInterval JavaScript function with a delay of 1 second (1,000 milliseconds) to track the exercise progress by incrementing exerciseRunningDuration. The setInterval invokes the callback every second. The clearInterval call stops the timer once the exercise duration lapses.

TypeScript Arrow functions
The callback parameter passed to setInterval (()=>{…}) is a lambda function (or an arrow function in ES 2015). Lambda functions are short-form representations of anonymous functions, with added benefits. You can learn more about them at https://basarat.gitbooks.io/typescript/content/docs/arrow-functions.html.
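The benefit that matters most here is the lexical binding of this. With a classic anonymous function, this inside the setInterval callback would no longer point to the component instance; an arrow function captures this from the enclosing method, so component members remain accessible. A quick illustrative snippet (not part of the app's code, assumed to live inside a component method such as startExercise):

// Classic anonymous function: when the callback runs, 'this' is no
// longer the component instance, so member access would fail:
setInterval(function () {
    // this.exerciseRunningDuration++;  // 'this' is not WorkoutRunnerComponent here
}, 1000);

// Arrow function: 'this' is captured lexically from the enclosing
// method, so the component's members remain reachable:
setInterval(() => {
    this.exerciseRunningDuration++;
}, 1000);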
As of now, we have a WorkoutRunnerComponent class. We need to convert it into an Angular component and define the component view. Add the import for Component and a component decorator (highlighted code):

import {WorkoutPlan, ExercisePlan, Exercise} from './model'
import {Component} from '@angular/core';

@Component({
    selector: 'workout-runner',
    template: `
        <pre>Current Exercise: {{currentExercise | json}}</pre>
        <pre>Time Left: {{currentExercise.duration - exerciseRunningDuration}}</pre>`
})
export class WorkoutRunnerComponent {

You already know how to create an Angular component, so you understand the role of the @Component decorator, what selector does, and how the template is used. The JavaScript generated for the @Component decorator contains enough metadata about the component. This allows the Angular framework to instantiate the correct component at runtime.

Strings enclosed in backticks (` `) are a new addition to ES2015. Also called template literals, such string literals can be multiline and allow expressions to be embedded inside (not to be confused with Angular expressions). Look at the MDN article at http://bit.ly/template-literals for more details.

The preceding template HTML will render the raw ExercisePlan object and the exercise time remaining. It has an interesting expression inside the first interpolation: currentExercise | json. The currentExercise property is defined in WorkoutRunnerComponent, but what about the | symbol and what follows it (json)? In the Angular 2 world, it is called a pipe. The sole purpose of a pipe is to transform/format template data. The json pipe here does JSON data formatting. To get a general sense of what the json pipe does, we can remove the json pipe plus the | symbol and render the template; we are going to do this next.

As our app currently has only one module (AppModule), we add the WorkoutRunnerComponent declaration to it. Update app.module.ts by adding the highlighted code:

import {WorkoutRunnerComponent} from '../workout-runner/workout-runner.component';
@NgModule({
    imports: [BrowserModule],
    declarations: [TrainerAppComponent, WorkoutRunnerComponent],

Now WorkoutRunnerComponent can be referenced in the root component so that it can be rendered. Modify src/components/app/app.component.ts as highlighted in the following code:

@Component({
    ...
    template: `
        <div class="navbar ...>
        ...
        </div>
        <div class="container ...>
            <workout-runner></workout-runner>
        </div>`
})

We have changed the root component template and added the workout-runner element to it. This will render the WorkoutRunnerComponent inside our root component. While the implementation may look complete, there is a crucial piece missing. Nowhere in the code do we actually start the workout. The workout should start as soon as we load the page. Component life cycle hooks are going to rescue us!

Component life cycle hooks
The life of an Angular component is eventful. Components get created, change state during their lifetime, and finally are destroyed. Angular provides some life cycle hooks/functions that the framework invokes (on the component) when such an event occurs. Consider these examples:

When a component is initialized, Angular invokes ngOnInit
When a component's input properties change, Angular invokes ngOnChanges
When a component is destroyed, Angular invokes ngOnDestroy

As developers, we can tap into these key moments and perform some custom logic inside the respective component.
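For instance, a component that runs a timer (as ours does) would typically stop that timer when the component goes away. Here is a minimal, hypothetical sketch (not code from the 7 Minute Workout app) of such cleanup inside the ngOnDestroy hook:

import {Component, OnDestroy} from '@angular/core';

@Component({
    selector: 'demo-timer',
    template: `<pre>Ticks: {{ticks}}</pre>`
})
export class DemoTimerComponent implements OnDestroy {
    ticks = 0;
    // Start a one-second timer as soon as the component is created.
    private intervalId = setInterval(() => this.ticks++, 1000);

    // Angular invokes this hook just before the component is destroyed;
    // clearing the timer prevents callbacks firing on a dead component.
    ngOnDestroy() {
        clearInterval(this.intervalId);
    }
}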
Angular has TypeScript interfaces for each of these hooks that can be applied to the component class to clearly communicate the intent. For example:

class WorkoutRunnerComponent implements OnInit {
    ngOnInit() {
        ...
    }
    ...

The interface name can be derived by removing the ng prefix from the function name. The hook we are going to utilize here is ngOnInit. The ngOnInit function gets fired when the component's data-bound properties are initialized but before the view initialization starts. Add the ngOnInit function to the WorkoutRunnerComponent class with a call to start the workout:

ngOnInit() {
    this.start();
}

And implement the OnInit interface on WorkoutRunnerComponent; it defines the ngOnInit method:

import {Component, OnInit} from '@angular/core';
…
export class WorkoutRunnerComponent implements OnInit {

There are a number of other life cycle hooks, including ngOnDestroy, ngOnChanges, and ngAfterViewInit, that components support, but we are not going to dwell on any of them here. Look at the developer guide (http://bit.ly/ng2-lifecycle) on life cycle hooks to learn more about them.

Time to run our app! Open the command line, navigate to the trainer folder, and type this line:

gulp play

If there are no compilation errors and the browser automatically loads the app (http://localhost:9000/index.html), we should see the following output:

The model data updates with every passing second! Now you'll understand why interpolations ({{ }}) are a great debugging tool. This will also be a good time to try rendering currentExercise without the json pipe (use {{currentExercise}}) and see what gets rendered.

We are not done yet! Wait long enough on the index.html page and you will realize that the timer stops after 30 seconds. The app does not load the next exercise data. Time to fix it! Update the code inside the setInterval if condition:

if (this.exerciseRunningDuration >= this.currentExercise.duration) {
    clearInterval(intervalId);
    let next: ExercisePlan = this.getNextExercise();
    if (next) {
        if (next !== this.restExercise) {
            this.currentExerciseIndex++;
        }
        this.startExercise(next);
    }
    else { console.log("Workout complete!"); }
}

The if condition if (this.exerciseRunningDuration >= this.currentExercise.duration) is used to transition to the next exercise once the time duration of the current exercise lapses. We use getNextExercise to get the next exercise and call startExercise again to repeat the process. If no exercise is returned by the getNextExercise call, the workout is considered complete. During exercise transitioning, we increment currentExerciseIndex only if the next exercise is not a rest exercise. Remember that the original workout plan does not have a rest exercise. For the sake of consistency, we have created a rest exercise and are now swapping between rest and the standard exercises that are part of the workout plan. Therefore, currentExerciseIndex does not change when the next exercise is rest.

Let's quickly add the getNextExercise function too. Add the function to the WorkoutRunnerComponent class:

getNextExercise(): ExercisePlan {
    let nextExercise: ExercisePlan = null;
    if (this.currentExercise === this.restExercise) {
        nextExercise = this.workoutPlan.exercises[this.currentExerciseIndex + 1];
    }
    else if (this.currentExerciseIndex < this.workoutPlan.exercises.length - 1) {
        nextExercise = this.restExercise;
    }
    return nextExercise;
}

The WorkoutRunnerComponent.getNextExercise returns the next exercise that needs to be performed.
Note that the object returned by getNextExercise is an ExercisePlan object that internally contains the exercise details and the duration for which the exercise runs. The implementation is quite self-explanatory. If the current exercise is rest, take the next exercise from the workoutPlan.exercises array (based on currentExerciseIndex); otherwise, the next exercise is rest, given that we are not on the last exercise (the else if condition check).

With this, we are ready to test our implementation. So go ahead and refresh index.html. Exercises should flip after every 10 or 30 seconds. Great!

The current build setup automatically compiles any changes made to the script files when the files are saved; it also refreshes the browser after these changes. But just in case the UI does not update or things do not work as expected, refresh the browser window. If you are having a problem with running the code, look at the Git branch checkpoint2.1 for a working version of what we have done thus far. Or, if you are not using Git, download the snapshot of checkpoint2.1 (a zip file) from http://bit.ly/ng2be-checkpoint2-1. Refer to the README.md file in the trainer folder when setting up the snapshot for the first time.

We are done with the component.

Summary
We started this article with the aim of creating a more complex Angular app. The 7 Minute Workout app fitted the bill, and you learned a lot about the Angular framework while building this app. We started by defining the functional specifications of the 7 Minute Workout app. We then focused our efforts on defining the code structure for the app. To build the app, we started off by defining the model of the app. Once the model was in place, we started the actual implementation by building an Angular component. Angular components are nothing but classes that are decorated with a framework-specific decorator, @Component. We now have a basic 7 Minute Workout app. For a better user experience, we have added a number of small enhancements to it too, but we are still missing some good-to-have features that would make our app more usable.

Resources for Article:
Further resources on this subject:
Angular.js in a Nutshell [article]
Introduction to JavaScript [article]
Create Your First React Element [article]