Packt
17 Feb 2016
27 min read

Understanding PHP basics

This article is by Antonio Lopez Zapata, the author of the book Learning PHP 7. To learn a language, you need to understand not only its syntax, but also its grammatical rules, that is, when and why to use each element of the language. Luckily for you, some languages come from the same root. For example, Spanish and French are Romance languages, as they both evolved from spoken Latin; this means that these two languages share a lot of rules, and learning Spanish if you already know French is much easier.

Programming languages are quite the same. If you already know another programming language, it will be very easy for you to go through this chapter. If it is your first time though, you will need to learn all the grammatical rules from scratch, so it might take some more time. But fear not! We are here to help you in this endeavor. In this chapter, you will learn about these topics:

- PHP in web applications
- Control structures
- Functions

PHP in web applications

Even though the main purpose of this chapter is to show you the basics of PHP, doing so in a reference-manual way is not interesting enough. If we were to copy-paste what the official documentation says, you might as well go there and read it yourself. Instead, let's not forget the main purpose of this book and your main goal: to write web applications with PHP. We will show you how you can apply everything you are learning as soon as possible, before you get too bored. In order to do that, we will go through the journey of building an online bookstore. At the very beginning, you might not see the usefulness of it, but that is just because we still haven't seen all that PHP can do.

Getting information from the user

Let's start by building a home page. In this page, we are going to figure out whether the user is looking for a book or just browsing. How do we find this out?
The easiest way right now is to inspect the URL that the user used to access our application and extract some information from there. Save this content as your index.php file:

```php
<?php $looking = isset($_GET['title']) || isset($_GET['author']); ?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <p>Are you looking for a book? <?php echo (int) $looking; ?></p>
    <p>The book you are looking for is</p>
    <ul>
        <li><b>Title</b>: <?php echo $_GET['title']; ?></li>
        <li><b>Author</b>: <?php echo $_GET['author']; ?></li>
    </ul>
</body>
</html>
```

Now, access http://localhost:8000/?author=Harper Lee&title=To Kill a Mockingbird. You will see that the page prints some of the information that you passed in the URL.

For each request, PHP stores all the parameters that come from the query string in an array called $_GET. Each key of the array is the name of the parameter, and its associated value is the value of the parameter. So, $_GET contains two entries: $_GET['author'] contains Harper Lee and $_GET['title'] contains To Kill a Mockingbird.

On the first line, we assign a Boolean value to the $looking variable: if either $_GET['title'] or $_GET['author'] exists, this variable will be true; otherwise, false. Just after that, we close the PHP tag, and then we start printing some HTML, but as you can see, we are actually mixing HTML with PHP code.

Another interesting line is the one that prints the content of $looking: before printing it, we cast the value. Casting means forcing PHP to transform one type of value into another. Casting a Boolean to an integer means that the resultant value will be 1 if the Boolean is true or 0 if the Boolean is false. As $looking is true, since $_GET contains valid keys, the page shows 1. If we try to access the same page without sending any information, as in http://localhost:8000, the browser will say "Are you looking for a book? 0".
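To see casting on its own, outside the bookstore page, here is a small standalone sketch (not part of the book's code) that you could run with php -r or as a separate file:

```php
<?php
// Casting a Boolean to an integer yields 1 or 0.
var_dump((int) true);     // int(1)
var_dump((int) false);    // int(0)

// Casting to string shows why echo-ing a false Boolean prints nothing:
var_dump((string) true);  // string(1) "1"
var_dump((string) false); // string(0) ""

// A numeric string cast to an integer:
var_dump((int) '42');     // int(42)
```

Note that the empty string produced by `(string) false` is exactly the reason the page casts to integer instead.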
Depending on the settings of your PHP configuration, you may also see two notice messages complaining that you are trying to access keys of the array that do not exist.

Casting versus type juggling

We already know that when PHP needs a specific type of variable, it will try to transform the value itself, which is called type juggling. But PHP is quite flexible, so sometimes you have to be the one specifying the type that you need. When printing something with echo, PHP tries to transform everything it gets into strings. Since the string version of the false Boolean is an empty string, this would not be useful for our application. Casting the Boolean to an integer first assures that we will see a value, even if it is just "0".

HTML forms

HTML forms are one of the most popular ways to collect information from users. They consist of a series of fields, called inputs in the HTML world, and a final submit button. In HTML, the form tag contains two attributes: action, which points to where the form will be submitted, and method, which specifies the HTTP method the form will use—GET or POST. Let's see how it works. Save the following content as login.html and go to http://localhost:8000/login.html:

```php
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore - Login</title>
</head>
<body>
    <p>Enter your details to login:</p>
    <form action="authenticate.php" method="post">
        <label>Username</label>
        <input type="text" name="username" />
        <label>Password</label>
        <input type="password" name="password" />
        <input type="submit" value="Login"/>
    </form>
</body>
</html>
```

This form contains two fields, one for the username and one for the password. You can see that they are identified by the name attribute. If you try to submit this form, the browser will show you a Page Not Found message, as it is trying to access http://localhost:8000/authenticate.php and the web server cannot find it.
Let's create it then:

```php
<?php $submitted = !empty($_POST); ?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <p>Form submitted? <?php echo (int) $submitted; ?></p>
    <p>Your login info is</p>
    <ul>
        <li><b>username</b>: <?php echo $_POST['username']; ?></li>
        <li><b>password</b>: <?php echo $_POST['password']; ?></li>
    </ul>
</body>
</html>
```

As with $_GET, $_POST is an array that contains the parameters received by POST. In this piece of code, we first ask whether that array is not empty—note the ! operator. Afterwards, we just display the information received, as in index.php. Note that the keys of the $_POST array are the values of the name attribute of each input field.

Control structures

So far, our files have been executed line by line. Because of that, we get notices in some scenarios, such as when the array does not contain what we are looking for. Would it not be nice if we could choose which lines to execute? Control structures to the rescue!

A control structure is like a traffic diversion sign. It directs the execution flow depending on some predefined conditions. There are different control structures, but we can categorize them into conditionals and loops. A conditional allows us to choose whether or not to execute a statement. A loop will execute a statement as many times as you need. Let's take a look at each of them.

Conditionals

A conditional evaluates a Boolean expression, that is, something that returns a value. If the expression is true, it will execute everything inside its block of code. A block of code is a group of statements enclosed by {}. Let's see how it works:

```php
<?php
echo "Before the conditional.";
if (4 > 3) {
    echo "Inside the conditional.";
}
if (3 > 4) {
    echo "This will not be printed.";
}
echo "After the conditional.";
```

In this piece of code, we are using two conditionals.
A conditional is defined by the keyword if, followed by a Boolean expression in parentheses and by a block of code. If the expression is true, it will execute the block; otherwise, it will skip it.

You can increase the power of conditionals by adding the keyword else. This tells PHP to execute a block of code if the previous conditions were not satisfied. Let's see an example:

```php
if (2 > 3) {
    echo "Inside the conditional.";
} else {
    echo "Inside the else.";
}
```

This will execute the code inside else, as the condition of if was not satisfied.

Finally, you can also add an elseif keyword followed by another condition and block of code to keep asking PHP for more conditions. You can add as many elseif clauses as you need after if. If you add else, it has to be the last one of the chain of conditions. Also, keep in mind that as soon as PHP finds a condition that resolves to true, it will stop evaluating the rest of the conditions:

```php
<?php
if (4 > 5) {
    echo "Not printed";
} elseif (4 > 4) {
    echo "Not printed";
} elseif (4 == 4) {
    echo "Printed.";
} elseif (4 > 2) {
    echo "Not evaluated.";
} else {
    echo "Not evaluated.";
}

if (4 == 4) {
    echo "Printed";
}
```

In this last example, the first condition that evaluates to true is 4 == 4. After that, PHP does not evaluate any more conditions until a new if starts.

With this knowledge, let's try to clean up our application a bit, executing statements only when needed. Copy this code to your index.php file:

```php
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <p>
        <?php
        if (isset($_COOKIE['username'])) {
            echo "You are " . $_COOKIE['username'];
        } else {
            echo "You are not authenticated.";
        }
        ?>
    </p>
    <?php if (isset($_GET['title']) && isset($_GET['author'])) { ?>
        <p>The book you are looking for is</p>
        <ul>
            <li><b>Title</b>: <?php echo $_GET['title']; ?></li>
            <li><b>Author</b>: <?php echo $_GET['author']; ?></li>
        </ul>
    <?php } else { ?>
        <p>You are not looking for a book?</p>
    <?php } ?>
</body>
</html>
```

In this new code, we mix conditionals and HTML code in two different ways. The first one opens a PHP tag and adds an if-else clause that prints with echo whether we are authenticated or not. No HTML is mixed inside the conditional, which keeps it clear. The second way shows an uglier solution, but one that is sometimes necessary. When you have to print a lot of HTML code, echo is not that handy, and it is better to close the PHP tag, print all the HTML you need, and then open the tag again. You can do that even inside the code block of an if clause, as you can see in the code.

Mixing PHP and HTML

If you feel that the last file we edited looks rather ugly, you are right. Mixing PHP and HTML is confusing, and you should avoid it by all means.

Let's edit our authenticate.php file too, as it is trying to access $_POST entries that might not be there. The new content of the file would be as follows:

```php
<?php
$submitted = isset($_POST['username']) && isset($_POST['password']);
if ($submitted) {
    setcookie('username', $_POST['username']);
}
?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <?php if ($submitted): ?>
        <p>Your login info is</p>
        <ul>
            <li><b>username</b>: <?php echo $_POST['username']; ?></li>
            <li><b>password</b>: <?php echo $_POST['password']; ?></li>
        </ul>
    <?php else: ?>
        <p>You did not submit anything.</p>
    <?php endif; ?>
</body>
</html>
```

This code also contains conditionals, which we already know.
We are setting a variable to know whether we've submitted a login or not, and setting the cookie if we have. This file also shows a new way of combining conditionals with HTML. This syntax tries to be more readable when working with HTML code, avoiding the use of {} and instead using : and endif. Both syntaxes are correct, and you should use the one that you consider more readable in each case.

Switch-case

Another control structure similar to if-else is switch-case. This structure evaluates only one expression and executes a block depending on its value. Let's see an example:

```php
<?php
switch ($title) {
    case 'Harry Potter':
        echo "Nice story, a bit too long.";
        break;
    case 'Lord of the Rings':
        echo "A classic!";
        break;
    default:
        echo "Dunno that one.";
        break;
}
```

The switch-case takes an expression; in this case, a variable. It then defines a series of cases. When a case matches the current value of the expression, PHP executes the code inside it. As soon as PHP finds a break, it will exit the switch-case. If none of the cases are suitable for the expression and there is a default case, PHP will execute it, but this is optional.

You also need to know that breaks are mandatory if you want to exit the switch-case. If you do not specify any, PHP will keep on executing statements, even if it encounters a new case. Let's see a similar example, but without breaks:

```php
<?php
$title = 'Twilight';
switch ($title) {
    case 'Harry Potter':
        echo "Nice story, a bit too long.";
    case 'Twilight':
        echo 'Uh...';
    case 'Lord of the Rings':
        echo "A classic!";
    default:
        echo "Dunno that one.";
}
```

If you test this code in your browser, you will see that it prints "Uh...A classic!Dunno that one.". PHP found that the second case matches, so it executes its content. But as there are no breaks, it keeps on executing until the end. This might be the desired behavior sometimes, but not usually, so we need to be careful when using it!
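One case where the fall-through behavior is genuinely useful is grouping several values that share the same code. This short sketch (the message is invented for illustration and is not part of the bookstore) stacks two case labels over one block:

```php
<?php
$title = 'Harry Potter';
switch ($title) {
    // No break between these two labels: both titles
    // fall through to the same echo statement.
    case 'Harry Potter':
    case 'Lord of the Rings':
        echo "An epic fantasy story.";
        break;
    default:
        echo "Dunno that one.";
        break;
}
```

Running it prints "An epic fantasy story." for either of the two grouped titles.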
Loops

Loops are control structures that allow you to execute certain statements several times—as many times as you need. You might use them in several different scenarios, but the most common one is when interacting with arrays. For example, imagine you have an array with elements, but you do not know what is in it. You want to print all its elements, so you loop through all of them.

There are four types of loops. Each of them has its own use cases, but in general, you can transform one type of loop into another. Let's see them closely.

While

while is the simplest of the loops. It executes a block of code until the expression to evaluate returns false. Let's see one example:

```php
<?php
$i = 1;
while ($i < 4) {
    echo $i . " ";
    $i++;
}
```

Here, we define a variable with the value 1. Then, we have a while clause in which the expression to evaluate is $i < 4. This loop will execute the content of the block of code until that expression is false. As you can see, inside the loop we increment the value of $i by 1 each time, so after three iterations, the loop will end. Check out the output of that script, and you will see "1 2 3". The last value printed is 3, so by that time, $i was 3. After that, we increased its value to 4, so when the while evaluated whether $i < 4, the result was false.

Whiles and infinite loops

One of the most common problems with while loops is creating an infinite loop. If the code inside the while does not update any of the variables considered in the while expression so that it can be false at some point, PHP will never exit the loop!

For

This is the most complex of the four loops. for defines an initialization expression, an exit condition, and the end-of-iteration expression. When PHP first encounters the loop, it executes what is defined in the initialization expression. Then, it evaluates the exit condition, and if it resolves to true, it enters the loop.
After executing everything inside the loop, it executes the end-of-iteration expression. Once this is done, it evaluates the exit condition again, going through the loop code and the end-of-iteration expression until the condition evaluates to false. As always, an example will help clarify this:

```php
<?php
for ($i = 1; $i < 10; $i++) {
    echo $i . " ";
}
```

The initialization expression is $i = 1 and is executed only the first time. The exit condition is $i < 10, and it is evaluated at the beginning of each iteration. The end-of-iteration expression is $i++, which is executed at the end of each iteration. This example prints numbers from 1 to 9.

Another more common usage of the for loop is with arrays:

```php
<?php
$names = ['Harry', 'Ron', 'Hermione'];
for ($i = 0; $i < count($names); $i++) {
    echo $names[$i] . " ";
}
```

In this example, we have an array of names. As it is defined as a list, its keys will be 0, 1, and 2. The loop initializes the $i variable to 0, and it will iterate while the value of $i is less than the number of elements in the array, which is 3. On the first iteration $i is 0, on the second it will be 1, and on the third it will be 2. When $i is 3, it will not enter the loop, as the exit condition evaluates to false. On each iteration, we print the content of the $i position of the array; hence, the result of this code is all three names in the array.

Be careful with exit conditions

It is very common to set an exit condition that is not exactly what we need, especially with arrays. Remember that arrays start at 0 if they are a list, so an array of 3 elements will have the entries 0, 1, and 2. Defining the exit condition as $i <= count($array) will cause an error in your code: when $i is 3, it also satisfies the exit condition, and the loop will try to access key 3, which does not exist.

Foreach

The last, but not least, type of loop is foreach. This loop is exclusive to arrays, and it allows you to iterate an array entirely, even if you do not know its keys.
There are two options for the syntax, as you can see in these examples:

```php
<?php
$names = ['Harry', 'Ron', 'Hermione'];
foreach ($names as $name) {
    echo $name . " ";
}
foreach ($names as $key => $name) {
    echo $key . " -> " . $name . " ";
}
```

The foreach loop accepts an array; in this case, $names. It specifies a variable, which will contain the value of each entry of the array. You can see that we do not need to specify any end condition, as PHP will know when the array has been fully iterated. Optionally, you can specify a variable that will contain the key of each iteration, as in the second loop.

foreach loops are also useful with maps, where the keys are not necessarily numeric. The order in which PHP will iterate the array will be the same order in which you inserted the content into the array.

Let's use some loops in our application. We want to show the available books on our home page. We have the list of books in an array, so we will have to iterate through all of them with a foreach loop, printing some information from each one. Append the following code to the body tag in index.php:

```php
<?php
endif;
$books = [
    [
        'title' => 'To Kill A Mockingbird',
        'author' => 'Harper Lee',
        'available' => true,
        'pages' => 336,
        'isbn' => 9780061120084
    ],
    [
        'title' => '1984',
        'author' => 'George Orwell',
        'available' => true,
        'pages' => 267,
        'isbn' => 9780547249643
    ],
    [
        'title' => 'One Hundred Years Of Solitude',
        'author' => 'Gabriel Garcia Marquez',
        'available' => false,
        'pages' => 457,
        'isbn' => 9785267006323
    ],
];
?>
<ul>
    <?php foreach ($books as $book): ?>
        <li>
            <i><?php echo $book['title']; ?></i> - <?php echo $book['author']; ?>
            <?php if (!$book['available']): ?>
                <b>Not available</b>
            <?php endif; ?>
        </li>
    <?php endforeach; ?>
</ul>
```

This code shows a foreach loop using the : notation, which is better when mixing it with HTML. It iterates the whole $books array, and for each book, it prints some information as an HTML list.
Also note that we have a conditional inside a loop, which is perfectly fine. Of course, this conditional will be executed for each entry in the array, so you should keep the block of code of your loops as simple as possible.

Functions

A function is a reusable block of code that, given an input, performs some actions and optionally returns a result. You already know several predefined functions, such as empty, in_array, or var_dump. These functions come with PHP so that you do not have to reinvent the wheel, but you can also create your own very easily. You should define functions when you identify portions of your application that have to be executed several times, or just to encapsulate some functionality.

Function declaration

Declaring a function means writing it down so that it can be used later. A function has a name, takes arguments, and has a block of code. Optionally, it can define what kind of value it returns. The name of the function has to follow the same rules as variable names; that is, it has to start with a letter or underscore and can contain any letters, numbers, or underscores. It cannot be a reserved word. Let's see a simple example:

```php
function addNumbers($a, $b) {
    $sum = $a + $b;
    return $sum;
}
$result = addNumbers(2, 3);
```

Here, the function's name is addNumbers, and it takes two arguments: $a and $b. The block of code defines a new variable, $sum, that is the sum of both arguments, and then returns its content with return. In order to use this function, you just need to call it by its name, sending all the required arguments, as shown in the last line.

PHP does not support overloaded functions. Overloading refers to the ability to declare two or more functions with the same name but different arguments. As you can see, you can declare the arguments without knowing what their types are, so PHP would not be able to decide which function to use.

Another important thing to note is the variable scope.
We are declaring a $sum variable inside the block of code, so once the function ends, the variable is not accessible any more. This means that the scope of variables declared inside a function is just the function itself. Furthermore, if you had a $sum variable declared outside the function, it would not be affected at all, since the function cannot access that variable unless we send it as an argument.

Function arguments

A function gets information from outside via arguments. You can define any number of arguments—including none. These arguments need at least a name so that they can be used inside the function, and there cannot be two arguments with the same name. When invoking the function, you need to send the arguments in the same order as they were declared.

A function may contain optional arguments; that is, you are not forced to provide a value for those arguments. When declaring the function, you need to provide a default value for each of them, so that if the user does not provide a value, the function uses the default one:

```php
function addNumbers($a, $b, $printResult = false) {
    $sum = $a + $b;
    if ($printResult) {
        echo 'The result is ' . $sum;
    }
    return $sum;
}

$sum1 = addNumbers(1, 2);
$sum2 = addNumbers(3, 4, false);
$sum3 = addNumbers(5, 6, true); // it will print the result
```

This new function takes two mandatory arguments and an optional one. The default value is false, and it is used as a normal value inside the function. The function will print the result of the sum if the user provides true as the third argument, which happens only the third time the function is invoked. For the first two invocations, $printResult is set to false.

The arguments that the function receives are just copies of the values that the user provided. This means that if you modify these arguments inside the function, it will not affect the original values. This feature is known as passing arguments by value.
Let's see an example:

```php
function modify($a) {
    $a = 3;
}

$a = 2;
modify($a);
var_dump($a); // prints 2
```

We declare the $a variable with the value 2, and then we call the modify function, sending $a. The modify function sets its $a argument to 3. However, this does not affect the original value of $a, which remains 2, as you can see in the var_dump output.

If what you want is to actually change the value of the original variable used in the invocation, you need to pass the argument by reference. To do that, you add & in front of the argument when declaring the function:

```php
function modify(&$a) {
    $a = 3;
}
```

Now, after invoking the modify function, $a will always be 3.

Arguments by value versus by reference

PHP allows you to pass arguments by reference, and in fact, some native functions of PHP do so—remember the array sorting functions; they did not return the sorted array; instead, they sorted the array provided. But using arguments by reference is a way of confusing developers. Usually, when someone uses a function, they expect a result, and they do not want their provided arguments to be modified. So, try to avoid it; people will be grateful!

The return statement

You can have as many return statements as you want inside your function, but PHP will exit the function as soon as it finds one. This means that if you have two consecutive return statements, the second one will never be executed. Still, having multiple return statements can be useful if they are inside conditionals. Add this function inside your functions.php file:

```php
function loginMessage() {
    if (isset($_COOKIE['username'])) {
        return "You are " . $_COOKIE['username'];
    } else {
        return "You are not authenticated.";
    }
}
```

Let's use it in your index.php file by replacing the corresponding content—note that to save some trees, most of the code that was not changed at all has been replaced with //...:

```php
//...
<body>
    <p><?php echo loginMessage(); ?></p>
    <?php if (isset($_GET['title']) && isset($_GET['author'])): ?>
//...
```

Additionally, you can omit the return statement if you do not want the function to return anything. In this case, the function will end once it reaches the end of the block of code.

Type hinting and return types

With the release of PHP 7, the language allows developers to be more specific about what functions get and return. You can—always optionally—specify the type of argument that the function needs (type hinting) and the type of result the function will return (the return type). Let's first see an example:

```php
<?php
declare(strict_types=1);

function addNumbers(int $a, int $b, bool $printSum): int {
    $sum = $a + $b;
    if ($printSum) {
        echo 'The sum is ' . $sum;
    }
    return $sum;
}

addNumbers(1, 2, true);
addNumbers(1, '2', true); // it fails when strict_types is 1
addNumbers(1, 'something', true); // it always fails
```

This function states that the arguments need to be two integers and a Boolean, and that the result will be an integer. Now, you know that PHP has type juggling, so it can usually transform a value of one type to its equivalent value of another type; for example, the string "2" can be used as the integer 2. To stop PHP from using type juggling with the arguments and results of functions, you can declare the strict_types directive as shown in the first line. This directive has to be declared at the top of each file where you want to enforce this behavior.

The three invocations work as follows: the first invocation sends two integers and a Boolean, which is what the function expects. So, regardless of the value of strict_types, it will always work. The second invocation sends an integer, a string, and a Boolean. The string has a valid integer value, so if PHP were allowed to use type juggling, the invocation would work normally. But in this example, it fails because of the declaration at the top of the file.
The third invocation will always fail, as the string something cannot be transformed into a valid integer.

Let's try to use a function within our project. In our index.php file, we have a foreach loop that iterates the books and prints them. The code inside the loop is kind of hard to understand, as it mixes HTML with PHP, and there is a conditional too. Let's try to abstract the logic inside the loop into a function. First, create the new functions.php file with the following content:

```php
<?php
function printableTitle(array $book): string {
    $result = '<i>' . $book['title'] . '</i> - ' . $book['author'];
    if (!$book['available']) {
        $result .= ' <b>Not available</b>';
    }
    return $result;
}
```

This file will contain our functions. The first one, printableTitle, takes an array representing a book and builds a string with a nice HTML representation of the book. The code is the same as before, just encapsulated in a function.

Now, index.php will have to include the functions.php file and then use the function inside the loop. Let's see how this is done:

```php
<?php require_once 'functions.php' ?>
<!DOCTYPE html>
<html lang="en">
//...
?>
<ul>
    <?php foreach ($books as $book): ?>
        <li><?php echo printableTitle($book); ?></li>
    <?php endforeach; ?>
</ul>
//...
```

Well, now our loop looks way cleaner, right? Also, if we need to print the title of the book somewhere else, we can reuse the function instead of duplicating code!

Summary

In this article, we went through all the basics of procedural PHP while writing simple examples in order to practice them. You now know how to use variables and arrays with control structures and functions, and how to get information from HTTP requests, among other things.
Packt
16 Feb 2016
35 min read

Writing a Blog Application with Node.js and AngularJS

In this article, we are going to build a blog application by using Node.js and AngularJS. Our system will support adding, editing, and removing articles, so there will be a control panel. The MongoDB or MySQL database will handle the storing of the information, and the Express framework will be used as the site base. It will deliver the JavaScript, CSS, and HTML to the end user, and will provide an API to access the database. We will use AngularJS to build the user interface and control the client-side logic in the administration page.

This article will cover the following topics:

- AngularJS fundamentals
- Choosing and initializing a database
- Implementing the client-side part of an application with AngularJS

Exploring AngularJS

AngularJS is an open source, client-side JavaScript framework developed by Google. It's full of features and is really well documented. It has almost become a standard framework in the development of single-page applications. The official site of AngularJS, http://angularjs.org, provides well-structured documentation. As the framework is widely used, there is a lot of material in the form of articles and video tutorials. As a JavaScript library, it collaborates pretty well with Node.js. In this article, we will build a simple blog with a control panel.

Before we start developing our application, let's first take a look at the framework. AngularJS gives us very good control over the data on our page. We don't have to think about selecting elements from the DOM and filling them with values. Thankfully, due to the available data binding, we may update the data in the JavaScript part and see the change in the HTML part. This is also true in reverse: once we change something in the HTML part, we get the new values in the JavaScript part. The framework has a powerful dependency injector. There are predefined classes to perform AJAX requests and manage routes.
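For instance, AJAX requests are usually performed with the $http service, which the dependency injector passes to a controller on demand. The following sketch only illustrates the pattern—the module name and the /api/articles endpoint are made up, and nothing like this exists in the application yet:

```javascript
angular.module('BlogModule', [])
    .controller('ArticlesController', function ($scope, $http) {
        $scope.articles = [];
        // $http.get returns a promise; when it resolves,
        // we copy the response body into the scope.
        $http.get('/api/articles').then(function (response) {
            $scope.articles = response.data;
        });
    });
```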
You could also read Mastering Web Development with AngularJS by Peter Bacon Darwin and Pawel Kozlowski, published by Packt Publishing.

Bootstrapping AngularJS applications

To bootstrap an AngularJS application, we need to add the ng-app attribute to one of our HTML tags. It is important that we pick the right one. Having ng-app somewhere means that all the child nodes will be processed by the framework. It's common practice to put that attribute on the <html> tag. In the following code, we have a simple HTML page containing ng-app:

```html
<html ng-app>
<head>
    <script src="angular.min.js"></script>
</head>
<body>
    ...
</body>
</html>
```

Very often, we will apply a value to the attribute. This will be a module name. We will do this while developing the control panel of our blog application. Having the freedom to place ng-app wherever we want means that we can decide which part of our markup will be controlled by AngularJS. That's good, because if we have a giant HTML file, we really don't want to spend resources parsing the whole document. Of course, we may bootstrap our logic manually, and this is needed when we have more than one AngularJS application on the page.

Using directives and controllers

In AngularJS, we can implement the Model-View-Controller pattern. The controller acts as glue between the data (model) and the user interface (view). In the context of the framework, the controller is just a simple function, as the following HTML code illustrates:

```html
<html ng-app>
<head>
    <script src="angular.min.js"></script>
    <script src="HeaderController.js"></script>
</head>
<body>
    <header ng-controller="HeaderController">
        <h1>{{title}}</h1>
    </header>
</body>
</html>
```

In the <head> of the page, we add the minified version of the library and HeaderController.js, a file that will host the code of our controller. We also set an ng-controller attribute in the HTML markup.
The definition of the controller is as follows:

```js
function HeaderController($scope) {
  $scope.title = "Hello world";
}
```

Every controller has its own area of influence. That area is called the scope. In our case, HeaderController defines the {{title}} variable. AngularJS has a wonderful dependency-injection system. Thanks to this mechanism, the $scope argument is automatically initialized and passed to our function. The ng-controller attribute is called a directive, that is, an attribute that has meaning to AngularJS. There are a lot of directives that we can use, and that's maybe one of the strongest points of the framework. We can implement complex logic directly inside our templates, for example, data binding, filtering, or modularity.

Data binding

Data binding is a process of automatically updating the view once the model is changed. As we mentioned earlier, we can change a variable in the JavaScript part of the application and the HTML part will be automatically updated. We don't have to create a reference to a DOM element or attach event listeners. Everything is handled by the framework. Let's continue and elaborate on the previous example, as follows:

```html
<header ng-controller="HeaderController">
  <h1>{{title}}</h1>
  <a href="#" ng-click="updateTitle()">change title</a>
</header>
```

A link is added and it contains the ng-click directive. The updateTitle function is a function defined in the controller, as seen in the following code snippet:

```js
function HeaderController($scope) {
  $scope.title = "Hello world";
  $scope.updateTitle = function() {
    $scope.title = "That's a new title.";
  }
}
```

We don't care about the DOM element and where the {{title}} variable is. We just change a property of $scope and everything works. There are, of course, situations where we will have <input> fields and want to bind their values. If that's the case, then the ng-model directive can be used.
We can see this as follows:

```html
<header ng-controller="HeaderController">
  <h1>{{title}}</h1>
  <a href="#" ng-click="updateTitle()">change title</a>
  <input type="text" ng-model="title" />
</header>
```

The data in the input field is bound to the same title variable. This time, we don't have to edit the controller. AngularJS automatically changes the content of the h1 tag.

Encapsulating logic with modules

It's great that we have controllers. However, it's not a good practice to place everything into globally defined functions. That's why it is good to use the module system. The following code shows how a module is defined:

```js
angular.module('HeaderModule', []);
```

The first parameter is the name of the module and the second one is an array with the module's dependencies. By dependencies, we mean other modules, services, or something custom that we can use inside the module. The module's name should also be set as the value of the ng-app directive. The code so far could be translated to the following code snippet:

```js
angular.module('HeaderModule', [])
.controller('HeaderController', function($scope) {
  $scope.title = "Hello world";
  $scope.updateTitle = function() {
    $scope.title = "That's a new title.";
  }
});
```

So, the first line defines a module. We can chain the different methods of the module, and one of them is the controller method. By following this approach, that is, putting our code inside a module, we are encapsulating logic. This is a sign of good architecture. And of course, with a module, we have access to different features such as filters, custom directives, and custom services.

Preparing data with filters

Filters are very handy when we want to prepare our data before it is displayed to the user.
Let's say, for example, that we need to show our title in uppercase once it reaches a length of more than 20 characters:

```js
angular.module('HeaderModule', [])
.filter('customuppercase', function() {
  return function(input) {
    if (input.length > 20) {
      return input.toUpperCase();
    } else {
      return input;
    }
  };
})
.controller('HeaderController', function($scope) {
  $scope.title = "Hello world";
  $scope.updateTitle = function() {
    $scope.title = "That's a new title.";
  }
});
```

That's the definition of the custom filter called customuppercase. It receives the input and performs a simple check. What it returns is what the user sees at the end. Here is how this filter could be used in HTML:

```html
<h1>{{title | customuppercase}}</h1>
```

Of course, we may add more than one filter per variable. There are also predefined filters, for example, to limit the length of a value, convert JavaScript to JSON, or format dates.

Dependency injection

Dependency management can be very tough sometimes. We may split everything into different modules and components, with nicely written APIs that are very well documented. However, very soon we may realize that we need to create a lot of objects. Dependency injection solves this problem by providing what we need, on the fly. We already saw this in action. The $scope parameter passed to our controller is actually created by the injector of AngularJS. To get something as a dependency, we need to define it somewhere and let the framework know about it. We do this as follows:

```js
angular.module('HeaderModule', [])
.factory("Data", function() {
  return {
    getTitle: function() {
      return "A better title.";
    }
  }
})
.controller('HeaderController', function($scope, Data) {
  $scope.title = Data.getTitle();
  $scope.updateTitle = function() {
    $scope.title = "That's a new title.";
  }
});
```

The Module class has a method called factory. It registers a new service that could later be used as a dependency. The function returns an object with only one method, getTitle.
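To demystify the injector a little, here is a minimal sketch of how dependencies could be resolved by reading a function's parameter names. This is purely illustrative, not AngularJS's real implementation (which, among other things, supports minification-safe annotations); the register and invoke helpers are our own inventions:

```javascript
// A toy dependency injector: resolves a function's dependencies by
// reading its parameter names from its source. Illustration only.
var registry = {};
function register(name, factory) {
  registry[name] = factory;
}
function invoke(fn) {
  // extract parameter names from the function's string representation
  var params = fn.toString()
    .match(/\(([^)]*)\)/)[1]
    .split(',')
    .map(function(s) { return s.trim(); })
    .filter(Boolean);
  // build each dependency from its registered factory
  var deps = params.map(function(name) { return registry[name](); });
  return fn.apply(null, deps);
}

register("Data", function() {
  return { getTitle: function() { return "A better title."; } };
});

var title = invoke(function(Data) {
  return Data.getTitle();
});
console.log(title); // "A better title."
```

Notice that the lookup works by name: the parameter `Data` is matched against the registered factory `"Data"`, which is exactly why naming matters in the next paragraph.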
Of course, the name of the service should match the name of the controller's parameter. Otherwise, AngularJS will not be able to find the dependency's source.

The model in the context of AngularJS

In the well-known Model-View-Controller pattern, the model is the part that stores the data in the application. AngularJS doesn't have a specific workflow to define models. The $scope variable could be considered a model. We keep the data in properties attached to the current scope. Later, we can use the ng-model directive and bind a property to a DOM element. We already saw how this works in the previous sections. The framework may not provide the usual form of a model, but it's made like that so that we can write our own implementation. The fact that AngularJS works with plain JavaScript objects makes this task easy.

Final words on AngularJS

AngularJS is one of the leading frameworks, not only because it is made by Google, but also because it's really flexible. We could use just a small piece of it or build a solid architecture using its giant collection of features.

Selecting and initializing the database

To build a blog application, we need a database that will store the published articles. In most cases, the choice of the database depends on the current project. There are factors such as performance and scalability that we should keep in mind. In order to have a better look at the possible solutions, we will look at two of the most popular databases: MongoDB and MySQL. The first one is a NoSQL type of database. According to the Wikipedia entry (http://en.wikipedia.org/wiki/NoSQL) on NoSQL databases:

"A NoSQL or Not Only SQL database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases."

In other words, it's simpler than a SQL database, and very often stores information in a key-value form.
Usually, such solutions are used when handling and storing large amounts of data. It is also a very popular approach when we need a flexible schema or when we want to use JSON. It really depends on what kind of system we are building. In some cases, MySQL could be a better choice, while in some other cases, MongoDB. In our example blog, we're going to use both. In order to do this, we will need a layer that connects to the database server and accepts queries. To make things a bit more interesting, we will create a module that has only one API, but can switch between the two database models.

Using NoSQL with MongoDB

Let's start with MongoDB. Before we start storing information, we need a MongoDB server running. It can be downloaded from the official page of the database, https://www.mongodb.org/downloads. We are not going to handle the communication with the database manually. There is a driver specifically developed for Node.js. It's called mongodb and we should include it in our package.json file. After successful installation via npm install, the driver will be available in our scripts. We can check this as follows:

```json
"dependencies": {
  "mongodb": "1.3.20"
}
```

We will stick to the Model-View-Controller architecture and keep the database-related operations in a model called Articles. We can see this as follows:

```js
var crypto = require("crypto"),
    type = "mongodb",
    client = require('mongodb').MongoClient,
    mongodb_host = "127.0.0.1",
    mongodb_port = "27017",
    collection;

module.exports = function() {
  if (type == "mongodb") {
    return {
      add: function(data, callback) { ... },
      update: function(data, callback) { ... },
      get: function(callback) { ... },
      remove: function(id, callback) { ... }
    }
  } else {
    return {
      add: function(data, callback) { ... },
      update: function(data, callback) { ... },
      get: function(callback) { ... },
      remove: function(id, callback) { ... }
    }
  }
}
```

It starts by defining a few dependencies and settings for the MongoDB connection.
The first line requires the crypto module. We will use it to generate unique IDs for every article. The type variable defines which database is currently accessed. The third line initializes the MongoDB driver. We will use it to communicate with the database server. After that, we set the host and port for the connection and, at the end, a global collection variable, which will keep a reference to the collection with the articles. In MongoDB, the collections are similar to the tables in MySQL. The next logical step is to establish a database connection and perform the needed operations, as follows:

```js
connection = 'mongodb://';
connection += mongodb_host + ':' + mongodb_port;
connection += '/blog-application';
client.connect(connection, function(err, database) {
  if (err) {
    throw new Error("Can't connect");
  } else {
    console.log("Connection to MongoDB server successful.");
    collection = database.collection('articles');
  }
});
```

We pass the host and the port, and the driver does everything else. Of course, it is a good practice to handle the error (if any) and throw an exception. In our case, this is especially needed because without the information in the database, the frontend has nothing to show. The rest of the module contains methods to add, edit, retrieve, and delete records:

```js
return {
  add: function(data, callback) {
    var date = new Date();
    data.id = crypto.randomBytes(20).toString('hex');
    // getMonth() is zero-based, so add 1 to get the calendar month
    data.date = date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate();
    collection.insert(data, {}, callback || function() {});
  },
  update: function(data, callback) {
    collection.update(
      {id: data.id},
      data,
      {},
      callback || function() {}
    );
  },
  get: function(callback) {
    collection.find({}).toArray(callback);
  },
  remove: function(id, callback) {
    collection.findAndModify(
      {id: id},
      [],
      {},
      {remove: true},
      callback
    );
  }
}
```

The add and update methods accept the data parameter. That's a simple JavaScript object.
For example, see the following code:

```js
{
  title: "Blog post title",
  text: "Article's text here ..."
}
```

The records are identified by an automatically generated unique id. The update method needs it in order to find out which record to edit. All the methods also have a callback. That's important, because the module is meant to be used as a black box, that is, we should be able to create an instance of it, operate on the data, and at the end continue with the rest of the application's logic.

Using MySQL

We're going to use an SQL type of database with MySQL. We will add a few more lines of code to the already working Articles.js model. The idea is to have a class that supports the two databases as two different options. In the end, we should be able to switch from one to the other by simply changing the value of a variable. Similar to MongoDB, we first need to install the database to be able to use it. The official download page is http://www.mysql.com/downloads. MySQL requires another Node.js module. It should again be added to the package.json file. We can see the module as follows:

```json
"dependencies": {
  "mongodb": "1.3.20",
  "mysql": "2.0.0"
}
```

Similar to the MongoDB solution, we need to first connect to the server. To do so, we need to know the values of the host, username, and password fields and, because the data is organized in databases, the name of a database. The following code defines the needed variables:

```js
var mysql = require('mysql'),
    mysql_host = "127.0.0.1",
    mysql_user = "root",
    mysql_password = "",
    mysql_database = "blog_application",
    connection;
```

The previous example leaves the password field empty, but we should set the proper value for our system. The MySQL database requires us to define a table and its fields before we start saving data.
So, the following code is a short dump of the table used in this article:

```sql
CREATE TABLE IF NOT EXISTS `articles` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `title` longtext NOT NULL,
  `text` longtext NOT NULL,
  `date` varchar(100) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
```

Once we have a database and its table set, we can continue with the database connection, as follows:

```js
connection = mysql.createConnection({
  host: mysql_host,
  user: mysql_user,
  password: mysql_password
});
connection.connect(function(err) {
  if (err) {
    throw new Error("Can't connect to MySQL.");
  } else {
    connection.query("USE " + mysql_database, function(err, rows, fields) {
      if (err) {
        throw new Error("Missing database.");
      } else {
        console.log("Successfully selected database.");
      }
    });
  }
});
```

The driver provides a method to connect to the server and execute queries. The first executed query selects the database. If everything is OK, you should see Successfully selected database as output in your console. Half of the job is done. What we should do now is replicate the methods returned in the first MongoDB implementation. We need to do this because, when we switch to MySQL, the code using the class would otherwise not work. By replicating them, we mean that they should have the same names and accept the same arguments. If we do everything correctly, at the end our application will support two types of databases.
And all we have to do is change the value of the type variable:

```js
return {
  add: function(data, callback) {
    var date = new Date();
    var query = "";
    query += "INSERT INTO articles (title, text, date) VALUES (";
    query += connection.escape(data.title) + ", ";
    query += connection.escape(data.text) + ", ";
    // getMonth() is zero-based, so add 1 to get the calendar month
    query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'";
    query += ")";
    connection.query(query, callback);
  },
  update: function(data, callback) {
    var query = "UPDATE articles SET ";
    query += "title=" + connection.escape(data.title) + ", ";
    query += "text=" + connection.escape(data.text) + " ";
    query += "WHERE id='" + data.id + "'";
    connection.query(query, callback);
  },
  get: function(callback) {
    var query = "SELECT * FROM articles ORDER BY id DESC";
    connection.query(query, function(err, rows, fields) {
      if (err) {
        throw new Error("Error getting.");
      } else {
        callback(rows);
      }
    });
  },
  remove: function(id, callback) {
    var query = "DELETE FROM articles WHERE id='" + id + "'";
    connection.query(query, callback);
  }
}
```

The code is a little longer than the MongoDB variant. That's because we needed to construct MySQL queries from the passed data. Keep in mind that we have to escape the information that comes into the module. That's why we use connection.escape(). With these lines of code, our model is completed. Now we can add, edit, remove, or get data. Let's continue with the part that shows the articles to our users.

Developing the client side with AngularJS

Let's assume that there is some data in the database and we are ready to present it to the users. So far, we have only developed the model, which is the class that takes care of the access to the information. To simplify the process, we will use Express here.
We need to first update the package.json file and include the framework, as follows:

```json
"dependencies": {
  "express": "3.4.6",
  "jade": "0.35.0",
  "mongodb": "1.3.20",
  "mysql": "2.0.0"
}
```

We are also adding Jade, because we are going to use it as a template language. Writing markup in plain HTML is not very efficient nowadays. By using a template engine, we can split the data and the HTML markup, which makes our application much better structured. Jade's syntax is kind of similar to HTML. We can write tags without the need to close them:

```jade
body
  p(class="paragraph", data-id="12") Sample text here
  footer
    a(href="#") my site
```

The preceding code snippet is transformed to the following code snippet:

```html
<body>
  <p data-id="12" class="paragraph">Sample text here</p>
  <footer><a href="#">my site</a></footer>
</body>
```

Jade relies on the indentation in the content to distinguish the tags. Let's now look at the project structure. We place our already written class, Articles.js, inside the models directory. The public directory will contain CSS styles and all the necessary client-side JavaScript: the AngularJS library, the AngularJS router module, and our custom code. We will skip some of the explanations about the following code. Our index.js file looks as follows:

```js
var express = require('express');
var app = express();
var articles = require("./models/Articles")();

app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
app.use(express.static(__dirname + '/public'));
app.use(function(req, res, next) {
  req.articles = articles;
  next();
});

app.get('/api/get', require("./controllers/api/get"));
app.get('/', require("./controllers/index"));

app.listen(3000);
console.log('Listening on port 3000');
```

At the beginning, we require the Express framework and our model. Maybe it's better to initialize the model inside the controller, but in our case this is not necessary.
Just after that, we set up some basic options for Express and define our own middleware. It has only one job to do: attach the model to the request object. We are doing this because the request object is passed to all the route handlers. In our case, these handlers are actually the controllers. So, Articles.js becomes accessible everywhere via the req.articles property. At the end of the script, we place two routes. The second one catches the usual requests that come from the users. The first one, /api/get, is a bit more interesting. We want to build our frontend on top of AngularJS. So, the data that is stored in the database should not enter the Node.js part but the client side, where we use Google's framework. To make this possible, we will create routes/controllers to get, add, edit, and delete records. Everything will be controlled by HTTP requests performed by AngularJS. In other words, we need an API. Before we start using AngularJS, let's take a look at the /controllers/api/get.js controller:

```js
module.exports = function(req, res, next) {
  req.articles.get(function(rows) {
    res.send(rows);
  });
}
```

The main job is done by our model and the response is handled by Express. It's nice because if we pass a JavaScript object, as we did (rows is actually an array of objects), the framework sets the response headers automatically. To test the result, we could run the application with node index.js and open http://localhost:3000/api/get. If we don't have any records in the database, we will get an empty array; otherwise, the stored articles will be returned. So, that's the URL that we should hit from within the AngularJS controller in order to get the information. The code of the /controllers/index.js controller is also just a few lines. We can see the code as follows:

```js
module.exports = function(req, res, next) {
  res.render("list", { app: "" });
}
```

It simply renders the list view, which is stored in the list.jade file.
That file should be saved in the /views directory. But before we see its code, we will check another file, which acts as a base for all the pages. Jade has a nice feature called blocks. We may define different partials and combine them into one template. The following is our layout.jade file:

```jade
doctype html
html(ng-app="#{app}")
  head
    title Blog
    link(rel='stylesheet', href='/style.css')
    script(src='/angular.min.js')
    script(src='/angular-route.min.js')
  body
    block content
```

There is only one variable passed to this template, which is #{app}. We will need it later to initialize the administration's module. The angular.min.js and angular-route.min.js files should be downloaded from the official AngularJS site and placed in the /public directory. The body of the page contains a block placeholder called content, which we will later fill with the list of the articles. The following is the list.jade file:

```jade
extends layout

block content
  .container(ng-controller="BlogCtrl")
    section.articles
      article(ng-repeat="article in articles")
        h2 {{article.title}}
        br
        small published on {{article.date}}
        p {{article.text}}
  script(src='/blog.js')
```

The two lines at the beginning combine both templates into one page. The Express framework transforms the Jade template into HTML and serves it to the browser of the user. From there, the client-side JavaScript takes control. We are using the ng-controller directive, saying that the div element will be controlled by an AngularJS controller called BlogCtrl. The same controller should have a variable, articles, filled with the information from the database. ng-repeat goes through the array and displays the content to the users.
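The {{...}} expressions in the template are, at their core, placeholders replaced with values from the scope. A rough sketch of the idea in plain JavaScript follows; Angular's real compiler is far more capable (nested paths, filters, live updates), and the interpolate helper below is a made-up, flat-keys-only illustration:

```javascript
// Naive {{property}} interpolation against a data object.
// Only a mental model of what AngularJS does with template bindings.
function interpolate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function(match, key) {
    return data[key];
  });
}

var article = { title: "Hello", date: "2014-1-15", text: "First post" };
console.log(interpolate("<h2>{{title}}</h2><small>published on {{date}}</small>", article));
// <h2>Hello</h2><small>published on 2014-1-15</small>
```

The framework does this for every item that ng-repeat produces, and keeps the output in sync whenever the underlying object changes.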
The blog.js file holds the code of the controller:

```js
function BlogCtrl($scope, $http) {
  $scope.articles = [
    { title: "", text: "Loading ..." }
  ];
  $http({method: 'GET', url: '/api/get'})
  .success(function(data, status, headers, config) {
    $scope.articles = data;
  })
  .error(function(data, status, headers, config) {
    console.error("Error getting articles.");
  });
}
```

The controller has two dependencies. The first one, $scope, points to the current view. Whatever we assign as a property there is available as a variable in our HTML markup. Initially, we add only one element, which doesn't have a title but has text. It is shown to indicate that we are still loading the articles from the database. The second dependency, $http, provides an API in order to make HTTP requests. So, all we have to do is query /api/get, fetch the data, and pass it to the $scope dependency. The rest is done by AngularJS and its magical two-way data binding. To make the application a little more interesting, we will add a search field, as follows:

```jade
// views/list.jade
header
  .search
    input(type="text", placeholder="type a filter here", ng-model="filterText")
  h1 Blog
  hr
```

The ng-model directive binds the value of the input field to a variable inside our $scope dependency. However, this time, we don't have to edit our controller and can simply apply the same variable as a filter to ng-repeat:

```jade
article(ng-repeat="article in articles | filter:filterText")
```

As a result, the articles shown will be filtered based on the user's input. Two simple additions, but something really valuable is on the page. The filters of AngularJS can be very powerful.

Implementing a control panel

The control panel is the place where we will manage the articles of the blog. Several things should be done in the backend before continuing with the user interface.
They are as follows:

```js
app.set("username", "admin");
app.set("password", "pass");
app.use(express.cookieParser('blog-application'));
app.use(express.session());
```

The previous lines of code should be added to /index.js. Our administration should be protected, so the first two lines define our credentials. We are using Express as a data store, simply creating key-value pairs. Later, if we need the username, we can get it with app.get("username"). The next two lines enable session support. We need that because of the login process. We already added a middleware that attaches the articles to the request object. We will do the same with the current user's status, as follows:

```js
app.use(function(req, res, next) {
  if ((req.session && req.session.admin === true) ||
      (req.body &&
       req.body.username === app.get("username") &&
       req.body.password === app.get("password"))) {
    req.logged = true;
    req.session.admin = true;
  }
  next();
});
```

Our if statement is a little long, but it tells us whether the user is logged in or not. The first part checks whether there is a session created and the second one checks whether the user submitted a form with the correct username and password. If either of these expressions is true, then we attach a variable, logged, to the request object and create a session that will be valid during the following requests. There is only one thing left that we need in the main application's file: a few routes that will handle the control panel operations.
In the following code, we are defining them along with the needed route handler:

```js
var protect = function(req, res, next) {
  if (req.logged) {
    next();
  } else {
    res.send(401, 'No Access.');
  }
}
app.post('/api/add', protect, require("./controllers/api/add"));
app.post('/api/edit', protect, require("./controllers/api/edit"));
app.post('/api/delete', protect, require("./controllers/api/delete"));
app.all('/admin', require("./controllers/admin"));
```

The three routes that start with /api will use the Articles.js model to add, edit, and remove articles from the database. These operations should be protected, so we add a middleware function that takes care of this: if the req.logged variable is not available, it simply responds with a 401 Unauthorized status code. The last route, /admin, is a little different because it shows a login form instead. The following is the controller to create new articles:

```js
module.exports = function(req, res, next) {
  req.articles.add(req.body, function() {
    res.send({success: true});
  });
}
```

We transfer most of the logic to the frontend, so again, there are just a few lines. What is interesting here is that we pass req.body directly to the model. It actually contains the data submitted by the user. The following code is how the req.articles.add method looks in the MongoDB implementation:

```js
add: function(data, callback) {
  data.id = crypto.randomBytes(20).toString('hex');
  collection.insert(data, {}, callback || function() {});
}
```

And the MySQL implementation is as follows:

```js
add: function(data, callback) {
  var date = new Date();
  var query = "";
  query += "INSERT INTO articles (title, text, date) VALUES (";
  query += connection.escape(data.title) + ", ";
  query += connection.escape(data.text) + ", ";
  // getMonth() is zero-based, so add 1 to get the calendar month
  query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'";
  query += ")";
  connection.query(query, callback);
}
```

In both cases, we need title and text in the passed data object.
Thanks to Express' bodyParser middleware, this is what we have in the req.body object. We can directly forward it to the model. The other route handlers are almost the same:

```js
// api/edit.js
module.exports = function(req, res, next) {
  req.articles.update(req.body, function() {
    res.send({success: true});
  });
}
```

What we changed is the method of the Articles.js class: it is not add but update. The same technique is applied in the route to delete an article. We can see it as follows:

```js
// api/delete.js
module.exports = function(req, res, next) {
  req.articles.remove(req.body.id, function() {
    res.send({success: true});
  });
}
```

What we need for deletion is not the whole body of the request but only the unique ID of the record. Every API method sends {success: true} as a response. While we are dealing with API requests, we should always return a response, even if something goes wrong. The last thing in the Node.js part that we have to cover is the controller responsible for the user interface of the administration panel, that is, the ./controllers/admin.js file:

```js
module.exports = function(req, res, next) {
  if (req.logged) {
    res.render("admin", { app: "admin" });
  } else {
    res.render("login", { app: "" });
  }
}
```

There are two templates that can be rendered: /views/admin.jade and /views/login.jade. Based on the variable that we set in /index.js, the script decides which one to show. If the user is not logged in, then a login form is sent to the browser, as follows:

```jade
extends layout

block content
  .container
    header
      h1 Administration
      hr
    section.articles
      article
        form(method="post", action="/admin")
          span Username:
          br
          input(type="text", name="username")
          br
          span Password:
          br
          input(type="password", name="password")
          br
          br
          input(type="submit", value="login")
```

There is no AngularJS code here. All we have is the good old HTML form, which submits its data via POST to the same URL, /admin.
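When this form is submitted, the middleware we defined earlier decides whether to let the user in. Its condition can be isolated into a small, testable predicate; the isAuthorized helper below is our own sketch mirroring that logic, not part of the application's files:

```javascript
// Mirrors the middleware's condition: either an existing admin
// session, or a submitted body with the correct credentials.
function isAuthorized(session, body, credentials) {
  if (session && session.admin === true) return true;
  return !!(body &&
    body.username === credentials.username &&
    body.password === credentials.password);
}

var creds = { username: "admin", password: "pass" };
console.log(isAuthorized({ admin: true }, null, creds));                         // true
console.log(isAuthorized(null, { username: "admin", password: "pass" }, creds)); // true
console.log(isAuthorized(null, { username: "admin", password: "nope" }, creds)); // false
```

Extracting the check this way makes it easy to unit test the authorization rules without spinning up an Express server.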
If the username and password are correct, the req.logged variable is set to true and the controller renders the other template:

```jade
extends layout

block content
  .container
    header
      h1 Administration
      hr
      a(href="/") Public
      span |
      a(href="#/") List
      span |
      a(href="#/add") Add
    section(ng-view)
  script(src='/admin.js')
```

The control panel needs several views to handle all the operations. AngularJS has a great router module, which works with hashtag-type URLs, that is, URLs such as /admin#/add. The same module requires a placeholder for the different partials. In our case, this is a section tag. The ng-view attribute tells the framework that this is the element prepared for that logic. At the end of the template, we add an external file, which keeps the whole client-side JavaScript code that is needed by the control panel. While the client-side part of the application needs only the loading of the articles, the control panel requires a lot more functionality. It is good to use the modular system of AngularJS. We need the routes and views to change, so the ngRoute module is needed as a dependency. This module is not included in the main angular.min.js build; it is placed in the angular-route.min.js file. The following code shows how our module starts:

```js
var admin = angular.module('admin', ['ngRoute']);
admin.config(['$routeProvider',
  function($routeProvider) {
    $routeProvider
    .when('/', {})
    .when('/add', {})
    .when('/edit/:id', {})
    .when('/delete/:id', {})
    .otherwise({
      redirectTo: '/'
    });
  }
]);
```

We configured the router by mapping URLs to specific routes. At the moment, the routes are just empty objects, but we will fix that shortly. Every controller will need to make HTTP requests to the Node.js part of the application. It will be nice if we have such a service and use it all over our code.
We can see an example as follows:

```js
admin.factory('API', function($http) {
  var request = function(method, url) {
    return function(callback, data) {
      $http({
        method: method,
        url: url,
        data: data
      })
      .success(callback)
      .error(function(data, status, headers, config) {
        console.error("Error requesting '" + url + "'.");
      });
    }
  }
  return {
    get: request('GET', '/api/get'),
    add: request('POST', '/api/add'),
    edit: request('POST', '/api/edit'),
    remove: request('POST', '/api/delete')
  }
});
```

One of the best things about AngularJS is that it works with plain JavaScript objects. There are no unnecessary abstractions and no extending or inheriting special classes. We are using the .factory method to create a simple JavaScript object. It has four methods that can be called: get, add, edit, and remove. Each one of them calls a function, which is defined in the helper method request. The service has only one dependency, $http. We already know this module; it handles HTTP requests nicely. The URLs that we are going to query are the same ones that we defined in the Node.js part. Now, let's create a controller that will show the articles currently stored in the database. First, we should replace the empty route object .when('/', {}) with the following object:

```js
.when('/', {
  controller: 'ListCtrl',
  template: '\
    <article ng-repeat="article in articles">\
      <hr />\
      <strong>{{article.title}}</strong><br />\
      (<a href="#/edit/{{article.id}}">edit</a>)\
      (<a href="#/delete/{{article.id}}">remove</a>)\
    </article>\
  '
})
```

The object has to contain a controller and a template. The template is nothing more than a few lines of HTML markup. It looks a bit like the template used to show the articles on the client side. The difference is the links used to edit and delete. JavaScript doesn't allow literal new lines in string definitions, so the backslashes at the end of the lines prevent the syntax errors that would otherwise be thrown by the browser. The following is the code for the controller.
It is defined, again, in the module: admin.controller('ListCtrl', function($scope, API) { API.get(function(articles) { $scope.articles = articles; }); });   And here is the beauty of the AngularJS dependency injection. Our custom-defined service API is automatically initialized and passed to the controller. The .get method fetches the articles from the database. Later, we send the information to the current $scope dependency and the two-way data binding does the rest. The articles are shown on the page. The work with AngularJS is so easy that we could combine the controller to add and edit in one place. Let's store the route object in an external variable, as follows: var AddEditRoute = { controller: 'AddEditCtrl', template: ' <hr /> <article> <form> <span>Title</span><br /> <input type="text" ng-model="article.title"/><br /> <span>Text</span><br /> <textarea rows="7" ng-model="article.text"></textarea> <br /><br /> <button ng-click="save()">save</button> </form> </article> ' };   And later, assign it to both routes, as follows: .when('/add', AddEditRoute) .when('/edit/:id', AddEditRoute)   The template is just a form with the necessary fields and a button, which calls the save method in the controller. Notice that we bound the input field and the text area to variables inside the $scope dependency. This comes in handy because we don't need to access the DOM to get the values. We can see this as follows: admin.controller( 'AddEditCtrl', function($scope, API, $location, $routeParams) { var editMode = $routeParams.id ? true : false; if (editMode) { API.get(function(articles) { articles.forEach(function(article) { if (article.id == $routeParams.id) { $scope.article = article; } }); }); } $scope.save = function() { API[editMode ? 'edit' : 'add'](function() { $location.path('/'); }, $scope.article); } })   The controller receives four dependencies. We already know about $scope and API. 
The $location dependency is used when we want to change the current route, or, in other words, to forward the user to another view. The $routeParams dependency is needed to fetch parameters from the URL. In our case, /edit/:id is a route with a variable inside. Inside the code, the id is available in $routeParams.id. The adding and editing of articles uses the same form. So, with a simple check, we know what the user is currently doing. If the user is in the edit mode, then we fetch the article based on the provided id and fill the form. Otherwise, the fields are empty and new records will be created. The deletion of an article can be done by using a similar approach, which is adding a route object and defining a new controller. We can see the deletion as follows: .when('/delete/:id', { controller: 'RemoveCtrl', template: ' ' })   We don't need a template in this case. Once the article is deleted from the database, we will forward the user to the list page. We have to call the remove method of the API. Here is what the RemoveCtrl controller looks like: admin.controller( 'RemoveCtrl', function($scope, $location, $routeParams, API) { API.remove(function() { $location.path('/'); }, $routeParams); } );   The preceding code uses the same dependencies as the previous controller. This time, we simply forward the $routeParams dependency to the API. And because it is a plain JavaScript object, everything works as expected. Summary In this article, we built a simple blog by writing the backend of the application in Node.js. The module for database communication, which we wrote, can work with the MongoDB or MySQL database and store articles. The client-side part and the control panel of the blog were developed with AngularJS. We then defined a custom service using the built-in HTTP and routing mechanisms. Node.js works well with AngularJS, mainly because both are written in JavaScript. We found out that AngularJS is built to support the developer. 
It removes all those boring tasks such as DOM element referencing, attaching event listeners, and so on. It's a great choice for the modern client-side coding stack. You can refer to the following books to learn more about Node.js: Node.js Essentials Learning Node.js for Mobile Application Development Node.js Design Patterns
Types, Variables, and Function Techniques

Packt
16 Feb 2016
39 min read
This article is an introduction to the syntax used in the TypeScript language to apply strong typing to JavaScript. It is intended for readers that have not used TypeScript before, and covers the transition from standard JavaScript to TypeScript. We will cover the following topics in this article: Basic types and type syntax: strings, numbers, and booleans Inferred typing and duck-typing Arrays and enums The any type and explicit casting Functions and anonymous functions Optional and default function parameters Argument arrays Function callbacks and function signatures Function scoping rules and overloads (For more resources related to this topic, see here.) Basic types JavaScript variables can hold a number of data types, including numbers, strings, arrays, objects, functions, and more. The type of an object in JavaScript is determined by its assignment–so if a variable has been assigned a string value, then it will be of type string. This can, however, introduce a number of problems in our code. JavaScript is not strongly typed JavaScript objects and variables can be changed or reassigned on the fly. As an example of this, consider the following JavaScript code: var myString = "test"; var myNumber = 1; var myBoolean = true; We start by defining three variables, named myString, myNumber and myBoolean. The myString variable is set to a string value of "test", and as such will be of type string. Similarly, myNumber is set to the value of 1, and is therefore of type number, and myBoolean is set to true, making it of type boolean. Now let's start assigning these variables to each other, as follows: myString = myNumber; myBoolean = myString; myNumber = myBoolean; We start by setting the value of myString to the value of myNumber (which is the numeric value of 1). We then set the value of myBoolean to the value of myString, (which would now be the numeric value of 1). Finally, we set the value of myNumber to the value of myBoolean. 
What is happening here is that even though we started out with three different types of variables—a string, a number, and a boolean—we are able to reassign any of these variables to one of the other types. We can assign a number to a string, a string to boolean, or a boolean to a number. While this type of assignment in JavaScript is legal, it shows that the JavaScript language is not strongly typed. This can lead to unwanted behaviour in our code. Parts of our code may be relying on the fact that a particular variable is holding a string, and if we inadvertently assign a number to this variable, our code may start to break in unexpected ways. TypeScript is strongly typed TypeScript, on the other hand, is a strongly typed language. Once you have declared a variable to be of type string, you can only assign string values to it. All further code that uses this variable must treat it as though it has a type of string. This helps to ensure that code that we write will behave as expected. While strong typing may not seem to be of any use with simple strings and numbers—it certainly does become important when we apply the same rules to objects, groups of objects, function definitions and classes. If you have written a function that expects a string as the first parameter and a number as the second, you cannot be blamed if someone calls your function with a boolean as the first parameter and something else as the second. JavaScript programmers have always relied heavily on documentation to understand how to call functions, and the order and type of the correct function parameters. But what if we could take all of this documentation and include it within the IDE? Then, as we write our code, our compiler could point out to us—automatically—that we were using objects and functions in the wrong way. Surely this would make us more efficient, more productive programmers, allowing us to generate code with fewer errors? TypeScript does exactly that. 
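As a small sketch of what this compiler-checked documentation looks like in practice (the formatPrice function, its parameters, and its return value are our own illustration, not part of the original samples):

```typescript
// A hypothetical function whose signature documents itself: the
// compiler now enforces what prose documentation used to describe.
function formatPrice(amount: number, currency: string): string {
    return currency + amount.toFixed(2);
}

var label = formatPrice(9.5, "$");   // OK: the argument types match
// formatPrice("$", 9.5);            // rejected at compile time:
//                                   // the arguments are swapped
console.log(label);
```

A caller who swaps the arguments is told so immediately by the compiler, rather than discovering the bug at runtime.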
It introduces a very simple syntax to define the type of a variable or a function parameter to ensure that we are using these objects, variables, and functions in the correct manner. If we break any of these rules, the TypeScript compiler will automatically generate errors, pointing us to the lines of code that are in error. This is how TypeScript got its name. It is JavaScript with strong typing - hence TypeScript. Let's take a look at this very simple language syntax that enables the "Type" in TypeScript. Type syntax The TypeScript syntax for declaring the type of a variable is to include a colon (:), after the variable name, and then indicate its type. Consider the following TypeScript code: var myString : string = "test"; var myNumber: number = 1; var myBoolean : boolean = true; This code snippet is the TypeScript equivalent of our preceding JavaScript code. We can now see an example of the TypeScript syntax for declaring a type for the myString variable. By including a colon and then the keyword string (: string), we are telling the compiler that the myString variable is of type string. Similarly, the myNumber variable is of type number, and the myBoolean variable is of type boolean. TypeScript has introduced the string, number and boolean keywords for each of these basic JavaScript types. If we attempt to assign a value to a variable that is not of the same type, the TypeScript compiler will generate a compile-time error. Given the variables declared in the preceding code, the following TypeScript code will generate some compile errors: myString = myNumber; myBoolean = myString; myNumber = myBoolean; TypeScript build errors when assigning incorrect types The TypeScript compiler is generating compile errors, because we are attempting to mix these basic types. The first error is generated by the compiler because we cannot assign a number value to a variable of type string. 
Similarly, the second compile error indicates that we cannot assign a string value to a variable of type boolean. Again, the third error is generated because we cannot assign a boolean value to a variable of type number. The strong typing syntax that the TypeScript language introduces, means that we need to ensure that the types on the left-hand side of an assignment operator (=) are the same as the types on the right-hand side of the assignment operator. To fix the preceding TypeScript code, and remove the compile errors, we would need to do something similar to the following: myString = myNumber.toString(); myBoolean = (myString === "test"); if (myBoolean) { myNumber = 1; } Our first line of code has been changed to call the .toString() function on the myNumber variable (which is of type number), in order to return a value that is of type string. This line of code, then, does not generate a compile error because both sides of the equal sign are of the same type. Our second line of code has also been changed so that the right hand side of the assignment operator returns the result of a comparison, myString === "test", which will return a value of type boolean. The compiler will therefore allow this code, because both sides of the assignment resolve to a value of type boolean. The last line of our code snippet has been changed to only assign the value 1 (which is of type number) to the myNumber variable, if the value of the myBoolean variable is true. Anders Hejlsberg describes this feature as "syntactic sugar". With a little sugar on top of comparable JavaScript code, TypeScript has enabled our code to conform to strong typing rules. Whenever you break these strong typing rules, the compiler will generate errors for your offending code. Inferred typing TypeScript also uses a technique called inferred typing, in cases where you do not explicitly specify the type of your variable. 
In other words, TypeScript will find the first usage of a variable within your code, figure out what type the variable is first initialized to, and then assume the same type for this variable in the rest of your code block. As an example of this, consider the following code: var myString = "this is a string"; var myNumber = 1; myNumber = myString; We start by declaring a variable named myString, and assign a string value to it. TypeScript identifies that this variable has been assigned a value of type string, and will, therefore, infer any further usages of this variable to be of type string. Our second variable, named myNumber has a number assigned to it. Again, TypeScript is inferring the type of this variable to be of type number. If we then attempt to assign the myString variable (of type string) to the myNumber variable (of type number) in the last line of code, TypeScript will generate a familiar error message: error TS2011: Build: Cannot convert 'string' to 'number' This error is generated because of TypeScript's inferred typing rules. Duck-typing TypeScript also uses a method called duck-typing for more complex variable types. Duck-typing means that if it looks like a duck, and quacks like a duck, then it probably is a duck. Consider the following TypeScript code: var complexType = { name: "myName", id: 1 }; complexType = { id: 2, name: "anotherName" }; We start with a variable named complexType that has been assigned a simple JavaScript object with a name and id property. On our second line of code, we can see that we are re-assigning the value of this complexType variable to another object that also has an id and a name property. The compiler will use duck-typing in this instance to figure out whether this assignment is valid. In other words, if an object has the same set of properties as another object, then they are considered to be of the same type. 
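Duck-typing also applies when we pass objects to functions; a quick sketch of our own (the describe function is hypothetical, not from the original samples):

```typescript
// Duck-typing on a function parameter: any object whose shape
// matches { name: string; id: number } is accepted.
function describe(item: { name: string; id: number }): string {
    return item.id + ": " + item.name;
}

console.log(describe({ name: "myName", id: 1 }));   // "1: myName"
// describe({ name: "myName" });                    // compile error:
//                                                  // id is missing
```

The compiler checks the shape of the argument against the parameter's declared shape, exactly as it does for variable assignments.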
To further illustrate this point, let's see how the compiler reacts if we attempt to assign an object to our complexType variable that does not conform to this duck-typing: var complexType = { name: "myName", id: 1 }; complexType = { id: 2 }; complexType = { name: "anotherName" }; complexType = { address: "address" }; The first line of this code snippet defines our complexType variable, and assigns to it an object that contains both an id and name property. From this point, TypeScript will use this inferred type on any value we attempt to assign to the complexType variable. On our second line of code, we are attempting to assign a value that has an id property but not the name property. On the third line of code, we again attempt to assign a value that has a name property, but does not have an id property. On the last line of our code snippet, we have completely missed the mark. Compiling this code will generate the following errors: error TS2012: Build: Cannot convert '{ id: number; }' to '{ name: string; id: number; }': error TS2012: Build: Cannot convert '{ name: string; }' to '{ name: string; id: number; }': error TS2012: Build: Cannot convert '{ address: string; }' to '{ name: string; id: number; }': As we can see from the error messages, TypeScript is using duck-typing to ensure type safety. In each message, the compiler gives us clues as to what is wrong with the offending code – by explicitly stating what it is expecting. The complexType variable has both an id and a name property. To assign a value to the complexType variable, then, this value will need to have both an id and a name property. Working through each of these errors, TypeScript is explicitly stating what is wrong with each line of code. 
Note that the following code will not generate any error messages: var complexType = { name: "myName", id: 1 }; complexType = { name: "name", id: 2, address: "address" }; Again, our first line of code defines the complexType variable, as we have seen previously, with an id and a name property. Now, look at the second line of this example. The object we are using actually has three properties: name, id, and address. Even though we have added a new address property, the compiler will only check to see if our new object has both an id and a name. Because our new object has these properties, and will therefore match the original type of the variable, TypeScript will allow this assignment through duck-typing. Inferred typing and duck-typing are powerful features of the TypeScript language – bringing strong typing to our code, without the need to use explicit typing, that is, a colon : and then the type specifier syntax. Arrays Besides the base JavaScript types of string, number, and boolean, TypeScript has two other data types: Arrays and enums. Let's look at the syntax for defining arrays. An array is simply marked with the [] notation, similar to JavaScript, and each array can be strongly typed to hold a specific type as seen in the code below: var arrayOfNumbers: number[] = [1, 2, 3]; arrayOfNumbers = [3, 4, 5]; arrayOfNumbers = ["one", "two", "three"]; On the first line of this code snippet, we are defining an array named arrayOfNumbers, and further specify that each element of this array must be of type number. The second line then reassigns this array to hold some different numerical values. The last line of this snippet, however, will generate the following error message: error TS2012: Build: Cannot convert 'string[]' to 'number[]': This error message is warning us that the variable arrayOfNumbers is strongly typed to only accept values of type number. 
Our code tries to assign an array of strings to this array of numbers, and is therefore, generating a compile error. The any type All this type checking is well and good, but JavaScript is flexible enough to allow variables to be mixed and matched. The following code snippet is actually valid JavaScript code: var item1 = { id: 1, name: "item 1" }; item1 = { id: 2 }; Our first line of code assigns an object with an id property and a name property to the variable item1. The second line then re-assigns this variable to an object that has an id property but not a name property. Unfortunately, as we have seen previously, TypeScript will generate a compile time error for the preceding code: error TS2012: Build: Cannot convert '{ id: number; }' to '{ id: number; name: string; }' TypeScript introduces the any type for such occasions. Specifying that an object has a type of any in essence relaxes the compiler's strict type checking. The following code shows how to use the any type: var item1 : any = { id: 1, name: "item 1" }; item1 = { id: 2 }; Note how our first line of code has changed. We specify the type of the variable item1 to be of type : any so that our code will compile without errors. Without the type specifier of : any, the second line of code, would normally generate an error. Explicit casting As with any strongly typed language, there comes a time where you need to explicitly specify the type of an object. An object can be cast to the type of another by using the < > syntax. This is not a cast in the strictest sense of the word; it is more of an assertion that is used at runtime by the TypeScript compiler. Any explicit casting that you use will be compiled away in the resultant JavaScript and will not affect the code at runtime. 
Let's modify our previous code snippet to use explicit casting: var item1 = <any>{ id: 1, name: "item 1" }; item1 = { id: 2 }; Note that on the first line of this snippet, we have now replaced the : any type specifier on the left hand side of the assignment, with an explicit cast of <any> on the right hand side. This snippet of code is telling the compiler to explicitly cast, or to explicitly treat the { id: 1, name: "item 1" } object on the right-hand side as a type of any. So the item1 variable, therefore, also has the type of any (due to TypeScript's inferred typing rules). This then allows us to assign an object with only the { id: 2 } property to the variable item1 on the second line of code. This technique of using the < > syntax on the right hand side of an assignment, is called explicit casting. While the any type is a necessary feature of the TypeScript language – its usage should really be limited as much as possible. It is a language shortcut that is necessary to ensure compatibility with JavaScript, but over-use of the any type will quickly lead to coding errors that will be difficult to find. Rather than using the type any, try to figure out the correct type of the object you are using, and then use this type instead. We use an acronym within our programming teams: S.F.I.A.T. (pronounced sviat or sveat). Simply Find an Interface for the Any Type. While this may sound silly – it brings home the point that the any type should always be replaced with an interface – so simply find it. Just remember that by actively trying to define what an object's type should be, we are building strongly typed code, and therefore protecting ourselves from future coding errors and bugs. Enums Enums are a special type that has been borrowed from other languages such as C#, and provide a solution to the problem of special numbers. An enum associates a human-readable name for a specific number. 
Consider the following code: enum DoorState { Open, Closed, Ajar } In this code snippet, we have defined an enum called DoorState to represent the state of a door. Valid values for this door state are Open, Closed, or Ajar. Under the hood (in the generated JavaScript), TypeScript will assign a numeric value to each of these human-readable enum values. In this example, the DoorState.Open enum value will equate to a numeric value of 0. Likewise, the enum value DoorState.Closed will equate to the numeric value of 1, and the DoorState.Ajar enum value will equate to 2. Let's have a quick look at how we would use these enum values: window.onload = () => { var myDoor = DoorState.Open; console.log("My door state is " + myDoor.toString()); }; The first line within the window.onload function creates a variable named myDoor, and sets its value to DoorState.Open. The second line simply logs the value of myDoor to the console. The output of this console.log function would be: My door state is 0 This clearly shows that the TypeScript compiler has substituted the enum value of DoorState.Open with the numeric value 0. Now let's use this enum in a slightly different way: window.onload = () => { var openDoor = DoorState["Closed"]; console.log("My door state is " + openDoor.toString()); }; This code snippet uses a string value of "Closed" to look up the enum type, and assign the resulting enum value to the openDoor variable. The output of this code would be: My door state is 1 This sample clearly shows that the enum value of DoorState.Closed is the same as the enum value of DoorState["Closed"], because both variants resolve to the numeric value of 1. Finally, let's have a look at what happens when we reference an enum using an array type syntax: window.onload = () => { var ajarDoor = DoorState[2]; console.log("My door state is " + ajarDoor.toString()); }; Here, we assign the variable ajarDoor to an enum value based on the 2nd index value of the DoorState enum. 
The output of this code, though, is surprising: My door state is Ajar You may have been expecting the output to be simply 2, but here we are getting the string "Ajar" – which is a string representation of our original enum name. This is actually a neat little trick – allowing us to access a string representation of our enum value. The reason that this is possible is down to the JavaScript that has been generated by the TypeScript compiler. Let's have a look, then, at the closure that the TypeScript compiler has generated: var DoorState; (function (DoorState) { DoorState[DoorState["Open"] = 0] = "Open"; DoorState[DoorState["Closed"] = 1] = "Closed"; DoorState[DoorState["Ajar"] = 2] = "Ajar"; })(DoorState || (DoorState = {})); This strange looking syntax is building an object that has a specific internal structure. It is this internal structure that allows us to use this enum in the various ways that we have just explored. If we interrogate this structure while debugging our JavaScript, we will see the internal structure of the DoorState object is as follows: DoorState {...} [prototype]: {...} [0]: "Open" [1]: "Closed" [2]: "Ajar" [prototype]: [] Ajar: 2 Closed: 1 Open: 0 The DoorState object has a property called "0", which has a string value of "Open". Unfortunately, in JavaScript the number 0 is not a valid property name, so we cannot access this property by simply using DoorState.0. Instead, we must access this property using either DoorState[0] or DoorState["0"]. The DoorState object also has a property named Open, which is set to the numeric value 0. The word Open IS a valid property name in JavaScript, so we can access this property using DoorState["Open"], or simply DoorState.Open, which equate to the same property in JavaScript. While the underlying JavaScript can be a little confusing, all we need to remember about enums is that they are a handy way of defining an easily remembered, human-readable name to a special number. 
Using human-readable enums, instead of just scattering various special numbers around in our code, also makes the intent of the code clearer. Using an application-wide value named DoorState.Open or DoorState.Closed is far simpler than remembering to set a value to 0 for Open, 1 for Closed, and 2 for Ajar. As well as making our code more readable, and more maintainable, using enums also protects our code base whenever these special numeric values change – because they are all defined in one place. One last note on enums – we can set the numeric value manually, if need be: enum DoorState { Open = 3, Closed = 7, Ajar = 10 } Here, we have overridden the default values of the enum to set DoorState.Open to 3, DoorState.Closed to 7, and DoorState.Ajar to 10. Const enums With the release of TypeScript 1.4, we are also able to define const enums as follows: const enum DoorStateConst { Open, Closed, Ajar } var myState = DoorStateConst.Open;   These types of enums have been introduced largely for performance reasons, and the resultant JavaScript will not contain the full closure definition for the DoorStateConst enum as we saw previously. Let's have a quick look at the JavaScript that is generated from this DoorStateConst enum: var myState = 0 /* Open */; Note how we do not have a full JavaScript closure for the DoorStateConst at all. The compiler has simply resolved the DoorStateConst.Open enum to its internal value of 0, and removed the const enum definition entirely. With const enums, we therefore cannot reference the internal string value of an enum, as we did in our previous code sample. Consider the following example: // generates an error console.log(DoorStateConst[0]); // valid usage console.log(DoorStateConst["Open"]); The first console.log statement will now generate a compile time error – as we do not have the full closure available with the property of [0] for our const enum. 
The second usage of this const enum is valid, however, and will generate the following JavaScript: console.log(0 /* "Open" */); When using const enums, just keep in mind that the compiler will strip away all enum definitions and simply substitute the numeric value of the enum directly into our JavaScript code. Functions JavaScript defines functions using the function keyword, a set of parentheses, and then a set of curly braces. A typical JavaScript function would be written as follows: function addNumbers(a, b) { return a + b; } var result = addNumbers(1, 2); var result2 = addNumbers("1", "2"); This code snippet is fairly self-explanatory; we have defined a function named addNumbers that takes two variables and returns their sum. We then invoke this function, passing in the values of 1 and 2. The value of the variable result would then be 1 + 2, which is 3. Now have a look at the last line of code. Here, we are invoking the addNumbers function, passing in two strings as arguments, instead of numbers. The value of the variable result2 would then be a string, "12". This string value seems like it may not be the desired result, as the name of the function is addNumbers. Copying the preceding code into a TypeScript file would not generate any errors, but let's insert some type rules to the preceding JavaScript to make it more robust: function addNumbers(a: number, b: number): number { return a + b; }; var result = addNumbers(1, 2); var result2 = addNumbers("1", "2"); In this TypeScript code, we have added a :number type to both of the parameters of the addNumbers function (a and b), and we have also added a :number type just after the ( ) parentheses. Placing a type descriptor here means that the return type of the function itself is strongly typed to return a value of type number. 
In TypeScript, the last line of code, however, will cause a compilation error: error TS2082: Build: Supplied parameters do not match any signature of call target: This error message is generated because we have explicitly stated that the function should accept only numbers for both of the arguments a and b, but in our offending code, we are passing two strings. The TypeScript compiler, therefore, cannot match the signature of a function named addNumbers that accepts two arguments of type string. Anonymous functions The JavaScript language also has the concept of anonymous functions. These are functions that are defined on the fly and don't specify a function name. Consider the following JavaScript code: var addVar = function(a, b) { return a + b; }; var result = addVar(1, 2); This code snippet defines a function that has no name and adds two values. Because the function does not have a name, it is known as an anonymous function. This anonymous function is then assigned to a variable named addVar. The addVar variable can then be invoked as a function with two parameters, and the return value will be the result of executing the anonymous function. In this case, the variable result will have a value of 3. Let's now rewrite the preceding JavaScript function in TypeScript, and add some type syntax, in order to ensure that the function only accepts two arguments of type number, and returns a value of type number: var addVar = function(a: number, b: number): number { return a + b; } var result = addVar(1, 2); var result2 = addVar("1", "2"); In this code snippet, we have created an anonymous function that accepts only arguments of type number for the parameters a and b, and also returns a value of type number. The types for both the a and b parameters, as well as the return type of the function, are now using the :number syntax. This is another example of the simple "syntactic sugar" that TypeScript injects into the language. 
If we compile this code, TypeScript will reject the code on the last line, where we try to call our anonymous function with two string parameters: error TS2082: Build: Supplied parameters do not match any signature of call target: Optional parameters When we call a JavaScript function that is expecting parameters, and we do not supply these parameters, then the value of the parameter within the function will be undefined. As an example of this, consider the following JavaScript code: var concatStrings = function(a, b, c) { return a + b + c; } console.log(concatStrings("a", "b", "c")); console.log(concatStrings("a", "b")); Here, we have defined a function called concatStrings that takes three parameters, a, b, and c, and simply returns the sum of these values. If we call this function with all three parameters, as seen in the second last line of this snippet, we will end up with the string "abc" logged to the console. If, however, we only supply two parameters, as seen in the last line of this snippet, the string "abundefined" will be logged to the console. Again, if we call a function and do not supply a parameter, then this parameter, c in our case, will be simply undefined. TypeScript introduces the question mark ? syntax to indicate optional parameters. Consider the following TypeScript function definition: var concatStrings = function(a: string, b: string, c?: string) { return a + b + c; } console.log(concatStrings("a", "b", "c")); console.log(concatStrings("a", "b")); console.log(concatStrings("a")); This is a strongly typed version of the original concatStrings JavaScript function that we were using previously. Note the addition of the ? character in the syntax for the third parameter: c?: string. This indicates that the third parameter is optional, and therefore, all of the preceding code will compile cleanly, except for the last line. 
The last line will generate an error: error TS2081: Build: Supplied parameters do not match any signature of call target. This error is generated because we are attempting to call the concatStrings function with only a single parameter. Our function definition, though, requires at least two parameters, with only the third parameter being optional. Optional parameters must be the last parameters in the function definition. You can have as many optional parameters as you want, as long as non-optional parameters precede the optional parameters. Default parameters A subtle variant on the optional parameter function definition allows us to specify the value of a parameter if it is not passed in as an argument from the calling code. Let's modify our preceding function definition to use a default parameter: var concatStrings = function(a: string, b: string, c: string = "c") { return a + b + c; } console.log(concatStrings("a", "b", "c")); console.log(concatStrings("a", "b")); This function definition has now dropped the ? optional parameter syntax, but instead has assigned a value of "c" to the last parameter: c: string = "c". By using default parameters, if we do not supply a value for the final parameter named c, the concatStrings function will substitute the default value of "c" instead. The argument c, therefore, will not be undefined. The output of the last two lines of code will both be "abc". Note that using the default parameter syntax will automatically make the parameter optional. The arguments variable The JavaScript language allows a function to be called with a variable number of arguments. Every JavaScript function has access to a special variable, named arguments, that can be used to retrieve all arguments that have been passed into the function. 
As an example of this, consider the following JavaScript code: function testParams() { if (arguments.length > 0) { for (var i = 0; i < arguments.length; i++) { console.log("Argument " + i + " = " + arguments[i]); } } } testParams(1, 2, 3, 4); testParams("first argument"); In this code snippet, we have defined a function named testParams that does not have any named parameters. Note, though, that we can use the special variable, named arguments, to test whether the function was called with any arguments. In our sample, we can simply loop through the arguments array, and log the value of each argument to the console, by using an array indexer: arguments[i]. The output of the console.log calls is as follows: Argument 0 = 1 Argument 1 = 2 Argument 2 = 3 Argument 3 = 4 Argument 0 = first argument So, how do we express a variable number of function parameters in TypeScript? The answer is to use what are called rest parameters, or the three dots (...) syntax. Here is the equivalent testParams function, expressed in TypeScript: function testParams(...argArray: number[]) { if (argArray.length > 0) { for (var i = 0; i < argArray.length; i++) { console.log("argArray " + i + " = " + argArray[i]); console.log("arguments " + i + " = " + arguments[i]); } } } testParams(1); testParams(1, 2, 3, 4); testParams("one", "two"); Note the use of the ...argArray: number[] syntax for our testParams function. This syntax is telling the TypeScript compiler that the function can accept any number of arguments. This means that our usages of this function, that is, calling the function with either testParams(1) or testParams(1,2,3,4), will both compile correctly. In this version of the testParams function, we have added two console.log lines, just to show that the arguments array can be accessed by either the named rest parameter, argArray[i], or through the normal JavaScript array, arguments[i]. 
The last line in this sample will, however, generate a compile error, as we have defined the rest parameter to only accept numbers, and we are attempting to call the function with strings. The subtle difference between using argArray and arguments is the inferred type of the argument. Since we have explicitly specified that argArray is of type number, TypeScript will treat any item of the argArray array as a number. However, the internal arguments array does not have an inferred type, and so will be treated as the any type. We can also combine normal parameters along with rest parameters in a function definition, as long as the rest parameters are the last to be defined in the parameter list, as follows: function testParamsTs2(arg1: string, arg2: number, ...argArray: number[]) { } Here, we have two normal parameters named arg1 and arg2 and then an argArray rest parameter. Mistakenly placing the rest parameter at the beginning of the parameter list will generate a compile error. 
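As a small working sketch of this pattern (the function and parameter names here are my own illustrations, not from the text), a normal parameter can be combined with a rest parameter like so:

```typescript
// label is a normal parameter; the rest parameter values collects
// any remaining number arguments into an array.
function sumWithLabel(label: string, ...values: number[]): string {
    var total = 0;
    // values is typed as number[], so each item is treated as a number.
    for (var i = 0; i < values.length; i++) {
        total += values[i];
    }
    return label + "=" + total;
}

console.log(sumWithLabel("sum", 1, 2, 3)); // "sum=6"
console.log(sumWithLabel("empty"));        // "empty=0"
```

Note that calling sumWithLabel with no trailing numbers is still valid, because a rest parameter may receive zero arguments.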
It assumes that the callback argument is in fact a function, and invokes it. It also passes the initialText variable to the callback function. If we run this code, we will get two messages logged to the console, as follows: inside CallingFunction inside myCallback myText But what happens if we do not pass a function as a callback? There is nothing in the preceding code that signals to us that the second parameter of callingFunction must be a function. If we inadvertently called the callingFunction function with a string, instead of a function as the second parameter as follows: callingFunction("myText", "this is not a function"); We would get a JavaScript runtime error: 0x800a138a - JavaScript runtime error: Function expected Defensive-minded programmers, however, would first check whether the callback parameter was in fact a function before invoking it, as follows: function callingFunction(initialText, callback) { console.log("inside CallingFunction"); if (typeof callback === "function") { callback(initialText); } else { console.log(callback + " is not a function"); } } callingFunction("myText", "this is not a function"); Note the third line of this code snippet, where we check the type of the callback variable before invoking it. If it is not a function, we then log a message to the console. On the last line of this snippet, we are executing the callingFunction, but this time passing a string as the second parameter. The output of the code snippet would be: inside CallingFunction this is not a function is not a function When using function callbacks, then, JavaScript programmers need to do two things; firstly, understand which parameters are in fact callbacks, and secondly, code around the invalid use of callback functions. Function signatures The TypeScript "syntactic sugar" that enforces strong typing is not only intended for variables and types, but for function signatures as well. 
What if we could document our JavaScript callback functions in code, and then warn users of our code when they are passing the wrong type of parameter to our functions? TypeScript does this through function signatures. A function signature uses the fat arrow syntax, () =>, to define what the function should look like. Let's re-write the preceding JavaScript sample in TypeScript: function myCallBack(text: string) { console.log("inside myCallback " + text); } function callingFunction(initialText: string, callback: (text: string) => void) { callback(initialText); } callingFunction("myText", myCallBack); callingFunction("myText", "this is not a function"); Our first function definition, myCallBack, now strongly types the text parameter to be of type string. Our callingFunction function has two parameters: initialText, which is of type string, and callback, which now has the new function signature syntax. Let's look at this function signature more closely: callback: (text: string) => void What this function definition is saying, is that the callback argument is typed (by the : syntax) to be a function, using the fat arrow syntax () =>. Additionally, this function takes a parameter named text that is of type string. To the right of the fat arrow syntax, we can see a new TypeScript basic type, called void. Void is a keyword to denote that a function does not return a value. So, the callingFunction function will only accept, as its second argument, a function that takes a single string parameter and returns nothing. 
Compiling the preceding code will correctly highlight an error in the last line of the code snippet, where we are passing a string as the second parameter, instead of a callback function: error TS2082: Build: Supplied parameters do not match any signature of call target: Type '(text: string) => void' requires a call signature, but type 'String' lacks one Given the preceding function signature for the callback function, the following code would also generate compile time errors: function myCallBackNumber(arg1: number) { console.log("arg1 = " + arg1); } callingFunction("myText", myCallBackNumber); Here, we are defining a function named myCallBackNumber, that takes a number as its only parameter. When we attempt to compile this code, we will get an error message indicating that the callback parameter, which is our myCallBackNumber function, does not have the correct function signature: Call signatures of types 'typeof myCallBackNumber' and '(text: string) => void' are incompatible. The function signature of myCallBackNumber would actually be (arg1: number) => void, instead of the required (text: string) => void, hence the error. In function signatures, the parameter name (arg1 or text) does not need to be the same. Only the number of parameters, their types, and the return type of the function need to be the same. This is a very powerful feature of TypeScript: defining in code what the signatures of functions should be, and warning users when they do not call a function with the correct parameters. As we saw in our introduction to TypeScript, this is most significant when we are working with third-party libraries. Before we are able to use third-party functions, classes, or objects in TypeScript, we need to define what their function signatures are. These function definitions are put into a special type of TypeScript file, called a declaration file, and saved with a .d.ts extension. 
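As a rough sketch of what such a declaration file might contain (the function names below are hypothetical, not from any real library), a .d.ts file holds signatures only, with no function bodies:

```typescript
// sample.d.ts - a hypothetical declaration file sketch.
// The declare keyword tells the compiler that these functions exist
// elsewhere, in a third-party JavaScript library; no bodies appear here.
declare function thirdPartyLog(message: string): void;

// A callback-accepting function is described with the same
// fat arrow function signature syntax seen above.
declare function thirdPartyFetch(
    url: string,
    success: (data: string) => void
): void;
```

With these declarations in place, TypeScript code that called thirdPartyLog with a number instead of a string would fail to compile, even though the underlying JavaScript library would accept the call at runtime.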
Function callbacks and scope JavaScript uses lexical scoping rules to define the valid scope of a variable. This means that the scope of a variable is defined by its location within the source code. Nested functions have access to variables that are defined in their parent scope. As an example of this, consider the following TypeScript code: function testScope() { var testVariable = "myTestVariable"; function print() { console.log(testVariable); } } console.log(testVariable); This code snippet defines a function named testScope. The variable testVariable is defined within this function. The print function is a child function of testScope, so it has access to the testVariable variable. The last line of the code, however, will generate a compile error, because it is attempting to use the variable testVariable, which is lexically scoped to be valid only inside the body of the testScope function: error TS2095: Build: Could not find symbol 'testVariable'. Simple, right? A nested function has access to variables depending on its location within the source code. This is all well and good, but in large JavaScript projects, there are many different files and many areas of the code are designed to be reusable. Let's take a look at how these scoping rules can become a problem. For this sample, we will use a typical callback scenario: using jQuery to execute an asynchronous call to fetch some data. Consider the following TypeScript code: var testVariable = "testValue"; function getData() { var testVariable_2 = "testValue_2"; $.ajax( { url: "/sample_json.json", success: (data, status, jqXhr) => { console.log("success : testVariable is :" + testVariable); console.log("success : testVariable_2 is :" + testVariable_2); }, error: (message, status, stack) => { alert("error " + message); } } ); } getData(); In this code snippet, we are defining a variable named testVariable and setting its value. We then define a function called getData. 
The getData function sets another variable called testVariable_2, and then calls the jQuery $.ajax function. The $.ajax function is configured with three properties: url, success, and error. The url property is a simple string that points to a sample_json.json file in our project directory. The success property is an anonymous function callback, that simply logs the values of testVariable and testVariable_2 to the console. Finally, the error property is also an anonymous function callback, that simply pops up an alert. This code runs as expected, and the success function will log the following results to the console: success : testVariable is :testValue success : testVariable_2 is :testValue_2 So far so good. Now, let's assume that we are trying to refactor the preceding code, as we are doing quite a few similar $.ajax calls, and want to reuse the success callback function elsewhere. We can easily switch out this anonymous function, and create a named function for our success callback, as follows: var testVariable = "testValue"; function getData() { var testVariable_2 = "testValue_2"; $.ajax( { url: "/sample_json.json", success: successCallback, error: (message, status, stack) => { alert("error " + message); } } ); } function successCallback(data, status, jqXhr) { console.log("success : testVariable is :" + testVariable); console.log("success : testVariable_2 is :" + testVariable_2); } getData(); In this sample, we have created a new function named successCallback with the same parameters as our previous anonymous function. We have also modified the $.ajax call to simply pass this function in, as a callback function for the success property: success: successCallback. If we were to compile this code now, TypeScript would generate an error, as follows: error TS2095: Build: Could not find symbol 'testVariable_2'. 
Since we have changed the lexical scope of our code, by creating a named function, the new successCallback function no longer has access to the variable testVariable_2. It is fairly easy to spot this sort of error in a trivial example, but in larger projects, and when using third-party libraries, these sorts of errors become more difficult to track down. It is, therefore, worth mentioning that when using callback functions, we need to understand this lexical scope. If your code expects a variable to have a value, and it does not have one after a callback, then remember to have a look at the context of the calling code. Function overloads As JavaScript is a dynamic language, we can often call the same function with different argument types. Consider the following JavaScript code: function add(x, y) { return x + y; } console.log("add(1,1)=" + add(1,1)); console.log("add('1','1')=" + add("1", "1")); console.log("add(true,false)=" + add(true, false)); Here, we are defining a simple add function that returns the sum of its two parameters, x and y. The last three lines of this code snippet simply log the result of the add function with different types: two numbers, two strings, and two boolean values. If we run this code, we will see the following output: add(1,1)=2 add('1','1')=11 add(true,false)=1 TypeScript introduces a specific syntax to indicate multiple function signatures for the same function. 
If we were to replicate the preceding code in TypeScript, we would need to use the function overload syntax: function add(arg1: string, arg2: string): string; function add(arg1: number, arg2: number): number; function add(arg1: boolean, arg2: boolean): boolean; function add(arg1: any, arg2: any): any { return arg1 + arg2; } console.log("add(1,1)=" + add(1, 1)); console.log("add('1','1')=" + add("1", "1")); console.log("add(true,false)=" + add(true, false)); The first line of this code snippet specifies a function overload signature for the add function that accepts two strings and returns a string. The second line specifies another function overload that uses numbers, and the third line uses booleans. The fourth line contains the actual body of the function and uses the type specifier of any. The last three lines of this snippet show how we would use these function signatures, and are similar to the JavaScript code that we have been using previously. There are three points of interest in the preceding code snippet. Firstly, none of the function signatures on the first three lines of the snippet actually have a function body. Secondly, the final function definition uses the type specifier of any and eventually includes the function body. The function overload syntax must follow this structure, and the final function signature, which includes the body of the function, must use the any type specifier, as anything else will generate compile-time errors. The third point to note is that we are limiting the add function, by using these function overload signatures, to only accept two parameters that are of the same type. 
If we were to try and mix our types, for example, by calling the function with a boolean and a string, as follows: console.log("add(true,'1')", add(true, "1")); TypeScript would generate compile errors: error TS2082: Build: Supplied parameters do not match any signature of call target: error TS2087: Build: Could not select overload for 'call' expression. This seems to contradict our final function definition though. In the original TypeScript sample, we had a function signature that accepted (arg1: any, arg2: any); so, in theory, this should be called when we try to add a boolean and a string. The TypeScript syntax for function overloads, however, does not allow this. Remember that the function overload syntax must include the use of the any type for the function body, as all overloads eventually call this function body. However, the inclusion of the function overloads above the function body indicates to the compiler that these are the only signatures that should be available to the calling code. Summary To learn more about TypeScript, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Learning TypeScript (https://www.packtpub.com/web-development/learning-typescript) TypeScript Essentials (https://www.packtpub.com/web-development/typescript-essentials)
CSS Properties – Part 1

Packt
09 Feb 2016
13 min read
In this article written by Joshua Johanan, Talha Khan and Ricardo Zea, authors of the book Web Developer's Reference Guide, the authors want to state that "CSS properties are characteristics of an element in a markup language (HTML, SVG, XML, and so on) that control their style and/or presentation. These characteristics are part of a constantly evolving standard from the W3C." (For more resources related to this topic, see here.) A basic example of a CSS property is border-radius: input { border-radius: 100px; } There is an incredible number of CSS properties, and learning them all is virtually impossible. Adding more into this mix, there are CSS properties that need to be vendor prefixed (-webkit-, -moz-, -ms-, and so on), making this equation even more complex. Vendor prefixes are short pieces of CSS that are added to the beginning of the CSS property (and sometimes, CSS values too). These pieces of code are directly related to either the company that makes the browser (the "vendor") or to the CSS engine of the browser. There are four major CSS prefixes: -webkit-, -moz-, -ms-, and -o-. They are explained here: -webkit-: This references Safari's engine, Webkit (Google Chrome and Opera used this engine in the past as well) -moz-: This stands for Mozilla, which creates Firefox -ms-: This stands for Microsoft, which creates Internet Explorer -o-: This stands for Opera, but only targets old versions of the browser Google Chrome and Opera both support the -webkit- prefix. However, these two browsers do not use the Webkit engine anymore. Their engine is called Blink and is developed by Google. A basic example of a prefixed CSS property is column-gap: .column { -webkit-column-gap: 5px; -moz-column-gap: 5px; column-gap: 5px; } Trying to memorize which CSS properties need to be prefixed is futile. That's why it's important to keep a constant eye on CanIUse.com. 
However, it's also important to automate the prefixing process with tools such as Autoprefixer or -prefix-free, or mixins in preprocessors, and so on. Vendor prefixing isn't in the scope of this book, though, so the properties we'll discuss are without any vendor prefixes. If you want to learn more about vendor prefixes, you can visit Mozilla Developer Network (MDN) at http://tiny.cc/mdn-vendor-prefixes. Let's get the CSS properties reference rolling. Animation Unlike the old days of Flash, where creating animations required third-party applications and plugins, today we can accomplish practically the same things with a lot less overhead, better performance, and greater scalability, all through CSS only. Forget plugins and third-party software! All we need is a text editor, some imagination, and a bit of patience to wrap our heads around some of the animation concepts CSS brings to our plate. Base markup and CSS Before we dive into all the animation properties, we will use the following markup and animation structure as our base: HTML: <div class="element"></div> CSS: .element { width: 300px; height: 300px; } @keyframes fadingColors { 0% { background: red; } 100% { background: black; } } In the examples, we will only see the .element rule since the HTML and @keyframes fadingColors will remain the same. The @keyframes declaration block is a custom animation that can be applied to any element. When applied, the element's background will go from red to black. Ok, let's do this. animation-name The animation-name CSS property is the name of the @keyframes at-rule that we want to execute, and it looks like this: animation-name: fadingColors; Description In the HTML and CSS base example, our @keyframes at-rule had an animation where the background color went from red to black. The name of that animation is fadingColors. 
So, we can call the animation like this: CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; } This is a valid rule using the longhand. There are clearly no issues with it at all. The thing is that the animation won't run unless we add animation-duration to it. animation-duration The animation-duration CSS property defines the amount of time the animation will take to complete a cycle, and it looks like this: animation-duration: 2s; Description We can specify the units either in seconds using s or in milliseconds using ms. Specifying a unit is required. Specifying a value of 0s means that the animation should actually never run. However, since we do want our animation to run, we will use the following lines of code: CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; } As mentioned earlier, this will make a box go from its red background to black in 2 seconds, and then stop. animation-iteration-count The animation-iteration-count CSS property defines the number of times the animation should be played, and it looks like this: animation-iteration-count: infinite; Description There are two values: infinite and a number, such as 1, 3, or 0.5. Negative numbers are not allowed. Add the following code to the prior example: CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; } This will make a box go from its red background to black, start over again with the red background and go to black, infinitely. animation-direction The animation-direction CSS property defines the direction in which the animation should play in each cycle, and it looks like this: animation-direction: alternate; Description There are four values: normal, reverse, alternate, and alternate-reverse. normal: It makes the animation play forward. This is the default value. reverse: It makes the animation play backward. 
alternate: It makes the animation play forward in the first cycle, then backward in the next cycle, then forward again, and so on. In addition, timing functions are affected, so if we have ease-out, it gets replaced by ease-in when played in reverse. We'll look at these timing functions in a minute. alternate-reverse: It's the same thing as alternate, but the animation starts backward, from the end. In our current example, we have a continuous animation. However, the background color has a "hard stop" when going from black (end of the animation) to red (start of the animation). Let's create a more "fluid" animation by making the black background fade into red and then red into black without any hard stops. Basically, we are trying to create a "pulse-like" effect: CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; animation-direction: alternate; } animation-delay The animation-delay CSS property allows us to define when exactly an animation should start. This means that as soon as the animation has been applied to an element, it will obey the delay before it starts running. It looks like this: animation-delay: 3s; Description We can specify the units either in seconds using s or in milliseconds using ms. Specifying a unit is required. Negative values are allowed. Take into consideration that a negative value makes the animation start right away, but partway through its cycle, as if it had already been running for that amount of time. Use negative values with caution. CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; animation-direction: alternate; animation-delay: 3s; } This will make the animation start after 3 seconds have passed. animation-fill-mode The animation-fill-mode CSS property defines which values are applied to an element before and after the animation. 
Basically, it defines the styles outside the time during which the animation is being executed. It looks like this: animation-fill-mode: none; Description There are four values: none, forwards, backwards, and both. none: No styles are applied before or after the animation. forwards: The animated element will retain the styles of the last keyframe. This is the most used value. backwards: The animated element will retain the styles of the first keyframe, and these styles will remain during the animation-delay period. This is very likely the least used value. both: The animated element will retain the styles of the first keyframe before starting the animation and the styles of the last keyframe after the animation has finished. In many cases, this is almost the same as using forwards. The prior properties are better used in animations that have an end and stop. In our example, we're using a fading/pulsating animation, so the best property to use is none. CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; animation-direction: alternate; animation-delay: 3s; animation-fill-mode: none; } animation-play-state The animation-play-state CSS property defines whether an animation is running or paused, and it looks like this: animation-play-state: running; Description There are two values: running and paused. These values are self-explanatory. CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; animation-direction: alternate; animation-delay: 3s; animation-fill-mode: none; animation-play-state: running; } In this case, defining animation-play-state as running is redundant, but I'm listing it for purposes of the example. 
animation-timing-function The animation-timing-function CSS property defines how an animation's speed should progress throughout its cycles, and it looks like this: animation-timing-function: ease-out; There are five predefined values, also known as easing functions, for the Bézier curve (we'll see what the Bézier curve is in a minute): ease, ease-in, ease-out, ease-in-out, and linear. ease The ease function sharply accelerates at the beginning and starts slowing down towards the middle of the cycle. Its syntax is as follows: animation-timing-function: ease; ease-in The ease-in function starts slowly, accelerating until the animation sharply ends. Its syntax is as follows: animation-timing-function: ease-in; ease-out The ease-out function starts quickly and gradually slows down towards the end: animation-timing-function: ease-out; ease-in-out The ease-in-out function starts slowly and gets fast in the middle of the cycle. It then starts slowing down towards the end. Its syntax is as follows: animation-timing-function: ease-in-out; linear The linear function has constant speed. No accelerations of any kind happen. Its syntax is as follows: animation-timing-function: linear; Now, the easing functions are built on a curve named the Bézier curve and can be called using the cubic-bezier() function or the steps() function. cubic-bezier() The cubic-bezier() function allows us to create custom acceleration curves. Most use cases can benefit from the already defined easing functions we just mentioned (ease, ease-in, ease-out, ease-in-out and linear), but if you're feeling adventurous, cubic-bezier() is your best bet. Here's what a Bézier curve looks like: Parameters The cubic-bezier() function takes four parameters, as follows: animation-timing-function: cubic-bezier(x1, y1, x2, y2); X and Y represent the x and y axes. The numbers 1 and 2 after each axis represent the control points. 
1 represents the control point starting on the lower left, and 2 represents the control point on the upper right. Description Let's represent all five predefined easing functions with the cubic-bezier() function: ease: animation-timing-function: cubic-bezier(.25, .1, .25, 1); ease-in: animation-timing-function: cubic-bezier(.42, 0, 1, 1); ease-out: animation-timing-function: cubic-bezier(0, 0, .58, 1); ease-in-out: animation-timing-function: cubic-bezier(.42, 0, .58, 1); linear: animation-timing-function: cubic-bezier(0, 0, 1, 1); Not sure about you, but I prefer to use the predefined values. Now, we can start tweaking and testing each value to the decimal, save it, and wait for the live refresh to do its thing. However, that's too much time wasted testing if you ask me. The amazing Lea Verou created the best web app to work with Bézier curves. You can find it at cubic-bezier.com. This is by far the easiest way to work with Bézier curves. I highly recommend this tool. The Bézier curve image shown earlier was taken from the cubic-bezier.com website. Let's add animation-timing-function to our example: CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; animation-direction: alternate; animation-delay: 3s; animation-fill-mode: none; animation-play-state: running; animation-timing-function: ease-out; } steps() The steps() timing function isn't very widely used, but knowing how it works is a must if you're into CSS animations. It looks like this: animation-timing-function: steps(6); This function is very helpful when we want our animation to take a defined number of steps. 
After adding a steps() function to our current example, it looks like this: CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; animation-direction: alternate; animation-delay: 3s; animation-fill-mode: none; animation-play-state: running; animation-timing-function: steps(6); } This makes the box take six steps to fade from red to black and vice versa. Parameters There are two optional parameters that we can use with the steps() function: start and end. start: This will make the animation run at the beginning of each step, so the animation starts right away. end: This will make the animation run at the end of each step. This is the default value if nothing is declared, and it makes the animation have a short delay before it starts. Description After adding the parameters to the CSS code, it looks like this: CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; animation-direction: alternate; animation-delay: 3s; animation-fill-mode: none; animation-play-state: running; animation-timing-function: steps(6, start); } Granted, it is not very noticeable in our example. However, you can see it more clearly in this pen from Louis Lazarus by hovering over the boxes, at http://tiny.cc/steps-timing-function. Here's an image taken from Stephen Greig's article on Smashing Magazine, Understanding CSS Timing Functions, that explains start and end in the steps() function: Also, there are two predefined values for the steps() function: step-start and step-end. step-start: This is the same thing as steps(1, start); every change happens at the beginning of each interval. step-end: This is the same thing as steps(1, end); every change happens at the end of each interval.
CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; animation-direction: alternate; animation-delay: 3s; animation-fill-mode: none; animation-play-state: running; animation-timing-function: step-end; } animation The animation CSS property is the shorthand for animation-name, animation-duration, animation-timing-function, animation-delay, animation-iteration-count, animation-direction, animation-fill-mode, and animation-play-state. It looks like this: animation: fadingColors 2s; Description For a simple animation to work, we need at least two properties: name and duration. If you feel overwhelmed by all these properties, relax. Let me break them down for you in simple bits. Using the animation longhand, the code would look like this: CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; } Using the animation shorthand, which is the recommended syntax, the code would look like this: CSS: .element { width: 300px; height: 300px; animation: fadingColors 2s; } This will make a box go from its red background to black in 2 seconds, and then stop. Final CSS code Let's see how all the animation properties look in one final example showing both the longhand and shorthand styles. Longhand style .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; animation-direction: alternate; animation-delay: 3s; animation-fill-mode: none; animation-play-state: running; animation-timing-function: ease-out; } Shorthand style .element { width: 300px; height: 300px; animation: fadingColors 2s infinite alternate 3s none running ease-out; } The animation-duration property will always be considered first rather than animation-delay. All other properties can appear in any order within the declaration. You can find a demo in CodePen at http://tiny.cc/animation. 
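Every example in this article animates a fadingColors animation whose keyframes the excerpt never shows. Based on the red-to-black fade it describes, an assumed minimal definition would look like this (the keyframe values are an illustration, not taken from the original):

```css
/* Assumed keyframes for the fadingColors animation used throughout:
   fades the element's background from red to black. */
@keyframes fadingColors {
  from { background-color: red; }
  to   { background-color: black; }
}
```

Pairing this rule with any of the .element declarations above completes the animation.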
Summary In this article, we learned how to add animations to a web project, and we looked at each of the animation properties in detail, along with descriptions of how they can be used. Resources for Article: Further resources on this subject: Using JavaScript with HTML[article] Welcome to JavaScript in the full stack[article] A Typical JavaScript Project[article]

Packt
15 Jan 2016
5 min read

Introducing Sails.js

In this article by Shahid Shaikh, author of the book Sails.js Essentials, you will learn a few basics of Sails.js. Sails.js is a modern, production-ready Node.js framework for developing web applications following the MVC pattern. If you are not looking to reinvent the wheel as we do in the MongoDB, Express.js, AngularJS, and Node.js (MEAN) stack, and you would rather focus on business logic, then Sails.js is the answer. Sails.js is a powerful, enterprise-ready framework: you can write your business logic and deploy the application with the surety that the application won't fail due to other factors. Sails.js uses Express.js as a web framework and Socket.io to handle web socket messages. Both are integrated and coupled into Sails.js, so you don't need to install and configure them separately. Sails.js also supports all the popular databases, such as MySQL, MongoDB, PostgreSQL, and so on. It also comes with an autogenerated API feature that lets you create APIs on the go. (For more resources related to this topic, see here.) Brief about MVC We know that Model-View-Controller (MVC) is a software architecture pattern coined by Smalltalk engineers. MVC separates the application into three internal components, and data is passed between them. Each component is responsible for its own task and passes its result to the next component. This separation provides a great opportunity for code reusability and loose coupling. MVC components The following are the components of the MVC architecture: Model The main component of MVC is the model. The model represents knowledge. It could be a single object or a nested structure of objects. The model directly manages the data (and stores it), as well as the logic and rules of the application. View The view is the visual representation of the model. The view takes data from the model and presents it (in a browser, a console, and so on). The view gets updated as soon as the model is changed. An ideal MVC application must have a system to notify other components about the changes.
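The separation described here can be sketched as a toy in plain JavaScript. This is purely illustrative — all object and method names below are invented for the sketch, and this is not how Sails.js itself wires MVC:

```javascript
// A toy MVC illustration in plain JavaScript (not Sails.js code):
// the model owns the data, the view renders it, and the controller
// accepts input and coordinates the two.
var model = {
  books: [],
  add: function (title) { this.books.push(title); } // the model manages its own data
};

var view = {
  render: function (books) { return 'Books: ' + books.join(', '); } // visual representation
};

var controller = {
  addBook: function (title) {        // accept input...
    model.add(title);                // ...command the model to change state...
    return view.render(model.books); // ...then ask the view to present it
  }
};

console.log(controller.addBook('Sails.js Essentials')); // Books: Sails.js Essentials
```

Notice that the view never touches the model's storage directly and the model knows nothing about rendering — that is the loose coupling MVC is after.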
In a web application, the view consists of the HTML Embedded JavaScript (EJS) pages that we present in a web browser. Controller As the name implies, the task of a controller is to accept input and convert it into a proper command for the model or view. The controller can send commands to the model to make changes or update its state, and it can also send commands to the view to update the information. For example, consider Google Docs as an MVC application: the view is the screen where you type. At defined intervals, the information is automatically saved to the Google servers — the controller notifies the model (the Google backend server) to update the changes. Installing Sails.js Make sure that you have the latest versions of Node.js and npm installed on your system before installing Sails.js. You can install it using the following command: npm install -g sails You can then use Sails.js as a command-line tool, as shown in the following: Creating a new project You can create a new project using the Sails.js command-line tool. The command is as follows: sails create appName Once the project is created, you need to install the dependencies that it needs. You can do so by running the following command: npm install Adding database support Sails.js provides object-relational mapping (ORM) drivers for widely-used databases such as MySQL, MongoDB, and PostgreSQL. In order to add these to your project, you need to install the respective packages, such as sails-mysql for MySQL, sails-mongo for MongoDB, and so on. Once a package is added, you need to change the connections.js and models.js files located in the /config folder. Add the connection parameters, such as the host, user, password, and database name, in the connections.js file.
For example, for MySQL, add the following: module.exports.connections = {   mysqlAdapter: {     adapter: 'sails-mysql',     host: 'localhost',     user: 'root',     password: '',     database: 'sampleDB'   } }; In the models.js file, point the connection parameter at this adapter. Here is a snippet for that: module.exports.models = {   connection: 'mysqlAdapter' }; Now, Sails.js will communicate with MySQL using this connection. Adding a grunt task Sails.js uses Grunt as its default task builder and provides an effective way to add or remove tasks. If you take a look at the tasks folder in the Sails.js project directory, you will see that there are config and register folders, which hold the tasks and register them with the Grunt runner. In order to add a new task, you can create a new file in the /config folder and add the grunt task using the following snippet as a starting point: module.exports = function(grunt) {   // Your grunt task code }; Once done, you can register it with the default task runner, or create a new file in the /register folder and add the task using the following code: module.exports = function (grunt) {   grunt.registerTask('task-name', [ 'your custom task']); }; Run this task using grunt <task-name>. Summary In this article, you learned that you can develop a rich web application very effectively with Sails.js, as you don't have to do a lot of extra work for configuration and setup. Sails.js also provides an autogenerated REST API and built-in WebSocket integration in each route, which will help you develop real-time applications in an easy way. Resources for Article: Further resources on this subject: Using Socket.IO and Express together [article] Parallel Programming Patterns [article] Planning Your Site in Adobe Muse [article]

Packt
15 Jan 2016
21 min read

ECMAScript 6 Standard

In this article by Ved Antani, the author of the book Mastering JavaScript, we will learn about the ECMAScript 6 standard. ECMAScript 6 (ES6, also known as ES2015) is the latest version of the ECMAScript standard. The standard is evolving, and the last round of modifications was done in June 2015. ES2015 is significant in its scope, and its recommendations are being implemented in most JavaScript engines. This is great news. ES6 introduces a huge number of features that add syntactic forms and helpers that enrich the language significantly. The pace at which ECMAScript standards keep evolving makes it a bit difficult for browsers and JavaScript engines to support new features. It is also a practical reality that most programmers have to write code that can be supported by older browsers. The notorious Internet Explorer 6 was once the most widely used browser in the world. Making sure that your code is compatible with the largest number of browsers is a daunting task. So, while you may want to jump to the next set of awesome ES6 features, you will have to consider the fact that several of them may not be supported by the most popular browsers or JavaScript frameworks. This may look like a dire scenario, but things are not that dark. Node.js uses the latest version of the V8 engine, which supports the majority of ES6 features. Facebook's React also supports them. Mozilla Firefox and Google Chrome are two of the most used browsers today, and they support a majority of ES6 features. To avoid such pitfalls and unpredictability, certain solutions have been proposed. The most useful among these are polyfills/shims and transpilers. (For more resources related to this topic, see here.) ES6 syntax changes ES6 brings significant syntactic changes to JavaScript. These changes need careful study and some getting used to. In this section, we will study some of the most important syntax changes and see how you can use Babel to start using these newer constructs in your code right away.
Block scoping We discussed earlier that the variables in JavaScript are function-scoped. Variables created in a nested scope are available to the entire function. Several programming languages provide you with a default block scope where any variable declared within a block of code (usually delimited by {}) is scoped (available) only within this block. To achieve a similar block scope in JavaScript, a prevalent method is to use immediately-invoked function expressions (IIFE). Consider the following example: var a = 1; (function blockscope(){     var a = 2;     console.log(a);   // 2 })(); console.log(a);       // 1 Using the IIFE, we are creating a block scope for the a variable. When a variable is declared in the IIFE, its scope is restricted within the function. This is the traditional way of simulating the block scope. ES6 supports block scoping without using IIFEs. In ES6, you can enclose any statement(s) in a block defined by {}. Instead of using var, you can declare a variable using let to define the block scope. The preceding example can be rewritten using ES6 block scopes as follows: "use strict"; var a = 1; {   let a = 2;   console.log( a ); // 2 } console.log( a ); // 1 Using standalone brackets {} may seem unusual in JavaScript, but this convention is fairly common to create a block scope in many languages. The block scope kicks in other constructs such as if { } or for (){ } as well. When you use a block scope in this way, it is generally preferred to put the variable declaration on top of the block. One difference between variables declared using var and let is that variables declared with var are attached to the entire function scope, while variables declared using let are attached to the block scope and they are not initialized until they appear in the block. 
Hence, you cannot access a variable declared with let earlier than its declaration, whereas with variables declared using var, the ordering doesn't matter: function fooey() {   console.log(foo); // ReferenceError   let foo = 5000; } One specific use of let is in for loops. When we use a variable declared using var in a for loop, it is created in the global or parent scope. We can create a block-scoped variable in the for loop scope by declaring a variable using let. Consider the following example: for (let i = 0; i<5; i++) {   console.log(i); } console.log(i); // i is not defined As i is created using let, it is scoped in the for loop. You can see that the variable is not available outside the scope. One more use of block scopes in ES6 is the ability to create constants. Using the const keyword, you can create constants in the block scope. Once the value is set, you cannot change the value of such a constant: if(true){   const a=1;   console.log(a);   a=100;  ///"a" is read-only, you will get a TypeError } A constant has to be initialized while being declared. The same block scope rules apply to functions also. When a function is declared inside a block, it is available only within that scope. Default parameters Defaulting is very common. You always set some default value to parameters passed to a function or variables that you initialize. You may have seen code similar to the following: function sum(a,b){   a = a || 0;   b = b || 0;   return (a+b); } console.log(sum(9,9)); //18 console.log(sum(9));   //9 Here, we are using || (OR operator) to default variables a and b to 0 if no value was supplied when this function was invoked. With ES6, you have a standard way of defaulting function arguments. The preceding example can be rewritten as follows: function sum(a=0, b=0){   return (a+b); } console.log(sum(9,9)); //18 console.log(sum(9));   //9 You can pass any valid expression or function call as part of the default parameter list. 
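Returning to block scoping for a moment, one of the most common places the difference between var and let shows up in practice is when closures are created inside a loop:

```javascript
// With var, every callback closes over the single function-scoped i,
// so after the loop finishes they all see its final value.
var fnsVar = [];
for (var i = 0; i < 3; i++) {
  fnsVar.push(function () { return i; });
}
console.log(fnsVar.map(function (f) { return f(); })); // [ 3, 3, 3 ]

// With let, each iteration gets a fresh block-scoped binding of j,
// so every callback remembers the value from its own iteration.
var fnsLet = [];
for (let j = 0; j < 3; j++) {
  fnsLet.push(function () { return j; });
}
console.log(fnsLet.map(function (f) { return f(); })); // [ 0, 1, 2 ]
```

Before let, the usual workaround was to wrap the loop body in an IIFE, just as we did to simulate block scope earlier.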
Spread and rest ES6 has a new operator, …. Based on how it is used, it is called either spread or rest. Let's look at a trivial example: function print(a, b){   console.log(a,b); } print(...[1,2]);  //1,2 What's happening here is that when you add … before an array (or an iterable) it spreads the element of the array in individual variables in the function parameters. The a and b function parameters were assigned two values from the array when it was spread out. Extra parameters are ignored while spreading an array: print(...[1,2,3 ]);  //1,2 This would still print 1 and 2 because there are only two functional parameters available. Spreads can be used in other places also, such as array assignments: var a = [1,2]; var b = [ 0, ...a, 3 ]; console.log( b ); //[0,1,2,3] There is another use of the … operator that is the very opposite of the one that we just saw. Instead of spreading the values, the same operator can gather them into one: function print (a,...b){   console.log(a,b); } console.log(print(1,2,3,4,5,6,7));  //1 [2,3,4,5,6,7] In this case, the variable b takes the rest of the values. The a variable took the first value as 1 and b took the rest of the values as an array. Destructuring If you have worked on a functional language such as Erlang, you will relate to the concept of pattern matching. Destructuring in JavaScript is something very similar. Destructuring allows you to bind values to variables using pattern matching. Consider the following example: var [start, end] = [0,5]; for (let i=start; i<end; i++){   console.log(i); } //prints - 0,1,2,3,4 We are assigning two variables with the help of array destructuring: var [start, end] = [0,5]; As shown in the preceding example, we want the pattern to match when the first value is assigned to the first variable (start) and the second value is assigned to the second variable (end). 
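One handy consequence of array destructuring is swapping two variables without a temporary one, and destructured variables can also declare defaults that apply when the matched value is undefined:

```javascript
// Swapping two variables with array destructuring - no temporary needed.
let x = 1, y = 2;
[x, y] = [y, x];
console.log(x, y); // 2 1

// Defaults in destructuring: b is missing from the object on the right,
// so it falls back to its declared default value.
var { a = 'anonymous', b = 0 } = { a: 'Albert' };
console.log(a, b); // Albert 0
```

The same default mechanism works for function parameters that are destructured, which combines nicely with the default parameters we saw above.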
Consider the following snippet to see how the destructuring of array elements works: function fn() {   return [1,2,3]; } var [a,b,c]=fn(); console.log(a,b,c); //1 2 3 //We can skip one of them var [d,,f]=fn(); console.log(d,f);   //1 3 //Rest of the values are not used var [e,] = fn(); console.log(e);     //1 Let's discuss how objects' destructuring works. Let's say that you have a function f that returns an object as follows: function f() {   return {     a: 'a',     b: 'b',     c: 'c'   }; } When we destructure the object being returned by this function, we can use the similar syntax as we saw earlier; the difference is that we use {} instead of []: var { a: a, b: b, c: c } = f(); console.log(a,b,c); //a b c Similar to arrays, we use pattern matching to assign variables to their corresponding values returned by the function. There is an even shorter way of writing this if you are using the same variable as the one being matched. The following example would do just fine: var { a,b,c } = f(); However, you would mostly be using a different variable name from the one being returned by the function. It is important to remember that the syntax is source: destination and not the usual destination: source. Carefully observe the following example: //this is target: source - which is incorrect var { x: a, y: b, z: c } = f(); console.log(a,b,c); //undefined undefined undefined - the returned object has no x, y or z properties //this is source: target - correct var { a: x, b: y, c: z } = f(); console.log(x,y,z); // a b c This is the opposite of the target = source way of assigning values and hence will take some time in getting used to. Object literals Object literals are everywhere in JavaScript. You would think that there is no scope for improvement there. However, ES6 wants to improve this too.
ES6 introduces several shortcuts to create a concise syntax around object literals: var firstname = "Albert", lastname = "Einstein",   person = {     firstname: firstname,     lastname: lastname   }; If you intend to use the same property name as the variable that you are assigning, you can use the concise property notation of ES6: var firstname = "Albert", lastname = "Einstein",   person = {     firstname,     lastname   }; Similarly, suppose you are assigning functions to properties as follows: var person = {   getName: function(){     // ..   },   getAge: function(){     //..   } } Instead of the preceding lines, you can say the following: var person = {   getName(){     // ..   },   getAge(){     //..   } } Template literals I am sure you have done things like the following: function SuperLogger(level, clazz, msg){   console.log(level+": Exception happened in class:"+clazz+" -     Exception :"+ msg); } This is a very common way of replacing variable values to form a string literal. ES6 provides you with a new type of string literal using the backtick (`) delimiter. You can use string interpolation to put placeholders in a template string literal. The placeholders will be parsed and evaluated. The preceding example can be rewritten as follows: function SuperLogger(level, clazz, msg){   console.log(`${level} : Exception happened in class: ${clazz} -     Exception : ${msg}`); } We are using `` around a string literal. Within this literal, any expression of the ${..} form is parsed immediately. This parsing is called interpolation. While parsing, the variable's value replaces the placeholder within ${}. The resulting string is just a normal string with the placeholders replaced with actual variable values. With string interpolation, you can split a string into multiple lines also, as shown in the following code (very similar to Python): var quote = `Good night, good night!
Parting is such sweet sorrow, that I shall say good night till it be morrow.`; console.log( quote ); You can use function calls or valid JavaScript expressions as part of the string interpolation: function sum(a,b){   console.log(`The sum seems to be ${a + b}`); } sum(1,2); //The sum seems to be 3 The final variation of the template strings is called tagged template string. The idea is to modify the template string using a function. Consider the following example: function emmy(key, ...values){   console.log(key);   console.log(values); } let category="Best Movie"; let movie="Adventures in ES6"; emmy`And the award for ${category} goes to ${movie}`;   //["And the award for "," goes to ",""] //["Best Movie","Adventures in ES6"] The strangest part is when we call the emmy function with the template literal. It's not a traditional function call syntax. We are not writing emmy(); we are just tagging the literal with the function. When this function is called, the first argument is an array of all the plain strings (the string between interpolated expressions). The second argument is the array where all the interpolated expressions are evaluated and stored. Now what this means is that the tag function can actually change the resulting template tag: function priceFilter(s, ...v){   //Bump up discount   return s[0]+ (v[0] + 5); } let default_discount = 20; let greeting = priceFilter `Your purchase has a discount of   ${default_discount} percent`; console.log(greeting);  //Your purchase has a discount of 25 As you can see, we modified the value of the discount in the tag function and returned the modified values. Maps and Sets ES6 introduces four new data structures: Map, WeakMap, Set, and WeakSet. We discussed earlier that objects are the usual way of creating key-value pairs in JavaScript. The disadvantage of objects is that you cannot use non-string values as keys. 
The following snippets demonstrate how Maps are created in ES6: let m = new Map(); let s = { 'seq' : 101 };   m.set('1','Albert'); m.set('MAX', 99); m.set(s,'Einstein');   console.log(m.has('1')); //true console.log(m.get(s));   //Einstein console.log(m.size);     //3 m.delete(s); m.clear(); You can initialize the map while declaring it: let m = new Map([   [ 1, 'Albert' ],   [ 2, 'Douglas' ],   [ 3, 'Clive' ], ]); If you want to iterate over the entries in the Map, you can use the entries() function that will return you an iterator. You can iterate through all the keys using the keys() function and you can iterate through the values of the Map using values() function: let m2 = new Map([     [ 1, 'Albert' ],     [ 2, 'Douglas' ],     [ 3, 'Clive' ], ]); for (let a of m2.entries()){   console.log(a); } //[1,"Albert"] [2,"Douglas"][3,"Clive"] for (let a of m2.keys()){   console.log(a); } //1 2 3 for (let a of m2.values()){   console.log(a); } //Albert Douglas Clive A variation of JavaScript Maps is a WeakMap—a WeakMap does not prevent its keys from being garbage-collected. Keys for a WeakMap must be objects and the values can be arbitrary values. While a WeakMap behaves in the same way as a normal Map, you cannot iterate through it and you can't clear it. There are reasons behind these restrictions. As the state of the Map is not guaranteed to remain static (keys may get garbage-collected), you cannot ensure correct iteration. There are not many cases where you may want to use WeakMap. The most uses of a Map can be written using normal Maps. While Maps allow you to store arbitrary values, Sets are a collection of unique values. Sets have similar methods as Maps; however, set() is replaced with add(), and the get() method does not exist. The reason that the get() method is not there is because a Set has unique values, so you are interested in only checking whether the Set contains a value or not. 
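Because a Set only keeps unique values, combining it with the spread operator we saw earlier gives a one-line way to de-duplicate an array:

```javascript
// De-duplicating an array by round-tripping it through a Set;
// insertion order of the first occurrences is preserved.
var withDuplicates = [1, 2, 2, 'Sunday', 'Sunday', 3];
var unique = [...new Set(withDuplicates)];
console.log(unique); // [ 1, 2, 'Sunday', 3 ]
```

This works because a Set is iterable, so the spread operator can unpack its (unique) values straight back into an array literal.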
Consider the following example: let x = {'first': 'Albert'}; let s = new Set([1,2,'Sunday',x]); //console.log(s.has(x));  //true s.add(300); //console.log(s);  //[1,2,"Sunday",{"first":"Albert"},300]   for (let a of s.entries()){   console.log(a); } //[1,1] //[2,2] //["Sunday","Sunday"] //[{"first":"Albert"},{"first":"Albert"}] //[300,300] for (let a of s.keys()){   console.log(a); } //1 //2 //Sunday //{"first":"Albert"} //300 for (let a of s.values()){   console.log(a); } //1 //2 //Sunday //{"first":"Albert"} //300 The keys() and values() iterators both return a list of the unique values in the Set. The entries() iterator yields a list of entry arrays, where both items of the array are the unique Set values. The default iterator for a Set is its values() iterator. Symbols ES6 introduces a new data type called Symbols. A Symbol is guaranteed to be unique and immutable. Symbols are usually used as an identifier for object properties. They can be considered as uniquely generated IDs. You can create Symbols with the Symbol() factory method—remember that this is not a constructor and hence you should not use a new operator: let s = Symbol(); console.log(typeof s); //symbol Unlike strings, Symbols are guaranteed to be unique and hence help in preventing name clashes. With Symbols, we have an extensibility mechanism that works for everyone. ES6 comes with a number of predefined built-in Symbols that expose various meta behaviors on JavaScript object values. Iterators Iterators have been around in other programming languages for quite some time. They give convenience methods to work with collections of data. ES6 introduces iterators for the same use case. ES6 iterators are objects with a specific interface. Iterators have a next() method that returns an object. The returning object has two properties—value (the next value) and done (indicates whether the last result has been reached). 
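To make this interface concrete, here is a hand-rolled iterable object: any object that exposes a Symbol.iterator method returning an object with next() can be consumed by for..of, spread, and the rest of ES6's iteration machinery:

```javascript
// A custom iterable whose iterator produces the numbers 1 up to limit.
var counter = {
  limit: 3,
  [Symbol.iterator]() {
    var current = 0, limit = this.limit;
    return {
      next() {
        current++;
        return current <= limit
          ? { value: current, done: false }    // more values to produce
          : { value: undefined, done: true };  // iteration has finished
      }
    };
  }
};

for (let n of counter) {
  console.log(n); // logs 1, then 2, then 3
}
```

Each call to the Symbol.iterator method returns a fresh iterator with its own current counter, so the object can be iterated over more than once.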
ES6 also defines an Iterable interface, which describes objects that must be able to produce iterators. Let's look at an array, which is an iterable, and the iterator that it can produce to consume its values: var a = [1,2]; var i = a[Symbol.iterator](); console.log(i.next());      // { value: 1, done: false } console.log(i.next());      // { value: 2, done: false } console.log(i.next());      // { value: undefined, done: true } As you can see, we are accessing the array's iterator via Symbol.iterator() and calling the next() method on it to get each successive element. Both value and done are returned by the next() method call. When you call next() past the last element in the array, you get an undefined value and done: true indicating that you have iterated over the entire array. For..of loops ES6 adds a new iteration mechanism in form of for..of loop, which loops over the set of values produced by an iterator. The value that we iterate over with for..of is an iterable. Let's compare for..of to for..in: var list = ['Sunday','Monday','Tuesday']; for (let i in list){   console.log(i);  //0 1 2 } for (let i of list){   console.log(i);  //Sunday Monday Tuesday } As you can see, using the for  in loop, you can iterate over indexes of the list array, while the for..of loop lets you iterate over the values stored in the list array. Arrow functions One of the most interesting new parts of ECMAScript 6 is arrow functions. Arrow functions are, as the name suggests, functions defined with a new syntax that uses an arrow (=>) as part of the syntax. Let's first see how arrow functions look: //Traditional Function function multiply(a,b) {   return a*b; } //Arrow var multiply = (a,b) => a*b; console.log(multiply(1,2)); //2 The arrow function definition consists of a parameter list (of zero or more parameters and surrounding ( .. ) if there's not exactly one parameter), followed by the => marker, which is followed by a function body. 
The body of the function can be enclosed by { .. } if there's more than one expression in the body. If there's only one expression, and you omit the surrounding { .. }, there's an implied return in front of the expression. There are several variations of how you can write arrow functions. The following are the most commonly used: // single argument, single statement //arg => expression; var f1 = x => console.log("Just X"); f1(); //Just X   // multiple arguments, single statement //(arg1 [, arg2]) => expression; var f2 = (x,y) => x*y; console.log(f2(2,2)); //4   // single argument, multiple statements // arg => { //     statements; // } var f3 = x => {   if(x>5){     console.log(x);   }   else {     console.log(x+5);   } } f3(6); //6   // multiple arguments, multiple statements // ([arg] [, arg]) => { //   statements // } var f4 = (x,y) => {   if(x!=0 && y!=0){     return x*y;   } } console.log(f4(2,2));//4   // with no arguments, single statement //() => expression; var f5 = () => 2*2; console.log(f5()); //4   //IIFE console.log(( x => x * 3 )( 3 )); // 9 It is important to remember that all the characteristics of a normal function parameters are available to arrow functions, including default values, destructuring, and rest parameters. Arrow functions offer a convenient and short syntax, which gives your code a very functional programming flavor. Arrow functions are popular because they offer an attractive promise of writing concise functions by dropping function, return, and { .. } from the code. However, arrow functions are designed to fundamentally solve a particular and common pain point with this-aware coding. In normal ES5 functions, every new function defined its own value of this (a new object in case of a constructor, undefined in strict mode function calls, context object if the function is called as an object method, and so on). 
JavaScript functions always have their own this and this prevents you from accessing the this of, for example, a surrounding method from inside a callback. To understand this problem, consider the following example: function CustomStr(str){   this.str = str; } CustomStr.prototype.add = function(s){   // --> 1   'use strict';   return s.map(function (a){             // --> 2     return this.str + a;                 // --> 3   }); };   var customStr = new CustomStr("Hello"); console.log(customStr.add(["World"])); //Cannot read property 'str' of undefined On the line marked with 3, we are trying to get this.str, but the anonymous function also has its own this, which shadows this from the method from line 1. To fix this in ES5, we can assign this to a variable and use the variable instead: function CustomStr(str){   this.str = str; } CustomStr.prototype.add = function(s){     'use strict';   var that = this;                       // --> 1   return s.map(function (a){             // --> 2     return that.str + a;                 // --> 3   }); };   var customStr = new CustomStr("Hello"); console.log(customStr.add(["World"])); //["HelloWorld] On the line marked with 1, we are assigning this to a variable, that, and in the anonymous function, we are using the that variable, which will have a reference to this from the correct context. ES6 arrow functions have lexical this, meaning that the arrow functions capture the this value of the enclosing context. We can convert the preceding function to an equivalent arrow function as follows: function CustomStr(str){   this.str = str; } CustomStr.prototype.add = function(s){   return s.map((a)=> {     return this.str + a;   }); }; var customStr = new CustomStr("Hello"); console.log(customStr.add(["World"])); //["HelloWorld] Summary In this article, we discussed a few important features being added to the language in ES6. 
It's an exciting collection of new language features and paradigms, and using polyfills and transpilers, you can start with them right away. JavaScript is an ever-growing language, and it is important to understand what the future holds. ES6 features make JavaScript an even more interesting and mature language.

Resources for Article:

Further resources on this subject:

Using JavaScript with HTML [article]
Getting Started with Tableau Public [article]
Façade Pattern – Being Adaptive with Façade [article]
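One closing gotcha related to the implied return described earlier is worth noting (our own example, not from the original article): as soon as you add { .. } around the body, the implied return is gone and you must write return yourself.

```javascript
var doubled = x => x * 2;            // implied return
var broken  = x => { x * 2; };       // braces: no implied return, yields undefined
var fixed   = x => { return x * 2; };

console.log(doubled(4)); // 8
console.log(broken(4));  // undefined
console.log(fixed(4));   // 8
```

This is a frequent source of silent bugs when a concise arrow body is later expanded to multiple statements.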
Packt
14 Jan 2016
6 min read

Setup Routine for an Enterprise Spring Application

In this article by Alex Bretet, author of the book Spring MVC Cookbook, you will learn to install Eclipse for Java EE developers and Java SE 8.

(For more resources related to this topic, see here.)

Introduction

The choice of the Eclipse IDE needs to be discussed, as there is some competition in this domain. Eclipse is popular in the Java community for being an active open source product; it is consequently accessible online to anyone with no restrictions. It also provides, among other things, very good support for web development, particularly for MVC approaches.

Why use the Spring Framework? The Spring Framework and its community have contributed to pulling the Java platform forward for more than a decade. Presenting the whole framework in detail would require more than an article. However, the core functionality, based on the principles of Inversion of Control and Dependency Injection through performant access to the bean repository, allows massive reusability. Staying lightweight, the Spring Framework secures great scaling capabilities and could probably suit all modern architectures.

The following recipe is about downloading and installing the Eclipse IDE for JEE developers and downloading and installing JDK 8 Oracle Hotspot.

Getting ready

This first sequence could appear redundant or unnecessary with regard to your education or experience. By following it, however, you will stay away from unidentified bugs (integration or development). You will also be assured of experiencing the same interfaces as the presented screenshots and figures. Also, because third-party products evolve, you will not have to face the surprise of encountering unexpected screens or windows.

How to do it...

You need to perform the following steps to install the Eclipse IDE:

Download a distribution of the Eclipse IDE for Java EE developers. We will be using an Eclipse Luna distribution in this article.
We recommend that you install this version, which can be found at https://www.eclipse.org/downloads/packages/eclipse-ide-java-ee-developers/lunasr1, so that you can follow along with our guidelines and screenshots completely. Download a Luna distribution for the OS and environment of your choice. The product to be downloaded is not a binary installer but a ZIP archive. If you feel confident enough to use another (more recent) version of the Eclipse IDE for Java EE developers, all of them can be found at https://www.eclipse.org/downloads.

For the upcoming installations, on Windows, a few target locations are suggested to be at the root directory C:\. To avoid permission-related issues, it would be better if your Windows user is configured to be a local administrator. If you can't be part of this group, feel free to target installation directories you have write access to.

Extract the downloaded archive into an eclipse directory:
    If you are on Windows, extract it into the C:\Users\{system.username}\eclipse directory
    If you are using Linux, extract it into the /home/usr/{system.username}/eclipse directory
    If you are using Mac OS X, extract it into the /Users/{system.username}/eclipse directory

Select and download a JDK 8. We suggest that you download the Oracle Hotspot JDK. Hotspot is a performant JVM implementation originally built by Sun Microsystems. Now owned by Oracle, the Hotspot JRE and JDK can be downloaded for free. Choose the product corresponding to your machine through Oracle's website link, http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html. To avoid a compatibility issue later on, do stay consistent with the architecture choice (32 or 64 bits) that you made earlier for the Eclipse archive.

Install the JDK 8. On Windows, perform the following steps:
    Execute the downloaded file and wait until you reach the next installation step.
    On the installation-step window, pay attention to the destination directory and change it to C:\java\jdk1.8.X_XX (with X_XX being the latest version; we will be using jdk1.8.0_25 in this article).
    Also, it won't be necessary to install an external JRE, so uncheck the Public JRE feature.

On Linux/Mac OS, perform the following steps:
    Download the tar.gz archive corresponding to your environment.
    Change the current directory to where you want to install Java. For easier instructions, let's agree on the /usr/java directory.
    Move the downloaded tar.gz archive to this current directory.
    Unpack the archive with the following command line, targeting the name of your archive: tar zxvf jdk-8u25-linux-i586.tar.gz (this example is for a binary archive corresponding to a Linux x86 machine). You must end up with the /usr/java/jdk1.8.0_25 directory structure that contains the subdirectories /bin, /db, /jre, /include, and so on.

How it works…

Eclipse for Java EE developers

We have installed the Eclipse IDE for Java EE developers. Compared to the Eclipse IDE for Java developers, some additional packages come along, such as Java EE Developer Tools, Data Tools Platform, and JavaScript Development Tools. This version is appreciated for its capability to manage development servers as part of the IDE itself, to customize the Project Facets, and to support JPA. The Luna version is officially Java SE 8 compatible; this has been a decisive factor here.

Choosing a JVM

The choice of JVM implementation could be discussed over performance, memory management, garbage collection, and optimization capabilities. There are lots of different JVM implementations, among them a couple of open source solutions such as OpenJDK and IcedTea (RedHat). It really depends on the application requirements.
We have chosen Oracle Hotspot from experience and from reference implementations deployed in production; it can be trusted for a wide range of generic purposes. Hotspot also behaves very well when running Java UI applications, and Eclipse is one of them.

Java SE 8

If you haven't already played with Scala or Clojure, it is time to take the functional programming train! With Java SE 8, lambda expressions reduce the amount of code dramatically while improving readability and maintainability. Lambdas are not the only Java 8 feature we will use, but being probably the most popular one, they must be highlighted, as they have lent considerable weight to this paradigm shift. It is important nowadays to be familiar with these patterns.

Summary

In this article, you learned how to install Eclipse for Java EE developers and Java SE 8.

Resources for Article:

Further resources on this subject:

Support for Developers of Spring Web Flow 2 [article]
Design with Spring AOP [article]
Using Spring JMX within Java Applications [article]
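To make the lambda point above concrete, here is a small standalone sketch (our own example, not from the article) contrasting the pre-Java-8 anonymous inner class with the equivalent lambda expression:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaDemo {
    // A one-method interface, i.e. a functional interface
    interface Transform { String apply(String s); }

    static String transform(String s, Transform t) { return t.apply(s); }

    public static void main(String[] args) {
        // Before Java 8: a verbose anonymous inner class
        String a = transform("spring", new Transform() {
            public String apply(String s) { return s.toUpperCase(); }
        });

        // Java 8: the same intent as a lambda expression
        String b = transform("spring", s -> s.toUpperCase());

        System.out.println(a); // SPRING
        System.out.println(b); // SPRING

        // Lambdas also keep Streams pipelines concise
        List<String> upper = Arrays.asList("eclipse", "hotspot").stream()
                .map(s -> s.toUpperCase())
                .collect(Collectors.toList());
        System.out.println(upper); // [ECLIPSE, HOTSPOT]
    }
}
```

The lambda and the anonymous class produce the same result; the lambda simply drops the ceremony around the single method that matters.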

Packt
13 Jan 2016
12 min read

Forms and Views

In this article by Aidas Bendoraitis, author of the book Web Development with Django Cookbook - Second Edition we will cover the following topics: Passing HttpRequest to the form Utilizing the save method of the form (For more resources related to this topic, see here.) Introduction When the database structure is defined in the models, we need some views to let the users enter data or show the data to the people. In this chapter, we will focus on the views managing forms, the list views, and views generating an alternative output than HTML. For the simplest examples, we will leave the creation of URL rules and templates up to you. Passing HttpRequest to the form The first argument of every Django view is the HttpRequest object that is usually named request. It contains metadata about the request. For example, current language code, current user, current cookies, and current session. By default, the forms that are used in the views accept the GET or POST parameters, files, initial data, and other parameters; however, not the HttpRequest object. In some cases, it is useful to additionally pass HttpRequest to the form, especially when you want to filter out the choices of form fields using the request data or handle saving something such as the current user or IP in the form. In this recipe, we will see an example of a form where a person can choose a user and write a message for them. We will pass the HttpRequest object to the form in order to exclude the current user from the recipient choices; we don't want anybody to write a message to themselves. Getting ready Let's create a new app called email_messages and put it in INSTALLED_APPS in the settings. This app will have no models, just forms and views. How to do it... To complete this recipe, execute the following steps: Add a new forms.py file with the message form containing two fields: the recipient selection and message text. 
Also, this form will have an initialization method, which will accept the request object and then, modify QuerySet for the recipient's selection field: # email_messages/forms.py # -*- coding: UTF-8 -*- from __future__ import unicode_literals from django import forms from django.utils.translation import ugettext_lazy as _ from django.contrib.auth.models import User class MessageForm(forms.Form): recipient = forms.ModelChoiceField( label=_("Recipient"), queryset=User.objects.all(), required=True, ) message = forms.CharField( label=_("Message"), widget=forms.Textarea, required=True, ) def __init__(self, request, *args, **kwargs): super(MessageForm, self).__init__(*args, **kwargs) self.request = request self.fields["recipient"].queryset = self.fields["recipient"].queryset. exclude(pk=request.user.pk) Then, create views.py with the message_to_user() view in order to handle the form. As you can see, the request object is passed as the first parameter to the form, as follows: # email_messages/views.py # -*- coding: UTF-8 -*- from __future__ import unicode_literals from django.contrib.auth.decorators import login_required from django.shortcuts import render, redirect from .forms import MessageForm @login_required def message_to_user(request): if request.method == "POST": form = MessageForm(request, data=request.POST) if form.is_valid(): # do something with the form return redirect("message_to_user_done") else: form = MessageForm(request) return render(request, "email_messages/message_to_user.html", {"form": form} ) How it works... In the initialization method, we have the self variable that represents the instance of the form itself, we also have the newly added request variable, and then we have the rest of the positional arguments (*args) and named arguments (**kwargs). We call the super() initialization method passing all the positional and named arguments to it so that the form is properly initiated. 
We will then assign the request variable to a new request attribute of the form for later access in other methods of the form. Then, we modify the queryset attribute of the recipient's selection field, excluding the current user from the request. In the view, we will pass the HttpRequest object as the first argument in both situations: when the form is posted as well as when it is loaded for the first time. See also The Utilizing the save method of the form recipe Utilizing the save method of the form To make your views clean and simple, it is good practice to move the handling of the form data to the form itself whenever possible and makes sense. The common practice is to have a save() method that will save the data, perform search, or do some other smart actions. We will extend the form that is defined in the previous recipe with the save() method, which will send an e-mail to the selected recipient. Getting ready We will build upon the example that is defined in the Passing HttpRequest to the form recipe. How to do it... To complete this recipe, execute the following two steps: From Django, import the function in order to send an e-mail. Then, add the save() method to MessageForm. It will try to send an e-mail to the selected recipient and will fail quietly if any errors occur: # email_messages/forms.py # -*- coding: UTF-8 -*- from __future__ import unicode_literals from django import forms from django.utils.translation import ugettext, ugettext_lazy as _ from django.core.mail import send_mail from django.contrib.auth.models import User class MessageForm(forms.Form): recipient = forms.ModelChoiceField( label=_("Recipient"), queryset=User.objects.all(), required=True, ) message = forms.CharField( label=_("Message"), widget=forms.Textarea, required=True, ) def __init__(self, request, *args, **kwargs): super(MessageForm, self).__init__(*args, **kwargs) self.request = request self.fields["recipient"].queryset = self.fields["recipient"].queryset. 
exclude(pk=request.user.pk) def save(self): cleaned_data = self.cleaned_data send_mail( subject=ugettext("A message from %s") % self.request.user, message=cleaned_data["message"], from_email=self.request.user.email, recipient_list=[ cleaned_data["recipient"].email ], fail_silently=True, ) Then, call the save() method from the form in the view if the posted data is valid: # email_messages/views.py # -*- coding: UTF-8 -*- from __future__ import unicode_literals from django.contrib.auth.decorators import login_required from django.shortcuts import render, redirect from .forms import MessageForm @login_required def message_to_user(request): if request.method == "POST": form = MessageForm(request, data=request.POST) if form.is_valid(): form.save() return redirect("message_to_user_done") else: form = MessageForm(request) return render(request, "email_messages/message_to_user.html", {"form": form} ) How it works... Let's take a look at the form. The save() method uses the cleaned data from the form to read the recipient's e-mail address and the message. The sender of the e-mail is the current user from the request. If the e-mail cannot be sent due to an incorrect mail server configuration or another reason, it will fail silently; that is, no error will be raised. Now, let's look at the view. When the posted form is valid, the save() method of the form will be called and the user will be redirected to the success page. See also The Passing HttpRequest to the form recipe Uploading images In this recipe, we will take a look at the easiest way to handle image uploads. You will see an example of an app, where the visitors can upload images with inspirational quotes. Getting ready Make sure to have Pillow or PIL installed in your virtual environment or globally. Then, let's create a quotes app and put it in INSTALLED_APPS in the settings. 
Then, we will add an InspirationalQuote model with three fields: the author, quote text, and picture, as follows:

# quotes/models.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
import os
from django.db import models
from django.utils.timezone import now as timezone_now
from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import python_2_unicode_compatible

def upload_to(instance, filename):
    now = timezone_now()
    filename_base, filename_ext = os.path.splitext(filename)
    return "quotes/%s%s" % (
        now.strftime("%Y/%m/%Y%m%d%H%M%S"),
        filename_ext.lower(),
    )

@python_2_unicode_compatible
class InspirationalQuote(models.Model):
    author = models.CharField(_("Author"), max_length=200)
    quote = models.TextField(_("Quote"))
    picture = models.ImageField(_("Picture"),
        upload_to=upload_to,
        blank=True,
        null=True,
    )

    class Meta:
        verbose_name = _("Inspirational Quote")
        verbose_name_plural = _("Inspirational Quotes")

    def __str__(self):
        return self.quote

In addition, we created an upload_to function, which sets the path of the uploaded picture to be something similar to quotes/2015/04/20150424140000.png. As you can see, we use the date timestamp as the filename to ensure its uniqueness. We pass this function to the picture image field.

How to do it...

Execute these steps to complete the recipe:

Create the forms.py file and put a simple model form there:

# quotes/forms.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django import forms
from .models import InspirationalQuote

class InspirationalQuoteForm(forms.ModelForm):
    class Meta:
        model = InspirationalQuote
        fields = ["author", "quote", "picture"]

In the views.py file, put a view that handles the form. Don't forget to pass the FILES dictionary-like object to the form.
When the form is valid, trigger the save() method as follows: # quotes/views.py # -*- coding: UTF-8 -*- from __future__ import unicode_literals from django.shortcuts import redirect from django.shortcuts import render from .forms import InspirationalQuoteForm def add_quote(request): if request.method == "POST": form = InspirationalQuoteForm( data=request.POST, files=request.FILES, ) if form.is_valid(): quote = form.save() return redirect("add_quote_done") else: form = InspirationalQuoteForm() return render(request, "quotes/change_quote.html", {"form": form} ) Lastly, create a template for the view in templates/quotes/change_quote.html. It is very important to set the enctype attribute to multipart/form-data for the HTML form, otherwise the file upload won't work: {# templates/quotes/change_quote.html #} {% extends "base.html" %} {% load i18n %} {% block content %} <form method="post" action="" enctype="multipart/form-data"> {% csrf_token %} {{ form.as_p }} <button type="submit">{% trans "Save" %}</button> </form> {% endblock %} How it works... Django model forms are forms that are created from models. They provide all the fields from the model so you don't need to define them again. In the preceding example, we created a model form for the InspirationalQuote model. When we save the form, the form knows how to save each field in the database as well as upload the files and save them in the media directory. There's more As a bonus, we will see an example of how to generate a thumbnail out of the uploaded image. Using this technique, you could also generate several other specific versions of the image, such as the list version, mobile version, and desktop computer version. We will add three methods to the InspirationalQuote model (quotes/models.py). They are save(), create_thumbnail(), and get_thumbnail_picture_url(). When the model is being saved, we will trigger the creation of the thumbnail. 
When we need to show the thumbnail in a template, we can get its URL using {{ quote.get_thumbnail_picture_url }}. The method definitions are as follows: # quotes/models.py # … from PIL import Image from django.conf import settings from django.core.files.storage import default_storage as storage THUMBNAIL_SIZE = getattr( settings, "QUOTES_THUMBNAIL_SIZE", (50, 50) ) class InspirationalQuote(models.Model): # ... def save(self, *args, **kwargs): super(InspirationalQuote, self).save(*args, **kwargs) # generate thumbnail picture version self.create_thumbnail() def create_thumbnail(self): if not self.picture: return "" file_path = self.picture.name filename_base, filename_ext = os.path.splitext(file_path) thumbnail_file_path = "%s_thumbnail.jpg" % filename_base if storage.exists(thumbnail_file_path): # if thumbnail version exists, return its url path return "exists" try: # resize the original image and # return URL path of the thumbnail version f = storage.open(file_path, 'r') image = Image.open(f) width, height = image.size if width > height: delta = width - height left = int(delta/2) upper = 0 right = height + left lower = height else: delta = height - width left = 0 upper = int(delta/2) right = width lower = width + upper image = image.crop((left, upper, right, lower)) image = image.resize(THUMBNAIL_SIZE, Image.ANTIALIAS) f_mob = storage.open(thumbnail_file_path, "w") image.save(f_mob, "JPEG") f_mob.close() return "success" except: return "error" def get_thumbnail_picture_url(self): if not self.picture: return "" file_path = self.picture.name filename_base, filename_ext = os.path.splitext(file_path) thumbnail_file_path = "%s_thumbnail.jpg" % filename_base if storage.exists(thumbnail_file_path): # if thumbnail version exists, return its URL path return storage.url(thumbnail_file_path) # return original as a fallback return self.picture.url In the preceding methods, we are using the file storage API instead of directly juggling the filesystem, as we could then exchange 
the default storage with Amazon S3 buckets or other storage services, and the methods will still work.

How does creating the thumbnail work? If we had the original file saved as quotes/2014/04/20140424140000.png, we check whether the quotes/2014/04/20140424140000_thumbnail.jpg file already exists and, if it doesn't, we open the original image, crop it from the center, resize it to 50 x 50 pixels, and save it to the storage. The get_thumbnail_picture_url() method checks whether the thumbnail version exists in the storage and returns its URL. If the thumbnail version does not exist, the URL of the original image is returned as a fallback.

Summary

In this article, we learned about passing an HttpRequest to the form and utilizing the save method of the form.

You can find various books on Django on our website:

Learning Website Development with Django (https://www.packtpub.com/web-development/learning-website-development-django)
Instant Django 1.5 Application Development Starter (https://www.packtpub.com/web-development/instant-django-15-application-development-starter)
Django Essentials (https://www.packtpub.com/web-development/django-essentials)

Resources for Article:

Further resources on this subject:

So, what is Django? [article]
Code Style in Django [article]
Django JavaScript Integration: jQuery In-place Editing Using Ajax [article]
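The center-crop arithmetic inside create_thumbnail() is easy to verify in isolation. Here is a minimal pure-Python sketch of the same box computation (the function name center_crop_box is ours; the article keeps this logic inline in the model method):

```python
def center_crop_box(width, height):
    """Return the (left, upper, right, lower) box for a centered square crop,
    mirroring the branch logic used in create_thumbnail()."""
    if width > height:
        # landscape: trim equal strips from left and right
        delta = width - height
        left, upper = int(delta / 2), 0
        right, lower = height + left, height
    else:
        # portrait (or square): trim equal strips from top and bottom
        delta = height - width
        left, upper = 0, int(delta / 2)
        right, lower = width, width + upper
    return (left, upper, right, lower)

print(center_crop_box(200, 100))  # (50, 0, 150, 100)
print(center_crop_box(100, 300))  # (0, 100, 100, 200)
```

The resulting box is always square, so the subsequent resize() to THUMBNAIL_SIZE does not distort the image.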

Packt
13 Jan 2016
17 min read

A Test-Driven Data Model

In this article by Dr. Dominik Hauser, author of Test-Driven Development with Swift, we will cover the following topics:

Implementing a To-Do item
Implementing the location

iOS apps are often developed using a design pattern called Model-View-Controller (MVC). In this pattern, each class (also, a struct or enum) is either a model object, a view, or a controller. Model objects are responsible for storing data. They should be independent of the kind of presentation. For example, it should be possible to use the same model object for an iOS app and a command-line tool on Mac.

View objects are the presenters of data. They are responsible for making the objects visible (or, in the case of a VoiceOver-enabled app, hearable) for users. Views are specific to the device that the app is executed on. In the case of a cross-platform application, view objects cannot be shared. Each platform needs its own implementation of the view layer.

Controller objects communicate between the model and view objects. They are responsible for making the model objects presentable.

We will use MVC for our to-do app because it is one of the easiest design patterns, and it is commonly used by Apple in their sample code. This article starts with the test-driven development of the model layer of our application.

(For more resources related to this topic, see here.)

Implementing the To-Do item

A to-do app needs a model class/struct to store information for to-do items. We start by adding a new test case to the test target. Open the To-Do project and select the ToDoTests group. Navigate to File | New | File, go to iOS | Source | Unit Test Case Class, and click on Next. Put in the name ToDoItemTests, make it a subclass of XCTestCase, select Swift as the language, and click on Next. In the next window, create a new folder, called Model, and click on Create. Now, delete the ToDoTests.swift template test case.
At the time of writing this article, if you delete ToDoTests.swift before you add the first test case in a test target, you will see a pop up from Xcode, telling you that adding the Swift file will create a mixed Swift and Objective-C target: This is a bug in Xcode 7.0. It seems that when adding the first Swift file to a target, Xcode assumes that there have to be Objective-C files already. Click on Don't Create if this happens to you because we will not use Objective-C in our tests. Adding a title property Open ToDoItemTests.swift, and add the following import expression right below import XCTest: @testable import ToDo This is needed to be able to test the ToDo module. The @testable keyword makes internal methods of the ToDo module accessible by the test case. Remove the two testExample() and testPerformanceExample()template test methods. The title of a to-do item is required. Let's write a test to ensure that an initializer that takes a title string exists. Add the following test method at the end of the test case (but within the ToDoItemTests class): func testInit_ShouldTakeTitle() {    ToDoItem(title: "Test title") } The static analyzer built into Xcode will complain about the use of unresolved identifier 'ToDoItem': We cannot compile this code because Xcode cannot find the ToDoItem identifier. Remember that not compiling a test is equivalent to a failing test, and as soon as we have a failing test, we need to write an implementation code to make the test pass. To add a file to the implementation code, first click on the ToDo group in Project navigator. Otherwise, the added file will be put into the test group. Go to File | New | File, navigate to the iOS | Source | Swift File template, and click on Next. Create a new folder called Model. In the Save As field, put in the name ToDoItem.swift, make sure that the file is added to the ToDo target and not to the ToDoTests target, and click on Create. 
Open ToDoItem.swift in the editor, and add the following code:

struct ToDoItem {
}

This code is a complete implementation of a struct named ToDoItem. So, Xcode should now be able to find the ToDoItem identifier. Run the test by either going to Product | Test or using the ⌘U shortcut. The code does not compile because there is Extra argument 'title' in call. This means that at this stage, we could initialize an instance of ToDoItem like this:

let item = ToDoItem()

But we want to have an initializer that takes a title. We need to add a property, named title, of the String type to store the title:

struct ToDoItem {
    let title: String
}

Run the test again. It should pass. We have implemented the first micro feature of our to-do app using TDD. And it wasn't even hard. But first, we need to check whether there is anything to refactor in the existing test and implementation code. The tests and code are clean and simple. There is nothing to refactor as yet. Always remember to check whether refactoring is needed after you have made the tests green.

But there are a few things to note about the test. First, Xcode shows a warning that Result of initializer is unused. To make this warning go away, assign the result of the initializer to an underscore: _ = ToDoItem(title: "Test title"). This tells Xcode that we know what we are doing. We want to call the initializer of ToDoItem, but we do not care about its return value. Secondly, there is no XCTAssert function call in the test. To add an assert, we could rewrite the test as follows:

func testInit_ShouldTakeTitle() {
    let item = ToDoItem(title: "Test title")
    XCTAssertNotNil(item, "item should not be nil")
}

But in Swift, a non-failable initializer cannot return nil. It always returns a valid instance. This means that the XCTAssertNotNil() method is useless. We do not need it to ensure that we have written enough code to implement the tested micro feature.
It is not needed to drive the development and it does not make the code better. In the following tests, we will omit the XCTAssert functions when they are not needed in order to make a test fail. Before we proceed to the next tests, let's set up the editor in a way that makes the TDD workflow easier and faster. Open ToDoItemTests.swift in the editor. Open Project navigator, and hold down the option key while clicking on ToDoItem.swift in the navigator to open it in the assistant editor. Depending on the size of your screen and your preferences, you might prefer to hide the navigator again. With this setup, you have the tests and code side by side, and switching from a test to code and vice versa takes no time. In addition to this, as the relevant test is visible while you write the code, it can guide the implementation. Adding an item description property A to-do item can have a description. We would like to have an initializer that also takes a description string. To drive the implementation, we need a failing test for the existence of that initializer: func testInit_ShouldTakeTitleAndDescription() {    _ = ToDoItem(title: "Test title",    itemDescription: "Test description") } Again, this code does not compile because there is Extra argument 'itemDescription' in call. To make this test pass, we add a itemDescription of type String? property to ToDoItem: struct ToDoItem {    let title: String    let itemDescription: String? } Run the tests. The testInit_ShouldTakeTitleAndDescription()test fails (that is, it does not compile) because there is Missing argument for parameter 'itemDescription' in call. The reason for this is that we are using a feature of Swift where structs have an automatic initializer with arguments setting their properties. The initializer in the first test only has one argument, and, therefore, the test fails. 
To make the two tests pass again, replace the initializer in testInit_ShouldTakeTitle() with this:

_ = ToDoItem(title: "Test title", itemDescription: nil)

Run the tests to check whether all the tests pass again. But now the initializer in the first test looks bad. We would like to be able to have a short initializer with only one argument in case the to-do item only has a title. So, the code needs refactoring. To have more control over the initialization, we have to implement it ourselves. Add the following code to ToDoItem:

init(title: String, itemDescription: String? = nil) {
    self.title = title
    self.itemDescription = itemDescription
}

This initializer has two arguments. The second argument has a default value, so we do not need to provide both arguments. When the second argument is omitted, the default value is used. Before we refactor the tests, run the tests to make sure that they still pass. Then, remove the second argument from the initializer in testInit_ShouldTakeTitle():

func testInit_ShouldTakeTitle() {
    _ = ToDoItem(title: "Test title")
}

Run the tests again to make sure that everything still works.

Removing a hidden source for bugs

To be able to use a short initializer, we need to define it ourselves. But this also introduces a new source of potential bugs. We can remove the two micro features we have implemented and still have both tests pass. To see how this works, open ToDoItem.swift, and comment out the properties and the assignments in the initializer:

struct ToDoItem {
    //let title: String
    //let itemDescription: String?

    init(title: String, itemDescription: String? = nil) {

        //self.title = title
        //self.itemDescription = itemDescription
    }
}

Run the tests. Both tests still pass. The reason for this is that they do not check whether the values of the initializer arguments are actually set to any of the ToDoItem properties. We can easily extend the tests to make sure that the values are set.
First, let's change the name of the first test to testInit_ShouldSetTitle(), and replace its contents with the following code:

let item = ToDoItem(title: "Test title")
XCTAssertEqual(item.title, "Test title",
    "Initializer should set the item title")

This test does not compile because ToDoItem does not have a property title (it is commented out). This shows us that the test is now testing our intention. Remove the comment signs for the title property and assignment of the title in the initializer, and run the tests again. All the tests pass. Now, replace the second test with the following code:

func testInit_ShouldSetTitleAndDescription() {
    let item = ToDoItem(title: "Test title",
        itemDescription: "Test description")

    XCTAssertEqual(item.itemDescription, "Test description",
        "Initializer should set the item description")
}

Remove the remaining comment signs in ToDoItem, and run the tests again. Both tests pass again, and they now test whether the initializer works.

Adding a timestamp property

A to-do item can also have a due date, which is represented by a timestamp. Add the following test to make sure that we can initialize a to-do item with a title, a description, and a timestamp:

func testInit_ShouldSetTitleAndDescriptionAndTimestamp() {
    let item = ToDoItem(title: "Test title",
        itemDescription: "Test description",
        timestamp: 0.0)

    XCTAssertEqual(0.0, item.timestamp,
        "Initializer should set the timestamp")
}

Again, this test does not compile because there is an extra argument in the initializer. From the implementation of the other properties, we know that we have to add a timestamp property in ToDoItem and set it in the initializer:

struct ToDoItem {
    let title: String
    let itemDescription: String?
    let timestamp: Double?

    init(title: String,
        itemDescription: String? = nil,
        timestamp: Double? = nil) {

            self.title = title
            self.itemDescription = itemDescription
            self.timestamp = timestamp
    }
}

Run the tests. All the tests pass. The tests are green, and there is nothing to refactor.

Adding a location property

The last property that we would like to be able to set in the initializer of ToDoItem is its location. The location has a name and can optionally have a coordinate. We will use a struct to encapsulate this data into its own type. Add the following code to ToDoItemTests:

func testInit_ShouldSetTitleAndDescriptionAndTimestampAndLocation() {
    let location = Location(name: "Test name")
}

The test is not finished, but it already fails because Location is an unresolved identifier. There is no class, struct, or enum named Location yet. Open Project navigator, add a Swift File with the name Location.swift, and add it to the Model folder. From our experience with the ToDoItem struct, we already know what is needed to make the test green. Add the following code to Location.swift:

struct Location {
    let name: String
}

This defines a Location struct with a name property and makes the test code compilable again. But the test is not finished yet. Add the following code to testInit_ShouldSetTitleAndDescriptionAndTimestampAndLocation():

func testInit_ShouldSetTitleAndDescriptionAndTimestampAndLocation() {
    let location = Location(name: "Test name")
    let item = ToDoItem(title: "Test title",
        itemDescription: "Test description",
        timestamp: 0.0,
        location: location)

    XCTAssertEqual(location.name, item.location?.name,
        "Initializer should set the location")
}

Unfortunately, we cannot use location itself yet to check for equality, so the following assert does not work:

XCTAssertEqual(location, item.location,
    "Initializer should set the location")

The reason for this is that the first two arguments of XCTAssertEqual() have to conform to the Equatable protocol.
Again, this does not compile because the initializer of ToDoItem does not have an argument called location. Add the location property and the initializer argument to ToDoItem. The result should look like this:

struct ToDoItem {
    let title: String
    let itemDescription: String?
    let timestamp: Double?
    let location: Location?

    init(title: String,
        itemDescription: String? = nil,
        timestamp: Double? = nil,
        location: Location? = nil) {

            self.title = title
            self.itemDescription = itemDescription
            self.timestamp = timestamp
            self.location = location
    }
}

Run the tests again. All the tests pass and there is nothing to refactor. We have now implemented a struct to hold the to-do items using TDD.

Implementing the location

In the previous section, we added a struct to hold the location information. We will now add tests to make sure Location has the needed properties and initializer. The tests could be added to ToDoItemTests, but they are easier to maintain when the test classes mirror the implementation classes/structs. So, we need a new test case class. Open Project navigator, select the ToDoTests group, and add a unit test case class with the name LocationTests. Make sure to go to iOS | Source | Unit Test Case Class because we want to test the iOS code and Xcode sometimes preselects OS X | Source. Choose to store the file in the Model folder we created previously. Set up the editor to show LocationTests.swift on the left-hand side and Location.swift in the assistant editor on the right-hand side. In the test class, add @testable import ToDo, and remove the testExample() and testPerformanceExample() template tests.

Adding a coordinate property

To drive the addition of a coordinate property, we need a failing test.
Add the following test to LocationTests:

func testInit_ShouldSetNameAndCoordinate() {
    let testCoordinate = CLLocationCoordinate2D(latitude: 1,
        longitude: 2)
    let location = Location(name: "",
        coordinate: testCoordinate)

    XCTAssertEqual(location.coordinate?.latitude,
        testCoordinate.latitude,
        "Initializer should set latitude")
    XCTAssertEqual(location.coordinate?.longitude,
        testCoordinate.longitude,
        "Initializer should set longitude")
}

First, we create a coordinate and use it to create an instance of Location. Then, we assert that the latitude and the longitude of the location's coordinate are set to the correct values. We use the 1 and 2 values in the initializer of CLLocationCoordinate2D because it also has an initializer that takes no arguments (CLLocationCoordinate2D()) and sets the longitude and latitude to zero. We need to make sure in the test that the initializer of Location assigns the coordinate argument to its property. The test does not compile because CLLocationCoordinate2D is an unresolved identifier. We need to import CoreLocation in LocationTests.swift:

import XCTest
@testable import ToDo
import CoreLocation

The test still does not compile because Location does not have a coordinate property yet. Like ToDoItem, we would like to have a short initializer for locations that only have a name argument. Therefore, we need to implement the initializer ourselves and cannot use the one provided by Swift. Replace the contents of Location.swift with the following code:

import CoreLocation

struct Location {
    let name: String
    let coordinate: CLLocationCoordinate2D?

    init(name: String,
        coordinate: CLLocationCoordinate2D? = nil) {

            self.name = ""
            self.coordinate = coordinate
    }
}

Note that we have intentionally set the name in the initializer to an empty string. This is the easiest implementation that makes the tests pass. But it is clearly not what we want.
The initializer should set the name of the location to the value in the name argument. So, we need another test to make sure that the name is set correctly. Add the following test to LocationTests:

func testInit_ShouldSetName() {
    let location = Location(name: "Test name")
    XCTAssertEqual(location.name, "Test name",
        "Initializer should set the name")
}

Run the test to make sure it fails. To make the test pass, change self.name = "" in the initializer of Location to self.name = name. Run the tests again to check that now all the tests pass. There is nothing to refactor in the tests and implementation. Let's move on.

Summary

In this article, we covered the implementation of a to-do item by adding a title property, item description property, timestamp property, and more. We also covered the implementation of a location using the coordinate property.

Resources for Article:

Further resources on this subject:

Share and Share Alike [article]
Introducing Test-driven Machine Learning [article]
Testing a UI Using WebDriverJS [article]

Packt
12 Jan 2016
18 min read

Interactive Crime Map Using Flask

In this article by Gareth Dwyer, author of the book Flask By Example, we will cover how to set up a MySQL database on our VPS and create a database for the crime data. We'll follow on from this by setting up a basic page containing a map and textbox. We'll see how to link Flask to MySQL by storing data entered into the textbox in our database. We won't be using an ORM for our database queries or a JavaScript framework for user input and interaction. This means that there will be some laborious writing of SQL and vanilla JavaScript, but it's important to fully understand why tools and frameworks exist, and what problems they solve, before diving in and using them blindly. We'll cover the following topics:

- Introduction to SQL databases
- Installing MySQL on our VPS
- Connecting to MySQL from Python and creating the database
- Connecting to MySQL from Flask and inserting data

Setting up

We'll create a new git repository for our new code base, since although some of the setup will be similar, our new project should be completely unrelated to our first one. If you need more help with this step, head back to the setup of the first project and follow the detailed instructions there. If you're feeling confident, see if you can do it just with the following summary:

Head over to the website for Bitbucket, GitHub, or whichever hosting platform you used for the first project. Log in and use their Create a new repository functionality. Name your repo crimemap, and take note of the URL you're given. On your local machine, fire up a terminal and run the following commands:

mkdir crimemap
cd crimemap
git init
git remote add origin <git repository URL>

We'll leave this repository empty for now as we need to set up a database on our VPS. Once we have the database installed, we'll come back here to set up our Flask project.
Understanding relational databases

In its simplest form, a relational database management system, such as MySQL, is a glorified spreadsheet program, such as Microsoft Excel: We store data in rows and columns. Every row is a "thing" and every column is a specific piece of information about the thing in the relevant row. I put "thing" in inverted commas because we're not limited to storing objects. In fact, the most common example, both in the real world and in explaining databases, is data about people. A basic database storing information about customers of an e-commerce website could look something like the following:

| ID | First Name | Surname | Email Address        | Telephone       |
|----|------------|---------|----------------------|-----------------|
| 1  | Frodo      | Baggins | fbaggins@example.com | +1 111 111 1111 |
| 2  | Bilbo      | Baggins | bbaggins@example.com | +1 111 111 1010 |
| 3  | Samwise    | Gamgee  | sgamgee@example.com  | +1 111 111 1001 |

If we look from left to right in a single row, we get all the information about one person. If we look at a single column from top to bottom, we get one piece of information (for example, an e-mail address) for everyone. Both can be useful—if we want to add a new person or contact a specific person, we're probably interested in a specific row. If we want to send a newsletter to all our customers, we're just interested in the e-mail column. So why can't we just use spreadsheets instead of databases then? Well, if we take the example of an e-commerce store further, we quickly see the limitations. If we want to store a list of all the items we have on offer, we can create another table similar to the preceding one, with columns such as "Item name", "Description", "Price", and "Quantity in stock". Our model continues to be useful. But now, if we want to store a list of all the items Frodo has ever purchased, there's no good place to put the data. We could add 1000 columns to our customer table: "Purchase 1", "Purchase 2", and so on up to "Purchase 1000", and hope that Frodo never buys more than 1000 items.
This isn't scalable or easy to work with: How do we get the description for the item Frodo purchased last Tuesday? Do we just store the item's name in our new column? What happens with items that don't have unique names? Soon, we realise that we need to think about it backwards. Instead of storing the items purchased by a person in the "Customers" table, we create a new table called "Orders" and store a reference to the customer in every order. Thus, an order knows which customer it belongs to, but a customer has no inherent knowledge of what orders belong to them. While our model still fits into a spreadsheet at the push of a button, as we grow our data model and data size, our spreadsheet becomes cumbersome. We need to perform complicated queries such as "I want to see all the items that are in stock and have been ordered at least once in the last 6 months and cost more than $10." Enter Relational database management systems (RDBMS). They've been around for decades and are a tried and tested way of solving a common problem—storing data with complicated relations in an organized and accessible manner. We won't be touching on their full capabilities in our crime map (in fact, we could probably store our data in a .txt file if we needed to), but if you're interested in building web applications, you will need a database at some point. So, let's start small and add the powerful MySQL tool to our growing toolbox. I highly recommend learning more about databases. If the taster you experience while building our current project takes your fancy, go read and learn about databases. The history of RDBMS is interesting, and the complexities and subtleties of normalization and database varieties (including NoSQL databases, which we'll see something of in our next project) deserve more study time than we can devote to them in a book that focuses on Python web development. Installing and configuring MySQL Installing and configuring MySQL is an extremely common task. 
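The customers/orders split described above can be sketched in a few lines of Python using the standard library's sqlite3 module (used here purely as an in-memory stand-in for MySQL; the table and column names are illustrative, not from the book's code). Each order stores a reference to its customer, so a JOIN recovers "everything Frodo bought" without any "Purchase 1 … Purchase 1000" columns:

```python
import sqlite3

# In-memory database standing in for a real RDBMS
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One table per "thing": customers and orders, linked by customer_id
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, first_name TEXT, email TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, item TEXT)")

cur.execute("INSERT INTO customers VALUES (1, 'Frodo', 'fbaggins@example.com')")
cur.executemany("INSERT INTO orders (customer_id, item) VALUES (?, ?)",
                [(1, 'Ring polish'), (1, 'Walking boots')])
conn.commit()

# The order knows which customer it belongs to, so we can ask for all of Frodo's orders
cur.execute("""SELECT orders.item FROM orders
               JOIN customers ON orders.customer_id = customers.id
               WHERE customers.first_name = 'Frodo'
               ORDER BY orders.id""")
frodo_items = [row[0] for row in cur.fetchall()]
print(frodo_items)  # ['Ring polish', 'Walking boots']
conn.close()
```

The key design point is that the reference lives on the order side: an order knows its customer, while the customer row never changes no matter how many orders accumulate.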
You can therefore find it in prebuilt images or in scripts that build entire stacks for you. A common stack is called the LAMP (Linux, Apache, MySQL, and PHP) stack, and many VPS providers provide a one-click LAMP stack image. As we are already using Linux and have already installed Apache manually, after installing MySQL, we'll be very close to the traditional LAMP stack, just using the P for Python instead of PHP. In keeping with our goal of "education first", we'll install MySQL manually and configure it through the command line instead of installing a GUI control panel. If you've used MySQL before, feel free to set it up as you see fit.

Installing MySQL on our VPS

Installing MySQL on our server is quite straightforward. SSH into your VPS and run the following commands:

sudo apt-get update
sudo apt-get install mysql-server

You should see an interface prompting you for a root password for MySQL. Enter a password of your choice and repeat it when prompted. Once the installation has completed, you can get a live SQL shell by typing the following command and entering the password you chose earlier:

mysql -p

We could create a database and schema using this shell, but we'll be doing that through Python instead, so hit Ctrl + C to terminate the MySQL shell if you opened it.

Installing Python drivers for MySQL

Because we want to use Python to talk to our database, we need to install another package. There are two main MySQL connectors for Python: PyMySQL and MySQLdb. The first is preferable from a simplicity and ease-of-use point of view. It is a pure Python library, meaning that it has no dependencies. MySQLdb is a C extension, and therefore has some dependencies, but is, in theory, a bit faster. They work very similarly once installed.
To install it, run the following (still on your VPS):

sudo pip install pymysql

Creating our crimemap database in MySQL

Some knowledge of SQL's syntax will be useful for the rest of this article, but you should be able to follow either way. The first thing we need to do is create a database for our web application. If you're comfortable using a command-line editor, you can create the following scripts directly on the VPS as we won't be running them locally and this can make them easier to debug. However, developing over an SSH session is far from ideal, so I recommend that you write them locally and use git to transfer them to the server before running. This can make debugging a bit frustrating, so be extra careful in writing these scripts. If you want, you can get them directly from the code bundle that comes with this book. In this case, you simply need to populate the Password field correctly and everything should work.

Creating a database setup script

In the crimemap directory where we initialised our git repo in the beginning, create a Python file called db_setup.py, containing the following code:

import pymysql
import dbconfig

connection = pymysql.connect(host='localhost',
                             user=dbconfig.db_user,
                             passwd=dbconfig.db_password)
try:
    with connection.cursor() as cursor:
        sql = "CREATE DATABASE IF NOT EXISTS crimemap"
        cursor.execute(sql)
        sql = """CREATE TABLE IF NOT EXISTS crimemap.crimes (
            id int NOT NULL AUTO_INCREMENT,
            latitude FLOAT(10,6),
            longitude FLOAT(10,6),
            date DATETIME,
            category VARCHAR(50),
            description VARCHAR(1000),
            updated_at TIMESTAMP,
            PRIMARY KEY (id)
        )"""
        cursor.execute(sql)
    connection.commit()
finally:
    connection.close()

Let's take a look at what this code does. First, we import the pymysql library we just installed. We also import dbconfig, which we'll create locally in a bit and populate with the database credentials (we don't want to store these in our repository).
Then, we create a connection to our database using localhost (because our database is installed on the same machine as our code) and the credentials that don’t exist yet. Now that we have a connection to our database, we can get a cursor. You can think of a cursor as being a bit like the blinking object in your word processor that indicates where text will appear when you start typing. A database cursor is an object that points to a place in the database where we want to create, read, update, or delete data. Once we start dealing with database operations, there are various exceptions that could occur. We’ll always want to close our connection to the database, so we create a cursor (and do all subsequent operations) inside a try block with a connection.close() in a finally block (the finally block will get executed whether or not the try block succeeds). The cursor is also a resource, so we’ll grab one and use it in a with block so that it’ll automatically be closed when we’re done with it. With the setup done, we can start executing SQL code. Creating the database SQL reads similarly to English, so it's normally quite straightforward to work out what existing SQL does even if it's a bit more tricky to write new code. Our first SQL statement creates a database (crimemap) if it doesn't already exist (this means that if we come back to this script, we can leave this line in without deleting the entire database every time). We create our first SQL statement as a string and use the variable sql to store it. Then we execute the statement using the cursor we created. 
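The open-use-close discipline described here can be sketched with the standard library's sqlite3 module (a stand-in for pymysql, whose connections and cursors follow the same pattern; note that, unlike pymysql cursors, sqlite3 cursors are not context managers, so this sketch closes the cursor explicitly in its own finally block):

```python
import sqlite3

connection = sqlite3.connect(":memory:")  # stand-in for pymysql.connect(...)
try:
    cursor = connection.cursor()
    try:
        # An idempotent CREATE, as in db_setup.py: safe to re-run
        cursor.execute("CREATE TABLE IF NOT EXISTS crimes "
                       "(id INTEGER PRIMARY KEY, description TEXT)")
        connection.commit()
        cursor.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
        tables = [row[0] for row in cursor.fetchall()]
    finally:
        cursor.close()      # release the cursor whether or not the queries succeeded
finally:
    connection.close()      # the connection is always closed, even on errors

print(tables)  # ['crimes']
```

Whatever goes wrong inside the inner block, both finally clauses run, so neither the cursor nor the connection can leak.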
Using the database setup script

We save our script locally and push it to the repository using the following commands:

git add db_setup.py
git commit -m "database setup script"
git push origin master

We then SSH to our VPS and clone the new repository to our /var/www directory using the following commands:

ssh user@123.456.789.123
cd /var/www
git clone <your-git-url>
cd crimemap

Adding credentials to our setup script

Now, we still don't have the credentials that our script relies on. We'll do the following things before using our setup script:

- Create the dbconfig.py file with the database username and password.
- Add this file to .gitignore to prevent it from being added to our repository.

The following are the steps to do so:

Create and edit dbconfig.py using the nano command:

nano dbconfig.py

Then, type the following (using the password you chose when you installed MySQL):

db_user = "root"
db_password = "<your-mysql-password>"

Save it by hitting Ctrl + X and entering Y when prompted. Now, use similar nano commands to create, edit, and save .gitignore, which should contain this single line:

dbconfig.py

Running our database setup script

With that done, you can run the following command:

python db_setup.py

Assuming everything goes smoothly, you should now have a database with a table to store crimes. Python will output any SQL errors, allowing you to debug if necessary. If you make changes to the script from the server, run the same git add, git commit, and git push commands that you did from your local machine. That concludes our preliminary database setup! Now we can create a basic Flask project that uses our database.

Creating an outline for our Flask app

We're going to start by building a skeleton of our crime map application.
It'll be a basic Flask application with a single page that:

- Displays all data in the crimes table of our database
- Allows users to input data and stores this data in the database
- Has a "clear" button that deletes all the previously input data

Although what we're going to be storing and displaying can't really be described as "crime data" yet, we'll be storing it in the crimes table that we created earlier. We'll just be using the description field for now, ignoring all the other ones. The process to set up the Flask application is very similar to what we used before. We're going to separate out the database logic into a separate file, leaving our main crimemap.py file for the Flask setup and routing.

Setting up our directory structure

On your local machine, change to the crimemap directory. If you created the database setup script on the server or made any changes to it there, then make sure you sync the changes locally. Then, create the templates directory and touch the files we're going to be using, as follows:

cd crimemap
git pull origin master
mkdir templates
touch templates/home.html
touch crimemap.py
touch dbhelper.py

Looking at our application code

The crimemap.py file contains nothing unexpected and should be entirely familiar from our headlines project. The only thing to point out is the DBHelper class, whose code we'll see next. We simply create a global DBHelper instance right after initializing our app and then use it in the relevant methods to grab data from, insert data into, or delete all data from the database.
from dbhelper import DBHelper
from flask import Flask
from flask import render_template
from flask import request

app = Flask(__name__)
DB = DBHelper()

@app.route("/")
def home():
    try:
        data = DB.get_all_inputs()
    except Exception as e:
        print e
        data = None
    return render_template("home.html", data=data)

@app.route("/add", methods=["POST"])
def add():
    try:
        data = request.form.get("userinput")
        DB.add_input(data)
    except Exception as e:
        print e
    return home()

@app.route("/clear")
def clear():
    try:
        DB.clear_all()
    except Exception as e:
        print e
    return home()

if __name__ == '__main__':
    app.run(debug=True)

Looking at our SQL code

There's a little bit more SQL to learn from our database helper code. In dbhelper.py, we need the following:

import pymysql
import dbconfig

class DBHelper:

    def connect(self, database="crimemap"):
        return pymysql.connect(host='localhost',
                               user=dbconfig.db_user,
                               passwd=dbconfig.db_password,
                               db=database)

    def get_all_inputs(self):
        connection = self.connect()
        try:
            query = "SELECT description FROM crimes;"
            with connection.cursor() as cursor:
                cursor.execute(query)
                return cursor.fetchall()
        finally:
            connection.close()

    def add_input(self, data):
        connection = self.connect()
        try:
            query = "INSERT INTO crimes (description) VALUES ('{}');".format(data)
            with connection.cursor() as cursor:
                cursor.execute(query)
            connection.commit()
        finally:
            connection.close()

    def clear_all(self):
        connection = self.connect()
        try:
            query = "DELETE FROM crimes;"
            with connection.cursor() as cursor:
                cursor.execute(query)
            connection.commit()
        finally:
            connection.close()

As in our setup script, we need to make a connection to our database and then get a cursor from our connection in order to do anything meaningful. Again, we perform all our operations in try: ...finally: blocks in order to ensure that the connection is closed. In our helper code, we see three of the four main database operations. CRUD (Create, Read, Update, and Delete) describes the basic database operations.
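One caveat worth flagging about the add_input() method above before we continue: building the INSERT statement with .format() splices user input straight into the SQL string, which breaks on quotes and opens the door to SQL injection. A safer sketch uses the driver's own parameter substitution instead (pymysql uses %s placeholders; the sqlite3 stand-in below uses ?, but the idea is identical):

```python
import sqlite3

connection = sqlite3.connect(":memory:")
cursor = connection.cursor()
cursor.execute("CREATE TABLE crimes (id INTEGER PRIMARY KEY, description TEXT)")

data = "Bike stolen; the note said 'sorry'"   # user input containing a quote
# The driver escapes the value itself, so quotes (and injection attempts) are harmless
cursor.execute("INSERT INTO crimes (description) VALUES (?)", (data,))
connection.commit()

cursor.execute("SELECT description FROM crimes")
rows = cursor.fetchall()
print(rows)  # [("Bike stolen; the note said 'sorry'",)]
connection.close()
```

With the .format() version, the single quote in the input would have terminated the SQL string early; with a parameterized query, the value is passed to the database separately from the statement and cannot change its structure.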
We are either creating and inserting new data or reading, modifying, or deleting existing data. We have no need to update data in our basic app, but creating, reading, and deleting are certainly useful.

Creating our view code

Python and SQL code is fun to write, and it is indeed the main part of our application. However, at the moment, we have a house without doors or windows—the difficult and impressive bit is done, but it's unusable. Let's add a few lines of HTML to allow the world to interact with the code we've written. In /templates/home.html, add the following:

<html>
<head>
<title>Crime Map</title>
</head>
<body>
<h1>Crime Map</h1>
<form action="/add" method="POST">
<input type="text" name="userinput">
<input type="submit" value="Submit">
</form>
<a href="/clear">clear</a>
{% for userinput in data %}
<p>{{userinput}}</p>
{% endfor %}
</body>
</html>

There's nothing we haven't seen before. We have a form with a single text input box to add data to our database by calling the /add function of our app, and directly below it, we loop through all the existing data and display each piece within <p> tags.

Running the code on our VPS

Finally, we just need to make our code accessible to the world. This means pushing it to our git repo, pulling it onto the VPS, and configuring Apache to serve it. Run the following commands locally:

git add .
git commit -m "Skeleton CrimeMap"
git push origin master
ssh <username>@<vps-ip-address>

And on your VPS, use the following commands:

cd /var/www/crimemap
git pull origin master

Now, we need a .wsgi file to link our Python code to Apache:

nano crimemap.wsgi

The .wsgi file should contain the following:

import sys
sys.path.insert(0, "/var/www/crimemap")
from crimemap import app as application

Hit Ctrl + X and then Y when prompted to save.
We also need to create a new Apache .conf file and set this as the default (instead of the headlines.conf file that is our current default), as follows:

cd /etc/apache2/sites-available
nano crimemap.conf

This file should contain the following:

<VirtualHost *>
    ServerName example.com
    WSGIScriptAlias / /var/www/crimemap/crimemap.wsgi
    WSGIDaemonProcess crimemap
    <Directory /var/www/crimemap>
        WSGIProcessGroup crimemap
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
    </Directory>
</VirtualHost>

This is so similar to the headlines.conf file we created for our previous project that you might find it easier to just copy that one and substitute code as necessary. Finally, we need to deactivate the old site (later on, we'll look at how to run multiple sites simultaneously off the same server) and activate the new one:

sudo a2dissite headlines.conf
sudo a2ensite crimemap.conf
sudo service apache2 reload

Now, everything should be working. If you copied the code out manually, it's almost certain that there's a bug or two to deal with. Don't be discouraged by this—remember that debugging is expected to be a large part of development! If necessary, do a tail -f on /var/log/apache2/error.log while you load the site in order to see any errors. If this fails, add some print statements to crimemap.py and dbhelper.py to narrow down the places where things are breaking. Once everything is working, you should be able to see the following in your browser: Notice how the data we get from the database is a tuple, which is why it is surrounded by brackets and has a trailing comma. This is because we selected only a single field (description) from our crimes table when we could, in theory, be dealing with many columns for each crime (and soon will be).

Summary

That's it for the introduction to our crime map project.
Resources for Article:

Further resources on this subject:

Web Scraping with Python [article]
Python 3: Building a Wiki Application [article]
Using memcached with Python [article]
Packt
30 Dec 2015
9 min read

Courses, Users, and Roles

In this article, Alex Büchner, the author of the book Moodle 3 Administration, Third Edition, gives an overview of Moodle courses, users, and roles. The three concepts are inherently intertwined and any one of these cannot be used without the other two. We will deal with the basics of the three core elements and show how they work together. Let's see what they are:

- Moodle courses: Courses are central to Moodle as this is where learning takes place. Teachers upload their learning resources, create activities, assist in learning and grade work, monitor progress, and so on. Students, on the other hand, read, listen to or watch learning resources, participate in activities, submit work, collaborate with others, and so on.
- Moodle users: These are individuals accessing our Moodle system. Typical users are students and teachers/trainers, but also there are others such as teaching assistants, managers, parents, assessors, examiners, or guests. Oh, and the administrator, of course!
- Moodle roles: Roles are effectively permissions that specify which features users are allowed to access and, also, where and when (in Moodle) they can access them.

Bear in mind that this article only covers the basic concepts of these three core elements.

A high-level overview

To give you an overview of courses, users, and roles, let's have a look at the following diagram. It shows nicely how central the three concepts are and also how other features are related to them. Again, all of their intricacies will be dealt with in due course, so for now, just start getting familiar with some Moodle terminology. Let's start at the bottom-left and cycle through the pyramid clockwise. Users have to go through an Authentication process to get access to Moodle. They then have to go through the Enrolments step to be able to participate in Courses, which themselves are organized into Categories.
Groups & Cohorts are different ways to group users at course level or site-wide. Users are granted Roles in particular Contexts. Which role is allowed to do what and which isn't, depends entirely on the Permissions set within that role. The diagram also demonstrates a catch-22 situation. If we start with users, we have no courses to enroll them into (except the front page); if we start with courses, we have no users who can participate in them. Not to worry though. Moodle lets us go back and forth between any administrative areas and, often, perform multiple tasks at once.

Moodle courses

Moodle manages activities and stores resources in courses, and this is where learning and collaboration takes place. Courses themselves belong to categories, which are organized hierarchically, similar to folders on our local hard drive. Moodle comes with a default category called Miscellaneous, which is sufficient to show the basics of courses. Moodle is a course-centric system. To begin with, let's create the first course. To do so, go to Courses | Manage courses and categories. Here, select the Miscellaneous category. Then, select the Create new course link, and you will be directed to the screen where course details have to be entered. For now, let's focus on the two compulsory fields, namely Course full name and Course short name. The former is displayed at various places in Moodle, whereas the latter is, by default, used to identify the course and is also shown in the breadcrumb trail. For now, we leave all other fields empty or at their default values and save the course by clicking on the Save changes button at the bottom. The screen displayed after clicking on Save changes shows enrolled users, if any. Since we just created the course, there are no users present in the course yet. In fact, except the administrator account we are currently using, there are no users at all on our Moodle system.
So, we leave the course without users for now and add some users to our LMS before we come back to this screen (select the Home link in the breadcrumb).

Moodle users

Moodle users, or rather their user accounts, are dealt with in Users | Accounts. Before we start, it is important to understand the difference between authentication and enrolment. Moodle users have to be authenticated in order to log in to the system. Authentication grants users access to the system through login, where a username and password have to be given (this also applies to guest accounts, where a username is allotted internally). Moodle supports a significant number of authentication mechanisms, which are discussed later in detail. Enrolment happens at course level. However, a user has to be authenticated to the system before enrolment to a course can take place. So, a typical workflow is as follows (there are exceptions as always, but we will deal with them when we get there):

Create your users
Create your courses (and categories)
Associate users to courses and assign roles

Again, this sequence demonstrates nicely how intertwined courses, users, and roles are in Moodle. Another way of looking at the difference between authentication and enrolment is how a user gets access to a course. Please bear in mind that this is a very simplistic view and it ignores supported features such as external authentication, guest access, and self-enrolment. During the authentication phase, a user enters his credentials (username and password) or they are entered automatically via single sign-on. If the account exists locally, that is, within Moodle, and the password is valid, he/she is granted access. The next phase is enrolment. If the user is enrolled and the enrolment hasn't expired, he/she is granted access to the course. You will come across a more detailed version of these graphics later on, but for now, it hopefully demonstrates the difference between authentication and enrolment.
To add a user account manually, go to Users | Accounts | Add a new user. As with courses, we will only focus on the mandatory fields, which should be self-explanatory:

Username (has to be unique)
New password (if a password policy has been set, certain rules might apply)
First name
Surname
Email address

Make sure you save the account information by selecting Create user at the bottom of the page. If any entered information is invalid, Moodle will display error messages right above the respective field. I have created a few more accounts; to see who has access to your Moodle system, go to Users | Accounts | Browse list of users, where you will see all users. Actually, I did this via batch upload.

Now that we have a few users on our system, let's go back to the course we created a minute ago and manually enroll new participants to it. To achieve this, go back to Courses | Manage courses and categories, select the Miscellaneous category again, and select the created demo course. Underneath the listed demo course, course details will be displayed alongside a number of options (on large screens, details are shown to the right). Here, select Enrolled users. As expected, the list of enrolled users is still empty. Click on the Enrol users button to change this. To grant users access to the course, select the Enrol button beside them and close the window. In the following screenshot, three users, participant01 to participant03, have already been enrolled to the course. Two more users, participant04 and participant05, have been selected for enrolment.

You have probably spotted the Assign roles dropdown at the top of the pop-up window. This is where you select which role the selected user will have once he/she is enrolled in the course. For example, to give Tommy Teacher appropriate access to the course, we have to select the Teacher role first, before enrolling him to the course. This leads nicely to the third part of the pyramid, namely, roles.
Moodle roles

Roles define what users can or cannot see and do in your Moodle system. Moodle comes with a number of predefined roles (we already saw Student and Teacher), but it also allows us to create our own roles, for instance, for parents or external assessors. Each role has a certain scope (called context), which is defined by a set of permissions (expressed as capabilities). For example, a teacher is allowed to grade an assignment, whereas a student isn't. Or, a student is allowed to submit an assignment, whereas a teacher isn't.

A role is assigned to a user in a context. Okay, so what is a context? A context is a ring-fenced area in Moodle where roles can be assigned to users. A user can be assigned different roles in different contexts, where the context can be a course, a category, an activity module, a user, a block, the front page, or Moodle itself. For instance, you are assigned the Administrator role for the entire system, but additionally, you might be assigned the Teacher role in any courses you are responsible for; or, a learner will be given the Student role in a course, but might have been granted the Teacher role in a forum to act as a moderator.

To give you a feel of how a role is defined, let's go to Users | Permissions, where roles are managed, and select Define roles. Click on the Teacher role and, after some general settings, you will see a (very) long list of capabilities.

For now, we only want to stick with the example we used throughout the article. Now that we know what roles are, we can slightly rephrase what we have done. Instead of saying, "We have enrolled the user participant01 in the demo course as a student", we would say, "We have assigned the Student role to the user participant01 in the context of the demo course." In fact, the term enrolment is a little bit of a legacy and goes back to the times when Moodle didn't have the customizable, finely-grained architecture of roles and permissions that it does now.
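To make the idea of roles-in-contexts concrete, here is a small illustrative model. This is not how Moodle implements it (Moodle is written in PHP and its permission engine is far richer); it is only a sketch, with all names invented, showing that a role assignment is always a (user, role, context) triple and that the same user can hold different roles in different contexts:

```javascript
// Illustrative model only: a role assignment is a (user, role, context) triple.
const assignments = [];

function assignRole(user, role, context) {
  assignments.push({ user, role, context });
}

// Return the roles a user holds in one particular context.
function rolesOf(user, context) {
  return assignments
    .filter(a => a.user === user && a.context === context)
    .map(a => a.role);
}

// Admin for the whole site, teacher in one course, student elsewhere.
assignRole('alice', 'administrator', 'system');
assignRole('alice', 'teacher', 'course:history101');
assignRole('bob', 'student', 'course:history101');
assignRole('bob', 'teacher', 'forum:history101-moderated');

console.log(rolesOf('alice', 'course:history101')); // [ 'teacher' ]
console.log(rolesOf('bob', 'forum:history101-moderated')); // [ 'teacher' ]
```

Note how bob is a student in the course but a teacher in one of its forums; that is exactly the moderator scenario described above.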
One can speculate whether there are linguistic connotations between the terms role and enrolment.

Summary

In this article, we very briefly introduced the concepts of Moodle courses, users, and roles. We also saw how central they are to Moodle and how they are linked together. None of these concepts can exist without the other two, and this is something you should bear in mind throughout. Well, theoretically they can, but it would be rather impractical when you try to model your learning environment. If you haven't fully understood any of the three areas, don't worry. The intention was only to provide you with a high-level overview of the three core components and to touch upon the basics.

Resources for Article:

Further resources on this subject:

Moodle for Online Communities [article]
Gamification with Moodle LMS [article]
Moodle Plugins [article]

Max Gfeller
19 Nov 2015
6 min read

Using Node.js dependencies in NW.js

NW.js (formerly known as node-webkit) is a framework that makes it possible to write multi-platform desktop applications using the technologies you already know well: HTML, CSS, and JavaScript. It bundles a Chromium and a Node (or io.js) runtime and provides additional APIs to implement native-like features such as real menu bars or desktop notifications. A big advantage of having a Node/io.js runtime is being able to make use of all the modules that are available to Node developers. We can categorize three different types of modules that we can use.

Internal modules

Node comes with a solid set of internal modules like fs or http. It is built on the UNIX philosophy of doing only one thing and doing it very well, so you won't find too much functionality in Node core. The following modules are shipped with Node:

assert: used for writing unit tests
buffer: raw memory allocation used for dealing with binary data
child_process: spawn and use child processes
cluster: take advantage of multi-core systems
crypto: cryptographic functions
dgram: use datagram sockets
dns: perform DNS lookups
domain: handle multiple different IO operations as a single group
events: provides the EventEmitter
fs: operations on the file system
http: perform http queries and create http servers
https: perform https queries and create https servers
net: asynchronous network wrapper
os: basic operating-system related utility functions
path: handle and transform file paths
punycode: deal with punycode domain names
querystring: deal with query strings
stream: abstract interface implemented by various objects in Node
timers: setTimeout, setInterval etc.
tls: encrypted stream communication
url: URL resolution and parsing
util: various utility functions
vm: sandbox to run Node code in
zlib: bindings to Gzip/Gunzip, Deflate/Inflate, and DeflateRaw/InflateRaw

These are documented in the official Node API documentation and can all be used within NW.js.
Please take care that Chromium already defines a crypto global, so when using the crypto module in the webkit context you should assign it to a variable like crypt rather than crypto:

var crypt = require('crypto');

The following example shows how we would read a file and use its contents using Node's modules:

var fs = require('fs');

fs.readFile(__dirname + '/file.txt', function (error, contents) {
  if (error) return console.error(error);
  console.log(contents);
});

3rd party JavaScript modules

Soon after Node itself was started, Isaac Schlueter, who was a friend of creator Ryan Dahl, started working on a package manager for Node itself. While Node's popularity reached new highs, a lot of packages got added to the npm registry, and it soon became the fastest growing package registry. At the time of this writing, there are over 169,000 packages on the registry and nearly two billion downloads each month. The npm registry is now also slowly evolving from being "only" a package manager for Node into a package manager for all things JavaScript. Most of these packages can also be used inside NW.js applications. Your application's dependencies are defined in your package.json file in the dependencies (or devDependencies) section:

{
  "name": "my-cool-application",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^3.1.2"
  },
  "devDependencies": {
    "uglify-js": "^2.4.3"
  }
}

In the dependencies field you find all the modules that are required to run your application, while in the devDependencies field only the modules required while developing the application are found. Installing a module is fairly easy, and the best way to do this is with the npm install command:

npm install lodash --save

The install command directly downloads the latest version into your node_modules/ folder. The --save flag means that this dependency should also directly be written into your package.json file.
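As a side note, a package.json manifest like the one above is just JSON, so it can also be inspected programmatically. The sketch below (manifest contents taken from the example, the parsing logic is invented for illustration) separates runtime dependencies from development-only ones:

```javascript
// Parse a package.json-style manifest and list its declared dependencies.
const manifest = JSON.parse(`{
  "name": "my-cool-application",
  "version": "1.0.0",
  "dependencies": { "lodash": "^3.1.2" },
  "devDependencies": { "uglify-js": "^2.4.3" }
}`);

const runtimeDeps = Object.keys(manifest.dependencies);
const devDeps = Object.keys(manifest.devDependencies);

console.log('Needed at runtime:', runtimeDeps); // [ 'lodash' ]
console.log('Needed only for development:', devDeps); // [ 'uglify-js' ]
```

In a real project you would read the file with fs.readFile (or simply require('./package.json')) rather than embedding the JSON in a string.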
You can also define a specific version to download by using the following notation:

npm install lodash@1.*

or even:

npm install lodash@1.1

How does Node's require() work?

You need to deal with two different contexts in NW.js, and it is really important to always know which context you are currently in, as it changes the way the require() function works. When you load a module using Node's require() function, this module runs in the Node context. That means you have the same globals as you would have in a pure Node script, but you can't access the globals from the browser, e.g. document or window. If you write JavaScript code inside a <script> tag in your HTML, or when you include a script inside your HTML using <script src="">, then this code runs in the webkit context. There you have access to all browser globals.

In the webkit context

The require() function is a module loading system defined by the CommonJS Modules 1.0 standard and directly implemented in Node core. To offer the same smooth experience, you get a modified require() method that works in webkit, too. Whenever you want to include a certain module from the webkit context, e.g. directly from an inline script in your index.html file, you need to specify the path directly from the root of your project. Let's assume the following folder structure:

- app/
  - app.js
  - foo.js
  - bar.js
- index.html

And if you want to include the app/app.js file directly in your index.html, you need to include it like this:

<script type="text/javascript">
  var app = require('./app/app.js');
</script>

If you need to use a module from npm, then you can simply require() it and NW.js will figure out where the corresponding node_modules/ folder is located.

In the node context

In Node, when you use relative paths, it will always try to locate the module relative to the file you are requiring it from.
If we take the example from above, then we could require the foo.js module from app.js like this:

var foo = require('./foo');

About the Author

Max Gfeller is a passionate web developer and JavaScript enthusiast. He is making awesome things at Cylon and can be found on Twitter @mgefeller.

Packt
06 Nov 2015
11 min read

Overview of TDD

In this article, Ravi Gupta, Harmeet Singh, and Hetal Prajapati, authors of the book Test-Driven JavaScript Development, explain how testing is one of the most important phases in the development of any project. In the traditional software development model, testing is usually executed after the code for a functionality is written. Test-driven development (TDD) makes a big difference by writing tests before the actual code. You are going to learn TDD for JavaScript and see how this approach can be utilized in your projects. In this article, you are going to learn about the following:

Complexity of web pages
Understanding TDD
Benefits of TDD and common myths

(For more resources related to this topic, see here.)

Complexity of web pages

When Tim Berners-Lee wrote the first ever web browser around 1990, it was supposed to run HTML, with neither CSS nor JavaScript. Who knew that the WWW would become the most powerful communication medium? Since then, a number of technologies and tools have emerged to help us write code and run it for our needs. We do a lot these days with the help of the Internet. We shop, read, learn, share, and collaborate... well, a few words are not going to suffice to explain what we do on the Internet, are they? Over time, our needs have grown to a very complex level, and so has the complexity of the code written for websites. It's not plain HTML anymore, not some CSS styles, not some basic JavaScript tweaks. That time has passed. Pick any site you visit daily, view its source by opening the developer tools of the browser, and look at the source code of the site. What do you see? Too much code? Too many styles? Too many scripts? The JavaScript code and CSS code are too large to keep inline, and we need to keep them in different files, sometimes even different folders, to keep them organized. Now, what happens before you publish all the code live? You test it. You test each line and see if it works fine. Well, that's a programmer's job.
Zero defect: that's what every organization tries to achieve. With that in focus, testing comes into the picture; more importantly, a development style that is essentially test driven. As the title of this article says, we're going to keep our focus on test-driven JavaScript development.

Understanding test-driven development

TDD, short for test-driven development, is a process for software development. Kent Beck, who is known for the development of TDD, refers to this as "rediscovery." Kent's answer to a question on Quora can be found at https://www.quora.com/Why-does-Kent-Beck-refer-to-the-rediscovery-of-test-driven-development.

"The original description of TDD was in an ancient book about programming. It said you take the input tape, manually type in the output tape you expect, then program until the actual output tape matches the expected output. After I'd written the first xUnit framework in Smalltalk I remembered reading this and tried it out. That was the origin of TDD for me. When describing TDD to older programmers, I often hear, "Of course. How else could you program?" Therefore I refer to my role as "rediscovering" TDD."

If you go and try to find references to TDD, you will even get a few references from 1968. It's not a new technique, though it did not get much attention for a long time. Recently, interest in TDD has been growing, and as a result, there are a number of tools on the Web. For example, Jasmine, Mocha, DalekJS, JsUnit, QUnit, and Karma are among these popular tools and frameworks. More specifically, test-driven JavaScript development is getting popular these days. Test-driven development is a software development process that requires a developer to write a test before the production code. A developer writes a test, expects a behavior, and writes code to make the test pass. It is needless to mention that the test will always fail at the start.

Need of testing

To err is human.
As developers, it's not easy to find defects in our own code, and often we think that our code is perfect. But there is always a chance that a defect is present in the code. Every organization or individual wants to deliver the best software they can. This is one major reason why every piece of software, every piece of code, is well tested before its release. Testing helps to detect and correct defects. There are a number of reasons why testing is needed. They are as follows:

To check if the software is functioning as per the requirements
There will not be just one device or one platform to run your software
The end user will perform an action that you, as a programmer, never expected

There was a study conducted by the National Institute of Standards and Technology (NIST) in 2002, which reported that software bugs cost the U.S. economy around $60 billion annually. With better testing, more than one-third of the cost could be avoided. The earlier a defect is found, the cheaper it is to fix it. A defect found post release would cost 10-100 times more to fix than if it had already been detected and fixed earlier. The report of the study performed by NIST can be found at http://www.nist.gov/director/planning/upload/report02-3.pdf. If we draw a curve for the cost, it comes out as exponential. The following figure clearly shows that the cost increases as the project matures with time. Sometimes, it's not possible to fix a defect without making changes in the architecture. In those cases, the cost is sometimes so high that developing the software from scratch seems like a better option.

Benefits of TDD and common myths

Every methodology has its own benefits and myths among people. The following sections will analyze the key benefits and the most common myths of TDD.

Benefits

TDD has its own advantages over regular development approaches. There are a number of benefits which help make the decision of using TDD over the traditional approach.
Automated testing: If you have ever looked at a website's code, you know that it's not easy to maintain and test all the scripts manually and keep them working. A tester may miss a few checks, but automated tests won't. Manual testing is error prone and slow.

Lower cost of overall development: With TDD, the amount of debugging is significantly decreased. You develop some code and run the tests; if they fail, redoing the development is significantly faster than debugging and fixing it later. TDD aims at detecting defects and correcting them at an early stage, which costs much less than detecting and correcting them at a later stage or post release. Also, debugging becomes much less frequent, and a significant amount of time is saved. With the help of tools/test runners like Karma, JSTestDriver, and so on, running every JavaScript test in a browser is not needed, which saves significant time in validation and verification while development goes on.

Increased productivity: Apart from time and financial benefits, TDD helps to increase productivity, since the developer becomes more focused and tends to write quality code that passes and fulfills the requirement.

Clean, maintainable, and flexible code: Since tests are written first, production code is often very neat and simple. When a new piece of code is added, all the tests can be run at once to see if anything failed with the change. Since we try to keep our tests atomic, and our methods also address a single goal, the code automatically becomes clean. At the end of the application development, there will be thousands of test cases which guarantee that every piece of logic can be tested. The same test cases also act as documentation for users who are new to the development of the system, since these tests act as examples of how the code works.

Improved quality and reduced bugs: Complex code invites bugs. When developers change anything in neat and simple code, they tend to leave few or no bugs at all.
They tend to focus on purpose and write code to fulfill the requirement.

Keeps technical debt to a minimum: This is one of the major benefits of TDD. Not writing unit tests and documentation is a big factor that increases technical debt for a software project. Since TDD encourages you to write tests first, and since well-written tests act as documentation, you keep the technical debt for these to a minimum. As Wikipedia says, technical debt can be defined as tasks to be performed before a unit can be called complete. If the debt is not repaid, interest adds up and makes it harder to make changes at a later stage. More about technical debt can be found at https://en.wikipedia.org/wiki/Technical_debt.

Myths

Along with the benefits, TDD has some myths as well. Let's check a few of them:

Complete code coverage: TDD enforces writing tests first, and developers write the minimum amount of code to pass the test, so almost 100% code coverage is achieved. But that does not guarantee that nothing is missed and that the code is bug free. Code coverage tools do not cover all the paths. There can be infinite possibilities in loops. Of course, it's not possible or feasible to check all the paths, but a developer is supposed to take care of the major and critical paths. A developer is supposed to take care of the business logic, flow, and process code most of the time. There is no need to test integration parts, setter-getter methods for properties, configuration, UI, and so on. Mocking and stubbing are to be used for integrations.

No need of debugging the code: Though test-first development makes one think that debugging is not needed, that's not always true. You need to know the state of the system when a test fails. That will help you to correct and write the code further.

No need of QA: TDD cannot always cover everything. QA plays a very important role in testing. UI defects and integration defects are more likely to be caught by a QA. Even though developers are excellent, there are chances of errors.
QA engineers will try every kind of input and unexpected behavior that even a programmer did not cover with test cases. They will always try to crash the system with random inputs and discover defects.

I can code faster without tests and can also validate for zero defects: This may hold true for very small software and websites where the code base is small and writing test cases may increase the overall time of development and delivery of the product. But for bigger products, TDD helps a lot to identify defects at a very early stage and gives a chance to correct them at a very low cost. As seen in the previous figures on the cost of fixing defects across phases and testing types, the cost of correcting a defect increases with time. Truly, whether TDD is required for a project or not depends on the context.

TDD ensures a good design and architecture: TDD encourages developers to write quality code, but it is not a replacement for good design practices and quality code. Will a team of developers be enough to ensure a stable and scalable architecture? Design should still be done by following standard practices.

You need to write all tests first: Another myth says that you need to write all tests first and then the actual production code. Actually, an iterative approach is generally used: write some tests first, then some code, run the tests, fix the code, run the tests, write more tests, and so on. With TDD, you always test parts of the software as you keep developing.

There are many myths, and covering all of them is not possible. The point is, TDD offers developers a better opportunity of delivering quality code. TDD helps organizations by delivering close to zero-defect products.

Summary

In this article, you learned what TDD is. You also learned about the benefits and myths of TDD.

Resources for Article:

Further resources on this subject:

Understanding outside-in [article]
Jenkins Continuous Integration [article]
Understanding TDD [article]
Packt
03 Nov 2015
9 min read

REST Services with Finagle and Finch

In this article by Jos Dirksen, the author of RESTful Web Services with Scala, we'll only be talking about Finch. Note, though, that most of the concepts provided by Finch are based on the underlying Finagle ideas. Finch just provides a very nice REST-based set of functions to make working with Finagle easy and intuitive.

(For more resources related to this topic, see here.)

Finagle and Finch are two different frameworks that work closely together. Finagle is an RPC framework, created by Twitter, which you can use to easily create different types of services. On the website (https://github.com/twitter/finagle), the team behind Finagle explains it like this:

Finagle is an extensible RPC system for the JVM, used to construct high-concurrency servers. Finagle implements uniform client and server APIs for several protocols, and is designed for high performance and concurrency. Most of Finagle's code is protocol agnostic, simplifying the implementation of new protocols.

So, while Finagle provides the plumbing required to create highly scalable services, it doesn't provide direct support for specific protocols. This is where Finch comes in. Finch (https://github.com/finagle/finch) provides an HTTP REST layer on top of Finagle. On their website, you can find a nice quote that summarizes what Finch aims to do:

Finch is a thin layer of purely functional basic blocks atop of Finagle for building composable REST APIs. Its mission is to provide the developers simple and robust REST API primitives being as close as possible to the bare metal Finagle API.

Your first Finagle and Finch REST service

Let's start by building a minimal Finch REST service. The first thing we need to do is to make sure we have the correct dependencies. To use Finch, all you have to do is add the following dependency to your SBT file:

"com.github.finagle" %% "finch-core" % "0.7.0"

With this dependency added, we can start coding our very first Finch service.
The next code fragment shows a minimal Finch service, which just responds with a Hello, Finch! message:

package org.restwithscala.chapter2.gettingstarted

import io.finch.route._
import com.twitter.finagle.Httpx

object HelloFinch extends App {
  Httpx.serve(":8080", (Get / "hello" /> "Hello, Finch!").toService)
  println("Press <enter> to exit.")
  Console.in.read.toChar
}

When this service receives a GET request on the URL path hello, it will respond with a Hello, Finch! message. Finch does this by creating a service (using the toService function) from a route (more on what a route is will be explained in the next section) and using the Httpx.serve function to host the created service. When you run this example, you'll see output as follows:

[info] Loading project definition from /Users/jos/dev/git/rest-with-scala/project
[info] Set current project to rest-with-scala (in build file:/Users/jos/dev/git/rest-with-scala/)
[info] Running org.restwithscala.chapter2.gettingstarted.HelloFinch
Jun 26, 2015 9:38:00 AM com.twitter.finagle.Init$$anonfun$1 apply$mcV$sp
INFO: Finagle version 6.25.0 (rev=78909170b7cc97044481274e297805d770465110) built at 20150423-135046
Press <enter> to exit.

At this point, we have an HTTP server running on port 8080. When we make a call to http://localhost:8080/hello, this server will respond with the Hello, Finch! message. To test this service, you can make an HTTP request in Postman like this:

If you don't want to use a GUI to make the requests, you can also use the following curl command:

curl 'http://localhost:8080/hello'

HTTP verb and URL matching

An important part of every REST framework is the ability to easily match HTTP verbs and the various path segments of the URL. In this section, we'll look at the tools Finch provides us with.
Let's look at the code required to do this (the full source code for this example can be found at https://github.com/josdirksen/rest-with-scala/blob/master/chapter-02/src/main/scala/org/restwithscala/chapter2/steps/FinchStep1.scala):

package org.restwithscala.chapter2.steps

import com.twitter.finagle.Httpx
import io.finch.request._
import io.finch.route._
import io.finch.{Endpoint => _}

object FinchStep1 extends App {

  // handle a single post using a RequestReader
  val taskCreateAPI = Post / "tasks" /> (
    for {
      bodyContent <- body
    } yield s"created task with: $bodyContent")

  // Use matchers and extractors to determine which route to call
  // For more examples see the source file.
  val taskAPI = Get / "tasks" /> "Get a list of all the tasks" |
    Get / "tasks" / long /> (id => s"Get a single task with id: $id") |
    Put / "tasks" / long /> (id => s"Update an existing task with id $id to ") |
    Delete / "tasks" / long /> (id => s"Delete an existing task with $id")

  // simple server that combines the two routes and creates a service
  val server = Httpx.serve(":8080", (taskAPI :+: taskCreateAPI).toService)

  println("Press <enter> to exit.")
  Console.in.read.toChar
  server.close()
}

In this code fragment, we created a number of Router instances that process the requests we send from Postman. Let's start by looking at one of the routes of the taskAPI router: Get / "tasks" / long /> (id => s"Get a single task with id: $id"). The following table explains the various parts of the route:

Get: While writing routers, usually the first thing you do is determine which HTTP verb you want to match. In this case, this route will only match the GET verb. Besides the Get matcher, Finch also provides the following matchers: Post, Patch, Delete, Head, Options, Put, Connect, and Trace.

"tasks": The next part of the route is a matcher that matches a URL path segment. In this case, we match the following URL: http://localhost:8080/tasks.
Finch will use an implicit conversion to convert this String object to a Finch Matcher object. Finch also has two wildcard Matchers: * and **. The * matcher allows any value for a single path segment, and the ** matcher allows any value for multiple path segments.

long: The next part of the route is called an Extractor. With an extractor, you turn part of the URL into a value, which you can use to create the response (for example, retrieve an object from the database using the extracted ID). The long extractor, as the name implies, converts the matching path segment to a long value. Finch also provides an int, string, and Boolean extractor.

long => B: The last part of the route is used to create the response message. Finch provides different ways of creating the response, which we'll show in the other parts of this article. In this case, we need to provide Finch with a function that transforms the long value we extracted and returns a value Finch can convert to a response (more on this later). In this example, we just return a String.

If you've looked closely at the source code, you have probably noticed that Finch uses custom operators to combine the various parts of a route. Let's look a bit closer at those. With Finch, we get the following operators (also called combinators in Finch terms):

/ or andThen: With this combinator, you sequentially combine various matchers and extractors together. Whenever the first part matches, the next one is called. For instance: Get / "path" / long.

| or orElse: This combinator allows you to combine two routers (or parts thereof) together as long as they are of the same type. So, we could do (Get | Post) to create a matcher which matches the GET and POST HTTP verbs. In the code sample, we've also used this to combine all the routes that returned a simple String into the taskAPI router.

/> or map: With this combinator, we pass the request and any extracted values from the path to a function for further processing.
The result of the function that is called is returned as the HTTP response. As you'll see in the rest of the article, there are different ways of processing the HTTP request and creating a response.

:+:: The final combinator allows you to combine two routers of different types. In the example, we have two routers: a taskAPI, which returns a simple String, and a taskCreateAPI, which uses a RequestReader (through the body function) to create the response. We can't combine these with | since the result is created using two different approaches, so we use the :+: combinator.

We just return simple Strings whenever we get a request. In the next section, we'll look at how you can use RequestReader to convert the incoming HTTP requests to case classes and use those to create a HTTP response. When you run this service, you'll see an output as follows:

[info] Loading project definition from /Users/jos/dev/git/rest-with-scala/project
[info] Set current project to rest-with-scala (in build file:/Users/jos/dev/git/rest-with-scala/)
[info] Running org.restwithscala.chapter2.steps.FinchStep1
Jun 26, 2015 10:19:11 AM com.twitter.finagle.Init$$anonfun$1 apply$mcV$sp
INFO: Finagle version 6.25.0 (rev=78909170b7cc97044481274e297805d770465110) built at 20150423-135046
Press <enter> to exit.

Once the server is started, you can once again use Postman (or any other REST client) to make requests to this service (example requests can be found at https://github.com/josdirksen/rest-with-scala/tree/master/common). And once again, you don't have to use a GUI to make the requests.
You can test the service with curl as follows:

# Create task
curl 'http://localhost:8080/tasks' -H 'Content-Type: text/plain;charset=UTF-8' --data-binary $'{\ntaskdata\n}'

# Update task
curl 'http://localhost:8080/tasks/1' -X PUT -H 'Content-Type: text/plain;charset=UTF-8' --data-binary $'{\ntaskdata\n}'

# Get all tasks
curl 'http://localhost:8080/tasks'

# Get single task
curl 'http://localhost:8080/tasks/1'

Summary

This article only showed a couple of the features Finch provides, but it should give you a good head start toward working with Finch.

Resources for Article:
Further resources on this subject: RESTful Java Web Services Design [article] Creating a RESTful API [article] Scalability, Limitations, and Effects [article]
Relational Databases with SQLAlchemy

Packt
02 Nov 2015
28 min read
In this article by Matthew Copperwaite, author of the book Learning Flask Framework, we look at how relational databases are the bedrock upon which almost every modern web application is built. Learning to think about your application in terms of tables and relationships is one of the keys to a clean, well-designed project. We will be using SQLAlchemy, a powerful object relational mapper that allows us to abstract away the complexities of multiple database engines, to work with the database directly from within Python. In this article, we shall:

Present a brief overview of the benefits of using a relational database
Introduce SQLAlchemy, the Python SQL Toolkit and Object Relational Mapper
Configure our Flask application to use SQLAlchemy
Write a model class to represent blog entries
Learn how to save and retrieve blog entries from the database
Perform queries: sorting, filtering, and aggregation
Create schema migrations using Alembic

(For more resources related to this topic, see here.)

Why use a relational database?

Our application's database is much more than a simple record of things that we need to save for future retrieval. If all we needed to do was save and retrieve data, we could easily use flat text files. The fact is, though, that we want to be able to perform interesting queries on our data. What's more, we want to do this efficiently and without reinventing the wheel. While non-relational databases (sometimes known as NoSQL databases) are very popular and have their place in the world of the web, relational databases long ago solved the common problems of filtering, sorting, aggregating, and joining tabular data. Relational databases allow us to define sets of data in a structured way that maintains the consistency of our data. Using relational databases also gives us, the developers, the freedom to focus on the parts of our app that matter.
In addition to efficiently performing ad hoc queries, a relational database server will also do the following:

Ensure that our data conforms to the rules set forth in the schema
Allow multiple people to access the database concurrently, while at the same time guaranteeing the consistency of the underlying data
Ensure that data, once saved, is not lost even in the event of an application crash

Relational databases and SQL, the programming language used with relational databases, are topics worthy of an entire book. Because this book is devoted to teaching you how to build apps with Flask, I will show you how to use a tool that has been widely adopted by the Python community for working with databases, namely, SQLAlchemy. SQLAlchemy abstracts away many of the complications of writing SQL queries, but there is no substitute for a deep understanding of SQL and the relational model. For that reason, if you are new to SQL, I would recommend that you check out the colorful book Learn SQL The Hard Way by Zed Shaw, available online for free at http://sql.learncodethehardway.org/.

Introducing SQLAlchemy

SQLAlchemy is an extremely powerful library for working with relational databases in Python. Instead of writing SQL queries by hand, we can use normal Python objects to represent database tables and execute queries. There are a number of benefits to this approach, which are listed as follows:

Your application can be developed entirely in Python.
Subtle differences between database engines are abstracted away. This allows you to use a lightweight database such as SQLite for local development and testing, then switch to a database designed for high loads (such as PostgreSQL) in production.
Database errors are less common because there are now two layers between your application and the database server: the Python interpreter itself (which will catch obvious syntax errors), and SQLAlchemy, which has well-defined APIs and its own layer of error-checking.
Your database code may become more efficient, thanks to SQLAlchemy's unit-of-work model, which helps reduce unnecessary round-trips to the database. SQLAlchemy also has facilities for efficiently pre-fetching related objects, known as eager loading.
Object Relational Mapping (ORM) makes your code more maintainable, an aspiration known as Don't Repeat Yourself (DRY). Suppose you add a column to a model. With SQLAlchemy, it will be available whenever you use that model. If, on the other hand, you had hand-written SQL queries strewn throughout your app, you would need to update each query, one at a time, to ensure that you were including the new column.
SQLAlchemy can help you avoid SQL injection vulnerabilities.
Excellent library support: there are a multitude of useful libraries that can work directly with your SQLAlchemy models to provide things like maintenance interfaces and RESTful APIs.

I hope you're excited after reading this list. If all the items in this list don't make sense to you right now, don't worry. Now that we have discussed some of the benefits of using SQLAlchemy, let's install it and start coding. If you'd like to learn more about SQLAlchemy, there is an article devoted entirely to its design in The Architecture of Open-Source Applications, available online for free at http://aosabook.org/en/sqlalchemy.html.

Installing SQLAlchemy

We will use pip to install SQLAlchemy into the blog app's virtualenv. To activate your virtualenv, change into the project directory and source the activate script as follows:

$ cd ~/projects/blog
$ source bin/activate
(blog) $ pip install sqlalchemy
Downloading/unpacking sqlalchemy
…
Successfully installed sqlalchemy
Cleaning up...
You can check whether your installation succeeded by opening a Python interpreter and checking the SQLAlchemy version; note that your exact version number is likely to differ:

$ python
>>> import sqlalchemy
>>> sqlalchemy.__version__
'0.9.0b2'

Using SQLAlchemy in our Flask app

SQLAlchemy works very well with Flask on its own, but the author of Flask has released a special Flask extension named Flask-SQLAlchemy that provides helpers for many common tasks, and can save us from having to re-invent the wheel later on. Let's use pip to install this extension:

(blog) $ pip install flask-sqlalchemy
…
Successfully installed flask-sqlalchemy

Flask provides a standard interface for developers who are interested in building extensions. As the framework has grown in popularity, the number of high quality extensions has increased. If you'd like to take a look at some of the more popular extensions, there is a curated list available on the Flask project website at http://flask.pocoo.org/extensions/.

Choosing a database engine

SQLAlchemy supports a multitude of popular database dialects, including SQLite, MySQL, and PostgreSQL. Depending on the database you would like to use, you may need to install an additional Python package containing a database driver. Listed next are several popular databases supported by SQLAlchemy and the corresponding pip-installable driver. Some databases have multiple driver options, so I have listed the most popular one first.

SQLite: not needed, part of the Python standard library since version 2.5
MySQL: MySQL-python, PyMySQL (pure Python), OurSQL
PostgreSQL: psycopg2
Firebird: fdb
Microsoft SQL Server: pymssql, PyODBC
Oracle: cx-Oracle

SQLite comes as standard with Python and does not require a separate server process, so it is perfect for getting up and running quickly. For simplicity in the examples that follow, I will demonstrate how to configure the blog app for use with SQLite.
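As a quick illustration of why SQLite needs no extra driver, Python's bundled sqlite3 module can create and query a database entirely on its own. This is a minimal stdlib-only sketch, separate from the blog app:

```python
import sqlite3

# Open an in-memory SQLite database; no server process or driver install needed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO entry (title) VALUES (?)", ("First entry",))

# The primary key is assigned automatically, just as SQLAlchemy will do for us later.
row = conn.execute("SELECT id, title FROM entry").fetchone()
print(row)  # (1, 'First entry')
conn.close()
```

Everything SQLAlchemy does against SQLite ultimately goes through this same built-in module, which is why the driver column above is empty for SQLite.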
If you have a different database in mind that you would like to use for the blog project, feel free to use pip to install the necessary driver package at this time.

Connecting to the database

Using your favorite text editor, open the config.py module for our blog project (~/projects/blog/app/config.py). We are going to add an SQLAlchemy-specific setting to instruct Flask-SQLAlchemy how to connect to our database. The new lines are highlighted in the following:

class Configuration(object):
    APPLICATION_DIR = current_directory
    DEBUG = True
    SQLALCHEMY_DATABASE_URI = 'sqlite:///%s/blog.db' % APPLICATION_DIR

The SQLALCHEMY_DATABASE_URI is composed of the following parts:

dialect+driver://username:password@host:port/database

Because SQLite databases are stored in local files, the only information we need to provide is the path to the database file. On the other hand, if you wanted to connect to PostgreSQL running locally, your URI might look something like this:

postgresql://postgres:secretpassword@localhost:5432/blog_db

If you're having trouble connecting to your database, try consulting the SQLAlchemy documentation on database URIs: http://docs.sqlalchemy.org/en/rel_0_9/core/engines.html

Now that we've specified how to connect to the database, let's create the object responsible for actually managing our database connections. This object is provided by the Flask-SQLAlchemy extension and is conveniently named SQLAlchemy. Open app.py and make the following additions:

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

from config import Configuration

app = Flask(__name__)
app.config.from_object(Configuration)
db = SQLAlchemy(app)

These changes instruct our Flask app, and in turn SQLAlchemy, how to communicate with our application's database. The next step will be to create a table for storing blog entries and, to do so, we will create our first model.
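The dialect+driver://username:password@host:port/database format described above is plain string assembly, so it can be made explicit with a small helper. This is a sketch for illustration only; the build_db_uri function is our own, not part of SQLAlchemy:

```python
def build_db_uri(dialect, username="", password="", host="",
                 port=None, database="", driver=None):
    """Assemble a URI of the form dialect+driver://username:password@host:port/database."""
    scheme = dialect if driver is None else "%s+%s" % (dialect, driver)
    auth = "%s:%s@" % (username, password) if username else ""
    location = host if port is None else "%s:%s" % (host, port)
    return "%s://%s%s/%s" % (scheme, auth, location, database)

# The PostgreSQL example from the text:
print(build_db_uri("postgresql", "postgres", "secretpassword",
                   "localhost", 5432, "blog_db"))
# postgresql://postgres:secretpassword@localhost:5432/blog_db

# With no credentials and no host, the empty sections collapse, which is
# exactly where SQLite's triple-slash form comes from:
print(build_db_uri("sqlite", database="blog.db"))
# sqlite:///blog.db
```

Seeing the empty host section explains the otherwise odd-looking sqlite:/// prefix in our Configuration class.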
Creating the Entry model

A model is the data representation of a table of data that we want to store in the database. These models have attributes called columns that represent the data items in the data. So, if we were creating a Person model, we might have columns for storing the first and last name, date of birth, home address, hair color, and so on. Since we are interested in creating a model to represent blog entries, we will have columns for things like the title and body content. Note that we don't say a People model or Entries model; models are singular even though they commonly represent many different objects. With SQLAlchemy, creating a model is as easy as defining a class and specifying a number of attributes assigned to that class. Let's start with a very basic model for our blog entries. Create a new file named models.py in the blog project's app/ directory and enter the following code:

import datetime, re

from app import db

def slugify(s):
    return re.sub(r'[^\w]+', '-', s).lower()

class Entry(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(100))
    slug = db.Column(db.String(100), unique=True)
    body = db.Column(db.Text)
    created_timestamp = db.Column(db.DateTime, default=datetime.datetime.now)
    modified_timestamp = db.Column(
        db.DateTime,
        default=datetime.datetime.now,
        onupdate=datetime.datetime.now)

    def __init__(self, *args, **kwargs):
        super(Entry, self).__init__(*args, **kwargs)  # Call parent constructor.
        self.generate_slug()

    def generate_slug(self):
        self.slug = ''
        if self.title:
            self.slug = slugify(self.title)

    def __repr__(self):
        return '<Entry: %s>' % self.title

There is a lot going on, so let's start with the imports and work our way down. We begin by importing the standard library datetime and re modules. We will be using datetime to get the current date and time, and re to do some string manipulation. The next import statement brings in the db object that we created in app.py.
As you recall, the db object is an instance of the SQLAlchemy class, which is part of the Flask-SQLAlchemy extension. The db object provides access to the classes that we need to construct our Entry model, which is just a few lines ahead. Before the Entry model, we define a helper function, slugify, which we will use to give our blog entries some nice URLs. The slugify function takes a string such as A post about Flask and uses a regular expression to turn it into a human-readable, URL-friendly form, returning a-post-about-flask. Next is the Entry model. Our Entry model is a normal class that extends db.Model. By extending db.Model, our Entry class will inherit a variety of helpers which we'll use to query the database. The attributes of the Entry model are a simple mapping of the names and data that we wish to store in the database, and are listed as follows:

id: This is the primary key for our database table. This value is set for us automatically by the database when we create a new blog entry, usually an auto-incrementing number for each new entry. While we will not explicitly set this value, a primary key comes in handy when you want to refer one model to another.
title: The title for a blog entry, stored as a String column with a maximum length of 100.
slug: The URL-friendly representation of the title, stored as a String column with a maximum length of 100. This column also specifies unique=True, so that no two entries can share the same slug.
body: The actual content of the post, stored in a Text column. This differs from the String type of the Title and Slug as you can store as much text as you like in this field.
created_timestamp: The time a blog entry was created, stored in a DateTime column. We instruct SQLAlchemy to automatically populate this column with the current time by default when an entry is first saved.
modified_timestamp: The time a blog entry was last updated.
SQLAlchemy will automatically update this column with the current time whenever we save an entry. For short strings such as titles or names of things, the String column is appropriate, but when the text may be especially long it is better to use a Text column, as we did for the entry body. We've overridden the constructor for the class (__init__) so that when a new model is created, it automatically sets the slug for us based on the title. The last piece is the __repr__ method, which is used to generate a helpful representation of instances of our Entry class. The specific meaning of __repr__ is not important, but it allows you to identify the object that the program is working with when debugging.

A final bit of code needs to be added to main.py, the entry-point to our application, to ensure that the models are imported. Add the highlighted changes to main.py as follows:

from app import app, db
import models
import views

if __name__ == '__main__':
    app.run()

Creating the Entry table

In order to start working with the Entry model, we first need to create a table for it in our database. Luckily, Flask-SQLAlchemy comes with a nice helper for doing just this. Create a new sub-folder named scripts in the blog project's app directory. Then create a file named create_db.py:

(blog) $ cd app/
(blog) $ mkdir scripts
(blog) $ touch scripts/create_db.py

Add the following code to the create_db.py module. This script will automatically look at all the models we have written and create a new table in our database for the Entry model:

from main import db

if __name__ == '__main__':
    db.create_all()

Execute the script from inside the app/ directory. Make sure the virtualenv is active. If everything goes successfully, you should see no output.

(blog) $ python create_db.py
(blog) $

If you encounter errors while creating the database tables, make sure you are in the app directory, with the virtualenv activated, when you run the script.
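While you are verifying the setup, the slugify helper from models.py is also easy to sanity-check on its own, since it only depends on the standard library re module. This sketch assumes the intended pattern is r'[^\w]+' (collapse runs of non-word characters into a hyphen):

```python
import re

def slugify(s):
    # Replace every run of non-word characters with a single hyphen, then lowercase.
    return re.sub(r'[^\w]+', '-', s).lower()

print(slugify('A post about Flask'))  # a-post-about-flask
```

Note that punctuation at the end of a title (for example, a trailing exclamation mark) would leave a trailing hyphen in the slug; the simple version shown in the article accepts that trade-off.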
Next, ensure that there are no typos in your SQLALCHEMY_DATABASE_URI setting.

Working with the Entry model

Let's experiment with our new Entry model by saving a few blog entries. We will be doing this from the Python interactive shell. At this stage, let's install IPython, a sophisticated shell with features like tab-completion (which the default Python shell lacks):

(blog) $ pip install ipython

Now, making sure we are in the app directory, let's start the shell and create a couple of entries as follows:

(blog) $ ipython

In []: from models import *  # First things first, import our Entry model and db object.
In []: db  # What is db?
Out[]: <SQLAlchemy engine='sqlite:////home/charles/projects/blog/app/blog.db'>

If you are familiar with the normal Python shell but not IPython, things may look a little different at first. The main thing to be aware of is that In[] refers to the code you type in, and Out[] is the output of the commands you put in to the shell. IPython has a neat feature that allows you to print detailed information about an object. This is done by typing in the object's name followed by a question mark (?). Introspecting the Entry model provides a bit of information, including the constructor's argument signature and its docstring:

In []: Entry?  # What is Entry and how do we create it?
Type: _BoundDeclarativeMeta
String Form: <class 'models.Entry'>
File: /home/charles/projects/blog/app/models.py
Docstring: <no docstring>
Constructor information:
Definition: Entry(self, *args, **kwargs)

We can create Entry objects by passing column values in as keyword arguments. In the preceding example, the constructor uses **kwargs; this is a shortcut for taking a dict object and using it as the values for defining the object, as shown next:

In []: first_entry = Entry(title='First entry', body='This is the body of my first entry.')

In order to save our first entry, we will need to add it to the database session.
The session is simply an object that represents our actions on the database. Even after adding the entry to the session, it will not be saved to the database yet. In order to save the entry to the database, we need to commit our session:

In []: db.session.add(first_entry)
In []: first_entry.id is None  # No primary key, the entry has not been saved.
Out[]: True
In []: db.session.commit()
In []: first_entry.id
Out[]: 1
In []: first_entry.created_timestamp
Out[]: datetime.datetime(2014, 1, 25, 9, 49, 53, 1337)

As you can see from the preceding code examples, once we commit the session, a unique id will be assigned to our first entry and the created_timestamp will be set to the current time. Congratulations, you've created your first blog entry! Try adding a few more on your own. You can add multiple entry objects to the same session before committing, so give that a try as well. At any point while you are experimenting, feel free to delete the blog.db file and re-run the create_db.py script to start over with a fresh database.

Making changes to an existing entry

In order to make changes to an existing Entry, simply make your edits and then commit. Let's retrieve our Entry using the id that was returned to us earlier, make some changes, and commit it. SQLAlchemy will know that it needs to be updated. Here is how you might make edits to the first entry:

In []: first_entry = Entry.query.get(1)
In []: first_entry.body = 'This is the first entry, and I have made some edits.'
In []: db.session.commit()

And just like that your changes are saved.

Deleting an entry

Deleting an entry is just as easy as creating one. Instead of calling db.session.add, we will call db.session.delete and pass in the Entry instance that we wish to remove:

In []: bad_entry = Entry(title='bad entry', body='This is a lousy entry.')
In []: db.session.add(bad_entry)
In []: db.session.commit()  # Save the bad entry to the database.
In []: db.session.delete(bad_entry)
In []: db.session.commit()  # The bad entry is now deleted from the database.

Retrieving blog entries

While creating, updating, and deleting are fairly straightforward operations, the real fun starts when we look at ways to retrieve our entries. We'll start with the basics, and then work our way up to more interesting queries. We will use a special attribute on our model class to make queries: Entry.query. This attribute exposes a variety of APIs for working with the collection of entries in the database. Let's simply retrieve a list of all the entries in the Entry table:

In []: entries = Entry.query.all()
In []: entries  # What are our entries?
Out[]: [<Entry u'First entry'>, <Entry u'Second entry'>, <Entry u'Third entry'>, <Entry u'Fourth entry'>]

As you can see, in this example, the query returns a list of the Entry instances that we created. When no explicit ordering is specified, the entries are returned to us in an arbitrary order chosen by the database. Let's specify that we want the entries returned to us in alphabetical order by title:

In []: Entry.query.order_by(Entry.title.asc()).all()
Out []: [<Entry u'First entry'>, <Entry u'Fourth entry'>, <Entry u'Second entry'>, <Entry u'Third entry'>]

Shown next is how you would list your entries in reverse-chronological order, based on when they were last updated:

In []: newest_to_oldest = Entry.query.order_by(Entry.modified_timestamp.desc()).all()
Out []: [<Entry: Fourth entry>, <Entry: Third entry>, <Entry: Second entry>, <Entry: First entry>]

Filtering the list of entries

It is very useful to be able to retrieve the entire collection of blog entries, but what if we want to filter the list? We could always retrieve the entire collection and then filter it in Python using a loop, but that would be very inefficient. Instead, we will rely on the database to do the filtering for us, and simply specify the conditions for which entries should be returned.
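The point about letting the database do the filtering can be illustrated without SQLAlchemy at all. Using the standard library sqlite3 module (an illustrative sketch, separate from the blog app), the WHERE clause below does the work that a Python loop would otherwise do:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany(
    "INSERT INTO entry (title) VALUES (?)",
    [("First entry",), ("Second entry",), ("Third entry",)])

# Filtering in the database: only matching rows are ever returned to Python.
rows = conn.execute(
    "SELECT title FROM entry WHERE title = ?", ("First entry",)).fetchall()
print(rows)  # [('First entry',)]

# The equivalent (and less efficient) in-Python loop fetches everything first.
all_rows = conn.execute("SELECT title FROM entry").fetchall()
filtered = [r for r in all_rows if r[0] == "First entry"]
assert filtered == rows
conn.close()
```

SQLAlchemy's filter() API, shown next, generates exactly this kind of WHERE clause for us.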
In the following example, we will specify that we want to filter by entries where the title equals 'First entry':

In []: Entry.query.filter(Entry.title == 'First entry').all()
Out[]: [<Entry u'First entry'>]

If this seems somewhat magical to you, it's because it really is! SQLAlchemy uses operator overloading to convert expressions like <Model>.<column> == <some value> into an abstracted object called BinaryExpression. When you are ready to execute your query, these data-structures are then translated into SQL. A BinaryExpression is simply an object that represents the logical comparison and is produced by overriding the standard methods that are typically called on an object when comparing values in Python. In order to retrieve a single entry, you have two options: .first() and .one(). Their differences and similarities are summarized in the following table:

1 matching row: first() returns the object; one() returns the object.
0 matching rows: first() returns None; one() raises sqlalchemy.orm.exc.NoResultFound.
2+ matching rows: first() returns the first object (based on either explicit ordering or the ordering chosen by the database); one() raises sqlalchemy.orm.exc.MultipleResultsFound.

Let's try the same query as before, but instead of calling .all(), we will call .first() to retrieve a single Entry instance:

In []: Entry.query.filter(Entry.title == 'First entry').first()
Out[]: <Entry u'First entry'>

Notice how previously .all() returned a list containing the object, whereas .first() returned just the object itself.

Special lookups

In the previous example we tested for equality, but there are many other types of lookups possible. In the following table, I have listed some that you may find useful. A complete list can be found in the SQLAlchemy documentation.

Entry.title == 'The title': Entries where the title is "The title", case-sensitive.
Entry.title != 'The title': Entries where the title is not "The title".
Entry.created_timestamp < datetime.date(2014, 1, 25): Entries created before January 25, 2014. For less than or equal, use <=.
Entry.created_timestamp > datetime.date(2014, 1, 25): Entries created after January 25, 2014. For greater than or equal, use >=.
Entry.body.contains('Python'): Entries where the body contains the word "Python", case-sensitive.
Entry.title.endswith('Python'): Entries where the title ends with the string "Python", case-sensitive. Note that this will also match titles that end with the word "CPython", for example.
Entry.title.startswith('Python'): Entries where the title starts with the string "Python", case-sensitive. Note that this will also match titles like "Pythonistas".
Entry.body.ilike('%python%'): Entries where the body contains the word "python" anywhere in the text, case-insensitive. The "%" character is a wild-card.
Entry.title.in_(['Title one', 'Title two']): Entries where the title is in the given list, either 'Title one' or 'Title two'.

Combining expressions

The expressions listed in the preceding table can be combined using bitwise operators to produce arbitrarily complex expressions. Let's say we want to retrieve all blog entries that have the word Python or Flask in the title. To accomplish this, we will create two contains expressions, then combine them using Python's bitwise OR operator, which is a single pipe (|) character, unlike a lot of other languages that use a double pipe (||):

Entry.query.filter(Entry.title.contains('Python') | Entry.title.contains('Flask'))

Using bitwise operators, we can come up with some pretty complex expressions. Try to figure out what the following example is asking for:

Entry.query.filter(
    (Entry.title.contains('Python') | Entry.title.contains('Flask')) &
    (Entry.created_timestamp > (datetime.date.today() - datetime.timedelta(days=30)))
)

As you probably guessed, this query returns all entries where the title contains either Python or Flask, and which were created within the last 30 days.
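The 30-day cutoff in that query is ordinary datetime arithmetic, which you can check in isolation before handing it to SQLAlchemy:

```python
import datetime

# The cutoff used in the query: the date 30 days before today.
cutoff = datetime.date.today() - datetime.timedelta(days=30)
print(cutoff)

# With a fixed "today", the arithmetic is easy to verify by hand:
# 30 days before February 24, 2014 is January 25, 2014.
assert datetime.date(2014, 2, 24) - datetime.timedelta(days=30) == datetime.date(2014, 1, 25)
```

Because the subtraction happens in Python before the query runs, the database only ever sees a concrete timestamp bound as a parameter.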
We are using Python's bitwise OR and AND operators to combine the sub-expressions. For any query you produce, you can view the generated SQL by printing the query as follows:

In []: query = Entry.query.filter(
    (Entry.title.contains('Python') | Entry.title.contains('Flask')) &
    (Entry.created_timestamp > (datetime.date.today() - datetime.timedelta(days=30)))
)
In []: print str(query)
SELECT entry.id AS entry_id, ...
FROM entry
WHERE (
    (entry.title LIKE '%%' || :title_1 || '%%')
    OR
    (entry.title LIKE '%%' || :title_2 || '%%')
) AND entry.created_timestamp > :created_timestamp_1

Negation

There is one more piece to discuss, which is negation. If we wanted to get a list of all blog entries which did not contain Python or Flask in the title, how would we do that? SQLAlchemy provides two ways to create these types of expressions, using either Python's unary negation operator (~) or by calling db.not_(). This is how you would construct this query with SQLAlchemy:

Using unary negation:

In []: Entry.query.filter(~(Entry.title.contains('Python') | Entry.title.contains('Flask')))

Using db.not_():

In []: Entry.query.filter(db.not_(Entry.title.contains('Python') | Entry.title.contains('Flask')))

Operator precedence

Not all operations are considered equal by the Python interpreter. This is like in math class, where we learned that expressions like 2 + 3 * 4 are equal to 14 and not 20, because the multiplication operation occurs first. In Python, bitwise operators all have a higher precedence than things like equality tests, so this means that when you are building your query expression, you have to pay attention to the parentheses. Let's look at some example Python expressions and see the corresponding query:

(Entry.title == 'Python' | Entry.title == 'Flask'): Wrong! SQLAlchemy throws an error because the first thing to be evaluated is actually 'Python' | Entry.title!
(Entry.title == 'Python') | (Entry.title == 'Flask'): Right. Returns entries where the title is either "Python" or "Flask".
~Entry.title == 'Python': Wrong! SQLAlchemy will turn this into a valid SQL query, but the results will not be meaningful.
~(Entry.title == 'Python'): Right. Returns entries where the title is not equal to "Python".

If you find yourself struggling with the operator precedence, it's a safe bet to put parentheses around any comparison that uses ==, !=, <, <=, >, and >=.

Making changes to the schema

The final topic we will discuss in this article is how to make modifications to an existing Model definition. From the project specification, we know we would like to be able to save drafts of our blog entries. Right now we don't have any way to tell whether an entry is a draft or not, so we will need to add a column that lets us store the status of our entry. Unfortunately, while db.create_all() works perfectly for creating tables, it will not automatically modify an existing table; to do this we need to use migrations.

Adding Flask-Migrate to our project

We will use Flask-Migrate to help us automatically update our database whenever we change the schema. In the blog virtualenv, install Flask-Migrate using pip:

(blog) $ pip install flask-migrate

The author of SQLAlchemy has a project called alembic; Flask-Migrate makes use of this and integrates it with Flask directly, making things easier. Next, we will add a Migrate helper to our app. We will also create a script manager for our app. The script manager allows us to execute special commands within the context of our app, directly from the command-line. We will be using the script manager to execute the migrate command.
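Before wiring this up, it helps to see what a migration ultimately does at the SQL level: it runs schema-changing DDL statements such as ALTER TABLE against the existing database. Here is a stdlib-only sketch of the idea, adding a status column with a server-side default of 0 (illustrative only; Alembic generates and tracks the real statements for us):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO entry (title) VALUES ('First entry')")

# A hand-written "migration": add a status column with a constant default,
# so rows that already exist get a sensible value instead of NULL.
conn.execute("ALTER TABLE entry ADD COLUMN status SMALLINT DEFAULT 0")

row = conn.execute("SELECT title, status FROM entry").fetchone()
print(row)  # ('First entry', 0)
conn.close()
```

A migration tool adds the bookkeeping around statements like this: it records which of them have already been applied, so each database only ever runs a given change once.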
Open app.py and make the following additions:

from flask import Flask
from flask.ext.migrate import Migrate, MigrateCommand
from flask.ext.script import Manager
from flask.ext.sqlalchemy import SQLAlchemy

from config import Configuration

app = Flask(__name__)
app.config.from_object(Configuration)
db = SQLAlchemy(app)
migrate = Migrate(app, db)

manager = Manager(app)
manager.add_command('db', MigrateCommand)

In order to use the manager, we will add a new file named manage.py alongside app.py. Add the following code to manage.py:

from app import manager
from main import *

if __name__ == '__main__':
    manager.run()

This looks very similar to main.py, the key difference being that instead of calling app.run(), we are calling manager.run(). Django has a similar, although auto-generated, manage.py file that serves a similar function.

Creating the initial migration

Before we can start changing our schema, we need to create a record of its current state. To do this, run the following commands from inside your blog's app directory. The first command will create a migrations directory inside the app folder, which will track the changes we make to our schema. The second command, db migrate, will create a snapshot of our current schema so that future changes can be compared to it.

(blog) $ python manage.py db init
Creating directory /home/charles/projects/blog/app/migrations ... done
...
(blog) $ python manage.py db migrate
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
Generating /home/charles/projects/blog/app/migrations/versions/535133f91f00_.py ... done

Finally, we will run db upgrade to run the migration, which will indicate to the migration system that everything is up-to-date:

(blog) $ python manage.py db upgrade
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade None -> 535133f91f00, empty message

Adding a status column

Now that we have a snapshot of our current schema, we can start making changes. We will be adding a new column named status, which will store an integer value corresponding to a particular status. Although there are only two statuses at the moment (PUBLIC and DRAFT), using an integer instead of a Boolean gives us the option to easily add more statuses in the future.

Open models.py and make the following additions to the Entry model:

class Entry(db.Model):
    STATUS_PUBLIC = 0
    STATUS_DRAFT = 1

    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(100))
    slug = db.Column(db.String(100), unique=True)
    body = db.Column(db.Text)
    status = db.Column(db.SmallInteger, default=STATUS_PUBLIC)
    created_timestamp = db.Column(db.DateTime, default=datetime.datetime.now)
    ...

From the command line, we will once again run db migrate to generate the migration script. You can see from the command's output that it found our new column:

(blog) $ python manage.py db migrate
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.autogenerate.compare] Detected added column 'entry.status'
Generating /home/charles/projects/blog/app/migrations/versions/2c8e81936cad_.py ... done

Because we have blog entries in the database, we need to make a small modification to the auto-generated migration to ensure the statuses of the existing entries are initialized to the proper value.
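To see why this matters, here is a minimal sketch using only the standard library's sqlite3 module (independent of our Flask app and of alembic) showing what happens to rows that already exist when a column is added with and without a server-side default:

```python
import sqlite3

# Simulate a table that already has data, like our entry table.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO entry (title) VALUES ('First post')")

# Adding a column with no default: existing rows get NULL for status.
conn.execute("ALTER TABLE entry ADD COLUMN status SMALLINT")
null_status = conn.execute("SELECT status FROM entry").fetchone()[0]
print(null_status)   # None

# With a server-side default, existing rows are backfilled with 0 --
# this is the behavior server_default='0' asks the migration for.
conn.execute("ALTER TABLE entry ADD COLUMN status2 SMALLINT DEFAULT 0")
backfilled = conn.execute("SELECT status2 FROM entry").fetchone()[0]
print(backfilled)    # 0
```

Without the default, our existing entries would have a status of NULL, which matches neither STATUS_PUBLIC nor STATUS_DRAFT.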
To do this, open up the migration file (mine is migrations/versions/2c8e81936cad_.py) and change the following line:

op.add_column('entry', sa.Column('status', sa.SmallInteger(), nullable=True))

Replacing nullable=True with server_default='0' tells the migration script not to leave the column null by default, but instead to use 0:

op.add_column('entry', sa.Column('status', sa.SmallInteger(), server_default='0'))

Finally, run db upgrade to run the migration and create the status column:

(blog) $ python manage.py db upgrade
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade 535133f91f00 -> 2c8e81936cad, empty message

Congratulations, your Entry model now has a status field!

Summary

By now you should be familiar with using SQLAlchemy to work with a relational database. We covered the benefits of using a relational database and an ORM, configured a Flask application to connect to a relational database, and created SQLAlchemy models. All this allowed us to create relationships between our data and perform queries. To top it off, we also used a migration tool to handle future database schema changes.

Next we will set aside the interactive interpreter and start creating views to display blog entries in the web browser. We will put all our SQLAlchemy knowledge to work by creating interesting lists of blog entries, as well as a simple search feature. We will build a set of templates to make the blogging site visually appealing, and learn how to use the Jinja2 templating language to eliminate repetitive HTML coding.