How-To Tutorials - Server-Side Web Development

406 Articles
How to Setup PostgreSQL with Node.js

Antonio Cucciniello
14 Feb 2017
7 min read
Have you ever wanted to add a PostgreSQL database to the backend of your web application? If so, by the end of this tutorial, you should have a PostgreSQL database up and running with your Node.js web application. PostgreSQL is a popular open source relational database. This tutorial assumes that you have Node and npm installed on your machine; if you need help installing them, check out this link.

First, let's download PostgreSQL.

PostgreSQL

I am writing and testing this tutorial on a Mac, so it will primarily cater to Mac users, but I will include links in the reference section for downloading PostgreSQL on select Linux distributions and Windows as well. If you are on a Mac, you can follow these steps. First, you must have Homebrew. If you do not have it, you may install it by following the directions here. Once Homebrew is installed and working, you can run the following:

```
$ brew update
$ brew install postgres
```

This downloads and installs PostgreSQL for you.

Command-line Setup

Now, open a new terminal tab by pressing Command+T. In the new tab, start a PostgreSQL server with the command:

```
postgres -D /usr/local/var/postgres
```

This allows you to use Postgres locally and gives you a logger for all of the commands you run on your databases. Next, open another terminal tab with Command+T and enter `$ psql`. This is similar to a command center for Postgres: it allows you to create things in your database and plenty more, and you can manually enter commands here to set up your environment. For this example, we will create a database called example. To do that, while in the terminal tab with psql running, enter:

```
CREATE DATABASE example;
```

To confirm that the database was made, you should see CREATE DATABASE as the output. Also, to list all databases, you would use \l. Then, we will want to connect to our new database with the command \connect example. This should give you the following message telling you that you are connected:

```
You are connected to database "example" as user
```

In order to store things in this database, we need to create a table. Enter the following:

```
CREATE TABLE numbers(
  age integer
);
```

This format is probably confusing if you have never seen it before. It tells Postgres to create a table in this database called numbers, with one column called age, and all items in the age column will be of the data type integer. This should give us the output CREATE TABLE; if you want to list all tables in a database, use the \dt command. For the sake of this example, we are going to add a row to the table so that we have some data to play with and prove that this works. When you want to add something to a database in PostgreSQL, you use the INSERT command. Enter this command to set the first row in the table to 732:

```
INSERT INTO numbers VALUES (732);
```

This should give you an output of INSERT 0 1. To check the contents of the table, simply type TABLE numbers;.

Now that we have a database up and running with a table and a value, we can set up our code to access this table and pull the value from it.

Code Setup

In order to follow this example, you will need to make sure that you have the following packages: pg, pg-format, and express. Enter the project directory you plan on working in (and where you have Node and npm installed). For pg, use npm install pg; this is a Postgres client for Node. For pg-format, use npm install pg-format; this allows us to safely build dynamic SQL queries.
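As a quick aside before installing express, here is roughly what pg-format buys you; the table name and value mirror the ones used in this tutorial:

```js
const format = require('pg-format');

// %I escapes an identifier (a table or column name),
// %L escapes a literal value
const sql = format('SELECT * FROM %I WHERE age = %L', 'numbers', 732);
console.log(sql); // a fully escaped SQL string, safe to hand to the pg client
```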
For express, use npm install express --save; this allows us to create a quick and basic server. Now that those packages are installed, we can code!

Actual Code

Let's create a file called app.js as the main entry point of our program. At the top, establish your variables:

```js
const express = require('express')
const app = express()
var pg = require('pg')
var format = require('pg-format')
var PGUSER = 'yourUserName'
var PGDATABASE = 'example'
var age = 732
```

The first two lines allow us to use the express package and help us make our server. The next two lines allow us to use the pg and pg-format packages. PGUSER is a variable that holds the user of your database; enter your username here in place of yourUserName. PGDATABASE is a variable to hold the name of the database that we would like to connect to. The last variable holds the number that we stored in the database. Next, add this:

```js
var config = {
  user: PGUSER, // name of the user account
  database: PGDATABASE, // name of the database
  max: 10, // max number of clients in the pool
  idleTimeoutMillis: 30000 // how long a client is allowed to remain idle before being closed
}
var pool = new pg.Pool(config)
var myClient
```

Here, we establish a config object that lets pg know that we want to connect to the specified database as the specified user, with a maximum of 10 clients in the client pool and a timeout of 30,000 milliseconds before an idle client is disconnected from the database. Then, we create a new pool of clients according to that config object. Afterwards, we create a variable called myClient to store the client we get from the pool in the next step. Now, enter the last bit of code:

```js
pool.connect(function (err, client, done) {
  if (err) console.log(err)
  app.listen(3000, function () {
    console.log('listening on 3000')
  })
  myClient = client
  var ageQuery = format('SELECT * from numbers WHERE age = %L', age)
  myClient.query(ageQuery, function (err, result) {
    if (err) {
      console.log(err)
    }
    console.log(result.rows[0])
  })
})
```

This tries to connect to the database with one of the clients from the pool. If a client successfully connects, we start our server by listening on a port (here, I use 3000). Then, we get access to our client. Next, we create a variable called ageQuery to build a dynamic SQL query. A query is a command to a database. Here, we are making a SELECT query, checking all rows in the table called numbers where the age column is equal to 732. If the query succeeds (meaning it finds a row with 732 as the value), we log the result. It's now time to test all your hard work! Save the file and run this command in a terminal:

```
node app.js
```

Your output should look like this:

```
listening on 3000
{ age: 732 }
```

Conclusion

There you go! You now have a PostgreSQL database connected to your web app. To summarize our work, here is a quick breakdown of what happened:

- We installed PostgreSQL through Homebrew.
- We started our local PostgreSQL server.
- We opened psql in a terminal to use commands manually.
- We created a database called example.
- We created a table in that database called numbers.
- We added a value to that table.
- We installed pg, pg-format, and express.
- We used Express to create a server.
- We created a pool of clients using a config object to access the database.
- We queried the table in the database for 732.
- We logged the value.

Check out the code for this tutorial on GitHub.
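As a possible next step (not part of the original tutorial), you could expose the queried value over HTTP using the Express app that is already listening. The route path here is an illustrative assumption:

```js
// Hypothetical extension: serve the stored value as JSON.
// Assumes the same pool, format, myClient, and age variables as app.js above.
app.get('/age', function (req, res) {
  var ageQuery = format('SELECT * from numbers WHERE age = %L', age)
  myClient.query(ageQuery, function (err, result) {
    if (err) {
      return res.status(500).send(err.message)
    }
    res.json(result.rows[0]) // e.g. { "age": 732 }
  })
})
```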
About the Author

Antonio Cucciniello is a software engineer from New Jersey with a background in C, C++, and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that lets users edit Google Drive files using their voice. He loves building cool things with software and reading books on self-help and improvement, finance, and entrepreneurship. You can find Antonio on Twitter @antocucciniello and on GitHub.


What is functional reactive programming?

Packt
08 Feb 2017
4 min read
Reactive programming is, quite simply, a programming paradigm where you work with an asynchronous data flow. There are a lot of books and blog posts that argue about what exactly reactive programming is, but if you delve too deeply too quickly it's easy to get confused, and then reactive programming isn't useful at all. Functional reactive programming takes the principles of functional programming and uses them to enhance reactive programming: you take functions, like map, filter, and reduce, and use them to better manage streams of data.

Read now: What is the Reactive manifesto?

How does imperative programming compare to reactive programming?

Imperative programming makes you describe the steps a computer must take to execute a task. In comparison, functional reactive programming gives you the constructs to propagate changes. This means you have to think more about what to do than how to do it. This can be illustrated with a simple sum of two numbers. In imperative programming, this could be expressed as a = b + c. A single line of code expresses the sum; that's straightforward, right? However, if we later change the value of b or c, the value of a doesn't change, and you wouldn't want it to change if you were using an imperative approach. In reactive programming, by contrast, the value of a would react to those changes. Imagine the sum in a Microsoft Excel spreadsheet: every time you change the value of column b or c, it recalculates the value of a. This is like a very basic form of software propagation.

You probably already use an asynchronous data flow. Every time you add a listener to a mouse click or a keystroke in a web page, you pass a function to react to that user input. So, a mouse click might be seen as a stream of events which you can observe; you can then execute a function when an event happens. But this is only one way of using event streams. You might want more sophistication and control over your streams. Reactive programming takes this to the next level. When you use it you can react to changes in anything; that could be changes in:

- user inputs
- external sources
- database changes
- changes to variables and properties

This then means you can create a stream of events following on from specific actions. For example, we can see the changing value of a stock as an EventStream. If you can do this, you can then use it to show a user when to buy or sell those stocks in real time. Facebook and Twitter are other good examples of software reacting to changes in external source streams; reactive programming is an important component in developing the really dynamic UIs that are characteristic of social media sites.

Functional reactive programming

Functional reactive programming, then, gives you the ability to do a lot with streams of data or events. You can filter, combine, map, and buffer, for example. Going back to the stock example above, you can 'listen' to different stocks and use a filter function to present the ones worth buying to the user in real time (a sketch of this follows at the end of this section).

Why do I need functional reactive programming?

Functional reactive programming is especially useful for:

- Graphical user interfaces
- Animation
- Robotics
- Simulation
- Computer vision

A few years ago, all a user could do in a web app was fill in a form with bits of data and post it to a server. Today, web and mobile apps are much richer for users. To go into more detail, by using reactive programming you can abstract the source of data away from the business logic of the application.
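Picking up the stock example above, here is a minimal sketch of a filtered stream. It assumes the RxJS library; the tick data and the "worth buying" threshold are made up for illustration:

```js
const { from } = require('rxjs');
const { filter, map } = require('rxjs/operators');

// Hypothetical stream of stock ticks; in a real app this would come
// from a WebSocket or a polled API rather than a fixed array.
const stockPrices$ = from([
  { symbol: 'ABC', price: 42.1 },
  { symbol: 'XYZ', price: 97.3 },
  { symbol: 'ABC', price: 39.8 }
]);

stockPrices$
  .pipe(
    filter(tick => tick.price < 40),                    // made-up buy rule
    map(tick => `BUY ${tick.symbol} @ ${tick.price}`)
  )
  .subscribe(message => console.log(message));          // -> BUY ABC @ 39.8
```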
What this means in practice is that you can write more concise and decoupled code. In turn, this makes code much more reusable and testable, as you can easily mock streams to exercise your business logic when testing the application.

Read more:
- Introduction to JavaScript
- Breaking into Microservices Architecture
- JSON with JSON.Net


Writing a Reddit Reader with RxPHP

Packt
09 Jan 2017
9 min read
In this article by Martin Sikora, author of the book PHP Reactive Programming, we will cover writing a CLI Reddit reader app using RxPHP. We will see how Disposables are used in the default classes that come with RxPHP, and how these are going to be useful for unsubscribing from Observables in our app.

(For more resources related to this topic, see here.)

Examining RxPHP's internals

As we know, Disposables are a means for releasing resources used by Observers, Observables, Subjects, and so on. In practice, a Disposable is returned, for example, when subscribing to an Observable. Consider the following code from the default Rx\Observable::subscribe() method:

```php
function subscribe(ObserverInterface $observer, $scheduler = null) {
    $this->observers[] = $observer;
    $this->started = true;

    return new CallbackDisposable(function () use ($observer) {
        $this->removeObserver($observer);
    });
}
```

This method first adds the Observer to the array of all subscribed Observers. It then marks this Observable as started and, at the end, returns a new instance of the CallbackDisposable class, which takes a Closure as an argument and invokes it when it's disposed. This is probably the most common use case for Disposables: this Disposable just removes the Observer from the array of subscribers, so it receives no more events emitted from this Observable.

A closer look at subscribing to Observables

It should be obvious that Observables need to work in such a way that they iterate all subscribed Observers; unsubscribing via a Disposable then removes one particular Observer from the array of all subscribed Observers. However, if we have a look at how most of the default Observables work, we find out that they always override the Observable::subscribe() method and usually completely omit the part where it should hold an array of subscribers. Instead, they just emit all available values to the subscribed Observer and finish with the onCompleted() signal immediately after that. For example, we can have a look at the actual source code of the subscribe() method of the Rx\ReturnObservable class (variable and interface names normalized here for readability):

```php
function subscribe(ObserverInterface $observer, SchedulerInterface $scheduler = null) {
    $value = $this->value;
    $scheduler = $scheduler ?: new ImmediateScheduler();

    $disposable = new CompositeDisposable();

    $disposable->add($scheduler->schedule(function () use ($observer, $value) {
        $observer->onNext($value);
    }));
    $disposable->add($scheduler->schedule(function () use ($observer) {
        $observer->onCompleted();
    }));

    return $disposable;
}
```

The ReturnObservable class takes a single value in its constructor and emits this value to every Observer as it subscribes. The following is a nice example of how the lifecycle of an Observable might look. When an Observer subscribes, the method checks whether a Scheduler was also passed as an argument. Usually, it's not, so it creates an instance of ImmediateScheduler. Then, an instance of CompositeDisposable is created, which is going to keep an array of all Disposables used by this method. When calling CompositeDisposable::dispose(), it iterates all Disposables it contains and calls their respective dispose() methods. Right after that, we start populating our CompositeDisposable with the following:

```php
$disposable->add($scheduler->schedule(function () { /* ... */ }));
```

This is something we'll see very often. SchedulerInterface::schedule() returns a DisposableInterface, which is responsible for unsubscribing and releasing resources.
In this case, when we're using ImmediateScheduler, which has no other logic, it just evaluates the Closure immediately:

```php
function () use ($observer, $value) {
    $observer->onNext($value);
}
```

Since ImmediateScheduler::schedule() doesn't need to release any resources (it didn't use any), it just returns an instance of Rx\Disposable\EmptyDisposable that does literally nothing. The Disposable is then returned, and could be used to unsubscribe from this Observable. However, as we saw in the preceding source code, this Observable doesn't let you unsubscribe, and if we think about it, that doesn't even make sense, because the ReturnObservable class's value is emitted immediately on subscription. The same applies to other similar Observables, such as IteratorObservable, RangeObservable, or ArrayObservable. These just contain recursive calls with Schedulers, but the principle is the same. A good question is: why on Earth is this so complicated? Everything the preceding code does could be stripped down to the following three lines (assuming we're not interested in using Schedulers):

```php
function subscribe(ObserverInterface $observer) {
    $observer->onNext($this->value);
    $observer->onCompleted();
}
```

Well, for ReturnObservable this might be true, but in real applications we very rarely use any of these primitive Observables. It's true that we usually don't even need to deal with Schedulers. However, the ability to unsubscribe from Observables, or to clean up any resources when unsubscribing, is very important, and we'll use it in a few moments.

A closer look at Operator chains

Before we start writing our Reddit reader, we should talk briefly about an interesting situation that might occur, so it doesn't catch us unprepared later. We're also going to introduce a new type of Observable, called ConnectableObservable. Consider this simple Operator chain with two subscribers:

```php
// rxphp_filters_observables.php
use Rx\Observable\RangeObservable;
use Rx\Observable\ConnectableObservable;

$connObservable = new ConnectableObservable(new RangeObservable(0, 6));

$filteredObservable = $connObservable
    ->map(function ($val) {
        return $val ** 2;
    })
    ->filter(function ($val) {
        return $val % 2;
    });

$disposable1 = $filteredObservable->subscribeCallback(function ($val) {
    echo "S1: {$val}\n";
});
$disposable2 = $filteredObservable->subscribeCallback(function ($val) {
    echo "S2: {$val}\n";
});

$connObservable->connect();
```

The ConnectableObservable class is a special type of Observable that behaves similarly to Subject (in fact, internally, it really uses an instance of the Subject class). Any other Observable emits all available values right after you subscribe to it. However, ConnectableObservable takes another Observable as an argument and lets you subscribe Observers to it without emitting anything. When you call ConnectableObservable::connect(), it connects the Observers with the source Observable, and all values go one by one to all subscribers. Internally, it contains an instance of the Subject class: when we called subscribe(), it just subscribed the Observer to its internal Subject, and when we called the connect() method, it subscribed the internal Subject to the source Observable. In the $filteredObservable variable we keep a reference to the last Observable returned from the filter() call, which is an instance of AnonymousObservable; on the next few lines, we subscribe both Observers to it. Now, let's see what this Operator chain prints:

```
$ php rxphp_filters_observables.php
S1: 1
S2: 1
S1: 9
S2: 9
S1: 25
S2: 25
```

As we can see, each value went through both Observers in the order it was emitted.
Just out of curiosity, we can also have a look at what would happen if we didn't use ConnectableObservable, and used just the RangeObservable instead:

```
$ php rxphp_filters_observables.php
S1: 1
S1: 9
S1: 25
S2: 1
S2: 9
S2: 25
```

This time, RangeObservable emitted all values to the first Observer and then, again, all values to the second Observer. Right now, we can tell that the Observable had to generate all the values twice, which is inefficient; with a large dataset, this might cause a performance bottleneck. Let's go back to the first example with ConnectableObservable, and modify the filter() call so it prints all the values that go through:

```php
$filteredObservable = $connObservable
    ->map(function ($val) {
        return $val ** 2;
    })
    ->filter(function ($val) {
        echo "Filter: $val\n";
        return $val % 2;
    });
```

Now we run the code again and see what happens:

```
$ php rxphp_filters_observables.php
Filter: 0
Filter: 0
Filter: 1
S1: 1
Filter: 1
S2: 1
Filter: 4
Filter: 4
Filter: 9
S1: 9
Filter: 9
S2: 9
Filter: 16
Filter: 16
Filter: 25
S1: 25
Filter: 25
S2: 25
```

Well, this is unexpected! Each value is printed twice. This doesn't mean that the Observable had to generate all the values twice, however. It's not obvious at first sight what happened, but the problem is that we subscribed at the end of the Operator chain. As stated previously, $filteredObservable is an instance of AnonymousObservable that holds many nested Closures. Calling its subscribe() method runs a Closure that's created by its predecessor, and so on. This leads to the fact that every call to subscribe() has to invoke the entire chain. While this might not be an issue in many use cases, there are situations where we might want to do some special operation inside one of the filters. Also, note that calls to the subscribe() method might be out of our control, performed by another developer who wanted to use an Observable we created for them. It's good to know that such a situation might occur and could lead to unwanted behavior.

It's sometimes hard to see what's going on inside Observables, and it's very easy to get lost, especially when we have to deal with multiple Closures. Schedulers are prime examples. Feel free to experiment with the examples shown here and use a debugger to examine, step by step, what code gets executed and in what order.

So, let's figure out how to fix this. We don't want to subscribe at the end of the chain multiple times, so we can create an instance of the Subject class, subscribe both Observers to it, and let the Subject class itself subscribe to the AnonymousObservable, as discussed a moment ago:

```php
// ...
use Rx\Subject\Subject;

$subject = new Subject();
$connObservable = new ConnectableObservable(new RangeObservable(0, 6));

$connObservable
    ->map(function ($val) {
        return $val ** 2;
    })
    ->filter(function ($val) {
        echo "Filter: $val\n";
        return $val % 2;
    })
    ->subscribe($subject);

$disposable1 = $subject->subscribeCallback(function ($val) {
    echo "S1: {$val}\n";
});
$disposable2 = $subject->subscribeCallback(function ($val) {
    echo "S2: {$val}\n";
});

$connObservable->connect();
```

Now we can run the script again and see that it does what we wanted it to do:

```
$ php rxphp_filters_observables.php
Filter: 0
Filter: 1
S1: 1
S2: 1
Filter: 4
Filter: 9
S1: 9
S2: 9
Filter: 16
Filter: 25
S1: 25
S2: 25
```

This might look like an edge case, but soon we'll see that this issue, left unhandled, could lead to some very unpredictable behavior.
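As an aside for JavaScript readers: RxJS solves the same re-execution problem with the same idea, placing a Subject between the chain and the subscribers. This is an assumption-based sketch using RxJS 7's connectable helper, not part of the original article:

```js
const { connectable, range } = require('rxjs');
const { map, filter } = require('rxjs/operators');

// The map/filter chain is subscribed once (through an internal Subject),
// no matter how many Observers attach afterwards.
const conn = connectable(
  range(0, 6).pipe(
    map(v => v ** 2),
    filter(v => v % 2 === 1)
  )
);

conn.subscribe(v => console.log(`S1: ${v}`));
conn.subscribe(v => console.log(`S2: ${v}`));

conn.connect(); // S1: 1, S2: 1, S1: 9, S2: 9, S1: 25, S2: 25
```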
We'll bring out both of these issues (proper usage of Disposables and Operator chains) when we start writing our Reddit reader.

Summary

In this article, we looked in more depth at how to use Disposables and Operators, how these work internally, and what that means for us. We also looked at a couple of new classes from RxPHP, such as ConnectableObservable and CompositeDisposable.

Resources for Article:

Further resources on this subject:
- Working with JSON in PHP jQuery [article]
- Working with Simple Associations using CakePHP [article]
- Searching Data using phpMyAdmin and MySQL [article]


Tools in TypeScript

Packt
02 Jan 2017
14 min read
In this article by Nathan Rozentals, author of the book Mastering TypeScript, Second Edition, you will learn how to build enterprise-ready, industrial-strength web applications using TypeScript and leading JavaScript frameworks. In this article, we will cover the following topics:

- What is TypeScript?
- The benefits of TypeScript

(For more resources related to this topic, see here.)

What is TypeScript?

TypeScript is both a language and a set of tools to generate JavaScript. It was designed by Anders Hejlsberg at Microsoft (the designer of C#) as an open source project to help developers write enterprise-scale JavaScript. TypeScript generates JavaScript; it's as simple as that. Instead of requiring a completely new runtime environment, TypeScript-generated JavaScript can reuse all of the existing JavaScript tools, frameworks, and wealth of libraries that are available for JavaScript. The TypeScript language and compiler, however, bring the development of JavaScript closer to a more traditional object-oriented experience.

ECMAScript

JavaScript as a language has been around for a long time, and is also governed by a language feature standard. The language defined in this standard is called ECMAScript, and each JavaScript interpreter must deliver functions and features that conform to this standard. The definition of this standard helped the growth of JavaScript and the web in general, and allowed websites to render correctly on many different browsers on many different operating systems. The ECMAScript standard was published in 1999 and is known as ECMA-262, third edition. With the popularity of the language, and the explosive growth of internet applications, the ECMAScript standard needed to be revised and updated. This process resulted in a draft specification for ECMAScript, called the fourth edition. Unfortunately, this draft suggested a complete overhaul of the language, and was not well received. Eventually, leaders from Yahoo, Google, and Microsoft tabled an alternate proposal which they called ECMAScript 3.1. This proposal was numbered 3.1 because it was a smaller feature set than the fourth edition, and sat between editions three and four of the standard. This proposal was eventually adopted as the fifth edition of the standard, and was called ECMAScript 5. The fourth edition was never published, but it was decided to merge the best features of both the fourth edition and the 3.1 feature set into a sixth edition named ECMAScript Harmony. The TypeScript compiler has a parameter that can switch between different versions of the ECMAScript standard. TypeScript currently supports ECMAScript 3, ECMAScript 5, and ECMAScript 6. When the compiler runs over your TypeScript, it will generate compile errors if the code you are attempting to compile is not valid for that standard. The team at Microsoft has committed to following the ECMAScript standards in any new versions of the TypeScript compiler, so as new editions are adopted, the TypeScript language and compiler will follow suit.

The benefits of TypeScript

To give you a flavor of the benefits of TypeScript (and this is by no means the full list), let's have a very quick look at some of the things that TypeScript brings to the table:

- A compilation step
- Strong or static typing
- Type definitions for popular JavaScript libraries
- Encapsulation
- Private and public member variable decorators

Compiling

One of the most frustrating things about JavaScript development is the lack of a compilation step.
JavaScript is an interpreted language, and therefore needs to be run in order to test that it is valid. Every JavaScript developer will tell horror stories of hours spent trying to find bugs in their code, only to find that they have missed a stray closing brace }, or a simple comma, or even a double quote " where there should have been a single quote '. Even worse, the real headaches arrive when you misspell a property name, or unwittingly reassign a global variable. TypeScript will compile your code, and generate compilation errors where it finds these sorts of syntax errors. This is obviously very useful, and can help to highlight errors before the JavaScript is run. In large projects, programmers often need to do large code merges, and with today's tools doing automatic merges, it is surprising how often the compiler will pick up these types of errors. While tools for this sort of syntax checking, like JSLint, have been around for years, it is obviously beneficial to have these tools integrated into your IDE. Using TypeScript in a continuous integration environment will also fail a build completely when compilation errors are found, further protecting your programmers against these types of bugs.

Strong typing

JavaScript is not strongly typed. It is a very dynamic language, as it allows objects to change their properties and behavior on the fly. As an example of this, consider the following code:

```js
var test = "this is a string";
test = 1;
test = function (a, b) {
    return a + b;
}
```

On the first line of the preceding code, the variable test is bound to a string. It is then assigned a number, and finally is redefined to be a function that expects two parameters. Traditional object-oriented languages, however, will not allow the type of a variable to change; hence they are called strongly typed languages. While all of the preceding code is valid JavaScript and could be justified, it is quite easy to see how this could cause runtime errors during execution. Imagine that you were responsible for writing a library function to add two numbers, and then another developer inadvertently reassigned your function to subtract these numbers instead. These sorts of errors may be easy to spot in a few lines of code, but they become increasingly difficult to find and fix as your code base and your development team grow. Another feature of strong typing is that the IDE you are working in understands what type of variable you are working with, and can bring better autocomplete or Intellisense options to the fore.

TypeScript's syntactic sugar

TypeScript introduces a very simple syntax to check the type of an object at compile time. This syntax has been referred to as syntactic sugar, or more formally, type annotations. Consider the following TypeScript code:

```ts
var test: string = "this is a string";
test = 1;
test = function (a, b) {
    return a + b;
}
```

Note that on the first line of this code snippet, we have introduced a colon : and a string keyword between our variable and its assignment. This type annotation syntax means that we are setting the type of our variable to be of type string, and that any code that does not adhere to these rules will generate a compile error. Running the preceding code through the TypeScript compiler will generate two errors:

```
hello.ts(3,1): error TS2322: Type 'number' is not assignable to type 'string'.
hello.ts(4,1): error TS2322: Type '(a: any, b: any) => any' is not assignable to type 'string'.
```

The first error is fairly obvious.
We have specified that the variable test is a string, and therefore attempting to assign a number to it will generate a compile error. The second error is similar to the first, and is, in essence, saying that we cannot assign a function to a string. In this way, the TypeScript compiler introduces strong or static typing to your JavaScript code, giving you all of the benefits of a strongly typed language. TypeScript is therefore described as a superset of JavaScript.

Type definitions for popular JavaScript libraries

As we have seen, TypeScript has the ability to annotate JavaScript, and bring strong typing to the JavaScript development experience. But how do we strongly type existing JavaScript libraries? The answer is surprisingly simple: by creating a definition file. TypeScript uses files with a .d.ts extension as a sort of header file, similar to languages such as C++, to superimpose strong typing on existing JavaScript libraries. These definition files hold information that describes each available function and variable, along with their associated type annotations. Let's have a quick look at what a definition would look like. As an example, I have lifted a function from the popular Jasmine unit testing framework, called describe:

```js
var describe = function (description, specDefinitions) {
    return jasmine.getEnv().describe(description, specDefinitions);
};
```

Note that this function has two parameters: description and specDefinitions. But JavaScript does not tell us what sort of variables these are. We would need to have a look at the Jasmine documentation to figure out how to call this function. If we head over to http://jasmine.github.io/2.0/introduction.html, we will see an example of how to use this function:

```js
describe("A suite", function () {
    it("contains spec with an expectation", function () {
        expect(true).toBe(true);
    });
});
```

From the documentation, then, we can easily see that the first parameter is a string, and the second parameter is a function. But there is nothing in JavaScript that forces us to conform to this API. As mentioned before, we could easily call this function with two numbers, or inadvertently switch the parameters around, sending a function first and a string second. We will obviously start getting runtime errors if we do this, but TypeScript, using a definition file, can generate compile-time errors before we even attempt to run this code. Let's have a look at a piece of the jasmine.d.ts definition file:

```ts
declare function describe(
    description: string,
    specDefinitions: () => void
): void;
```

This is the TypeScript definition for the describe function. Firstly, declare function describe tells us that we can use a function called describe, but that the implementation of this function will be provided at runtime. Clearly, the description parameter is strongly typed to a string, and the specDefinitions parameter is strongly typed to be a function that returns void. TypeScript uses the parenthesis () syntax to declare function types, and the arrow syntax to show the return type of the function. So () => void is a function that does not return anything. Finally, the describe function itself will return void.
If our code were to try and pass in a function as the first parameter, and a string as the second parameter (clearly breaking the definition of this function), as shown in the following example:

```ts
describe(() => { /* function body */ }, "description");
```

TypeScript will generate the following error:

```
hello.ts(11,11): error TS2345: Argument of type '() => void' is not assignable to parameter of type 'string'.
```

This error is telling us that we are attempting to call the describe function with invalid parameters, and it clearly shows that TypeScript will generate errors if we attempt to use external JavaScript libraries incorrectly.

DefinitelyTyped

Soon after TypeScript was released, Boris Yankov started a GitHub repository to house definition files, called DefinitelyTyped (http://definitelytyped.org). This repository has now become the first port of call for integrating external libraries into TypeScript, and it currently holds definitions for over 1,600 JavaScript libraries.

Encapsulation

One of the fundamental principles of object-oriented programming is encapsulation: the ability to define data, as well as a set of functions that can operate on that data, as a single component. Most programming languages have the concept of a class for this purpose, providing a way to define a template for data and related functions. Let's first take a look at a simple TypeScript class definition:

```ts
class MyClass {
    add(x, y) {
        return x + y;
    }
}

var classInstance = new MyClass();
var result = classInstance.add(1, 2);
console.log(`add(1,2) returns ${result}`);
```

This code is pretty simple to read and understand. We have created a class, named MyClass, with a simple add function. To use this class we simply create an instance of it, and call the add function with two arguments. JavaScript, unfortunately, does not have a class statement, but instead uses functions to reproduce the functionality of classes. Encapsulation through classes is accomplished by either using the prototype pattern, or by using the closure pattern. Understanding prototypes and the closure pattern, and using them correctly, is considered a fundamental skill when writing enterprise-scale JavaScript. A closure is essentially a function that refers to independent variables. This means that variables defined within a closure function remember the environment in which they were created. This provides JavaScript with a way to define local variables, and provide encapsulation. Writing the MyClass definition in the preceding code, using a closure in JavaScript, would look something like the following:

```js
var MyClass = (function () {
    // the self-invoking function is the
    // environment that will be remembered
    // by the closure
    function MyClass() {
        // MyClass is the inner function,
        // the closure
    }
    MyClass.prototype.add = function (x, y) {
        return x + y;
    };
    return MyClass;
}());

var classInstance = new MyClass();
var result = classInstance.add(1, 2);
console.log("add(1,2) returns " + result);
```

We start with a variable called MyClass, and assign it to a function that is executed immediately (note the }()); syntax near the bottom of the closure definition). This syntax is a common way to write JavaScript in order to avoid leaking variables into the global namespace. We then define a new function named MyClass, and return this new function to the outer calling function. We then use the prototype keyword to inject a new function into the MyClass definition. This function is named add and takes two parameters, returning their sum.
The last few lines of the code show how to use this closure in JavaScript: create an instance of the closure type, and then execute the add function. Running this code will log add(1,2) returns 3 to the console, as expected. Comparing the JavaScript code with the TypeScript code, we can easily see how simple the TypeScript looks next to the equivalent JavaScript. Remember how we mentioned that JavaScript programmers can easily misplace a brace { or a bracket (? Have a look at the last line in the closure definition: }());. Getting one of these brackets or braces wrong can take hours of debugging to find.

Public and private accessors

A further object-oriented principle used in encapsulation is the concept of data hiding, that is, the ability to have public and private variables. Private variables are meant to be hidden from the user of a particular class, as these variables should only be used by the class itself. Inadvertently exposing these variables can easily cause runtime errors. Unfortunately, JavaScript does not have a native way of declaring variables private. While this functionality can be emulated using closures, a lot of JavaScript programmers simply use the underscore character _ to denote a private variable. At runtime, though, if you know the name of a private variable, you can easily assign a value to it. Consider the following JavaScript code:

```js
var MyClass = (function () {
    function MyClass() {
        this._count = 0;
    }
    MyClass.prototype.countUp = function () {
        this._count++;
    };
    MyClass.prototype.getCountUp = function () {
        return this._count;
    };
    return MyClass;
}());

var test = new MyClass();
test._count = 17;
console.log("countUp : " + test.getCountUp());
```

The MyClass variable is actually a closure with a constructor function, a countUp function, and a getCountUp function. The variable _count is supposed to be a private member variable that is used only within the scope of the closure. Using the underscore naming convention gives the user of this class some indication that the variable is private, but JavaScript will still allow you to manipulate the variable _count. Take a look at the second-to-last line of the code snippet: we are explicitly setting the value of _count to 17, which is allowed by JavaScript but not desired by the original creator of the class. The output of this code would be countUp : 17. TypeScript, however, introduces the public and private keywords that can be used on class member variables. Trying to access a class member variable that has been marked as private will generate a compile-time error. As an example of this, the JavaScript code above can be written in TypeScript as follows:

```ts
class CountClass {
    private _count: number;
    constructor() {
        this._count = 0;
    }
    countUp() {
        this._count++;
    }
    getCount() {
        return this._count;
    }
}

var countInstance = new CountClass();
countInstance._count = 17;
```

On the second line of our code snippet, we have declared a private member variable named _count. Again, we have a constructor, a countUp, and a getCount function. If we compile this file, the compiler will generate an error:

```
hello.ts(39,15): error TS2341: Property '_count' is private and only accessible within class 'CountClass'.
```

This error is generated because we are trying to access the private variable _count in the last line of the code. The TypeScript compiler, therefore, helps us adhere to public and private accessors by generating a compile error when we inadvertently break this rule.
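As a side note (not from the book), plain JavaScript can also achieve real privacy, without the underscore convention, by keeping state in a closure-scoped WeakMap; a minimal sketch:

```js
// Privacy via a closure-scoped WeakMap: the count is not a property
// on the instance at all, so it cannot be reassigned from outside.
var Counter = (function () {
    var counts = new WeakMap(); // visible only inside this closure

    function Counter() {
        counts.set(this, 0);
    }
    Counter.prototype.countUp = function () {
        counts.set(this, counts.get(this) + 1);
    };
    Counter.prototype.getCount = function () {
        return counts.get(this);
    };
    return Counter;
}());

var c = new Counter();
c.countUp();
c._count = 17;              // creates an unrelated property the class ignores
console.log(c.getCount());  // 1
```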
Summary

In this article, we took a quick look at what TypeScript is and what benefits it can bring to the JavaScript development experience.

Resources for Article:

Further resources on this subject:
- Introducing Object Oriented Programming with TypeScript [article]
- Understanding Patterns and Architectures in TypeScript [article]
- Writing SOLID JavaScript code with TypeScript [article]


Object-Oriented JavaScript

Packt
21 Dec 2016
9 min read
In this article by Ved Antani, author of the book Object Oriented JavaScript, Third Edition, we will cover what you need to know about object-oriented JavaScript. In this article, we will cover the following topics:

- ECMAScript 5
- ECMAScript 6
- Object-oriented programming

(For more resources related to this topic, see here.)

ECMAScript 5

One of the most important milestones in ECMAScript revisions was ECMAScript 5 (ES5), officially accepted in December 2009. The ECMAScript 5 standard is implemented and supported on all major browsers and server-side technologies. ES5 was a major revision because, apart from several important syntactic changes and additions to the standard libraries, ES5 also introduced several new constructs in the language. For instance, ES5 introduced some new objects and properties, and also the so-called strict mode. Strict mode is a subset of the language that excludes deprecated features. The strict mode is opt-in and not required, meaning that if you want your code to run in the strict mode, you will declare your intention using (once per function, or once for the whole program) the following string:

```js
"use strict";
```

This is just a JavaScript string, and it's OK to have strings floating around unassigned to any variable. As a result, older browsers that don't speak ES5 will simply ignore it, so this strict mode is backwards compatible and won't break older browsers. For backwards compatibility, all the examples in this book work in ES3, but at the same time, all the code in the book is written so that it will run without warnings in ES5's strict mode.

Strict mode in ES6

While strict mode is optional in ES5, all ES6 modules and classes are strict by default. As you will see soon, most of the code we write in ES6 resides in a module; hence, strict mode is enforced by default. However, it is important to understand that all other constructs do not have implicit strict mode enforced. There were efforts to make newer constructs, such as arrow and generator functions, also enforce strict mode, but it was later decided that doing so would result in very fragmented language rules and code.

ECMAScript 6

The ECMAScript 6 revision took a long time to finish and was finally accepted on 17th June, 2015. ES6 features are slowly becoming part of major browsers and server technologies. It is possible to use transpilers to compile ES6 to ES5 and use the code on environments that do not yet support ES6 completely. ES6 substantially upgrades JavaScript as a language and brings in very exciting syntactical changes and language constructs. Broadly, there are two kinds of fundamental changes in this revision of ECMAScript:

- Improved syntax for existing features and additions to the standard library; for example, classes and promises
- New language features; for example, generators

ES6 allows you to think differently about your code. New syntax changes can let you write code that is cleaner, easier to maintain, and does not require special tricks. The language itself now supports several constructs that required third-party modules earlier. Language changes introduced in ES6 need a serious rethink of the way we have been coding in JavaScript. A note on the nomenclature: ECMAScript 6, ES6, and ECMAScript 2015 are the same, and the names are used interchangeably.

Browser support for ES6

The majority of browsers and server frameworks are on their way toward implementing ES6 features.
You can check what is supported and what is not at: http://kangax.github.io/compat-table/es6/ Though ES6 is not fully supported on all the browsers and server frameworks, we can start using almost all features of ES6 with the help of transpilers. Transpilers are source-to-source compilers. ES6 transpilers allow you to write code in ES6 syntax and compile/transform it into equivalent ES5 syntax, which can then be run on browsers that do not support the entire range of ES6 features. The de facto ES6 transpiler at the moment is Babel.

Object-oriented programming

Let's take a moment to review what people mean when they say object-oriented, and what the main features of this programming style are. Here's a list of some concepts that are most often used when talking about object-oriented programming (OOP):

- Object, method, and property
- Class
- Encapsulation
- Inheritance
- Polymorphism

Let's take a closer look into each one of these concepts. If you're new to the object-oriented programming lingo, these concepts might sound too theoretical, and you might have trouble grasping or remembering them from one reading. Don't worry, it does take a few tries, and the subject can be a little dry at a conceptual level.

Objects

As the name object-oriented suggests, objects are important. An object is a representation of a thing (someone or something), and this representation is expressed with the help of a programming language. The thing can be anything: a real-life object, or a more convoluted concept. Taking a common object, a cat, for example, you can see that it has certain characteristics (color, name, weight, and so on) and can perform some actions (meow, sleep, hide, escape, and so on). The characteristics of the object are called properties in OOP-speak, and the actions are called methods.

Classes

In real life, similar objects can be grouped based on some criteria. A hummingbird and an eagle are both birds, so they can be classified as belonging to some made-up Birds class. In OOP, a class is a blueprint, or a recipe, for an object. Another name for object is instance, so we can say that the eagle is one concrete instance of the general Birds class. You can create different objects using the same class, because a class is just a template, while the objects are concrete instances based on the template. There's a difference between JavaScript and the classic OO languages such as C++ and Java. You should be aware right from the start that in JavaScript, there are no classes; everything is based on objects. JavaScript has the notion of prototypes, which are also objects. In a classic OO language, you'd say something like: create a new object for me called Bob, which is of class Person. In a prototypal OO language, you'd say: I'm going to take this object called Bob's dad that I have lying around (on the couch in front of the TV?) and reuse it as a prototype for a new object that I'll call Bob.

Encapsulation

Encapsulation is another OOP-related concept, which illustrates the fact that an object contains (encapsulates) the following:

- Data (stored in properties)
- The means to do something with the data (using methods)

One other term that goes together with encapsulation is information hiding. This is a rather broad term and can mean different things, but let's see what people usually mean when they use it in the context of OOP. Imagine an object, say, an MP3 player. You, as the user of the object, are given some interface to work with, such as buttons, a display, and so on.
You use the interface in order to get the object to do something useful for you, like play a song. How exactly the device works on the inside, you don't know and, most often, don't care. In other words, the implementation of the interface is hidden from you. The same thing happens in OOP when your code uses an object by calling its methods. It doesn't matter whether you coded the object yourself or it came from some third-party library; your code doesn't need to know how the methods work internally. In compiled languages, you can't actually read the code that makes an object work. In JavaScript, because it's an interpreted language, you can see the source code, but the concept is still the same: you work with the object's interface without worrying about its implementation. Another aspect of information hiding is the visibility of methods and properties. In some languages, objects can have public, private, and protected methods and properties. This categorization defines the level of access the users of the object have. For example, only the methods of the same object have access to the private methods, while anyone has access to the public ones. In JavaScript, all methods and properties are public, but we'll see that there are ways to protect the data inside an object and achieve privacy.

Inheritance

Inheritance is an elegant way to reuse existing code. For example, you can have a generic object, Person, which has properties such as name and date_of_birth, and which also implements the walk, talk, sleep, and eat functionality. Then, you figure out that you need another object called Programmer. You could reimplement all the methods and properties that a Person object has, but it is smarter to just say that the Programmer object inherits from a Person object, and save yourself some work. The Programmer object only needs to implement more specific functionality, such as the writeCode method, while reusing all of the Person object's functionality. In classical OOP, classes inherit from other classes, but in JavaScript, as there are no classes, objects inherit from other objects. When an object inherits from another object, it usually adds new methods to the inherited ones, thus extending the old object. Often, the following phrases can be used interchangeably: B inherits from A, and B extends A. Also, the object that inherits can pick one or more methods and redefine them, customizing them for its own needs. This way, the interface stays the same and the method name is the same, but when called on the new object, the method behaves differently. This way of redefining how an inherited method works is known as overriding.

Polymorphism

In the preceding example, a Programmer object inherited all of the methods of the parent Person object. This means that both objects provide a talk method, among others. Now imagine that somewhere in your code there's a variable called Bob, and it just so happens that you don't know whether Bob is a Person object or a Programmer object. You can still call the talk method on the Bob object, and the code will work. This ability to call the same method on different objects, and have each of them respond in its own way, is called polymorphism; a short sketch of this follows the summary below.

Summary

In this article, you learned about how JavaScript came to be and where it is today. You were also introduced to ECMAScript 5 and ECMAScript 6, and we discussed some object-oriented programming concepts.
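To make the inheritance and polymorphism sections concrete, here is a minimal prototypal sketch in plain JavaScript. The Person and Programmer names follow the article's example; the method bodies are illustrative assumptions:

```js
// Prototypal inheritance: Programmer reuses and extends Person.
function Person(name) {
    this.name = name;
}
Person.prototype.talk = function () {
    return this.name + " says hello";
};

function Programmer(name) {
    Person.call(this, name);            // reuse the parent constructor
}
Programmer.prototype = Object.create(Person.prototype);
Programmer.prototype.constructor = Programmer;

// Overriding: same interface, different behavior.
Programmer.prototype.talk = function () {
    return this.name + " says hello, world";
};
Programmer.prototype.writeCode = function () {
    return this.name + " is writing code";
};

// Polymorphism: we don't care which type each object is; talk() just works.
var people = [new Person("Ann"), new Programmer("Bob")];
people.forEach(function (bob) {
    console.log(bob.talk()); // "Ann says hello", then "Bob says hello, world"
});
```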
Resources for Article:

Further resources on this subject:
- Diving into OOP Principles [article]
- Prototyping JavaScript [article]
- Developing Wiki Seek Widget Using Javascript [article]


Structuring Your Projects

Packt
24 Nov 2016
20 min read
In this article written by Serghei Iakovlev and David Schissler, authors of the book Phalcon Cookbook, we will cover:

- Choosing the best place for an implementation
- Automation of routine tasks
- Creating the application structure by using code generation tools

(For more resources related to this topic, see here.)

Introduction

In this article you will learn that, when creating new projects, developers often face questions such as: which components should they create, where should those components live in the application structure, what should each component implement, what naming convention should be followed, and so on. Actually, creating custom components isn't a difficult matter; we will sort it out in this article. We will create our own component, which will display different menus on your site depending on where we are in the application.

From one project to another, the developer's work is usually repeated. This holds true for tasks such as creating the project structure, configuration, creating data models, controllers, views, and so on. For those tasks, we will discover the power of Phalcon Developer Tools and how to use them. You will learn how to create an application skeleton by using one command, and even how to create a fully functional application prototype in less than 10 minutes without writing a single line of code.

Developers often come up against a situation where they need to create a lot of predefined code templates. Until you are really familiar with the framework, it can be useful to do everything manually, but all of us would like to reduce repetitive tasks. Phalcon tries to help you by providing an easy and at the same time flexible code generation tool named Phalcon Developer Tools. These tools help you simplify the creation of CRUD components for a regular application, so you can create working code in a matter of seconds without writing it yourself.

Often, when creating an application using a framework, we need to extend or add functionality to the framework components. We don't have to reinvent the wheel by rewriting those components. We can use class inheritance and extensibility, but often this approach does not work. In such cases, it is better to use additional layers between the main application and the framework by creating a middleware layer. The term middleware has a wide range of meanings, but in the context of PHP web applications it means code which will be called in turn by each request. We will look into the main principles of creating and using middleware in your application. We will not get into each solution in depth; instead, we will work with tasks that are common for most projects, and implementations extending Phalcon.

Choosing the best place for an implementation

Let's pretend you want to add a custom component. As the case may be, this component allows you to change your site's navigation menu. For example, when you have a Sign In link on your navigation menu and you are logged in, that link needs to change to Sign Out. You then ask yourself where the best place in the project is to put the code, where to place the files, how to name the classes, and how to make the autoloader pick them up.

Getting ready…

For successful implementation of this recipe you must have your application deployed.
By this we mean that you need to have a web server installed and configured to handle requests to your application; the application must be able to receive requests, and must have the necessary components implemented, such as Controllers, Views, and a bootstrap file. For this recipe, we assume that our application is located in the apps directory. If this is not the case, you should change this part of the path in the examples shown in this article.

How to do it…

Follow these steps to complete this recipe:

1. Create the app/library directory, if you haven't got one, where user components will be stored. Next, create the Elements component (app/library/Elements.php). This class extends Phalcon\Mvc\User\Component. Generally, this is not necessary, but it helps to get access to application services quickly. The contents of Elements should be:

```php
<?php

namespace Library;

use Phalcon\Mvc\User\Component;
use Phalcon\Mvc\View\Simple as View;

class Elements extends Component
{
    public function __construct()
    {
        // ...
    }

    public function getMenu()
    {
        // ...
    }
}
```

2. Now we register this class in the Dependency Injection Container. We use a shared instance in order to prevent creating new instances on each service resolution:

```php
$di->setShared('elements', function () {
    return new \Library\Elements();
});
```

3. If your Session service is not initialized yet, it's time to do it in your bootstrap file. We use a shared instance for the same reason:

```php
$di->setShared('session', function () {
    $session = new \Phalcon\Session\Adapter\Files();
    $session->start();

    return $session;
});
```

4. Create the templates directory within the directory that holds your views (views/templates).

5. Then you need to tell the class autoloader about the new namespace which we have just introduced. Let's do it in the following way:

```php
$loader->registerNamespaces([
    // The APP_PATH constant should point
    // to the project's root
    'Library' => APP_PATH . '/apps/library/',
    // ...
]);
```

6. Add the following code right after the body tag in the main layout of your application:

```html
<div class="container">
  <div class="navbar navbar-inverse">
    <div class="container-fluid">
      <div class="navbar-header">
        <button type="button" class="navbar-toggle collapsed" data-toggle="collapse"
                data-target="#blog-top-menu" aria-expanded="false">
          <span class="sr-only">Toggle navigation</span>
          <span class="icon-bar"></span>
          <span class="icon-bar"></span>
          <span class="icon-bar"></span>
        </button>
        <a class="navbar-brand" href="#">Blog 24</a>
      </div>
      <?php echo $this->elements->getMenu(); ?>
    </div>
  </div>
</div>
```

7. Next, we need to create a template for displaying your top menu. Let's create it in views/templates/topMenu.phtml:

```html
<div class="collapse navbar-collapse" id="blog-top-menu">
  <ul class="nav navbar-nav">
    <li class="active">
      <a href="#">Home</a>
    </li>
  </ul>
  <ul class="nav navbar-nav navbar-right">
    <li>
      <?php if ($this->session->get('identity')): ?>
        <a href="#">Sign Out</a>
      <?php else: ?>
        <a href="#">Sign In</a>
      <?php endif; ?>
    </li>
  </ul>
</div>
```

8. Now, let's put the component to work. First, create the protected field $simpleView and initialize it in the constructor:

```php
public function __construct()
{
    $this->simpleView = new View();
    $this->simpleView->setDI($this->getDI());
}
```

9. And finally, implement the getMenu method as follows:

```php
public function getMenu()
{
    $this->simpleView->setViewsDir($this->view->getViewsDir());

    return $this->simpleView->render('templates/topMenu');
}
```

10. Open the main page of your site to ensure that your top menu is rendered.
How it works…

The main idea of our component is to generate a top menu and to display the correct menu option depending on the situation, that is, whether the user is authorized or not. We create the user component, Elements, putting it in a place specially designed for the purpose. Of course, when creating the library directory and placing a new class there, we should tell the autoloader about the new namespace. This is exactly what we have done.

However, we should take note of one important peculiarity. If you want quick access to your components, even in HTML templates like $this->elements, then you should put the components in the DI container. Therefore, we put our component, Library\Elements, in the container under the name elements. Since our component inherits Phalcon\Mvc\User\Component, we are able to access all registered application services just by their names. For example, the instruction $this->view can be written in a longer form, $this->getDI()->getShared('view'), but the first one is obviously more concise.

Although not necessary, for application structure purposes it is better to use a separate directory for views that are not connected directly to specific controllers and actions. In our case, the views/templates directory serves this purpose. We create an HTML template for menu rendering and place it in views/templates/topMenu.phtml. When the getMenu method is called, our component will render the view topMenu.phtml and return the HTML. In the getMenu method, we get the current path for all our views and set it for the Phalcon\Mvc\View\Simple component, created earlier in the constructor. In the topMenu view, we access the session component, which we placed in the DI container earlier. When generating the menu, we check whether the user is authorized or not. In the former case, we use the Sign Out menu item; in the latter case, we display the menu item with an invitation to Sign In.

Automation of routine tasks

The Phalcon project provides you with a great tool named Developer Tools. It helps automate repetitive tasks by means of code generation, for components as well as a project skeleton. Most of the components of your application can be created with just one command. In this recipe, we will consider the Developer Tools installation and configuration in depth.

Getting ready…

Before you begin to work on this recipe, you should have a DBMS configured, and a web server installed and configured for handling requests to your application. You may need to configure a virtual host (this is optional) for your application, which will receive and handle requests. You should be able to open your newly created project in a browser at http://{your-host-here}/appname or http://{your-host-here}/, where your-host-here is the name of your project. You should have Git installed, too.

In this recipe, we assume that your operating system is Linux. Developer Tools installation instructions for Mac OS X and Windows will be similar. You can find the link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables, and chose MySQL as our RDBMS. Your setup might vary. The choice of a tool for creating a table in your database, as well as of a particular DBMS, is yours. Note that the syntax for creating a table by using DBMSs other than MySQL may vary.
How to do it…

Follow these steps to complete this recipe:

Clone Developer Tools into your home directory:

git clone git@github.com:phalcon/phalcon-devtools.git devtools

Go to the newly created devtools directory, run the ./phalcon.sh command, and wait for a message about successful installation completion:

$ ./phalcon.sh
Phalcon Developer Tools Installer
Make sure phalcon.sh is in the same dir as phalcon.php and that you are running this with sudo or as root.
Installing Devtools...
Working dir is: /home/user/devtools
Generating symlink...
Done. Devtools installed!

Run the phalcon command without arguments to see the available command list and your current Phalcon version:

$ phalcon
Phalcon DevTools (3.0.0)
Available commands:
  commands (alias of: list, enumerate)
  controller (alias of: create-controller)
  model (alias of: create-model)
  module (alias of: create-module)
  all-models (alias of: create-all-models)
  project (alias of: create-project)
  scaffold (alias of: create-scaffold)
  migration (alias of: create-migration)
  webtools (alias of: create-webtools)

Now, let's create our project. Go to the folder where you plan to create the project and run the following command:

$ phalcon project myapp simple

Open the website which you have just created with the previous command in your browser. You should see a message about the successful installation. Create a database for your project:

mysql -e 'CREATE DATABASE myapp' -u root -p

You will need to configure your application to connect to the database. Open the file app/config/config.php and correct the database connection configuration. Pay attention to the baseUri parameter if you have not configured a virtual host for your project. The value of this parameter must be / or /myapp/. As a result, your configuration file must look like this:

<?php
use Phalcon\Config;

defined('APP_PATH') || define('APP_PATH', realpath('.'));

return new Config([
    'database' => [
        'adapter'  => 'Mysql',
        'host'     => 'localhost',
        'username' => 'root',
        'password' => '',
        'dbname'   => 'myapp',
        'charset'  => 'utf8',
    ],
    'application' => [
        'controllersDir' => APP_PATH . '/app/controllers/',
        'modelsDir'      => APP_PATH . '/app/models/',
        'migrationsDir'  => APP_PATH . '/app/migrations/',
        'viewsDir'       => APP_PATH . '/app/views/',
        'pluginsDir'     => APP_PATH . '/app/plugins/',
        'libraryDir'     => APP_PATH . '/app/library/',
        'cacheDir'       => APP_PATH . '/app/cache/',
        'baseUri'        => '/myapp/',
    ]
]);

Now, after you have configured the database access, let's create a users table in your database and fill it with initial data:

CREATE TABLE `users` (
  `id` INT(11) unsigned NOT NULL AUTO_INCREMENT,
  `email` VARCHAR(128) NOT NULL,
  `first_name` VARCHAR(64) DEFAULT NULL,
  `last_name` VARCHAR(64) DEFAULT NULL,
  `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `users_email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES
  ('john@doe.com', 'John', 'Doe'),
  ('janie@doe.com', 'Janie', 'Doe');

After that, we need to create a new controller, UsersController. This controller must provide us with the main CRUD actions on the Users model and, if necessary, display data with the appropriate views.
Let's do it with just one command:

$ phalcon scaffold users

In your web browser, open the URL associated with your newly created User resource and try to find one of the users of our database table at http://{your-host-here}/appname/users (or http://{your-host-here}/users, depending on how you have configured your server for application request handling). Finally, open your project in your file manager to see the whole project structure created with Developer Tools:

+-- app
|   +-- cache
|   +-- config
|   +-- controllers
|   +-- library
|   +-- migrations
|   +-- models
|   +-- plugins
|   +-- schemas
|   +-- views
|       +-- index
|       +-- layouts
|       +-- users
+-- public
    +-- css
    +-- files
    +-- img
    +-- js
    +-- temp

How it works…

We installed Developer Tools with only two commands, git clone and ./phalcon.sh. This is all we need to start using this powerful code generation tool.

Next, using only one command, we created a fully functional application environment. At this stage, the application doesn't represent anything outstanding in terms of features, but we have saved the time it would have taken to create the application structure manually. Developer Tools did that for you! If, after this command completes, you examine your newly created project, you may notice that the primary application configuration has also been generated, including the bootstrap file. Actually, the phalcon project command has additional options that we have not demonstrated in this recipe; we are focusing on the main commands. Enter the help command to see all available project creation options:

$ phalcon project help

In the modern world, you can hardly find a web application which works without access to a database. Our application isn't an exception. We created a database for our application, and then we created a users table and filled it with initial data. Of course, we need to supply our application with the database access parameters, as well as the database name, in the app/config/config.php file.

After the successful database and table creation, we used the scaffold command for predefined code template generation, particularly the Users controller with all main CRUD actions, all the necessary views, and the Users model. As before, we used only one command to generate all those files.

Phalcon Developer Tools is equipped with a good number of different useful tools. To see all the available options, you can use the help command. We have taken only a few minutes to create the first version of our application. Instead of spending time on repetitive tasks (such as the creation of the application skeleton), we can now use that time for more exciting tasks. Phalcon Developer Tools helps us save time where possible. But wait, there is more! The project is evolving, and it gains more features day by day. If you have any problems, you can always visit the project on GitHub at https://github.com/phalcon/phalcon-devtools and search for a solution.

There's more…

You can find more information on Phalcon Developer Tools installation for Windows and OS X at https://docs.phalconphp.com/en/latest/reference/tools.html. More detailed information on web server configuration can be found at https://docs.phalconphp.com/en/latest/reference/install.html.

Creating the application structure by using code generation tools

In the following recipe, we will discuss the available code generation tools that can be used for creating a multi-module application. We don't need to create the application structure and main components manually.
Getting ready…

Before you begin, you need to have Git installed, as well as any DBMS (for example, MySQL, PostgreSQL, or SQLite), the Phalcon PHP extension (usually named php5-phalcon), and a PHP extension offering database connectivity support using PDO (for example, php5-mysql, php5-pgsql, or php5-sqlite). You also need to be able to create tables in your database.

To accomplish the following recipe, you will require Phalcon Developer Tools. If you already have it installed, you may skip the first three steps related to the installation and go to the fourth step.

In this recipe, we assume that your operating system is Linux. Developer Tools installation instructions for Mac OS X and Windows will be similar. You can find the link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables, and chose MySQL as our RDBMS. Your setup might vary. The choice of a tool for creating a table in your database, as well as of a particular DBMS, is yours. Note that the syntax for creating a table by using DBMSs other than MySQL may vary.

How to do it…

Follow these steps to complete this recipe:

First you need to decide where you will install Developer Tools. Suppose you are going to place Developer Tools in your home directory. Then, go to your home directory and run the following command:

git clone git@github.com:phalcon/phalcon-devtools.git

Now browse to the newly created phalcon-devtools directory and run the following command to ensure that there are no problems:

./phalcon.sh

Now that you have Developer Tools installed, browse to the directory where you intend to create your project, and run the following command:

phalcon project blog modules

If there were no errors during the previous step, create a Help controller by running the following command:

phalcon controller Help --base-class=ControllerBase --namespace=Blog\Frontend\Controllers

Open the newly generated HelpController in the apps/frontend/controllers/HelpController.php file to ensure that you have the needed controller, as well as the initial indexAction.

Open the database configuration of the Frontend module, blog/apps/frontend/config/config.php, and edit the database settings according to your current environment. Enter the name of an existing database user and a password that has access to that database, and the application database name. You can also change the database adapter that your application needs. If you do not have a database ready, you can create one now.

Now, after you have configured the database access, let's create a users table in your database.
Create the users table and fill it with initial data:

CREATE TABLE `users` (
  `id` INT(11) unsigned NOT NULL AUTO_INCREMENT,
  `email` VARCHAR(128) NOT NULL,
  `first_name` VARCHAR(64) DEFAULT NULL,
  `last_name` VARCHAR(64) DEFAULT NULL,
  `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `users_email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES
  ('john@doe.com', 'John', 'Doe'),
  ('janie@doe.com', 'Janie', 'Doe');

Next, let's create the Controller, Views, Layout, and Model by using the scaffold command:

phalcon scaffold users --ns-controllers=Blog\Frontend\Controllers --ns-models=Blog\Frontend\Models

Open the newly generated UsersController located in the apps/frontend/controllers/UsersController.php file to ensure you have generated all the actions needed for user search, editing, creating, displaying, and deleting.

To check that all actions work as designed, if you have a web server installed and configured for this recipe, you can go to http://{your-server}/users/index. In so doing, you can make sure that the required Users model is created in the apps/frontend/models/Users.php file, all the required views are created in the apps/frontend/views/users folder, and the users layout is created in the apps/frontend/views/layouts folder.

If you have a web server installed and configured for displaying the newly created site, go to http://{your-server}/users/search to ensure that the users from our table are shown.

How it works…

In the world of programming, code generation is designed to lessen the burden of manually creating repeated code by using predefined code templates. The Phalcon framework provides excellent code generation tools, which come with Phalcon Developer Tools.

We start with the installation of Phalcon Developer Tools. Note that if you already have Developer Tools installed, you should skip the steps involving their installation. Next, we generate a fully functional MVC application, which implements the multi-module principle. One command is enough to get a working application at once. We save ourselves the trouble of creating the application directory structure, creating the bootstrap file, creating all the required files, and setting up the initial application structure. To that end, we use only one command. It's really great, isn't it?

Our next step is creating a controller. In our example, we use HelpController, which demonstrates this approach to creating controllers. Next, we create the users table in our database and fill it with data. With that done, we use a powerful tool for generating predefined code templates, called Scaffold. Using only one command in the Terminal, we generate the UsersController with all the necessary actions and appropriate views. Besides this, we get the Users model and the required layout.

If you have a web server configured, you can check out the work of Developer Tools at http://{your-server}/users/index. When we use the Scaffold command, the generator determines the presence and names of our table fields. Based on this data, the tool generates a model, as well as views with the required fields. The generator provides you with ready-to-use code in the controller, and you can change this code according to your needs. However, even if you don't change anything, you can use your controller safely. You can search for users, edit and delete them, create new users, and view them.
And all of this was made possible with one command. We have discussed only some of the features of code generation. Actually, Phalcon Developer Tools has many more features. For help on the available commands, you can use the phalcon command (without arguments).

There's more…

For more detailed information on installation and configuration of PDO in PHP, visit http://php.net/manual/en/pdo.installation.php. You can find detailed Phalcon Developer Tools installation instructions at https://docs.phalconphp.com/en/latest/reference/tools.html. For more information on Scaffold, refer to https://en.wikipedia.org/wiki/Scaffold_(programming).

Summary

In this article, you learned about the automation of routine tasks and creating the application structure.

Resources for Article:

Further resources on this subject:
Using Phalcon Models, Views, and Controllers [Article]
Phalcon's ORM [Article]
Planning and Structuring Your Test-Driven iOS App [Article]
article-image-introduction-javascript
Packt
10 Nov 2016
15 min read
Save for later

Introduction to JavaScript

In this article by Simon Timms, author of the book Mastering JavaScript Design Patterns, Second Edition, we will explore the history of JavaScript and how it came to be the important language that it is today. (For more resources related to this topic, see here.)

JavaScript is an evolving language that has come a long way from its inception. Possibly more than any other programming language, it has grown and changed with the growth of the World Wide Web. As JavaScript has evolved and grown in importance, the need to apply rigorous methods to its construction has also grown.

The road to JavaScript

We'll never know how language first came into being. Did it slowly evolve from a series of grunts and guttural sounds made during grooming rituals? Perhaps it developed to allow mothers and their offspring to communicate. Both of these are theories, all but impossible to prove. Nobody was around to observe our ancestors during that important period. In fact, the general lack of empirical evidence led the Linguistic Society of Paris to ban further discussions on the topic, seeing it as unsuitable for serious study.

The early days

Fortunately, programming languages have developed in recent history and we've been able to watch them grow and change. JavaScript has one of the more interesting histories of modern programming languages. During what must have been an absolutely frantic 10 days in May of 1995, a programmer at Netscape wrote the foundation for what would grow up to be modern JavaScript.

At the time, Netscape was involved in the first of the browser wars with Microsoft. The vision for Netscape was far grander than simply developing a browser. They wanted to create an entire distributed operating system making use of Sun Microsystems' recently released Java programming language. Java was a much more modern alternative to the C++ Microsoft was pushing. However, Netscape didn't have an answer to Visual Basic. Visual Basic was an easier-to-use programming language, targeted at developers with less experience. It avoided some of the difficulties around memory management that make C and C++ notoriously difficult to program. Visual Basic also avoided strict typing and overall allowed more leeway.

Brendan Eich was tasked with developing Netscape's answer to VB. The project was initially codenamed Mocha, but was renamed LiveScript before the Netscape 2.0 beta was released. By the time the full release was available, Mocha/LiveScript had been renamed JavaScript to tie it into the Java applet integration. Java applets were small applications that ran in the browser. They had a different security model from the browser itself and so were limited in how they could interact with both the browser and the local system. It is quite rare to see applets these days, as much of their functionality has become part of the browser.

Java was riding a popular wave at the time and any relationship to it was played up. The name has caused much confusion over the years. JavaScript is a very different language from Java. JavaScript is an interpreted language with loose typing, which runs primarily on the browser. Java is a language that is compiled to bytecode, which is then executed on the Java Virtual Machine. It has applicability in numerous scenarios, from the browser (through the use of Java applets), to the server (Tomcat, JBoss, and so on), to full desktop applications (Eclipse, OpenOffice, and so on). In most laypersons' minds, the confusion remains.
JavaScript turned out to be really quite useful for interacting with the web browser. It was not long until Microsoft had also adopted JavaScript into their Internet Explorer to complement VBScript. The Microsoft implementation was known as JScript.

By late 1996, it was clear that JavaScript was going to be the winning web language for the near future. In order to limit the amount of language deviation between implementations, Sun and Netscape began working with the European Computer Manufacturers Association (ECMA) to develop a standard to which future versions of JavaScript would need to comply. The standard was released very quickly (very quickly in terms of how rapidly standards organizations move), in July of 1997. On the off chance that you have not seen enough names yet for JavaScript, the standard version was called ECMAScript, a name which still persists in some circles.

Unfortunately, the standard only specified the very core parts of JavaScript. With the browser wars raging, it was apparent that any vendor that stuck with only the basic implementation of JavaScript would quickly be left behind. At the same time, there was much work going on to establish a standard Document Object Model (DOM) for browsers. The DOM was, in effect, an API for a web page that could be manipulated using JavaScript. For many years, every JavaScript script would start by attempting to determine the browser on which it was running. This would dictate how to address elements in the DOM, as there were dramatic deviations between browsers. The spaghetti of code that was required to perform simple actions was legendary. I remember reading a year-long, 20-part series on developing a Dynamic HTML (DHTML) drop-down menu such that it would work on both Internet Explorer and Netscape Navigator. The same functionality can now be achieved with pure CSS, without even having to resort to JavaScript.

DHTML was a popular term in the late 1990s and early 2000s. It really referred to any web page that had some sort of dynamic content that was executed on the client side. It has fallen out of use, as the popularity of JavaScript has made almost every page a dynamic one.

Fortunately, the efforts to standardize JavaScript continued behind the scenes. Versions 2 and 3 of ECMAScript were released in 1998 and 1999. It looked like there might finally be some agreement between the various parties interested in JavaScript. Work began in early 2000 on ECMAScript 4, which was to be a major new release.

A pause

Then, disaster struck. The various groups involved in the ECMAScript effort had major disagreements about the direction JavaScript was to take. Microsoft seemed to have lost interest in the standardization effort. It was somewhat understandable, as it was around that time that Netscape self-destructed and Internet Explorer became the de facto standard. Microsoft implemented parts of ECMAScript 4 but not all of it. Others implemented more fully featured support, but without the market leader on board, developers didn't bother using them. Years passed without consensus and without a new release of ECMAScript.

However, as frequently happens, the evolution of the Internet could not be stopped by a lack of agreement between major players. Libraries such as jQuery, Prototype, Dojo, and MooTools papered over the major differences between browsers, making cross-browser development far easier. At the same time, the amount of JavaScript used in applications increased dramatically.
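To give a flavor of that spaghetti, here is a minimal sketch of the browser sniffing a script of the era might have performed before touching the DOM; the menu element name is a hypothetical example, not taken from the series mentioned above:

// Old-style browser sniffing: each branch uses a different, proprietary DOM API.
function setMenuColor(color) {
    if (document.all) {
        // Internet Explorer 4 exposed elements through document.all
        document.all['menu'].style.color = color;
    } else if (document.layers) {
        // Netscape Navigator 4 used its proprietary layers collection
        document.layers['menu'].bgColor = color;
    } else if (document.getElementById) {
        // Standards-compliant browsers
        document.getElementById('menu').style.color = color;
    }
}

Today, of course, only the last branch is needed, which is exactly the point the libraries above eventually made moot.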
The way of GMail

The turning point was, perhaps, the release of Google's GMail application in 2004. Although XMLHttpRequest, the technology behind Asynchronous JavaScript and XML (AJAX), had been around for about five years when GMail was released, it had not been well used. When GMail was released, I was totally knocked off my feet by how smooth it was. We've grown used to applications that avoid full reloads, but at the time, it was a revolution. To make applications like that work, a great deal of JavaScript is needed.

AJAX is a method by which small chunks of data are retrieved from the server by a client instead of refreshing the entire page. The technology allows for more interactive pages that avoid the jolt of full page reloads.

The popularity of GMail was the trigger for a change that had been brewing for a while. Increasing JavaScript acceptance and standardization pushed us past the tipping point for the acceptance of JavaScript as a proper language. Up until that point, much of the use of JavaScript was for performing minor changes to the page and for validating form input. I joke with people that in the early days of JavaScript, the only function name which was used was Validate().

Applications such as GMail that have a heavy reliance on AJAX and avoid full page reloads are known as Single Page Applications or SPAs. By minimizing the changes to the page contents, users have a more fluid experience. By transferring only a JavaScript Object Notation (JSON) payload instead of HTML, the amount of bandwidth required is also minimized. This makes applications appear to be snappier. In recent years, there have been great advances in frameworks that ease the creation of SPAs. AngularJS, Backbone.js, and Ember are all Model View Controller style frameworks. They have gained great popularity in the past two to three years and provide some interesting use of patterns. These frameworks are the evolution of years of experimentation with JavaScript best practices by some very smart people.

JSON is a human-readable serialization format for JavaScript. It has become very popular in recent years, as it is easier and less cumbersome than previously popular formats such as XML. It lacks many of the companion technologies and strict grammatical rules of XML, but makes up for it in simplicity.

At the same time as the frameworks using JavaScript are evolving, the language is too. 2015 saw the release of a much-vaunted new version of JavaScript that had been under development for some years. Initially called ECMAScript 6, the final name ended up being ECMAScript-2015. It brought with it some great improvements to the ecosystem. Browser vendors are rushing to adopt the standard. Because of the complexity of adding new language features to the code base, coupled with the fact that not everybody is on the cutting edge of browsers, a number of other languages that transcompile to JavaScript are gaining popularity. CoffeeScript is a Python-like language that strives to improve the readability and brevity of JavaScript. Dart, developed by Google, is being pushed as an eventual replacement for JavaScript. Its construction addresses some of the optimizations that are impossible in traditional JavaScript. Until a Dart runtime is sufficiently popular, Google provides a Dart-to-JavaScript transcompiler. TypeScript is a Microsoft project that adds some ECMAScript-2015 and even some ECMAScript-201X syntax, as well as an interesting typing system, to JavaScript.
It aims to address some of the issues that large JavaScript projects present.

The point of this discussion about the history of JavaScript is twofold: first, it is important to remember that languages do not develop in a vacuum. Both human languages and computer programming languages mutate based on the environments in which they are used. It is a popularly held belief that the Inuit people have a great number of words for "snow", as it was so prevalent in their environment. This may or may not be true, depending on your definition of the word and exactly who makes up the Inuit people. There are, however, a great number of examples of domain-specific lexicons evolving to meet the requirements for exact definitions in narrow fields. One need look no further than a specialty cooking store to see the great number of variants of items which a layperson such as myself would call a pan.

The Sapir-Whorf hypothesis is a hypothesis within the linguistics domain, which suggests that not only is language influenced by the environment in which it is used, but also that language influences its environment. Also known as linguistic relativity, the theory is that one's cognitive processes differ based on how the language is constructed. Cognitive psychologist Keith Chen has proposed a fascinating example of this. In a very highly viewed TED talk, Dr. Chen suggested that there is a strong positive correlation between languages that lack a future tense and those that have high savings rates (https://www.ted.com/talks/keith_chen_could_your_language_affect_your_ability_to_save_money/transcript). The hypothesis at which Dr. Chen arrived is that when your language does not have a strong sense of connection between the present and the future, this leads to more reckless behavior in the present. Thus, understanding the history of JavaScript puts one in a better position to understand how and where to make use of JavaScript.

The second reason I explored the history of JavaScript is because it is absolutely fascinating to see how quickly such a popular tool has evolved. At the time of writing, it has been about 20 years since JavaScript was first built, and its rise to popularity has been explosive. What more exciting thing is there than to work in an ever-evolving language?

JavaScript everywhere

Since the GMail revolution, JavaScript has grown immensely. The renewed browser wars, which pit Internet Explorer and Edge against Chrome and against Firefox, have led to a number of very fast JavaScript interpreters. Brand new optimization techniques have been deployed, and it is not unusual to see JavaScript compiled to machine-native code for the added performance it gains. However, as the speed of JavaScript has increased, so has the complexity of the applications built using it.

JavaScript is no longer simply a language for manipulating the browser, either. The JavaScript engine behind the popular Chrome browser has been extracted and is now at the heart of a number of interesting projects such as Node.js. Node.js started off as a highly asynchronous method of writing server-side applications. It has grown greatly and has a very active community supporting it. A wide variety of applications have been built using the Node.js runtime. Everything from build tools to editors have been built on the base of Node.js. Recently, the JavaScript engine for Microsoft Edge, ChakraCore, was also open sourced and can be embedded in Node.js as an alternative to Google's V8.
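As a small illustration of how approachable server-side JavaScript is, here is a minimal Node.js HTTP server; the port number and the JSON payload are arbitrary choices for the sketch, not details from the text:

var http = require('http');

// Respond to every request with a small JSON payload.
http.createServer(function (request, response) {
    response.writeHead(200, { 'Content-Type': 'application/json' });
    response.end(JSON.stringify({ message: 'Hello from server-side JavaScript' }));
}).listen(3000);

Half a dozen lines give you a working web server, which goes some way toward explaining the runtime's rapid adoption.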
SpiderMonkey, the Firefox equivalent, is also open source and is making its way into more tools. JavaScript can even be used to control microcontrollers. The Johnny-Five framework is a programming framework for the very popular Arduino. It brings a much simpler approach to programming devices than the traditional low-level languages used for programming these devices. Using JavaScript and Arduino opens up a world of possibilities, from building robots to interacting with real-world sensors.

All of the major smartphone platforms (iOS, Android, and Windows Phone) have an option to build applications using JavaScript. The tablet space is much the same, with tablets supporting programming using JavaScript. Even the latest version of Windows provides a mechanism for building applications using JavaScript.

JavaScript is becoming one of the most important languages in the world. Although language usage statistics are notoriously difficult to calculate, every single source which attempts to develop a ranking puts JavaScript in the top 10:

Language index      Rank of JavaScript
Langpop.com         4
Statisticbrain.com  4
Codeval.com         6
TIOBE               8

What is more interesting is that most of these rankings suggest that the usage of JavaScript is on the rise. The long and short of it is that JavaScript is going to be a major language in the next few years. More and more applications are being written in JavaScript, and it is the lingua franca for any sort of web development. Developer of the popular Stack Overflow website Jeff Atwood created Atwood's Law regarding the wide adoption of JavaScript:

"Any application that can be written in JavaScript, will eventually be written in JavaScript" - Atwood's Law, Jeff Atwood

This insight has been proven to be correct time and time again. There are now compilers, spreadsheets, word processors, you name it, all written in JavaScript.

As the applications which make use of JavaScript increase in complexity, the developer may stumble upon many of the same issues as have been encountered in traditional programming languages: how can we write this application to be adaptable to change? This brings us to the need for properly designing applications. No longer can we simply throw a bunch of JavaScript into a file and hope that it works properly. Nor can we rely on libraries such as jQuery to save ourselves. Libraries can only provide additional functionality and contribute nothing to the structure of an application. At least some attention must now be paid to how to construct the application to be extensible and adaptable. The real world is ever-changing, and any application that is unable to change to suit the changing world is likely to be left in the dust. Design patterns provide some guidance in building adaptable applications, which can shift with changing business needs.

Summary

JavaScript has an interesting history and is really coming of age. With server-side JavaScript taking off and large JavaScript applications becoming common, there is a need for more diligence in building JavaScript applications.
For more information on JavaScript, you can check out other books by Packt, mentioned as follows:

Mastering JavaScript Promises: https://www.packtpub.com/application-development/mastering-javascript-promises
Mastering JavaScript High Performance: https://www.packtpub.com/web-development/mastering-javascript-high-performance
JavaScript: Functional Programming for JavaScript Developers: https://www.packtpub.com/web-development/javascript-functional-programming-javascript-developers

Resources for Article:

Further resources on this subject:
API with MongoDB and Node.js [article]
Tips & Tricks for Ext JS 3.x [article]
Saying Hello! [article]

article-image-creating-personal-web-portal-pwp
Packt
10 Nov 2016
8 min read
Save for later

Creating a Personal Web Portal (PWP)

In this article by Sherwin John Calleja Tragura, author of the book Spring MVC Blueprints, we will discuss creating a robust and simple personal web portal that can serve as a personal web page, or a professional reference site, for anyone. Usually, these kinds of websites are used as mashups, or dashboards, of centralized sources of information describing an individual or group. (For more resources related to this topic, see here.)

Technically, a personal web portal is a composition of web components like CSS, HTML, and JavaScript, woven together to create a formal, simple, or exquisite presentation of any content. It can be used, in its simplest form, as a personal portfolio, or in an enterprise form like an e-commerce content management system. Commercially, these portals are drafted and designed using the principles of rich-client platforms or responsive web design. In the industry, most companies suggest that clients try easy-to-use tools like PHP frameworks (for example, CodeIgniter, Laravel, or Drupal) and seldom advise using JEE-based portals.

Overview of the project

The personal web portal (PWP) created here publishes a simple biography, and professional information, that one can share through the web. The prototype is session-driven and can perform dynamic transactions, like updating information on the web pages and posting notes on the page, without using any back-end database. Using wireframes, the following are the initial drafts and design of the web portal:

The Home Page: This is the first page of the site. It shows updatable quotes and inspiring messages coming from the owner of the portal. It contains a sticky-note feature at the side that allows visitors to post their short greetings to the owner in real time.

The Personal Information Page: This page highlights personal information about the owner, including the owner's name, hobbies, birth date, and age. This page contains some of the owner's educational history. The page is dynamic and can be updated at any time by the owner.

The Professional Information Page: This page presents details about the owner's career background. It lists all the previous jobs of the account owner, and enumerates all skills-related information. This page is also updatable.

The Reach Out Page: This serves as the contact information page of the owner. Moreover, it allows visitors to send their contact information, specifically their electronic mail address, to the portal owner.

Update pages: The Home, Personal, and Professional pages have corresponding update pages through which the owner can modify the portal's content at any time.

This simple prototype, called PWP, will give clear steps on how to build personal sites from the ground up, using the Spring MVC 4.x specifications. It will give enthusiasts the opportunity to start creating Spring-based web portals in just a day, without using any database back end. For those who are new to the Spring MVC 4.x concept, this article will be a good start in building full-blown portal sites.

Technical requirements

In order to start the development, the following tools need to be installed onto the platform:

Java Development Kit (JDK) 1.7.x
Spring Tool Suite (Eclipse) 3.6
Maven 3.x
Spring Framework 4.1
Apache Tomcat 7.x
Any operating system

First, JDK 1.7.x must be installed.
Visit the site http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html to download the installer. Next, set up Spring Tool Suite 3.6 (Eclipse-based), which will be the official Integrated Development Environment (IDE) of this article. Download Spring Tool Suite 3.6 at https://spring.io/tools/sts.

Setting up the development environment

This article recommends Spring Tool Suite (Eclipse) 3.6 since it has all the Spring Framework 4.x plugins and other dependencies needed by the projects. To start us off, the following screenshot shows the dashboard of the STS IDE:

Meanwhile, Apache Maven 3.x will be used to build and deploy the project for this article. Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting, and documentation from a central piece of information (https://maven.apache.org/). There is already a Maven plugin installed in the STS IDE that can be used to generate the needed development directory structure.

Among the many ways to create Spring MVC projects, this article focuses on two styles, namely:

Converting a dynamic web project to a Maven project
Creating a Maven project from scratch

Converting a dynamic web project to a Maven project

To start creating the project, press CTRL + N to browse the Menu wizard of the IDE. This menu wizard contains all the types of project modules you'll need to start a project. The Menu wizard should look similar to the following screenshot:

Once on the menu, browse the Web option and choose Dynamic Web Project. Afterwards, just follow the series of instructions to create the chosen project module until you reach the last menu wizard, which looks like the following figure:

This last instruction (the Web Module panel) will auto-generate the deployment descriptor (web.xml) of the project. Always click on the Generate web.xml deployment descriptor checkbox option. The deployment descriptor is an XML file that must reside inside the /WEB-INF/ folder of all JEE projects. This file describes how a component, module, or application can be deployed. A JEE project must always have the web.xml file; otherwise, the project will be defective. Since the Spring 4.x container supports the Servlet Specification 3.0 in Tomcat 7 and above, web.xml is no longer mandatory and can be replaced by the org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer or org.springframework.web.servlet.support.AbstractDispatcherServletInitializer class.

The next major step is to convert the newly created dynamic web project to a Maven one. To complete the conversion, right-click on the project and navigate to the Configure | Convert to Maven Project command set, as shown in the following screenshot:

It is always best for the developer to study the directory structure of the project folder created before the actual implementation starts. Below is the directory structure of the Maven project after the conversion:

The project directories are just like those of the usual Eclipse Dynamic Web project, now with the addition of the pom.xml file.

Creating a Maven project from scratch

Another method of creating a Spring MVC web project is by creating a Maven project from the start. Be sure to install the Maven 3.2 plugin in STS Eclipse. Browse the Menu wizard again, and locate the Maven option. Click on Maven Project to generate a new Maven project.
After clicking this option, a wizard will pop up, asking whether an archetype is needed to create the Maven project. An archetype is a Maven plugin whose main objective is to create a project structure as per its template. To start quickly, choose an archetype plugin to create a simple Java application here. It is recommended to create the project using the archetype maven-archetype-webapp; however, skipping the archetype selection can still be a valid option.

After you've done this, proceed with the Select an Archetype window shown in the following screenshot. Locate maven-archetype-webapp, then proceed to the last step.

The selection of the archetype maven-archetype-webapp will require the input of Maven parameters before ending the whole process with a new Maven project:

The required parameters for the Maven group or project are as follows:

Group Id (groupId): This is the ID of the project's group and must be unique among all the project's groups
Artifact Id (artifactId): This is the ID of the project. This is generally the name of the project
Version (version): This is the version of the project
Package (package): The initial or core package of the sources

For more information on Maven plugins and configuration details, visit the documentation and samples on the site http://maven.apache.org/.

After providing the Maven parameters, the project source folder structure will be similar to the following screenshot:

Summary

Using the basic Spring Framework 4.x APIs, web portal creators can create their own platform to promote their personal philosophy, business, ideology, religion, and other concepts. Although it is an advantage to use existing portal platforms made in other languages like PHP and Python, it is still fulfilling to design and develop our own portal based on an open-source framework. The PWP is just prototype software that needs to be upgraded to have a back-end database, security, and other social media plugins, in order to make the software commercially competitive.

Resources for Article:

Further resources on this subject:
Designing your very own ASP.NET MVC Application [article]
ASP.NET MVC 2: Validating MVC [article]
Using ASP.NET Controls in SharePoint [article]

article-image-breaking-microservices-architecture
Packt
08 Nov 2016
15 min read
Save for later

Breaking into Microservices Architecture

In this article by Narayan Prusty, the author of the book Modern JavaScript Applications, we will see that the architecture of server side application development for complex and large applications (applications with a huge number of users and large volumes of data) shouldn't just involve faster responses and providing web services for a wide variety of platforms. It should be easy to scale, upgrade, update, test, and deploy. It should also be highly available, allowing developers to write components of the server side application in different programming languages and use different databases. Therefore, this leads developers who build large and complex applications to switch from the common monolithic architecture to the microservices architecture that allows us to do all this easily.

As microservices architecture is being widely used in enterprises that build large and complex applications, it's really important to learn how to design and create server side applications using this architecture. In this chapter, we will discuss how to create applications based on microservices architecture with Node.js using the Seneca toolkit. (For more resources related to this topic, see here.)

What is monolithic architecture?

To understand microservices architecture, it's important to first understand monolithic architecture, which is its opposite. In monolithic architecture, different functional components of the server side application, such as payment processing, account management, push notifications, and other components, all blend together in a single unit.

For example, applications are usually divided into three parts: HTML pages or a native UI that runs on the user's machine, a server side application that runs on the server, and a database that also runs on the server. The server side application is responsible for handling HTTP requests, retrieving and storing data in a database, executing algorithms, and so on. If the server side application is a single executable (that is, running as a single process) that does all these tasks, then we say that the server side application is monolithic. This is a common way of building server side applications. Almost every major CMS, web server, and server side framework is built using monolithic architecture. This architecture may seem successful, but problems are likely to arise when your application is large and complex.

Demerits of monolithic architecture

The following are some of the issues caused by server side applications built using monolithic architecture.

Scaling monolithic architecture

As traffic to your server side application increases, you will need to scale your server side application to handle the traffic. In the case of monolithic architecture, you can scale the server side application by running the same executable on multiple servers and placing the servers behind a load balancer, or you can use round robin DNS to distribute the traffic among the servers:

In the preceding diagram, all the servers will be running the same server side application. Although scaling is easy, scaling a monolithic server side application ends up scaling all the components rather than only the components that require greater resources. This sometimes causes unbalanced utilization of resources, depending on the quantity and types of resources the components need.
Let's consider some examples to understand the issues caused while scaling monolithic server side applications:

Suppose there is a component of the server side application that requires more powerful or special hardware. We cannot simply scale this particular component, as all the components are packed together; therefore, everything needs to be scaled together. So, to make sure that the component gets enough resources, you need to run the server side application on some more servers with powerful or special hardware, leading to consumption of more resources than actually required.

Suppose we have a component that needs to be executed on a specific server operating system that is not free of charge. We cannot run just this particular component on a non-free operating system, as all the components are packed together; therefore, just to execute this specific component, we need to install the non-free operating system on all servers, increasing the cost greatly.

These are just some examples. There are many more issues that you are likely to come across while scaling a monolithic server side application.

So, when we scale monolithic server side applications, the components that don't need more powerful or special kinds of resources start receiving them, thereby decreasing the resources available for the components that need them. We can say that scaling a monolithic server side application involves scaling all components, forcing us to duplicate everything on the new servers.

Writing monolithic server side applications

Monolithic server side applications are written in a particular programming language using a particular framework. Enterprises usually have developers who are experts in different programming languages and frameworks for building server side applications; therefore, if they are asked to build a single monolithic server side application, it will be difficult for them to work together.

The components of a monolithic server side application can be reused only within the same framework in which it's built. So, you cannot reuse them for some other kind of project that's built using different technologies.

Other issues of monolithic architecture

Here are some other issues that developers might face, depending on the technology that is used to build the monolithic server side application:

It may need to be completely rebuilt and redeployed for every small change made to it. This is a time-consuming task and makes your application inaccessible for a long time.

It may completely fail if any one of the components fails. It's difficult to build a monolithic application to handle failure of specific components and degrade application features accordingly.

It may be difficult to find out how many resources each component is consuming.

It may be difficult to test and debug individual components separately.

Microservices architecture to the rescue

We saw the problems caused by monolithic architecture. These problems lead developers to switch from monolithic architecture to microservices architecture. In microservices architecture, the server side application is divided into services. A service (or microservice) is a small and independent process that constitutes a particular functionality of the complete server side application. For example, you can have a service for payment processing, another service for account management, and so on; the services need to communicate with each other via the network.

What do you mean by "small" service?
You must be wondering how small a service needs to be, and how to tell whether a service is small or not. Well, it actually depends on many factors such as the type of application, team management, availability of resources, the size of the application, and how small you think is small. However, a small service doesn't have to be one that is written in fewer lines of code or provides a very basic functionality. A small service can be one on which a team of developers can work independently, which can be scaled independently of other services, whose scaling doesn't cause unbalanced utilization of resources, and which, overall, is highly decoupled from (independent and unaware of) other services.

You don't have to run each service on a different server; that is, you can run multiple services on a single computer. The ratio of servers to services depends on different factors. A common factor is the amount and type of resources and technologies required. For example, if a service needs a lot of RAM and CPU time, then it would be better to run it individually on a server. If there are some services that don't need many resources, then you can run them all on a single server together.

The following diagram shows an example of the microservices architecture:

Here, you can think of Service 1 as the web server with which a browser communicates, and the other services as providing APIs for various functionalities. The web services communicate with other services to get data.

Merits of microservices architecture

Because the services are small and independent and communicate via the network, microservices architecture solves many of the problems that monolithic architecture has. Here are some of the benefits of microservices architecture:

As the services communicate via the network, they can be written in different programming languages using different frameworks

Making a change to a service only requires that particular service to be redeployed instead of all the services, which is a faster procedure

It becomes easier to measure how many resources are consumed by each service, as each service runs in a different process

It becomes easier to test and debug, as you can analyze each service separately

Services can be reused by other applications, as they interact via network calls

Scaling services

Apart from the preceding benefits, one of the major benefits of microservices architecture is that you can scale the individual services that require scaling instead of all the services, thereby preventing duplication and unbalanced utilization of resources.

Suppose we want to scale Service 1 in the preceding diagram. Here is a diagram that shows how it can be scaled:

Here, we are running two instances of Service 1 on two different servers kept behind a load balancer, which distributes the traffic between them. All the other services run the same way, as scaling them wasn't required. If you wanted to scale Service 3, then you could run multiple instances of Service 3 on multiple servers and place them behind a load balancer.

Demerits of microservices architecture

Although there are a lot of merits of using microservices architecture compared to monolithic architecture, there are some demerits of microservices architecture as well:

As the server side application is divided into services, deploying and, optionally, configuring each service separately is a cumbersome and time-consuming task.
Note that developers often use some sort of automation technology (such as AWS, Docker, and so on) to make deployment somewhat easier; however, to use it, you still need a good level of experience and expertise with that technology.

Communication between services is likely to lag, as it's done via the network.

This sort of server side application is more prone to network security vulnerabilities, as services communicate via the network.

Writing code for communicating with other services can be harder; that is, you need to make network calls and then parse the data to read it. This also requires more processing. Note that although there are frameworks for building server side applications using microservices that make fetching and parsing of data easier, this still doesn't remove the extra processing and network wait time.

You will surely need some sort of monitoring tool to monitor services, as they may go down due to network, hardware, or software failure. Although you may use the monitoring tool only when your application suddenly stops, building the monitoring software, or using some sort of monitoring service, requires some extra experience and expertise.

Microservices-based server side applications are slower than monolithic-based server side applications, as communication via networks is slower compared to memory.

When to use microservices architecture?

It may seem like it's difficult to choose between monolithic and microservices architecture, but it's actually not so hard to decide between them. If you are building a server side application using monolithic architecture and you feel that you are unlikely to face any of the monolithic issues that we discussed earlier, then you can stick to monolithic architecture. In the future, if you face issues that can be solved using microservices architecture, then you should switch to microservices architecture.

If you are switching from a monolithic architecture to microservices architecture, you don't have to rewrite the complete application; instead, you can convert only the components that are causing issues to services by doing some code refactoring. This sort of server side application, where the main application logic is monolithic but some specific functionality is exposed via services, is called microservices architecture with monolithic core. As issues increase further, you can start converting more components of the monolithic core to services.

If you are building a server side application using monolithic architecture and you feel that you are likely to face any of the monolithic issues that we discussed earlier, then you should immediately switch to microservices architecture, or microservices architecture with monolithic core, depending on what suits you best.

Data management

In microservices architecture, each service can have its own database to store data and can also use a centralized database. Some developers don't use a centralized database at all; instead, all services have their own databases to store the data. To synchronize the data between the services, the services emit events when their data is changed, and other services subscribe to those events and update the data. The problem with this mechanism is that if a service is down, it may miss some events. There is also going to be a lot of duplicate data, and finally, it is difficult to code this kind of system.
Therefore, it's a good idea to have a centralized database, and to also let each service maintain its own database for anything it doesn't want to share with others. The services should not connect to the centralized database directly; instead, there should be another service, called the database service, that provides APIs for working with the centralized database. This extra layer has many advantages: the underlying schema can be changed without updating and redeploying all the services that depend on it, we can add a caching layer without making changes to the services, we can change the type of database without making any changes to the services, and so on. We can also have multiple database services if there are multiple schemas, or different types of databases, or for some other reason that benefits the overall architecture and decouples the services.

Implementing microservices using Seneca

Seneca is a Node.js framework for creating server-side applications using microservices architecture with a monolithic core. Earlier, we discussed that in microservices architecture we create a separate service for every component, so you must be wondering what the point is of using a framework for creating services, when the same thing can be done by simply writing some code that listens to a port and replies to requests. Writing code to make requests, send responses, and parse data requires a lot of time and work, but a framework like Seneca makes all of this easy. Converting components of the monolithic core to services is also a cumbersome task, as it requires a lot of code refactoring, but Seneca makes it easy by introducing the concepts of actions and plugins. Finally, services written in any other programming language or framework are able to communicate with Seneca services.

In Seneca, an action represents a particular operation. An action is a function that's identified by an object literal or JSON string called the action's pattern. In Seneca, the operations of a component of the monolithic core are written using actions, which we may later want to move from the monolithic core to a service and expose to the other services and the monolithic core via the network.

Why actions?

You might be wondering what the benefit is of using actions instead of functions to write operations, and how actions make it easy to convert components of the monolithic core to services. Suppose you want to move an operation of the monolithic core that is written as a function to a separate service and expose it via the network. You cannot simply copy and paste the function to the new service; instead, you need to define a route (if you are using Express). Then, to call the operation from inside the monolithic core, you need to write code that makes an HTTP request to the service, while inside the service itself you can still call it as a plain function. So there are two different code snippets depending on where the operation is executed from, and moving operations therefore requires a lot of code refactoring.

However, if you had written the operation as a Seneca action, it would be really easy to move it to a separate service. In that case, if you want to move the operation to a separate service and expose it via the network, you can simply copy and paste the action to the new service. That's it.
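Here is a hedged sketch of what that looks like with Seneca; the pattern, port, and handler are illustrative, and both processes are shown in one snippet for brevity.

'use strict';
// In the service process: define the action and expose it via the network
const service = require('seneca')();

service.add({ role: 'user', cmd: 'register' }, (msg, respond) => {
  // ...store the user somewhere...
  respond(null, { ok: true, username: msg.username });
});

service.listen({ port: 10101 });

// In the monolithic core: point a client at the service; acting on the
// same pattern now travels over the network with no other code changes
const core = require('seneca')().client({ port: 10101 });

core.act({ role: 'user', cmd: 'register', username: 'alice' }, (err, reply) => {
  if (err) return console.error(err);
  console.log('registered:', reply.username);
});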
Obviously, we also need to tell the service to expose the action via the network and tell the monolithic core where to find the action, but all of this requires just a couple of lines of code, as the preceding sketch suggests. A Seneca service exposes actions to other services and to the monolithic core. When making a request to a service, we need to provide a pattern matching the pattern of the action to be called in the service.

Why patterns?

Patterns make it easy to map a URL to an action. Patterns can also overwrite other patterns for specific conditions, which prevents having to edit existing code; editing existing code on a production site is not safe and has many other disadvantages.

Seneca also has a concept of plugins. A Seneca plugin is a set of actions that can be easily distributed and plugged in to a service or the monolithic core. As our monolithic core becomes larger and more complex, we can convert components to services; that is, we move the actions of certain components to services.

Summary

In this chapter, we saw the difference between monolithic and microservices architecture. Then we discussed what microservices architecture with a monolithic core means and its benefits. Finally, we jumped into the Seneca framework for implementing microservices architecture with a monolithic core, and discussed how to create basic login and registration functionality to demonstrate various features of the Seneca framework and how to use it. In the next chapter, we will create a fully functional e-commerce website using the Seneca and Express frameworks.

Simple ToDo list web application with node.js, Express, and Riot

Pedro Narciso García Revington
07 Nov 2016
10 min read
The frontend space is indeed crowded, but none of the more popular solutions are really convincing to me. I feel Angular is bloated, and the double binding is not for me. I also do not like React and its syntax. Riot is, as stated by its creators, "a React-like user interface micro-library" with simpler syntax that is five times smaller than React.

What we are going to learn

We are going to build a simple Riot application backed by Express, using Jade as our template language. The backend will expose a simple REST API, which we will consume from the UI. We are not going to use any other dependency like jQuery, so this is also a good chance to try XMLHttpRequest2. I deliberately omitted the inclusion of a client package manager like webpack or jspm because I want to focus on Express.js + Riot.js. For the same reason, the application data is persisted in memory.

Requirements

You just need any recent version of Node.js (4+), a text editor of your choice, and some JS, Express, and website development knowledge.

Project layout

Under my project directory we are going to have three directories:

public: for assets like the riot.js library itself
views: common in most Express setups, this is where we put the markup
client: this directory will host the Riot tags (we will see more of that later)

We will also have package.json, our project manifesto, and an app.js file containing the Express application. Our Express server exposes a REST API; its code can be found in api.js. Here is how the layout of the final project looks:

├── api.js
├── app.js
├── client
│   ├── tag-todo.jade
│   └── tag-todo.js
├── package.json
├── node_modules
├── public
│   └── js
│       ├── client.js
│       └── riot.js
└── views
    └── index.jade

Project setup

Create your project directory and, from there, run the following to install the node.js dependencies:

$ npm init -y
$ npm install --save body-parser express jade

And the application directories:

$ mkdir -p views public/js client

Start!

Let's start by creating the Express application file, app.js:

'use strict';
const express = require('express'),
    app = express(),
    bodyParser = require('body-parser');

// Set the views directory and template engine
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');

// Set our static directory for public assets like client scripts
app.use(express.static('public'));

// Parses the body on incoming requests
app.use(bodyParser.json());

// Pretty prints HTML output
app.locals.pretty = true;

// Define our main route, HTTP "GET /" which will print "hello"
app.get('/', function (req, res) {
  res.send('hello');
});

// Start listening for connections
app.listen(3000, function (err) {
  if (err) {
    console.error('Cannot listen at port 3000', err);
  }
  console.log('Todo app listening at port 3000');
});

The app object we just created is just a plain Express application. After setting up the application, we call the listen function, which creates an HTTP server listening at port 3000. To test our application setup, we open another terminal, cd to our project directory, and run $ node app.js. Open a web browser and load http://localhost:3000; can you read "hello"?

Node.js will not reload the site if you change the files, so I recommend you install nodemon. Nodemon monitors your code and reloads the site on every change you make to the JS source code. The command $ npm install -g nodemon installs the program on your computer globally, so you can run it from any directory.
Okay, kill our previously created server and start a new one with $ nodemon app.js.

Our first Riot tag

Riot allows you to encapsulate your UI logic in "custom tags". Tag syntax is pretty straightforward. Judge for yourself:

<employee>
  <span>{ name }</span>
</employee>

Custom tags can contain code and can be nested, as shown in the next code snippet:

<employeeList>
  <employee each="{ items }" onclick={ gotoEmployee } />
  <script>
    gotoEmployee (e) {
      var item = e.item;
      // do something
    }
  </script>
</employeeList>

This mechanism enables you to build complex functionality from simple units. Of course, you can find more information in their documentation. In the next steps we will create our first tag: ./client/tag-todo.jade.

Oh, we have not yet downloaded Riot! Here is the non-minified Riot + compiler download. Download it to ./public/js/riot.js.

The next step is to create our index view and tell our app to serve it. Locate the / route handler, remove the res.send('hello'), and update it to:

// Define our main route, HTTP "GET /" which will render the index view
app.get('/', function (req, res) {
  res.render('index');
});

Now, create the ./views/index.jade file:

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    todo

Go to your browser and reload the page. You can read the big "ToDo App", but nothing else. There is a <todo></todo> tag there, but since the browser does not understand it, this tag is not rendered. Let's tell Riot to mount the tag. Mount means Riot will use <todo></todo> as a placeholder for our, not yet there, todo tag:

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    script(type="riot/tag" src="/tags/todo.tag")
    todo
    script.
      riot.mount('todo');

Open your browser's dev console and reload the page. riot.mount failed because there was no todo.tag. Tags can be served in many ways, but I choose to serve them as regular Express templates. Of course, you can serve them as static assets or bundled. Just below the / route handler, add the /tags/:name.tag handler:

// "/" route handler
app.get('/', function (req, res) {
  res.render('index');
});

// tag route handler
app.get('/tags/:name.tag', function (req, res) {
  var name = 'tag-' + req.params.name;
  res.render('../client/' + name);
});

Now create the tag in ./client/tag-todo.jade:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")

And reload the browser again. The errors are gone, and there is a new form in your browser. onsubmit="{ add }" is part of Riot's syntax and means "on submit, call the add function". You can mix implementation with the markup, but I rather prefer to split markup from code. In Jade (and any other template language), it is trivial to include other files, which is exactly what we are going to do. Update the file as:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")
  script
    include tag-todo.js

And create ./client/tag-todo.js with this snippet:

'use strict';
var self = this;
var api = self.opts;

When the tag gets mounted by Riot, it gets a context. That is the reason for var self = this;. That context can include the opts object. The opts object can be anything of your choice, defined at the time you mount the tag. Let's say we have an API object and we pass it to riot.mount as the second argument at the time we mount the tag, that is, riot.mount('todo', api). Then, at the time the tag is rendered, this.opts will point to the api object. This is the mechanism we are going to use to expose our client API to the todo tag.
Our form is still waiting for the add function, so edit tag-todo.js again and append the following:

self.add = function (e) {
  var title = self.todo.value;
  console.log('New ToDo', title);
};

Reload the page, type something in the text field, and hit enter. The expected message should appear in your browser's dev console.

Implementing our REST API

We are ready to implement our REST API on the Express side. Create the ./api.js file and add:

'use strict';
const express = require('express');
var app = module.exports = express();

// simple in memory DB
var db = [];

// handle ToDo creation
app.post('/', function (req, res) {
  db.push({
    title: req.body.title,
    done: false
  });
  let todoID = db.length - 1;
  // mountpath = /api/todos/
  res.location(app.mountpath + todoID);
  res.status(201).end();
});

// handle ToDo updates
app.put('/', function (req, res) {
  db[req.body.id] = req.body;
  res.location('/' + req.body.id);
  res.status(204).end();
});

Our API supports ToDo creation and update, and it is architected as an Express sub-application. To mount it, we just need to update app.js for the last time. Update the require block at the top of app.js to:

const express = require('express'),
    api = require('./api'),
    app = express(),
    bodyParser = require('body-parser');

And mount the api sub-application just before app.listen:

// Mount the api sub application
app.use('/api/todos/', api);

We said we would implement a client for our API. It should expose two functions, create and update, located at ./public/js/client.js. Here is its source:

'use strict';
(function (api) {
  var url = '/api/todos/';

  function extractIDFromResponse(xhr) {
    var location = xhr.getResponseHeader('location');
    var result = +location.slice(url.length);
    return result;
  }

  api.create = function createToDo(title, callback) {
    var xhr = new XMLHttpRequest();
    var todo = {
      title: title,
      done: false
    };
    xhr.open('POST', url);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      if (xhr.status === 201) {
        todo.id = extractIDFromResponse(xhr);
      }
      return callback(null, xhr, todo);
    };
    xhr.send(JSON.stringify(todo));
  };

  api.update = function updateToDo(todo, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('PUT', url);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      if (xhr.status === 200) {
        console.log('200');
      }
      return callback(null, xhr, todo);
    };
    xhr.send(JSON.stringify(todo));
  };
})(this.todoAPI = {});

Okay, time to load the API client into the UI and share it with our tag. Modify the index view to include it as a dependency:

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    script(type="riot/tag" src="/tags/todo.tag")
    script(src="/js/client.js")
    todo
    script.
      riot.mount('todo', todoAPI);

We are now loading the API client and passing it as a reference to the todo tag. Our last change today is to update the add function to consume the API. Reload the browser again, type something into the textbox, and hit enter. Nothing new happens, because our add function is not yet using the API. We need to update ./client/tag-todo.js as:

'use strict';
var self = this;
var api = self.opts;
self.items = [];

self.add = function (e) {
  var title = self.todo.value;
  api.create(title, function (err, xhr, todo) {
    if (xhr.status === 201) {
      self.todo.value = '';
      self.items.push(todo);
      self.update();
    }
  });
};

We have augmented self with an array of items.
Every time we create a new ToDo task (after we get the 201 code from the server), we push the new ToDo object into the array, because we are going to print that list of items. In Riot, we can loop over the items by adding the each attribute to any tag. Last, update ./client/tag-todo.jade:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")
  ul
    li(each="{items}")
      span {title}
  script
    include tag-todo.js

Finally! Reload the page and create a ToDo!

Next steps

You can find the complete source code for this article here. The final version of the code also implements a done/undone button, which you can try to implement by yourself.

About the author

Pedro Narciso García Revington is a Senior Full Stack Developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD, and polyglot persistence.

Setting Up the Environment for ASP.NET MVC 6

Packt
02 Nov 2016
9 min read
In this article by Mugilan TS Raghupathi, author of the book Learning ASP.NET Core MVC Programming, we explain the setup for getting started with programming in ASP.NET MVC 6. In any development project, it is vital to set up the right kind of development environment so that you can concentrate on developing the solution rather than solving environment issues or configuration problems. With respect to .NET, Visual Studio is the de facto standard IDE (Integrated Development Environment) for building web applications in .NET.

In this article, you'll be learning about the following topics:

Purpose of an IDE
Different offerings of Visual Studio
Installation of Visual Studio Community 2015
Creating your first ASP.NET MVC 6 project and its project structure

Purpose of an IDE

First of all, let us see why we need an IDE when you could type the code in Notepad, compile it, and execute it. When you develop a web application, you might need the following things in order to be productive:

Code editor: This is the text editor where you type your code. Your code editor should be able to recognize the different constructs of your programming language, such as the if condition and the for loop. In Visual Studio, all of your keywords are highlighted in blue.

IntelliSense: IntelliSense is a context-aware code-completion feature available in most modern IDEs, including Visual Studio. One example: when you type a dot after an object, IntelliSense lists all the methods available on that object. This helps developers write code faster and more easily.

Build/Publish: It is helpful if you can build or publish the application using a single click or a single command. Visual Studio provides several options out of the box to build a separate project or to build the complete solution with a single click. This makes building and deploying your application easier.

Templates: Depending on the type of application, you might have to create different folders and files along with the boilerplate code. So it is very helpful if your IDE supports the creation of different kinds of templates. Visual Studio generates different kinds of templates with the code for ASP.NET Web Forms, MVC, and Web API to get you up and running.

Ease of adding items: Your IDE should allow you to add different kinds of items with ease. For example, you should be able to add an XML file without any issues. And if there is any problem with the structure of your XML file, it should be able to highlight the issue, provide information, and help you fix it.

Visual Studio offerings

There are different versions of Visual Studio 2015 available to satisfy the various needs of developers and organizations. Primarily, there are four versions of Visual Studio 2015:

Visual Studio Community
Visual Studio Professional
Visual Studio Enterprise
Visual Studio Test Professional

System requirements

Visual Studio can be installed on computers running the operating system Windows 7 Service Pack 1 and above. You can see the complete list of requirements at the following URL:

https://www.visualstudio.com/en-us/downloads/visual-studio-2015-system-requirements-vs.aspx

Visual Studio Community 2015

This is a fully featured IDE available for building desktop applications, web applications, and cloud services. It is available free of cost for individual users.
You can download Visual Studio Community from the following URL:

https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx

Throughout this book, we will be using the Visual Studio Community version for development, as it is available free of cost to individual developers.

Visual Studio Professional

As the name implies, Visual Studio Professional is targeted at professional developers, and it contains features such as CodeLens for improving your team's productivity. It also has features for greater collaboration within the team.

Visual Studio Enterprise

Visual Studio Enterprise is a full-blown version of Visual Studio with a complete set of features for collaboration, including Team Foundation Server, modeling, and testing.

Visual Studio Test Professional

Visual Studio Test Professional is aimed primarily at the testing team, or the people who are involved in testing, which might include developers. In any software development methodology, whether the waterfall model or agile, developers need to execute the test cases for the code they are developing.

Installation of Visual Studio Community

Follow the given steps to install Visual Studio Community 2015:

1. Visit the following link to download Visual Studio Community 2015: https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx
2. Click on the Download Community 2015 button. Save the file in a folder where you can retrieve it easily later.
3. Run the downloaded executable file.
4. Click on Run, and the following screen will appear.

There are two types of installation: default and custom. The default installation installs the most commonly used features, and this will cover most developer use cases. A custom installation lets you choose the components that you want to install.

5. Click on the Install button after selecting the installation type. Depending on your memory and processor speed, it will take 1 to 2 hours to install.

Once all the components are installed, you will see the Setup completed screen.

Installation of ASP.NET 5

When we install the Visual Studio Community 2015 edition, ASP.NET 5 is not installed by default. As an ASP.NET MVC 6 application runs on top of ASP.NET 5, we need to install ASP.NET 5. There are a couple of ways to install ASP.NET 5:

Get ASP.NET 5 from https://get.asp.net/
Install it from the New Project template in Visual Studio

The second option is a bit easier, as you don't need to search and install it yourself. The following are the detailed steps:

1. Create a new project by selecting File | New Project or using the shortcut Ctrl + Shift + N.
2. Select ASP.NET Web Application, enter the project name, and click on OK.
3. The following window will appear to select the template. Select the Get ASP.NET 5 RC option, as shown in the screenshot.
4. When you click on OK in the preceding screen, the following window will appear.
5. When you click on the Run or Save button in the preceding dialog, you will get a screen asking about ASP.NET 5 Setup. Select the checkbox I agree to the license terms and conditions and click on the Install button.

Installation of ASP.NET 5 might take a couple of hours, and once it is completed you'll get the completion screen. During the installation of ASP.NET 5 RC1 Update 1, it might ask you to close Visual Studio. If asked, please do so.
Project structure in an ASP.NET 5 application

Once ASP.NET 5 RC1 is successfully installed, open Visual Studio, create a new project, and select ASP.NET 5 Web Application, as shown in the screenshot. A new project will be created, and its structure will look like the following:

File-based project

Whenever you add a file or folder to your file system (inside our ASP.NET 5 project folder), the changes are automatically reflected in your project structure.

Support for full .NET and .NET Core

You can see a couple of references in the preceding project: DNX 4.5.1 and DNX Core 5.0. DNX 4.5.1 provides the functionality of full-blown .NET, whereas DNX Core 5.0 supports only the core functionality, which is what you would use if you are deploying the application across platforms such as Apple OS X and Linux. The development and deployment of an ASP.NET MVC 6 application on a Linux machine is explained in the book.

The Project.json package

Usually, in an ASP.NET web application, we have assemblies as references, with the list of references in a C# project file. But in an ASP.NET 5 application, we have a JSON file named Project.json, which contains all the necessary configuration with all its .NET dependencies in the form of NuGet packages. This makes dependency management easier. NuGet is a package manager provided by Microsoft, which makes package installation and uninstallation easier. Prior to NuGet, all dependencies had to be installed manually. The dependencies section identifies the list of dependent packages available for the application. The frameworks section lists the frameworks supported by the application. The scripts section identifies the scripts to be executed during the build process of the application. Include and exclude properties can be used in any section to include or exclude any item.
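As a rough illustration of those sections, a minimal Project.json might look like the following sketch; the package names, versions, and script entries are examples only, not the exact file generated by the template:

{
  "dependencies": {
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final"
  },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  },
  "scripts": {
    "prepublish": [ "npm install", "gulp clean", "gulp min" ]
  },
  "exclude": [ "wwwroot", "node_modules" ]
}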
Controllers

This folder contains all of your controller files. Controllers are responsible for handling requests, communicating with the models, and generating the views for them.

Models

All of your classes representing the domain data are present in this folder.

Views

Views are files that contain your frontend components and are presented to the end users of the application. This folder contains all of your Razor view files.

Migrations

Any database-related migrations are available in this folder. Database migrations are C# files that contain the history of any database changes made through Entity Framework (an ORM framework). This is explained in detail in the book.

The wwwroot folder

This folder acts as a root folder, and it is the ideal container for all of your static files, such as CSS and JavaScript files. All the files placed in the wwwroot folder can be accessed directly from the path without going through a controller.

Other files

The appsettings.json file is the config file where you can configure application-level settings. Bower, npm (Node Package Manager), and gulpfile.js are client-side technologies that are supported by ASP.NET 5 applications.

Summary

In this article, you have learnt about the offerings in Visual Studio. Step-by-step instructions were provided for the installation of the Visual Studio Community version, which is freely available for individual developers. We also discussed the new project structure of an ASP.NET 5 application and the changes when compared to the previous versions. In the book, we go on to discuss controllers and their roles and functionalities; we also build a controller and associated action methods and see how they work.

An Introduction to Moodle 3 and MoodleCloud

Packt
19 Oct 2016
20 min read
In this article by Silvina Paola Hillar, the author of the book Moodle Theme Development, we will introduce e-learning and virtual learning environments such as Moodle and MoodleCloud, explaining their similarities and differences. Apart from that, we will learn about and understand screen resolution and aspect ratio, which is the information we need in order to develop Moodle themes.

In this article, we shall learn about the following topics:

Understanding what e-learning is
Learning about virtual learning environments
Introducing Moodle and MoodleCloud
Learning what Moodle and MoodleCloud are
Using Moodle on different devices
Sizing the screen resolution
Calculating the aspect ratio
Learning about sharp and soft images
Learning about crisp and sharp text
Understanding what anti-aliasing is

Understanding what e-learning is

E-learning is electronic learning, meaning that it is not traditional learning in a classroom with a teacher, students, and a board. E-learning involves using a computer to deliver classes or a course. When delivering classes or a course, there is online interaction between the student and the teacher. There might also be some offline activities, where a student is asked to create a piece of writing or something else. Another option is collaboration activities involving the interaction of several students and the teacher.

When creating course content, there is the option of video conferencing as well, so there can be virtual face-to-face interaction within the e-learning process. The time and the date should be set beforehand. In this way, e-learning tries to imitate traditional learning so as not to lose human contact or social interaction.

The course may be full distance or not. If the course is full distance, there is online interaction only. All the resources and activities are delivered online, and there might be some interaction through messages, chats, or emails between the student and the teacher. If the course is not full distance, and is delivered face to face but involves the use of computers, we are referring to blended learning. Blended learning means using e-learning within the classroom; it is a mixture of traditional learning and computers.

The use of blended learning with little children is very important, because they get the social element, which is essential at a very young age. Apart from that, they also come into contact with technology while they are learning. It is advisable to use interactive whiteboards (IWBs) at an early stage.

IWBs are the right tool to choose when dealing with blended learning. IWBs are motivational gadgets, which are prominent in integrating technology into the classroom. IWBs are considered a symbol of innovation and a key element of teaching students. IWBs offer interactive projections for class demonstrations; we can usually project resources from computer software as well as from our Moodle platform. Students can interact with them by touching or writing on them, that is to say, through blended learning. Apart from that, teachers can make presentations on different topics within a subject, and these topics become much more interesting and captivating for students, since IWBs allow changes to be made and we can insert interactive elements into the presentation of any subject. There are several types of technology used in IWBs, such as touch technology, laser scanning, and electromagnetic writing tools.
Therefore, we have to bear in mind which to choose when we get an IWB. On the other hand, the widespread use of mobile devices nowadays has turned e-learning into mobile learning. Smartphones and tablets allow students to learn anywhere at any time. Therefore, it is important to design course material that is usable by students on such devices.

Moodle is a learning platform through which we can design, build, and create e-learning environments. It is possible to create online interaction and have video conferencing sessions with students. Distance learning is another option if blended learning cannot be carried out. We can also choose Moodle Mobile. We can download the app from the App Store, Google Play, Windows Store, or Windows Phone Store. We can browse the content of courses, receive messages, contact people from the courses, upload different types of files, and view course grades, among other actions.

Learning about Virtual Learning Environments

A Virtual Learning Environment (VLE) is a type of virtual environment that supports both resources and learning activities; therefore, students can have both passive and active roles. There is also social interaction, which can take place through collaborative work as well as video conferencing. Students can also be actors, since they can help construct the VLE. VLEs can be used for both distance and blended learning, since they can enrich courses. Mobile learning is also possible, because mobile devices have access to the Internet, allowing teachers and students to log in to their courses.

VLEs are designed in such a way that they can carry out the following functions or activities:

Design, create, store, access, and use course content
Deliver or share course content
Communicate, interact, and collaborate between students and teachers
Assess and personalize the learning experience
Modularize both activities and resources
Customize the interface

We are going to deal with each of these functions and activities and see how useful they might be when designing the VLE for our class. When using Moodle, we can perform all the functions and activities mentioned here, because Moodle is a VLE.

Design, create, store, access, and use course content

If we use the Moodle platform to create a course, we have to deal with course content. Therefore, when we add a course, we have to add its content. We can choose the weekly outline section or the topic under which we want to add the content. We click on Add an activity or resource, and two options appear, resources and activities; therefore, the content can be passive or active for the student. When we create or design activities in Moodle, the options are shown in the following screenshot:

Another option for creating course content is to reuse content that has already been created and used before in another VLE. In other words, we can import or export course materials, since most VLEs have specific tools designed for such purposes. This is very useful and saves time. There are a variety of ways for teachers to create course materials, due to the fact that the teacher thinks of the methodology, as well as how to meet the students' needs, when creating the course. Moodle is designed in such a way that it offers a variety of combinations that can fit any course content.

Deliver or share course content

Before using a VLE, we have to log in, because all the content is protected and is not open to the general public. In this way, we can protect property rights, as well as the course itself.
All participants must be enrolled in the course unless it has been opened to the public. Teachers can gain remote access in order to create and design their courses. This is quite convenient, since they can build the content at home rather than in their workplace. They need login access, and they need to switch roles to course creator in order to create the content. Follow these steps to switch roles to course creator:

Under Administration, click on Switch role to… | Course creator, as shown in the following screenshot:

When the role has been changed, the teacher can create content that students can access. Once logged in, students have access to the already created content, whether activities or resources. The content is available over the Internet or the institution's intranet connection. Students can access the content anywhere, as long as one of these connections is available. If MoodleCloud is being used, there must be an Internet connection, otherwise it is impossible for both students and teachers to log in.

Communicate, interact, and collaborate among students and teachers

Communication, interaction, and collaborative working are the key factors in social interaction and learning through the interchange of ideas. VLEs let us create course activities that support these actions, since they are elemental to any class. There is no need to be an isolated learner, because learners have the ability to communicate among themselves and with the teachers.

Moodle offers the possibility of video conferencing through the Big Blue Button. In order to install the Big Blue Button plugin in Moodle, visit the following link: https://moodle.org/plugins/browse.php?list=set&id=2. This is shown in the following screenshot:

If you are using MoodleCloud, the Big Blue Button is enabled by default, so when we click on Add an activity or resource, it appears in the list of activities, as shown in the following screenshot:

Assess and personalize the learning experience

Moodle allows the teacher to follow the progress of students so that they can assess and grade their work, as long as they complete the activities. Resources cannot be graded, since they are passive content for students, but teachers can also check when a participant last accessed the site. Badges are another element used to personalize the learning experience. We can create badges for students when they complete an activity or a course; they are homework rewards. Badges are quite good at motivating young learners.

Modularize both activities and resources

Moodle offers the ability to build personalized activities and resources. There are several ways to present both, with all the options Moodle offers. Activities can be molded according to the methodology the teacher uses. In Moodle 3, there are new question types within the Quiz activity. The question types are as follows:

Select missing words
Drag and drop into text
Drag and drop onto image
Drag and drop markers

The question types are shown after we choose Quiz in the Add an activity or resource menu, in the weekly outline section or topic that we have chosen. The types of question are shown in the following screenshot:

Customize the interface

Moodle allows us to customize the interface in order to develop the look and feel that we require; we can add a logo for the school or institution that the Moodle site belongs to. We can also add a theme relevant to the subject or course that we have created.
The main purpose of customizing the interface is to avoid all subjects and courses looking the same. Later in the article, we will learn how to customize the interface.

Learning Moodle and MoodleCloud

Modular Object-Oriented Dynamic Learning Environment (Moodle) is a learning platform designed in such a way that we can create VLEs. Moodle can be downloaded, installed, and run on any web server software using Hypertext Preprocessor (PHP). It can support a SQL database and can run on several operating systems. We can download Moodle 3.0.3 from the following URL: https://download.moodle.org/. This URL is shown in the following screenshot:

MoodleCloud, on the other hand, does not need to be downloaded since, as its name suggests, it is in the cloud. Therefore, we can get our own Moodle site with MoodleCloud within minutes and for free. It is Moodle's hosting platform, designed and run by the people who make Moodle. In order to get a MoodleCloud site, we need to go to the following URL: https://moodle.com/cloud/. This is shown in the following screenshot:

MoodleCloud was created in order to cater for users with fewer requirements and small budgets. In order to create an account, you need to provide your cell phone number to receive an SMS, which must be input when creating your site. As it is free, there are some limitations to MoodleCloud, unless we contact Moodle Partners and pay for an expanded version of it. The limitations are as follows:

No more than 50 users
200 MB of disk space
Core themes and plugins only
One site per phone number
Big Blue Button sessions limited to 6 people, with no recordings
There are advertisements

When creating a Moodle site, we want to change the look and functionality of the site or of an individual course. We may also need to customize themes for Moodle in order to give the course the desired look. Therefore, this article will explain the basic concepts that we have to bear in mind when dealing with themes, due to the fact that themes are shown on different devices. In the past, Moodle ran only on desktops or laptops, but nowadays Moodle can run on many different devices, such as smartphones, tablets, iPads, and smart TVs, and the list goes on.

Using Moodle on different devices

Moodle can be used on different devices, at different times, in different places. Therefore, there are factors that we need to be aware of when designing courses and themes. In the rest of this article, we will look deeper into several aspects and concepts in order to understand what we need to take into account when we design our courses and build our themes.

Devices change in many ways, not only in size but also in the way they display our Moodle course. Moodle courses can be used on anything from a tiny device that fits into the palm of a hand to a huge IWB or smart TV, and plenty of other devices in between. Therefore, such differences have to be taken into account when choosing images, text, and other components of our course. We are going to deal with sizing the screen resolution, calculating the aspect ratio, types of images (sharp and soft), and crisp and sharp text. Finally, but importantly, the anti-aliasing method is explained.

Sizing the screen resolution

The screen resolution is made up of the number of pixels the device's display has, horizontally and vertically, and the color depth, which measures the number of bits representing the color of each pixel. The higher the screen resolution, the higher the productivity we get.
In the past, the screen resolution of a display was important since it determined the amount of information displayed on the screen. The lower the resolution, the fewer items would fit on the screen; the higher the resolution, the more items would fit on the screen. The resolution varies according to the hardware of each device.

Nowadays, screen resolution is considered a matter of visual experience, since we would rather see more quality than more stuff on the screen. That is the reason why screen resolution matters. There might be different display sizes where the screen resolutions are the same, that is to say, the total number of pixels is the same. If we compare a laptop (a 13" screen with a resolution of 1280 x 800) and a desktop (with a 17" monitor at the same 1280 x 800 resolution), although the monitor is larger, the number of pixels is the same; the only difference is that we will see everything bigger on the monitor. Therefore, instead of seeing more stuff, we see higher quality.

Screen resolution chart

Code     Width  Height  Ratio    Description
QVGA     320    240     4:3      Quarter Video Graphics Array
FHD      1920   1080    ~16:9    Full High Definition
HVGA     640    240     8:3      Half Video Graphics Array
HD       1360   768     ~16:9    High Definition
HD       1366   768     ~16:9    High Definition
HD+      1600   900     ~16:9    High Definition plus
VGA      640    480     4:3      Video Graphics Array
SVGA     800    600     4:3      Super Video Graphics Array
XGA      1024   768     4:3      Extended Graphics Array
XGA+     1152   768     3:2      Extended Graphics Array plus
XGA+     1152   864     4:3      Extended Graphics Array plus
SXGA     1280   1024    5:4      Super Extended Graphics Array
SXGA+    1400   1050    4:3      Super Extended Graphics Array plus
UXGA     1600   1200    4:3      Ultra Extended Graphics Array
QXGA     2048   1536    4:3      Quad Extended Graphics Array
WXGA     1280   768     5:3      Wide Extended Graphics Array
WXGA     1280   720     ~16:9    Wide Extended Graphics Array
WXGA     1280   800     16:10    Wide Extended Graphics Array
WXGA     1366   768     ~16:9    Wide Extended Graphics Array
WXGA+    1280   854     3:2      Wide Extended Graphics Array plus
WXGA+    1440   900     16:10    Wide Extended Graphics Array plus
WXGA+    1440   960     3:2      Wide Extended Graphics Array plus
WQHD     2560   1440    ~16:9    Wide Quad High Definition
WQXGA    2560   1600    16:10    Wide Quad Extended Graphics Array
WSVGA    1024   600     ~17:10   Wide Super Video Graphics Array
WSXGA    1600   900     ~16:9    Wide Super Extended Graphics Array
WSXGA    1600   1024    16:10    Wide Super Extended Graphics Array
WSXGA+   1680   1050    16:10    Wide Super Extended Graphics Array plus
WUXGA    1920   1200    16:10    Wide Ultra Extended Graphics Array
WQXGA    2560   1600    16:10    Wide Quad Extended Graphics Array
WQUXGA   3840   2400    16:10    Wide Quad Ultra Extended Graphics Array
4K UHD   3840   2160    16:9     Ultra High Definition
4K UHD   1536   864     16:9     Ultra High Definition

Considering that 3840 x 2160 displays (also known as 4K, QFHD, Ultra HD, UHD, or 2160p) are already available for laptops and monitors, a pleasant visual experience with high-DPI displays can be a good long-term investment for your desktop applications.

The DPI setting of the monitor causes another common problem: a change in the effective resolution. Consider a 13.3" display that offers a 3200 x 1800 resolution and is configured with an OS DPI of 240. The high DPI setting makes the system use both larger fonts and larger UI elements; therefore, the elements consume more pixels to render than the same elements displayed at the resolution configured with an OS DPI of 96. The effective resolution of a display that provides 3200 x 1800 pixels configured at 240 DPI is 1280 x 720.
The effective resolution can become a big problem, because an application that requires a minimum resolution of the old standard 1024 x 768 pixels at an OS DPI of 96 would have problems on a 3200 x 1800-pixel display configured at 240 DPI, and it wouldn't be possible to display all the necessary UI elements. It may sound crazy, but the effective vertical resolution is 720 pixels, lower than the 768 vertical pixels required by the application to display all the UI elements without problems.

The formula to calculate the effective resolution is simple: divide the physical pixels by the scale factor (OS DPI / 96). For example, the following formula calculates the horizontal effective resolution of the previous example: 3200 / (240 / 96) = 3200 / 2.5 = 1280; and the following formula calculates the vertical effective resolution: 1800 / (240 / 96) = 1800 / 2.5 = 720. The effective resolution would be 1600 x 900 pixels if the same physical resolution were configured at 192 DPI. Effective horizontal resolution: 3200 / (192 / 96) = 3200 / 2 = 1600; and vertical effective resolution: 1800 / (192 / 96) = 1800 / 2 = 900.
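The following minimal sketch expresses that formula in code; the function name is illustrative, and the printed values match the two worked examples above:

'use strict';

// Effective resolution = physical pixels / scale factor, where the
// scale factor is the OS DPI setting divided by the 96 DPI baseline.
function effectiveResolution(widthPx, heightPx, osDpi) {
  var scale = osDpi / 96;
  return {
    width: Math.round(widthPx / scale),
    height: Math.round(heightPx / scale)
  };
}

console.log(effectiveResolution(3200, 1800, 240)); // { width: 1280, height: 720 }
console.log(effectiveResolution(3200, 1800, 192)); // { width: 1600, height: 900 }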
Calculating the aspect ratio

The aspect ratio is the proportional relationship between the width and the height of an image. It is used to describe the shape of a computer screen or a TV. The aspect ratio of a standard-definition (SD) screen is 4:3, that is to say, a relatively square rectangle. The aspect ratio is often expressed in W:H format, where W stands for width and H stands for height. 4:3 means four units wide to three units high. High-definition TVs (HDTVs), by contrast, have a 16:9 ratio, which is a wider rectangle.

Why do we calculate the aspect ratio? The answer is that the ratio has to be well defined, because the rectangular shape of every frame, digital video, canvas, image, or responsive design has to fit into different and distinct devices.

Learning about sharp and soft images

Images can be either sharp or soft. Sharp is the opposite of soft. A soft image has less pronounced detail, while a sharp image has more contrast between pixels. The more pixels the image has, the sharper it is. We can soften an image, in which case it loses information, but we cannot sharpen one; in other words, we can't add more information to an image.

In order to compare sharp and soft images, we can visit a website where we can convert bitmaps to vector graphics. We can convert a bitmap image such as .png, .jpeg, or .gif into an .svg in order to get an anti-aliased image. We can do this in a simple step, using an online tool to vectorize the bitmap at http://vectormagic.com/home. There are plenty of features to take into account when vectorizing. We can design a bitmap using an image editor and upload the bitmap image from the clipboard, or upload the file from our computer. Once the image is uploaded to the application, we can start working. Another possibility is to use the sample images on the website, which we are going to use in order to see the anti-aliasing effect.

We convert bitmap images, which are made up of pixels, into vector images, which are made up of shapes. The shapes are mathematical descriptions of images and do not become pixelated when scaled up. Vector graphics can handle scaling without any problems. Vector images are the preferred type to work with in graphic design for paper or clothes.

Go to http://vectormagic.com/home and click on Examples, as shown in the following screenshot:

After clicking on Examples, the bitmap appears on the left and the vectorized image on the right. The bitmap is blurred and soft; the SVG has an anti-aliasing effect, therefore the image is sharp. The result is shown in the following screenshot:

Learning about crisp and sharp text

There are sharp and soft images, and there is also crisp and sharp text, so it is now time to look at text. What is the main difference between the two? When we say that text is crisp, we mean that there is more anti-aliasing; in other words, it has more grey pixels around the black text. The difference shows when we zoom in to 400%. On the other hand, sharp mode is superior for small fonts, because it makes each letter stronger. There are four options in Photoshop for dealing with text: sharp, crisp, strong, and smooth. Sharp and crisp have already been mentioned in the previous paragraphs. Strong is notorious for adding unnecessary weight to letter forms, and smooth looks closest to the untinted anti-aliasing; it remains similar to the original.

Understanding what anti-aliasing is

The word anti-aliasing refers to the technique used to minimize distortion artifacts. It applies intermediate colors in order to eliminate saw-tooth or pixelated lines. Therefore, we need to look for a lower resolution so that the saw-tooth effect does not appear when we make the graphic bigger.

Test your knowledge

Before we delve deeper into more content, let's test your knowledge of all the information that we have dealt with in this article:

1. Moodle is a learning platform with which…
   We can design, build, and create e-learning environments.
   We can learn.
   We can download content for students.
2. BigBlueButtonBN…
   Is a way to log in to Moodle.
   Lets you create links to real-time online classrooms from within Moodle.
   Works only in MoodleCloud.
3. MoodleCloud…
   Is not open source.
   Does not allow more than 50 users.
   Works only for universities.
4. The number of pixels the display of the device has horizontally and vertically, and the color depth measuring the number of bits representing the color of each pixel, make up…
   Screen resolution.
   Aspect ratio.
   Size of device.
5. Anti-aliasing can be applied to…
   Only text.
   Only images.
   Both images and text.

Summary

In this article, we have covered most of what needs to be known about e-learning, VLEs, and Moodle and MoodleCloud. There is a slight difference between Moodle and MoodleCloud, especially if you don't have access to a Moodle course in the institution where you are working and want to design a Moodle course. Moodle is used on different devices, and there are several aspects to take into account when designing a course and building a Moodle theme. We have dealt with screen resolution, aspect ratio, types of images and text, and anti-aliasing effects.

Server-side Swift: Building a Slack Bot, Part 2

Peter Zignego
13 Oct 2016
5 min read
In Part 1 of this series, I introduced you to SlackKit and Zewo, which allow us to build and deploy a Slack bot written in Swift to a Linux server. Here in Part 2, we will finish the app, showing all of the Swift code. We will also show how to get an API token, how to test the app and deploy it on Heroku, and finally how to launch it.

Show Me the Swift Code!

Finally, some Swift code! To create our bot, we need to edit our main.swift file to contain our bot logic:

import String
import SlackKit

class Leaderboard: MessageEventsDelegate {

    // A dictionary to hold our leaderboard
    var leaderboard: [String: Int] = [String: Int]()
    let atSet = CharacterSet(characters: ["@"])

    // A SlackKit client instance
    let client: SlackClient

    // Initialize the leaderboard with a valid Slack API token
    init(token: String) {
        client = SlackClient(apiToken: token)
        client.messageEventsDelegate = self
    }

    // Enum to hold commands the bot knows
    enum Command: String {
        case Leaderboard = "leaderboard"
    }

    // Enum to hold logic that triggers certain bot behaviors
    enum Trigger: String {
        case PlusPlus = "++"
        case MinusMinus = "--"
    }

    // MARK: MessageEventsDelegate

    // Listen to the messages that are coming in over the Slack RTM connection
    func messageReceived(message: Message) {
        listen(message: message)
    }

    func messageSent(message: Message) {}
    func messageChanged(message: Message) {}
    func messageDeleted(message: Message?) {}

    // MARK: Leaderboard Internal Logic

    private func listen(message: Message) {
        // If a message contains our bot's user ID and a recognized command, handle that command
        if let id = client.authenticatedUser?.id, text = message.text {
            if text.lowercased().contains(query: Command.Leaderboard.rawValue) && text.contains(query: id) {
                handleCommand(command: .Leaderboard, channel: message.channel)
            }
        }
        // If a message contains a trigger value, handle that trigger
        if message.text?.contains(query: Trigger.PlusPlus.rawValue) == true {
            handleMessageWithTrigger(message: message, trigger: .PlusPlus)
        }
        if message.text?.contains(query: Trigger.MinusMinus.rawValue) == true {
            handleMessageWithTrigger(message: message, trigger: .MinusMinus)
        }
    }

    // Text parsing can be messy when you don't have Foundation...
    private func handleMessageWithTrigger(message: Message, trigger: Trigger) {
        if let text = message.text, start = text.index(of: "@"), end = text.index(of: trigger.rawValue) {
            let string = String(text.characters[start...end].dropLast().dropFirst())
            let users = client.users.values.filter { $0.id == self.userID(string: string) }
            // If the receiver of the trigger is a user, use their user ID
            if users.count > 0 {
                let idString = userID(string: string)
                initializationForValue(dictionary: &leaderboard, value: idString)
                scoringForValue(dictionary: &leaderboard, value: idString, trigger: trigger)
            // Otherwise just store the receiver value as is
            } else {
                initializationForValue(dictionary: &leaderboard, value: string)
                scoringForValue(dictionary: &leaderboard, value: string, trigger: trigger)
            }
        }
    }

    // Handle recognized commands
    private func handleCommand(command: Command, channel: String?)
    {
        switch command {
        case .Leaderboard:
            // Send message to the channel with the leaderboard attached
            if let id = channel {
                client.webAPI.sendMessage(channel: id, text: "Leaderboard", linkNames: true, attachments: [constructLeaderboardAttachment()], success: { (response) in }, failure: { (error) in
                    print("Leaderboard failed to post due to error: \(error)")
                })
            }
        }
    }

    private func initializationForValue(dictionary: inout [String: Int], value: String) {
        if dictionary[value] == nil {
            dictionary[value] = 0
        }
    }

    private func scoringForValue(dictionary: inout [String: Int], value: String, trigger: Trigger) {
        switch trigger {
        case .PlusPlus:
            dictionary[value]? += 1
        case .MinusMinus:
            dictionary[value]? -= 1
        }
    }

    // MARK: Leaderboard Interface

    private func constructLeaderboardAttachment() -> Attachment? {

Great! But we'll need to replace the dummy API token with the real deal before anything will work.

Getting an API Token

We need to create a bot integration in Slack. You'll need a Slack instance that you have administrator access to. If you don't already have one of those to play with, go sign up. Slack is free for small teams:

1. Create a new bot here.
2. Enter a name for your bot. I'm going to use "leaderbot".
3. Click on "Add Bot Integration".
4. Copy the API token that Slack generates and replace the placeholder token at the bottom of main.swift with it.

Testing 1, 2, 3…

Now that we have our API token, we're ready to do some local testing. Back in Xcode, select the leaderbot command-line application target and run your bot (⌘+R). When we go and look at Slack, our leaderbot's activity indicator should show that it's online. It's alive! To ensure that it's working, we should give our helpful little bot some karma points:

@leaderbot++

And ask it to see the leaderboard:

@leaderbot leaderboard

Head in the Clouds

Now that we've verified that our leaderboard bot works locally, it's time to deploy it. We are deploying on Heroku, so if you don't have an account, go and sign up for a free one.

First, we need to add a Procfile for Heroku. Back in the terminal, run:

echo slackbot: .build/debug/leaderbot > Procfile

Next, let's check in our code:

git init
git add .
git commit -am 'leaderbot powering up'

Finally, we'll set up Heroku:

1. Install the Heroku toolbelt.
2. Log in to Heroku in your terminal: heroku login
3. Create our application on Heroku and set our buildpack: heroku create --buildpack https://github.com/pvzig/heroku-buildpack-swift.git leaderbot
4. Set up our Heroku remote: heroku git:remote -a leaderbot
5. Push to master: git push heroku master

Once you push to master, you'll see Heroku going through the process of building your application.

Launch!

When the build is complete, all that's left to do is to run our bot:

heroku run:detached slackbot

Like when we tested locally, our bot should become active and respond to our commands!

You're Done!

Congratulations, you've successfully built and deployed a Slack bot written in Swift onto a Linux server!

Built With:

Jay: Pure-Swift JSON parser and formatter
kylef's Heroku buildpack for Swift
Open Swift: Open source cross-project standards for Swift
SlackKit: A Slack client library
Zewo: Open source libraries for modern server software

Disclaimer

The Linux version of SlackKit should be considered an alpha release. It's a fun tech demo to show what's possible with Swift on the server, not something to be relied upon. Feel free to report issues you come across.

About the author

Peter Zignego is an iOS developer in Durham, North Carolina, USA.
You're Done!

Congratulations, you've successfully built and deployed a Slack bot written in Swift onto a Linux server!

Built With:

- Jay: Pure-Swift JSON parser and formatter
- kylef's Heroku buildpack for Swift
- Open Swift: Open source cross-project standards for Swift
- SlackKit: A Slack client library
- Zewo: Open source libraries for modern server software

Disclaimer

The Linux version of SlackKit should be considered an alpha release. It's a fun tech demo to show what's possible with Swift on the server, not something to be relied upon. Feel free to report issues you come across.

About the author

Peter Zignego is an iOS developer in Durham, North Carolina, USA. He writes at bytesized.co, tweets at @pvzig, and freelances at Launch Software.
Create a User Profile System and use the Null Coalesce Operator

Packt
12 Oct 2016
15 min read
In this article by Jose Palala and Martin Helmich, authors of PHP 7 Programming Blueprints, you will learn how to build a simple profiles page with listed users which you can click on, and create a simple CRUD-like system which will enable us to register new users to the system and delete users for banning purposes. (For more resources related to this topic, see here.)

You will learn to use the PHP 7 null coalesce operator so that you can show data if there is any, or just display a simple message if there isn't any.

Let's create a simple UserProfile class. The ability to create classes has been available since PHP 5. A class in PHP starts with the word class, followed by the name of the class:

class UserProfile
{
    private $table = 'user_profiles';
}

We've added a private property where we define which database table the class relates to. Let's add two functions, also known as methods, inside the class to simply fetch the data from the database:

function fetch_one($id)
{
    $link = mysqli_connect('127.0.0.1', 'root', 'apassword', 'my_database');
    $query = "SELECT * FROM " . $this->table . " WHERE `id` = '" . $id . "'";
    $results = mysqli_query($link, $query);

    return $results;
}

function fetch_all()
{
    $link = mysqli_connect('127.0.0.1', 'root', 'apassword', 'my_database');
    $query = "SELECT * FROM " . $this->table;
    $results = mysqli_query($link, $query);

    return $results;
}

The null coalesce operator

We can use PHP 7's null coalesce operator to check whether our results contain anything, or to return a defined text which we can check in the views, which are responsible for displaying any data. Let's put the define statements in a file of their own, definitions.php:

//definitions.php
define('NO_RESULTS_MESSAGE', 'No results found');

Then we require that file and fall back to the defined message when a query returns nothing:

require('definitions.php');

function fetch_all()
{
    // ...same lines as before...
    $results = $results ?? NO_RESULTS_MESSAGE;

    return $results;
}

On the client side, we'll need to come up with a template to show the list of user profiles. Let's create a basic HTML block to show that each profile can be a div element with several list item elements to output each field. In the following function, we need to make sure that at least the name and the age have been filled in; the function then simply returns the assembled string when called:

function profile_template($name, $age, $country)
{
    $name = $name ?? null;
    $age = $age ?? null;
    if ($name === null || $age === null) {
        return 'Name or Age need to be set';
    } else {
        return '<div>
            <li>Name: ' . $name . '</li>
            <li>Age: ' . $age . '</li>
            <li>Country: ' . $country . '</li>
        </div>';
    }
}

Separation of concerns

In a proper MVC architecture, we need to separate the view from the models that get our data, and the controllers will be responsible for handling business logic. In our simple app, we will skip the controller layer since we just want to display the user profiles on one public-facing page. The preceding function is also known as the template render part of an MVC architecture.

While there are frameworks available for PHP that use the MVC architecture out of the box, for now we can stick to what we have and make it work. PHP frameworks can benefit a lot from the null coalesce operator. In some code bases I've worked with, we used the ternary operator a lot, but still had to add more checks to ensure a value was not falsy. Furthermore, the ternary operator can get confusing, and takes some getting used to. The other alternative is the isset() function. However, due to the nature of isset(), falsy values that are actually set, such as 0 or an empty string, still count as set, so additional checks are often needed.
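To make the difference concrete, here is a small standalone comparison (the array and the fallback value are just for illustration):

$input = ['name' => '0'];  // '0' is falsy in PHP, but the key is set

// Ternary shorthand: '0' is falsy, so this unexpectedly falls back
$name1 = $input['name'] ?: 'anonymous';                         // 'anonymous'

// isset() check: verbose, but keeps the set-but-falsy value
$name2 = isset($input['name']) ? $input['name'] : 'anonymous';  // '0'

// Null coalesce: falls back only when the key is missing or null
$name3 = $input['name'] ?? 'anonymous';                         // '0'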
Creating views

Now that we have our model and template render function complete, we just need to create the view with which we can look at each profile. Our view will be put inside a foreach block, and we'll use the template we wrote to render the right values:

//listprofiles.php
<!doctype html>
<html>
<head>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
</head>
<body>
<?php
foreach ($results as $item) {
    echo profile_template($item->name, $item->age, $item->country);
}
?>
</body>
</html>

Let's put the code above into index.php. While we could install the Apache server, configure it to run PHP, set up virtual hosts, and put our PHP code into an Apache folder, that would take time. For the purposes of testing this out, we can just run PHP's built-in development server (read more at http://php.net/manual/en/features.commandline.webserver.php) from the project folder, inside a terminal:

php -S localhost:8000

If we open up our browser, we should see the message "No results found" for now, which means we need to populate our database. If you get a database connection error, be sure to replace the credentials in each of the mysqli_connect calls with the correct ones for your setup.

To supply data into our database, we can create a simple SQL script like this:

INSERT INTO user_profiles (name, age, country) VALUES ('Chin Wu', 30, 'Mongolia');
INSERT INTO user_profiles (name, age, country) VALUES ('Erik Schmidt', 22, 'Germany');
INSERT INTO user_profiles (name, age, country) VALUES ('Rashma Naru', 33, 'India');

Let's save it in a file such as insert_profiles.sql. In the same directory as the SQL file, log on to the MySQL client by using the following command:

mysql -u root -p

Then select your database:

mysql> use <database>;

Import the script by running the source command:

mysql> source insert_profiles.sql

Now our user profiles page should list the profiles we just inserted.

Create a profile input form

Now let's create the HTML form for users to enter their profile data. Our profiles app would be of no use if we didn't have a simple way for a user to enter their user profile details. We'll create the profile input form like this:

//create_profile.php
<html>
<body>
<form action="post_profile.php" method="POST">
    <label>Name</label><input name="name">
    <label>Age</label><input name="age">
    <label>Country</label><input name="country">
    <input type="submit" value="Create Profile">
</form>
</body>
</html>

For this profile post, we'll need to create a PHP script to take care of anything the user posts. It will create an SQL statement from the input values and output whether or not they were inserted. We can use the null coalesce operator again to verify that the user has posted all values and left nothing undefined or null:

$name = $_POST['name'] ?? "";
$age = $_POST['age'] ?? "";
$country = $_POST['country'] ?? "";

This prevents us from accumulating errors while inserting data into our database. First, let's create a variable to hold each of the inputs in one array:

$input_values = [
    'name' => $name,
    'age' => $age,
    'country' => $country
];

The preceding code uses the PHP 5.4+ way of writing arrays. In PHP 5.4+, it is no longer necessary to use the array() construct; the author personally likes the new syntax better.
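For comparison, here is the same array in both notations:

// Before PHP 5.4
$input_values = array(
    'name' => $name,
    'age' => $age,
    'country' => $country
);

// PHP 5.4 and later
$input_values = ['name' => $name, 'age' => $age, 'country' => $country];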
We should create a new method in our UserProfile class to accept these values:

class UserProfile
{
    public function insert_profile($values)
    {
        $link = mysqli_connect('127.0.0.1', 'username', 'password', 'databasename');
        $q = "INSERT INTO " . $this->table . " VALUES ('" . $values['name'] . "', '" . $values['age'] . "', '" . $values['country'] . "')";

        return mysqli_query($link, $q);
    }
}

Instead of creating a parameter in our function to hold each argument, as we did with our profile template render function, we simply use an array to hold our values. This way, if a new field needs to be inserted into our database, we can just add another field to the SQL insert statement.

While we are at it, let's create the edit profile section. For now, we'll assume that whoever is using this edit profile page is the administrator of the site. We'll need a page where, provided that $_GET['id'] has been set, we fetch that user from the database and display their data in the form:

<?php
require('class/userprofile.php'); // contains the UserProfile class

$id = $_GET['id'] ?? 'No ID';
// if id was a string, i.e. "No ID", this falls through to the else block
if (is_numeric($id)) {
    $profile = new UserProfile();
    // get data from our database
    $results = $profile->fetch_one($id);
    if ($results && $results->num_rows > 0) {
        while ($obj = $results->fetch_object()) {
            $name = $obj->name;
            $age = $obj->age;
            $country = $obj->country;
        }
        // display form with a hidden field containing the value of the ID
?>
<form action="post_update_profile.php" method="post">
    <input type="hidden" name="id" value="<?=$id?>">
    <label>Name</label><input name="name" value="<?=$name?>">
    <label>Age</label><input name="age" value="<?=$age?>">
    <label>Country</label><input name="country" value="<?=$country?>">
    <input type="submit" value="Update Profile">
</form>
<?php
    } else {
        exit('No such user');
    }
} else {
    echo $id; // this should be 'No ID'
    exit;
}

Notice that we're using what is known as the shortcut echo statement (<?=) in the form. It makes our code simpler and easier to read. Since we're using PHP 7, this feature comes out of the box. Once someone submits the form, the data goes into our $_POST variable, and we'll create a new update function in our UserProfile class.
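The article stops short of showing that update method, so here is a minimal sketch of what it might look like, assuming the same mysqli-style connection and table used by insert_profile() above (the name update_profile is our own choice, not from the original):

// Hypothetical update method, mirroring insert_profile() above
public function update_profile($id, $values)
{
    $link = mysqli_connect('127.0.0.1', 'username', 'password', 'databasename');
    $q = "UPDATE " . $this->table . " SET " .
         "name = '" . $values['name'] . "', " .
         "age = '" . $values['age'] . "', " .
         "country = '" . $values['country'] . "' " .
         "WHERE id = '" . $id . "'";

    return mysqli_query($link, $q); // true on success, false on failure
}

post_update_profile.php would then read the hidden id field and the other inputs from $_POST, collect them into an array as before, and call this method.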
Here's how the delete_profile.php page would look: <?php //delete_profile.php $connection = mysqli_connect('localhost','<username>','<password>', '<databasename>'); $id = $_GET['id'] ?? 'No ID'; if(is_numeric($id)) { mysqli_query( $connection, "DELETE FROM userprofiles WHERE id = '" .$id . "'"); } else { echo $id; } i(!is_numeric($id)) { exit('Error: non numeric $id'); } else { echo "Profile #" . $id . " has been deleted"; ?> Of course, since we might have a lot of user profiles in our database, we have to create a simple pagination. In any pagination system, you just need to figure out the total number of rows, and how many rows you want displayed per page. We can create a function that will be able to return a URL that contains the page number and how many to view per page. From our queries database, we first create a new function for us to select only up to the total number of items in our database: class UserProfile{ // …. Etc … function count_rows($table) { $dbconn = new mysqli('localhost', 'root', 'somepass', 'databasename'); $query = $dbconn->query("select COUNT(*) as num from '". $table . "'"); $total_pages = mysqli_fetch_array($query); return $total_pages['num']; //fetching by array, so element 'num' = count } For our pagination, we can create a simple paginate function which accepts the base_url of the page where we have pagination, the rows per page — also known as the number of records we want each page to have — and the total number of records found: require('definitions.php'); require('db.php'); //our database class Function paginate ($baseurl, $rows_per_page, $total_rows) { $pagination_links = array(); //instantiate an array to hold our html page links //we can use null coalesce to check if the inputs are null ( $total_rows || $rows_per_page) ?? exit('Error: no rows per page and total rows); //we exit with an error message if this function is called incorrectly $pages = $total_rows % $rows_per_page; $i= 0; $pagination_links[$i] = "<a href="http://". $base_url . "?pagenum=". $pagenum."&rpp=".$rows_per_page. ">" . $pagenum . "</a>"; } return $pagination_links; } This function will help display the above page links in a table: function display_pagination($links) { $display = ' <div class="pagination">'; <table><tr>'; foreach ($links as $link) { echo "<td>" . $link . "</td>"; } $display .= '</tr></table></div>'; return $display; } Notice that we're following the principle that there should rarely be any echo statements inside a function. This is because we want to make sure that other users of these functions are not confused when they debug some mysterious output on their page. By requiring the programmer to echo out whatever the functions return, it becomes easier to debug our program. Also, we're following the separation of concerns—our code doesn't output the display, it just formats the display. So any future programmer can just update the function's internal code and return something else. It also makes our function reusable; imagine that in the future someone uses our function—this way, they won't have to double check that there's some misplaced echo statement within our functions. A note on alternative short tags As you know, another way to echo is to use  the <?= tag. You can use it like so: <?="helloworld"?>.These are known as short tags. In PHP 7, alternative PHP tags have been removed. The RFC states that <%, <%=, %> and <script language=php>  have been deprecated. 
Since we have laid out the groundwork of creating paginated links, we now just have to invoke our functions. The following script is all that is needed to create a paginated page using the preceding functions:

$mysqli = mysqli_connect('localhost', '<username>', '<password>', '<dbname>');

$limit = $_GET['rpp'] ?? 10;      // how many items to show per page, default 10
$pagenum = $_GET['pagenum'] ?? 0; // what page we are on

if ($pagenum) {
    $start = ($pagenum - 1) * $limit; // first item to display on this page
} else {
    $start = 0; // if no page var is given, set start to 0
}

/* Display records here */
$sql = "SELECT * FROM userprofiles LIMIT $start, $limit";
$rs_result = mysqli_query($mysqli, $sql); // run the query
while ($row = mysqli_fetch_assoc($rs_result)) {
?>
    <tr>
        <td><?php echo $row['name']; ?></td>
        <td><?php echo $row['age']; ?></td>
        <td><?php echo $row['country']; ?></td>
    </tr>
<?php
}

/* Let's show our page links */
/* get the total number of records */
$record_count = $db->count_rows('userprofiles');
$pagination_links = paginate('listprofiles.php', $limit, $record_count);
echo display_pagination($pagination_links);

The HTML output of our page links in listprofiles.php will look something like this:

<div class="pagination"><table>
<tr>
    <td><a href="listprofiles.php?pagenum=1&rpp=10">1</a></td>
    <td><a href="listprofiles.php?pagenum=2&rpp=10">2</a></td>
    <td><a href="listprofiles.php?pagenum=3&rpp=10">3</a></td>
</tr>
</table></div>

Summary

As you can see, we have a lot of use cases for the null coalesce operator. We learned how to make a simple user profile system, and how to use PHP 7's null coalesce feature when fetching data from the database, which returns null if there are no records. We also learned that the null coalesce operator is similar to a ternary operator, except that it falls back to its right-hand value only when the left-hand value is null or not set.

Resources for Article:

Further resources on this subject:
- Running Simpletest and PHPUnit [article]
- Mapping Requirements for a Modular Web Shop App [article]
- HTML5: Generic Containers [article]
Server-side Swift: Building a Slack Bot, Part 1

Peter Zignego
12 Oct 2016
5 min read
As a remote iOS developer, I love Slack. It's my meeting room and my water cooler over the course of a work day. If you're not familiar with Slack, it is a group communication tool popular in Silicon Valley and beyond. What makes Slack valuable, beyond replacing email as the go-to communication method for businesses, is that it is more than chat; it is a platform. Thanks to Slack's open attitude toward developers and its API, hundreds of developers have been building what have become known as Slack bots.

There are many different libraries available to help you start writing your Slack bot, covering a wide range of programming languages. I wrote a library in Apple's new programming language (Swift) for this very purpose, called SlackKit. SlackKit wasn't very practical initially: it only ran on iOS and OS X. On the modern web, you need to support Linux to deploy on Amazon Web Services, Heroku, or hosted server companies such as Linode and Digital Ocean. But last June, Apple open sourced Swift, including official support for Linux (Ubuntu 14 and 15, specifically). This made it possible to deploy Swift code on Linux servers, and developers hit the ground running to build out the infrastructure needed to make Swift a viable language for server applications. Even with this huge developer effort, it is still early days for server-side Swift. Apple's port of Foundation to Linux is a huge undertaking, as is the work to get libdispatch (a concurrency framework that provides much of the underpinning for Foundation) up and running. In addition to rough official tooling, writing code for server-side Swift can be a bit like hitting a moving target, with biweekly snapshot releases and multiple, ABI-incompatible versions to target.

Zewo to Sixty on Linux

Fortunately, there are some good options for deploying Swift code on servers right now, even with Apple's libraries in flux. I'm going to focus on one in particular: Zewo. Zewo is modular by design, allowing us to use the Swift Package Manager to pull in only what we need instead of a monolithic framework. It's open source and has a great community of developers that spans the globe. If you're interested in the world of server-side Swift, you should get involved! Oh, and of course they have a Slack.

Using Zewo and a few other open source libraries, I was able to build a version of SlackKit that runs on Linux.

A Swift Tutorial

In this two-part post series I detail a step-by-step guide to writing a Slack bot in Swift and deploying it to Heroku. I'm going to be using OS X, but this is also achievable on Linux using the editor of your choice.

Prerequisites

Install Homebrew:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Install swiftenv:

brew install kylef/formulae/swiftenv

Configure your shell:

echo 'if which swiftenv > /dev/null; then eval "$(swiftenv init -)"; fi' >> ~/.bash_profile

Download and install the latest Zewo-compatible snapshot:

swiftenv install DEVELOPMENT-SNAPSHOT-2016-05-09-a
swiftenv local DEVELOPMENT-SNAPSHOT-2016-05-09-a

Install and link OpenSSL:

brew install openssl
brew link openssl --force
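Before moving on, it's worth confirming that the snapshot toolchain is the one your shell will actually use; both of these are standard swiftenv/Swift commands:

swiftenv version
swift --version

The output should reference the DEVELOPMENT-SNAPSHOT-2016-05-09-a toolchain installed above.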
Let's Keep Score

The sample application we'll be building is a leaderboard for Slack, like PlusPlus++ by Betaworks. It works like this: add a point for every @thing++, subtract a point for every @thing--, and show a leaderboard when asked @botname leaderboard.

First, we need to create the directory for our application and initialize the basic project structure:

mkdir leaderbot && cd leaderbot
swift build --init

Next, we need to edit Package.swift to add our dependency, SlackKit:

import PackageDescription

let package = Package(
    name: "Leaderbot",
    targets: [],
    dependencies: [
        .Package(url: "https://github.com/pvzig/SlackKit.git", majorVersion: 0, minor: 0),
    ]
)

SlackKit is dependent on several Zewo libraries, but thanks to the Swift Package Manager, we don't have to worry about importing them explicitly.

Then we need to build our dependencies:

swift build

And our development environment (we need to pass in some linker flags so that swift build knows where to find the version of OpenSSL we installed via Homebrew and the C modules that some of our Zewo libraries depend on):

swift build -Xlinker -L$(pwd)/.build/debug/ -Xswiftc -I/usr/local/include -Xlinker -L/usr/local/lib

In Part 2, I will show all of the Swift code, how to get an API token, how to test the app and deploy it on Heroku, and finally how to launch it.

Disclaimer

The Linux version of SlackKit should be considered an alpha release. It's a fun tech demo to show what's possible with Swift on the server, not something to be relied upon. Feel free to report issues you come across.

About the author

Peter Zignego is an iOS developer in Durham, North Carolina. He writes at bytesized.co, tweets @pvzig, and freelances at Launch Software.