Deployment
Deployment is one of the lengthier tasks when it comes to releasing the final product. Generally, it involves logging into a remote server, manually finding the correct files to copy, restarting the server and praying we didn't forget anything. There may also be other steps involved which could further complicate this process, such as performing a backup of the current version or modifying a remote configuration file. Each one of these steps can be catered for with Grunt, either with plugins, which provide useful tasks, or with our own custom tasks where we may wield the complete power of Node.js.
As mentioned in the first section, we can use Grunt to script these types of processes, thus removing the element of human error. Human error is probably most dangerous at the deployment step, because a mistake there can easily cause server downtime, which often translates into monetary losses.
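To make this concrete, a scripted deployment often boils down to a single task alias that chains the individual steps in a fixed order. The following is a minimal sketch; backup, upload, and restart are hypothetical placeholder tasks that we would register ourselves or load from plugins:
//Hypothetical sketch of a deployment task alias
module.exports = function(grunt) {
  // 'backup', 'upload' and 'restart' are placeholders for the real steps:
  // backing up the current version, copying the new files across, and
  // restarting the server.
  grunt.registerTask('deploy', ['backup', 'upload', 'restart']);
};
Running grunt deploy then performs every step in the same order every time, which is exactly what removes the element of human error.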
In the following subsections, we'll cover three common methods of deploying files to our production servers: FTP, SFTP, and S3. We won't, however, cover the creation of custom tasks and plugins in this section, as we will go through these topics in depth in Chapter 3, Using Grunt.
FTP
The File Transfer Protocol specification was released in 1980, and thanks to its maturity and ubiquity, FTP became the standard way to transfer files across the Internet. Since FTP operates over a TCP connection, and given that Node.js excels at building fast network applications, an FTP client has been implemented in JavaScript in approximately 1,000 lines, which is tiny! It can be found at http://gswg.io#jsftp.
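To give a sense of what this client looks like, below is a rough sketch of using jsftp directly from Node.js; the host, credentials, and file names are placeholders, and the exact method names may differ slightly between jsftp versions:
//Hypothetical sketch of using jsftp directly
var JSFtp = require('jsftp');

// Connection details are placeholders for a real FTP server.
var ftp = new JSFtp({
  host: 'localhost',
  port: 21,
  user: 'john',
  pass: 'smith'
});

// Upload a local file to the remote server.
ftp.put('build/foo.js', '/foo.js', function(err) {
  if (err) {
    console.error('Upload failed:', err);
  } else {
    console.log('Upload complete');
  }
  // Closing the connection is omitted for brevity.
});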
A Grunt plugin has been made using this implementation, and this plugin can be found at http://gswg.io#grunt-ftp-deploy. In the following example, we'll use this plugin along with a local FTP server:
//Code example 09-ftp
module.exports = function(grunt) {
  // Load the plugin that provides the "ftp-deploy" task.
  grunt.loadNpmTasks('grunt-ftp-deploy');
  // Project configuration.
  grunt.initConfig({
    'ftp-deploy': {
      target1: {
        auth: {
          host: 'localhost',
          port: 21,
          authKey: 'my-key'
        },
        src: 'build',
        dest: 'build'
      }
    }
  });
  // Define the default task
  grunt.registerTask('default', ['ftp-deploy']);
};
When the ftp-deploy task is run, it looks for an .ftppass file, which contains sets of usernames and passwords. When placing a Grunt environment inside a version control system, we must be wary of unauthorized access to login credentials. Therefore, it is good practice to place these credentials in an external file that is not under version control. We could also use system environment variables to achieve the same effect, as sketched after the .ftppass example below.
Our Gruntfile.js above has set the authKey option to "my-key"; this tells ftp-deploy to look for this property inside our .ftppass file (which is in JSON format). So, we should create a .ftppass file like the following:
{ "my-key": { "username": "john", "password": "smith" } }
Tip
For testing purposes, there are free FTP servers available, such as PureFTPd (http://gswg.io#pureftpd) for Mac OS X and FileZilla Server (http://gswg.io#filezilla-server) for Windows.
Once we have an FTP server ready, with the correct username and password, we are ready to transfer. Running this example should produce the following:
$ grunt
Running "ftp-deploy:target1" (ftp-deploy) task
>> New remote folder created /build/
>> Uploaded file: foo.js to: /
>> FTP upload done!
FTP is widespread and commonly supported; however, as technology and software improve, as legacy systems are deprecated, and as the computational cost of data encryption becomes negligible, the use of unencrypted protocols like FTP is in decline. This brings us to SFTP.
SFTP
The Secure File Transfer Protocol (SFTP) is often incorrectly assumed to be a normal FTP connection tunneled through an SSH (Secure Shell) connection. In fact, SFTP is a separate file transfer protocol in its own right, although it does run over SSH.
In this example, we are copying three HTML files from our local build directory to the remote tmp directory. Again, to avoid placing credentials inside our Gruntfile.js, we store our username and password inside a credentials.json file. This example uses the Grunt plugin http://gswg.io#grunt-ssh. This plugin actually provides two tasks: sftp and sshexec; however, in this example we'll only be using the sftp task:
//Code example 10-sftp
module.exports = function(grunt) {
  // Load the plugin that provides the "sftp" task.
  grunt.loadNpmTasks('grunt-ssh');
  // Project configuration.
  grunt.initConfig({
    credentials: grunt.file.readJSON('credentials.json'),
    sftp: {
      options: {
        host: 'localhost',
        username: '<%= credentials.username %>',
        password: '<%= credentials.password %>',
        path: '/tmp/',
        srcBasePath: 'build/'
      },
      target1: {
        src: 'build/{foo,bar,bazz}.html'
      }
    }
  });
  // Define the default task
  grunt.registerTask('default', ['sftp']);
};
At the top of our configuration, we created a new credentials property to store the result of reading our credentials.json file. Using Grunt templates, which we cover in Chapter 2, Setting Up Grunt, we can list the path to the property we wish to substitute in.
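A minimal credentials.json for this example only needs the two properties referenced by the templates; the values below are placeholders:
{
  "username": "john",
  "password": "smith"
}
Once this file is prepared, we can execute grunt: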
$ grunt
Running "sftp:target1" (sftp) task
Done, without errors.
We notice that the sftp task didn't display any detailed information. However, if we run Grunt with the verbose flag, grunt -v, we should see this snippet at the end of our output:
Connection :: connect
copying build/bar.html to /tmp/bar.html
copied build/bar.html to /tmp/bar.html
copying build/bazz.html to /tmp/bazz.html
copied build/bazz.html to /tmp/bazz.html
copying build/foo.html to /tmp/foo.html
copied build/foo.html to /tmp/foo.html
Connection :: end
Connection :: close
Done, without errors.
This output confirms that our three HTML files were indeed copied from the local build directory to the remote /tmp directory.
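Although we only needed the sftp task here, the same plugin also provides the sshexec task, which runs commands on the remote machine and suits follow-up steps such as restarting a server once the files are in place. The following is a minimal sketch; the command shown is a placeholder, and the same credentials.json file is reused:
//Hypothetical sketch using the sshexec task from grunt-ssh
module.exports = function(grunt) {
  // Load the plugin that provides the "sshexec" task.
  grunt.loadNpmTasks('grunt-ssh');
  grunt.initConfig({
    credentials: grunt.file.readJSON('credentials.json'),
    sshexec: {
      options: {
        host: 'localhost',
        username: '<%= credentials.username %>',
        password: '<%= credentials.password %>'
      },
      restart: {
        // Placeholder command; replace with whatever restarts our application.
        command: 'service my-app restart'
      }
    }
  });
  // Define the default task
  grunt.registerTask('default', ['sshexec:restart']);
};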
S3
Amazon Web Services' Simple Storage Service (S3) is not a deployment method (or protocol) like FTP and SFTP, but rather a service. Nevertheless, from a deployment perspective they are quite similar, as they all require some configuration, including destination and authentication information.
Hosting web applications in the Amazon cloud has grown quite popular in recent years. The relatively low prices of S3 make it a good choice for static file hosting, especially as running your own servers can introduce many unexpected costs. AWS has released a Node.js client library for many of its services. Since there were no Grunt plugins utilizing this library at the time, I decided to make one. So, in the following example, we are using http://gswg.io#grunt-aws. Below, we are attempting to upload all of the files inside the build directory into the root of the chosen bucket:
//Code example 11-aws
module.exports = function(grunt) {
  // Load the plugin that provides the "s3" task.
  grunt.loadNpmTasks('grunt-aws');
  // Project configuration.
  grunt.initConfig({
    aws: grunt.file.readJSON("credentials.json"),
    s3: {
      options: {
        accessKeyId: "<%= aws.accessKeyId %>",
        secretAccessKey: "<%= aws.secretAccessKey %>",
        bucket: "..."
      },
      // Upload all files within build/ to the root of the bucket
      build: {
        cwd: "build/",
        src: "**"
      }
    }
  });
  // Define the default task
  grunt.registerTask('default', ['s3']);
};
Again, similar to the SFTP example, we are using an external credentials.json file to house our sensitive information. So, before we can run this example, we first need to create a credentials.json file, which looks like the following:
{ "accessKeyId": "AKIAIMK...", "secretAccessKey": "bt5ozy7nP9Fl9..." }
Next, we set the bucket option to the name of the bucket we wish to upload to, and then we can execute grunt:
$ grunt
Running "s3:build" (s3) task
Retrieving list of existing objects...
>> Put 'foo.html'
>> Put 'bar.js'
>> Put 2 files
Done, without errors.