Introducing built-in modules
Node.js is a technology that we can use to write backend applications. As such, we need to perform various tasks, such as creating servers, working with files, and running external programs. Thankfully, we have a bunch of helpful built-in modules at our disposal.
Creating a server with the HTTP module
We already used the HTTP module. It's perhaps the most important one for web development because it starts a server that listens on a particular port:
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(9000, '127.0.0.1');
console.log('Server running at http://127.0.0.1:9000/');
We have a createServer method that returns a new web server object. In most cases, we run the listen method. If needed, there is close, which stops the server from accepting new connections. The callback function that we pass always accepts the request (req) and response (res) objects. We can use the first one to retrieve information about the incoming request, such as GET or POST parameters.
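For example, the req object exposes the HTTP method and the requested URL, and the built-in url module can parse the query string for us. Here is a small sketch of reading a GET parameter; the port number and the name parameter are used only for illustration:

var http = require('http');
var url = require('url');

http.createServer(function (req, res) {
  // Passing true as the second argument also parses the query string
  var parsed = url.parse(req.url, true);
  res.writeHead(200, {'Content-Type': 'text/plain'});
  // For a request like /?name=John, parsed.query is { name: 'John' }
  res.end('Method: ' + req.method + ', name: ' + (parsed.query.name || 'unknown') + '\n');
}).listen(9000, '127.0.0.1');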
Reading and writing to files
The module that is responsible for the read and write processes is called fs (the name is derived from filesystem). Here is a simple example that illustrates how to write data to a file:
var fs = require('fs');
fs.writeFile('data.txt', 'Hello world!', function (err) {
  if (err) {
    throw err;
  }
  console.log('It is saved!');
});
Most of the API functions have synchronous versions. The preceding script could be written with writeFileSync, as follows:
fs.writeFileSync('data.txt', 'Hello world!');
However, the usage of the synchronous versions of the functions in this module blocks the event loop. This means that while operating with the filesystem, our JavaScript code is paused. Therefore, it is a best practice with Node to use asynchronous versions of methods wherever possible.
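Note that the synchronous functions report problems by throwing, so in a real script we would probably wrap the call in a try...catch block. A minimal sketch:

var fs = require('fs');
try {
  // Blocks the event loop until the file is written
  fs.writeFileSync('data.txt', 'Hello world!');
  console.log('It is saved!');
} catch (err) {
  console.log('Could not write the file: ' + err.message);
}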
The reading of the file is almost the same. We should use the readFile method in the following way:
fs.readFile('data.txt', function (err, data) {
  if (err) throw err;
  console.log(data.toString());
});
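The data argument is a Buffer, which is why we call toString. If we only need text, we can pass the encoding as a second argument and receive a string directly, as in the following sketch:

fs.readFile('data.txt', 'utf8', function (err, text) {
  if (err) throw err;
  // With an encoding supplied, we get a string instead of a Buffer
  console.log(text);
});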
Working with events
The observer design pattern is widely used in the world of JavaScript. This is where the objects in our system subscribe to the changes happening in other objects. Node.js has a built-in module to manage events. Here is a simple example:
var events = require('events');
var eventEmitter = new events.EventEmitter();
var somethingHappen = function () {
  console.log('Something happened!');
};
eventEmitter
  .on('something-happen', somethingHappen)
  .emit('something-happen');
The eventEmitter object is the object that we subscribed to. We did this with the help of the on method. The emit function fires the event and the somethingHappen handler is executed.
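The emit method can also carry data; every additional argument is passed on to the listeners. Here is a small variation of the preceding snippet, where the event name and payload are chosen only for illustration:

var events = require('events');
var eventEmitter = new events.EventEmitter();

eventEmitter.on('something-happen', function (who, when) {
  // The arguments passed to emit arrive here in the same order
  console.log('Something happened to ' + who + ' at ' + when);
});

eventEmitter.emit('something-happen', 'the book', new Date().toString());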
The events module provides the necessary functionality, but we need to use it in our own classes. Let's get the book idea from the previous section and make it work with events. Once someone rates the book, we will dispatch an event in the following manner:
// book.js
var util = require('util');
var events = require('events');
var Class = function () {};
util.inherits(Class, events.EventEmitter);
Class.prototype.ratePoints = 0;
Class.prototype.rate = function (points) {
  this.ratePoints = points;
  this.emit('rated');
};
Class.prototype.getPoints = function () {
  return this.ratePoints;
};
module.exports = Class;
We want to inherit the behavior of the EventEmitter object. The easiest way to achieve this in Node.js is by using the utility module (util) and its inherits method. The defined class could be used like this:
var BookClass = require('./book.js');
var book = new BookClass();
book.on('rated', function () {
  console.log('Rated with ' + book.getPoints());
});
book.rate(10);
We again used the on method to subscribe to the rated event. The book class dispatches the event once we set the points, so our handler runs and the terminal shows the Rated with 10 text.
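EventEmitter also gives us methods such as once and removeListener for controlling subscriptions. For instance, if we only care about the first rating, a sketch like the following (reusing book.js from above) would do:

var BookClass = require('./book.js');
var book = new BookClass();

// A handler registered with once runs only for the first 'rated' event
book.once('rated', function () {
  console.log('First rating: ' + book.getPoints());
});

book.rate(10); // prints "First rating: 10"
book.rate(5);  // no output; the listener was removed after the first call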
Managing child processes
There are some things that we can't do with Node.js directly. For such tasks, we need to use external programs. The good news is that we can execute shell commands from within a Node.js script. For example, let's say that we want to list the files in the current directory. The filesystem APIs do provide methods for that, but it would be nice if we could get the output of the ls command:
// exec.js
var exec = require('child_process').exec;
exec('ls -l', function (error, stdout, stderr) {
  console.log('stdout: ' + stdout);
  console.log('stderr: ' + stderr);
  if (error !== null) {
    console.log('exec error: ' + error);
  }
});
The module that we used is called child_process. Its exec method accepts the desired command as a string and a callback. The stdout item is the output of the command. If we want to process the errors (if any), we may use the error object or the stderr buffer data. The preceding code prints the directory listing produced by ls -l to the console.
Along with the exec method, we have spawn. It's a bit different and really interesting. Imagine that we have a command that not only does its job, but also outputs the result as it runs. For example, git push may take a few seconds and it may send messages to the console continuously. In such cases, spawn is a good variant because we get access to a stream:
var spawn = require('child_process').spawn;
var command = spawn('git', ['push', 'origin', 'master']);
command.stdout.on('data', function (data) {
  console.log('stdout: ' + data);
});
command.stderr.on('data', function (data) {
  console.log('stderr: ' + data);
});
command.on('close', function (code) {
  console.log('child process exited with code ' + code);
});
Here, stdout and stderr are streams. They dispatch events and, if we subscribe to these events, we will get the exact output of the command as it is produced. In the preceding example, we ran git push origin master and sent the full command responses to the console.
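Because stdout and stderr are streams, we could also pipe them straight to the output of our own process instead of subscribing to the data events. A minimal sketch, using the same git command as above:

var spawn = require('child_process').spawn;
var command = spawn('git', ['push', 'origin', 'master']);

// Forward the child's output and errors directly to our own terminal
command.stdout.pipe(process.stdout);
command.stderr.pipe(process.stderr);

command.on('close', function (code) {
  console.log('child process exited with code ' + code);
});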