Microservices are one of the most popular topics in software development these days, along with JavaScript and chat bots. In this post, I share my experience designing and implementing a platform with a microservices architecture. I will also cover which tools we picked for this platform and how they work together. Let’s start with some context about the project.
The startup I am working with had the idea of building a platform to help people with depression by tracking and managing the habits that keep them healthy, positive, and productive. The final product is an iOS app with a conversational UI, similar to a chat bot, and probably not very intelligent in version 1.0, but still with some feelings!
For this project, we decided to use Node.js for building the microservices, React Native for the mobile app, React for the Admin Dashboard, and Elasticsearch and Kibana for logging and monitoring the applications.
And yes, we do like JavaScript!
There are many definitions of a microservice, but I assume we can agree on a common one: a microservice is an independent component that performs a specific set of actions within your system, or, in a nutshell, a part of your software that solves one specific problem you have.
I got interested in microservices this year, especially when I found out there was a Node.js toolkit called Seneca that helps you organize and scale your code. Unfortunately, my enthusiasm didn’t last long: the learning curve for Seneca was too steep for this project. Even though I ended up not using it, I wanted to mention it here because many people are using it successfully, and I think you should be aware of it and at least consider looking at it.
Instead, we decided to go for a simpler approach. We split our project into small Node applications and deploy each of them as a microservice with pm2, driven by a pm2 configuration file called ecosystem.json. As explained in the documentation, this is a good way of keeping your deployment simple, organized, and monitored. If you like control dashboards, graphs, and colored progress bars, you should also look at pm2’s Keymetrics: it offers a nice overview of all your processes.
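To give you an idea, a minimal ecosystem.json looks something like the sketch below; the app names, paths, and ports are placeholders rather than our real setup:

{
  "apps": [
    {
      "name": "frontend-api",
      "script": "./project/frontend-api/index.js",
      "env": { "NODE_ENV": "production", "PORT": 3000 }
    },
    {
      "name": "habits-service",
      "script": "./project/habits-service/index.js",
      "env": { "NODE_ENV": "production", "PORT": 3001 }
    }
  ]
}

Running pm2 start ecosystem.json then launches every microservice declared in the file under pm2’s supervision.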
It has also been extremely useful to create a GitHub machine user, which is essentially a normal GitHub account with its own ssh key that grants access to the repositories containing the project’s code. Additionally, we created a node user on our virtual machine with that ssh key loaded in. All of the microservices run under this node user, which has access to the code base through the machine user’s ssh key. This way, we can easily pull the latest code and deploy new versions. Finally, we attached our own ssh keys to the node user so each developer can log in as the node user via ssh:
ssh node@<IP>
cd ./project/frontend-api
git pull
All of this happens without being prompted for a password or authorization token by GitHub. Then, using pm2, we restart the microservice:
pm2 restart frontend-api
Another very important element of our microservices architecture is the API Gateway. After evaluating different options, including AWS API Gateway, HAProxy, Varnish, and Tyk, we decided to use Kong, an open source API layer that offers modularity via plugins (everybody loves plugins!) and scalability features. We picked Kong because it supports WebSockets as well as HTTP, which, unfortunately, not all of the alternatives did.
Using Kong in combination with nginx, we mapped our microservices under a single domain name, http://api.example.com, and exposed each microservice under a specific path:
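For example, the mapping looks roughly like this (the paths, service names, and ports here are illustrative):

http://api.example.com/frontend-api  ->  frontend-api running on a private server, port 3000
http://api.example.com/habits        ->  habits-service running on a private server, port 3001
http://api.example.com/users         ->  users-service running on a private server, port 3002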
This allows us to run the microservices on separate servers and different ports, while having a single gate for the clients consuming our APIs.
Finally, the API Gateway is responsible for allowing only authenticated requests to pass through. This is a great way of protecting the microservices, because the gateway is the only public component, while all of the microservices run in a private network.
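With Kong, for instance, this kind of protection can be enabled through its key-auth plugin. A rough sketch against the pre-1.0 admin API (the consumer name and key below are made up):

# enable key authentication on the frontend-api exposed through Kong
curl -X POST http://localhost:8001/apis/frontend-api/plugins/ --data "name=key-auth"

# create a consumer for the iOS app and provision an API key for it
curl -X POST http://localhost:8001/consumers/ --data "username=ios-app"
curl -X POST http://localhost:8001/consumers/ios-app/key-auth/ --data "key=<secret-key>"

Clients then send the key with every request (by default in an apikey header or query parameter), and Kong rejects anything without a valid key before it ever reaches the microservices.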
We started by creating a microservice-boilerplate package that includes Express to expose the APIs, Passport to allow only authorized clients to use them, winston for logging their activities, and mocha and chai for testing the microservice. We then created an index.js file that initializes the Express app with a default route, /api/ping. This returns a simple JSON payload containing the message ‘pong’, which we use to check whether the service is alive; a simplified sketch of this index.js follows the pm2 commands below. An alternative is to get the status of the process using pm2:
pm2 list
pm2 status <microservice-pid>
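As a rough sketch, the boilerplate’s index.js looks something like this (simplified: the Passport setup and tests are omitted, and it assumes winston 2.x’s default logger):

const express = require('express');
const winston = require('winston');

const app = express();

// Default health-check route: returns 'pong' so we can tell the service is alive
app.get('/api/ping', (req, res) => {
  res.json({ message: 'pong' });
});

const port = process.env.PORT || 3000;
app.listen(port, () => {
  winston.info('microservice listening on port ' + port);
});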
Whenever we want to create a new microservice, we start from this boilerplate code. It saves a lot of time, especially if the microservices have a similar shape.
The main communication channel between microservices is HTTP, via API calls. We are also using WebSockets to allow faster communication between some parts of the platform. For that we use socket.io, a very simple and efficient way of implementing WebSockets.
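On the server side, the socket.io setup is only a few lines; a minimal sketch (the ‘chat-message’ event name and port are placeholders):

const http = require('http');
const socketio = require('socket.io');

const server = http.createServer();
const io = socketio(server);

io.on('connection', (socket) => {
  // events are pushed to connected clients as they happen,
  // instead of waiting for the next HTTP poll
  socket.on('chat-message', (data) => {
    // relay the message to every other connected client
    socket.broadcast.emit('chat-message', data);
  });
});

server.listen(process.env.PORT || 4000);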
I recommend creating a Node package that contains the business logic, including the object models, prototypes, and common functions, such as read and write methods for the database. This approach allows you to include the package in each microservice, with the benefit of having just one place to update when something needs to change.
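A minimal sketch of what one module of such a shared package might export; the model name and database client are hypothetical (a MongoDB-style API is assumed):

// habits.js - part of a shared business-logic package required by each microservice
// 'db' is a hypothetical database client module shared by all services
const db = require('./db');

// Common read/write helpers, so every microservice runs the same queries
function getHabitsByUser(userId) {
  return db.collection('habits').find({ userId }).toArray();
}

function saveHabit(habit) {
  return db.collection('habits').insertOne(habit);
}

module.exports = { getHabitsByUser, saveHabit };

Each microservice then requires this package instead of duplicating the queries, so a schema change only has to be made once.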
In this post, I covered the tools used for building a microservice architecture in Node.js. I hope you have found this useful.
Andrea Falzetti is an enthusiastic full stack developer based in London. He has been designing and developing web applications for over 5 years. He is currently focused on Node.js, React, microservices architecture, serverless, conversational UIs, chat bots, and machine learning. He works at Activate Media, where his role is to estimate, design, and lead the development of web and mobile platforms.