Introduction
A single Node.js server can typically handle several thousand simultaneous connections. As an application's audience grows, however, the application must be able to scale with it. On the server side, this means we may want to distribute the application across multiple threads or Node.js instances.
The issue with distributing an application across nodes is that when we emit a message, only the instance that handles the emit knows about it. Sockets connected to a different instance will never receive the message without some additional handling. Luckily, there are good ways to relay events and session data between servers through an external system such as Redis, Memcached, or RabbitMQ. By plugging a Socket.IO adapter backed by one of these systems into each instance, we can scale out our servers without compromising our Socket.IO connections.
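As a minimal sketch of the Redis approach, the `socket.io-redis` adapter can be attached to each Socket.IO instance so that broadcasts are relayed through Redis pub/sub to every node. The port numbers and the Redis host below are assumptions for illustration; any shared Redis server reachable by all instances will do.

```javascript
// Run one copy of this process per node instance (each on its own port).
var io = require('socket.io')(3000);
var redisAdapter = require('socket.io-redis');

// Point every instance at the same Redis server; the adapter uses
// Redis pub/sub channels to forward emitted events between instances.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', function (socket) {
  // This broadcast now reaches sockets on every instance,
  // not just the ones connected to this process.
  socket.on('chat message', function (msg) {
    io.emit('chat message', msg);
  });
});
```

With the adapter in place, `io.emit` behaves the same as in a single-process setup; the inter-node forwarding is handled transparently by Redis.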