If you have to work with a large enough set of data, problems are almost inevitable. Your server may not be able to provide all the required memory, and even if memory isn't an issue, the processing time may exceed the standard waiting time and cause timeouts. On top of that, your server would turn away other requests, because it would be devoted to handling your long-running one.
Node provides a way to work with collections of data as streams, which lets you process the data as it flows and pipe streams together to compose functionality out of smaller steps, much in the fashion of Linux and Unix pipelines. Let's look at a basic example, which you might use if you were interested in doing low-level Node request processing. (As is, we will be using higher-level libraries...
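As a minimal sketch of the piping idea (the file name and port below are placeholders, not taken from the original example), a server can stream a large file through gzip compression and out to the client, so no step ever needs to hold the whole file in memory:

```js
const fs = require("fs");
const http = require("http");
const zlib = require("zlib");

http
  .createServer((req, res) => {
    res.writeHead(200, { "Content-Encoding": "gzip" });
    fs.createReadStream("big.data.txt") // hypothetical large file
      .pipe(zlib.createGzip()) // compress chunk by chunk as data flows
      .pipe(res); // send each compressed chunk to the client
  })
  .listen(8080); // placeholder port
```

Each .pipe() call connects the output of one stream to the input of the next, mirroring the | operator in Unix shells.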