Breaking down our project
In our system, we have a series of tasks that need to be executed. However, these tasks take a long time to complete. If we had a normal server handle the tasks directly, the server would end up choked and multiple users would experience delays. If a task runs too long, the users' connections might even time out.
To avoid degrading users’ experience when long tasks are needed, we utilize a queuing system. This is where an HTTP server receives a request from the user. The long task associated with the request is then sent to a first-in-first-out (FIFO) queue to be processed by a pool of workers. Because the task is in the queue, there is nothing more the HTTP server can do apart from respond to the user that the task has been accepted and queued for processing. Due to the ebbs and flows of traffic, we will not need all our workers and HTTP servers when the traffic is low. However, we will need to create and connect...
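The pattern described above can be sketched in a few lines of Python. This is only an illustrative model, not the project's actual implementation: the `handle_request` function stands in for the HTTP handler, the long task is simulated with a short sleep, and the worker pool is a handful of threads draining a shared FIFO queue.

```python
import queue
import threading
import time

# FIFO queue shared between the "server" and the worker pool.
task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def handle_request(task):
    """Stands in for the HTTP handler: enqueue the task and
    acknowledge immediately, before any work is done."""
    task_queue.put(task)  # first in, first out
    return f"accepted: {task}"

def worker():
    """One member of the worker pool; runs until it sees a
    None sentinel telling it to shut down."""
    while True:
        task = task_queue.get()
        if task is None:
            task_queue.task_done()
            break
        time.sleep(0.01)  # stand-in for the long-running task
        with results_lock:
            results.append(f"done: {task}")
        task_queue.task_done()

# Start a small pool of workers.
pool = [threading.Thread(target=worker) for _ in range(3)]
for t in pool:
    t.start()

# The "server" acknowledges instantly, even though the work
# is still sitting in the queue.
acks = [handle_request(f"task-{i}") for i in range(5)]

# Shut down the pool: one sentinel per worker, then wait.
for _ in pool:
    task_queue.put(None)
for t in pool:
    t.join()
```

Note that the acknowledgements are returned before the workers finish: that decoupling is exactly what keeps the HTTP server responsive while the heavy work happens elsewhere.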