Designing an ETML solution
The requirements clearly point us to a solution that takes in some data, augments it with ML inference, and then outputs the result to a target location. Any design we come up with must encapsulate these steps. This describes any ETML solution, and it is one of the most widely used patterns in the ML world. In my opinion, it will remain important for a long time to come, as it is particularly suited to ML applications where:
- Latency is not critical: If you can afford to run on a schedule, and there are no high-throughput or low-latency response time requirements, then running ETML as a batch process is perfectly acceptable.
- You need to batch the data for algorithmic reasons: A great example of this is the clustering approach we will use here. There are ways to perform clustering in an online setting, where the model is continually updated as new data comes in, but some approaches are far simpler if you can process all the relevant data together...
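To make the pattern concrete, here is a minimal sketch of an ETML batch job using a pandas and scikit-learn stack. The file paths, feature names, and clustering parameters are all illustrative assumptions, not part of the solution we design later; DBSCAN is used simply because it is a good example of an algorithm that needs the whole batch of data at once.

```python
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler


def run_etml_batch(source_path: str, target_path: str) -> pd.DataFrame:
    # Extract: read the full batch of data from the source location.
    # (The CSV source and column names here are illustrative.)
    df = pd.read_csv(source_path)

    # Transform: scale the numeric features before clustering.
    features = StandardScaler().fit_transform(df[["feature_1", "feature_2"]])

    # Machine Learn: DBSCAN has no incremental variant in scikit-learn,
    # so it needs all the data together -- the "batch for algorithmic
    # reasons" case described above. Inference results augment the data.
    df["cluster"] = DBSCAN(eps=0.5, min_samples=5).fit_predict(features)

    # Load: write the augmented data out to the target location.
    df.to_csv(target_path, index=False)
    return df
```

A job like this would typically be triggered on a schedule (for example, by cron or an orchestrator such as Airflow), which is exactly the latency-tolerant setting described in the first bullet.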