People often ask how they should size their cluster if they plan on using Elastic ML. Beyond the obvious "it depends" answer, it is useful to have an empirical approach to the process. As described in the Elastic blog post Sizing for Machine Learning with Elasticsearch (https://www.elastic.co/blog/sizing-machine-learning-with-elasticsearch), there is one key recommendation: use dedicated nodes for ML so that ML jobs do not interfere with the other tasks of the cluster's data nodes (indexing, searching, and so on). To scope how many dedicated nodes are necessary, follow this approach:
- If no representative jobs have been created yet, use the generic rules of thumb from the blog, which are based on overall cluster size. These rules of thumb are as follows:
- 1 dedicated ML node (2 for high availability) is recommended for clusters with fewer than 10 data nodes
- Definitely at...