ML node capacity
For embedding models run on ML nodes in Elasticsearch, you need to plan capacity so that your nodes can run the model at inference time. Elastic Cloud supports auto-scaling of ML nodes based on CPU requirements: nodes scale up and out when more compute is needed and scale back down when demand drops.
We cover tuning ML nodes for inference in more detail in the next chapter, but at a minimum you will need an ML node with enough RAM to load at least one instance of the embedding model. As your performance requirements grow, you can increase the number of allocations of the individual model as well as the number of threads per allocation, as sketched below.
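As a minimal sketch of how this might look with the Python Elasticsearch client (the endpoint, API key, and model ID below are placeholders, not values from this chapter):

```python
from elasticsearch import Elasticsearch

# Placeholder connection details; substitute your own endpoint and credentials.
es = Elasticsearch("https://localhost:9200", api_key="<your-api-key>")

# Deploy the model with explicit capacity settings.
# number_of_allocations controls how many inference requests can run in
# parallel; threads_per_allocation controls how many threads each
# allocation uses for a single inference call.
es.ml.start_trained_model_deployment(
    model_id="sentence-transformers__msmarco-minilm-l-12-v3",  # example model ID
    number_of_allocations=2,
    threads_per_allocation=4,
)
```

Broadly, adding allocations increases throughput (more concurrent inference requests), while adding threads per allocation reduces the latency of each individual request.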
To check the size of a model and the amount of memory (RAM) required to load it, you can run the get trained models statistics API (for more information on this API, visit the documentation page at https://www.elastic.co/guide/en/elasticsearch/reference/current...).
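For example, a sketch of calling that API with the Python client (again, connection details and the model ID are placeholders); the model_size_stats object in the response reports both the model's size and the native memory it requires:

```python
from elasticsearch import Elasticsearch

# Placeholder connection details; substitute your own endpoint and credentials.
es = Elasticsearch("https://localhost:9200", api_key="<your-api-key>")

# Fetch statistics for a specific trained model.
stats = es.ml.get_trained_models_stats(
    model_id="sentence-transformers__msmarco-minilm-l-12-v3"  # example model ID
)

# model_size_stats reports the model's on-disk size and the native
# (off-heap) memory needed to load it on an ML node.
size_stats = stats["trained_model_stats"][0]["model_size_stats"]
print("Model size:", size_stats["model_size_bytes"], "bytes")
print("Required native memory:", size_stats["required_native_memory_bytes"], "bytes")
```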