Summary
In this chapter, we looked at how data in Elasticsearch can be aggregated for statistical insights. We explored how metric and bucket aggregations can be combined to slice and dice large datasets for analysis.
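As a quick reminder of the pattern, a bucket aggregation can group documents while a metric sub-aggregation computes a statistic per group. The sketch below assumes a hypothetical `logs-web` index with `response_code` and `bytes_sent` fields:

```json
GET logs-web/_search
{
  "size": 0,
  "aggs": {
    "by_status": {
      "terms": { "field": "response_code" },
      "aggs": {
        "avg_bytes": { "avg": { "field": "bytes_sent" } }
      }
    }
  }
}
```

Setting `"size": 0` skips returning individual hits so the response contains only the aggregation results.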
We also looked at how ingest pipelines can manipulate and transform incoming data, preparing it for its intended use cases in Elasticsearch, and we explored a range of common ingest pipeline use cases.
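Recall that an ingest pipeline is a named chain of processors applied to each document before indexing. This minimal sketch uses hypothetical pipeline and field names (`clean-logs`, `http.method`, `tmp_debug`):

```json
PUT _ingest/pipeline/clean-logs
{
  "description": "Illustrative cleanup of incoming log documents",
  "processors": [
    { "set": { "field": "event.ingested", "value": "{{_ingest.timestamp}}" } },
    { "lowercase": { "field": "http.method" } },
    { "remove": { "field": "tmp_debug", "ignore_missing": true } }
  ]
}
```

The pipeline can then be applied at index time with the `pipeline` request parameter, or set as a default via the index's `index.default_pipeline` setting.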
Lastly, we looked at how Watcher can be used to implement alerting and automated response actions when data changes, again exploring a range of common alerting use cases.
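The core Watcher building blocks are a trigger, an input, a condition, and one or more actions. The following sketch assumes the same hypothetical `logs-web` index and alerts when more than ten HTTP 500 responses arrive in five minutes:

```json
PUT _watcher/watch/error-spike
{
  "trigger": { "schedule": { "interval": "5m" } },
  "input": {
    "search": {
      "request": {
        "indices": ["logs-web"],
        "body": {
          "query": {
            "bool": {
              "filter": [
                { "term": { "response_code": 500 } },
                { "range": { "@timestamp": { "gte": "now-5m" } } }
              ]
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gt": 10 } }
  },
  "actions": {
    "notify": {
      "logging": { "text": "More than 10 errors in the last 5 minutes" }
    }
  }
}
```

In practice, the `logging` action would typically be swapped for an `email`, `slack`, or `webhook` action to notify responders.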
In the next chapter, we will get started with machine learning jobs to find anomalies in our data, run inference for new documents using the inference ingest processor, and run transform jobs to pivot incoming datasets for machine learning.