Building the Logstash data pipeline
Having set up the mechanism to automatically create the Elasticsearch index and the metadata database, we can now focus on building the data pipeline using Logstash. What should our data pipeline do? It should perform the following steps:
- Accept JSON requests over the web (over HTTP)
- Enrich the JSON with the metadata we have in the MySQL database
- Store the resulting documents in Elasticsearch
These three functions map directly to the input, filter, and output plugins of a Logstash data pipeline, respectively. The full Logstash configuration file for this data pipeline is in the code base at https://github.com/pranav-shukla/learningelasticstack/tree/master/chapter-10/files/logstash_sensor_data_http.conf.
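Before walking through each step, here is a minimal sketch of the overall shape such a configuration takes. The bind address, port, JDBC connection details, table and column names, and index name below are illustrative placeholders, not necessarily the values used in logstash_sensor_data_http.conf:

```
input {
  http {
    # Accept incoming JSON documents over HTTP
    host => "0.0.0.0"   # placeholder bind address
    port => 8080        # placeholder port
  }
}

filter {
  # One way to enrich each event with metadata from MySQL is the
  # jdbc_streaming filter; all connection details here are placeholders.
  jdbc_streaming {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/sensor_metadata"
    jdbc_user => "user"
    jdbc_password => "password"
    statement => "SELECT * FROM sensors WHERE sensor_id = :id"
    parameters => { "id" => "sensor_id" }
    target => "lookup_result"   # the query result is added under this field
  }
}

output {
  # Store the resulting documents in Elasticsearch
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "sensor_data-%{+YYYY.MM.dd}"
  }
}
```

The three blocks line up one to one with the steps listed previously; the following sections fill in each part in detail.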
Let us look at how to achieve the end goal of our data pipeline by following the aforementioned steps. We will start with accepting JSON requests over the web (over HTTP).
Accept JSON requests over the web
This function is achieved...