Inserting an ingest pipeline
The power of a pipeline definition is that it can be created and updated without a node restart (unlike Logstash, where configuration changes require a restart). The definition is stored in the cluster state via the put pipeline API.
Now that we've defined a pipeline, we need to provide it to the Elasticsearch cluster.
Getting ready
You need an up-and-running Elasticsearch installation, as we described in the Downloading and installing Elasticsearch recipe in Chapter 1, Getting Started.
To execute the commands, any HTTP client can be used, such as curl (https://curl.haxx.se/), Postman (https://www.getpostman.com/), or similar. We recommend using the Kibana console, as it provides code completion and better character escaping for Elasticsearch.
How to do it...
To store or update an ingestion pipeline in Elasticsearch, we will do the following.
Store the ingest pipeline using a PUT call:
PUT /_ingest/pipeline/add-user-john
{
  "description": "Add user john field",
  "processors": [
    {
      "set": {
        "field": "user",
        "value": "john"
      }
    }
  ],
  "version": 1
}
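Outside the Kibana console, the same request can be issued from any HTTP client. The following is a minimal Python sketch that builds the pipeline body and prints the target URL and JSON payload; the localhost:9200 endpoint is an assumption for a default local install, not something mandated by the API:

```python
import json

# The pipeline id goes in the URL path, not in the body.
# Endpoint assumes a default local install -- adjust host/port as needed.
url = "http://localhost:9200/_ingest/pipeline/add-user-john"

# Body mirrors the PUT call above: a single "set" processor that
# adds a "user" field with the value "john" to each ingested document.
pipeline = {
    "description": "Add user john field",
    "processors": [
        {"set": {"field": "user", "value": "john"}}
    ],
    "version": 1,
}

body = json.dumps(pipeline, indent=2)
print(url)
print(body)
```

Sending this body with any client (for example, curl -X PUT -H "Content-Type: application/json" -d "$body" "$url") registers the pipeline in the cluster state; on success Elasticsearch acknowledges the request.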