Consolidating indexing/forwarding apps
There is often a good reason to consolidate apps that either forward data to Splunk or transform data before it is written to disk. Consolidation reduces administrative overhead and allows a single package to be deployed to every system that meets those criteria.
Let's use Hadoop for this example. Hypothetically, say you have 600 nodes in a Hadoop cluster, all running Linux, on which you would also like to monitor CPU, memory, and disk metrics. Within that Hadoop system, components such as Spark, Hive, Hive2, and Platfora each have their own logs and data inputs. Some of these components have Apache web frontends whose logs will also need to be parsed, but not all nodes will need this.
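A consolidated forwarding app for this scenario would gather all of those inputs into a single `inputs.conf`. The sketch below is illustrative only: the log paths, sourcetypes, and index names are assumptions, not values from any particular Hadoop distribution, and would need to match your environment.

```
# inputs.conf -- consolidated forwarding app (hypothetical paths and index names)

# Hadoop component logs (paths are assumptions; adjust per distribution)
[monitor:///var/log/hadoop/]
index = hadoop
sourcetype = hadoop_logs
disabled = false

[monitor:///var/log/spark/]
index = hadoop
sourcetype = spark_logs
disabled = false

[monitor:///var/log/hive/]
index = hadoop
sourcetype = hive_logs
disabled = false

# Apache frontend access logs -- only present on some nodes; the monitor
# stanza is simply ignored on hosts where the path does not exist
[monitor:///var/log/httpd/access_log]
index = web
sourcetype = access_combined
disabled = false
```

Because a forwarder skips monitor stanzas whose paths do not exist, the same app can be deployed to all 600 nodes even though only some of them run the Apache frontend.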
It takes some work with the deployment server to make this happen, but there is a relatively easy way to do it: we create a consolidated forwarding app (that is, a deployment app) and a consolidated cluster app (that is, an indexing app).
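On the deployment server side, a single server class can push the consolidated forwarding app to every Hadoop node. The sketch below assumes a hostname pattern of `hadoop-node-*` and an app name of `consolidated_hadoop_forwarding`; both are hypothetical and would need to match your naming conventions.

```
# serverclass.conf -- deployment server (hypothetical names)

[serverClass:hadoop_nodes]
# Match all Hadoop cluster hosts by hostname pattern (an assumption;
# use whatever pattern fits your environment)
whitelist.0 = hadoop-node-*

[serverClass:hadoop_nodes:app:consolidated_hadoop_forwarding]
# Restart the forwarder after the app is deployed so new inputs take effect
restartSplunkd = true
```

With this in place, any new node that spins up matching the pattern automatically receives the full set of inputs the moment it phones home to the deployment server.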