The first step in any data pipeline is data ingestion, which brings data from the source system into the pipeline for processing. Source systems come in many forms, and there are specific tools for ingesting data from each of them. The Big Data ecosystem has its own set of ingestion tools; for example, Sqoop can pull data from relational databases, while Gobblin can ingest data from relational databases, REST APIs, FTP servers, and so on.
Apache Flume is a Java-based, distributed, scalable, fault-tolerant system for consuming data from streaming sources such as Twitter, log servers, and so on. It was once widely used across many use cases, and a large number of pipelines still rely on Flume today, typically as a producer to Kafka.
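To illustrate how Flume is typically used as a producer to Kafka, the following is a minimal agent configuration sketch, assuming a recent Flume release (1.7 or later) with the built-in Kafka sink. The agent name (agent1), the log file path, the broker address, and the topic name are placeholder assumptions for this example, not values from the text.

    # Hypothetical Flume agent: tails an application log and publishes each event to Kafka
    agent1.sources  = tailSrc
    agent1.channels = memCh
    agent1.sinks    = kafkaSink

    # Source: follow a local log file (path is an assumption)
    agent1.sources.tailSrc.type = exec
    agent1.sources.tailSrc.command = tail -F /var/log/app/app.log
    agent1.sources.tailSrc.channels = memCh

    # Channel: buffer events in memory between source and sink
    agent1.channels.memCh.type = memory
    agent1.channels.memCh.capacity = 10000
    agent1.channels.memCh.transactionCapacity = 1000

    # Sink: write events to a Kafka topic (broker and topic are assumptions)
    agent1.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
    agent1.sinks.kafkaSink.kafka.bootstrap.servers = localhost:9092
    agent1.sinks.kafkaSink.kafka.topic = app-logs
    agent1.sinks.kafkaSink.channel = memCh

Such an agent would be started with the flume-ng command (for example, flume-ng agent --name agent1 --conf-file <config path>), with the memory channel absorbing short bursts of events before the Kafka sink forwards them downstream.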