Capacity planning
Regardless of how much data you think you have now, things will change over time. New projects will pop up, and the data creation rates of your existing projects will change (up or down). Data volume will usually ebb and flow with the traffic of the day. Finally, the number of servers feeding your Hadoop cluster will change over time.
There are many schools of thought on how much extra storage capacity to keep in your Hadoop cluster. We use the totally unscientific value of 20 percent; that is, we usually plan to order additional hardware when we hit 80 percent full, but we don't start to panic until utilization reaches the 85 to 90 percent range.
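If you want to automate that check, the following is a minimal shell sketch, not a definitive script, that pulls the cluster-wide DFS Used% figure from the standard hdfs dfsadmin -report output. The 80 and 90 percent thresholds are simply the numbers mentioned above; adjust them to your own comfort level.

#!/bin/sh
# Grab the first (cluster summary) "DFS Used%" line and strip it to a number.
USED=$(hdfs dfsadmin -report | grep -m1 'DFS Used%' | tr -dc '0-9.')
# Compare the integer part against the ordering and panic thresholds.
if [ "${USED%.*}" -ge 90 ]; then
  echo "HDFS is ${USED}% full - panic threshold reached"
elif [ "${USED%.*}" -ge 80 ]; then
  echo "HDFS is ${USED}% full - time to order more hardware"
fi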
You may also need to set up multiple flows inside a single agent. The source and sink processors are currently single threaded, so there is a limit to what tuning batch sizes can accomplish under heavy data volumes.
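For reference, here is a bare-bones sketch of what two independent flows inside one agent might look like. The agent name, source types, ports, and HDFS paths are purely illustrative; substitute whatever your deployment actually uses.

# Two independent flows running in one agent (agent1 is a made-up name)
agent1.sources  = src1 src2
agent1.channels = ch1 ch2
agent1.sinks    = sink1 sink2

# Flow 1: Avro source -> memory channel -> HDFS sink
agent1.sources.src1.type = avro
agent1.sources.src1.bind = 0.0.0.0
agent1.sources.src1.port = 4141
agent1.sources.src1.channels = ch1
agent1.channels.ch1.type = memory
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = /flume/flow1
agent1.sinks.sink1.channel = ch1

# Flow 2: a second Avro source on its own port, channel, and sink
agent1.sources.src2.type = avro
agent1.sources.src2.bind = 0.0.0.0
agent1.sources.src2.port = 4142
agent1.sources.src2.channels = ch2
agent1.channels.ch2.type = memory
agent1.sinks.sink2.type = hdfs
agent1.sinks.sink2.hdfs.path = /flume/flow2
agent1.sinks.sink2.channel = ch2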
The number of Flume agents feeding Hadoop should be adjusted based on real numbers. Watch the channel size to see how well...