Sahara facilitates the execution of jobs and the bursting of workloads in big data clusters running any supported EDP (Elastic Data Processing) framework in OpenStack. Having rapidly deployed a Spark cluster in the previous section, we can now manage its associated jobs in Sahara with little effort.
Running jobs in Sahara requires specifying the locations of the data source and the destination, from which the Sahara engine will fetch the input data, analyze it, and store the results, respectively. Sahara mainly supports three types of input/output data storage, listed below (a CLI sketch for registering data sources follows the list):
- Swift: This designates the OpenStack object storage service as the location of the input data and the destination of the output results
- HDFS: This uses the Hadoop Distributed File System (HDFS) running on the cluster's OpenStack instances
- Manila: This uses the OpenStack Shared File Systems service by exposing the data source as a share that is mounted across the Sahara cluster nodes
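As an illustration, the following is a minimal sketch of registering Swift-backed input and output data sources with the `openstack dataprocessing` commands provided by the python-saharaclient CLI plugin. The container name, object paths, and credentials here are hypothetical placeholders; adapt them to your environment:

```
# Register the input data source stored in Swift
# (container "demo-logs" and object "input.txt" are hypothetical examples)
$ openstack dataprocessing data source create input-logs \
    --type swift \
    --url "swift://demo-logs/input.txt" \
    --username sahara_user \
    --password s3cr3t

# Register the destination for the job output in the same container
$ openstack dataprocessing data source create output-results \
    --type swift \
    --url "swift://demo-logs/output" \
    --username sahara_user \
    --password s3cr3t
```

Once registered, these data sources can be referenced by name when launching an EDP job against the Spark cluster.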
At the time of writing this book, Sahara's EDP...