Understanding the DataSource API
The DataSource API was introduced in Apache Spark 1.1 and is constantly being extended. You have already used the DataSource API without knowing it whenever you read and write data using a SparkSession or DataFrames.
The DataSource API provides an extensible framework for reading and writing data to and from a wide variety of data sources in various formats. There is built-in support for Hive, Avro, JSON, JDBC, Parquet, and CSV, and a large number of third-party plugins support, for example, MongoDB, Cassandra, Apache CouchDB, Cloudant, and Redis.
Usually, you never directly use classes from the DataSource API, as they are wrapped behind the read method of SparkSession and the write method of the DataFrame or Dataset. Another thing that is hidden from the user is schema discovery.
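These wrapper methods can be sketched as follows. This is a minimal example, assuming the input and output paths are placeholders; the format names are the built-in ones listed above.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("DataSourceExample")
  .getOrCreate()

// spark.read is the entry point into the DataSource API for input;
// format() selects the data source, load() triggers the read
val df = spark.read
  .format("json")
  .load("/tmp/people.json")      // hypothetical input path

// df.write is the corresponding entry point for output
df.write
  .format("parquet")
  .save("/tmp/people.parquet")   // hypothetical output path
```

Note that the same fluent pattern works for any registered data source, built-in or third-party, by changing only the format string.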
Implicit schema discovery
One important aspect of the DataSource API is implicit schema discovery, which is possible for a subset of data sources. This means that the data source provides not only the data itself but also the schema, that is, the column names and types, so the user does not have to specify it manually.
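The contrast can be illustrated with two built-in sources. This is a hedged sketch with placeholder paths: Parquet files embed their schema in the file itself, while CSV carries no type information, so Spark must be asked to infer it by sampling the data.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("SchemaDiscovery")
  .getOrCreate()

// Parquet stores the schema alongside the data, so it is
// discovered automatically on read
val parquetDf = spark.read.parquet("/tmp/people.parquet") // hypothetical path

// CSV has no embedded types; inferSchema makes Spark sample the
// data and guess column types instead of defaulting to strings
val csvDf = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/tmp/people.csv")        // hypothetical path

// Print the discovered schemas for comparison
parquetDf.printSchema()
csvDf.printSchema()
```

The trade-off is that inference for schemaless formats such as CSV or JSON requires an extra pass (or sample) over the data, whereas self-describing formats such as Parquet or Avro expose their schema at no extra cost.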