Creating RDDs with Spark 2.0 using internal data sources

There are four ways to create RDDs in Spark, ranging from the parallelize() method, used for simple testing and debugging within the client driver code, to streaming RDDs for near-real-time responses. In this recipe, we provide several examples that demonstrate RDD creation from internal sources. The streaming case is covered in Chapter 13, Streaming Machine Learning System, where we can address it in a meaningful way.
How to do it...
- Start a new project in IntelliJ or in an IDE of your choice. Make sure the necessary JAR files are included.
- Set up the package location where the program will reside:
package spark.ml.cookbook.chapter3...
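Once the package is in place, the simplest of the four creation paths, parallelize(), can be sketched as follows. This is a minimal illustration, not the recipe's full listing; the object name and sample data are hypothetical:

```scala
package spark.ml.cookbook.chapter3

import org.apache.spark.sql.SparkSession

// Hypothetical driver object illustrating RDD creation from an internal source
object ParallelizeSketch {
  def main(args: Array[String]): Unit = {
    // Local-mode session, suitable for testing and debugging in driver code
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("ParallelizeSketch")
      .getOrCreate()

    // parallelize() distributes an in-memory collection across partitions as an RDD
    val numbers = spark.sparkContext.parallelize(Seq(1, 2, 3, 4, 5))

    // Actions such as count() and sum() trigger computation on the RDD
    println(s"count = ${numbers.count()}, sum = ${numbers.sum()}")

    spark.stop()
  }
}
```

Because the collection already lives in the driver's memory, this path is best suited to small datasets used in tests; the other creation paths (external files, transformations on existing RDDs, and streaming) are needed for data that does not fit on the driver.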