Transforming RDDs with Spark 2.0 using the filter() API
In this recipe, we explore the filter() method, which is used to select a subset of the base RDD and return a new, filtered RDD. The format is similar to map(), but a lambda function selects which members are to be included in the resulting RDD.
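For example, a minimal sketch of the idea, assuming an existing SparkSession named spark and illustrative variable names that are not taken from the recipe:

val numbers = spark.sparkContext.parallelize(1 to 10)  // base RDD of integers
val evens = numbers.filter(_ % 2 == 0)                 // keep only the members accepted by the lambda
evens.collect()                                        // Array(2, 4, 6, 8, 10)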
How to do it...
- Start a new project in IntelliJ or in an IDE of your choice. Make sure the necessary JAR files are included.
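If you build with sbt, the dependencies might be declared roughly as follows; the coordinates and versions are assumptions for illustration and should be adjusted to your environment (Breeze is listed because breeze.numerics.pow is imported later):

// build.sbt (sketch, assumed versions)
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.0.0",
  "org.apache.spark" %% "spark-sql" % "2.0.0",
  "org.scalanlp" %% "breeze" % "0.12"
)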
- Set up the package location where the program will reside:
package spark.ml.cookbook.chapter3
- Import the necessary packages:
import breeze.numerics.pow
import org.apache.spark.sql.SparkSession
import Array._
- Import the packages needed to set the logging level for log4j. This step is optional, but we highly recommend it (change the level appropriately as you move through the development cycle).
import org.apache.log4j.Logger
import org.apache.log4j.Level
- Set the logging level to warning and error to cut down on the output. See the previous step for the package requirements.
Logger.getLogger...
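The snippet above is truncated in the source; a typical pattern for cutting down Spark's console output, shown here as a sketch rather than the recipe's exact lines, is:

// suppress verbose logging from Spark and Akka; use Level.WARN instead if you still want warnings
Logger.getLogger("org").setLevel(Level.ERROR)
Logger.getLogger("akka").setLevel(Level.ERROR)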