Unlike SageMaker, Apache Spark does not provide an out-of-the-box way to expose models as endpoints. However, the serialization and deserialization capabilities of Spark's ML package make it straightforward to load Spark models into a standard web service. In this section, we will show you how to deploy the model we created in Chapter 3, Predicting House Value with Regression Algorithms, in order to serve predictions through a simple endpoint. To do this, we will save the trained model to disk so that we can ship it to the machine that serves predictions through an endpoint.
We'll start by training our model. In Chapter 3, Predicting House Value with Regression Algorithms, we loaded the housing data into a dataframe:
housing_df = sql.read.csv(SRC_PATH + 'train.csv',
                          header=True, inferSchema=True)