Scaling out model inferencing
Another important aspect of the end-to-end ML process, beyond data cleansing, model training, and tuning, is productionizing the trained models themselves. Even with access to huge amounts of data, it is sometimes useful to downsample and train models on a smaller subset of the larger dataset, for example when the data has a low signal-to-noise ratio. In such cases, there is no need to scale up or scale out the model training process itself. However, because the raw dataset is very large, the model inferencing process does need to be scaled out to keep up with the large amount of raw data being generated.
Apache Spark, together with MLflow, can be used to score models trained with standard, single-node Python libraries such as scikit-learn. The following code example shows a model trained with scikit-learn and then productionized at scale using Spark:
import mlflow
from sklearn.model_selection...