Training ML models in Snowpark
Now that we have prepared our dataset, the next step in our journey is building the model, for which we will leverage Snowpark ML. Snowpark ML is a recent addition to the Snowpark toolkit, designed to simplify the intricacies of the model-building process, and its value becomes apparent when we compare the workflow it enables with the conventional approach. We will start by developing the pipeline that we’ll use to train the model using the data we prepared previously:
import snowflake.ml.modeling.preprocessing as snowml
from snowflake.ml.modeling.pipeline import Pipeline
import joblib
import numpy as np

df = session.table("BSD_TRAINING")
df = df.drop("DATETIME", "DATE")

CATEGORICAL_COLUMNS = ["SEASON", "WEATHER"]
CATEGORICAL_COLUMNS_OHE = ["SEASON_OE", "WEATHER_OE"]
MIN_MAX_COLUMNS = ["TEMP"]

categories = {
    "SEASON": np.array([1, 2, 3, 4]),
    "WEATHER": np.array([1, 2, 3, 4]),
}

preprocessing_pipeline = Pipeline(
    steps=[
        (
            "OE",
            snowml.OrdinalEncoder(
                input_cols=CATEGORICAL_COLUMNS,
                output_cols=CATEGORICAL_COLUMNS_OHE,
                categories=categories,
            ),
        ),
        (
            "MMS",
            snowml.MinMaxScaler(
                clip=True,
                input_cols=MIN_MAX_COLUMNS,
                output_cols=MIN_MAX_COLUMNS,
            ),
        ),
    ]
)

PIPELINE_FILE = 'preprocessing_pipeline.joblib'
joblib.dump(preprocessing_pipeline, PIPELINE_FILE)

transformed_df = preprocessing_pipeline.fit(df).transform(df)
transformed_df.show()

session.file.put(PIPELINE_FILE, "@snowpark_test_stage", overwrite=True)
The preceding code creates a preprocessing pipeline for the dataset by using various Snowpark ML functions. The preprocessing and pipeline modules are imported as these are essential for developing and training the model:
Figure 5.14 – Transformed data
The pipeline includes ordinal encoding for the categorical columns (SEASON and WEATHER) and min-max scaling for the numerical column (TEMP). The pipeline is serialized with the joblib library and uploaded to the stage so that it can be reused for consistent preprocessing in future analyses. Now that we have the pipeline code ready, we will build the features that are required for the model:
CATEGORICAL_COLUMNS = ["SEASON", "WEATHER"]
CATEGORICAL_COLUMNS_OHE = ["SEASON_OE", "WEATHER_OE"]
MIN_MAX_COLUMNS = ["TEMP", "ATEMP"]
FEATURE_LIST = [
    "HOLIDAY", "WORKINGDAY", "HUMIDITY", "TEMP", "ATEMP", "WINDSPEED"
]
LABEL_COLUMNS = ['COUNT']
OUTPUT_COLUMNS = ['PREDICTED_COUNT']

PIPELINE_FILE = 'preprocessing_pipeline.joblib'
preprocessing_pipeline = joblib.load(PIPELINE_FILE)
The preceding code defines lists representing the categorical columns, the ordinal-encoded categorical columns, and the columns for min-max scaling. It also specifies a feature list, label columns, and output columns for the ML model. The preprocessing_pipeline.joblib file is loaded and is assumed to contain the previously saved preprocessing pipeline. These elements collectively prepare the necessary data and configurations for subsequent ML tasks, ensuring consistent handling of categorical variables, feature scaling, and model predictions based on the pre-established pipeline. We will now split the data into training and testing sets:
bsd_train_df, bsd_test_df = df.random_split(weights=[0.7, 0.3], seed=0)

train_df = preprocessing_pipeline.fit(bsd_train_df).transform(bsd_train_df)
test_df = preprocessing_pipeline.transform(bsd_test_df)

train_df.show()
test_df.show()
The preceding code divides the dataset into training (70%) and testing (30%) sets using a random split. It applies the previously defined preprocessing pipeline to both sets, ensuring consistent preprocessing for model training and evaluation, and then displays the transformed DataFrames. The output shows the transformed training and testing data:
Figure 5.15 – Training and testing dataset
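Before moving on to training, it can be useful to sanity-check the split by comparing the row counts of the two DataFrames. This is an optional quick check; the counts are computed inside Snowflake, and the exact numbers will depend on your data:

# Verify the approximate 70/30 split; counts are computed server-side
print("Training rows:", bsd_train_df.count())
print("Testing rows:", bsd_test_df.count())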
Next, we’ll train the model with the training data:
from snowflake.ml.modeling.linear_model import LinearRegression

regressor = LinearRegression(
    input_cols=CATEGORICAL_COLUMNS_OHE + FEATURE_LIST,
    label_cols=LABEL_COLUMNS,
    output_cols=OUTPUT_COLUMNS,
)

# Train
regressor.fit(train_df)

# Predict
result = regressor.predict(test_df)
result.show()
The LinearRegression class defines the model, specifying the input columns (the ordinal-encoded categorical columns plus the additional features), the label column (the target variable – that is, COUNT), and the output column for predictions. The model is trained on the transformed training dataset using fit, and then predictions are generated for the transformed testing dataset using predict. The resulting predictions are displayed so that we can assess the model’s performance on the test data:
Figure 5.16 – Predicted output
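If you want to keep the predictions for later analysis, the result DataFrame can also be written back to a Snowflake table. The following is a minimal sketch, where BSD_PREDICTIONS is simply a table name of our choosing:

# Persist the scored rows; "overwrite" replaces the table if it already exists
result.write.mode("overwrite").save_as_table("BSD_PREDICTIONS")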
The next step is to calculate various performance metrics to evaluate the accuracy of the linear regression model’s predictions:
from snowflake.ml.modeling.metrics import (
    mean_squared_error,
    explained_variance_score,
    mean_absolute_error,
    mean_absolute_percentage_error,
    d2_absolute_error_score,
    d2_pinball_score,
)

mse = mean_squared_error(df=result, y_true_col_names="COUNT",
                         y_pred_col_names="PREDICTED_COUNT")
evs = explained_variance_score(df=result, y_true_col_names="COUNT",
                               y_pred_col_names="PREDICTED_COUNT")
mae = mean_absolute_error(df=result, y_true_col_names="COUNT",
                          y_pred_col_names="PREDICTED_COUNT")
mape = mean_absolute_percentage_error(df=result, y_true_col_names="COUNT",
                                      y_pred_col_names="PREDICTED_COUNT")
d2aes = d2_absolute_error_score(df=result, y_true_col_names="COUNT",
                                y_pred_col_names="PREDICTED_COUNT")
d2ps = d2_pinball_score(df=result, y_true_col_names="COUNT",
                        y_pred_col_names="PREDICTED_COUNT")

print(f"Mean squared error: {mse}")
print(f"explained_variance_score: {evs}")
print(f"mean_absolute_error: {mae}")
print(f"mean_absolute_percentage_error: {mape}")
print(f"d2_absolute_error_score: {d2aes}")
print(f"d2_pinball_score: {d2ps}")
The preceding code calculates various performance metrics to assess the accuracy of the linear regression model’s predictions. The mean squared error, explained variance score, mean absolute error, mean absolute percentage error, D2 absolute error score, and D2 pinball score are computed from the actual (COUNT) and predicted (PREDICTED_COUNT) values stored in the result DataFrame:
Figure 5.17 – Performance metrics
Together, these metrics provide a comprehensive evaluation of the model across different aspects of prediction accuracy.
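If you prefer an error measure in the same units as COUNT, the root mean squared error can be derived from the mean squared error computed previously. A small sketch:

import numpy as np

# RMSE is the square root of the mean squared error computed earlier
rmse = np.sqrt(mse)
print(f"Root mean squared error: {rmse}")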
Model results and efficiency
The model metrics presented here may not be exceptional. It’s crucial to emphasize that the primary objective of this case study is to walk through the model-building process and highlight the facilitative role of Snowpark ML; the focus of this chapter has been on illustrating the construction of a linear regression model rather than on maximizing predictive accuracy.
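One way to improve on these results without changing the rest of the workflow is to swap in a different estimator, since Snowpark ML estimators share the same column-based interface. The following sketch is not part of the case study and assumes that the snowflake.ml.modeling.xgboost module is available in your version of Snowpark ML:

from snowflake.ml.modeling.xgboost import XGBRegressor

# Same input, label, and output columns as the linear model; only the estimator changes
xgb_regressor = XGBRegressor(
    input_cols=CATEGORICAL_COLUMNS_OHE + FEATURE_LIST,
    label_cols=LABEL_COLUMNS,
    output_cols=OUTPUT_COLUMNS,
)
xgb_regressor.fit(train_df)
xgb_result = xgb_regressor.predict(test_df)
xgb_result.show()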
The efficiency of Snowpark ML
The first thing that stands out about the model-building process in Snowpark ML is its well-thought-out design. In a notable departure from the conventional approach, Snowpark ML closely mirrors the streamlined methodology of scikit-learn, and a significant advantage is that it eliminates the need to create separate user-defined functions (UDFs) and stored procedures, simplifying the entire model-building workflow.
It’s worth recognizing that Snowpark ML follows conventions similar to scikit-learn’s, so the model construction process will feel familiar. A noteworthy distinction is that scikit-learn requires data to be passed as a pandas DataFrame, so if you build the model with scikit-learn directly, the Snowflake table must first be converted into a pandas DataFrame before you can initiate the model-building phase. With substantial datasets, this raises memory concerns: converting a large table into a pandas DataFrame demands a significant amount of memory because the entire dataset is loaded into the client’s memory.
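To make the comparison concrete, the scikit-learn route would look roughly like the following sketch. It assumes scikit-learn is installed, skips the encoding, scaling, and train/test split steps for brevity, and pulls the entire table into the client’s memory via to_pandas():

from sklearn.linear_model import LinearRegression

# Load the whole Snowflake table into a local pandas DataFrame (memory-intensive)
pdf = session.table("BSD_TRAINING").to_pandas()

# Illustrative feature selection mirroring the Snowpark ML example
feature_cols = ["HOLIDAY", "WORKINGDAY", "HUMIDITY", "TEMP", "ATEMP", "WINDSPEED"]
X = pdf[feature_cols]
y = pdf["COUNT"]

# Training now happens on the client, outside the Snowflake warehouse
sk_model = LinearRegression()
sk_model.fit(X, y)
pdf["PREDICTED_COUNT"] = sk_model.predict(X)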
In contrast, Snowpark ML provides a more native and memory-efficient approach to the model-building process. This native integration with Snowflake’s environment not only enhances the efficiency of the workflow but also mitigates memory-related challenges associated with large datasets. The utilization of Snowpark ML emerges as a strategic and seamless choice for executing complex model-building tasks within the Snowflake ecosystem.