Model estimation
Once the feature sets were finalized in the previous section, the next step is to estimate the parameters of the selected models. For this we used SPSS on Spark and R notebooks in the Databricks environment, as well as MLlib directly on Spark. To keep the workflow better organized, however, we concentrated on consolidating all the code into R notebooks and SPSS Modeler nodes.
As mentioned earlier, this project also includes some exploratory analysis to produce descriptive statistics and visualizations, for which MLlib code can be used directly; R code also gave us quick, good results.
To estimate the models efficiently, we need distributed computing, especially in this case, where models are estimated for many combinations of location and customer segment.
For the distributed-computing setup itself, refer to the earlier chapters; here we will use SPSS Analytics Server with Apache Spark...