Model estimation
With the feature sets finalized in the last section, the next step is to estimate the parameters of the selected models. For this, we adopted a flexible approach, using SPSS on Spark, R notebooks in the Databricks environment, and MLlib directly on Spark. To keep the workflow better organized, we focused our effort on consolidating all the code into R notebooks and on coding SPSS Modeler nodes.
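As a rough illustration of what such an estimation step could look like in a SparkR notebook, the sketch below fits an MLlib generalized linear model through the SparkR API; the table name, feature columns, and logistic-regression formula are hypothetical placeholders, not the project's actual model.

    library(SparkR)

    # Start (or attach to) a Spark session; on Databricks one is usually already available
    sparkR.session(appName = "model-estimation")

    # Hypothetical training table holding the finalized feature set
    train <- sql("SELECT * FROM training_features")

    # Fit a generalized linear model with MLlib through the SparkR API
    # (formula and family are placeholders for the actual selected model)
    model <- spark.glm(train, enrolled ~ income + visits + segment,
                       family = "binomial")

    # Inspect the estimated coefficients
    summary(model)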
As mentioned earlier, this project also requires some exploratory analysis, both for descriptive statistics and for visualization. For this, the MLlib code can be applied directly, and the R code also produced quick, good results.
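A minimal sketch of that kind of exploratory step is shown below, assuming a hypothetical SparkDataFrame of student records; the table and column names are placeholders.

    library(SparkR)

    # Hypothetical table with the raw study data
    students <- sql("SELECT * FROM student_records")

    # Descriptive statistics (count, mean, stddev, min, max) for selected columns
    showDF(describe(students, "income", "visits"))

    # Pull a small sample back to the driver for plotting with base R
    local_sample <- collect(sample(students, withReplacement = FALSE, fraction = 0.01))
    hist(local_sample$income,
         main = "Income distribution (1% sample)",
         xlab = "income")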
For the best modeling results, we need to arrange distributed computing, especially in this case, where various locations are combined with various parent customer segments. In the United States, there are 13,506 school districts across 50 states, and the differences between states are quite large. For this distributed...
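One way to arrange such per-state estimation in SparkR is to fit a separate model for each group in parallel with gapply; the table, columns, and simple linear model in the sketch below are hypothetical placeholders, not the project's actual specification.

    library(SparkR)

    # Hypothetical feature table keyed by state and school district
    features <- sql("SELECT * FROM district_features")

    # Schema of the per-group output: one row of coefficient estimates per state
    result_schema <- structType(
      structField("state", "string"),
      structField("intercept", "double"),
      structField("income_coef", "double")
    )

    # Fit a separate (local) model for each state, distributed across the cluster
    state_models <- gapply(
      features,
      "state",
      function(key, pdf) {
        fit <- lm(enrolled ~ income, data = pdf)
        data.frame(state = key[[1]],
                   intercept = coef(fit)[1],
                   income_coef = coef(fit)[2])
      },
      result_schema
    )

    showDF(state_models)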