The typical design pattern for running models in SageMaker is to read data staged in S3. More often than not, that data is not readily consumable, and when the required datasets are large, wrangling them inside a Jupyter notebook is impractical. In such cases, Spark running on EMR clusters can be used to operate on big data.
Wrangling a big dataset in a Jupyter notebook quickly leads to out-of-memory errors. Our solution is to use AWS EMR (Elastic MapReduce) clusters to perform distributed data processing: HDFS, the Hadoop distributed filesystem, serves as the underlying storage layer, while Apache Spark serves as the distributed computing framework.
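As a minimal sketch of this pattern, the PySpark snippet below reads a large dataset directly from S3, performs a simple cleaning and aggregation step, and writes the result back to S3 for SageMaker training. The bucket, prefix, and column names are hypothetical placeholders, not values from this project.

```python
# A minimal PySpark sketch of wrangling that would not fit in a single
# notebook kernel's memory. Bucket and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("big-data-wrangling").getOrCreate()

# Read the raw dataset straight from S3; Spark distributes the read and all
# subsequent transformations across the EMR cluster's executors.
raw = spark.read.csv("s3://my-raw-data-bucket/events/",
                     header=True, inferSchema=True)

# Example wrangling: drop incomplete rows, then aggregate per customer.
clean = raw.dropna(subset=["customer_id", "amount"])
features = (
    clean.groupBy("customer_id")
         .agg(F.sum("amount").alias("total_amount"),
              F.count("*").alias("event_count"))
)

# Write the processed features back to S3 as Parquet, ready for training.
features.write.mode("overwrite").parquet(
    "s3://my-processed-data-bucket/features/")
```

Because the work is split across the cluster's executors, the dataset never has to fit into the memory of a single notebook instance.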
Now, to run commands against the EMR cluster and process big data interactively, AWS offers EMR Notebooks, a managed notebook environment based on Jupyter Notebook. These notebooks can be used to interactively run queries and Spark code against the attached cluster.
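To illustrate the interactive workflow, here is a short sketch of a cell as it might appear in an EMR notebook using the PySpark (Sparkmagic) kernel. In that kernel a SparkSession named `spark` is already provisioned and each cell is executed on the attached cluster; the S3 path is a hypothetical placeholder.

```python
# In an EMR notebook attached to the cluster, the PySpark (Sparkmagic) kernel
# runs each cell on the cluster, and a SparkSession named `spark` is already
# available -- no local setup is required. The S3 path below is hypothetical.
df = spark.read.parquet("s3://my-processed-data-bucket/features/")

# Interactive exploration executes on the cluster, not on the notebook host.
df.printSchema()
df.describe("total_amount").show()
```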