Using the Debugger Insights Dashboard
When working on ML requirements, ML practitioners may run into a variety of issues before arriving at a high-performing ML model. Like software development and programming, building ML models involves a fair amount of trial and error. Developers generally rely on a variety of debugging tools to troubleshoot issues and implementation errors when writing software applications. Similarly, ML practitioners need a way to monitor and debug training jobs when building ML models. Luckily for us, Amazon SageMaker has a capability called SageMaker Debugger that allows us to troubleshoot different issues and bottlenecks when training ML models:
Figure 6.24 – SageMaker Debugger features
The preceding diagram shows the features that are available when we use SageMaker Debugger to monitor, debug, and troubleshoot a variety of issues that affect an ML model’s performance. These include the data capture capability...
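To give an idea of how these capabilities are wired into a training job, here is a minimal sketch using the SageMaker Python SDK that attaches a few of Debugger's built-in rules and a data capture hook to an estimator. The image URI, S3 bucket, IAM role, and instance settings are placeholder assumptions for illustration only and are not tied to the examples in this chapter:

```python
from sagemaker.debugger import DebuggerHookConfig, Rule, rule_configs
from sagemaker.estimator import Estimator

# Placeholder values -- replace with your own image, role, and bucket.
image_uri = "<training-image-uri>"
role = "<execution-role-arn>"
s3_output = "s3://<your-bucket>/debugger-output"

# Built-in Debugger rules that evaluate the captured tensors while the job runs.
rules = [
    Rule.sagemaker(rule_configs.loss_not_decreasing()),
    Rule.sagemaker(rule_configs.overfit()),
    Rule.sagemaker(rule_configs.vanishing_gradient()),
]

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    # Tell Debugger where to store the tensors it captures during training.
    debugger_hook_config=DebuggerHookConfig(s3_output_path=s3_output),
    rules=rules,
)

# estimator.fit({"train": "s3://<your-bucket>/train"})  # start the training job
```

Once the training job is running, the rule evaluation statuses and the captured data can be inspected from SageMaker Studio.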