Designing Monitored Data Workflows
Logging is a good practice that helps developers debug faster and maintain applications and systems more effectively. There is no strict rule for when to insert logs, but it is important to avoid flooding your monitoring or alerting tool: emitting log messages unnecessarily obscures the moments when something significant actually happens. That is why it is crucial to understand best practices before adding logs to your code, as the sketch below illustrates.
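To make the point concrete, here is a minimal sketch using Python's standard logging module. The logger name, level, and format string are illustrative assumptions rather than a fixed convention; the idea is simply that choosing an appropriate level filters out noise so that significant events stand out:

```python
import logging

# Illustrative configuration: INFO level suppresses DEBUG noise,
# while the format adds a timestamp, level, and logger name to each message.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
)

# "data_pipeline" is a hypothetical logger name for this sketch.
logger = logging.getLogger("data_pipeline")

logger.debug("Row-level details, filtered out at INFO level")  # not emitted
logger.info("Ingestion started for source 'sales'")            # emitted
logger.warning("Input file is larger than expected")           # emitted
```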
This chapter will show how to create efficient, well-formatted logs for data pipelines using Python and PySpark, with practical examples that can be applied to real-world projects.
In this chapter, we have the following recipes:
- Inserting logs
- Using log-level types
- Creating standardized logs
- Monitoring our data ingest file size
- Logging based on data
- Retrieving SparkSession metrics