Understanding the need for testing and securing your ML application
The growing adoption of data-driven and ML-based solutions requires businesses to handle ever-larger workloads, exposing them to additional layers of complexity and vulnerability.
Cybersecurity is the most alarming risk for AI developers and adopters. According to a survey released by Deloitte in July 2020 (https://www2.deloitte.com/us/en/insights/focus/cognitive-technologies/state-of-ai-and-intelligent-automation-in-business-survey.html), 62% of adopters saw cybersecurity risks as a significant or extreme threat, but only 39% said they felt prepared to address those risks.
In this section, we will look into the need for securing ML-based systems and solutions. We will reflect on some of the broader challenges of ML systems, such as bias, ethics, and explainability. We will also study some of the challenges that arise at each stage of the ML life cycle relating to confidentiality, integrity, and availability.