Introduction to AML
Within machine learning (ML), a distinct subfield known as adversarial machine learning (AML) has emerged. It focuses on learning from datasets that may be contaminated with adversarial samples, that is, samples introduced by entities intent on undermining the integrity of the ML process for their own benefit. Because the predictions of ML algorithms cannot be fully anticipated, they introduce a new layer of security risk in today's data-centric systems. As data increasingly drives predictive services such as spam filters and voice assistants, the effectiveness of a learning model becomes closely tied to the integrity of its data sources. Unfortunately, data sources are frequently compromised, whether through insider fraud, deliberate manipulation, or the natural degradation of devices, and adversarial entities can exploit these systems by altering the data they consume.
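To make the idea of contaminated training data concrete, the following minimal sketch shows how a label-flipping attack on a toy spam filter can change its predictions. It assumes scikit-learn is available; the tiny corpus, labels, and choice of which samples to poison are hypothetical and purely illustrative, not a description of any specific attack from the literature.

```python
# Minimal sketch of training-data poisoning via label flipping.
# Assumes scikit-learn is installed; corpus and labels are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training corpus: 1 = spam, 0 = legitimate mail.
texts = [
    "win a free prize now", "cheap meds limited offer",
    "claim your reward today", "exclusive deal just for you",
    "meeting rescheduled to friday", "please review the attached report",
    "lunch tomorrow at noon", "quarterly results look solid",
]
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])

vec = CountVectorizer()
X = vec.fit_transform(texts)

# Clean model: trained on trustworthy labels.
clean_model = MultinomialNB().fit(X, labels)

# Poisoned model: an adversary with access to the data pipeline flips
# the labels of some spam samples so they appear legitimate.
poisoned = labels.copy()
poisoned[[0, 1]] = 0  # label-flipping attack on two spam messages
poisoned_model = MultinomialNB().fit(X, poisoned)

test = vec.transform(["free prize offer just for you"])
print("clean model   :", clean_model.predict(test))     # expected: spam (1)
print("poisoned model:", poisoned_model.predict(test))  # may now say legitimate (0)
```

Even this small example shows how a model's output hinges on the trustworthiness of its training labels rather than on the learning algorithm alone.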
Nowadays, ML has become a crucial component of numerous IT systems. Despite its significant benefits, however, such systems have inherent weaknesses...