Defensive mechanisms
In the previous sections, we introduced a general framework for adversarial threat modeling and surveyed various types of attacks. These attacks group naturally by the adversary's security objective: violating confidentiality (privacy), integrity, or availability. This grouping aligns with the staged approach common in cybersecurity, in which a system is built up and hardened incrementally. We will therefore design and develop defensive mechanisms in a similar fashion, but restrict the scope of discussion to ML models; a systematic treatment of securing and hardening complete AI systems is beyond our discussion.
AI model security is a dynamic, ongoing process aimed at safeguarding the integrity, confidentiality, and availability of AI systems. It is not a one-time goal but a continuous effort that evolves and strengthens over time...