Bypassing security with adversarial AI
We have spent a lot of time securing our adversarial AI playground and our sample AI service. In this section, we will show that the traditional security controls we have applied are effective at protecting the environment and artifacts of an AI system, but not the logic embedded in its brain: the ML model.
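To make that gap concrete, the following is a minimal, hypothetical sketch of the kind of attack that targets the model's logic rather than its infrastructure, using the well-known Fast Gradient Sign Method (FGSM) as an illustration. The classifier, input image, and class index below are placeholders, not our ImRecS service; the point is that nothing in the hardened environment is touched, only the inputs the model sees.

```python
# Hypothetical sketch: an FGSM-style evasion attack against an image classifier.
# The model, image, and label are stand-ins; no real service or data is used.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder classifier, random weights
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input image
true_label = torch.tensor([404])                         # hypothetical class index

# Forward pass and loss with respect to the true class
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# FGSM: nudge every pixel a small step in the direction that increases the loss
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# The perturbation is barely visible to a human, yet the prediction can flip
print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

No firewall rule, access control, or integrity check on the deployment is violated here; the attack works entirely through a legitimately submitted input, which is exactly why it falls outside the traditional controls we have put in place.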
Our first adversarial AI attack
In this section, we will stage our first adversarial AI attack, taking advantage of AI itself to subvert how the model works, and demonstrate why the model must be covered when we secure a system or conduct a security risk assessment of it.
Imagine that our ImRecS solution detects airplanes and alerts the Border Control Forces to attempted intrusions. The web application would have to become real-time, but for our security discussion, that's not particularly important. Our service is hardened, and criminals cannot break in and tamper with our model to escape detection regarding illegal...