Threat modeling for AI
We discussed threat modeling in Chapter 3 and explained that it is a structured process for identifying threats to an AI system and evaluating their potential impact. As a reminder, it involves the following aspects:
- Mapping the system: This involves creating a detailed description of the AI system, including its components, data flows, and interfaces. The starting point is usually our solution architecture, annotated with data flows, critical assets, and trust boundaries.
Trust boundaries in threat models delineate where an organization's security controls and policies are enforced, separating trusted zones from untrusted ones. For instance, the interface between our APIs and the public internet is a trust boundary. Similarly, an external authentication provider sits outside our trust boundary: we are not responsible for that system's security policies and controls.
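The mapping step described above can be sketched as a small data structure: components flagged as inside or outside our trust boundary, data flows between them, and a helper that surfaces the flows crossing a boundary (the places where controls must be enforced). This is a minimal illustrative sketch, not a prescribed tool; all component names (inference API, model server, external auth provider) are hypothetical examples.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    trusted: bool  # True if inside our trust boundary

@dataclass(frozen=True)
class DataFlow:
    source: Component
    target: Component
    description: str

def crossing_flows(flows):
    """Return the flows that cross a trust boundary (trusted <-> untrusted)."""
    return [f for f in flows if f.source.trusted != f.target.trusted]

# Hypothetical AI system map for illustration only
internet = Component("public-internet", trusted=False)
auth = Component("external-auth-provider", trusted=False)
api = Component("inference-api", trusted=True)
model = Component("model-server", trusted=True)

flows = [
    DataFlow(internet, api, "user prompts"),
    DataFlow(api, model, "sanitized prompts"),
    DataFlow(api, auth, "token validation"),
]

for f in crossing_flows(flows):
    print(f"{f.source.name} -> {f.target.name}: {f.description}")
```

Each flow that `crossing_flows` returns is a candidate point for security controls such as input validation, authentication, and rate limiting, which feeds directly into the next step of identifying threats.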
- Identifying threats and vulnerabilities: We identify...