Foundations of enterprise AI security
To develop a robust security framework for AI, we must first build the foundations that drive it. These foundations map directly to the Identify function of the NIST CSF and act as a compass for organizations integrating AI technologies securely. They include the following:
- AI risk management integrated with existing processes to cover AI-specific risks. This includes the following aspects:
  - AI risks differ from traditional security risks and vary by context. Adversarial AI research has produced a vast array of threats, many of which may never occur outside lab conditions or may simply be irrelevant to the organization. A framework for evaluating adversarial risk, with guidelines for capturing what is relevant, is therefore crucial. It feeds into the activities that drive AI security, including threat modeling and testing, and it builds continuous learning and adaptation into security practices so they can respond to new adversarial techniques.
  - Other AI-specific risks include bias in decision-making...
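The relevance-screening idea in the first sub-point can be sketched as a toy triage routine. The scoring dimensions, weights, threshold, and threat names below are illustrative assumptions, not part of the NIST CSF or any published framework; a real risk register would use the organization's own criteria.

```python
from dataclasses import dataclass

@dataclass
class AdversarialThreat:
    """A candidate threat drawn from adversarial AI research (hypothetical fields)."""
    name: str
    feasibility: int    # 1-5: how practical the attack is outside lab conditions
    applicability: int  # 1-5: relevance to the organization's AI use cases
    impact: int         # 1-5: potential business impact if realized

def relevance_score(t: AdversarialThreat) -> int:
    """Simple multiplicative score; higher means worth threat modeling and testing."""
    return t.feasibility * t.applicability * t.impact

def triage(threats, threshold=27):
    """Keep only threats whose score clears the threshold, highest first."""
    return sorted(
        (t for t in threats if relevance_score(t) >= threshold),
        key=relevance_score,
        reverse=True,
    )

# Example inputs (illustrative values only).
threats = [
    AdversarialThreat("prompt injection", 5, 4, 4),
    AdversarialThreat("white-box gradient evasion", 2, 2, 3),
    AdversarialThreat("training-data poisoning", 3, 4, 4),
]
for t in triage(threats):
    print(t.name, relevance_score(t))
```

The point is not the arithmetic but the discipline: making relevance explicit lets the organization filter lab-only attacks out and feed the remainder into threat modeling, while revisiting the scores as adversarial techniques evolve.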