Attacking artificial intelligence and machine learning
Continuing the previous telemetry example, the idea of weaponizing data to manipulate outcomes is an increasingly critical adversarial tactic to understand and defend against.
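To make the idea of weaponized data concrete, here is a minimal sketch of one such technique: label-flipping data poisoning, where an attacker who can tamper with a slice of the training data (for example, ingested telemetry) degrades the resulting model. The dataset, model, and 20% poisoning rate are illustrative assumptions, not a specific real-world attack:

```python
# A minimal sketch of data poisoning via label flipping, assuming the
# attacker can tamper with part of the training data before the model
# is fit. The synthetic dataset and model choice are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 20% of the training labels by flipping them (binary 0/1).
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=len(poisoned_y) // 5, replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

With these illustrative settings, the poisoned model's test accuracy typically degrades relative to the clean baseline, which is exactly the kind of outcome manipulation an attacker is after.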
Over the years, many stories where AI did not work as intended have made the news. Examples include image recognition systems that identified humans as animals, chatbots manipulated into using inappropriate language, and self-driving cars tricked into misidentifying lanes.
At a high level, there are two aspects to differentiate:
- Adversarial machine learning to manipulate or trick algorithms (a sketch of one such attack follows this list)
- Lack of security engineering around the technology and infrastructure that host and run those algorithms
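To illustrate the first aspect, below is a minimal sketch of a classic evasion technique, the Fast Gradient Sign Method (FGSM), which nudges each input pixel in the direction that increases the model's loss. The untrained stand-in classifier and the random "image" are assumptions for illustration; against a real trained model, the same small perturbation can flip a confident prediction:

```python
# A minimal sketch of an FGSM evasion attack. The model here is an
# untrained stand-in classifier (28x28 grayscale inputs, 10 classes);
# a real attack would target a trained model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def fgsm_attack(model, x, true_label, epsilon=0.1):
    """Perturb x in the direction that maximizes the loss for true_label."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), true_label)
    loss.backward()
    # Step each pixel by epsilon in the sign of its gradient, then clamp
    # back to the valid pixel range so the image stays plausible.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Illustrative use: a random "image" labeled as class 3.
image = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
adversarial = fgsm_attack(model, image, label)
print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The key point is that the perturbation is computed from the model's own gradients, so each pixel changes by only a tiny amount, yet the changes are coordinated to push the input across a decision boundary.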
Both are important to get right. Artificial intelligence technologies will fundamentally change society over the next decade, and there is a lot of potential for things to go in the wrong direction.