Understanding risk in AI
One subject I talk about frequently at conferences and in print is the risk of AI in terms of trust and control. I’m not talking about AI running amok here, but rather about how to make AI dependable. It is quite interesting that the sort of AI we have been considering – specifically, ANNs – does something that very little other computer software does: given the same inputs and conditions, it will sometimes come up with a different answer. The formal name for this behavior is non-determinism.
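A minimal sketch of this idea, using a toy perceptron rather than a real ANN: two training runs on the exact same data start from different random weights, and both end up fitting the training set perfectly, yet they can disagree on an input the training data never covered. The data, weights, and probe point here are all illustrative, not from any real system.

```python
import random

def train_perceptron(data, epochs=100):
    # Random initial weights: each training run starts somewhere different.
    w = [random.uniform(-1, 1) for _ in range(2)]
    b = random.uniform(-1, 1)
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            # Standard perceptron update, only applied on mistakes.
            w[0] += 0.1 * err * x1
            w[1] += 0.1 * err * x2
            b += 0.1 * err
    return w, b

# Linearly separable toy data (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Two independent training runs: both fit the training data,
# but they learn different decision boundaries.
w1, b1 = train_perceptron(data)
w2, b2 = train_perceptron(data)
print("run 1 weights:", w1, b1)
print("run 2 weights:", w2, b2)

# A borderline point not in the training data: the two models
# may classify it differently, even though both were trained
# on identical inputs.
probe = (0.55, 0.5)
p1 = 1 if w1[0] * probe[0] + w1[1] * probe[1] + b1 > 0 else 0
p2 = 1 if w2[0] * probe[0] + w2[1] * probe[1] + b2 > 0 else 0
print("run 1 says:", p1, "| run 2 says:", p2)
```

The point is not the perceptron itself but the shape of the behavior: many different sets of weights are equally consistent with the training data, and which one you get depends on where the random initialization happened to start.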
There is a corollary to this. Given the same inputs, the AI process will sometimes take a different amount of time to complete its task. This is simply not normal behavior for a computer.
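The timing corollary can be illustrated with any algorithm that relies on randomness, not just an ANN. This hypothetical sketch times a randomized search for a fixed target: the answer is the same every run, but the number of guesses, and therefore the wall-clock time, varies from run to run.

```python
import random
import time

def randomized_search(target, low=0, high=10_000):
    # Guess until we hit the target. The result is always the same,
    # but the number of guesses needed differs on every run.
    guesses = 0
    while True:
        guesses += 1
        if random.randint(low, high) == target:
            return guesses

for run in range(3):
    start = time.perf_counter()
    n = randomized_search(42)
    elapsed = time.perf_counter() - start
    print(f"run {run}: found target after {n} guesses in {elapsed:.4f}s")
```

Run it a few times and the guess counts and timings spread widely. That variability in completion time is exactly the behavior conventional software engineering is not built to expect.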
Admittedly, we are not using AI to get answers to math questions such as 2+2, but rather how to do things such as diagnose a cancer tumor or recognize a pedestrian in a crosswalk for...