One subject I talk about frequently at conferences and in print is the risk artificial intelligence poses in terms of trust and control. I'm not talking here about AI running amok, but rather about whether AI can be depended on.
It is quite interesting that the sort of AI we have been building lately, specifically artificial neural networks, does something very little other computer software does.
Given the exact same inputs and conditions, the output of an AI system is not always the same: run it twice and it will sometimes come up with a different answer.
The formal name for this behavior is non-determinism.
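To make the mechanism concrete, here is a minimal Python sketch of one commonly cited source of this behavior. On parallel hardware, the millions of floating-point additions inside a neural network can be accumulated in a different order on every run, and floating-point addition is not associative, so the "same" sum can land on different values. The sketch below simulates that by shuffling the accumulation order; it is a toy illustration of the mechanism, not a description of any particular framework's behavior.

```python
import random

# The same set of values, accumulated in different orders, can yield
# different floating-point totals, because float addition is not
# associative: (a + b) + c may differ from a + (b + c).
values = [1e16, 1.0, -1e16, 1.0] * 1000

def shuffled_sum(xs, seed):
    xs = xs[:]  # copy, so every run starts from identical input data
    random.Random(seed).shuffle(xs)
    total = 0.0
    for x in xs:
        total += x
    return total

# Same inputs, different accumulation order, different answers.
for seed in range(5):
    print(shuffled_sum(values, seed))
```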
There is a corollary to this: given the same inputs, the AI will also sometimes take a different amount of time to complete its task.
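The same kind of toy sketch works for the timing claim. Any process with internal randomness, for example one that keeps sampling until some condition is met, can take a different number of steps, and therefore a different amount of time, on every run, even though nothing about its input has changed. This illustrates one assumed mechanism, not the behavior of any specific AI system.

```python
import random
import time

# Draw random numbers until one falls below the threshold. The input
# (the threshold) never changes, but the number of draws -- and so the
# running time -- varies from run to run.
def sample_until(threshold, seed):
    rng = random.Random(seed)
    draws = 1
    while rng.random() >= threshold:
        draws += 1
    return draws

for seed in range(5):
    start = time.perf_counter()
    draws = sample_until(0.0005, seed)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"seed={seed}: {draws} draws, {elapsed_ms:.3f} ms")
```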
This is simply not normal behavior for a computer. We have gotten used to 2 + 2 = 4 on a pretty consistent basis.