The Innovator’s Dilemma Comes For The Air Force

You don’t seem to understand what AI is all about. An autopilot follows strict rules: it does more or less exactly what it was programmed to do, based on its input. If the input is wrong, it will do the wrong thing. If it isn’t programmed for a particular input, it will either do nothing or behave unpredictably.
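To make that concrete, here’s a toy sketch of rule-based control in Python. The pitch-hold autopilot, its thresholds, and the function name are all hypothetical, not taken from any real flight system:

# A minimal sketch of rule-based control, with made-up rules.
def autopilot_pitch_command(altitude_error_m):
    # Strict, pre-programmed rules: each input range maps to a fixed response.
    if altitude_error_m > 50:
        return "pitch_down"
    elif altitude_error_m < -50:
        return "pitch_up"
    elif -50 <= altitude_error_m <= 50:
        return "hold"
    # An input the rules never anticipated (e.g. a NaN from a bad sensor)
    # matches no branch and falls through -- the "does nothing or
    # something unpredictable" case.

print(autopilot_pitch_command(120))           # pitch_down
print(autopilot_pitch_command(float("nan")))  # None: no rule matches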

AI doesn’t follow a pre-programmed path; it learns from data. Feed it wrong or insufficient data, and it will do the wrong thing when presented with input. It doesn’t reason and it doesn’t understand; it only does what it has learned was the most successful way to respond to the input. Train it on a large amount of valid data, and it will usually do the correct thing, even when presented with a new, never-before-seen scenario, based on its accumulated “experience” or “knowledge”. Basically, it does the most likely correct thing given all the data it has access to. Pretty much like humans.
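Here’s an equally toy sketch of learning from data, again with made-up numbers: a 1-nearest-neighbour “model” that has no rule for any specific input, but answers by analogy with the closest example it was trained on:

# A minimal sketch of learning from examples instead of fixed rules,
# using hypothetical training pairs.
training_data = [
    # (airspeed_knots, altitude_error_m) -> learned response
    ((250, 120), "pitch_down"),
    ((250, -120), "pitch_up"),
    ((250, 0), "hold"),
]

def predict(sample):
    # Return the response attached to the nearest training example.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_data, key=lambda pair: distance(pair[0], sample))[1]

# A combination never seen in training still gets the "most likely
# correct" answer, generalised from accumulated examples.
print(predict((240, 95)))  # pitch_down

Real systems use far more data and far more elaborate models, but the principle is the same: the response comes from accumulated examples, not from an explicit rule for that input.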

But: garbage in, garbage out. That applies to all programs, AI or otherwise. Including humans. There will never be a perfect program, AI or otherwise, and there are no perfect humans either. The advantage AIs have is that they can access far more data than humans and process it far faster. They may still be biased, like humans, if they were trained on data that was biased (by humans), but they don’t get emotional or distracted.
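Reusing the hypothetical sketch above, garbage in, garbage out is easy to demonstrate: corrupt the training labels and the exact same learning mechanism faithfully learns the wrong answers:

# Same toy model, but the (made-up) training labels are wrong.
# The mechanism is unchanged; the output is now garbage too.
bad_training_data = [
    ((250, 120), "pitch_up"),    # mislabeled
    ((250, -120), "pitch_down"), # mislabeled
    ((250, 0), "hold"),
]

def predict_bad(sample):
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(bad_training_data, key=lambda p: distance(p[0], sample))[1]

print(predict_bad((240, 95)))  # pitch_up: faithfully reproduces the bad data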

There are always pros and cons to everything, but a properly trained AI can do a lot of things that seem quite intelligent, even though no existing AI is truly intelligent in the way we consider humans to be. But who knows when that will change, and what it will mean (for society, and for us humans in particular).