The Innovator’s Dilemma Comes For The Air Force

The following was written by Neeraj Chandra, an IT expert who "has an interest in building artificial intelligence and software systems that benefit society."


This is a companion discussion topic for the original entry at https://www.avweb.com/insider/the-innovators-dilemma-comes-for-the-air-force

It would be silly to turn away from machine learning and AI weapons systems. It would be equally silly to think that humans will have no role flying in tomorrow’s wars. The future is unpredictable, especially if we wind up fighting a near-peer or peer adversary. The author also misunderstands conflict in a number of ways and to varying degrees. We need to augment, rather than supplant, human capabilities.

Excellent comment and analysis…

When the enemy’s robots have beaten our robots, will we accept the result? Or will we go out and have a fist fight to determine the real victors?

War is rarely about sending people out simply to duel with each other, with the last man standing being declared the victor.

It is about destroying or capturing the opposition’s resources until they capitulate. That is the traditional bomber pilot’s objective. The objective of the traditional fighter pilot is not to duke it out with the Red Baron, but to take out the opposition’s bomber. The Red Baron is there to stop that from happening.

In any case, this article is a thoughtful analysis. While there is a lot of pushback, in the name of safety, against fully automated airline cockpits, it is exactly that safety plea that also justifies removing the human from a weapon system cockpit and keeping them out of harm’s way. Because so much aviation innovation has been derived from military innovation (where much of the R&D money was spent), perhaps leaps in integrity and reliability made there will also accelerate greater automation in civil aviation?

AI is anything but new; an autopilot is one form of it. With every claim of how AI will eliminate the role of a thinking human, I am reminded of autonomous Teslas crashing into things that were not included in the programmers’ expectations. I am also reminded of this exchange between reporters and the late, great General George S. Patton (as depicted in the 1970 film Patton):

Reporters: “General, we’re told of ‘wonder weapons’ the Germans were working on: long-range rockets, push-button bombing... weapons that don’t need soldiers.”
Patton: “Wonder weapons? My God, I don’t see the wonder in them. Killing without heroics. Nothing is glorified, nothing is reaffirmed. No heroes, no cowards, no troops. No generals. Only those that are left alive and those that are left... dead.”

Take humans out of warfare and you remove the chance for humanity, and peace.

This concept was illustrated in The Terminator movies: Skynet.

The ol’ Albert Einstein quote fits perfectly here:

I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.

You don’t seem to understand what AI is all about. An autopilot follows strict rules, as it is more or less programmed to do, based on its input. If the input is wrong, it will do the wrong thing. If it isn’t programmed for a particular input, it will either do nothing or do something unpredictable (usually something unexpected).
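To make “follows strict rules” concrete, here is a toy sketch (all names, gains, and limits invented for illustration) of a rule-based altitude hold: a hand-written control law that only covers the inputs its programmer anticipated.

```python
# Toy rule-based "autopilot": a fixed, hand-written control law.
# The gains and limits are invented for illustration.

def altitude_hold(target_ft: float, current_ft: float) -> float:
    """Return a pitch command (degrees) from a fixed proportional rule."""
    error = target_ft - current_ft
    pitch_cmd = 0.01 * error          # rule chosen by the programmer
    # Inputs outside the anticipated envelope are simply clamped,
    # not "understood" -- the program has no notion of why.
    return max(-10.0, min(10.0, pitch_cmd))

print(altitude_hold(10_000, 9_500))   # 5.0 degrees nose up
print(altitude_hold(10_000, 12_000))  # clamped at -10.0
```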

AI doesn’t follow a pre-programmed path; it learns from data. Feed it wrong or insufficient data, and it will do the wrong thing when presented with new input. It doesn’t reason, it doesn’t understand; it only does what it has learned was the best, most successful way to respond to the input. Train it on a large amount of valid data, and it will do the correct thing even when presented with a new, never-before-seen scenario, based on its accumulated “experience” or “knowledge”. Basically, it will do the most likely correct thing, based on all the data it has access to. Pretty much like humans.
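By contrast, here is a toy sketch of “learns from data” (all numbers invented, plain least-squares standing in for a real learning algorithm): the rule is extracted from examples rather than hand-written, so it gives sensible answers near its training data but extrapolates blindly beyond it.

```python
# Toy "learned" model: fit examples, then query inputs it never saw.
# All data is invented; least-squares stands in for real training.
import numpy as np

# Training examples: airspeed (knots) -> observed pitch trim (degrees).
X = np.array([80.0, 100.0, 120.0, 140.0])
y = np.array([6.0, 4.0, 2.5, 1.5])

# Fit a line y = a*x + b from the data -- no hand-written rule.
A = np.column_stack([X, np.ones_like(X)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(a * 110.0 + b)  # ~3.5: a sensible answer for an unseen input
print(a * 300.0 + b)  # ~-10.75: far outside its data, nonsense out
```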

But: garbage in, garbage out. That applies to all programs, AI or otherwise, and to humans too. There will never be a perfect program, AI or otherwise, and there are no perfect humans, either. The advantage AIs have is that they can access much more data than humans and process it much faster. They might still be biased, like humans, if they are trained on data that was biased (by humans), but they don’t get emotional or distracted.

There are always pros and cons to everything, but a properly trained AI can do a lot of things that seem quite intelligent, even though no (existing) AI is truly intelligent in the way we consider humans to be intelligent. But who knows when that will change, and what that will mean (for society and us humans in particular).

Well, I may be an older guy (67), but I have been around the block a few times. When I started working at Cray Research (supercomputers) in 1984, one of the big efforts there was AI, which required knowledge of LISP. It was a big deal in the mid-80s, but not much more than a flash in the pan, similar to all the euphoria over the use of Nvidia GPUs for this latest round of AI. It is the same for my wife’s new, supposedly AI-driven washing machine from GE (Chicoms really, Haier), which is also Energy Star approved. It is an expensive piece of junk that runs forever, makes a lot of noise as it attempts to self-adapt its cycle to the load, and does a terrible job of cleaning clothes. Her first washer, a no-name German brand from when we lived there, did a much better job, because the user (a sharp housewife like my girl) made most of the decisions. Wish we could get one of those HI (Human Intelligence) washers again. Humans with experience from the School of Hard Knocks will never be replaced by computers, as we are God-inspired. That goes especially for pilots. That’s my story, and I’m sticking to it. :)

Remember, Genisys is Skynet.
Prophetic? It is looking that way.

There is an episode of the original Star Trek (“The Ultimate Computer”) in which a computer, the M-5, was installed aboard the Enterprise to replace human crew members; on its own, it decided other starships were enemies and attacked them. There may be some truth to the author’s article, but I would prefer some safeguards that can’t be overruled.