October 21
It would be silly to turn away from machine learning and AI weapons systems. It would be equally silly to think that humans have no flying role in tomorrow’s war. The future is unpredictable, especially if we wind up fighting a near-peer or peer adversary. The author also misunderstands conflict in a number of ways and to varying degrees. We need to augment rather than supplant capabilities.
October 21
▶ Jzarinnia
Excellent comment and analysis…
October 21
When the enemy’s robots have beaten our robots, will we accept the result? Or will we go out and have a fist fight to determine the real victors?
October 21
▶ Mavis
War is rarely about sending people out simply to duel with each other, with the last man standing being declared the victor.
It is about destroying or capturing the opposition’s resources until they capitulate. That is the traditional bomber pilot’s objective. The objective of the traditional fighter pilot is not to duke it out with the Red Baron, but to take out the opposition’s bomber. The Red Baron is there to stop that from happening.
In any case, this article is a thoughtful analysis. While there is a lot of pushback, in the name of safety, against fully automated airline cockpits, it is exactly that safety plea that also justifies removing the human from a weapon system cockpit so they are not put into harm’s way. Because so much aviation innovation has been derived from military innovation (where much of the R&D money was spent), perhaps leaps in integrity and reliability there will also accelerate greater automation in civil aviation?
October 21
AI is anything but new; an autopilot is one form of it. With every claim that AI will eliminate the role of a thinking human, I am reminded of autonomous Teslas crashing into things that were not included in the programmers’ expectations. I am also reminded of this exchange between reporters and the late, great General George Patton:
Reporters: “General, we’re told of ‘wonder weapons’ the Germans were working on: long-range rockets, push-button bombing… weapons that don’t need soldiers.”
Patton: “Wonder weapon? My God, I don’t see the wonder in them. Killing without heroics. Nothing is glorified, nothing is reaffirmed. No heroes, no cowards, no troops. No generals. Only those that are left alive and those that are left… dead.”
Take humans out of warfare and you remove the chance for humanity, and peace.
October 21
This concept was illustrated in The Terminator movies: Skynet.
October 21
The ol’ Albert Einstein quote fits in perfectly here:
I am not sure with which weapons the third world war will be fought, but in the fourth world war they will fight with sticks and stones.
October 21
▶ kent.misegades
You don’t seem to understand what AI is all about. An autopilot follows strict rules based on its input, as it is programmed to do. If the input is wrong, it will do the wrong thing. If it isn’t programmed for a particular input, it will either do nothing or do something unpredictable (usually something unexpected).
AI doesn’t follow a pre-programmed path; it learns from data. Feed it wrong or insufficient data, and it does the wrong things when presented with input. It doesn’t reason and it doesn’t understand; it only does what it has learned was the best, most successful way to respond to the input. Train it on a large amount of valid data and it will do the correct thing, even when presented with a new, never-before-seen scenario, based on its accumulated “experience” or “knowledge”. Basically, it will do the most likely correct thing, based on all the data it has access to. Pretty much like humans.
But: garbage in, garbage out. That applies to all programs, AI or otherwise, and to humans as well. There will never be a perfect program, AI or otherwise, and there are no perfect humans, either. The advantage AIs have is that they can access much more data than humans and process it much faster. They might still be biased, like humans, if they are trained on data that was biased (by humans), but they don’t get emotional or distracted.
There are always pros and cons to everything, but a properly trained AI can do a lot of things that seem quite intelligent, even though no (existing) AI is truly intelligent in the way we consider humans to be intelligent. But who knows when that will change, and what that will mean (for society and us humans in particular).
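To make the contrast concrete, here is a toy Python sketch (illustrative only; the function names, gains, and data points are hypothetical, not from any real autopilot or AI system). The rule-based controller applies a fixed, hand-programmed rule; the “learned” one just generalizes from past examples:

```python
# Toy contrast: rule-based autopilot vs. a "learned" controller.
# Everything here is illustrative, not from any real system.

def rule_based_pitch(altitude_error_ft: float) -> float:
    """Fixed rule, hand-tuned by a programmer: proportional response,
    clamped to safe limits. Predictable on expected input; it has no
    answer at all for situations the programmer never anticipated."""
    GAIN = 0.01  # degrees of pitch per foot of altitude error
    return max(-5.0, min(5.0, GAIN * altitude_error_ft))

# A "learned" controller has no rules, only examples ("experience").
# Its answers are only as good as its training data.
training_data = [  # (altitude_error_ft, pitch_command_deg) pairs
    (-500.0, -5.0), (-100.0, -1.0), (0.0, 0.0), (100.0, 1.0), (500.0, 5.0),
]

def learned_pitch(altitude_error_ft: float) -> float:
    """Nearest-neighbor lookup: respond the way the most similar past
    example was handled. No reasoning or understanding, only recall."""
    nearest = min(training_data, key=lambda ex: abs(ex[0] - altitude_error_ft))
    return nearest[1]

if __name__ == "__main__":
    for err in (50.0, -250.0, 10_000.0):  # last input was never trained on
        print(err, rule_based_pitch(err), learned_pitch(err))
```

Feed learned_pitch an input far outside anything it was trained on and it still confidently returns the closest thing it has seen, right or wrong. That is “garbage in, garbage out” in miniature.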
October 21
▶ Andreas
Well, I may be an older guy (67), but I have been around the block a few times. When I started working at Cray Research (supercomputers) in 1984, one of the big efforts there was AI, which required knowledge of LISP. It was a big deal in the mid-80s, but not much more than a flash in the pan, similar to all the euphoria over the use of Nvidia GPUs for this latest round of AI. It is the same for my wife’s new, supposedly AI-driven washing machine from GE (really Haier, i.e., the Chicoms), which is also Energy Star approved. It is an expensive piece of junk that runs forever, makes a lot of noise as it attempts to self-adapt its cycle to the load, and does a terrible job of cleaning clothes. Her first washer, a no-name German brand from when we lived there, did a much better job, because the user (a sharp housewife like my girl) made most of the decisions. Wish we could get one of those HI (Human Intelligence) washers again. Humans with experience from the School of Hard Knocks will never be replaced by computers, as we are God-inspired. That goes especially for pilots. That’s my story, and I’m sticking to it.
October 21
▶ andresmith76
Remember, Genesis is Skynet.
Prophetic? It is looking that way.
October 21
There is an episode of the original Star Trek in which a computer (the M5) was installed in the Enterprise and, on its own, decided another starship was an enemy and attacked it. That computer was installed to replace human crew members. There may be some truth to the author’s article, but I would prefer some safeguards that can’t be overruled.
October 22
The author just said the quiet part out loud, something we have known for more than 20 years: crewed combat aircraft will eventually be the exception rather than the rule. This doesn’t mean the human element and decision making go away; they may take the form of programming, remote operation, and tactics. But expensive aircraft built to enable a human to occupy and operate the airplane will be less effective in the battle skies than uncrewed AI aircraft. The Pentagon needs to embrace this and think logically rather than emotionally; our national security depends on it.
October 22
▶ Jzarinnia
‘AI’ has to be done well. Is that possible?
You do use ‘machine learning’; there are other terms that also predate the ‘AI’ hype word.
October 22
▶ RationalityKeith
True, I do use the hype words from the clickbait. Really, the point applies to any new technology. The idea I’m trying to convey is augmenting rather than replacing capabilities, so that the best capabilities are available. We still equip our fighters with bubble canopies despite having amazing sensor suites, for example.
October 22
Yes, this is a really cool article. Nicely written, and I hope that the author continues to contribute.
October 22
Isaac Asimov wrote a story, The Feeling of Power, in which our distant descendants have lost the knowledge to perform any kind of mathematical calculations and have become completely dependent on computers. They are engaged in a stalemated war with a civilization from another part of the galaxy, fought with computer-operated weapons. Then someone rediscovers how to do arithmetic. That ability is quickly put to use for building human-piloted weapons, which will give Earthlings a great advantage in the war because, to a society so dependent on computers, a “man is much more dispensable than a computer. Manned missiles could be launched in numbers and under circumstances that no good general would care to undertake as far as computer-directed missiles are concerned.”