An Alaska Airlines Boeing 737 landing at Yakutat, Alaska, on Saturday struck and killed a black bear, one of two crossing the runway. According to reports, airport crews had “cleared the runway” some 10 minutes before the jet was to arrive and had noted no wildlife. The bears were seen by the cockpit crew during the rollout after landing, just prior to impact.
So what happens when a two-engine whatever full of revenue PAX hits a flock of birds and both engines flame out? Are these things gonna be programmed to land in the Hudson?? And what about the moral or ethical decisions that’d be required? Flying under normal rules and in normal circumstances is one thing; flying under the duress of mechanical or weather-related problems is entirely different.
For me – personally – as soon as there’s no one up front, that’s when I stop flying commercially. It’s one thing to take the pilotless trains to the A and B terminals at MCO. It’s entirely another to mail myself in an aluminum tube sans pilot.
I would be surprised if AI never evolves to the point of an autonomous airliner being a technical possibility. The real question is whether they will ever be ethically or legally possible, never mind whether anyone would actually want to pay to fly on one (I certainly would not). I’m sure they’d be perfectly fine as long as everything is operating normally, but what happens when everything is not normal? And what happens if one or more passengers die as a result? Who is legally and ethically responsible for those deaths? The airline, the manufacturer, the AI programmer? And that’s not even getting into the philosophical question of what’s the point of living if everything is automated (in the year 2525).
Long-suffering readers of this space are familiar with one of many YARSisms: “The very best implementation of a flawed concept is, itself, fatally flawed.” In my experience, most flawed concepts have their nexus in flawed premises. In the case of the linked video, the first flawed premise is that flying an airplane requires thought. The syllogism that this premise spawns goes something like this:
• Flying an airplane requires thought
• Machines cannot think, therefore
• Machines cannot fly airplanes
And yet they can. In fact, they do. Garmin’s Autoland is just one example.
In his absolutely excellent essay, Paul says: “In the headline, I said whether a computer can replace a pilot or not is a trivial question.” Actually, Paul’s headline asked: “Can a computer think like a pilot?” Am I picking nits? Is this a distinction without a difference? Not at all. In fact, my fundamental argument consistently has been that machines can replace pilots, precisely because flying an airplane does not require thought.
The video’s author runs full tilt with the flawed premise that flying requires thought, then attempts to convince the viewer that “machine thinking” is – and forever will be – incapable of emulating the infinite capabilities of the magnificent human mind. Paul does a great job summarizing the author’s sentiments, when he says: “We lack the imagination to accept that a machine can think because we believe only humans can do that. Only humans can recognize and respond to a novel situation beyond a computer programmer’s limited ability to account for everything. Only humans can triage closely spaced decision options and pick the right one.”
Let me pose this question: If flying an airplane can be accomplished without any thought, how germane is any computer’s lack of ability to think? Garmin’s example answers: “not at all.”
The author of the video asserts – without foundation – that Artificial Intelligence would be the way that engineers like me would attempt to do the impossible. Another flawed concept, spawned by another flawed premise.
Machine Learning probably has a place onboard certain military aircraft (warning: Skynet). But employing AI/ML aboard civil aircraft is a conceptually flawed idea (refer to earlier-cited YARSism). Why?
Because machine learning by definition includes the autonomous altering of instruction sets, as a consequence of the individual machine’s real-world experiences. Think back to your well-worn copy of The Fundamentals of Instruction: “What is Learning? Learning is a change in behavior that occurs as a result of experience.” And there’s the fatal flaw: the behavior of the machine will change, as a result of what it “learns.” Bad enough if there’s one self-taught machine in the sky. Chaos if there are thousands.
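As a purely illustrative sketch of that fatal flaw (toy code, not drawn from any real avionics, and every name in it is hypothetical), here is how two copies of the same online-learning controller, identical when they leave the factory, end up commanding different things after different “experiences”:

```python
import random

class OnlineLearner:
    """Toy controller whose pitch-command gain is updated from in-flight experience."""
    def __init__(self, gain=1.0, learn_rate=0.1):
        self.gain = gain
        self.learn_rate = learn_rate

    def command(self, error):
        # Behavior depends on the current (learned) gain.
        return self.gain * error

    def learn(self, error, observed_response):
        # Adjust the gain based on what this particular airframe experienced.
        self.gain += self.learn_rate * (observed_response - self.command(error))

# Two identical units leave the factory...
a, b = OnlineLearner(), OnlineLearner()

# ...but fly different routes and "learn" from different experiences.
random.seed(1)
for _ in range(1000):
    a.learn(error=1.0, observed_response=random.gauss(1.2, 0.3))  # gusty routes
    b.learn(error=1.0, observed_response=random.gauss(0.9, 0.1))  # calm routes

# Same input, different behavior: the fleet is no longer predictable.
print(a.command(1.0), b.command(1.0))
```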
Indulge me in another YARSism: “Predictability is the foundation of anticipation.” Anticipation of others’ behavior is what allows us to navigate what otherwise would be a world of chaos.
Thus, the very concept of using AI in an autonomous aircraft control system is flawed – fatally. Consequently, there’s no point in arguing about how good some particular implementation of AI is.
A good old Expert System design is both adequate and desirable. You might want to spend three or four minutes reading this 1,200-word piece as background.
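For a sense of what a “good old Expert System” looks like in miniature, here is a hedged sketch (hypothetical rules and names, nothing taken from any certified system): a fixed, ordered rule table that is written and reviewed on the ground and never changes in flight.

```python
# Minimal expert-system sketch: a fixed, ordered rule table.
# Every rule is written, reviewed, and certified on the ground;
# nothing in this table changes as a result of in-flight "experience."

RULES = [
    # (condition predicate,                      action)
    (lambda s: s["thrust_available"] == 0.0,     "ENTER_BEST_GLIDE_AND_SELECT_LANDING_SITE"),
    (lambda s: s["engines_out"] >= 1,            "EXECUTE_SINGLE_ENGINE_PROCEDURE"),
    (lambda s: s["cabin_altitude_ft"] > 10000,   "EMERGENCY_DESCENT"),
    (lambda s: True,                             "CONTINUE_NORMAL_FLIGHT_PLAN"),  # default rule
]

def decide(state: dict) -> str:
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(state):
            return action
    raise RuntimeError("unreachable: the default rule always matches")

# Identical software in every airframe gives identical, predictable decisions.
print(decide({"thrust_available": 0.0, "engines_out": 2, "cabin_altitude_ft": 8000}))
```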
Finally (hold your applause), Larry Stencil asked: “So what happens when a two-engine whatever full of revenue PAX hits a flock of birds and both engines flame out? Are these things gonna be programmed to land in the Hudson??”
Politely, these things will react to a total lack of thrust by managing a glide to the most benign available landing spot. In Sully’s case, that was the only open space within gliding distance – the Hudson River. But consider this: without casting any aspersions upon Sully’s abilities, he had the good fortune of daylight VMC conditions. A machine would be able to do the deed at night, in zero-zero weather – because it doesn’t have the human limitation of needing to be able to see.
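To put rough numbers on “managing a glide to the most benign available landing spot,” here is a back-of-the-envelope sketch; the altitude, L/D, candidate sites and “benignity” scores are all assumed for illustration, not taken from US 1549 data.

```python
def glide_range_nm(altitude_ft: float, lift_to_drag: float) -> float:
    """Still-air glide range: distance = altitude * L/D, converted to nautical miles."""
    return (altitude_ft * lift_to_drag) / 6076.0  # feet per nautical mile

def best_site(altitude_ft, lift_to_drag, candidate_sites):
    """Pick the most benign site that is within glide range.

    candidate_sites: list of (name, distance_nm, benignity score) -- higher score is better.
    """
    reach = glide_range_nm(altitude_ft, lift_to_drag)
    reachable = [s for s in candidate_sites if s[1] <= reach]
    return max(reachable, key=lambda s: s[2]) if reachable else None

# Hypothetical dual flameout at 3,000 ft with an assumed L/D of 17:
sites = [("LGA Runway 13", 9.0, 1.0), ("Teterboro", 10.5, 0.9), ("Hudson River", 2.0, 0.4)]
print(glide_range_nm(3000, 17))   # roughly 8.4 nm of still-air reach
print(best_site(3000, 17, sites)) # the river is the only benign *reachable* option
```

None of this arithmetic requires looking out the window, which is the point about night, zero-zero conditions.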
YARS: For once, I agree with your fundamental premise. Flying does not require thinking. In fact, when humans inject too much thought into the process of flying, accidents tend to happen.
However, I have a slightly different way of thinking about the implementation of AI than you:
“And there’s the fatal flaw: the behavior of the machine will change, as a result of what it “learns.” Bad enough if there’s one self-taught machine in the sky. Chaos if there are thousands.”
What if the AI implementation were a collective learning process? What if all aircraft of the same type learned from each other’s experience via centralized AI? What if there were a way for aircraft of different types to learn from each other? This way, the aircraft would (in theory) act fairly predictably in most situations, especially situations where it’s critical that they act predictably (think congested airspace). Rather than a busy uncontrolled GA airport on the first sunny Saturday of the year, the whole sky resembles a military formation demo team: absolute precision, skills honed together, moving as one, knowing exactly what the other is about to do.
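A very rough sketch of that “collective learning” idea, assuming a hypothetical central service and made-up names: individual aircraft only report experience, and every airframe of the type receives the same vetted update, so instances stay identical between releases.

```python
from statistics import mean

class FleetModel:
    """One parameter set, shared by every aircraft of the type."""
    def __init__(self, params):
        self.params = dict(params)
        self.version = 1

class CentralLearner:
    """Collects experience from the whole fleet; publishes one vetted update for all."""
    def __init__(self, model: FleetModel):
        self.model = model
        self.reports = []

    def report(self, tail_number: str, observed_gain: float):
        # Aircraft upload experience; they do NOT modify their own copy in flight.
        self.reports.append((tail_number, observed_gain))

    def publish_update(self) -> FleetModel:
        # Aggregate everyone's experience, validate on the ground, then push
        # the identical new version to every airframe at once.
        new_gain = mean(g for _, g in self.reports)
        updated = FleetModel({**self.model.params, "gain": new_gain})
        updated.version = self.model.version + 1
        return updated

fleet = FleetModel({"gain": 1.0})
central = CentralLearner(fleet)
central.report("N101", 1.15)
central.report("N102", 1.05)
print(central.publish_update().params)  # every aircraft flies the same vetted parameters
```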
Now, there’s still a need to have individualized AI to allow the aircraft to learn how to operate in a degraded state and get on the ground safely, but then the greater type hive mind could learn from this experience too, in being able to more easily manage a similar degraded situation later on. I mean, I’m theorizing here, bigtime, and I’m far from an AI expert, but I like to try to look at concepts without the blinders of forcing them into our current paradigm.
“Without casting any aspersions upon Sully’s abilities, he had the good fortune of daylight VMC conditions. A machine would be able to do the deed at night, in zero-zero weather – because it doesn’t have the human limitation of needing to be able to see.” - Nail on the head. We hold up Sully and Skiles as the gold standard that is impossible to match in AI. But what did they actually accomplish relative to other crews? They managed the situation exactly as they needed to. Impeccable decision making, impeccable communication, impeccable execution. They’re only on a pedestal (though they do deserve to be on the pedestal) because they did all the right things when many (most?) human pilots would have screwed it up somehow. I’m of the belief that AI would be less prone to making those kinds of errors, and probably would improve the chances of a Miracle on the Hudson rather than reduce them. The AI wouldn’t fall into traps of overthinking. It wouldn’t be thinking about its kids. It wouldn’t have its life flash before its eyes. It wouldn’t be degraded by having lost a few hours sleep the night before. It wouldn’t hesitate because the seat of its pants disagreed with the instruments. It wouldn’t miss an instrument reading. It wouldn’t misunderstand some communication with its copilot. And the outcome would be exactly the same in night IMC as it would be in day VMC.
What you’re suggesting is this: using AI to come up with better software (in controlled, ground-bound circumstances) could be useful. Yes, in the software development environment, it could. But releasing any AI into the wild would be a fatal mistake, because each instance of the software quickly would become unique, thus rendering its behavior unpredictable, and thus unreliable. In order for this stuff to work reliably, EVERY airborne instance of the software must be identical.
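One hedged way to picture the “every airborne instance must be identical” requirement (hypothetical names, with a frozen parameter file standing in for the real software image): fingerprint the certified release and refuse to dispatch any airframe whose installed copy no longer matches it.

```python
import hashlib
import json

def release_fingerprint(software_image: bytes) -> str:
    """Fingerprint of the certified release that every aircraft must carry."""
    return hashlib.sha256(software_image).hexdigest()

def preflight_check(installed_image: bytes, certified_fingerprint: str) -> bool:
    """Refuse to dispatch if this airframe's software differs from the fleet's."""
    return release_fingerprint(installed_image) == certified_fingerprint

# Hypothetical example: the "software" here is just a frozen parameter file.
certified = json.dumps({"gain": 1.1, "version": 2}, sort_keys=True).encode()
fingerprint = release_fingerprint(certified)

print(preflight_check(certified, fingerprint))  # True: identical instance
drifted = json.dumps({"gain": 1.17, "version": 2}, sort_keys=True).encode()
print(preflight_check(drifted, fingerprint))    # False: this one "learned" on its own
```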
You opined: “…there’s still a need to have individualized AI to allow the aircraft to learn how to operate in a degraded state and get on the ground safely.” Absolutely NOT. Operating safely under ALL conditions is a requirement of the software, not some uncompleted task whose solution is to be “discovered” by each instance, ad hoc.
“Operating safely under ALL conditions is a requirement of the software, not some uncompleted task whose solution is to be “discovered” by each instance, ad hoc.”
To your point, YARS: who, even in their worst nightmares, could have conceived of QF32’s black swan event, let alone created and then taught this highly unlikely randomness to an intelligent automated system? Isn’t that ultimately a human limitation imposed upon AI?
Captain Richard de Crespigny made conscious choices prior to and during that flight which made the difference between success and failure. Some of those choices would have been learnable by AI; some maybe not. His lifestyle included studying aircraft systems several hours per day - all teachable and learnable for AI. Prior to the flight he let every pilot on board know they were part of the flight team, whether at rest or on active cockpit duty, and he did indeed enlist them all during the event - a moot point for AI. Taking inventory of systems after the malfunction, de Crespigny determined there was no possible way of knowing, from the information the aircraft could impart, how much of each system he had lost; but he did ascertain that he could divine (not necessarily determine) what he still had at his disposal, sufficient to complete the flight successfully by choosing exactly the right configuration and speeds and executing them accurately. If the aircraft has no built-in means of imparting 100% system status to any crew, human or otherwise, even under normal conditions, and no checklist covers the event, could AI have divined what it did and did not have remaining at its disposal under abnormal conditions, when the aircraft had no way of imparting that information to it?
Finally, knowing that he had only about a 5-knot approach-speed tolerance between too fast for the available runway and too slow aerodynamically, he decided he could hand-fly the approach more accurately than his automation could.
Whereas AI could learn to choose to land in the Hudson with all engines out and execute it well, is AI, and will it ever be, capable of the kind of judgement and, frankly, accuracy required of Captain de Crespigny? At some point, is an analog arm and wrist on a control stick and the remaining power levers, driven by a human brain, more capable of accuracy than a binary robotic arm driven by binary robotic intelligence?
You seem to have missed my primary assertion: AI is the WRONG approach to autonomous aircraft control systems.
You might want to read the article that I recommended over at airfactsjournal. It includes some mention of the ad hoc nature of impediments and mitigations, an understanding of which is required, in order to wrap your mind around the essence of coding autonomous systems.
Bravo YARS. The only point of disagreement I have in Paul’s typically excellent essay, is his description of the Redbird video as “clever and well thought out”. I found it neither.
Hahaha. Spend a few hours flying a pax jet in 121 and you’ll realize how far off full automation is from reality. Not. Even. Close.
From bad GS captures (or any nav-intercept issues, for that matter) to traffic avoidance, runways, taxiing at ORD or IAH. Runway contamination. Go-arounds. Random AP disconnects, random upsets, any random “why the hell did it do that?” moments. This stuff happens every single flight. The stick-and-rudder part, sure. The decision-making part? Nah, no way. Need eyeballs, and a human brain, for all that.
Prior to my airline job I was a Mech Eng and worked with automation all day. I too once thought we ought to be on the precipice of pilotless pax airplanes. Once I started flying for a living, I was shocked at how much human intervention is needed on every flight to keep the shiny side up. We have a lonnnnggg way to go, if ever.
Please bear in mind: Autonomous flight control systems have very little in common with autopilot/FMS systems. And machine decision-making is a very mature technology.
With respect, you asked the wrong question.
What you should have asked is whether machines can replace pilots for middle-of-the-night freight runs, which are harmful to pilots and disturb people on the ground. (Working nights increases your chances of dying from cancer by five times.)
And the answer is yes.
Of course the pilots’ unions are jumping up and down and swearing that only someone with the equivalent of a master’s degree can fly the things – they still have not absorbed the implications of the night-work findings for their members.
If nature were with the machines, there would be no need for F-16s, let alone for DARPA and Duplex. It would all happen automatically, or as the inevitable by-product of digital evolution. The scribes of the hitech tribe worship their own idols.
Conversely, the myth of the machine makes “ethical guidelines” irrelevant. For the actors are programmed to be malign, and have long since forfeited their humanity, if they ever had any to lose. That was Turing’s point: in the wake of Hitler, he preferred the company of computers, and figured our chances of survival were better if we traded bigotry and bullets for reason and reflection. If machines inherit the earth, it will be because we failed as a species, not because we made progress as savants.