
Some of these questions have been answered by elevators.

Automatic elevators existed for many years, but most people didn’t trust them. Only elevators with a live, human operator on board were considered safe.

That held until the elevator operator unions went on strike, the biggest strike hitting NYC in 1945. The inconvenience of so many people climbing so many stairs in so many skyscrapers spurred the adoption and deployment of automatic (aka ‘driverless’) elevators over the next decade.

That doesn’t mean automatic elevators are perfect - they still malfunction and people die. But the main public perception nowadays is that they’re “safe” and convenient.

“Driverless” cars will likely be viewed the same way one day, though it will probably take a human generation after the technology matures.

As for “pilotless” aircraft, it will probably be a hybrid system of fully automated flight with a human “pilot/monitor” to keep tabs on things (and feed the dog…), similar to several modern partially and fully automated passenger train systems that still have an engineer on board ‘just in case’.

Granted, the current 100% automated trains are little more than ‘horizontal’ elevators and are far removed in complexity from piloting an airplane. But they help answer some of the legal and ethical questions about automated transport in general.

There’s a sardonic saying among AI academics and researchers: “artificial intelligence is always ten years away.” The more they tackle the problem, the more they learn about how difficult the problem is. In some ways it reminds me of this quote from research engineer and scientist Emerson W. Pugh: “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.”

That being said - I can imagine, at some point, computers and programs will get sophisticated enough to be considered artificial intelligence. But I think that point is farther away than most people realize. It’s easy to look back and see how far we’ve come. But it’s harder to look forward and see how much further we have to go.

People who worry about rogue AI should keep in mind the basis behind the method actor’s “what’s my motivation?” question. As in “why would my AI-driven car WANT to kill me?”

I envision the eventual development of general-purpose core “AGI engines”, hardware/software of scaled capacity operating under a shell application, an application which in turn constrains & directs the learning & optimizing efforts of the AGI core to serving the purposes of the application. A long way from a free-form ASI Skynet, which apparently had carte blanche to write its own application shell.

In any case, more concerning than rogue AI in my estimation is the problem of dealing with the ever-increasing percentage of humanity which really has no “application”, nothing to offer society and vice-versa. Existential angst, anyone?

When it becomes statistically safer to keep the pilot from interfering with the machine than to allow the pilot to take control from the machine, it will be safer to fly without a pilot. Such machines may well make fatal mistakes, as the Redbird video warns, but if humans make more, the machine is still the right choice. We are progressing toward that point, with more and more work being done by cockpit machines and devices. Before pilots are eliminated entirely, there will be a class of machines that warns the pilot of his errors in real time to prevent disasters, or even risky or sloppy flying, while leaving ultimate and final control with the pilot. At some point after that, allowing the pilot to remain in ultimate control will be regarded as the riskier, if not actually reckless, option. We will surely get to that point before the turn of the century, if not much sooner.

The moral and ethical decisions are seriously lacking in most AI conversations.

Take the hypothetical car in the video, with the passenger sitting in the back.

If a police officer were to pull the car over for speeding, who would get the ticket? The passenger? The programmer? The manufacturer? Would the cop even write the ticket?

But, you say, the car would never speed so it would never get pulled over for speeding. To that, well, I never speed, so why do I get pulled over for speeding and the AI gets the pass?

Further, imagine that the AI vehicle is driving through a neighborhood and a child darts in front of the car. The AI quickly calculates that the brakes will not stop the car in time to miss the child. To miss the child, it must turn and go to the sidewalk. On the sidewalk, an elderly couple is taking a walk. If the AI chooses the sidewalk, it will not miss the elderly couple. Which person(s) will the AI choose to kill? Or if you like, which will the AI allow to survive? Does the AI make that decision or the programmer?
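Speaking as someone who writes software: whatever the answer is, it ends up encoded somewhere long before the emergency. Here is a deliberately crude sketch, with hypothetical names and an arbitrary scoring rule of my own invention (no real system publicly works this way), only to show that the “choice” lives in a rule a programmer or policy-maker wrote in advance.

```python
# Hypothetical sketch only: names, numbers, and the scoring rule are invented
# to illustrate that the "decision" is a policy someone encoded ahead of time.
from dataclasses import dataclass

@dataclass
class Maneuver:
    description: str
    people_at_risk: int

def choose_maneuver(options):
    # The entire ethical question is compressed into this one ranking rule,
    # written by a programmer long before the emergency ever occurs.
    return min(options, key=lambda m: m.people_at_risk)

emergency = [
    Maneuver("brake hard, stay in lane", people_at_risk=1),   # the child
    Maneuver("swerve onto the sidewalk", people_at_risk=2),   # the elderly couple
]
print(choose_maneuver(emergency).description)  # -> "brake hard, stay in lane"
```

Swap in a different scoring rule and the car “chooses” differently, which is exactly why the question belongs to the programmer (and the regulator), not to the machine.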

“In any case, more concerning than rogue AI in my estimation is the problem of dealing with the ever-increasing percentage of humanity which really has no “application”, nothing to offer society and vice-versa.”

I hesitate to ask you for clarification because it sounds like you’ve joined ranks with the most dangerous despots in history and present day North Korea. Our society is fundamentally built on the premise that we’re all created equal which in my interpretation does not square with your estimation. Hopefully I’ve missed your point and I’m wrong in my interpretation of your estimation.

I interpreted the statement as meaning once AI takes over more and more jobs, what will be left for people without the skills to do something else? In other words, will AI create a world where there are no longer enough jobs for humans?

I remain somewhat optimistic that new jobs will replace old. For example, in 1900 something like half the population was involved in agriculture. Today that figure is about one percent. We don’t have 50% unemployment because as new technologies replaced much manual labor, new jobs arose.

Though I say “somewhat” optimistic because while previous technologies changed the type of labor the worker performed, AI is aimed at the worker itself. So the future will be… interesting… to see what new jobs come along that AI can’t do.

A lot of machine “learning” is based on neural networks, but the results of this learning are, at least currently, impossible to audit or unravel. There are interesting experiments showing that minor alterations to traffic signs, for example, can render them unreadable to current sign-recognition systems or even cause them to “see” a different sign. Rather innocuous-looking stickers can foil camera-based perception systems. Humans and neural networks rely on entirely different means to perceive the world, and we certainly don’t want to end up at a point where a strategically placed dot pattern on a billboard under a final approach causes an automated airliner to go around, or worse.

The “if it acts like a dog” approach of testing automated piloting systems by submitting them to the same tests human pilots must pass to get a license is a fallacy. I have no doubt that autopilots can fly better than pilots, since they lack the slow biological processing between perception and action. But pilots are required to demonstrate only a small percentage of their skills during a check ride, on the assumption that they will be able to use their systems knowledge to extrapolate if something requiring extra skill happens to them. There is no basis for assuming that a machine learning system will be able to deal with a situation it has never encountered before, and much of our current certification system is not built around multiple failures happening at once (although they do, as the Qantas A380 incident demonstrated).

As impressive as Garmin Autoland is, it is designed as a measure of last resort to avoid an even worse outcome (crashing without a pilot) and, as far as I know, it assumes there is no other failure beyond the pilot being unable to perform his duties. Should the pilot pass out because, say, an engine failure excites him to the point of a heart attack, the Garmin system won’t be as helpful as another pilot. The much-cited dogfight scenario was based on the computer opponent having total situational awareness, something not usually afforded to human pilots, who have to fly airplanes whose actual status they largely don’t know for lack of sensors, external cameras, and so on.
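To make the sticker point concrete, here is a minimal, self-contained sketch using synthetic data and a plain logistic-regression “sign classifier” (my own toy setup, not any real perception stack). For a linear model, nudging every input feature slightly along sign(w) shifts the decision score by eps times the L1 norm of w, so in high dimensions a per-feature change much smaller than the natural feature noise can flip the prediction.

```python
# Toy illustration only: synthetic "sign" features and a linear classifier,
# not any real camera or recognition system.
import numpy as np

rng = np.random.default_rng(0)
d = 200  # number of input features (think "pixels")

# Two weakly separated classes: "stop sign" (label 1) vs "other" (label 0).
X = np.vstack([rng.normal(+0.1, 1.0, (500, d)),
               rng.normal(-0.1, 1.0, (500, d))])
y = np.hstack([np.ones(500), np.zeros(500)])

# Fit logistic regression with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def score(v):
    return v @ w + b

# Pick a typical (median-score) correctly classified "stop sign" example.
stop_signs = X[:500]
x = stop_signs[np.argsort(score(stop_signs))[250]]

# Smallest uniform per-feature nudge (plus a 5% margin) that flips it, since
# score(x - eps*sign(w)) = score(x) - eps * ||w||_1 for a linear model.
eps = 1.05 * score(x) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(w)

print(f"per-feature change: {eps:.3f} (feature noise std is 1.0)")
print("clean prediction:    ", int(score(x) > 0))      # 1 ("stop sign")
print("perturbed prediction:", int(score(x_adv) > 0))  # 0, by construction
```

The per-feature change is tiny compared with the natural variation in the features, yet the label flips; a deep network is not linear, but gradient-based attacks exploit the same accumulation effect, which is why small stickers and dot patterns can matter.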

Paul, you’ve aroused the philosopher/technologist community. Your article and the comments about it are most interesting. As for me, a former Gulfstreamer rooted in dusty old Honeywells, it’s all moving very fast and well beyond my scope of reference. These days I am content to hand-prop my trusty steed to go up and look down on the thoroughly Google-surveyed earth and enjoy God’s creation as it has always been, despite man’s constant effort to somehow make it “better”.

Ditto Alex right down to being “a former Gulfstreamer rooted in dusty old Honeywells, it’s all moving very fast and well beyond my scope of reference” and who “enjoys God’s creation as it has always been” from a small airplane.

Those dusty old Honeywells did however represent the cutting edge at one time, and I well remember the very day when I had to make the conscious decision, and a conscious decision it was, to move on from the comfort of the steam gauge and the VOR to a new architecture which required programming at least a basic portion of the flight prior to engine start. I’m audacious enough to believe I could still be up to the task of deciding and even looking forward to moving on to whatever comes next.

My son captains one of those new fully-Garmin Citations at NetJets. What he’s told me about that has left me in the dust. ASCB is alive and well but now it’s an information superhighway compared to what we learned back in the day.

Fantastic article. The largest artificial neural networks in existence today (late 2020) contain on the order of 100 billion parameters, and they run on hot racks of GPUs weighing thousands of pounds and consuming 20 to 100 kW of electric power. Our brains contain roughly 86 billion neurons linked by something like 100 trillion synaptic connections. In electronics, increases in component count proceed along an exponential curve, but getting all that compute power into a low-power, lightweight, flight-ready package represents some pretty grand technological challenges. As an engineer, I believe solving them will take a while.
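Just to put rough numbers on that exponential curve (assumed, illustrative figures only, not a forecast): a thousand-fold gap is about ten doublings, so the timescale depends entirely on how fast you think the doubling period is.

```python
# Back-of-the-envelope only: assumed, illustrative numbers, not a prediction.
import math

current_scale = 1e11     # order of magnitude of today's largest networks (assumed)
brain_scale = 1e14       # order of magnitude of brain synaptic connections (assumed)
doubling_years = 2.0     # assumed Moore's-law-style doubling period

doublings = math.log2(brain_scale / current_scale)
print(f"{brain_scale / current_scale:,.0f}x gap = {doublings:.1f} doublings "
      f"≈ {doublings * doubling_years:.0f} years at one doubling every "
      f"{doubling_years:g} years")
# -> 1,000x gap = 10.0 doublings ≈ 20 years at one doubling every 2 years
```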

Secondly, deep machine learning of the type that the computer scientists feel can mimic or surpass the human brain in learning tasks exhibits great skill when tested within the range of the training data. The challenge arises outside the range of training data, where the machine learning algorithm is forced to extrapolate. The human pilot relies on experience, often with good outcomes, sometimes not. How will HAL deal with the unanticipated? I suspect in a similar fashion to its organic counterparts.
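A toy version of that in-range versus out-of-range point, with synthetic data and a deliberately over-flexible model (my own contrived setup, no claim about any particular avionics system): the fit looks excellent anywhere near the training data and falls apart the moment it has to extrapolate.

```python
# Contrived illustration only: a flexible model fit on [0, 1] interpolates well
# and extrapolates badly.
import numpy as np

rng = np.random.default_rng(1)
truth = np.sin                               # the "real" relationship

x_train = rng.uniform(0.0, 1.0, 40)          # training data only covers [0, 1]
y_train = truth(x_train) + rng.normal(0.0, 0.05, x_train.size)

model = np.poly1d(np.polyfit(x_train, y_train, deg=7))  # over-flexible fit

for x in (0.25, 0.75, 1.5, 3.0):             # inside, inside, outside, far outside
    print(f"x = {x:>4}:  model = {model(x):>10.3f}   truth = {truth(x):>6.3f}")
# Inside [0, 1] the two columns agree closely; outside that range the model's
# answer is typically nowhere near the truth, because it is extrapolating.
```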

The campy, way-ahead-of-its-time 1970s sci-fi flick “Dark Star” provides an entertaining example of where I’m coming from.

“Otto” won’t need a layover nor will it have the opportunity to get drunk then report for the next flight hungover.

First we need to agree on terms. One accepted machine-design definition states that ‘thinking is the manipulation of memory’. We think when we analyze memories in relation to other memories (I guess when we sit and ‘think about something’ that happened) and when we apply memories to current input (the aircraft is stalling, I apply training memories to solve the problem). So the poll question is badly formed: of course a computer can “think like a pilot”; all it has to do is manipulate memory. So that’s a definite YES, but not to the most pertinent question, which is: what if something happens that doesn’t fit the stash of training/memories we can apply? These are the so-called ‘corner cases’ other commenters raise.

They require some specific design to define the handling of unexpected situations, neglect of which has famously led to fatalities in the ‘smart’ aircraft systems now in use. The Airbus A400M crash in Seville is an example of really bad embedded-control initialization and exception handling. Until we can get those foundational aspects right, piling AI of any kind on top of them will do little good. The core of the design, including power-on self test and runtime exception handling, must be robust or the whole design is brittle. And how do we test and verify that the design is robust? There are methods and tools but few universally applied standards: witness the 737 MAX debacle. It’s a messy problem with no clean solutions. That doesn’t make it insoluble, just very challenging. I am an industrial embedded-control systems designer and forensic engineer, and it still sometimes surprises me how many unexpected ways things can fail.

We’ll get there first with self-driving cars, where the bar is lower: 37,000 motor vehicle fatalities in 34,000 incidents (in 2016). At least computers don’t get drunk or fall asleep, and driving is perhaps a simpler 2D problem versus 3D in the air. But look at the SpaceX automated docking with the ISS. Solutions to some of this may be outside the scope of our current experience, just as many of us didn’t imagine the internet or cell phones… or crossing the Atlantic nonstop.
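Here is a deliberately tiny sketch of what I mean by the foundation, with hypothetical names and limits pulled out of the air (nothing like a certified design): the power-on self test rejects bad hardware before the loop ever starts, and the runtime exception handling degrades to a known-safe output instead of crashing the controller.

```python
# Hypothetical, toy-scale sketch: names, limits, and the control law are
# invented for illustration and bear no resemblance to a certified system.
import math

def power_on_self_test(sensors):
    """Before the control loop ever starts, reject sensors that return junk."""
    return [name for name, read in sensors.items() if not math.isfinite(read())]

def control_step(read_altitude, target_alt, last_good_cmd):
    """One loop iteration: validate the input, compute a command, degrade safely."""
    try:
        alt = read_altitude()
        if not math.isfinite(alt) or not (-1000.0 < alt < 60000.0):
            raise ValueError(f"altitude implausible: {alt}")
        return 0.01 * (target_alt - alt)          # toy proportional law
    except Exception as err:
        # Unexpected input: hold the last good command rather than crash.
        print(f"  handled exception, holding last command ({err})")
        return last_good_cmd

# Simulated altimeter that fails (returns NaN) on its third sample.
samples = iter([10000.0, 10010.0, float("nan"), 10020.0])

def altimeter():
    return next(samples)

print("POST failures:", power_on_self_test({"altimeter": lambda: 10000.0}) or "none")

cmd = 0.0
for _ in range(4):
    cmd = control_step(altimeter, target_alt=11000.0, last_good_cmd=cmd)
    print(f"pitch command: {cmd:+7.2f}")
```

Trivial as it is, that is the layer that has to be right before any “AI” on top of it can be trusted; the learning part never even sees the bad sample.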

I like this!

Neurons rule!

Can a computer think like a pilot?

Definition of Think by Merriam-Webster:
Think…transitive verb. 1 : to form or have in the mind. 2 : to have as an intention… thought to return early. 3a : to have as an …. think it’s so. b : to regard as or consider… think the rule unfair.

Dictionary.com
Verb (used without object), thought, think·ing.
To have a conscious mind, to some extent of reasoning, remembering experiences, making rational decisions, etc.
To employ one’s mind rationally and objectively in evaluating or dealing with a given situation:
Think carefully before you begin.
Verb (used with object), thought, think·ing.
to have or form in the mind as an idea, conception, etc.
to have or form in the mind in order to understand, know, or remember something else:
Romantic comedy is all about chemistry: think Tracy and Hepburn. Can’t guess? Here’s a hint: think 19th century.
Adjective
Of or relating to thinking or thought.
Informal. stimulating or challenging to the intellect or mind:
The think book of the year.
Compare think piece.

I love Paul’s thought-provoking title asking this question: Can A Computer Think Like A Pilot? It’s A Trivial Question. My answer is no.

Taking into consideration what the word think means, a computer cannot really think. The human mind, at any given time, is the sum total of all of life’s experiences. Whatever the mind has experienced, such as reading, study, reflection, and analysis, including what all of our combined senses gathered experientially, in addition to sharing and receiving other human minds’ experiences, is something a computer cannot do. This accumulation takes place even in the womb. Mothers and fathers can literally connect with the developing baby’s mind and body in the womb with nothing more than sound or an external caress.

A computer cannot have an intention and then change its mind without being stimulated by data to force the change. It can only react to information going into it. That is not thinking.

A computer has a manufacture date…sort of a birthday. At that point, the sum total of its parts has to be started with some sort of external programming. It has no internal instinct, no sense of itself during manufacturing, no bent or particular inquisitiveness to help its ability to excel in one particular direction or another. It has to be programmed, to be directed toward a specific designed function. Eventually, it too can become a sum of its total experiences but cannot recognize a need for a behavior change.

Staying within the confines of the question “can a computer THINK like a pilot”, using the term think correctly, it cannot.

It can react to programmed information. Or it can accumulate data and make a decision in reaction to an algorithmic accumulation of information, manipulating the controls properly for that moment. But as has been noted by many far smarter than me, even after accumulating data both good and bad, AI cannot make a behavioral change that defies that sum total. In many flying cases, pilots make the right decision when all the accumulated data suggests the proper reaction should be far different.

Sully and Skiles made the right decisions, converting those decisions to the right actions and executing perfectly. Yet there are some who have stated they could have made an airport had they reacted sooner, suggesting AI could have executed even better. Maybe even a night landing in the water, IFR.

But would AI have walked the passenger compartment making sure all of the passengers were off the airplane? Would AI have provided tangible encouragement to those passengers who might have panicked without the crew’s considerable nobility, exercising compassion, diligence, and a sense of duty that benefited everyone aboard and contributed to the overall safety of the entire event? Absolutely not.

Therefore, a computer cannot think like a pilot. It can only react without regard for human needs. It can only react to the needs of the machine. To me, that is not enough.

Having recently retired from a 40-year military/civilian career that ran from flying off aircraft carriers to piloting 787s, I have been fortunate to have had only a few serious mechanical issues or close calls involving weather or near midairs. I never had an engine fail or catch fire in flight. I am convinced, however, that my human intervention, or that of a fellow crew member, on several occasions prevented the loss of an aircraft and potential loss of life. No artificial intelligence would be capable of doing what the crews of United 232 or Qantas 32 did to save their airplanes and their passengers’ lives.

There are times that require hands on controls, when all autopilots have failed, all electrics and/or all hydraulics are out and the airplane is essentially a falling paperweight needing guidance in the direction of the crash. Al Haynes and Denny Fitch are the heroes no computer can ever be.

“But would AI have walked the passenger compartment making sure all of the passengers were off the airplane? Would AI have provided tangible encouragement to those passengers who might have panicked without the crew’s considerable nobility, exercising compassion, diligence, and a sense of duty that benefited everyone aboard and contributed to the overall safety of the entire event? Absolutely not.”

Politely, that’s what flight attendants do.

“Stephen Hawking has said, ‘The development of full AI could spell the end of the human race.’ Elon Musk has tweeted that AI is a greater threat to humans than nuclear weapons. When extremely intelligent people are concerned about the threat of AI, one can’t help but wonder what’s in store for humanity.” Excerpt from The Human Brain vs. Computers by Fritz van Paasschen of Thrive Global. Good article.

medium.com/thrive-global/the-human-brain-vs-computers-5880cb156541#:~:text=Brains%20are%20also%20about%20100%2C000,or%20minus%20a%20few%20decades.&text=Computer%20processors%20are%20measured%20in%20gigahertz%3A%20billions%20of%20cycles%20per%20second.

My impression is that human nature will slow down or prevent computer-system autonomy in commercial aviation where passengers are involved, but not cargo, at least at first. However, the idea will continue to crawl forward, maybe to the end of the century, before partial AI operations are made possible.