Guest Blog: The Case For Keeping Two Pilots

Jeremiah FELIX is a pilot for a Canadian airline with concerns about the apparent move toward single-pilot airline operations.


This is a companion discussion topic for the original entry at https://www.avweb.com/insider/the-case-for-keeping-two-pilots

Case against: two pilots keep pilot wages down. A single pilot will be able to claim at least 25% more for the added responsibility…
And that Air France plane that crashed actually had three highly qualified pilots on the flight deck. Arguably one pilot who knew what he was doing would have been better than three, with two not realising the third was pulling back on the stick with all his might…


So if single pilot is ok, when will single engine be ok?


I have noticed no mention of SMS in any of the airline single-pilot operations discussions I have read. What would a FRAT score be in this situation? Not sure on the "qualified pilots" in the Air France crash mentioned. I do agree with the author's reasoning for keeping 2 pilots in the cockpit for airline ops. Single-engine airline ops would require Congress in the US to change that rule, since they mandated at least 2-engine planes for airline ops in the 1920s. Considering how "fast" they work, I'm not worried about that happening in my lifetime.


It's tiring to hear the same drumbeat of "automation reduces workload." This is only partially true.

The main reason for the development of FMS systems, etc., was not to reduce pilot workload, but to increase fuel and operational efficiency. It does reduce workload at times, and in a perfect world would allow for drastic changes like single-pilot operation. In a perfect world one could program an entire flight into the FMS, take off, engage the autopilot in LNAV and VNAV, and sail on to your destination hands-off. But we don't live in a perfect world. Here's where it is appropriate to use the word "never." I have NEVER flown a flight where the planned route, planned altitude, planned approach, and planned runway all remained unchanged.

There are times when automation actually increases workload, like during arrival and approach. One has to decide what level of automation is appropriate to the situation, use the other pilot to make changes (especially at low altitude), watch for traffic, and make the correct changes at the correct time. "Downloading" automation to an appropriate level has become an important part of operations in high-traffic environments. This all becomes second nature to experienced and proficient professional crews. Take one pilot out of the picture and "the automation" will become overwhelming.


Yours is an excellent comment, made I am guessing by someone who has personal experience of automation failures.
One should first ask, "What is the advantage of a single-pilot operation?" The knee-jerk answer would be to save a pilot's salary. That is a nonsensical argument when one considers the cost of an accident caused by an automation failure. The Qantas Airbus A380 incident should be a wake-up call for all those thinking that automation can do a pilot's job. It took five guys struggling to get that aircraft back on the ground.

What should always be understood is that automation, computers, and AI can only react to situations that a computer engineer can imagine happening. Sullenberger's decision to ditch in the Hudson was the result of an unimaginable collision with multiple geese. If the situation can't be imagined, then it can't be programmed into the system. AI is not a magical solution, because as far as anyone can forecast at the moment, a computer system can only follow a logical progression to arrive at a solution for a situation. Far from decreasing the workload, when things start going wrong the Flight Management System can INCREASE the workload tremendously.

The Air France accident had only two pilots in the cockpit, and they quite definitely demonstrated that they were NOT highly skilled. They took what should have been an annoyance to a catastrophic conclusion.

The recent loss of an F-35 with its highly automated cockpit should be a heads-up for those who believe that automation is the answer. That accident will probably be written off as pilot error, but from what has been published, it was pilot error as a result of the failure of cockpit instrumentation.

Flight time with a highly experienced command pilot and a less experienced copilot grows that copilot in the knowledge, operation, and 'intangibles' that come from long in-cockpit experience. Especially from the wide variety of cockpit crew 'mixes/mixing' that occurs routinely in commercial aviation.

Sooo… how would an AI system provide this human-related training to any single pilot entering the cockpit?

Would an AI flight system potentially try to 'fight' the human pilot(s) when the unexpected happens?

Also, each aircraft and each flight has subtle peculiarities that require unique 'situational awareness' [intuition?] of the unpredictable/unexpected elements of flight. Sounds, vibrations, operations, etc… that are second nature to experienced pilots.

Also… ever try to have a verbal conversation with an AI chat-bot when you simply can't make yourself understood? Imagine AI trying to understand ATC, air-to-air, and other communications that may be odd, indistinct, or scrambled.

SAE has a long list of 'problems to be solved with AI driving vehicles of all types'. It is scary and profound… and helps you realize what a commonly skilled human is vastly capable of under extraordinary circumstances.

Then there is the utterly human element of aircraft command over crew and passengers and cargo and whatever else unexpectedly happens… in flight… and we have all been there.

I suspect that corporate and short-haul single-pilot flights… which are starting to be routine… might benefit the most.

I wonder how insurance companies… responsible for the large payouts in the aftermath of aviation catastrophes… are responding to these emerging technologies?

I'm afraid your understanding of AI is somewhat antiquated, Navyjock. The (somewhat scary) difference between "automation" and "artificial intelligence" is the latter's ability to synthesize novel responses unanticipated by its "computer engineers".

I fear that the airlines are only a few years away from having AI in the cockpit, with that extended timeline due only to bureaucracy, not technology. At that point, every left seat will contain an experienced Captain, monitoring the decisions of the onboard AI in collaboration with its counterparts in other airliners and the tower. In the right seat will be the proverbial dog.

When was the last time you saw a caboose?


Iā€™ll bet the passengers who lived through potential disasters were grateful for the second pilot in the cockpit, especially when they had to take over control.

This is why there are thousands of self-driving cars on the road.

Automation will ultimately replace the second pilot. It's only a matter of when. Automation is now capable of handling United 232-type failures, and in the event of pilot incapacitation it simply lands the plane, something it's been capable of for over 50 years. Even "Captain Kirk" took a fully autonomous ride into space in a vehicle that had NO pilot.

Aviatrexx, your reply leads me to believe that you are comfortable with AI actually controlling and/or making decisions. You are quite correct in that my appreciation or understanding of AI may well be antiquated. I think the negation of your view of the future can be found in situations that no one could have forecast. The Qantas situation is an excellent example, in that communication between various bits and pieces such as the engines, flaps, and so on to the 'brain' was interrupted. I don't think that any amount of AI would be able to figure out a solution to the many problems that the crew coped with if, due to physical damage, communication from the broken bit was not there.

I concede that my view of AI may well be antiquated, but I have been flying for 67 years military/airline/general aviation and can think of occasions, particularly in the military, where I had to recover from a situation where I can't imagine how AI would have coped. One unfortunate example could be keeping an engine running that was on fire, in order to attempt to land. Presumably AI would follow the procedure, shut the engine down, shoot the extinguisher to it, and leave me sitting there. At the time I was unable to eject due to other factors.

I think this is a discussion where there are those better educated than I in computer development who are willing to allow a computer (AI) to make rather important decisions for them which might be irreversible. To me and other aviation greybeards, a computer (flight management system) is a very useful tool, a screwdriver or hammer, but not to be trusted completely.

I will concede that conceivably automation could have recovered the United DC-10 incident, BUT ONLY if all of the information concerning the damage had an uninterrupted flow back to that automation 'brain'. In the case of the Qantas A380, the communication was interrupted to the extent that after landing, one of the engines could not be shut down by any means from the cockpit, including pulling the fire handle or shutting off the fuel.
The two incidents, United and Qantas, are excellent examples of experienced pilots saving the day.

Bottom line… the computer has no dog in the race, and couldn't care less whether the aircraft and passengers are saved or not… perhaps somewhat like equipping the cockpit with ejection seats.

All 'self-driving' cars REQUIRE that there be a functional driver in the driver's seat to take immediate control… in the face of the long list of unexpected failures and encounters. The laws and insurance companies have a lot to say about this… blaming AI failures for injury/fatal accidents is a pretty sketchy excuse for all involved.

True self-driving vehicles are all experimental in nature. As testing evolves the AI technology, it is 'peeling away' the hundreds of thousands of situational layers of daily traffic… that would normally/routinely be encountered and solved by experienced drivers.

SAE calls these yet-to-be-discovered issues 'unsettled topics'… the list is pretty eye-opening.


I figure the shift to single-pilot operations will be a slow process, likely taking decades and tied to fleet upgrades to spread out the costs. To pull it off, airlines will need to strike a good balance between advancing technology, keeping safety a top priority, managing finances responsibly, and building public trust with clear and convincing programs.

Profitability Guesstimate (ChatGPT):

  • Short-Term (10–15 years): Chances of turning a profit are pretty low (~20–30%) due to high upfront costs, pushback from regulators and passengers, and the challenge of changing long-standing practices.
  • Medium-Term (15–30 years): There's a fair chance (~50%) as technology improves, rules catch up, and airlines carefully test the concept on specific routes.
  • Long-Term (30+ years): Profits become much more likely (~70%) if airlines can address safety concerns, earn passenger confidence, and realize big savings by cutting down on cockpit crews.
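Purely as an illustration, the guesstimate brackets above can be encoded as a tiny lookup so the trend over time is easy to eyeball. Everything here is assumed: the function name is made up, the figures are the poster's rough estimates (not data), and midpoints are used where a range was quoted (e.g. 20–30% becomes 0.25).

```python
# Hypothetical sketch only: the brackets and percentages come from the
# ChatGPT guesstimate above, with midpoints assumed for quoted ranges.

def profit_probability(years_out: float) -> float:
    """Rough guesstimated chance that single-pilot ops turn a profit."""
    if years_out < 10:
        return 0.0      # transition not contemplated this soon
    if years_out <= 15:
        return 0.25     # short-term: ~20-30%
    if years_out <= 30:
        return 0.50     # medium-term: ~50%
    return 0.70         # long-term: ~70%

for y in (12, 20, 40):
    print(f"{y:>2} years out: ~{profit_probability(y):.0%}")
```

The point of the sketch is just that the claimed odds are a step function of time horizon, not a forecast anyone should trade on.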

In the end, the success of single-pilot operations will rely on a slow, careful rollout that prioritizes safety and wins public trust, rather than rushing to save money at the expense of confidence or reliability.

I don't see how you can read "The (somewhat scary) difference between 'automation' and 'artificial intelligence'…" and "I fear that the airlines are only a few years away…" and conclude that I am "comfortable with AI actually controlling and/or making decisions". I'm not happy with the self-driving pizza delivery 'bots plying the streets of some cities already.

However, I do know that the big difference between "automation" and "AI" is that the latter is, to grossly oversimplify the matter, self-programmed. No one will have to come up with all the possible scenarios that could occur in order to program "Capt. Alfa India" to respond properly. In effect, it will learn from the experiences of all the greybeard Captains out there, the same as any diligent FO. The difference is that it will be able to "fly" with all of them, learn from their experiences (good and bad), accumulate the wisdom of all of them, and apply that trove to whatever circumstance it encounters. Just like competent humans, only faster and more completely.

I saw "computerization" applied in many inappropriate ways back in the early '60s (remember "computer dating"?) and realize that any shiny new technology will be mis-applied far more often in its early years. I'm counting on not being around when "AIcar v1.0" hits the streets, but I'm afraid I'm cutting it tight. Once they are ubiquitous, "Capt. India" won't be far behind.

If colliding with multiple geese is unimaginable, one must not have much of an imagination. Post-incident, Sully said ditching into the Hudson was something the design engineers had never foreseen. Unbeknownst to him, they had evaluated the hull for the impact loads expected during a ditching and found it adequate.

To some extent you are correct: we are briefed on the best way to ditch. The government even supplies life jackets. Designers do not design an aircraft to survive a ditching; that is a function of the superiority of modern designs.
If you are going to base a discussion to an extent on semantics concerning unimaginable goose attacks, you could also have mentioned American Airlines losing an Electra due to seagulls being ingested. It ditched into Boston Harbor in about 1965. I think everyone survived in that case.

Obviously the design was not adequate, in that there was severe damage to the rear fuselage. A flight attendant was very badly injured when the floor came up. I think a baggage door was torn off, as well as an engine. Consider that it was an immaculate ditching in ideal conditions: smooth water, no ice, no waves.