Are we proving (over and over) that partial automation is a less-than-ideal choice? Perhaps we need to make the call: is the pilot in control, or is the computer? If the pilot is in control, the computer should not be making inputs.
We could go fully automated - pilot out of the loop, monitoring only (humans kinda suck at monitoring, so the less of it, the better). Think of an always-on autopilot that does everything, gate to gate. More development is needed here, but it is a solvable problem.
We could go non-automated and stop adding ‘features’ that make inputs the pilot isn’t commanding, potentially confusing him and leading to disaster. The pilot would need to ‘fly’ more than is becoming common today, but how is that not a good thing when things go awry?
The middle ground seems to be increasingly littered with tragedy, eroded piloting skills, and confusion (“what is it doing now?”).