Somebody remind me again how this “self-drive car” thing is supposed to help us, save lives, end Glueball Wormening and bring Peace To Mankind, etc. etc. etc.? Especially when we have crap like this happening to these “A.I.” systems?
The investigation into a fatal plane crash in Ethiopia has zeroed in on suspicion that a faulty sensor triggered an automated anti-stall system, sending the plane into a dive.
The Federal Aviation Administration received black box flight data from Ethiopian Airlines Flight 302 on Thursday, indicating that the MCAS anti-stall system was activated shortly before the crash.
The same system was implicated in the crash of another Boeing 737 Max in October in Indonesia, Lion Air Flight 610.
The MCAS is designed to push the nose of the plane down when sensors indicate that the ‘angle of attack’ is too steep, and the plane is in danger of stalling – but investigators are now probing whether a faulty sensor activated the system during a normal climb, sources say.
All this, especially after we hear that a.) the “safety” feature (i.e. pilot override) was available as an (expensive) option on the system, and b.) the pilots of said doomed airliners appear not to have had, shall we say, adequate training on the system.
Don’t even get me started on cock-ups like faulty reservation systems, which have been around since at least the 1970s, are among the simplest programs in existence, and still fall over occasionally. (Adding features which screw paying customers over*, however, doesn’t seem to have been a problem at all.)
Color me skeptical on all this stuff. Hell, I don’t even care for automatic gearboxes, let alone “self-drive” systems. “Faulty sensor”, my pale African-American ass.
*British Airways, among others, has a cute little sub-routine when you book two or more tickets at a time that automatically ensures that none of your booked seats are next to each other. So guess what? You have to go back into the system and pay extra for that “privilege” of sitting next to your wife or kids. That automatic program, I’ll wager, works perfectly every time.
I’ve been a computer programmer for FAR too long to trust my ultimate safety to either computers OR programmers. Especially since the concepts of Fail-Safe and Test of Stupidity aren’t taught anymore.
For example, in the aircraft that lawn-darted because the plane thought it was stalling: a stall requires BOTH a high angle of attack AND insufficient airspeed. If you detect the angle, check the airspeed. If the airspeed is high enough, and especially if the plane is accelerating, you’re not going to stall, so leave the effing plane alone. There, a one-line fix. I’ll expect Boeing to send me my consulting fee of $300,000 by Friday.
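The commenter’s “one-line fix” can be sketched as a simple cross-check. This is only an illustration of the idea, not Boeing’s actual logic; the threshold values here are invented, since real stall margins depend on weight, flap configuration, and altitude.

```python
STALL_AOA_DEG = 14.0       # hypothetical angle-of-attack threshold
SAFE_AIRSPEED_KTS = 250.0  # hypothetical airspeed above which a stall is implausible

def stall_warning(aoa_deg: float, airspeed_kts: float, accelerating: bool) -> bool:
    """Only trust a high-AOA reading if the airspeed also supports a stall."""
    if aoa_deg < STALL_AOA_DEG:
        return False
    # High AOA reported, but plenty of airspeed and accelerating:
    # treat the sensor reading as suspect and leave the plane alone.
    if airspeed_kts >= SAFE_AIRSPEED_KTS and accelerating:
        return False
    return True
```

With a stuck-high AOA sensor but healthy, increasing airspeed, this check would decline to trigger the nose-down response.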
I’d be interested in how many crashes/deaths resulted from 737s stalling and the crew not recovering vs the 2 crashes/300+ deaths from this “safety” feature.
I tend to notice incidents involving 737s, as my first platform as an aircrew member was a 737 variant (USAF T-43). I don’t recall any accidents caused by a stall while in flight (as opposed to on final approach or during climb-out after takeoff).
“they spent so much time considering if they could, they didn’t consider if they should”
“…my pale African-American ass.”
Now THAT’s uniquely funny.
I was reading the other day, can’t recall where, or I would quote it exactly.
The writer was positing that there was a difference in techniques that made these crashes happen. His theory was that “Americans” love flying and will hand-fly the aircraft up to ten thousand feet and reluctantly engage the autopilot. If anything hiccups, the first action is to disengage the autopilot and hand-fly the aircraft while figuring out what the problem is.
“Asians”, on the other hand, “Fly the computer, not the airplane” and engage the autopilot as soon as the wheels are in the well. He mentioned seeing them engage it at as little as 200 feet. They also have a different idea of cockpit coordination, as in “I know what the captain told me to do is wrong, but he’s in charge and the one that signed for the airplane”, as opposed to the “Americans’” philosophy of “I know he’s the captain, but if I let him screw up, I’m getting to the accident the same time he is”.
Just my $0.02; I was spoiled by having all my Navy flying done by one or two pilots being supervised by an enlisted man. (Flight Engineer or Crew Chief)
Americans are big on Crew Resource Management. But there is definitely a cultural component to it, and there can be culture clashes.
I read the same article, and even without that info I think I would avoid anything labeled “Ethiopian Airlines”. Just saying.
I’ve been in the unmanned aviation business for 25 years (scary, isn’t it). And I worry about aviation systems…much less some self-driving car with software cribbed together by a bunch of kids who think it’s an acceptable business practice to sell paying customers late-beta-test software.
They have no clue that “blue screen of death” in a moving vehicle means real, messy, and very dead victims.
A 737 already has an automated trim control system, and it’s had it for a while. A failure of MCAS is handled exactly the same as any other auto trim failure. While I’ll wait for the investigations to finish before making any final judgements, so far it appears that:
1) The pilots completely failed to realize they had a runaway trim problem.
2) Boeing did not widely disseminate the info regarding MCAS; as a result, most pilots and maintenance personnel were unaware of the implications of having broken angle-of-attack sensors.
3) The fact that the MCAS warning light was an option rather than standard is a complete WTF.
I guess you kind of expect third-world airlines to operate like third-world airlines, but FFS, Boeing was not this incompetent when Alan Mulally ran the company!
One of the major issues with this new safety system is that Boeing “forgot” to add it to the pilot documentation of the 737 MAX series, and therefore it never made it into the training exercises. It can be disabled, but if you’re the pilot of a shiny new MAX-8 and you don’t know this system exists in the first place, you aren’t going to have the time to figure out the cause, let alone how to shut it off before the plane helpfully anti-stalls itself right into the ground (which is what happened in both of these crashes).
The other major issue with the system is that it only references 2 angle-of-attack sensors, one on each wing. This means that if one of them is out of calibration, the system has no idea which of the 2 is wrong, and therefore which one to ignore. There’s a reason why the Space Shuttle contained 5 redundant computer systems, and Boeing needs to mount at least 5 total redundant angle sensors for the same reason.
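The two-sensors-can’t-vote problem can be shown in a few lines. A minimal sketch, assuming a simple median vote (real avionics voting schemes are far more involved, and the 2-degree tolerance here is made up):

```python
import statistics

def voted_aoa(readings, tolerance_deg=2.0):
    """Median-vote across redundant AOA sensors.

    With only two sensors, a disagreement is unresolvable -- there is no
    way to tell which sensor is lying, which is exactly the commenter's
    point. With three or more, the outlier simply gets outvoted.
    """
    if len(readings) < 3:
        if max(readings) - min(readings) > tolerance_deg:
            return None  # two sensors disagree: no way to pick the liar
        return sum(readings) / len(readings)
    return statistics.median(readings)
```

With readings of, say, 3.0 and 20.0 degrees the two-sensor system can only throw up its hands, while adding a third sensor reading near 3 degrees lets the median discard the stuck one.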
Mark D is also right in that the system needs to have better weighting of the various inputs and sensors, so that if the airspeed is still good, and the vertical climb rate is good (i.e. zero or positive), then a sensor claiming a stall-level angle of attack should be treated as suspect. And on top of that, the pilots in both crashes indicated on the CVR that the safety system was overriding their control inputs and forcing the nose down to counter a stall that wasn’t happening. Because of the lousy documentation, they had no idea why that was happening, all the way to the ground. What was Boeing thinking by skipping the user docs on this new feature? Over 300 people died because of that omission; there need to be criminal proceedings against whatever managers decided to leave that out.
You handle a MCAS malfunction exactly the same way you handle any other type of runaway trim malfunction. You reach over and flip a switch. Then you manage trim manually. The procedure on how to do this is one of the things a 737 pilot is *required* to have memorized.
Most likely Boeing figured that trim is trim, no new procedures are required. In all likelihood, that was overly optimistic when you combine third-rate maintenance with insufficiently trained pilots.
What are the odds our new self-driving cars will have any redundancy or overrides at all?
There are a few web posts which go into some of the details of the MCAS on the 737:
https://theaircurrent.com/aviation-safety/what-is-the-boeing-737-max-maneuvering-characteristics-augmentation-system-mcas-jt610/
As a former pilot, the thing that struck me the most about this system was this: If MCAS activates (auto-trimming nose down) and the pilot “overrides” it with the electric trim switch on the yoke (the natural and proper first response), then MCAS stops executing the nose-down trim.
BUT: after 5 seconds, if MCAS decides that the angle of attack (AOA) is still too high, it REACTIVATES, and resumes the nose-down trim action.
As a pilot, I’d be thinking:
1. Everything’s OK
2. Wait: nose-down trim is occurring, even though airspeed and rate-of-climb are OK.
3. Use the yoke trim to correct trim to where I want it.
4. OK, good. Getting back to desired flight attitude. A few seconds later…
5. Wait: nose-down trim is occurring again. I thought the autotrim switched off.
6. Correct it again, the same way…
This sequence is likely to lead to oscillations in flight attitude, airspeed, AOA, and (most painfully) altitude. There is apparently evidence in the flight data record showing just this, in the first crash at least.
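The pause-and-resume behavior described above can be sketched as a toy timeline. This is a deliberately simplified model of the reactivation logic (one tick per second, invented event labels, a stuck-high sensor), not the actual MCAS implementation:

```python
def mcas_trim_events(aoa_readings, pilot_override_at=None, threshold=14.0):
    """Toy timeline of MCAS behavior with a (possibly faulty) AOA feed.

    Each tick, MCAS trims nose-down while the reported AOA exceeds the
    threshold. A pilot's yoke-trim input pauses it, but if the reported
    AOA is still high 5 ticks later, MCAS resumes trimming nose-down.
    """
    events = []
    paused_until = -1
    for t, aoa in enumerate(aoa_readings):
        if pilot_override_at is not None and t == pilot_override_at:
            paused_until = t + 5  # MCAS backs off for ~5 seconds
            events.append((t, "pilot trims nose-up"))
            continue
        if t <= paused_until:
            continue
        if aoa > threshold:
            events.append((t, "MCAS trims nose-down"))
    return events
```

Feed it a sensor stuck at 20 degrees with a pilot override at tick 2, and the nose-down trim events stop, wait out the pause, then resume at tick 8 and keep coming: the oscillation the commenter describes.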
Sure, there are two other ways to stop MCAS autotrim: a separate disable switch, and the manual trim wheel. But the pilot has to a) BE TRAINED on these as part of the override protocol, and b) realize at #5 above that the hard disable is now necessary. I am really curious to find out the real story about the system documentation, and the pilot training. So far it looks very VERY bad.
Pretty close. The information I got was that there was a definite feedback loop between the anti-stall system and the autopilot if the autopilot was ordered to climb to cruising altitude immediately after takeoff.
That same aircraft, the day before and with a different crew, experienced the same issue. They were saved by an off-duty pilot yelling to the crew to turn off the autopilot.