Vehicle automation: Who should have ultimate control?
July 09, 2015
The Amtrak derailment and the Germanwings crash earlier this year raise a fundamental question about vehicle automation. Who or what should be in ultimate control?
Interestingly, different domains have answered this differently. In aviation, the crew is the “gold standard”: the goal of automation is to assist the crew and to fly the aircraft as well as the crew could. The crew, however, is viewed as having ultimate control, and we talk a lot about how well (more precisely, how poorly) humans do at monitoring automation. That’s why the Asiana crash had to be attributed to pilot error: no matter what the automation did or didn’t do, the pilot retains ultimate responsibility for maintaining proper airspeed.
In rail and automotive, however, the goal of automation has been the opposite: to protect the vehicle from the actions of its operator. Even the ancient mechanical lever at each red signal in the NYC subway was designed to stop the train if the operator didn’t, and the same is true of the “dead man’s switch.” Much of train automation (which, tragically, was in place only on the southbound and not the northbound tracks at the location of the Amtrak derailment) is aimed at ensuring not only that the train stops at a signal but also that it operates at the proper speed. Much of automotive software is similarly designed to aid the driver by, for example, adjusting the cruise control if the car gets too close to another vehicle.
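The train side of that idea can be boiled down to a few lines. The sketch below is purely illustrative (the function name, inputs, and thresholds are invented, and no real signaling system is this simple), but it captures the “protect the vehicle from its operator” logic: the automation stays silent until the operator’s behavior leaves the safe envelope.

# Illustrative only: a toy model of train protection, not any real system.
def supervise(speed_mph, speed_limit_mph, signal_is_red, operator_is_braking):
    """Decide whether the on-board protection system should intervene."""
    if signal_is_red and not operator_is_braking:
        return "emergency_brake"   # the classic automatic train stop
    if speed_mph > speed_limit_mph:
        return "service_brake"     # enforce the speed limit for this stretch of track
    return "no_action"             # otherwise the operator stays in control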
There are a number of reasons for this difference in strategy. One is the fundamental difference between the modes of transportation. Trains are one-dimensional; the only control is speed, and a train can always be safely stopped. For cars, that’s theoretically true as well, though not always in practice. Moreover, there’s a second dimension: in which direction is it “failsafe” to steer? There isn’t a completely safe choice, but an algorithm that gradually decelerates while going straight ahead whenever it gets sufficiently confused will likely do quite well.
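To make that concrete, here is a hedged sketch of such a fallback, again with invented names and thresholds: when the software’s confidence in its picture of the world drops below some floor, it holds the wheels straight and bleeds off speed until it stops.

# Illustrative only: a toy failsafe policy for an automated car.
def failsafe_command(confidence, speed_mps, cycle_s=0.1, decel_mps2=1.5, floor=0.6):
    """Return (steering_angle_deg, target_speed_mps) for the next control cycle."""
    if confidence < floor:
        # Confused: keep the wheels straight and decelerate gently to a halt.
        return 0.0, max(0.0, speed_mps - decel_mps2 * cycle_s)
    return None, speed_mps   # None means: defer steering to the normal planner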
However, there’s nothing even remotely similar in an airplane: you can say “fly straight and level,” but if the avionics has gotten confused enough to put the plane into an unusual attitude, it may not know how to recover (a skill that, unfortunately, is increasingly lacking in human pilots as well, as AF447 and Colgan Air have shown).
But it’s also the case that we trust pilots much more than drivers and train operators. They’re better trained, have (at least theoretically) gone through significant screening, and are much more conscious of their alertness level. So we’ve taken the view that we trust them to have ultimate control over their avionics. Unfortunately, the Germanwings incident, like EgyptAir before it, has shown that pilots can not only be impaired and less well trained than we’d like to believe; they can actually take malicious actions against their own aircraft.
So we now have to ask whether aviation needs to adopt the same view as the rail and automotive domains and give the ultimate override to the avionics to protect the aircraft from improper actions by the human. If you have overrides in both directions, one has to take priority. Which one? As engineers, most of us feel we’re better able to improve software than we are to improve the human species.
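Stated as code, the question looks deceptively simple. In the illustrative sketch below (the names and the single boolean are my own simplification; a real flight-control law is vastly more involved), the whole policy debate collapses into one parameter.

# Illustrative only: who wins when pilot and protection system disagree?
def arbitrate(pilot_cmd, protection_cmd, protection_has_priority):
    """Pick the command that actually reaches the control surfaces."""
    if protection_cmd is None:           # the protection system sees nothing wrong
        return pilot_cmd
    return protection_cmd if protection_has_priority else pilot_cmd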
But how? If there’s no “off switch” or failsafe mode, the reliability standards for such software will have to be much higher than even the very high standards we have today. And we can’t ignore the specification (requirements) side, which will have to take into account many more scenarios than it does today. How do we do these things? I believe that accomplishing them is a major challenge for our industry in the coming decade.
Richard Kenner is a co-founder and Vice President of AdaCore. He was a researcher in the Computer Science Department at New York University.