Most of the recent attention to risks arising from the use of digital technologies has focused on security, including concerns over data breaches and Russian election hacking; privacy, as revelations about the misuse of Facebook data have multiplied; and AI, as we have become aware of life-or-death decisions increasingly being placed in the hands of bots and robots, for example in autonomous vehicles and weapons.
Yet there are risks in other technologies that we regard as mature and benevolent. Consider, for example, the software that assists pilots in guiding, stabilizing, and landing airplanes. These risks have recently become apparent in the case of the Boeing 737 Max: two crashes with many fatalities, one in Indonesia in October 2018, the other in Ethiopia in early March 2019.
Data from flight recorders and recordings of conversations between pilots and control tower personnel suggest that the ultimate cause of both crashes was a conflict in sensor readings of the planes’ angle of attack. This triggered an automated safety system, MCAS (the Maneuvering Characteristics Augmentation System), which had been programmed to push the plane’s nose down to avoid stalls when the plane makes steep turns under manual control. Pilots noticed that this was being done erroneously and attempted to pull the plane up, but the automated system continued to push the nose down, resulting in a fatal 5-minute tug-of-war. This case illustrates many aspects of software’s role both in ensuring safety and in imperiling life. I shall first discuss issues related to design and engineering, then focus on political and commercial pressures.
The 737 Max is a patchwork plane that mixes modern technology with controls and procedures dating back to the 1960s. Even worse, there are inconsistencies between the design as documented and the plane as built. Inadequate testing of MCAS revealed sloppiness in safety engineering, exacerbated by the fact that there are too few simulators, and those that exist do not portray actual behaviours with sufficient accuracy. Finally, in the fatal tug-of-war, pilots could not react fast enough.
Also, because of political and economic pressures to update the plane quickly, MCAS was introduced without sufficient scrutiny by the U.S. Federal Aviation Administration (FAA). Boeing began work on an improved version of the software after the first crash, but this was delayed in part by the U.S. government shutdown. Many complaints by U.S. pilots about the way the 737 Max performed in flight had been registered in a federal database, but were ignored. Pilots had also complained vigorously to Boeing executives after the first crash.
This tragic example can be added to those discussed in Chapter 8 of Computers and Society: Modern Perspectives: the fatal radiation overdoses delivered to individuals by the Therac-25 radiation therapy machine, and the role of digital technologies in the Three Mile Island and Chernobyl nuclear accidents. These cases illustrate how safety depends upon properly designed and adequately tested technology; feedback systems that allow problems to be registered and acted upon; good human-computer interfaces; and human oversight and training. It is wise to keep such examples in mind as we move towards the adoption of autonomous vehicles (too hastily, in my view, although I believe that ultimately most vehicles will be driven by computers), and as we increase the use of military drones and autonomous weapons, thereby likely triggering a new arms race in such weapons.
FOR THINKING AND DISCUSSION
What lessons can be drawn from the Boeing disaster for procedures and decisions with respect to the design and commercial introduction of autonomous vehicles?