Where the software meets the road: Certifying the safety of self-driving cars

By S. Tucker Taft

Director of Language Research

AdaCore

December 02, 2017

How do you prove a vehicle can safely operate autonomously? You might not be able to, at least as the systems are currently envisioned. So are self-driving car efforts doomed before they really begin?

The automotive industry is facing one of its biggest challenges in decades, namely developing wholly or partially self-driving vehicles that are safe for the mass market. This represents perhaps the most complex safety certification problem ever attempted.

How do you prove to a suitable level of confidence that a system as complex as a modern automobile can safely operate autonomously, in the middle of multi-lane traffic, pedestrians, bicyclists, farm vehicles, construction vehicles, bridges, cloverleafs, stop lights, toll booths, etc.? The answer is that you might not be able to prove the desired level of safety as the systems are currently envisioned. So are self-driving automobile efforts doomed before they really begin?

The aviation industry faced the challenge of autonomous operation of aerial vehicles many years ago, and advanced autopilots are now a standard feature on every modern commercial jet – including autopilots that can take off and land the planes. Fully autonomous aerial drones have also become commonplace. Clearly there are lessons to be learned from the aviation industry, even though its challenges are substantially simpler than those of the auto industry, given the much more controlled nature of the air and ground environments in which aircraft operate. Whatever the aviation industry has done to ensure safety, the auto industry will likely have to do even more to convince a skeptical public.

What has the aviation industry done to produce their remarkable safety record? The aviation industry has embraced rigorous development, testing, and maintenance processes, with safety certification required before commercial use of products in civil airspace. Software has been an important element of aviation control systems for decades, and there are special safety-oriented processes (e.g., DO-178C) specifically aimed at reducing the likelihood of software failures. Furthermore, when some part of the software is not amenable to direct verification (for example, because it is part of a commercial off-the-shelf product (COTS)), a separate "runtime safety monitor" may be required to detect and flag what might be "hazardously misleading information" (HMI)[1].
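As a rough illustration of the idea – a minimal sketch, not the architecture described in CAST-29, with made-up signal names and a made-up tolerance – a runtime safety monitor can cross-check a value produced by an untrusted COTS path against an independently derived value and flag a large discrepancy as potentially hazardously misleading:

```python
def flags_hmi(displayed_altitude_ft: float,
              independent_altitude_ft: float,
              tolerance_ft: float = 100.0) -> bool:
    """Flag potentially hazardously misleading information (HMI): the value shown
    by the untrusted (e.g., COTS) display path disagrees with an independently
    computed value by more than the allowed tolerance (all values hypothetical)."""
    return abs(displayed_altitude_ft - independent_altitude_ft) > tolerance_ft

# Example: the display path reports 5,200 ft, the independent channel computes 4,950 ft.
assert flags_hmi(5200.0, 4950.0)        # 250 ft discrepancy -> flag as HMI
assert not flags_hmi(5200.0, 5180.0)    # within tolerance -> no flag
```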

The record of these efforts by the aviation industry to ensure safety in their software-intensive systems is impressive, with no known catastrophic commercial aircraft failure being tied directly to a software fault. Nevertheless, there have been catastrophic failures that were due to the software being misled by unexpected combinations of bad sensor data, and cases where loss of life was avoided only thanks to the expertly trained pilots on board.

Both the record of safety in the aviation industry and the cases where bad sensors caused failures (or trained pilots avoided them) indicate that the safety-oriented processes used in aviation help, but are not infallible, and to some extent rely on human backup. To create a fully self-driving car, the auto industry needs to learn from these lessons and realize that their problem is even harder – not just because of the more challenging environment in which cars operate, but also because of the possible lack of an expert driver trained and ready to take over in an emergency. To compound complexity for the auto industry, machine-learning-based (ML-based) approaches are being widely adopted to provide intelligent controllers for self-driving cars, and traditional means to verify the safety of software do not necessarily apply to machine-learning-based control systems.

So what can the auto industry do to face this safety-certification challenge as they develop self-driving automobiles? First of all, the industry and the agencies that oversee the use of highways need to recognize that software-intensive systems require disciplined approaches to development, verification, and maintenance. The number of internal states of a software-intensive system is far greater than that of a largely mechanical system, and ensuring that the system will behave within a desired safety "envelope" requires formal, systematic development and verification methods using the best software tools and languages available.

The second step for the auto industry, particularly as ML technologies take over critical parts of the control system, is to incorporate into their self-driving systems a software component that is not machine-learning based, but which continuously monitors the ML-based system to ensure it remains within its safety envelope (similar to the runtime safety monitors used in the aviation industry for certain COTS components, as mentioned above). With ML approaches, a runtime safety monitoring approach may be the only way to produce a system that can be certified as safe. Active research on runtime safety monitors is under way at several institutions[2].
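To make this concrete, here is a minimal sketch in Python – with invented names, limits, and a placeholder fallback policy; it is illustrative only and not drawn from any production system – of a hand-written, non-ML monitor that checks each command proposed by an ML planner against a simple safety envelope and substitutes a conservative fallback when the envelope is violated:

```python
from dataclasses import dataclass

@dataclass
class Command:
    """Control command proposed by the (untrusted) ML planner."""
    speed_mps: float       # requested forward speed, meters/second
    steering_deg: float    # requested steering angle, degrees
    headway_s: float       # predicted time gap to the vehicle ahead, seconds

@dataclass
class SafetyEnvelope:
    """Hand-written, reviewable limits -- deliberately *not* machine-learned."""
    max_speed_mps: float = 30.0
    max_steering_deg: float = 25.0
    min_headway_s: float = 2.0

def violates_envelope(cmd: Command, env: SafetyEnvelope) -> bool:
    """True if the proposed command falls outside the safety envelope."""
    return (cmd.speed_mps > env.max_speed_mps
            or abs(cmd.steering_deg) > env.max_steering_deg
            or cmd.headway_s < env.min_headway_s)

def monitor(cmd: Command, env: SafetyEnvelope) -> Command:
    """Pass safe commands through; otherwise substitute a conservative fallback."""
    if violates_envelope(cmd, env):
        # Placeholder fallback policy: slow down and straighten out.
        return Command(speed_mps=min(cmd.speed_mps, 5.0),
                       steering_deg=0.0,
                       headway_s=cmd.headway_s)
    return cmd

if __name__ == "__main__":
    env = SafetyEnvelope()
    risky = Command(speed_mps=35.0, steering_deg=10.0, headway_s=1.2)
    print(monitor(risky, env))   # envelope violated -> fallback command
```

The essential design point is that the monitor and its envelope are small, hand-written artifacts that can be reviewed, tested, and formally verified even when the ML planner itself cannot be.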

Finally, certifying a safety-critical system as complex as an autonomous automobile will inevitably require a convincing demonstration of safety. The FAA, FDA, and other agencies are beginning to adopt the notion of a formal "assurance case" or "safety case"[3]. A safety case is a tree-like structured argument that shows a system is safe by breaking the argument down into claims and subclaims, and then showing how each claim is backed by a combination of direct evidence and verified subclaims. Using this approach from the beginning helps identify what critical evidence is needed to demonstrate convincingly that the system is safe. 
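As a rough illustration only – this is not any agency's prescribed notation, and the claims and evidence items are hypothetical – the tree-like structure of a safety case can be sketched as a small data structure in which each claim is supported by direct evidence, by verified subclaims, or by both:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    """A node in a safety-case tree: a claim backed by evidence and/or subclaims."""
    statement: str
    evidence: List[str] = field(default_factory=list)       # e.g., test reports, proofs, reviews
    subclaims: List["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim is supported if it has direct evidence or verified subclaims,
        and all of its subclaims are themselves supported."""
        if not self.evidence and not self.subclaims:
            return False                  # an undischarged claim: missing evidence
        return all(sub.is_supported() for sub in self.subclaims)

# Hypothetical top-level claim for an autonomous vehicle (all names are illustrative).
top = Claim(
    statement="The vehicle stays within its safety envelope in all operating conditions",
    subclaims=[
        Claim("The runtime monitor detects envelope violations",
              evidence=["fault-injection test report", "formal proof of monitor logic"]),
        Claim("The fallback controller brings the vehicle to a safe state",
              evidence=["closed-course trial results"]),
        Claim("Sensor failures are detected and flagged"),   # no evidence yet -> gap
    ],
)

print(top.is_supported())   # False: exposes the missing evidence for the sensor subclaim
```

Walking such a tree from the top makes missing evidence visible early, which is precisely how the approach helps identify the critical evidence needed before the system is fielded.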

If self-driving vehicles are ever going to become a mass market reality, the systems will need to be developed systematically, monitored continuously, and shown to be safe in a structured and convincing way.

S. Tucker Taft is Director of Language Research at AdaCore.

AdaCore

www.adacore.com

References:

1. FAA Certification Authorities Software Team (CAST) Position Paper CAST-29, "Use of COTS Graphical Processors (CGP) in Airborne Display Systems."

2. Kane et al., "A Case Study on Runtime Monitoring of an Autonomous Research Vehicle (ARV) System," Runtime Verification (RV) 2015.

3. "Assurance Cases." Performance & Dependability | Tools & Methods | Assurance Cases. November 27, 2017. Accessed December 01, 2017. https://sei.cmu.edu/dependability/tools/assurancecase/index.cfm.

S. Tucker Taft was Chief Scientist at Intermetrics, Inc. and AverStar from 1980 to 2000, lead designer of Ada 95 from 1990 to 1995, CTO of AverCom Corp (a Titan company) from 2000 to 2002, and founder of SofCheck, Inc. from 2002 to 2012. He has been the designer of the ParaSail programming language since 2009 and Director of Language Research at AdaCore since 2012. His specialties include static error detection for software, programming language design, implementation, and standardization, parallel programming, and web-based project databases.