Automotive safety and security: Is your testing good enough?

June 16, 2017


So you’ve now tested your software. But do you know how well you’ve tested it? How do you measure the effectiveness of your testing? The ever-growing volume and complexity of automotive software places an increasingly heavy burden on developers, who must verify not only its functionality but also that it meets the requirements for safety and security. On the surface, that means requirements-based testing. That is certainly important, but on a deeper level you need assurance that such critical code has been adequately tested. Have you tested for unusual conditions that, combined with subtle errors hidden in the less-frequently exercised parts of the code, could cause a serious malfunction? You have to measure the effectiveness of your testing.

Coverage analysis can be applied in whole or selectively to meet ISO 26262 guidelines

That may sound like adding yet another heavy burden to the developer. Yet it can be efficiently and cost-effectively managed with a suite of testing tools (requirements tracing, static and dynamic analysis, unit and integration testing, and so on) that can also evaluate the extent and the nature of that testing. In other words, it is important to understand what has and has not been tested, why, and how those tests relate back to the functional, safety, and security requirements. This is achieved through coverage analysis, which offers different techniques and, more specifically, levels that can be applied to the entire application or selectively to those parts whose malfunction could compromise the system’s function or result in injury or death. It is important that such tools are integrated and can be applied early and throughout the development cycle, so that they can immediately show that your testing has not only proven the functionality of the code but has also exercised all of it.

In the case of motor vehicles, we have ISO 26262, the standard that provides guidelines for the development of automotive software, including safety requirements classified into Automotive Safety Integrity Levels (ASILs) from A to D, with D being the most critical. It should come as no surprise that the guidelines call for deeper testing of the braking system than of the entertainment system. Using a set of analysis tools, we should be able to confirm that the software to be deployed in the vehicle has been tested to the proper levels of assurance for safety and security.

Integrated tools support multiple testing layers

Software quality analysis often starts with static analysis, which examines the source code without executing it. While static analysis does not itself perform coverage analysis, it can assess the quality and structure of the code and, if desired, check for compliance with coding standards. The knowledge and understanding of the code gained through static analysis can also be used to automate and accelerate test harness generation, as well as input generation for test case development.

Dynamic analysis is distinct from static analysis in that the code under test is compiled, linked, and executed. System test and unit test are both forms of dynamic analysis, and they are generally used in combination. As the name implies, system test exercises the system as a whole to show that functional, safety, and security requirements are properly addressed. Unit testing allows much earlier verification of functionality, provides a means to exercise defensive code, and can add further value by testing robustness through combinations of valid and invalid inputs, range testing, and boundary conditions.

Coverage data within the tool suite is generated through lightweight code instrumentation. That instrumentation allows the tools to track which code has been exercised and to report the scope and detail of the coverage achieved by the test cases used. Depending on the level and depth of coverage analysis required, driven by the ASIL, the tool suite reports coverage at levels such as statement, branch, and modified condition/decision coverage. For a list of what is required, see Tables 12 and 15 of the ISO 26262 standard.

[Table 12]

[Table 15]

Modified condition/decision coverage (MC/DC) is an in-depth coverage analysis technique that is not always well understood. Exhaustive code coverage would be achieved by exercising every possible combination of conditions at every branch (branch condition combination coverage, or BCCC). However, when a branch depends on four or more conditions, this leads to an impractical number of tests. For these branches, MC/DC reduces the required testing to N+1 test cases rather than the 2^N implied by BCCC. MC/DC ensures that each entry and exit point is invoked, that each decision takes every possible outcome, that each condition in a decision takes every possible outcome, and that each condition in a decision is shown to independently affect the outcome of the decision. The results, along with any gaps that require additional attention, should be displayed by the coverage analysis tool.

A complete tool suite can also provide analysis in the form of data and control coupling coverage. ISO 26262 requires “restricted coupling between software components,” and a sensible extension of that principle is to ensure that every invocation of a function and every access to shared data has been exercised. Data coupling analysis follows variables through the source code and reports exactly which values have been set and used, uncovering possible anomalous use.

[Figure 1 | The TBvision tool from LDRA shows sections of code that have been analyzed and the coverage results for the different levels of testing applied.]

In the automotive industry it is also increasingly popular to generate the software architectural design using tools that provide a graphical representation of that architecture (i.e., model-based development). These include IBM Rational Rhapsody, MathWorks Simulink, ANSYS SCADE, and others. A software test and analysis tool suite that can both test the code automatically generated by these tools and map the testing and coverage results back to the graphical representation helps close the loop, assuring developers of the effectiveness of their verification efforts.

So the answer is: Yes, to ensure safety for automotive applications, simply testing your code isn’t enough. You also have to measure the completeness of the testing process. But with an integrated, automated tool suite, the process is comprehensive, straightforward, and painless.

Jay Thomas is a Technical Development Manager for LDRA Technology and has been working on embedded software applications in aerospace systems since 2000. He specializes in embedded verification implementation and has helped clients on projects including the Lockheed Martin JSF and the Boeing 787, as well as medical and industrial applications.