Integrating static analysis with a compiler and database

February 01, 2010

Static analysis tools are becoming more integrated into the software development process. Saving data from the compiler, change history, and error information during the process instead of as a post-code step can make static analysis more productive.

Advanced static analysis tools are becoming increasingly critical in embedded systems development. Going well beyond older static analysis tools that were, in effect, coding style checkers, new tools statically analyze a source program’s control and data flow, thereby detecting bugs and vulnerabilities such as potential buffer overflows, uses of uninitialized variables, accesses through null pointers, and susceptibility to security attacks (SQL injection, cross-site scripting, and so on).

However, these advanced tools raise several issues. First, the tools need to understand the semantics of the program being analyzed – that is, they have to compile the program – to perform the required control- and data-flow analysis. To do this, they must be closely integrated into the build environment, so that all include files or other specification modules that might be needed at compile time are identified and available. Second, the amount of output produced by these advanced tools can be daunting, with each diagnostic message requiring careful scrutiny to determine whether it reflects a real problem and, if so, how to address it.

Integrating the static analysis tool more closely with the software development tool chain can alleviate both of these challenges. In the first case, integrating the static analysis tool tightly with the compiler largely eliminates build environment problems and makes the user interface simple and familiar. In the second case, the extensive output can be managed by storing all output in an historical database, allowing the programmer to focus on deltas between a known good release and the current state of the source, rather than dealing with all the messages at each step.

Integrating with the compiler

A static analysis tool that goes beyond simple syntax checking generally needs much of the power of a compiler front end so that it can base its analysis on the semantics of the program. This is because the same syntactic form can often have different interpretations based on the meanings of its constituents. For example, the expression F(N) in Ada might be (among other things) an array reference, function invocation, or type conversion.

Having access to the underlying semantics of the program allows the tool to follow every name appearing within the program back to the declaration that introduces that name, even in the presence of overloading, generic templates, or renaming. The tool will know the type of every object and every expression, and will identify where any implicit runtime checks occur. These implicit runtime checks might include a check for dereferencing a null pointer and a check for indexing outside the bounds of an array. Even languages without implicit runtime checks can define certain runtime actions to have unspecified semantics, such as an integer arithmetic overflow or an array index that is out of bounds. A static analysis tool will need to know when the language semantics allow such unspecified (and thus unpredictable) behavior.

Because of the need to include the power of a compiler front end, many static analysis tools are built on top of preexisting compiler technology for the language of interest. Unfortunately, the compiler technology chosen by the tool’s builder might be unrelated to the compiler used by the tool’s customer. When this happens, the static analysis tool might not work on the customer’s code as written.

For example, if the customer program uses compiler-specific features (such as interrupt handling or special memory-mapping facilities), then there is no guarantee that they will be supported at all, or in the same way, by the static analysis tool’s underlying compiler front end technology. Even for portable code, the customer’s compiler and the static analysis tool’s underlying technology might have different bugs or subtly different interpretations of the rules of the language. And even when the interpretations match, the commands to compile the program – the command line switches that control source code search paths, preprocessor support, and other features – might differ significantly. Thus, the build process for a complex program can be difficult to translate into a make process for performing static analysis on the program.

To address these issues, the clear solution is to integrate the advanced static analysis engine with the same compiler technology the customer is using. The static analysis engine must therefore be somewhat independent of the intermediate representation used by any particular compiler technology, so that the tool can be easily adapted to support multiple compiler front ends.

One approach is for the static analysis engine to have its own intermediate representation specifically designed to support the advanced analyses the tool performs. Adapting to support a new compiler front end requires writing a translation module that transforms the compiler’s intermediate representation (the output of the front end) to the program representation used by the static analysis engine. The translation module outputs the result into a file for later use. The intermediate language translator can either be linked to the compiler front end or run as a stand-alone program, reading the compiler’s intermediate representation, transforming it, and then writing out the analysis engine’s intermediate representation. This process is illustrated in Figure 1.

 

Figure 1: The intermediate language translator reads the compiler’s intermediate representation, transforms it, and writes out the static analysis engine’s intermediate representation.
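
As a concrete illustration of this path, the following sketch (in Python, with purely hypothetical node kinds and file formats – not the representation of any particular compiler) shows a stand-alone translator: it reads the compiler front end's intermediate representation from a file, maps each node onto the analysis engine's own representation, and writes the result out for later use.

import json
import sys

def translate(compiler_ir):
    # Map hypothetical front-end IR nodes onto the analysis engine's node kinds.
    out = []
    for node in compiler_ir:
        if node["kind"] == "call":
            out.append({"op": "invoke", "target": node["callee"], "args": node["args"]})
        elif node["kind"] == "index":
            # The analysis IR makes the implicit bounds check explicit.
            out.append({"op": "check_bounds", "array": node["base"], "index": node["index"]})
            out.append({"op": "load_element", "array": node["base"], "index": node["index"]})
        else:
            # Pass other node kinds through unchanged.
            out.append({"op": node["kind"], **{k: v for k, v in node.items() if k != "kind"}})
    return out

if __name__ == "__main__":
    with open(sys.argv[1]) as f:                  # the compiler's intermediate representation
        analysis_ir = translate(json.load(f))
    json.dump(analysis_ir, sys.stdout, indent=2)  # the analysis engine's representation, saved for later use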



When this integration approach is adopted, static analysis becomes just another part of the build process, performed either during compilation or, to take advantage of whole program analysis, during the link step. A key advantage for users is that invoking the static analysis tool requires nothing more than an additional command line switch to the compiler and/or linker. There is no need to create a specialized build script for the tool or to maintain two sets of sources (one that works with the compiler, and one that works with the static analysis tool).

Integrating with the development environment

Because software development is often conducted through graphical Integrated Development Environments (IDEs) such as Eclipse, it is natural to integrate the static analysis tool and the compiler into the IDE. The overall interface to the tool will then be immediately familiar to the programmer, reducing the learning curve and increasing the likelihood that the tool will be used on a regular basis.

Messages generated by the static analysis tool must be handled like error or warning messages generated by the compiler, and managed and viewed in the same way by the user. Given that multiple IDEs are in use, each with its own message format, the static analysis tool will need to represent its messages so that they can be readily transformed into whatever format the IDE expects.

A natural choice for message representation is XML, given its tagged, self-describing approach to capturing message characteristics. A side benefit of using XML is that it helps simplify the process of internationalizing the application, so that the messages can be displayed in the natural language preferred by the customer.
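
As a rough illustration of such a representation, the sketch below (using Python's standard xml.etree.ElementTree; the element and attribute names are invented for the example, not an actual message schema) builds one tagged diagnostic that an IDE plug-in could then transform into its own message format.

import xml.etree.ElementTree as ET

# Build one tagged diagnostic; the schema here is purely illustrative.
msg = ET.Element("message", {
    "file": "sensor.adb",                  # hypothetical source file
    "function": "Update_State",            # enclosing subprogram
    "check": "uninitialized_variable",     # kind of check that triggered
    "severity": "warning",
})
ET.SubElement(msg, "text").text = "n may be referenced before it has been assigned a value"
print(ET.tostring(msg, encoding="unicode"))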

Integrating with an historical database

Once the advanced static analysis tool is integrated with the compiler and IDE, the next issue is dealing with the potentially large number of messages that such a tool can provide. Because advanced static analysis tools are looking for possible runtime logic errors and security vulnerabilities, they have to simulate the runtime program execution (identify the set of potential execution states) and determine under what conditions an undesirable state might be reached. Unfortunately, this is rarely a simple case of yes or no. There are many shades of grey where the level of vulnerability depends on factors that might be unknown to the tool or beyond its powers of analysis.

This issue is sometimes phrased in terms of soundness versus precision. A tool searching for problematic constructs is said to be sound if it identifies all of the problems it is looking for (no false negatives). But soundness generally comes at the expense of precision. The tool could generate a large number of false positives, which are warnings or errors identifying issues that are not real problems. Consider this simple example using C-like syntax:

int k, m, n;
... // Complicated code that assigns a positive value to m
... // and that does not assign to n
if (m<0) {
   k = n;
   ...
}

A tool might not be able to deduce that the m<0 condition on the if statement is false, and thus might warn that the body of the if statement is referencing an uninitialized variable (n). The actual problem is the opposite: The body of the if statement is code that will never be executed, sometimes referred to as dead or unreachable code.

A tool developer must decide whether to opt for soundness (make sure that no actual violations go undetected) or precision (make sure that all reported violations are real errors). When a tool is intended for safety-critical or high-security systems, the scales are tipped to soundness. The developer using such a tool must have confidence that all violations are detected. But this raises the issue mentioned earlier regarding how to deal with the potentially large number of false positives that could be generated. This problem is especially noticeable when the tool is applied to legacy software (code that was developed before applying the static analysis tool). The number of messages a user needs to review for a large application can be daunting.

Integrating the advanced static analysis tool with an historical database makes the tool productive to use by minimizing the problems caused by false positives, even for a complex application developed prior to the tool's adoption. The critical concept is the notion of a baseline and the tool's ability to highlight the deltas relative to such a baseline. By recording all the results of each tool run in an historical database, the tool can identify deltas (changes) between any two runs.

Data becomes more useful

For the comparison between analysis runs to work effectively, messages must be uniquely identifiable without referring to specific line numbers, which can shift from one version of the source code to the next even when the code itself has not changed significantly. One way to identify a message without a line number is to record the text of the message (or the corresponding XML), along with the name of the function in which it appears, and a sequence number if the text of the message is identical to that of a prior message in the same function.
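
A minimal sketch of such a keying scheme, assuming each raw message arrives as a (function name, message text) pair and leaving all other details aside:

from collections import defaultdict

def message_keys(messages):
    # Build line-number-independent keys: (function, message text, sequence number).
    seen = defaultdict(int)
    keys = []
    for func, text in messages:
        seq = seen[(func, text)]           # 0 for the first occurrence, 1 for the next, ...
        seen[(func, text)] += 1
        keys.append((func, text, seq))
    return keys

# Two identical warnings in the same function receive distinct keys.
msgs = [("Update_State", "possible use of uninitialized variable n"),
        ("Update_State", "possible use of uninitialized variable n")]
print(message_keys(msgs))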

Presuming messages are stored in the database using this line-number-independent unique identifier as the key, the tool can then easily identify whether a given message is new or has been generated previously. This keeps the overall size of the historical database manageable. Rather than repeatedly storing the text of all messages for all invocations of the tool, the tool only needs to store the text of a given message once, along with an indication of the range of tool invocations where the message appeared (which run was the first where the message was generated, and which run was the first where it did not appear).
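
The bookkeeping itself can be quite small. The sketch below assumes run identifiers are increasing integers and stores, per message key, the first run in which the message appeared and the first run in which it no longer appeared:

def record_run(history, run_id, current_keys):
    # history maps message key -> [first run seen, first run absent (None while still present)].
    for key in current_keys:
        if key not in history:
            history[key] = [run_id, None]  # new message: its text is stored only once
    for key, span in history.items():
        if key not in current_keys and span[1] is None:
            span[1] = run_id               # first run in which the message no longer appears
    return history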

This historical database makes it straightforward for the user interface to display or highlight only those messages that are new since a specified baseline. This allows the tool to be used effectively even on large applications with significant amounts of legacy code. A known good release of the application can be run through the analysis tool as a baseline. The current development version of the application can be analyzed, with the results from analyzing this known good release as the baseline. Those working on the development version can focus on any messages associated with changes they have made since the known good release, rather than having to wade through messages that relate to legacy code. Eventually, effort can be devoted to going through this backlog of messages, but that can be scheduled at a time that is convenient or corresponds to a larger reengineering effort.
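
Given that structure, the baseline query reduces to a simple filter. The sketch below, continuing the hypothetical history mapping above, returns only the messages that first appeared after a chosen baseline run and are still present:

def new_since_baseline(history, baseline_run):
    # Messages introduced after the baseline run that have not yet disappeared.
    return [key for key, (first_seen, first_absent) in history.items()
            if first_seen > baseline_run and first_absent is None]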

Another benefit provided by integration with an historical database is the ability to collect comments from users who review the analysis results. In some cases, a particular message might require significant investigation to understand the possible implications. It is important that this work be captured. The historical database is a natural place to record what the user learns.

In addition, if the user determines that the identified code is safe and secure, the historical database can record that the given message should be suppressed from subsequent output, and can record the supporting rationale for suppressing the message. Alternatively, if the identified code needs to be changed, the historical database can record the Program Trouble Report (PTR) ID assigned to the problem, allowing traceability between a problem-tracking system and the analysis tool’s historical results. When the tool detects that a message with an associated PTR ID has disappeared, it can be configured to directly notify the problem-tracking system that the associated PTR record can be closed. Automating the process of closing PTRs can provide significant relief to a typically overburdened quality assurance team.
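
A sketch of that notification step, again using the hypothetical history mapping; notify_tracker stands in for whatever interface the problem-tracking system actually provides:

def close_fixed_ptrs(history, ptr_ids, notify_tracker):
    # ptr_ids maps a message key to the PTR ID the reviewer recorded for it.
    for key, ptr in ptr_ids.items():
        first_seen, first_absent = history.get(key, (None, None))
        if first_absent is not None:       # the message has disappeared in a later run
            notify_tracker(ptr, reason="diagnostic no longer reported")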

Static analysis as a key component of the development process

With applications getting larger and more complex, advanced static analysis tools play a key role in modern software development by significantly reducing the effort needed to find bugs and vulnerabilities that can compromise a system’s reliability, safety, or security. But many organizations are not yet taking full advantage of these tools, often because of the potentially high entry barrier to incorporate them into the daily software development process (builds, regression tests, and other steps).

As discussed previously, two important steps can reduce this entry barrier: tool integration with the compiler technology and with an historical database. This is not simply a theoretical proposal. CodePeer, an advanced static analysis tool developed jointly by SofCheck and AdaCore, serves as an automated code reviewer for Ada. This tool has been fully integrated into AdaCore’s GNAT Pro Ada development environment and is invokable through the GNAT Programming Studio IDE.

Integration with the compiler largely eliminates the challenge of porting the source code to the analysis tool. The same compiler front end that successfully compiles the source code can also generate the intermediate representation that the advanced static analysis engine needs for more in-depth analyses. Furthermore, the same command line switches, source code structure, and make files can be used to compile and statically analyze the code. The compiler front end will automatically handle any implementation-specific features used by the application.

The second major step toward reducing the entry barrier is integration with an historical database, which allows developers working on large systems to focus on their recent changes and defer reviewing issues in previously released legacy code until a more appropriate time. Additionally, integration with the database allows developers to record the results of reviewing the tool output and the rationale behind decisions to either suppress the message or file it as a PTR. Finally, when a later run shows that a reported problem has disappeared, the database can be used to verify the fix automatically and close the corresponding PTR. With these two steps, static analysis can become an important and productive tool in the embedded software developer's toolbox.

S. Tucker Taft is founder and CTO of SofCheck, Inc., based in Burlington, Massachusetts. Tucker founded SofCheck in 2002 to provide tools and technology for enhancing software development quality and productivity. Prior to that, he was a Chief Scientist at Intermetrics, Inc., where he led the Ada 95 language revision effort. Tucker holds a BA in Chemistry from Harvard College.

SofCheck
781-750-8068
[email protected]
www.sofcheck.com

 
