Static Test Case and Test Procedure Analysis Tool for Error Optimization in Verification Artifacts

By Sayali Salape

Senior Engineer


January 25, 2022



Efficiency and quality are important in any field, and software verification is no exception. The “Static Test Case and Test Procedure Analysis Tool” enhances the quality of artifacts in verification projects and helps rectify the human errors introduced in them.



In the avionics domain, safety-critical software must adhere to Federal Aviation Regulations through the DO-178B/C means of compliance. The Radio Technical Commission for Aeronautics (RTCA) and the European Organization for Civil Aviation Equipment (EUROCAE) jointly developed DO-178, Software Considerations in Airborne Systems and Equipment Certification. DO-178B/C is a guideline addressing safety-critical software used in airborne systems, developed to satisfy the need for airworthy systems. Such software must fulfill the standard and its related certification objectives.

One of the objectives of DO-178B/C says, “Conformity review of the software product is conducted.” The objective of peer review is to obtain assurance that the software life cycle is complete and that a quality product is delivered for certification. During the peer review process, the reviewer must review all artifacts added to the review and ensure that they are free from defects. If any defect is identified, the reviewer must capture it as a finding.

In the next step, the implementer must provide a proper resolution for those defects. While working on the verification of avionics software, our team has encountered many findings related to spelling mistakes, duplication of requirements within the same test (or same cell), redundant whitespace (leading, trailing, between words, etc.), HLR-to-LLR traceability, and missing test cases for specific requirements.

It takes a significant amount of time for both the reviewer and the implementer to capture and address such findings. As the number of artifacts increases, the time required to identify and address such errors also increases. Hence, to avoid such findings, our team developed the “Static Test Case and Test Procedure Analysis Tool.” The tool is written in Python and captures the above-mentioned errors. It helps implementers fix such errors at an early stage and reduces review time.


The main goal in developing the Static Test Case and Test Procedure Analysis tool is to minimize user effort in searching for misspelled words, whitespace issues, requirement traceability issues (between HLR and LLR), and missing test cases (requirements not tested).

Here, the test cases are developed in Excel or text files and are provided to the tool. A test case contains the test case ID, traces to low-level and high-level requirements, the objective of the test case, and test steps that contain inputs/outputs and the purpose of each step.
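As an illustration, the fields listed above could be represented in memory roughly as follows. This is a sketch only; the class and field names are hypothetical, not the tool's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    purpose: str              # purpose of this step
    inputs: dict              # step inputs
    expected_outputs: dict    # step expected outputs

@dataclass
class TestCase:
    test_id: str              # test case ID
    llrs: list                # low-level requirements traced by this test
    hlrs: list                # high-level requirements traced by this test
    objective: str            # objective of the test case
    steps: list = field(default_factory=list)

# Illustrative data only.
tc = TestCase("TC_slider_001_01", ["LLR-010"], ["HLR-002"],
              "Verify the slider control output")
tc.steps.append(TestStep("Set slider to max", {"slider": 100}, {"output": 100}))
print(tc.test_id, len(tc.steps))
```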

Manually generated documents are bound to contain errors that can easily be overlooked. The tool scans the whole document and identifies misspellings, extra whitespace, and consecutive duplicate words. It also checks the naming convention of the test case file name and test case ID and documents the results in a text file for display.

Although Excel provides a spell-check feature, it traverses each word one at a time and requires more effort, whereas the tool directly shows all errors along with their locations.
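A minimal sketch of these static checks in Python, assuming a small known-word set for the spelling check (a real run would load a full dictionary):

```python
import re

# Assumed word list for illustration only.
KNOWN_WORDS = {"slider", "control", "verify", "the", "output", "of"}

def scan_text(text):
    """Return a list of (error_type, detail) findings for one text cell."""
    findings = []
    # Leading/trailing whitespace in the cell.
    if text != text.strip():
        findings.append(("whitespace", "leading/trailing whitespace"))
    # Runs of two or more spaces between words.
    if re.search(r"\s{2,}", text.strip()):
        findings.append(("whitespace", "multiple spaces between words"))
    words = re.findall(r"[A-Za-z]+", text)
    # Consecutive duplicate words (case-insensitive).
    for prev, cur in zip(words, words[1:]):
        if prev.lower() == cur.lower():
            findings.append(("duplicate", f"consecutive duplicate word '{cur}'"))
    # Words not found in the known-word list.
    for word in words:
        if word.lower() not in KNOWN_WORDS:
            findings.append(("spelling", f"possible misspelling '{word}'"))
    return findings

print(scan_text("Verify the the Slider contrl  output"))
```

Running this on the sample string reports the double space, the repeated "the," and the misspelled "contrl," each tagged with its error type so the tool can log the location alongside it.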

Analyzing requirement traceability and locating missing test cases is another feature of this tool. In verification, requirement coverage is a very important aspect and one of the core objectives of the DO-178B/C standards. Objectives A-7.4 and A-4.6 of DO-178B/C state, “Test coverage of low-level requirements is achieved,” and, “Low-level requirements are traceable to high-level requirements,” respectively.

Engineers must check whether each requirement is tested and whether every low-level requirement (LLR) has corresponding high-level requirements (HLRs) traced to it. The Static Test Case and Test Procedure Analysis tool collects the data from the test case file and maintains the list of LLRs and HLRs so that users can easily go over it and cross-check the LLR-to-HLR traceability.

The tool checks whether each LLR has a test associated with it and documents duplicates of LLRs and HLRs within the same cell, helping users minimize the efforts of going over the entire test file.
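The coverage check described above can be sketched as follows. The function name and sample data are illustrative; the “Requirement not tested” string mirrors the tool's reported output:

```python
def check_llr_coverage(module_llrs, llr_to_tests):
    """Map each module LLR to its associated test IDs, or flag it as untested."""
    report = {}
    for llr in module_llrs:
        tests = llr_to_tests.get(llr, [])
        report[llr] = tests if tests else "Requirement not tested"
    return report

# Illustrative data: LLR list from the .csv input and the LLR-to-test
# mapping parsed from the test case .xlsx.
module_llrs = ["LLR-001", "LLR-002", "LLR-003"]
llr_to_tests = {"LLR-001": ["TC-01"], "LLR-003": ["TC-02", "TC-05"]}
print(check_llr_coverage(module_llrs, llr_to_tests))
# LLR-002 has no associated test, so it is flagged as not tested.
```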

Design details:

The Static Test Case and Test Procedure Analysis Tool is divided into two main parts: 1) requirement traceability analysis, and 2) static analysis and clean-up (finding spelling mistakes, blank lines, extra whitespace, and incorrect test case IDs).

In the requirement traceability analysis part, the test case in .xlsx format and the list of requirements of the module under test in .csv format are provided as input to the tool. It produces a CSV file containing LLRs and associated test IDs, an Excel file containing the parsed test ID, HLR, and LLR data, and a text file listing any duplicate LLRs and HLRs.


Figure 2.1: Requirement traceability analysis functionality of tool

The requirement traceability analysis part of the tool performs the following functions:

  • Traceability between HLR and LLR — The test case file and the list of requirements of the module under test in CSV format are provided as input to the function that checks requirement traceability. It parses the test case file by test case ID, LLR, and HLR and places the results into a newly created .xlsx file. The input CSV file contains the list of requirements for the specific module.
  • Requirement-test traceability — The function reads the requirements from the CSV file and searches for them in the parsed HLR/LLR .xlsx file. If the LLR exists in the parsed sheet, the function captures the corresponding test case ID. The tool creates a new CSV file and writes each LLR and its respective test case ID into it. If the LLR does not exist, the tool outputs the string “Requirement not tested.”
  • Duplicate requirements identification — The tool identifies whether a cell in the parsed HLR/LLR .xlsx file contains duplicate HLRs or LLRs and documents those requirements in the text file.
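The duplicate-requirement check in the last bullet can be sketched as below. A traceability cell often lists several requirement IDs separated by commas or newlines; the ID pattern used here (e.g. LLR-012) is an assumption for illustration:

```python
import re
from collections import Counter

def find_duplicates_in_cell(cell_text):
    """Return the requirement IDs that appear more than once in one cell."""
    ids = re.findall(r"[A-Z]+-\d+", cell_text)   # assumed ID format
    counts = Counter(ids)
    return sorted(rid for rid, n in counts.items() if n > 1)

print(find_duplicates_in_cell("LLR-010, LLR-012, LLR-010, LLR-015"))
# LLR-010 is listed twice in the same cell, so it is reported.
```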

In the static analysis and clean-up part of the tool, one or more test files in different formats (such as .xlsx or .txt) are provided as input, and the identified errors are documented in a text file.


Figure 2.2: Static analysis and clean up functionality of tool

The static analysis and clean-up part performs the following function:

  • Captures the static errors (spelling mistakes, extra whitespace, consecutive duplicate words, etc.) — Users can select one or more test case files and provide them as input to the function that checks for static errors. The tool checks whether the test case file name and test ID follow the guidelines and documents all errors in a text file. It also reports unused rows in the test case file.
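The naming-convention check mentioned above could be implemented with patterns like the following. The guideline patterns here are assumptions for illustration (e.g. files named TC_&lt;module&gt;_&lt;number&gt;.xlsx and test IDs like TC_&lt;module&gt;_&lt;number&gt;_&lt;step&gt;); a real project would encode its own guidelines:

```python
import re

# Assumed naming guidelines, for illustration only.
FILE_PATTERN = re.compile(r"^TC_[A-Za-z]+_\d{3}\.xlsx$")
TEST_ID_PATTERN = re.compile(r"^TC_[A-Za-z]+_\d{3}_\d{2}$")

def check_naming(file_name, test_ids):
    """Return a list of naming-guideline violations for a file and its test IDs."""
    errors = []
    if not FILE_PATTERN.match(file_name):
        errors.append(f"file name '{file_name}' violates naming guideline")
    for tid in test_ids:
        if not TEST_ID_PATTERN.match(tid):
            errors.append(f"test ID '{tid}' violates naming guideline")
    return errors

print(check_naming("TC_slider_001.xlsx", ["TC_slider_001_01", "tc_slider_1"]))
# Only the lowercase, underpadded test ID is flagged.
```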


The tool generates four result files:

  • Static errors report (.txt)
  • Traceability report between HLR and LLR (.xlsx)
  • Traceability report between requirement and test (.csv)
  • Duplicate requirements (.txt)

The following snippets help users understand how the tool works and produces results.


Figure 3.1: Report of static errors in the test case

Figure 3.2: Traceability Report between HLR and LLR


Figure 3.3: Traceability Report between Requirement and Test


Figure 3.4: Report showing Duplicate requirements

Integration of Static Test Case and Test Procedure Analysis Tool with GUI developed in C#:

We have successfully integrated the Static Test Case and Test Procedure Analysis tool created by our team with a GUI tool implemented by another team. The challenge was that the GUI tool was implemented in C#, while the Static Test Case and Test Procedure Analysis Tool was implemented in Python.

The integration enables users to keep the same GUI they have been using, with additional features for checking static errors in the TCs they are working on. The integration involves enabling the Python script to provide an interface to the C#-based GUI (i.e., making functions executable on the command line with the test case list as arguments), invoking the Python script from C#, and performing file operations from C# to generate a log file.
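A minimal sketch of the Python side of this command-line interface, with argument names and the log format assumed for illustration:

```python
import argparse

def main(argv=None):
    """Entry point the C# GUI can invoke as an external process."""
    parser = argparse.ArgumentParser(description="Static TC analysis tool")
    parser.add_argument("test_cases", nargs="+",
                        help="test case files selected in the GUI")
    parser.add_argument("--log", default="analysis_log.txt",
                        help="log file the GUI reads back")
    args = parser.parse_args(argv)
    # Write one log line per analyzed file; the real checks would run here.
    with open(args.log, "w") as log:
        for tc in args.test_cases:
            log.write(f"analyzed {tc}\n")
    return 0

# Illustrative invocation with an explicit argument list.
exit_code = main(["TC_slider_001.xlsx", "--log", "analysis_log.txt"])
print("exit code:", exit_code)
```

On the C# side, such a script can be launched with System.Diagnostics.Process.Start, passing the selected test case paths as arguments and then reading the generated log file to populate the GUI's activity log.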

The following are features of this integration:

  • Saves overhead of operating the tool separately
  • All interfaces — like selecting TCs, executing the tool, and analyzing the report — are provided in the GUI tool itself, which saves engineers’ time in executing each step
  • Execution activity is monitored along with timestamps (in the form of an activity log) in the GUI tool to let the user know how execution proceeds

Case study:

As mentioned in the introduction, implementation and review efforts to correct errors are greater during the review process if they are not found and addressed in the implementation stage. This case study consists of one of the findings identified during the peer review process and an estimate of the time required to address it. The analysis provided below shows how much implementation and review time could be saved with the help of this tool.

Peer review finding description:

  • Clean up misspellings of the word “contrl,” i.e., the purpose statement in Test 1, “Slider contrl,” should read “Slider control.”
  • The artifact needs to be renamed as per the guidelines.


Approximate Time

  • Approx. time for the reviewer to catch the error and document it: 5 min.
  • Approx. time for the implementer to do this clean-up: 1 min.
  • Approx. time for the implementer to commit the changes, regenerate the log, and respond to the resolution: 10 min.
  • Total turnaround time: 15 min.

Now, if the same error is caught at the time of implementation, it could be fixed in less than 5 min.

Table 5.1: Effectiveness of tool


  • The effectiveness of the tool increases when several artifacts and multiple TCs are in review
  • Improves turnaround time for fixing errors by 70%
  • Reduces the number of findings related to spelling, naming conventions, and HLR-LLR traceability issues

Future Scope:

  • Take the LLRs and corresponding HLRs as inputs from the requirement management tool and check whether the test case contains correct LLR-to-HLR traceability.
  • Based on the parsed LLRs, generate a TC template with basic fields prepared, such as the objective, purpose, and inputs/outputs derived from the requirements.
  • Support manually created test procedure files in .c, .py, or .xml formats.
  • Support PDF markups.


The purpose of the tool is to generate robust, high-quality artifacts by eliminating requirement traceability issues and errors such as extra whitespace, duplicate words, misspelled words, and naming convention violations. It saves approximately 10 min. per artifact.

The tool is most effective when there are several artifacts, saving around 70% of turnaround time. With consistent use of this tool, our team has eliminated findings related to all the above-mentioned errors, significantly improving artifact quality and productivity.