In many software projects, tests are neglected at the beginning of development, and the focus is put on design and features. This is normal: the first goal should simply be to produce software that works. Priorities change once the software is released, new features are added, and maintenance becomes an issue. Adding new functionality can become difficult, because any extension of the project can have unpredictable effects on existing features.
Adopting a code coverage tool is an important way to ensure quality in existing projects undergoing new development. Here, we discuss using code coverage tooling efficiently as a test monitoring tool, and present strategies for raising test coverage, adapted to common development scenarios, to ensure product quality.
Choosing the Right Code Coverage Metric
Before instituting a test monitoring strategy, it is necessary to define what should be tracked and analyzed. Code coverage is a well-known metric for tracking test progress, but it has many variants, differing mainly in granularity: function coverage, line coverage, statement coverage, decision coverage, condition coverage, Modified Condition/Decision Coverage (MC/DC) and Multiple Condition Coverage (MCC). It is necessary to choose the most relevant one (if it is not already imposed by, for example, a safety-critical standard).
A common error is to track all of them to avoid choosing one. A good general rule is that a single issue should be monitored by a single measurement, simply to keep the analysis easy. Tracking every metric at once makes the results difficult to interpret: how do we read a situation in which one metric shows a gain in quality while another does not?
For code coverage analysis, choosing the most suitable metric is in principle not difficult. The metrics can first be ordered by increasing precision:
- Function Coverage: very imprecise because it ignores the function's code.
- Line Coverage: weak because the code formatting can have an effect on the metric.
- Statement Coverage: counts executed statements, so it is independent of code formatting.
- Decision Coverage: additionally requires each decision (e.g., an if condition) to evaluate to both true and false.
- Condition Coverage: requires each boolean sub-expression within a decision to evaluate to both true and false.
- MC/DC: difficult to interpret, but a good intermediate metric between the condition coverage and MCC.
- MCC: very precise, but may need an extremely large set of tests to cover complex boolean expressions.
One useful property of this ordered list is that 100% coverage with one metric generally implies 100% coverage with all preceding metrics: with 100% condition coverage we also get 100% function, line and statement coverage, though not necessarily MC/DC or MCC coverage. (One caveat: strictly defined, condition coverage alone does not guarantee decision coverage. For a decision "a and b", the two tests (true, false) and (false, true) exercise both values of each condition while the decision itself is always false, which is why many tools measure the combined condition/decision coverage.) It is thus not necessary to track two coverage metrics at once; it suffices to choose a good compromise between the precision of the metric and the exhaustiveness of the test suite it demands. A good coverage tool will allow recording all of these metrics.
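To make these differences concrete, here is a small Python sketch; the function and its test inputs are invented for illustration. For a single decision built from two conditions, it lists a minimal test set satisfying each metric:

```python
# A hypothetical function with a single compound decision, "in_stock and paid",
# used to compare what each coverage metric demands of the test suite.
def ship_order(in_stock: bool, paid: bool) -> bool:
    if in_stock and paid:  # one decision made of two conditions
        return True
    return False

# Decision coverage: the decision as a whole must be true once and false once.
decision_tests = [(True, True), (False, False)]

# Condition coverage: each condition must be true once and false once.
# Note that with these two tests the decision itself is always false,
# which is why condition coverage is often combined with decision coverage.
condition_tests = [(True, False), (False, True)]

# MC/DC: each condition must be shown to independently flip the decision.
# For "a and b", three tests suffice: (T,T) vs (F,T) isolates the first
# condition, (T,T) vs (T,F) isolates the second.
mcdc_tests = [(True, True), (False, True), (True, False)]

# MCC: every combination of the conditions, i.e. 2^n tests for n conditions.
mcc_tests = [(a, b) for a in (True, False) for b in (True, False)]

print(len(decision_tests), len(condition_tests), len(mcdc_tests), len(mcc_tests))
# → 2 2 3 4
```

For a decision with n conditions, MCC needs up to 2^n tests while MC/DC needs only n+1, which is why MC/DC is the usual compromise in safety-critical standards.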
Strategies for Integrating the Coverage Metrics
Recording the metrics alone is not sufficient: a strategy for acting on them must be in place. One such strategy is to require a minimum coverage for the product as a whole.
Overall Minimum Code Coverage
This strategy is difficult to apply in general if the coverage recorded by unit tests and by application tests is mixed. The coverage of application tests grows quickly: with few tests, 50% coverage can be reached. Soon, however, the coverage reaches an asymptote, and going beyond 75% is difficult. The coverage of unit tests grows slowly with the number of tests, but typically ends up higher; in most cases, unit tests are the only way to reach 100% coverage.
For an existing product without an automated test suite, however, this strategy raises another issue. A released product, even if not perfect, has a level of quality that is good enough for it to be used; having no formal tests does not mean that the product does not work. Setting a coverage goal for the whole product, say 90%, would require significant effort and may not be realistic. And does it make practical sense to write tests for legacy sources that have not been touched for years, and which developer experience shows to be working?
Coverage Threshold on New Commits
For the case described above, it is better to step away from an overall coverage goal and instead set requirements on newly developed features. With a code coverage tool, this can be achieved in two ways:
- Comparison of two software releases.
- A patch analysis (or test impact analysis), which permits an analysis of individual commits.
Comparing Code Coverage of Two Releases
By comparing the coverage of two releases, it is possible to get the coverage of the modified code between two releases. This strategy offers several advantages over monitoring the overall coverage:
- It is common for the effort required to increase coverage to grow exponentially over time. If your product is 30% covered, gaining 1% more is not much work; but once coverage reaches 90%, gaining an additional 1% likely requires more effort than achieving the first 50% did. By monitoring the coverage of only the code developed between two releases, the effort stays constant from release to release.
- This strategy does not force developers to write artificial tests on legacy code, simply to fulfill an overall coverage requirement.
- This strategy makes the decision to release or not less arbitrary, by providing a more informed assessment of the quality of the new features in development.
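As a sketch of this idea, assuming the coverage tool can export the set of executed line numbers per file and the VCS diff yields the changed lines, the coverage of only the new code could be computed like this (all names and data are illustrative):

```python
# Sketch: measure coverage of only the lines changed between two releases.
# Assumes the coverage tool can export executed line numbers per file and
# the changed lines come from the VCS diff; all names here are invented.
def new_code_coverage(changed: dict[str, set[int]],
                      covered: dict[str, set[int]]) -> float:
    """Fraction of changed lines that are executed by the test suite."""
    total = hit = 0
    for path, lines in changed.items():
        total += len(lines)
        hit += len(lines & covered.get(path, set()))
    return hit / total if total else 1.0

# Hypothetical release diff: 6 changed lines, 4 executed by the tests.
changed = {"parser.c": {10, 11, 12, 30}, "util.c": {5, 6}}
covered = {"parser.c": {10, 11, 30}, "util.c": {5}}
print(f"{new_code_coverage(changed, covered):.0%}")  # → 67%
```

The denominator contains only the changed lines, so the legacy code does not drag the number down and the target stays reachable release after release.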
Patch Analysis
The comparison of releases is a good instrument for QA to monitor the quality of product development. But consider a common development scenario: once a product is released, additional hot fixes often need to be published. Doing so poses the risk of breaking existing features. A careful code review helps, but it is difficult and not as robust as a quantitative check such as a patch analysis.
A patch analysis decorates a patch generated by a Version Control System with:
- Statistics about the coverage of the patch itself.
- The list of tests impacted by the patch.
- The list of lines not covered by tests.
With this information, the review is easier: the reviewer can check whether the patch is adequately tested and whether the risk of publishing the fix is too high.
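A minimal sketch of the computation behind such a report, assuming per-test execution data is available as sets of line numbers (the data shapes and names are invented for illustration; real tools gather this through instrumentation):

```python
# Sketch of a patch-analysis report: which tests are impacted by a patch,
# and which patched lines no test executes. Data shapes are assumptions.
def analyze_patch(patch_lines: set[int],
                  coverage_by_test: dict[str, set[int]]):
    # A test is impacted if it executes at least one patched line.
    impacted = sorted(test for test, lines in coverage_by_test.items()
                      if lines & patch_lines)
    # Patched lines executed by no test at all.
    executed = set().union(*coverage_by_test.values())
    uncovered = sorted(patch_lines - executed)
    return impacted, uncovered

# Hypothetical hot fix touching lines 42, 43 and 80 of one file.
impacted, uncovered = analyze_patch(
    {42, 43, 80},
    {"test_login": {40, 41, 42, 43}, "test_logout": {90, 91}},
)
print(impacted)   # → ['test_login']  (tests to re-run before publishing)
print(uncovered)  # → [80]  (patched lines no test executes)
```

The impacted-test list tells the reviewer which tests to re-run before publishing the fix, and the uncovered-line list points directly at the residual risk.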
Wrap Up
A key goal of using code coverage analysis in your development is to obtain a more informed assessment of your application's quality. And for existing software that was not built with automated tests from the start, a code coverage tool can help ensure the quality of future product releases. High test coverage can be achieved in many ways, but it is important to examine the strategies behind your coverage goals to make sure they remain effective.