This fall, for the first time, I've had the professional opportunity to work on some Python code. I find the language quite elegant and simple to write, much more so than Ruby, although I couldn't really say why. Since I'm also working on a Master's degree in bioinformatics, I decided to look at the Biopython project and see about adding some code coverage statistics to it.
Biopython is an open-source software project that provides Python libraries for a variety of bioinformatics purposes. It includes, among other features, file parsers, alignment algorithms, and interfaces to common bioinformatics tools and databases. To ensure quality, a battery of automated tests is run against the Biopython source code before every release. While Biopython has a substantial number of automated tests, no statistics are gathered on how much of the code those tests actually cover.
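To give a flavor of the library, here's a minimal example using its Bio.SeqIO module to read a FASTA file (the file name is made up for illustration):

```python
from Bio import SeqIO

# Print the identifier and length of each sequence in a FASTA file
for record in SeqIO.parse("example.fasta", "fasta"):
    print("%s: %d bases" % (record.id, len(record.seq)))
```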
The project took several steps. The first was simply to run the existing test suite. I downloaded the codebase to my Windows machine and tried to build it, but that turned out to be really difficult: Biopython depends on NumPy as well as several other packages. I shaved several yaks attempting to get it running, but eventually gave up. Instead, I created a virtual Linux machine and installed Biopython there.
This went much more smoothly. (Note that I don't think Linux has any natural advantage here; people who play with the Biopython source code run it on Linux, so that install path gets a lot more attention.) I was able to run the test suite, after a fashion. Not all of the tests ran successfully the first time, which is a bit scary - it's hard to modify unfamiliar code without a passing test suite as a safety net. After a while, though, I realized that some tests connect to remote services to verify those connections work, and some of those services were down. There's an option to run only the tests that don't attempt connections, and that worked - again, after a fashion: there are so many external dependencies that the suite is set up to skip tests requiring packages that aren't installed. But since all I was looking for was a passing test suite, I was OK with that.
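For the record, the Linux setup boiled down to something like the following. This is a sketch rather than an exact transcript: the package names assume a Debian-style distribution, and your system may need more of Biopython's optional dependencies.

```sh
# Install Python headers and NumPy (Debian-style package names assumed)
sudo apt-get install python-dev python-numpy

# Build and install Biopython from the source checkout
python setup.py build
sudo python setup.py install

# Run the test suite, skipping tests that need network access
cd Tests
python run_tests.py --offline
```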
The next step was a coverage tool. I chose Ned Batchelder's coverage.py, which worked nicely. All I had to do was replace the line "python run_tests.py --offline" with "coverage run run_tests.py --offline" and the coverage ran. I found the architecture of the tool a little strange: you don't specify any kind of output format. Instead, the tool writes its own binary data file (named with a leading '.' so it's hidden on Linux systems), and you run a separate command to generate output in various formats. But this worked well, and I soon had my report in a nice HTML format that I could look at in a browser.
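Concretely, the whole measurement comes down to two commands (htmlcov is coverage.py's default output directory, at least in the version I used):

```sh
# Run the suite under coverage.py; results accumulate in the hidden .coverage data file
coverage run run_tests.py --offline

# Generate a browsable HTML report (written to the htmlcov/ directory by default)
coverage html
```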
Coverage is nice, but I don't know that a single report is very useful. Comparing coverage between builds is where the true benefit lies. Are some tests no longer covering code that they were meant to? Were tests disabled for some reason? Why did the coverage percentage drop? Ideally, each time the code is built, a new coverage report would be generated. So, I went on to look at the Biopython build process.
Biopython uses Travis for its builds. Travis is a rather nice public continuous integration service that integrates with GitHub, where the Biopython source code lives. You provide a configuration file with a well-known name (.travis.yml) in your source tree, and Travis monitors your repository for changes. It was easy enough to incorporate the coverage tool, but Travis didn't provide a good mechanism for reporting. The simplest thing, I decided, was to use a script that automatically uploaded a file back to GitHub. I jumped through several more hoops to get this working - authentication, permissions - and today I find, first, that GitHub is deprecating file uploads and, second, that Travis is supporting a new artifact system. So I think most of that work is out the window.
Bother.
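For what it's worth, the Travis side of things really was the easy part. A .travis.yml along these lines might look roughly like the sketch below; this is a simplified illustration, not Biopython's actual configuration, and the dependency list in particular is trimmed down.

```yaml
language: python
python:
  - "2.7"
install:
  - pip install numpy coverage
  - python setup.py install
script:
  # Run the test suite under coverage.py, skipping network-dependent tests
  - cd Tests && coverage run run_tests.py --offline
after_success:
  - coverage report
```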
So I'll have to rework some of that. Nonetheless, the coverage effort gave me some interesting results. Biopython consists of 298 source files comprising 39,805 statements. Automated tests covered all but 12,499 of those statements, for a coverage rate of 69%. 33 files (11%) had no coverage at all, while 60 files (20%) were fully covered. Based on these numbers, Biopython currently has a nearly acceptable level of coverage; it is a truism in the software development world that pushing coverage above roughly 70% tends to yield diminishing returns.
However, as a code library, Biopython is free from the problems of automated user interface testing, one of the more difficult areas of test automation, so one might expect a somewhat higher coverage rate than this. There is reason to believe the percentage will rise over time, though. An examination of some of the source files with zero coverage reveals deprecation warnings, i.e. indications to the user that the module in question should not be used for new development. Presumably these files will eventually be removed from Biopython, which will drive the coverage percentage upward.
The introduction of a code coverage measuring tool to the Biopython build process is an important step, but only a first one. The coverage tool selected is particular to Python, while a certain amount of the Biopython source code is written in C, a much more difficult language to instrument for automated unit testing, let alone for coverage reporting on that testing; still, useful work could be done in adding such coverage. The techniques used here only count statements; branch coverage might be a valuable addition (coverage.py can measure it via its --branch option). Biopython also has many dependencies on third-party packages, some of which are installed during unit testing and some of which are not; tests around these integration points might prove useful. Finally, some work could be done on analyzing differences between builds - sending out alerts to interested parties if a given build shows a sudden drop in the coverage percentage, for example.
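As a taste of what that last idea might look like, here is a minimal sketch in Python. It assumes each build appends its overall coverage percentage to a plain-text history file; the file name, the threshold, and the "alert" (just failing the build) are all placeholders.

```python
import sys

HISTORY_FILE = "coverage_history.txt"  # hypothetical: one percentage per line, one line per build
MAX_DROP = 2.0                         # alert on a drop of more than two percentage points

def check_coverage_drop():
    with open(HISTORY_FILE) as f:
        percentages = [float(line) for line in f if line.strip()]
    if len(percentages) < 2:
        return  # nothing to compare against yet
    previous, current = percentages[-2], percentages[-1]
    drop = previous - current
    if drop > MAX_DROP:
        # A real system would email interested parties; printing a message
        # and failing the build is the simplest possible alert
        print("Coverage dropped %.1f points (from %.1f%% to %.1f%%)" % (drop, previous, current))
        sys.exit(1)

if __name__ == "__main__":
    check_coverage_drop()
```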
Code coverage is a useful tool for long-running projects. Hopefully some of this effort will make it into the main Biopython codebase!