run-tests: collect aggregate code coverage
Before this patch, every Python process during a code coverage run was
writing coverage data to the same file. I'm not sure whether the coverage
package even tries to obtain a lock on that file. What I do know is that
writes were clobbering each other (last write wins), leading to loss of
code coverage data, at least with -j > 1.
This patch changes the code coverage mechanism to be multi-process safe.
The code coverage initialization in sitecustomize.py has been tweaked so
that each Python process produces a separate coverage data file on disk.
Unless two processes generate the same random value, there are no race
conditions from writing to a shared file. At the end of the test run, we
combine all the written files into an aggregate report.
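A minimal sketch of that per-process initialization, assuming the
coverage output directory is handed down through an environment variable
(COVERAGE_DIR and the cov.* naming are illustrative, not necessarily what
run-tests.py actually uses):

    # sitecustomize.py (sketch): start coverage with a unique data file
    # per Python process so concurrent writers never collide on disk.
    import os

    if os.environ.get('COVERAGE_PROCESS_START'):
        try:
            import coverage
            import uuid

            # hypothetical directory variable and file prefix
            covpath = os.path.join(os.environ['COVERAGE_DIR'],
                                   'cov.%s' % uuid.uuid1())
            cov = coverage.Coverage(data_file=covpath, auto_data=True)
            cov.start()
        except ImportError:
            pass

With auto_data=True, coverage writes its data file automatically when the
process exits, so the child process needs no explicit save call.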
On my machine, running the full test suite produces a little over
20,000 coverage files consuming ~350 MB. As you can imagine, it takes
several seconds to load and merge these coverage files. But when it is
done, you have an accurate picture of the aggregate code coverage for the
entire test suite, which is ~60% line coverage.
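For illustration, merging the per-process files with the coverage.py API
could look roughly like this (again using the hypothetical COVERAGE_DIR
and cov.* names, and assuming coverage.py >= 4.0 so combine() accepts
explicit paths):

    # Sketch of the aggregation step described above: load every
    # per-process data file, merge them, and print a combined report.
    import glob
    import os

    import coverage

    covdir = os.environ['COVERAGE_DIR']            # hypothetical name
    cov = coverage.Coverage(data_file=os.path.join(covdir, 'cov'))
    cov.combine(glob.glob(os.path.join(covdir, 'cov.*')))
    cov.save()                      # persist the merged data file
    cov.report(ignore_errors=True)  # aggregate line coverage summary

This mirrors what the "coverage combine" and "coverage report" commands
do from the command line.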
#require serve
$ cat <<EOF >> $HGRCPATH
> [extensions]
> schemes=
>
> [schemes]
> l = http://localhost:$HGPORT/
> parts = http://{1}:$HGPORT/
> z = file:\$PWD/
> EOF
$ hg init test
$ cd test
$ echo a > a
$ hg ci -Am initial
adding a
invalid scheme
$ hg log -R z:z
abort: no '://' in scheme url 'z:z'
[255]
http scheme
$ hg serve -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
$ cat hg.pid >> $DAEMON_PIDS
$ hg incoming l://
comparing with l://
searching for changes
no changes found
[1]
check that {1} syntax works
$ hg incoming --debug parts://localhost
using http://localhost:$HGPORT/
sending capabilities command
comparing with parts://localhost/
query 1; heads
sending batch command
searching for changes
all remote heads known locally
no changes found
[1]
check that paths are expanded
$ PWD=`pwd` hg incoming z://
comparing with z://
searching for changes
no changes found
[1]
errors
$ cat errors.log
$ cd ..