view tests/test-run-tests.t @ 23787:678f53865c68
revset: use localrepo revbranchcache for branch name filtering
Branch name filtering in revsets was expensive. For every rev it created a
changectx and called .branch(), which retrieved the branch name from the
changelog.
Instead, use the revbranchcache.
The revbranchcache is used read-only. The revset implementation with generators
and callbacks makes it hard to figure out when we are done using/updating the
cache and could write it back. It would also be 'tricky' to lock the repo for
writing from within a revset execution. Finally, the branchmap update will
usually make sure that the cache is updated before any revset can be run.
The revbranchcache is used without any locking, but it is short-lived and used
in a tight loop where we can assume that the changelog doesn't change ... or
where it is not relevant to us if it does.
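As an illustration of the approach (a minimal sketch, assuming the
revbranchcache API of this era, where repo.revbranchcache() returns an object
whose branchinfo(rev) gives a (branchname, closesbranch) pair; this is not the
changeset's exact code):

    def branchrevs(repo, subset, name):
        # Read-only use of the rev->branch cache: nothing is written back here.
        rbc = repo.revbranchcache()
        # branchinfo(rev) returns (branchname, closesbranch); compare the name
        # only, without constructing a changectx per revision.
        return [r for r in subset if rbc.branchinfo(r)[0] == name]

In the real revset code the subset is a smartset and is filtered lazily; the
sketch only shows where the cache lookup replaces the per-revision changectx.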
perfrevset 'branch(mobile)' on mozilla-central.
Before:
! wall 10.989637 comb 10.970000 user 10.940000 sys 0.030000 (best of 3)
After, no cache:
! wall 7.368656 comb 7.370000 user 7.360000 sys 0.010000 (best of 3)
After, with cache:
! wall 0.528098 comb 0.530000 user 0.530000 sys 0.000000 (best of 18)
The performance improvement even without the cache comes from being based on
branchinfo on the changelog instead of using ctx.branch().
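For a concrete picture of the three lookups involved (a hedged sketch; the
names are the era's assumed APIs, not lines quoted from this patch):

    def branch_of(repo, rev):
        # Slow path: build a changectx for the revision, then ask it.
        via_ctx = repo[rev].branch()
        # Uncached fast path: read the branch straight off the changelog entry.
        via_changelog = repo.changelog.branchinfo(rev)[0]
        # Cached path: the revbranchcache keeps a compact rev -> branch mapping.
        via_cache = repo.revbranchcache().branchinfo(rev)[0]
        assert via_ctx == via_changelog == via_cache
        return via_cache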
Some tests are added to verify that the revbranchcache works and to keep an eye
on when the cache files actually are updated.
author   | Mads Kiilerich <madski@unity3d.com>
date     | Thu, 08 Jan 2015 00:01:03 +0100
parents  | 31d3f973d079
children | d64dd1252386
line source
This file tests the behavior of run-tests.py itself.

Smoke test
============

  $ $TESTDIR/run-tests.py
  # Ran 0 tests, 0 skipped, 0 warned, 0 failed.

a succesful test
=======================

  $ cat > test-success.t << EOF
  >   $ echo babar
  >   babar
  >   $ echo xyzzy
  >   xyzzy
  > EOF

  $ $TESTDIR/run-tests.py --with-hg=`which hg`
  .
  # Ran 1 tests, 0 skipped, 0 warned, 0 failed.

failing test
==================

  $ cat > test-failure.t << EOF
  >   $ echo babar
  >   rataxes
  > This is a noop statement so that
  > this test is still more bytes than success.
  > EOF

  $ $TESTDIR/run-tests.py --with-hg=`which hg`
  --- $TESTTMP/test-failure.t
  +++ $TESTTMP/test-failure.t.err
  @@ -1,4 +1,4 @@
     $ echo babar
  -  rataxes
  +  babar
   This is a noop statement so that
   this test is still more bytes than success.
  ERROR: test-failure.t output changed
  !.
  Failed test-failure.t: output changed
  # Ran 2 tests, 0 skipped, 0 warned, 1 failed.
  python hash seed: * (glob)
  [1]

test --xunit support

  $ $TESTDIR/run-tests.py --with-hg=`which hg` --xunit=xunit.xml
  --- $TESTTMP/test-failure.t
  +++ $TESTTMP/test-failure.t.err
  @@ -1,4 +1,4 @@
     $ echo babar
  -  rataxes
  +  babar
   This is a noop statement so that
   this test is still more bytes than success.
  ERROR: test-failure.t output changed
  !.
  Failed test-failure.t: output changed
  # Ran 2 tests, 0 skipped, 0 warned, 1 failed.
  python hash seed: * (glob)
  [1]
  $ cat xunit.xml
  <?xml version="1.0" encoding="utf-8"?>
  <testsuite errors="0" failures="1" name="run-tests" skipped="0" tests="2">
  <testcase name="test-success.t" time="*"/> (glob)
  <testcase name="test-failure.t" time="*"> (glob)
  <![CDATA[--- $TESTTMP/test-failure.t
  +++ $TESTTMP/test-failure.t.err
  @@ -1,4 +1,4 @@
     $ echo babar
  -  rataxes
  +  babar
   This is a noop statement so that
   this test is still more bytes than success.
  ]]>  </testcase>
  </testsuite>

test for --retest
====================

  $ $TESTDIR/run-tests.py --with-hg=`which hg` --retest
  --- $TESTTMP/test-failure.t
  +++ $TESTTMP/test-failure.t.err
  @@ -1,4 +1,4 @@
     $ echo babar
  -  rataxes
  +  babar
   This is a noop statement so that
   this test is still more bytes than success.
  ERROR: test-failure.t output changed
  !
  Failed test-failure.t: output changed
  # Ran 2 tests, 1 skipped, 0 warned, 1 failed.
  python hash seed: * (glob)
  [1]

Selecting Tests To Run
======================

successful

  $ $TESTDIR/run-tests.py --with-hg=`which hg` test-success.t
  .
  # Ran 1 tests, 0 skipped, 0 warned, 0 failed.

success w/ keyword

  $ $TESTDIR/run-tests.py --with-hg=`which hg` -k xyzzy
  .
  # Ran 2 tests, 1 skipped, 0 warned, 0 failed.

failed

  $ $TESTDIR/run-tests.py --with-hg=`which hg` test-failure.t
  --- $TESTTMP/test-failure.t
  +++ $TESTTMP/test-failure.t.err
  @@ -1,4 +1,4 @@
     $ echo babar
  -  rataxes
  +  babar
   This is a noop statement so that
   this test is still more bytes than success.
  ERROR: test-failure.t output changed
  !
  Failed test-failure.t: output changed
  # Ran 1 tests, 0 skipped, 0 warned, 1 failed.
  python hash seed: * (glob)
  [1]

failure w/ keyword

  $ $TESTDIR/run-tests.py --with-hg=`which hg` -k rataxes
  --- $TESTTMP/test-failure.t
  +++ $TESTTMP/test-failure.t.err
  @@ -1,4 +1,4 @@
     $ echo babar
  -  rataxes
  +  babar
   This is a noop statement so that
   this test is still more bytes than success.
  ERROR: test-failure.t output changed
  !
  Failed test-failure.t: output changed
  # Ran 2 tests, 1 skipped, 0 warned, 1 failed.
  python hash seed: * (glob)
  [1]

Verify that when a process fails to start we show a useful message
==================================================================

NOTE: there is currently a bug where this shows "2 failed" even though it's
actually the same test being reported for failure twice.

  $ cat > test-serve-fail.t <<EOF
  >   $ echo 'abort: child process failed to start blah'
  > EOF
  $ $TESTDIR/run-tests.py --with-hg=`which hg` test-serve-fail.t
  ERROR: test-serve-fail.t output changed
  !
  ERROR: test-serve-fail.t output changed
  !
  Failed test-serve-fail.t: server failed to start (HGPORT=*) (glob)
  Failed test-serve-fail.t: output changed
  # Ran 1 tests, 0 skipped, 0 warned, 2 failed.
  python hash seed: * (glob)
  [1]
  $ rm test-serve-fail.t

Running In Debug Mode
======================

  $ $TESTDIR/run-tests.py --with-hg=`which hg` --debug 2>&1 | grep -v pwd
  + echo *SALT* 0 0 (glob)
  *SALT* 0 0 (glob)
  + echo babar
  babar
  + echo *SALT* 4 0 (glob)
  *SALT* 4 0 (glob)
  .+ echo *SALT* 0 0 (glob)
  *SALT* 0 0 (glob)
  + echo babar
  babar
  + echo *SALT* 2 0 (glob)
  *SALT* 2 0 (glob)
  + echo xyzzy
  xyzzy
  + echo *SALT* 4 0 (glob)
  *SALT* 4 0 (glob)
  .
  # Ran 2 tests, 0 skipped, 0 warned, 0 failed.

Parallel runs
==============

(duplicate the failing test to get predictable output)
  $ cp test-failure.t test-failure-copy.t

  $ $TESTDIR/run-tests.py --with-hg=`which hg` --jobs 2 test-failure*.t -n
  !!
  Failed test-failure*.t: output changed (glob)
  Failed test-failure*.t: output changed (glob)
  # Ran 2 tests, 0 skipped, 0 warned, 2 failed.
  python hash seed: * (glob)
  [1]

failures in parallel with --first should only print one failure
  >>> f = open('test-nothing.t', 'w')
  >>> f.write('foo\n' * 1024)
  >>> f.write(' $ sleep 1')

  $ $TESTDIR/run-tests.py --with-hg=`which hg` --jobs 2 --first
  --- $TESTTMP/test-failure*.t (glob)
  +++ $TESTTMP/test-failure*.t.err (glob)
  @@ -1,4 +1,4 @@
     $ echo babar
  -  rataxes
  +  babar
   This is a noop statement so that
   this test is still more bytes than success.
  Failed test-failure*.t: output changed (glob)
  # Ran 2 tests, 0 skipped, 0 warned, 1 failed.
  python hash seed: * (glob)
  [1]

(delete the duplicated test file)
  $ rm test-failure-copy.t test-nothing.t

Interactive run
===============

(backup the failing test)
  $ cp test-failure.t backup

Refuse the fix

  $ echo 'n' | $TESTDIR/run-tests.py --with-hg=`which hg` -i
  --- $TESTTMP/test-failure.t
  +++ $TESTTMP/test-failure.t.err
  @@ -1,4 +1,4 @@
     $ echo babar
  -  rataxes
  +  babar
   This is a noop statement so that
   this test is still more bytes than success.
  Accept this change? [n] ERROR: test-failure.t output changed
  !.
  Failed test-failure.t: output changed
  # Ran 2 tests, 0 skipped, 0 warned, 1 failed.
  python hash seed: * (glob)
  [1]
  $ cat test-failure.t
    $ echo babar
    rataxes
  This is a noop statement so that
  this test is still more bytes than success.

Interactive with custom view

  $ echo 'n' | $TESTDIR/run-tests.py --with-hg=`which hg` -i --view echo
  $TESTTMP/test-failure.t $TESTTMP/test-failure.t.err (glob)
  Accept this change? [n]* (glob)
  ERROR: test-failure.t output changed
  !.
  Failed test-failure.t: output changed
  # Ran 2 tests, 0 skipped, 0 warned, 1 failed.
  python hash seed: * (glob)
  [1]

View the fix

  $ echo 'y' | $TESTDIR/run-tests.py --with-hg=`which hg` --view echo
  $TESTTMP/test-failure.t $TESTTMP/test-failure.t.err (glob)
  ERROR: test-failure.t output changed
  !.
  Failed test-failure.t: output changed
  # Ran 2 tests, 0 skipped, 0 warned, 1 failed.
  python hash seed: * (glob)
  [1]

Accept the fix

  $ echo "  $ echo 'saved backup bundle to \$TESTTMP/foo.hg'" >> test-failure.t
  $ echo "  saved backup bundle to \$TESTTMP/foo.hg" >> test-failure.t
  $ echo "  $ echo 'saved backup bundle to \$TESTTMP/foo.hg'" >> test-failure.t
  $ echo "  saved backup bundle to \$TESTTMP/foo.hg (glob)" >> test-failure.t
  $ echo "  $ echo 'saved backup bundle to \$TESTTMP/foo.hg'" >> test-failure.t
  $ echo "  saved backup bundle to \$TESTTMP/*.hg (glob)" >> test-failure.t
  $ echo 'y' | $TESTDIR/run-tests.py --with-hg=`which hg` -i 2>&1 | \
  >   sed -e 's,(glob)$,&<,g'
  --- $TESTTMP/test-failure.t
  +++ $TESTTMP/test-failure.t.err
  @@ -1,9 +1,9 @@
     $ echo babar
  -  rataxes
  +  babar
   This is a noop statement so that
   this test is still more bytes than success.
     $ echo 'saved backup bundle to $TESTTMP/foo.hg'
  -  saved backup bundle to $TESTTMP/foo.hg
  +  saved backup bundle to $TESTTMP/foo.hg (glob)<
     $ echo 'saved backup bundle to $TESTTMP/foo.hg'
     saved backup bundle to $TESTTMP/foo.hg (glob)<
     $ echo 'saved backup bundle to $TESTTMP/foo.hg'
  Accept this change? [n] ..
  # Ran 2 tests, 0 skipped, 0 warned, 0 failed.

  $ sed -e 's,(glob)$,&<,g' test-failure.t
    $ echo babar
    babar
  This is a noop statement so that
  this test is still more bytes than success.
    $ echo 'saved backup bundle to $TESTTMP/foo.hg'
    saved backup bundle to $TESTTMP/foo.hg (glob)<
    $ echo 'saved backup bundle to $TESTTMP/foo.hg'
    saved backup bundle to $TESTTMP/foo.hg (glob)<
    $ echo 'saved backup bundle to $TESTTMP/foo.hg'
    saved backup bundle to $TESTTMP/*.hg (glob)<

(reinstall)
  $ mv backup test-failure.t

No Diff
===============

  $ $TESTDIR/run-tests.py --with-hg=`which hg` --nodiff
  !.
  Failed test-failure.t: output changed
  # Ran 2 tests, 0 skipped, 0 warned, 1 failed.
  python hash seed: * (glob)
  [1]

test for --time
==================

  $ $TESTDIR/run-tests.py --with-hg=`which hg` test-success.t --time
  .
  # Ran 1 tests, 0 skipped, 0 warned, 0 failed.
  # Producing time report
  cuser   csys    real     Test
  \s*[\d\.]{5}   \s*[\d\.]{5}   \s*[\d\.]{5}   test-success.t (re)

test for --time with --job enabled
====================================

  $ $TESTDIR/run-tests.py --with-hg=`which hg` test-success.t --time --jobs 2
  .
  # Ran 1 tests, 0 skipped, 0 warned, 0 failed.
  # Producing time report
  cuser   csys    real     Test
  \s*[\d\.]{5}   \s*[\d\.]{5}   \s*[\d\.]{5}   test-success.t (re)

Skips
================
  $ cat > test-skip.t <<EOF
  >   $ echo xyzzy
  > #require false
  > EOF
  $ $TESTDIR/run-tests.py --with-hg=`which hg` --nodiff
  !.s
  Skipped test-skip.t: skipped
  Failed test-failure.t: output changed
  # Ran 2 tests, 1 skipped, 0 warned, 1 failed.
  python hash seed: * (glob)
  [1]
  $ $TESTDIR/run-tests.py --with-hg=`which hg` --keyword xyzzy
  .s
  Skipped test-skip.t: skipped
  # Ran 2 tests, 2 skipped, 0 warned, 0 failed.

Skips with xml
  $ $TESTDIR/run-tests.py --with-hg=`which hg` --keyword xyzzy \
  >  --xunit=xunit.xml
  .s
  Skipped test-skip.t: skipped
  # Ran 2 tests, 2 skipped, 0 warned, 0 failed.
  $ cat xunit.xml
  <?xml version="1.0" encoding="utf-8"?>
  <testsuite errors="0" failures="0" name="run-tests" skipped="2" tests="2">
  <testcase name="test-success.t" time="*"/> (glob)
  </testsuite>

Missing skips or blacklisted skips don't count as executed:
  $ echo test-failure.t > blacklist
  $ $TESTDIR/run-tests.py --with-hg=`which hg` --blacklist=blacklist \
  >   test-failure.t test-bogus.t
  ss
  Skipped test-bogus.t: Doesn't exist
  Skipped test-failure.t: blacklisted
  # Ran 0 tests, 2 skipped, 0 warned, 0 failed.

#if json

test for --json
==================

  $ $TESTDIR/run-tests.py --with-hg=`which hg` --json
  --- $TESTTMP/test-failure.t
  +++ $TESTTMP/test-failure.t.err
  @@ -1,4 +1,4 @@
     $ echo babar
  -  rataxes
  +  babar
   This is a noop statement so that
   this test is still more bytes than success.
  ERROR: test-failure.t output changed
  !.s
  Skipped test-skip.t: skipped
  Failed test-failure.t: output changed
  # Ran 2 tests, 1 skipped, 0 warned, 1 failed.
  python hash seed: * (glob)
  [1]

  $ cat report.json
  testreport ={
      "test-failure.t": [\{] (re)
          "csys": "\s*[\d\.]{4,5}", ? (re)
          "cuser": "\s*[\d\.]{4,5}", ? (re)
          "result": "failure", ? (re)
          "time": "\s*[\d\.]{4,5}" (re)
      }, ? (re)
      "test-skip.t": {
          "csys": "\s*[\d\.]{4,5}", ? (re)
          "cuser": "\s*[\d\.]{4,5}", ? (re)
          "result": "skip", ? (re)
          "time": "\s*[\d\.]{4,5}" (re)
      }, ? (re)
      "test-success.t": [\{] (re)
          "csys": "\s*[\d\.]{4,5}", ? (re)
          "cuser": "\s*[\d\.]{4,5}", ? (re)
          "result": "success", ? (re)
          "time": "\s*[\d\.]{4,5}" (re)
      }
  } (no-eol)
#endif