view tests/test-schemes.t @ 20275:2123d27ff75d
backout: avoid update on simple case.
Before this changeset, the backout process was (sketched as commands below):
1) go to <target>
2) revert to <target>'s parent
3) update back to the changeset we came from
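In command-line terms the old sequence corresponds roughly to the following (an illustration only, not the actual implementation; REV stands for <target> and ORIG for the changeset the working directory started on):

  $ hg update REV                    # 1) go to <target>
  $ hg revert --all -r 'p1(REV)'     # 2) revert the files to <target>'s parent
  $ hg commit -m "Backed out REV"    #    record the backout changeset
  $ hg update ORIG                   # 3) update back to where we started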
The two update steps can take a very long time, because they move unrelated file changes back and forth between <target> and the current working directory.
The new process simply merges the current working directory with the parent of
<target>, using <target> as the ancestor. This gives the very same result but skips
the two updates. On a big repository with many files and changes this saves a lot of
time (20x for a one-week window).
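Put differently, the simple case is now a single three-way merge with base = <target>, local = the working directory and other = <target>'s parent; the only difference between other and base is the reversal of <target>, so the merge touches exactly the files <target> changed and nothing else. A throw-away session illustrating the user-visible behaviour (file names and messages are invented, expected output is omitted):

  $ hg init backout-demo && cd backout-demo
  $ echo v1 > unrelated
  $ echo v1 > touched
  $ hg commit -Am 'rev 0: base'
  $ echo v2 > touched
  $ hg commit -m 'rev 1: change to back out'
  $ echo v2 > unrelated
  $ hg commit -m 'rev 2: later unrelated work'
  $ hg backout -r 1 -m 'back out rev 1'

Only "touched" is affected by the backout; "unrelated" is left exactly as rev 2 made it.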
The "merge" version (hg backout --merge) is still done with updates. We could
imagine using an in-memory commit to speed it up, but that is a different kettle of fish.
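For reference, that flavour is selected with the --merge flag; schematically (REV is again a placeholder, and the resulting merge still has to be committed by the user):

  $ hg backout --merge -r REV        # commit the backout, then merge it into the working copy
  $ hg commit -m 'merge backout of REV'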
author    Pierre-Yves David <pierre-yves.david@fb.com>
date      Wed, 08 Jan 2014 14:53:46 -0800
parents   b52404a914a9
children  7a9cbb315d84
line source
  $ "$TESTDIR/hghave" serve || exit 80

  $ cat <<EOF >> $HGRCPATH
  > [extensions]
  > schemes=
  >
  > [schemes]
  > l = http://localhost:$HGPORT/
  > parts = http://{1}:$HGPORT/
  > z = file:\$PWD/
  > EOF
  $ hg init test
  $ cd test
  $ echo a > a
  $ hg ci -Am initial
  adding a

invalid scheme

  $ hg log -R z:z
  abort: no '://' in scheme url 'z:z'
  [255]

http scheme

  $ hg serve -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
  $ cat hg.pid >> $DAEMON_PIDS
  $ hg incoming l://
  comparing with l://
  searching for changes
  no changes found
  [1]

check that {1} syntax works

  $ hg incoming --debug parts://localhost
  using http://localhost:$HGPORT/
  sending capabilities command
  comparing with parts://localhost/
  query 1; heads
  sending batch command
  searching for changes
  all remote heads known locally
  no changes found
  [1]

check that paths are expanded

  $ PWD=`pwd` hg incoming z://
  comparing with z://
  searching for changes
  no changes found
  [1]

errors

  $ cat errors.log

  $ cd ..
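For context, outside the test harness the schemes extension works the same way: it rewrites the URL prefix before Mercurial contacts the remote, so any command that takes a URL accepts a scheme alias. A minimal sketch with an invented alias and host:

  $ cat >> ~/.hgrc <<EOF
  > [extensions]
  > schemes =
  >
  > [schemes]
  > work = https://hg.example.com/
  > EOF
  $ hg clone work://someproject      # expands to https://hg.example.com/someproject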