findrenames: Optimise "addremove -s100" by matching files by their SHA1 hashes.
We speed up 'findrenames' for the use case where the user requests a
similarity of 100% by matching files on their exact SHA1 hashes. This
reduces the number of comparisons required to find exact matches from
O(n^2) to O(n).
While it would be nice if we could simply reuse Mercurial's
pre-calculated SHA1 hash for existing files, that hash also covers the
file's ancestor information, making it unsuitable for our purposes.
Instead, we calculate the hash of the old content from scratch.
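
To illustrate the idea (this is only a sketch, not the actual change;
the helper names below are invented for the example), the exact-match
pass can bucket removed files by the SHA1 of their content and then
look up each added file with a single hash probe:

import hashlib

def _sha1(data):
    # Hash the raw file content only; the nodeid Mercurial stores also
    # covers the file's parents, so it cannot be reused for this.
    return hashlib.sha1(data).digest()

def find_exact_matches(removed, added):
    # 'removed' and 'added' map file names to their content (bytes).
    # Build a table of removed files keyed by content hash (one pass),
    # then probe it once per added file: O(n) comparisons instead of
    # checking every removed/added pair, which is O(n^2).
    byhash = {}
    for name, data in removed.items():
        byhash.setdefault(_sha1(data), []).append(name)
    for name, data in added.items():
        for old in byhash.get(_sha1(data), []):
            yield old, name

With a threshold below 100%, files that are not identical still have to
go through the ordinary pairwise similarity comparison, so the gain
there is smaller (see the 75% benchmark below).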
The following benchmarks were taken on the current head of crew:
addremove 100% similarity:
rm -rf *; hg up -C; mv tests tests.new
hg --time addremove -s100 --dry-run
before: real 176.350 secs (user 128.890+0.000 sys 47.430+0.000)
after: real 2.130 secs (user 1.890+0.000 sys 0.240+0.000)
addremove 75% similarity:
rm -rf *; hg up -C; mv tests tests.new; \
for i in tests.new/*; do echo x >> $i; done
hg --time addremove -s75 --dry-run
before: real 264.560 secs (user 215.130+0.000 sys 49.410+0.000)
after: real 218.710 secs (user 172.790+0.000 sys 45.870+0.000)
#!/bin/sh
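# Exercise 'hg pull' (and 'hg clone --pull') over HTTP and via file: URLs.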
mkdir test
cd test
echo foo>foo
hg init
hg addremove
hg commit -m 1
hg verify
hg serve -p $HGPORT -d --pid-file=hg.pid
cat hg.pid >> $DAEMON_PIDS
cd ..
hg clone --pull http://foo:bar@localhost:$HGPORT/ copy | sed -e "s,:$HGPORT/,:\$HGPORT/,"
cd copy
hg verify
hg co
cat foo
hg manifest --debug
hg pull | sed -e "s,:$HGPORT/,:\$HGPORT/,"
hg rollback --dry-run --verbose | sed -e "s,:$HGPORT/,:\$HGPORT/,"
echo % issue 622
cd ..
hg init empty
cd empty
hg pull -u ../test
echo % test file: uri handling
hg pull -q file://../test-doesnt-exist 2>&1 \
| sed 's%abort: repository.*/test-doesnt-exist%abort: repository /test-doesnt-exist%'
hg pull -q file:../test
# It's tricky to make file:// URLs work on every platform
# with regular shell commands.
URL=`python -c "import os; print 'file://foobar' + ('/' + os.getcwd().replace(os.sep, '/')).replace('//', '/') + '/../test'"`
hg pull -q "$URL"