view mercurial/similar.py @ 21424:d13b4ecdb680
tests: split test-largefiles.t into multiple files
The `test-largefiles.t` unified test is significantly longer (by about 30%) than
any other test in the Mercurial test suite. As a result, it is always the last
test my test runner is waiting for at the end of a run.
In practice, this means that `test-largefiles.t` wastes half a minute of my
life every time I run the Mercurial test suite. That probably amounts to more
than a few cumulated days by now.
I've finally decided to split it up into multiple smaller tests to bring it
back to a reasonable length.
This changeset extracts independent test cases into two new files: one
dedicated to wire protocol testing, and another dedicated to all other tests
that could be independently extracted.
No test cases were altered in the making of this changeset.
Various timings are available below. All timings were done with 90 jobs on a
64-core machine. Similar results are seen on firefly (20 jobs on 12 cores).
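
For reference, the measurements below correspond to an invocation along these
lines (a sketch; the exact flags used are an assumption, but run-tests.py's
-j/--jobs option controls parallelism and --time prints per-test durations):

  $ python run-tests.py -j 90 --time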
General timing of the whole run
--------------------------------
We see a 25% improvement in real time with no significant impact on CPU time.
Before split:
real 2m1.149s
user 58m4.662s
sys 11m28.563s
After split:
real 1m31.977s
user 57m45.993s
sys 11m33.634s
Last tests to finish (using run-tests.py --time)
----------------------------------------------
test-largefiles.t now finishes at around the same time as the other slow tests.
Before split:
Time Test
119.280 test-largefiles.t
93.995 test-mq.t
89.897 test-subrepo.t
86.920 test-glog.t
85.508 test-rename-merge2.t
83.594 test-revset.t
79.824 test-keyword.t
78.077 test-mq-header-date.t
After split:
Time Test
90.414 test-mq.t
88.594 test-largefiles.t
85.363 test-subrepo.t
81.059 test-glog.t
78.927 test-rename-merge2.t
78.021 test-revset.t
77.777 test-command-template.t
Timing of the largefiles tests themselves
-----------------------------------
Running only the tests prefixed with "test-largefiles".
There is no significant change in the cumulated time.
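
A sketch of that subset run (assuming shell globbing over the test file names,
which run-tests.py accepts as positional arguments):

  $ python run-tests.py --time test-largefiles*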
Before:
Time Test
58.673 test-largefiles.t
2.931 test-largefiles-cache.t
0.583 test-largefiles-small-disk.t
After:
Time Test
31.754 test-largefiles.t
17.460 test-largefiles-misc.t
8.888 test-largefiles-wireproto.t
2.864 test-largefiles-cache.t
0.580 test-largefiles-small-disk.t
author    Pierre-Yves David <pierre-yves.david@fb.com>
date      Fri, 16 May 2014 13:18:57 -0700
parents   525fdb738975
children  a56c47ed3885
# similar.py - mechanisms for finding similar files
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from i18n import _
import util
import mdiff
import bdiff

def _findexactmatches(repo, added, removed):
    '''find renamed files that have no changes

    Takes a list of new filectxs and a list of removed filectxs, and yields
    (before, after) tuples of exact matches.
    '''
    numfiles = len(added) + len(removed)

    # Get hashes of removed files.
    hashes = {}
    for i, fctx in enumerate(removed):
        repo.ui.progress(_('searching for exact renames'), i, total=numfiles)
        h = util.sha1(fctx.data()).digest()
        hashes[h] = fctx

    # For each added file, see if it corresponds to a removed file.
    for i, fctx in enumerate(added):
        repo.ui.progress(_('searching for exact renames'), i + len(removed),
                         total=numfiles)
        h = util.sha1(fctx.data()).digest()
        if h in hashes:
            yield (hashes[h], fctx)

    # Done
    repo.ui.progress(_('searching for exact renames'), None)

def _findsimilarmatches(repo, added, removed, threshold):
    '''find potentially renamed files based on similar file content

    Takes a list of new filectxs and a list of removed filectxs, and yields
    (before, after, score) tuples of partial matches.
    '''
    copies = {}
    for i, r in enumerate(removed):
        repo.ui.progress(_('searching for similar files'), i,
                         total=len(removed))

        # lazily load text
        @util.cachefunc
        def data():
            orig = r.data()
            return orig, mdiff.splitnewlines(orig)

        def score(text):
            orig, lines = data()
            # bdiff.blocks() returns blocks of matching lines
            # count the number of bytes in each
            equal = 0
            matches = bdiff.blocks(text, orig)
            for x1, x2, y1, y2 in matches:
                for line in lines[y1:y2]:
                    equal += len(line)

            lengths = len(text) + len(orig)
            return equal * 2.0 / lengths

        for a in added:
            bestscore = copies.get(a, (None, threshold))[1]
            myscore = score(a.data())
            if myscore >= bestscore:
                copies[a] = (r, myscore)
    repo.ui.progress(_('searching'), None)

    for dest, v in copies.iteritems():
        source, score = v
        yield source, dest, score

def findrenames(repo, added, removed, threshold):
    '''find renamed files -- yields (before, after, score) tuples'''
    parentctx = repo['.']
    workingctx = repo[None]

    # Zero length files will be frequently unrelated to each other, and
    # tracking the deletion/addition of such a file will probably cause more
    # harm than good. We strip them out here to avoid matching them later on.
    addedfiles = set([workingctx[fp] for fp in added
                      if workingctx[fp].size() > 0])
    removedfiles = set([parentctx[fp] for fp in removed
                        if fp in parentctx and parentctx[fp].size() > 0])

    # Find exact matches.
    for (a, b) in _findexactmatches(repo,
            sorted(addedfiles), sorted(removedfiles)):
        addedfiles.remove(b)
        yield (a.path(), b.path(), 1.0)

    # If the user requested similar files to be matched, search for them also.
    if threshold < 1.0:
        for (a, b, score) in _findsimilarmatches(repo,
                sorted(addedfiles), sorted(removedfiles), threshold):
            yield (a.path(), b.path(), score)
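
To make the scoring formula concrete, here is a minimal, self-contained sketch
of the similarity measure computed by score() in _findsimilarmatches(). It uses
the stdlib difflib as a stand-in for Mercurial's C bdiff module (an assumption
made purely for illustration; bdiff.blocks() is much faster but rests on the
same matching-blocks idea):

    import difflib

    def similarity(text, orig):
        # Count the bytes of ``orig`` covered by lines that also appear in
        # ``text``, then normalize by the combined size, mirroring score()
        # above: 2.0 * equal / (len(text) + len(orig)).
        origlines = orig.splitlines(True)  # keep line endings, like
                                           # mdiff.splitnewlines()
        sm = difflib.SequenceMatcher(None, text.splitlines(True), origlines)
        equal = 0
        for _a, b, size in sm.get_matching_blocks():
            for line in origlines[b:b + size]:
                equal += len(line)
        return equal * 2.0 / (len(text) + len(orig))

    old = "alpha\nbravo\ncharlie\n"  # 20 bytes, 3 lines
    new = "alpha\nbravo\ndelta\n"    # 18 bytes, 3 lines
    print(similarity(new, old))      # 2 * 12 / (18 + 20) ~= 0.63

In findrenames() above, a pair is reported as a rename when its score reaches
the given threshold (copies.get(a, (None, threshold)) seeds the best score with
threshold itself), and the similarity pass is skipped entirely when threshold
is 1.0, i.e. when only exact matches were requested.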