view mercurial/similar.py @ 21933:8ecbe55fd09d stable
largefiles: invoke "normallookup" on "lfdirstate" for merged files
Before this patch, largefiles gotten from the "other" revision (with
conflicts) at "hg merge" unexpectedly become "clean" in the steps below:
1. "repo.status()" is invoked (for status check before merging)
1-1 "dirstate" entry for standinfile SF is "normal"-ed
1-2 "lfdirstate" entry of largefile LF (for SF) is "normal"-ed
2. "merge.update()" is invoked
2-1 SF is updated in the working directory
(ASSUMPTION: the user chooses "other" at the conflict)
2-2 "dirstate" entry for SF is "merge"-ed
3. "lfcommands.updatelfiles()" is invoked (by "overrides.hgmerge()")
3-1 largefile LF (for SF) is updated in the working directory
3-2 "dirstate" returns "m" for SF (by 2-2)
3-3 "lfdirstate" entry for LF is left as it is
3-4 "lfdirstate" is written into ".hg/largefiles/dirstate", and
timestamp of LF is stored into "lfdirstate" file (by 1-2)
(ASSUMPTION: the timestamp of LF differs from the one recorded in the "lfdirstate" file)
Then, "hs status" treats LF as "clean", even though LF is updated by
"other" revision (by 3-1), because "lfilesrepo.status()" always treats
"normal"-ed files (by 1-2 and 3-4) as "clean".
When the state of a standinfile in "dirstate" is "m", the largefile should be
"normallookup"-ed.
This patch invokes "normallookup" on "lfdirstate" for merged files.
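In code terms the fix boils down to choosing the "lfdirstate" call from the
state of the corresponding standin. The snippet below is only an illustrative
sketch of that idea, not the actual hunk from this changeset; the surrounding
loop in "lfcommands.updatelfiles()" and the local names (lfile, lfdirstate)
are assumed here:

    # Sketch only: record the updated largefile LF in lfdirstate according
    # to the dirstate state of its standin SF.
    standin = lfutil.standin(lfile)
    if repo.dirstate[standin] == 'm':
        # SF was merged (step 2-2): do not trust the fresh timestamp,
        # force a real content check on the next "hg status"
        lfdirstate.normallookup(lfile)
    else:
        lfdirstate.normal(lfile)

"normallookup" marks the entry so that the next status run compares the file
contents instead of relying on the timestamp that step 3-4 would otherwise
record.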
This patch uses the "[debug] dirstate.delaywrite" feature in the test to
ensure that the timestamp of the largefile gotten from the "other" revision is
stored into ".hg/largefiles/dirstate" (for the ASSUMPTION at 3-4).
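For reference, "dirstate.delaywrite" is a debug configuration intended for
tests: it makes dirstate writes wait so that the file timestamps just written
can be recorded reliably instead of being left ambiguous. In an hgrc it is set
roughly like this (the two-second value is only an example):

    [debug]
    dirstate.delaywrite = 2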
author    FUJIWARA Katsunori <foozy@lares.dti.ne.jp>
date      Wed, 23 Jul 2014 00:10:24 +0900
parents   525fdb738975
children  a56c47ed3885
line source
# similar.py - mechanisms for finding similar files
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from i18n import _
import util
import mdiff
import bdiff

def _findexactmatches(repo, added, removed):
    '''find renamed files that have no changes

    Takes a list of new filectxs and a list of removed filectxs, and yields
    (before, after) tuples of exact matches.
    '''
    numfiles = len(added) + len(removed)

    # Get hashes of removed files.
    hashes = {}
    for i, fctx in enumerate(removed):
        repo.ui.progress(_('searching for exact renames'), i, total=numfiles)
        h = util.sha1(fctx.data()).digest()
        hashes[h] = fctx

    # For each added file, see if it corresponds to a removed file.
    for i, fctx in enumerate(added):
        repo.ui.progress(_('searching for exact renames'), i + len(removed),
                         total=numfiles)
        h = util.sha1(fctx.data()).digest()
        if h in hashes:
            yield (hashes[h], fctx)

    # Done
    repo.ui.progress(_('searching for exact renames'), None)

def _findsimilarmatches(repo, added, removed, threshold):
    '''find potentially renamed files based on similar file content

    Takes a list of new filectxs and a list of removed filectxs, and yields
    (before, after, score) tuples of partial matches.
    '''
    copies = {}
    for i, r in enumerate(removed):
        repo.ui.progress(_('searching for similar files'), i,
                         total=len(removed))

        # lazily load text
        @util.cachefunc
        def data():
            orig = r.data()
            return orig, mdiff.splitnewlines(orig)

        def score(text):
            orig, lines = data()
            # bdiff.blocks() returns blocks of matching lines
            # count the number of bytes in each
            equal = 0
            matches = bdiff.blocks(text, orig)
            for x1, x2, y1, y2 in matches:
                for line in lines[y1:y2]:
                    equal += len(line)

            lengths = len(text) + len(orig)
            return equal * 2.0 / lengths

        for a in added:
            bestscore = copies.get(a, (None, threshold))[1]
            myscore = score(a.data())
            if myscore >= bestscore:
                copies[a] = (r, myscore)
    repo.ui.progress(_('searching'), None)

    for dest, v in copies.iteritems():
        source, score = v
        yield source, dest, score

def findrenames(repo, added, removed, threshold):
    '''find renamed files -- yields (before, after, score) tuples'''
    parentctx = repo['.']
    workingctx = repo[None]

    # Zero length files will be frequently unrelated to each other, and
    # tracking the deletion/addition of such a file will probably cause more
    # harm than good. We strip them out here to avoid matching them later on.
    addedfiles = set([workingctx[fp] for fp in added
                      if workingctx[fp].size() > 0])
    removedfiles = set([parentctx[fp] for fp in removed
                        if fp in parentctx and parentctx[fp].size() > 0])

    # Find exact matches.
    for (a, b) in _findexactmatches(repo,
            sorted(addedfiles), sorted(removedfiles)):
        addedfiles.remove(b)
        yield (a.path(), b.path(), 1.0)

    # If the user requested similar files to be matched, search for them also.
    if threshold < 1.0:
        for (a, b, score) in _findsimilarmatches(repo,
                sorted(addedfiles), sorted(removedfiles), threshold):
            yield (a.path(), b.path(), score)
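"hg addremove --similarity" relies on this module. The score computed by
_findsimilarmatches() is twice the number of bytes on matching lines divided
by the combined size of the two files; for example, two 100-byte files that
share 80 bytes of identical lines score 2 * 80 / 200 = 0.8. The helper below
is a hypothetical sketch of how a caller might drive findrenames(); it assumes
the functions above are in scope and that "added" and "removed" are lists of
paths from an earlier status computation:

    def guessrenames(repo, added, removed, similarity=75):
        # 'similarity' is a percentage, as on the "hg addremove -s" command
        # line; findrenames() expects a threshold between 0.0 and 1.0.
        threshold = float(similarity) / 100.0
        renames = {}
        for old, new, score in findrenames(repo, added, removed, threshold):
            # findrenames() yields (source path, destination path, score);
            # remember the candidate reported for each destination.
            renames[new] = (old, score)
        return renames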