view mercurial/similar.py @ 39018:e793e11e1462

changegroup: introduce requests to define delta generation

Currently, we iterate through each revision we will be producing a delta for, then call into one of two functions to generate that delta. Deltas are emitted as we iterate.

A problem with this model is that delta generation is tightly coupled to the changegroup code, and the storage layer needs to expose APIs like deltaparent() so changegroup delta generation can produce a delta with that knowledge.

Another problem is that deltas can only be produced sequentially, each one after the previous delta has been produced and emitted. Some storage backends might be capable of producing deltas in parallel (e.g. if the changegroup deltas are cached somewhere).

This commit aims to solve these problems by turning delta generation into a two-phase implementation: the first phase determines info about all the deltas that need to be generated, and the second phase resolves those deltas.

We introduce a "revisiondeltarequest" object that holds data about a to-be-generated delta. We perform a full pass over all revisions whose delta is to be generated and create a "revisiondeltarequest" for each. Then we iterate over the "revisiondeltarequest" instances and derive a "revisiondelta" for each.

This patch was quite large. In order to avoid even more churn, aspects of the implementation are less than ideal: e.g. we record revision numbers instead of nodes in a few places, and we don't yet have a formal API for resolving an iterable of revisiondeltarequest instances. Things will be improved in subsequent commits.

Unfortunately, this commit reduces performance substantially. For `hg perfchangegroupchangelog` on my hg repo:

! wall 1.512607 comb 1.510000 user 1.490000 sys 0.020000 (best of 7)
! wall 2.150863 comb 2.150000 user 2.150000 sys 0.000000 (best of 5)

And for `hg bundle -t none-v2 -a` on the mozilla-unified repo:

178.32user 4.22system 3:02.59elapsed
190.97user 4.17system 3:15.19elapsed

Some of this was attributed to changelog slowdown. `hg perfchangegroupchangelog` on mozilla-unified:

! wall 21.688715 comb 21.690000 user 21.570000 sys 0.120000 (best of 3)
! wall 25.683659 comb 25.680000 user 25.540000 sys 0.140000 (best of 3)

Profiling seems to reveal that the changelog slowdown is due to reading changelog revisions multiple times: first in the linknode callback to resolve the set of files changed, and again during delta generation. Before, we likely hit the last-revision cache in the revlog when doing delta generation, since we performed it immediately after the linknode callback.

I'm not exactly sure where the other ~8s are being spent. It might be overhead from constructing a few million revisiondeltarequest objects.

I'm OK with the regression for now because it is in service of a larger cause (storage abstraction). I'll try to profile later and claw back the performance.

Differential Revision: https://phab.mercurial-scm.org/D4215
author Gregory Szorc <gregory.szorc@gmail.com>
date Thu, 09 Aug 2018 09:28:26 -0700
parents 59c9d3cc810f
children 2372284d9457
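
The commit message above describes a two-phase request/resolve split. Here is a minimal sketch of that shape, assuming a hypothetical `store` object with `deltaparent()` and `revdiff()` helpers and a `lookuplinknode` callback; the real changegroup code is considerably more involved, and the actual revisiondeltarequest carries more fields:

    import collections

    revisiondeltarequest = collections.namedtuple(
        'revisiondeltarequest', ('node', 'basenode', 'linknode'))
    revisiondelta = collections.namedtuple(
        'revisiondelta', ('node', 'deltabase', 'delta'))

    def generatedeltas(store, nodes, lookuplinknode):
        # Phase 1: a full pass that only *describes* the deltas to produce.
        requests = [revisiondeltarequest(node=n,
                                         basenode=store.deltaparent(n),
                                         linknode=lookuplinknode(n))
                    for n in nodes]
        # Phase 2: resolve each request into an actual delta. Because every
        # request is known up front, a storage backend could satisfy them in
        # parallel or from a cache rather than strictly one at a time.
        for req in requests:
            yield revisiondelta(node=req.node,
                                deltabase=req.basenode,
                                delta=store.revdiff(req.basenode, req.node))

The point of the split is that the storage layer, not the changegroup code, gets to decide how the described deltas are actually computed.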

# similar.py - mechanisms for finding similar files
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

from .i18n import _
from . import (
    mdiff,
)

def _findexactmatches(repo, added, removed):
    '''find renamed files that have no changes

    Takes a list of new filectxs and a list of removed filectxs, and yields
    (before, after) tuples of exact matches.
    '''
    # Build table of removed files: {hash(fctx.data()): [fctx, ...]}.
    # We use hash() to discard fctx.data() from memory.
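    # hash() values can collide, so each table entry is a list of candidate
    # contexts; actual contents are re-compared before yielding a match.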
    hashes = {}
    progress = repo.ui.makeprogress(_('searching for exact renames'),
                                    total=(len(added) + len(removed)),
                                    unit=_('files'))
    for fctx in removed:
        progress.increment()
        h = hash(fctx.data())
        hashes.setdefault(h, []).append(fctx)

    # For each added file, see if it corresponds to a removed file.
    for fctx in added:
        progress.increment()
        adata = fctx.data()
        h = hash(adata)
        for rfctx in hashes.get(h, []):
            # compare the actual file contents to confirm exact identity
            if adata == rfctx.data():
                yield (rfctx, fctx)
                break

    # Done
    progress.complete()

def _ctxdata(fctx):
    # load the text once and precompute its line list; callers defer
    # calling this until a score is actually needed
    orig = fctx.data()
    return orig, mdiff.splitnewlines(orig)

def _score(fctx, otherdata):
    orig, lines = otherdata
    text = fctx.data()
    # mdiff.blocks() returns blocks of matching lines
    # count the number of bytes in each
    equal = 0
    matches = mdiff.blocks(text, orig)
    for x1, x2, y1, y2 in matches:
        for line in lines[y1:y2]:
            equal += len(line)

    lengths = len(text) + len(orig)
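    # equal * 2.0 / lengths is the Dice similarity coefficient of the two
    # texts: 0.0 means nothing in common, 1.0 means byte-identical content.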
    return equal * 2.0 / lengths

def score(fctx1, fctx2):
    return _score(fctx1, _ctxdata(fctx2))

def _findsimilarmatches(repo, added, removed, threshold):
    '''find potentially renamed files based on similar file content

    Takes a list of new filectxs and a list of removed filectxs, and yields
    (before, after, score) tuples of partial matches.
    '''
    copies = {}
    progress = repo.ui.makeprogress(_('searching for similar files'),
                                    unit=_('files'), total=len(removed))
    for r in removed:
        progress.increment()
        data = None
        for a in added:
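            # A candidate pair is recorded only if it beats both the global
            # threshold and any previously found match for this added file.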
            bestscore = copies.get(a, (None, threshold))[1]
            if data is None:
                data = _ctxdata(r)
            myscore = _score(a, data)
            if myscore > bestscore:
                copies[a] = (r, myscore)
    progress.complete()

    for dest, v in copies.iteritems():
        source, bscore = v
        yield source, dest, bscore

def _dropempty(fctxs):
    return [x for x in fctxs if x.size() > 0]

def findrenames(repo, added, removed, threshold):
    '''find renamed files -- yields (before, after, score) tuples'''
    wctx = repo[None]
    pctx = wctx.p1()

    # Zero length files will be frequently unrelated to each other, and
    # tracking the deletion/addition of such a file will probably cause more
    # harm than good. We strip them out here to avoid matching them later on.
    addedfiles = _dropempty(wctx[fp] for fp in sorted(added))
    removedfiles = _dropempty(pctx[fp] for fp in sorted(removed) if fp in pctx)

    # Find exact matches.
    matchedfiles = set()
    for (a, b) in _findexactmatches(repo, addedfiles, removedfiles):
        matchedfiles.add(b)
        yield (a.path(), b.path(), 1.0)

    # If the user requested similar files to be matched, search for them also.
    if threshold < 1.0:
        addedfiles = [x for x in addedfiles if x not in matchedfiles]
        for (a, b, score) in _findsimilarmatches(repo, addedfiles,
                                                 removedfiles, threshold):
            yield (a.path(), b.path(), score)
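
# A hypothetical usage sketch: this is roughly how `hg addremove
# --similarity 75` drives this module (the percentage becomes a 0.75
# threshold; the variable names below are illustrative):
#
#   for old, new, similarity in findrenames(repo, added, removed, 0.75):
#       repo.ui.status('%s -> %s (%d%% similar)\n'
#                      % (old, new, similarity * 100))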