hgext/relink.py
author Paul Morelle <paul.morelle@octobus.net>
Tue, 05 Jun 2018 08:19:35 +0200
changeset 38718 f8762ea73e0d
parent 38442 36edfbac7281
child 40293 c303d65d2e34
permissions -rw-r--r--
sparse-revlog: implement algorithm to write sparse delta chains (issue5480)

The classic behavior of revlog._isgooddeltainfo is to consider the span size
of the whole delta chain, and limit it to 4 * textlen.

Once sparse-revlog writing is allowed (and enforced with a requirement),
revlog._isgooddeltainfo considers the span of the largest chunk as the
distance used in the verification, instead of using the span of the whole
delta chain.

In order to compute the span of the largest chunk, we need to slice a chain
with the new revision at the top of the revlog into chunks, and take the
maximal span of these chunks. The sparse read density is a parameter to the
slicing: it stops once the global read density reaches this threshold. For
instance, a density of 50% means that 2 of 4 read bytes are actually used for
the reconstruction of the revision (the others are part of other chains).

This allows a new revision to be stored with a diff against another revision
anywhere in the history, instead of forcing it within the last 4 * textlen.
The result is much better compression on repositories that have many
concurrent branches. Here is a comparison between using deltas from current
upstream (aggressive-merge-deltas on by default) and deltas from a
sparse-revlog.

Comparison of `.hg/store/` size:

    mercurial (6.74% merges):
        before:     46,831,873 bytes
        after:      46,795,992 bytes  (no relevant change)
    pypy (8.30% merges):
        before:    333,524,651 bytes
        after:     308,417,511 bytes  -8%
    netbeans (34.21% merges):
        before:  1,141,847,554 bytes
        after:   1,131,093,161 bytes  -1%
    mozilla-central (4.84% merges):
        before:  2,344,248,850 bytes
        after:   2,328,459,258 bytes  -1%
    large-private-repo-A (19.73% merges):
        before: 41,510,550,163 bytes
        after:   8,121,763,428 bytes  -80%
    large-private-repo-B (23.77% merges):
        before: 58,702,221,709 bytes
        after:   8,351,588,828 bytes  -76%

Comparison of `00manifest.d` size:

    mercurial (6.74% merges):
        before:      6,143,044 bytes
        after:       6,107,163 bytes
    pypy (8.30% merges):
        before:     52,941,780 bytes
        after:      27,834,082 bytes  -48%
    netbeans (34.21% merges):
        before:    130,088,982 bytes
        after:     119,337,636 bytes  -10%
    mozilla-central (4.84% merges):
        before:    215,096,339 bytes
        after:     199,496,863 bytes  -8%
    large-private-repo-A (19.73% merges):
        before: 33,725,285,081 bytes
        after:     390,302,545 bytes  -99%
    large-private-repo-B (23.77% merges):
        before: 49,457,701,645 bytes
        after:   1,366,752,187 bytes  -97%

The better delta chains provide a performance boost in relevant repositories:

    pypy, bundling 1000 revisions:
        before: 1.670s
        after:  1.149s  -31%

Unbundling got a bit slower, probably because the sparse algorithm is still
pure Python:

    pypy, unbundling 1000 revisions:
        before: 4.062s
        after:  4.507s  +10%

Performance of bundle/unbundle in repositories with few concurrent branches
(eg: mercurial) is unaffected. No significant differences have been noticed
when timing `hg push` and `hg pull` locally. More stable timings are being
gathered.

As with aggressive-merge-deltas, better deltas come with longer delta chains,
and longer chains have a performance impact. For example, the length of the
chain needed to restore the manifest of pypy's tip moves from 82 items to
1929 items, which moves the restore time from 3.88ms to 11.3ms. Delta chain
length is an independent issue that also affects repositories without this
change; it will be dealt with separately.

No significant differences have been observed on repositories where
sparse-revlog has little effect (mercurial, unity, netbeans). On pypy, small
differences have been observed on some operations affected by delta chain
building and retrieval:

    pypy, perfmanifest:
        before: 0.006162s
        after:  0.017899s  +190%
    pypy, commit:
        before: 0.382s
        after:  0.376s  -1%
    pypy, status:
        before: 0.157s
        after:  0.168s  +7%

More comprehensive and stable timing comparisons are in progress.
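To make the slicing concrete, here is a minimal, illustrative sketch of
density-based chain slicing. The names (slicechunks, targetdensity, revranges)
are hypothetical; the real logic lives in mercurial/revlog.py. This only
demonstrates the technique described above: read the whole span when it is
dense enough, otherwise split at the largest gaps until the target density
is reached.

    def slicechunks(revranges, targetdensity=0.5):
        """Slice a delta chain into dense-enough read chunks.

        revranges: (start, end) byte ranges of each revision in the
        chain, sorted by start offset. Yields lists of ranges; each
        yielded chunk would be fetched with one sequential read.
        """
        useful = sum(end - start for start, end in revranges)
        span = revranges[-1][1] - revranges[0][0]
        # float() keeps the density a ratio under Python 2 as well
        if span <= 0 or float(useful) / span >= targetdensity:
            yield revranges  # dense enough: read the whole span at once
            return
        # Skip the largest gaps between consecutive revisions until the
        # remaining reads are dense enough.
        gaps = sorted(
            (revranges[i + 1][0] - revranges[i][1], i)
            for i in range(len(revranges) - 1)
        )
        wasted = span - useful
        splits = set()
        for gapsize, i in reversed(gaps):  # biggest gaps first
            if gapsize <= 0:
                break
            if float(useful) / (useful + wasted) >= targetdensity:
                break
            splits.add(i)
            wasted -= gapsize
        chunk = []
        for i, rng in enumerate(revranges):
            chunk.append(rng)
            if i in splits:
                yield chunk
                chunk = []
        if chunk:
            yield chunk

The span checked by _isgooddeltainfo then becomes the largest chunk span,
i.e. max(c[-1][1] - c[0][0] for c in slicechunks(ranges)), instead of the
span of the whole chain.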

# Mercurial extension to provide 'hg relink' command
#
# Copyright (C) 2007 Brendan Cully <brendan@kublai.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""recreates hardlinks between repository clones"""
from __future__ import absolute_import

import os
import stat

from mercurial.i18n import _
from mercurial import (
    error,
    hg,
    registrar,
    util,
)
from mercurial.utils import (
    stringutil,
)

cmdtable = {}
command = registrar.command(cmdtable)
# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

@command('relink', [], _('[ORIGIN]'))
def relink(ui, repo, origin=None, **opts):
    """recreate hardlinks between two repositories

    When repositories are cloned locally, their data files will be
    hardlinked so that they only use the space of a single repository.

    Unfortunately, subsequent pulls into either repository will break
    hardlinks for any files touched by the new changesets, even if
    both repositories end up pulling the same changes.

    Similarly, passing --rev to "hg clone" will fail to use any
    hardlinks, falling back to a complete copy of the source
    repository.

    This command lets you recreate those hardlinks and reclaim that
    wasted space.

    This repository will be relinked to share space with ORIGIN, which
    must be on the same local disk. If ORIGIN is omitted, it looks for
    "default-relink", then "default", in [paths].

    Do not attempt any read operations on this repository while the
    command is running. (Both repositories will be locked against
    writes.)
    """
    if (not util.safehasattr(util, 'samefile') or
        not util.safehasattr(util, 'samedevice')):
        raise error.Abort(_('hardlinks are not supported on this system'))
    src = hg.repository(repo.baseui, ui.expandpath(origin or 'default-relink',
                                                   origin or 'default'))
    ui.status(_('relinking %s to %s\n') % (src.store.path, repo.store.path))
    if repo.root == src.root:
        ui.status(_('there is nothing to relink\n'))
        return

    if not util.samedevice(src.store.path, repo.store.path):
        # No point in continuing
        raise error.Abort(_('source and destination are on different devices'))

    with repo.lock(), src.lock():
        candidates = sorted(collect(src, ui))
        targets = prune(candidates, src.store.path, repo.store.path, ui)
        do_relink(src.store.path, repo.store.path, targets, ui)

def collect(src, ui):
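    """Gather candidate store files from the source repository.

    Returns a list of (relative path, stat result) pairs for every
    regular '.i' or '.d' revlog file under src's store directory.
    """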
    seplen = len(os.path.sep)
    candidates = []
    live = len(src['tip'].manifest())
    # Your average repository has some files which were deleted before
    # the tip revision. We account for that by assuming that there are
    # 3 tracked files for every 2 live files as of the tip version of
    # the repository.
    #
    # mozilla-central as of 2010-06-10 had a ratio of just over 7:5.
    total = live * 3 // 2
    src = src.store.path
    progress = ui.makeprogress(_('collecting'), unit=_('files'), total=total)
    pos = 0
    ui.status(_("tip has %d files, estimated total number of files: %d\n")
              % (live, total))
    for dirpath, dirnames, filenames in os.walk(src):
        dirnames.sort()
        relpath = dirpath[len(src) + seplen:]
        for filename in sorted(filenames):
            if filename[-2:] not in ('.d', '.i'):
                continue
            st = os.stat(os.path.join(dirpath, filename))
            if not stat.S_ISREG(st.st_mode):
                continue
            pos += 1
            candidates.append((os.path.join(relpath, filename), st))
            progress.update(pos, item=filename)

    progress.complete()
    ui.status(_('collected %d candidate storage files\n') % len(candidates))
    return candidates

def prune(candidates, src, dst, ui):
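    """Filter candidates down to files that look relinkable.

    A candidate is kept when the destination has a file of the same
    name and size on the same device that is not already a hardlink
    of the source file.
    """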
    def linkfilter(src, dst, st):
        try:
            ts = os.stat(dst)
        except OSError:
            # Destination doesn't have this file?
            return False
        if util.samefile(src, dst):
            return False
        if not util.samedevice(src, dst):
            # No point in continuing
            raise error.Abort(
                _('source and destination are on different devices'))
        if st.st_size != ts.st_size:
            return False
        return st

    targets = []
    progress = ui.makeprogress(_('pruning'), unit=_('files'),
                               total=len(candidates))
    pos = 0
    for fn, st in candidates:
        pos += 1
        srcpath = os.path.join(src, fn)
        tgt = os.path.join(dst, fn)
        ts = linkfilter(srcpath, tgt, st)
        if not ts:
            ui.debug('not linkable: %s\n' % fn)
            continue
        targets.append((fn, ts.st_size))
        progress.update(pos, item=fn)

    progress.complete()
    ui.status(_('pruned down to %d probably relinkable files\n') % len(targets))
    return targets

def do_relink(src, dst, files, ui):
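    """Replace destination files with hardlinks to identical source files.

    Each candidate is first compared byte by byte; only files whose
    content matches exactly are replaced by a hardlink.
    """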
    def relinkfile(src, dst):
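        # Move the destination aside, hardlink the source into place,
        # and only then delete the backup; restore it if linking fails.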
        bak = dst + '.bak'
        os.rename(dst, bak)
        try:
            util.oslink(src, dst)
        except OSError:
            os.rename(bak, dst)
            raise
        os.remove(bak)

    CHUNKLEN = 65536
    relinked = 0
    savedbytes = 0

    progress = ui.makeprogress(_('relinking'), unit=_('files'),
                               total=len(files))
    pos = 0
    for f, sz in files:
        pos += 1
        source = os.path.join(src, f)
        tgt = os.path.join(dst, f)
        # Binary mode, so that read() works correctly, especially on Windows
        sfp = open(source, 'rb')
        dfp = open(tgt, 'rb')
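        # Compare both files chunk by chunk; a non-empty 'sin' after the
        # loop means a mismatching chunk was found and the files differ.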
        sin = sfp.read(CHUNKLEN)
        while sin:
            din = dfp.read(CHUNKLEN)
            if sin != din:
                break
            sin = sfp.read(CHUNKLEN)
        sfp.close()
        dfp.close()
        if sin:
            ui.debug('not linkable: %s\n' % f)
            continue
        try:
            relinkfile(source, tgt)
            progress.update(pos, item=f)
            relinked += 1
            savedbytes += sz
        except OSError as inst:
            ui.warn('%s: %s\n' % (tgt, stringutil.forcebytestr(inst)))

    progress.complete()

    ui.status(_('relinked %d files (%s reclaimed)\n') %
              (relinked, util.bytecount(savedbytes)))