view hgext/relink.py @ 30155:b7a966ce89ed

changelog: disable delta chains

This patch disables delta chains on changelogs. After this patch, new
entries on changelogs - including existing changelogs - will be stored
as the fulltext of that data (likely compressed). No delta computation
will be performed. An overview of delta chains and data justifying this
change follows.

Revlogs try to store entries as a delta against a previous entry
(either a parent revision in the case of generaldelta or the previous
physical revision when not using generaldelta). Most of the time this
is the correct thing to do: it frequently results in less CPU usage and
smaller storage.

Delta chains are most effective when the base revision being deltaed
against is similar to the current data. This tends to occur naturally
for manifests and file data, since only small parts of each tend to
change with each revision. Changelogs, however, are a different story.

Changelog entries represent changesets/commits. And unless commits in a
repository are homogeneous (same author, changing same files, similar
commit messages, etc), a delta from one entry to the next tends to be
relatively large compared to the size of the entry. This means that
delta chains tend to be short. How short? Here is the full vs delta
revision breakdown on some real-world repos:

  Repo              % Full  % Delta  Max Length
  hg                  45.8     54.2           6
  mozilla-central     42.4     57.6           8
  mozilla-unified     42.5     57.5          17
  pypy                46.1     53.9           6
  python-zstandard    46.1     53.9           3

(I threw in python-zstandard as an example of a repo that is
homogeneous. It contains a small Python project with changes all from
the same author.)
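(As a rough illustration of where these numbers come from, the
breakdown above can be recomputed with Mercurial's internal revlog API.
This is a minimal sketch, not part of this patch; the repository path
is a placeholder and the module layout varies between Mercurial
versions.)

  from mercurial import hg, ui as uimod
  from mercurial.node import nullrev

  repo = hg.repository(uimod.ui(), '/path/to/repo')  # placeholder path
  cl = repo.changelog

  fulls = deltas = maxchain = 0
  for rev in cl:
      # Walk the delta chain back to the nearest fulltext revision.
      length = 0
      r = rev
      while cl.deltaparent(r) != nullrev:
          r = cl.deltaparent(r)
          length += 1
      if length:
          deltas += 1
      else:
          fulls += 1
      maxchain = max(maxchain, length)

  total = fulls + deltas
  print('%.1f%% full, %.1f%% delta, max chain length %d'
        % (100.0 * fulls / total, 100.0 * deltas / total, maxchain))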
Contrast this with the manifest revlog for these repos, where 99+% of
revisions are deltas and delta chains run into the thousands. So delta
chains aren't as useful on changelogs. But even a short delta chain may
provide benefits. Let's measure that.

Delta chains may require less CPU to read revisions if the CPU time
spent reading smaller deltas is less than the CPU time used to
decompress larger individual entries. We can measure this via
`hg perfrevlog -c -d 1`, which iterates the changelog and resolves each
revision's fulltext. Here are the results of that command on a repo
using delta chains in its changelog and on a repo without delta chains:

hg (forward)
! wall 0.407008 comb 0.410000 user 0.410000 sys 0.000000 (best of 25)
! wall 0.390061 comb 0.390000 user 0.390000 sys 0.000000 (best of 26)
hg (reverse)
! wall 0.515221 comb 0.520000 user 0.520000 sys 0.000000 (best of 19)
! wall 0.400018 comb 0.400000 user 0.390000 sys 0.010000 (best of 25)
mozilla-central (forward)
! wall 4.508296 comb 4.490000 user 4.490000 sys 0.000000 (best of 3)
! wall 4.370222 comb 4.370000 user 4.350000 sys 0.020000 (best of 3)
mozilla-central (reverse)
! wall 5.758995 comb 5.760000 user 5.720000 sys 0.040000 (best of 3)
! wall 4.346503 comb 4.340000 user 4.320000 sys 0.020000 (best of 3)
mozilla-unified (forward)
! wall 4.957088 comb 4.950000 user 4.940000 sys 0.010000 (best of 3)
! wall 4.660528 comb 4.650000 user 4.630000 sys 0.020000 (best of 3)
mozilla-unified (reverse)
! wall 6.119827 comb 6.110000 user 6.090000 sys 0.020000 (best of 3)
! wall 4.675136 comb 4.670000 user 4.670000 sys 0.000000 (best of 3)
pypy (forward)
! wall 1.231122 comb 1.240000 user 1.230000 sys 0.010000 (best of 8)
! wall 1.164896 comb 1.160000 user 1.160000 sys 0.000000 (best of 9)
pypy (reverse)
! wall 1.467049 comb 1.460000 user 1.460000 sys 0.000000 (best of 7)
! wall 1.160200 comb 1.170000 user 1.160000 sys 0.010000 (best of 9)

The data clearly shows that it takes less wall and CPU time to resolve
revisions when there are no delta chains in the changelogs, regardless
of the direction of traversal. Furthermore, not using a delta chain
means that fulltext resolution in reverse is as fast as iterating
forward. So not using delta chains on the changelog is a clear CPU win
for reading operations.
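(To see why, here is roughly what resolving a fulltext through a delta
chain involves. This is a simplified sketch of the logic in
mercurial/revlog.py, using the private _chunk() helper for brevity; the
real revision() adds caching and batched chunk reads.)

  from mercurial import mdiff
  from mercurial.node import nullrev

  def fulltext(rl, rev):
      # Collect the deltas between rev and its base fulltext revision.
      chain = []
      r = rev
      while rl.deltaparent(r) != nullrev:
          chain.append(r)
          r = rl.deltaparent(r)
      text = rl._chunk(r)  # decompressed base fulltext
      # Apply the deltas in oldest-to-newest order.
      bins = [rl._chunk(x) for x in reversed(chain)]
      return mdiff.patches(text, bins)

With no delta chain, the chain is always empty and the function reduces
to a single decompression, which is why reverse iteration is no slower
than forward iteration.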
An example of a user-visible operation showing this speed-up is revset
evaluation. Here are results for
`hg perfrevset 'author(gps) or author(mpm)'`:

hg
! wall 1.655506 comb 1.660000 user 1.650000 sys 0.010000 (best of 6)
! wall 1.612723 comb 1.610000 user 1.600000 sys 0.010000 (best of 7)
mozilla-central
! wall 17.629826 comb 17.640000 user 17.600000 sys 0.040000 (best of 3)
! wall 17.311033 comb 17.300000 user 17.260000 sys 0.040000 (best of 3)

What about 00changelog.i size?

  Repo              Delta Chains  No Delta Chains
  hg                   7,033,250        6,976,771
  mozilla-central     82,978,748       81,574,623
  mozilla-unified     88,112,349       86,702,162
  pypy                20,740,699       20,659,741

The data shows that removing delta chains from the changelog makes the
changelog smaller.

Delta chains are also used during changegroup generation. This
operation essentially converts a series of revisions to one large delta
chain. And changegroup generation is smart: if the delta in the revlog
matches what the changegroup is emitting, it will reuse the delta
instead of recalculating it. We can measure the impact removing
changelog delta chains has on changegroup generation via
`hg perfchangegroupchangelog`:

hg
! wall 1.589245 comb 1.590000 user 1.590000 sys 0.000000 (best of 7)
! wall 1.788060 comb 1.790000 user 1.790000 sys 0.000000 (best of 6)
mozilla-central
! wall 17.382585 comb 17.380000 user 17.340000 sys 0.040000 (best of 3)
! wall 20.161357 comb 20.160000 user 20.120000 sys 0.040000 (best of 3)
mozilla-unified
! wall 18.722839 comb 18.720000 user 18.680000 sys 0.040000 (best of 3)
! wall 21.168075 comb 21.170000 user 21.130000 sys 0.040000 (best of 3)
pypy
! wall 4.828317 comb 4.830000 user 4.820000 sys 0.010000 (best of 3)
! wall 5.415455 comb 5.420000 user 5.410000 sys 0.010000 (best of 3)

The data shows eliminating delta chains makes the changelog part of
changegroup generation slower. This is expected since we now have to
compute deltas for revisions where we could recycle the delta before.

It is worth putting this regression into the context of overall
changegroup times. Here is the rough total CPU time spent in
changegroup generation for various repos while using delta chains on
the changelog:

  Repo              CPU Time (s)  CPU Time w/ compression (s)
  hg                        4.50                         7.05
  mozilla-central          111.1                        222.0
  pypy                     28.68                         75.5

Before compression, removing delta chains from the changegroup adds
~4.4% overhead to hg changegroup generation, 1.3% to mozilla-central,
and 2.0% to pypy. When you factor in zlib compression, these
percentages are roughly halved.

While the increased CPU usage for changegroup generation is
unfortunate, I think it is acceptable because the percentage is small,
server operators (those likely impacted most by this) have other
mechanisms to mitigate CPU consumption (namely reducing zlib
compression level and pre-generated clone bundles), and because there
is room to optimize this in the future. For example, we could use the
nullid as the base revision, effectively encoding the full revision for
each entry in the changegroup. When doing this,
`hg perfchangegroupchangelog` nearly halves:

mozilla-unified
! wall 21.168075 comb 21.170000 user 21.130000 sys 0.040000 (best of 3)
! wall 11.196461 comb 11.200000 user 11.190000 sys 0.010000 (best of 3)

This looks very promising as a future optimization opportunity.

It's worth noting the changes in test-acl.t to the changegroup part
size. This is because revision 6 in the changegroup had a delta chain
of length 2 before this patch; after it, the base revision is nullrev.
When the base revision is nullrev, cg2packer.deltaparent() hardcodes
the *previous* revision from the changegroup as the delta parent. This
caused the delta in the changegroup to switch base revisions, the delta
to change, and the size to change accordingly. While the size increased
in this case, I think sizes will remain the same on average, as the
delta base for changelog revisions doesn't matter too much (as this
patch shows). So, I don't consider this a regression.
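(For reference on the nullid-as-base idea: a delta whose base is the
null revision is just the fulltext wrapped in a trivial patch header,
so it costs no delta computation. A minimal sketch, assuming
mdiff.trivialdiffheader() behaves as in mercurial/mdiff.py:)

  from mercurial import mdiff

  def fullsnapshotdelta(fulltext):
      # A patch replacing the empty range [0, 0) of the (empty) null
      # base with the entire fulltext - no diffing required.
      return mdiff.trivialdiffheader(len(fulltext)) + fulltext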
author Gregory Szorc <gregory.szorc@gmail.com>
date Thu, 13 Oct 2016 12:50:27 +0200
parents d5883fd055c6
children 46ba2cdda476

# Mercurial extension to provide 'hg relink' command
#
# Copyright (C) 2007 Brendan Cully <brendan@kublai.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""recreates hardlinks between repository clones"""
from __future__ import absolute_import

import os
import stat

from mercurial.i18n import _
from mercurial import (
    cmdutil,
    error,
    hg,
    util,
)

cmdtable = {}
command = cmdutil.command(cmdtable)
# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

@command('relink', [], _('[ORIGIN]'))
def relink(ui, repo, origin=None, **opts):
    """recreate hardlinks between two repositories

    When repositories are cloned locally, their data files will be
    hardlinked so that they only use the space of a single repository.

    Unfortunately, subsequent pulls into either repository will break
    hardlinks for any files touched by the new changesets, even if
    both repositories end up pulling the same changes.

    Similarly, passing --rev to "hg clone" will fail to use any
    hardlinks, falling back to a complete copy of the source
    repository.

    This command lets you recreate those hardlinks and reclaim that
    wasted space.

    This repository will be relinked to share space with ORIGIN, which
    must be on the same local disk. If ORIGIN is omitted, looks for
    "default-relink", then "default", in [paths].

    Do not attempt any read operations on this repository while the
    command is running. (Both repositories will be locked against
    writes.)
    """
    if (not util.safehasattr(util, 'samefile') or
        not util.safehasattr(util, 'samedevice')):
        raise error.Abort(_('hardlinks are not supported on this system'))
    src = hg.repository(repo.baseui, ui.expandpath(origin or 'default-relink',
                                                   origin or 'default'))
    ui.status(_('relinking %s to %s\n') % (src.store.path, repo.store.path))
    if repo.root == src.root:
        ui.status(_('there is nothing to relink\n'))
        return

    if not util.samedevice(src.store.path, repo.store.path):
        # No point in continuing
        raise error.Abort(_('source and destination are on different devices'))

    locallock = repo.lock()
    try:
        remotelock = src.lock()
        try:
            candidates = sorted(collect(src, ui))
            targets = prune(candidates, src.store.path, repo.store.path, ui)
            do_relink(src.store.path, repo.store.path, targets, ui)
        finally:
            remotelock.release()
    finally:
        locallock.release()

def collect(src, ui):
    seplen = len(os.path.sep)
    candidates = []
    live = len(src['tip'].manifest())
    # Your average repository has some files which were deleted before
    # the tip revision. We account for that by assuming that there are
    # 3 tracked files for every 2 live files as of the tip version of
    # the repository.
    #
    # mozilla-central as of 2010-06-10 had a ratio of just over 7:5.
    total = live * 3 // 2
    src = src.store.path
    pos = 0
    ui.status(_("tip has %d files, estimated total number of files: %d\n")
              % (live, total))
    for dirpath, dirnames, filenames in os.walk(src):
        dirnames.sort()
        relpath = dirpath[len(src) + seplen:]
        for filename in sorted(filenames):
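            # Only revlog index (.i) and data (.d) files are candidates
            # for relinking; other store files are skipped.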
            if filename[-2:] not in ('.d', '.i'):
                continue
            st = os.stat(os.path.join(dirpath, filename))
            if not stat.S_ISREG(st.st_mode):
                continue
            pos += 1
            candidates.append((os.path.join(relpath, filename), st))
            ui.progress(_('collecting'), pos, filename, _('files'), total)

    ui.progress(_('collecting'), None)
    ui.status(_('collected %d candidate storage files\n') % len(candidates))
    return candidates

def prune(candidates, src, dst, ui):
    def linkfilter(src, dst, st):
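        # Return the source stat if src and dst look relinkable: dst
        # must exist, must not already share storage with src, must be
        # on the same device, and must have the same size. Otherwise
        # return False.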
        try:
            ts = os.stat(dst)
        except OSError:
            # Destination doesn't have this file?
            return False
        if util.samefile(src, dst):
            return False
        if not util.samedevice(src, dst):
            # No point in continuing
            raise error.Abort(
                _('source and destination are on different devices'))
        if st.st_size != ts.st_size:
            return False
        return st

    targets = []
    total = len(candidates)
    pos = 0
    for fn, st in candidates:
        pos += 1
        srcpath = os.path.join(src, fn)
        tgt = os.path.join(dst, fn)
        ts = linkfilter(srcpath, tgt, st)
        if not ts:
            ui.debug('not linkable: %s\n' % fn)
            continue
        targets.append((fn, ts.st_size))
        ui.progress(_('pruning'), pos, fn, _('files'), total)

    ui.progress(_('pruning'), None)
    ui.status(_('pruned down to %d probably relinkable files\n') % len(targets))
    return targets

def do_relink(src, dst, files, ui):
    def relinkfile(src, dst):
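        # Replace dst with a hardlink to src. The destination is first
        # renamed to a .bak file so it can be restored if linking fails.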
        bak = dst + '.bak'
        os.rename(dst, bak)
        try:
            util.oslink(src, dst)
        except OSError:
            os.rename(bak, dst)
            raise
        os.remove(bak)

    CHUNKLEN = 65536
    relinked = 0
    savedbytes = 0

    pos = 0
    total = len(files)
    for f, sz in files:
        pos += 1
        source = os.path.join(src, f)
        tgt = os.path.join(dst, f)
        # Binary mode, so that read() works correctly, especially on Windows
        sfp = open(source, 'rb')
        dfp = open(tgt, 'rb')
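        # Compare the two files chunk by chunk; only byte-identical
        # files can safely share storage via a hardlink.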
        sin = sfp.read(CHUNKLEN)
        while sin:
            din = dfp.read(CHUNKLEN)
            if sin != din:
                break
            sin = sfp.read(CHUNKLEN)
        sfp.close()
        dfp.close()
        if sin:
            ui.debug('not linkable: %s\n' % f)
            continue
        try:
            relinkfile(source, tgt)
            ui.progress(_('relinking'), pos, f, _('files'), total)
            relinked += 1
            savedbytes += sz
        except OSError as inst:
            ui.warn('%s: %s\n' % (tgt, str(inst)))

    ui.progress(_('relinking'), None)

    ui.status(_('relinked %d files (%s reclaimed)\n') %
              (relinked, util.bytecount(savedbytes)))