view mercurial/repair.py @ 50338:81c7d04f4722 stable

match: match explicit file using a set

The matcher has all the logic needed to do quick comparisons against explicit
patterns; however, the pattern matcher was shadowing the code using that set
and used the compiled regex pattern in all cases, which is quite slow. We
restore the usage of set-based matching to boost performance.

Building the regexp still consumes a large amount of time (actually, the
majority of the time), which is still silly. Maybe using re2 would help that,
but this is a quest for another adventure.

Another path to improve this is to have a pattern type dedicated to matching
the exact path to a file only (not a directory). This pattern could use the
set matching only and be skipped in the regex altogether.

Benchmarks
==========

In the following benchmarks we compare the `hg cat` and `hg files` run times
when matching against all files in the repository. They are run:

- without the rust extensions
- with the standard python engine (so without re2)

Performance improvement in this series
--------------------------------------

###### hg files ###############################################################

### mercurial-2018-08-01-zstd-sparse-revlog ### sorted
base-changeset: 0.230092 seconds
prev-changeset: 0.230069 seconds
this-changeset: 0.211425 seconds (-8.36%)

### mercurial-2018-08-01-zstd-sparse-revlog ### shuffled
base-changeset: 0.234235 seconds
prev-changeset: 0.231165 seconds (-1.38%)
this-changeset: 0.212300 seconds (-9.43%)

### pypy-2018-08-01-zstd-sparse-revlog ### sorted
base-changeset: 0.613567 seconds
prev-changeset: 0.616799 seconds
this-changeset: 0.510852 seconds (-16.82%)

### pypy-2018-08-01-zstd-sparse-revlog ### shuffled
base-changeset: 0.801880 seconds
prev-changeset: 0.616393 seconds (-23.22%)
this-changeset: 0.511903 seconds (-36.23%)

### netbeans-2018-08-01-zstd-sparse-revlog ### sorted
base-changeset: 21.541828 seconds
prev-changeset: 21.586773 seconds
this-changeset: 13.648347 seconds (-36.76%)

### netbeans-2018-08-01-zstd-sparse-revlog ### shuffled
base-changeset: 172.759857 seconds
prev-changeset: 21.908197 seconds (-87.32%)
this-changeset: 13.945110 seconds (-91.93%)

### mozilla-central-2018-08-01-zstd-sparse-revlog ### sorted
base-changeset: 62.474221 seconds
prev-changeset: 61.279490 seconds (-1.22%)
this-changeset: 29.529469 seconds (-52.40%)

### mozilla-central-2018-08-01-zstd-sparse-revlog ### shuffled
base-changeset: 1364.180218 seconds
prev-changeset: 62.473549 seconds (-95.40%)
this-changeset: 30.625249 seconds (-97.75%)

###### hg cat #################################################################

### mercurial-2018-08-01-zstd-sparse-revlog ### sorted
base-changeset: 0.764407 seconds
prev-changeset: 0.763883 seconds
this-changeset: 0.737326 seconds (-3.68%)

### mercurial-2018-08-01-zstd-sparse-revlog ### shuffled
base-changeset: 0.768924 seconds
prev-changeset: 0.765848 seconds
this-changeset: 0.174d0b seconds (-4.44%)

### pypy-2018-08-01-zstd-sparse-revlog ### sorted
base-changeset: 2.065220 seconds
prev-changeset: 2.070498 seconds
this-changeset: 1.939482 seconds (-6.08%)

### pypy-2018-08-01-zstd-sparse-revlog ### shuffled
base-changeset: 2.276388 seconds
prev-changeset: 2.069197 seconds (-9.15%)
this-changeset: 1.931746 seconds (-15.19%)

### netbeans-2018-08-01-zstd-sparse-revlog ### sorted
base-changeset: 40.967983 seconds
prev-changeset: 41.392423 seconds
this-changeset: 32.181681 seconds (-22.20%)

### netbeans-2018-08-01-zstd-sparse-revlog ### shuffled
base-changeset: 216.388709 seconds
prev-changeset: 41.648689 seconds (-80.88%)
this-changeset: 32.580817 seconds (-85.04%)

### mozilla-central-2018-08-01-zstd-sparse-revlog ### sorted
base-changeset: 105.228510 seconds
prev-changeset: 103.315670 seconds (-1.23%)
this-changeset: 69.416118 seconds (-33.64%)

### mozilla-central-2018-08-01-zstd-sparse-revlog ### shuffled
base-changeset: 1448.722784 seconds
prev-changeset: 104.369358 seconds (-92.80%)
this-changeset: 70.554789 seconds (-95.13%)

Different way to list the same data with this revision
------------------------------------------------------

###### hg files ###############################################################

### mercurial-2018-08-01-zstd-sparse-revlog
root:     0.119182 seconds
glob:     0.120697 seconds (+1.27%)
sorted:   0.211425 seconds (+77.40%)
shuffled: 0.212300 seconds (+78.13%)

### pypy-2018-08-01-zstd-sparse-revlog
root:     0.121986 seconds
glob:     0.124822 seconds (+2.32%)
sorted:   0.510852 seconds (+318.78%)
shuffled: 0.511903 seconds (+319.64%)

### netbeans-2018-08-01-zstd-sparse-revlog
root:     0.173984 seconds
glob:     0.227203 seconds (+30.59%)
sorted:   13.648347 seconds (+7744.59%)
shuffled: 13.945110 seconds (+7915.16%)

### mozilla-central-2018-08-01-zstd-sparse-revlog
root:     0.366463 seconds
glob:     0.491030 seconds (+33.99%)
sorted:   29.529469 seconds (+7957.96%)
shuffled: 30.625249 seconds (+8256.97%)

###### hg cat #################################################################

### mercurial-2018-08-01-zstd-sparse-revlog
glob:     0.647471 seconds
root:     0.643120 seconds
shuffled: 0.174d0b seconds (+13.92%)
sorted:   0.737326 seconds (+13.88%)

### mozilla-central-2018-08-01-zstd-sparse-revlog
glob:     40.596983 seconds
root:     40.129136 seconds
shuffled: 70.554789 seconds (+73.79%)
sorted:   69.416118 seconds (+70.99%)

### netbeans-2018-08-01-zstd-sparse-revlog
glob:     18.777924 seconds
root:     18.613905 seconds
shuffled: 32.580817 seconds (+73.51%)
sorted:   32.181681 seconds (+71.38%)

### pypy-2018-08-01-zstd-sparse-revlog
glob:     1.555319 seconds
root:     1.536534 seconds
shuffled: 1.931746 seconds (+24.20%)
sorted:   1.939482 seconds (+24.70%)
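
To illustrate the core idea (a minimal standalone sketch, not code from this
changeset; the names `files`, `exact_set`, `pattern`, and `probe` are invented
for the example), set membership can be compared against an equivalent
alternation regex as follows:

import re
import timeit

# Build a few thousand explicit file paths, as a matcher would receive them.
files = ['dir%d/file%d.py' % (i % 50, i) for i in range(5000)]
exact_set = set(files)  # set-based exact matching: O(1) average per lookup
# Equivalent regex with one alternation branch per explicit path.
pattern = re.compile('|'.join(re.escape(f) for f in files))

probe = files[-1]
# Both approaches agree on whether a path is an explicit match.
assert (probe in exact_set) == bool(pattern.fullmatch(probe))
# The regex engine tries alternation branches one after another, so the
# matching cost grows with the number of patterns; the set lookup does not.
print('set:  ', timeit.timeit(lambda: probe in exact_set, number=10000))
print('regex:', timeit.timeit(lambda: pattern.fullmatch(probe), number=10000))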
author Pierre-Yves David <pierre-yves.david@octobus.net>
date Sat, 01 Apr 2023 05:58:59 +0200
parents d89eecf9605e
children c5e93c915ab6
line source

# repair.py - functions for repository repair for mercurial
#
# Copyright 2005, 2006 Chris Mason <mason@suse.com>
# Copyright 2007 Olivia Mackall
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.


from .i18n import _
from .node import (
    hex,
    short,
)
from . import (
    bundle2,
    changegroup,
    discovery,
    error,
    exchange,
    obsolete,
    obsutil,
    pathutil,
    phases,
    requirements,
    scmutil,
    transaction,
    util,
)
from .utils import (
    hashutil,
    urlutil,
)


def backupbundle(
    repo, bases, heads, node, suffix, compress=True, obsolescence=True
):
    """create a bundle with the specified revisions as a backup"""

    backupdir = b"strip-backup"
    vfs = repo.vfs
    if not vfs.isdir(backupdir):
        vfs.mkdir(backupdir)

    # Include a hash of all the nodes in the filename for uniqueness
    allcommits = repo.set(b'%ln::%ln', bases, heads)
    allhashes = sorted(c.hex() for c in allcommits)
    totalhash = hashutil.sha1(b''.join(allhashes)).digest()
    name = b"%s/%s-%s-%s.hg" % (
        backupdir,
        short(node),
        hex(totalhash[:4]),
        suffix,
    )
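    # The resulting name looks like b"strip-backup/1a2b3c4d5e6f-9f8e7d6c-backup.hg"
    # (hypothetical values): a 12-hex-digit node prefix, 8 hex digits hashing
    # every bundled changeset, then the suffix.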

    cgversion = changegroup.localversion(repo)
    comp = None
    if cgversion != b'01':
        bundletype = b"HG20"
        if compress:
            comp = b'BZ'
    elif compress:
        bundletype = b"HG10BZ"
    else:
        bundletype = b"HG10UN"

    outgoing = discovery.outgoing(repo, missingroots=bases, ancestorsof=heads)
    contentopts = {
        b'cg.version': cgversion,
        b'obsolescence': obsolescence,
        b'phases': True,
    }
    return bundle2.writenewbundle(
        repo.ui,
        repo,
        b'strip',
        name,
        bundletype,
        outgoing,
        contentopts,
        vfs,
        compression=comp,
    )


def _collectfiles(repo, striprev):
    """find out the filelogs affected by the strip"""
    files = set()

    for x in range(striprev, len(repo)):
        files.update(repo[x].files())

    return sorted(files)


def _collectrevlog(revlog, striprev):
    _, brokenset = revlog.getstrippoint(striprev)
    return [revlog.linkrev(r) for r in brokenset]


def _collectbrokencsets(repo, files, striprev):
    """return the changesets which will be broken by the truncation"""
    s = set()

    for revlog in manifestrevlogs(repo):
        s.update(_collectrevlog(revlog, striprev))
    for fname in files:
        s.update(_collectrevlog(repo.file(fname), striprev))

    return s


def strip(ui, repo, nodelist, backup=True, topic=b'backup'):
    # This function requires the caller to lock the repo, but it operates
    # within a transaction of its own, and thus requires there to be no current
    # transaction when it is called.
    if repo.currenttransaction() is not None:
        raise error.ProgrammingError(b'cannot strip from inside a transaction')

    # Simple way to maintain backwards compatibility for this
    # argument.
    if backup in [b'none', b'strip']:
        backup = False

    repo = repo.unfiltered()
    repo.destroying()
    vfs = repo.vfs
    # load bookmarks before the changelog to avoid side effects from an
    # outdated changelog (see repo._refreshchangelog)
    repo._bookmarks
    cl = repo.changelog

    # TODO handle undo of merge sets
    if isinstance(nodelist, bytes):
        nodelist = [nodelist]
    striplist = [cl.rev(node) for node in nodelist]
    striprev = min(striplist)

    files = _collectfiles(repo, striprev)
    saverevs = _collectbrokencsets(repo, files, striprev)

    # Some revisions with rev > striprev may not be descendants of striprev.
    # We have to find these revisions and put them in a bundle, so that
    # we can restore them after the truncations.
    # To create the bundle we use backupbundle, which requires
    # the list of heads and bases of the set of interesting revisions.
    # (head = revision in the set that has no descendant in the set;
    #  base = revision in the set that has no ancestor in the set)
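    # For instance, in the graph 0-1-2-3 with an extra branch 4-5 forking
    # off rev 1, stripping rev 2 gives tostrip = {2, 3}; revs 4 and 5 must
    # be preserved, with saveheads = {5} and savebases = {4} (assuming no
    # extra revisions come in via _collectbrokencsets).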
    tostrip = set(striplist)
    saveheads = set(saverevs)
    for r in cl.revs(start=striprev + 1):
        if any(p in tostrip for p in cl.parentrevs(r)):
            tostrip.add(r)

        if r not in tostrip:
            saverevs.add(r)
            saveheads.difference_update(cl.parentrevs(r))
            saveheads.add(r)
    saveheads = [cl.node(r) for r in saveheads]

    # compute base nodes
    if saverevs:
        descendants = set(cl.descendants(saverevs))
        saverevs.difference_update(descendants)
    savebases = [cl.node(r) for r in saverevs]
    stripbases = [cl.node(r) for r in tostrip]

    stripobsidx = obsmarkers = ()
    if repo.ui.configbool(b'devel', b'strip-obsmarkers'):
        obsmarkers = obsutil.exclusivemarkers(repo, stripbases)
    if obsmarkers:
        stripobsidx = [
            i for i, m in enumerate(repo.obsstore) if m in obsmarkers
        ]

    newbmtarget, updatebm = _bookmarkmovements(repo, tostrip)

    backupfile = None
    node = nodelist[-1]
    if backup:
        backupfile = _createstripbackup(repo, stripbases, node, topic)
    # create a changegroup for all the branches we need to keep
    tmpbundlefile = None
    if saveheads:
        # do not compress temporary bundle if we remove it from disk later
        #
        # We do not include obsolescence markers, as they might re-introduce
        # prune markers we are trying to strip.  This is harmless since the
        # stripped markers are already backed up and we did not touch the
        # markers for the saved changesets.
        tmpbundlefile = backupbundle(
            repo,
            savebases,
            saveheads,
            node,
            b'temp',
            compress=False,
            obsolescence=False,
        )

    with ui.uninterruptible():
        try:
            with repo.transaction(b"strip") as tr:
                # TODO this code violates the interface abstraction of the
                # transaction and makes assumptions that file storage is
                # using append-only files. We'll need some kind of storage
                # API to handle stripping for us.
                oldfiles = set(tr._offsetmap.keys())
                oldfiles.update(tr._newfiles)

                tr.startgroup()
                cl.strip(striprev, tr)
                stripmanifest(repo, striprev, tr, files)

                for fn in files:
                    repo.file(fn).strip(striprev, tr)
                tr.endgroup()

                entries = tr.readjournal()

                for file, troffset in entries:
                    if file in oldfiles:
                        continue
                    with repo.svfs(file, b'a', checkambig=True) as fp:
                        fp.truncate(troffset)
                    if troffset == 0:
                        repo.store.markremoved(file)

                deleteobsmarkers(repo.obsstore, stripobsidx)
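                # drop the cached obsstore attribute so the next access
                # reloads it from the rewritten file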
                del repo.obsstore
                repo.invalidatevolatilesets()
                repo._phasecache.filterunknown(repo)

            if tmpbundlefile:
                ui.note(_(b"adding branch\n"))
                f = vfs.open(tmpbundlefile, b"rb")
                gen = exchange.readbundle(ui, f, tmpbundlefile, vfs)
                # silence internal shuffling chatter
                maybe_silent = (
                    repo.ui.silent()
                    if not repo.ui.verbose
                    else util.nullcontextmanager()
                )
                with maybe_silent:
                    tmpbundleurl = b'bundle:' + vfs.join(tmpbundlefile)
                    txnname = b'strip'
                    if not isinstance(gen, bundle2.unbundle20):
                        txnname = b"strip\n%s" % urlutil.hidepassword(
                            tmpbundleurl
                        )
                    with repo.transaction(txnname) as tr:
                        bundle2.applybundle(
                            repo, gen, tr, source=b'strip', url=tmpbundleurl
                        )
                f.close()

            with repo.transaction(b'repair') as tr:
                bmchanges = [(m, repo[newbmtarget].node()) for m in updatebm]
                repo._bookmarks.applychanges(repo, tr, bmchanges)

            transaction.cleanup_undo_files(repo.ui.warn, repo.vfs_map)

        except:  # re-raises
            if backupfile:
                ui.warn(
                    _(b"strip failed, backup bundle stored in '%s'\n")
                    % vfs.join(backupfile)
                )
            if tmpbundlefile:
                ui.warn(
                    _(b"strip failed, unrecovered changes stored in '%s'\n")
                    % vfs.join(tmpbundlefile)
                )
                ui.warn(
                    _(
                        b"(fix the problem, then recover the changesets with "
                        b"\"hg unbundle '%s'\")\n"
                    )
                    % vfs.join(tmpbundlefile)
                )
            raise
        else:
            if tmpbundlefile:
                # Remove temporary bundle only if there were no exceptions
                vfs.unlink(tmpbundlefile)

    repo.destroyed()
    # return the backup file path (or None if 'backup' was False) so
    # extensions can use it
    return backupfile


def softstrip(ui, repo, nodelist, backup=True, topic=b'backup'):
    """perform a "soft" strip using the archived phase"""
    tostrip = [c.node() for c in repo.set(b'sort(%ln::)', nodelist)]
    if not tostrip:
        return None

    backupfile = None
    if backup:
        node = tostrip[0]
        backupfile = _createstripbackup(repo, tostrip, node, topic)

    newbmtarget, updatebm = _bookmarkmovements(repo, tostrip)
    with repo.transaction(b'strip') as tr:
        phases.retractboundary(repo, tr, phases.archived, tostrip)
        bmchanges = [(m, repo[newbmtarget].node()) for m in updatebm]
        repo._bookmarks.applychanges(repo, tr, bmchanges)
    return backupfile


def _bookmarkmovements(repo, tostrip):
    # compute necessary bookmark movement
    bm = repo._bookmarks
    updatebm = []
    for m in bm:
        rev = repo[bm[m]].rev()
        if rev in tostrip:
            updatebm.append(m)
    newbmtarget = None
    # If we need to move bookmarks, compute bookmark
    # targets. Otherwise we can skip doing this logic.
    if updatebm:
        # For a set s, max(parents(s) - s) is the same as max(heads(::s - s)),
        # but is much faster
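        # e.g. for a linear history 0-1-2-3 with tostrip = {2, 3}:
        # parents({2, 3}) - {2, 3} = {1}, so bookmarks move to rev 1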
        newbmtarget = repo.revs(b'max(parents(%ld) - (%ld))', tostrip, tostrip)
        if newbmtarget:
            newbmtarget = repo[newbmtarget.first()].node()
        else:
            newbmtarget = b'.'
    return newbmtarget, updatebm


def _createstripbackup(repo, stripbases, node, topic):
    # backup the changeset we are about to strip
    vfs = repo.vfs
    cl = repo.changelog
    backupfile = backupbundle(repo, stripbases, cl.heads(), node, topic)
    repo.ui.status(_(b"saved backup bundle to %s\n") % vfs.join(backupfile))
    repo.ui.log(
        b"backupbundle", b"saved backup bundle to %s\n", vfs.join(backupfile)
    )
    return backupfile


def safestriproots(ui, repo, nodes):
    """return list of roots of nodes where descendants are covered by nodes"""
    torev = repo.unfiltered().changelog.rev
    revs = {torev(n) for n in nodes}
    # tostrip = wanted - unsafe = wanted - ancestors(orphaned)
    # orphaned = affected - wanted
    # affected = descendants(roots(wanted))
    # wanted = revs
    revset = b'%ld - ( ::( (roots(%ld):: and not _phase(%s)) -%ld) )'
    tostrip = set(repo.revs(revset, revs, revs, phases.internal, revs))
    notstrip = revs - tostrip
    if notstrip:
        nodestr = b', '.join(sorted(short(repo[n].node()) for n in notstrip))
        ui.warn(
            _(b'warning: orphaned descendants detected, not stripping %s\n')
            % nodestr
        )
    return [c.node() for c in repo.set(b'roots(%ld)', tostrip)]


class stripcallback:
    """used as a transaction postclose callback"""

    def __init__(self, ui, repo, backup, topic):
        self.ui = ui
        self.repo = repo
        self.backup = backup
        self.topic = topic or b'backup'
        self.nodelist = []

    def addnodes(self, nodes):
        self.nodelist.extend(nodes)

    def __call__(self, tr):
        roots = safestriproots(self.ui, self.repo, self.nodelist)
        if roots:
            strip(self.ui, self.repo, roots, self.backup, self.topic)


def delayedstrip(ui, repo, nodelist, topic=None, backup=True):
    """like strip, but works inside transaction and won't strip irreverent revs

    nodelist must explicitly contain all descendants. Otherwise a warning will
    be printed that some nodes are not stripped.

    Will do a backup if `backup` is True. The last non-None "topic" will be
    used as the backup topic name. The default backup topic name is "backup".
    """
    tr = repo.currenttransaction()
    if not tr:
        nodes = safestriproots(ui, repo, nodelist)
        return strip(ui, repo, nodes, backup=backup, topic=topic)
    # Transaction postclose callbacks are called in alphabetical order.
    # Use '\xff' as a prefix so we are likely to be called last.
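    # (0xff is larger than any ASCII byte, so b'\xffstrip' sorts after
    # every callback registered under a plain ASCII name)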
    callback = tr.getpostclose(b'\xffstrip')
    if callback is None:
        callback = stripcallback(ui, repo, backup=backup, topic=topic)
        tr.addpostclose(b'\xffstrip', callback)
    if topic:
        callback.topic = topic
    callback.addnodes(nodelist)
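

# Illustrative usage (hypothetical caller, not part of this module): an
# extension running inside a transaction could call
#
#     repair.delayedstrip(repo.ui, repo, [ctx.node()], topic=b'rewrite')
#
# and the actual strip would then run from the transaction's postclose
# callback, while the same call outside a transaction strips immediately.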


def stripmanifest(repo, striprev, tr, files):
    for revlog in manifestrevlogs(repo):
        revlog.strip(striprev, tr)


def manifestrevlogs(repo):
    yield repo.manifestlog.getstorage(b'')
    if scmutil.istreemanifest(repo):
        # This logic is safe if treemanifest isn't enabled, but also
        # pointless, so we skip it if treemanifest isn't enabled.
        for t, unencoded, size in repo.store.datafiles():
            if unencoded.startswith(b'meta/') and unencoded.endswith(
                b'00manifest.i'
            ):
                dir = unencoded[5:-12]
                yield repo.manifestlog.getstorage(dir)


def rebuildfncache(ui, repo, only_data=False):
    """Rebuilds the fncache file from repo history.

    Missing entries will be added. Extra entries will be removed.
    """
    repo = repo.unfiltered()

    if requirements.FNCACHE_REQUIREMENT not in repo.requirements:
        ui.warn(
            _(
                b'(not rebuilding fncache because repository does not '
                b'support fncache)\n'
            )
        )
        return

    with repo.lock():
        fnc = repo.store.fncache
        fnc.ensureloaded(warn=ui.warn)

        oldentries = set(fnc.entries)
        newentries = set()
        seenfiles = set()

        if only_data:
            # Trust the listing of .i files from the fncache, but not the .d
            # files. This is much faster because we only need to stat every
            # possible .d file, instead of reading the full changelog.
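            # e.g. the entry b'data/foo/bar.py.i' records b'foo/bar.py' in
            # seenfiles and probes for a matching b'data/foo/bar.py.d'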
            for f in fnc:
                if f[:5] == b'data/' and f[-2:] == b'.i':
                    seenfiles.add(f[5:-2])
                    newentries.add(f)
                    dataf = f[:-2] + b'.d'
                    if repo.store._exists(dataf):
                        newentries.add(dataf)
        else:
            progress = ui.makeprogress(
                _(b'rebuilding'), unit=_(b'changesets'), total=len(repo)
            )
            for rev in repo:
                progress.update(rev)

                ctx = repo[rev]
                for f in ctx.files():
                    # This is to minimize I/O.
                    if f in seenfiles:
                        continue
                    seenfiles.add(f)

                    i = b'data/%s.i' % f
                    d = b'data/%s.d' % f

                    if repo.store._exists(i):
                        newentries.add(i)
                    if repo.store._exists(d):
                        newentries.add(d)

            progress.complete()

        if requirements.TREEMANIFEST_REQUIREMENT in repo.requirements:
            # This logic is safe if treemanifest isn't enabled, but also
            # pointless, so we skip it if treemanifest isn't enabled.
            for dir in pathutil.dirs(seenfiles):
                i = b'meta/%s/00manifest.i' % dir
                d = b'meta/%s/00manifest.d' % dir

                if repo.store._exists(i):
                    newentries.add(i)
                if repo.store._exists(d):
                    newentries.add(d)

        addcount = len(newentries - oldentries)
        removecount = len(oldentries - newentries)
        for p in sorted(oldentries - newentries):
            ui.write(_(b'removing %s\n') % p)
        for p in sorted(newentries - oldentries):
            ui.write(_(b'adding %s\n') % p)

        if addcount or removecount:
            ui.write(
                _(b'%d items added, %d removed from fncache\n')
                % (addcount, removecount)
            )
            fnc.entries = newentries
            fnc._dirty = True

            with repo.transaction(b'fncache') as tr:
                fnc.write(tr)
        else:
            ui.write(_(b'fncache already up to date\n'))


def deleteobsmarkers(obsstore, indices):
    """Delete some obsmarkers from obsstore and return how many were deleted

    'indices' is a list of ints which are the indices
    of the markers to be deleted.

    Every invocation of this function completely rewrites the obsstore file,
    skipping the markers we want removed. A new temporary file is created,
    the remaining markers are written there, and on .close() that file is
    atomically renamed to obsstore, guaranteeing consistency."""
    if not indices:
        # we don't want to rewrite the obsstore with the same content
        return 0

    left = []
    current = obsstore._all
    n = 0
    for i, m in enumerate(current):
        if i in indices:
            n += 1
            continue
        left.append(m)

    newobsstorefile = obsstore.svfs(b'obsstore', b'w', atomictemp=True)
    for data in obsolete.encodemarkers(left, True, obsstore._version):
        newobsstorefile.write(data)
    newobsstorefile.close()
    return n