mercurial/bundlerepo.py @ 31765:264baeef3588

show: new extension for displaying various repository data

Currently, Mercurial has a number of commands to show information, and there are features coming down the pipe that will introduce more commands for showing information.

Currently, when introducing a new class of data or a view that we wish to expose to the user, the strategy is to introduce a new command or overload an existing command, sometimes both. For example, there is a desire to formalize the wip/smartlog/underway/mine functionality that many have devised. There is also a desire to introduce a "topics" concept. Others would like views of "the current stack." In the current model, we'd need a new command for wip/smartlog/etc (that behaves a lot like a pre-defined alias of `hg log`). For topics, we'd likely overload `hg topic[s]` to both display and manipulate topics.

Adding new commands for every pre-defined query doesn't scale well and pollutes `hg help`. Overloading commands to perform read-only and write operations is arguably a UX anti-pattern: while having all functionality for a given concept in one command is nice, having a single command perform multiple discrete operations is not. Furthermore, a user may be surprised that a command they thought was read-only actually changes something.

We discussed this at the Mercurial 4.0 Sprint in Paris and decided that having a single command from which we could hang pre-defined views of various data would be a good idea. Having such a command would:

* Help prevent an explosion of new query-related commands
* Create a clear separation between read and write operations (mitigates footguns)
* Avoid overloading the meaning of commands that manipulate data (bookmark, tag, branch, etc) (while we can't take away the existing behavior for BC reasons, we now won't introduce this behavior on new commands)
* Allow users to discover informational views more easily by aggregating them in a single location
* Lower the barrier to creating new views (since the barrier to creating a top-level command is relatively high)

So, this commit introduces the `hg show` command via the "show" extension. This command accepts a positional argument naming the "view" to show. New views can be registered with a decorator. To prove it works, we implement the "bookmarks" view, which shows a table of bookmarks and their associated nodes.

We introduce a new style to hold everything used by `hg show`. For our initial bookmarks view, the output varies from `hg bookmarks`:

* Padding is performed in the template itself as opposed to Python
* Revision integers are not shown
* shortest() is used to display a 5 character node by default (as opposed to a static 12 characters)

I chose to implement the "bookmarks" view first because it is simple and shouldn't invite too much bikeshedding that detracts from the evaluation of `hg show` itself. But there is an important point to consider: we now have 2 ways to show a list of bookmarks. I'm not a fan of introducing multiple ways to do very similar things, so it might be worth discussing how we wish to tackle this issue for bookmarks, tags, branches, MQ series, etc.

I also made the choice of explicitly declaring the default show template not part of the standard BC guarantees. History has shown that we make mistakes and poor choices with output formatting but can't fix these mistakes later because random tools are parsing the output and we don't want to break these tools. Optimizing for human consumption is one of my goals for `hg show`. So, by not covering the formatting as part of BC, the barrier to future change is much lower and humans benefit.

There are some improvements that can be made to formatting. For example, we don't yet use label() in the templates. We obviously want this for color. But I'm not sure if we should reuse the existing log.* labels or invent new ones. I figure we can punt that to a follow-up.

At the aforementioned Sprint, we discussed and discarded various alternatives to `hg show`. We considered making `hg log <view>` perform this behavior. The main reason we can't do this is that a positional argument to `hg log` can be a file path, and if there is a conflict between a path name and a view name, behavior is ambiguous. We could have introduced `hg log --view` or similar, but we felt that required too much typing (we don't want to require a command flag to show a view) and wasn't very discoverable. Furthermore, `hg log` is optimized for showing changelog data and there are things that `hg display` could display that aren't changelog centric.

There were concerns about using "show" as the command name. Some users already have a "show" alias that is similar to `hg export`. There were also concerns that Git users adapted to `git show` would be confused by `hg show`'s different behavior. The main difference here is that `git show` prints an `hg export`-like view of the current commit by default, while `hg show` requires an argument. `git show` can also display any Git object, but it does not support displaying more complex views: just single objects. If we implemented `hg show <hash>` or `hg show <identifier>`, `hg show` would be a superset of `git show`. However, I'm hesitant to do that at this time because I view `hg show` as a higher-level querying command and there are namespace collisions between valid identifiers and registered views. There is also a prefix collision with `hg showconfig`, which is an alias of `hg config`.

We also considered `hg view`, but that is already used by the "hgk" extension. `hg display` was also proposed at one point; it has a prefix collision with `hg diff`. General consensus was that "show" or "view" are the best verbs, and since "view" was taken, "show" was chosen.

There are a number of inline TODOs in this patch. Some of these represent decisions yet to be made. Others represent features requiring non-trivial complexity. Rather than bloat the patch or invite additional bikeshedding, I figured I'd document future enhancements via TODO so we can get a minimal implementation landed. Something is better than nothing.
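As an illustration of the decorator-based view registration described in the message above, here is a minimal, self-contained Python sketch. The names _views, showview and showbookmarks are invented for illustration only and are not the extension's actual API.

    # Hypothetical sketch of a decorator-backed view registry; names are
    # illustrative and do not match the extension's actual API.
    _views = {}

    def showview(name):
        """Register a function under a view name."""
        def register(fn):
            _views[name] = fn
            return fn
        return register

    @showview('bookmarks')
    def showbookmarks(bookmarks):
        """Print each bookmark with a shortened node, one per line."""
        width = max(len(b) for b in bookmarks)
        for bookmark, node in sorted(bookmarks.items()):
            print('%-*s %s' % (width, bookmark, node[:5]))

    # `hg show <view>` would then look up the requested view and call it:
    _views['bookmarks']({'@': '1234567890ab', 'feature-x': 'abcdef012345'})

The extension itself hooks into Mercurial's command and templating infrastructure; the sketch only illustrates the register-by-decorator, dispatch-by-name pattern.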
author Gregory Szorc <gregory.szorc@gmail.com>
date Fri, 24 Mar 2017 19:19:00 -0700
parents 2c4295773436
children 433ab46f6bb4

# bundlerepo.py - repository class for viewing uncompressed bundles
#
# Copyright 2006, 2007 Benoit Boissinot <bboissin@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Repository class for viewing uncompressed bundles.

This provides a read-only repository interface to bundles as if they
were part of the actual repository.
"""

from __future__ import absolute_import

import os
import shutil
import tempfile

from .i18n import _
from .node import nullid

from . import (
    bundle2,
    changegroup,
    changelog,
    cmdutil,
    discovery,
    error,
    exchange,
    filelog,
    localrepo,
    manifest,
    mdiff,
    node as nodemod,
    pathutil,
    phases,
    pycompat,
    revlog,
    util,
    vfs as vfsmod,
)

class bundlerevlog(revlog.revlog):
    def __init__(self, opener, indexfile, bundle, linkmapper):
        # How it works:
        # To retrieve a revision, we need to know the offset of the revision in
        # the bundle (an unbundle object). We store this offset in the index
        # (start). The base of the delta is stored in the base field.
        #
        # To differentiate a rev in the bundle from a rev in the revlog, we
        # check revision against repotiprev.
        opener = vfsmod.readonlyvfs(opener)
        revlog.revlog.__init__(self, opener, indexfile)
        self.bundle = bundle
        n = len(self)
        self.repotiprev = n - 1
        chain = None
        self.bundlerevs = set() # used by 'bundle()' revset expression
        getchunk = lambda: bundle.deltachunk(chain)
        for chunkdata in iter(getchunk, {}):
            node = chunkdata['node']
            p1 = chunkdata['p1']
            p2 = chunkdata['p2']
            cs = chunkdata['cs']
            deltabase = chunkdata['deltabase']
            delta = chunkdata['delta']

            size = len(delta)
            start = bundle.tell() - size

            link = linkmapper(cs)
            if node in self.nodemap:
                # this can happen if two branches make the same change
                chain = node
                self.bundlerevs.add(self.nodemap[node])
                continue

            for p in (p1, p2):
                if p not in self.nodemap:
                    raise error.LookupError(p, self.indexfile,
                                            _("unknown parent"))

            if deltabase not in self.nodemap:
                raise error.LookupError(deltabase, self.indexfile,
                                        _('unknown delta base'))

            baserev = self.rev(deltabase)
            # start, size, full unc. size, base (unused), link, p1, p2, node
            e = (revlog.offset_type(start, 0), size, -1, baserev, link,
                 self.rev(p1), self.rev(p2), node)
            self.index.insert(-1, e)
            self.nodemap[node] = n
            self.bundlerevs.add(n)
            chain = node
            n += 1

    def _chunk(self, rev):
        # Warning: in the case of a bundle, the diff is against what we stored
        # as the delta base, not against rev - 1
        # XXX: could use some caching
        if rev <= self.repotiprev:
            return revlog.revlog._chunk(self, rev)
        self.bundle.seek(self.start(rev))
        return self.bundle.read(self.length(rev))

    def revdiff(self, rev1, rev2):
        """return or calculate a delta between two revisions"""
        if rev1 > self.repotiprev and rev2 > self.repotiprev:
            # hot path for bundle
            revb = self.index[rev2][3]
            if revb == rev1:
                return self._chunk(rev2)
        elif rev1 <= self.repotiprev and rev2 <= self.repotiprev:
            return revlog.revlog.revdiff(self, rev1, rev2)

        return mdiff.textdiff(self.revision(rev1), self.revision(rev2))

    def revision(self, nodeorrev, raw=False):
        """return an uncompressed revision of a given node or revision
        number.
        """
        if isinstance(nodeorrev, int):
            rev = nodeorrev
            node = self.node(rev)
        else:
            node = nodeorrev
            rev = self.rev(node)

        if node == nullid:
            return ""

        text = None
        chain = []
        iterrev = rev
        # reconstruct the revision if it is from a changegroup
        while iterrev > self.repotiprev:
            if self._cache and self._cache[1] == iterrev:
                text = self._cache[2]
                break
            chain.append(iterrev)
            iterrev = self.index[iterrev][3]
        if text is None:
            text = self.baserevision(iterrev)

        while chain:
            delta = self._chunk(chain.pop())
            text = mdiff.patches(text, [delta])

        text, validatehash = self._processflags(text, self.flags(rev),
                                                'read', raw=raw)
        if validatehash:
            self.checkhash(text, node, rev=rev)
        self._cache = (node, rev, text)
        return text

    def baserevision(self, nodeorrev):
        # Revlog subclasses may override the 'revision' method to modify the
        # format of content retrieved from the revlog. To use bundlerevlog with
        # such a class, one needs to override 'baserevision' and make a more
        # specific call here.
        return revlog.revlog.revision(self, nodeorrev)

    def addrevision(self, text, transaction, link, p1=None, p2=None, d=None):
        raise NotImplementedError
    def addgroup(self, revs, linkmapper, transaction):
        raise NotImplementedError
    def strip(self, rev, minlink):
        raise NotImplementedError
    def checksize(self):
        raise NotImplementedError

class bundlechangelog(bundlerevlog, changelog.changelog):
    def __init__(self, opener, bundle):
        changelog.changelog.__init__(self, opener)
        linkmapper = lambda x: x
        bundlerevlog.__init__(self, opener, self.indexfile, bundle,
                              linkmapper)

    def baserevision(self, nodeorrev):
        # Although changelog doesn't override the 'revision' method, some
        # extensions may replace this class with another that does. The same
        # applies to the manifest and filelog classes.

        # This bypasses filtering on changelog.node() and rev() because we need
        # revision text of the bundle base even if it is hidden.
        oldfilter = self.filteredrevs
        try:
            self.filteredrevs = ()
            return changelog.changelog.revision(self, nodeorrev)
        finally:
            self.filteredrevs = oldfilter

class bundlemanifest(bundlerevlog, manifest.manifestrevlog):
    def __init__(self, opener, bundle, linkmapper, dirlogstarts=None, dir=''):
        manifest.manifestrevlog.__init__(self, opener, dir=dir)
        bundlerevlog.__init__(self, opener, self.indexfile, bundle,
                              linkmapper)
        if dirlogstarts is None:
            dirlogstarts = {}
            if self.bundle.version == "03":
                dirlogstarts = _getfilestarts(self.bundle)
        self._dirlogstarts = dirlogstarts
        self._linkmapper = linkmapper

    def baserevision(self, nodeorrev):
        node = nodeorrev
        if isinstance(node, int):
            node = self.node(node)

        if node in self.fulltextcache:
            result = '%s' % self.fulltextcache[node]
        else:
            result = manifest.manifestrevlog.revision(self, nodeorrev)
        return result

    def dirlog(self, d):
        if d in self._dirlogstarts:
            self.bundle.seek(self._dirlogstarts[d])
            return bundlemanifest(
                self.opener, self.bundle, self._linkmapper,
                self._dirlogstarts, dir=d)
        return super(bundlemanifest, self).dirlog(d)

class bundlefilelog(bundlerevlog, filelog.filelog):
    def __init__(self, opener, path, bundle, linkmapper):
        filelog.filelog.__init__(self, opener, path)
        bundlerevlog.__init__(self, opener, self.indexfile, bundle,
                              linkmapper)

    def baserevision(self, nodeorrev):
        return filelog.filelog.revision(self, nodeorrev)

class bundlepeer(localrepo.localpeer):
    def canpush(self):
        return False

class bundlephasecache(phases.phasecache):
    def __init__(self, *args, **kwargs):
        super(bundlephasecache, self).__init__(*args, **kwargs)
        if util.safehasattr(self, 'opener'):
            self.opener = vfsmod.readonlyvfs(self.opener)

    def write(self):
        raise NotImplementedError

    def _write(self, fp):
        raise NotImplementedError

    def _updateroots(self, phase, newroots, tr):
        self.phaseroots[phase] = newroots
        self.invalidate()
        self.dirty = True

def _getfilestarts(bundle):
    bundlefilespos = {}
    for chunkdata in iter(bundle.filelogheader, {}):
        fname = chunkdata['filename']
        bundlefilespos[fname] = bundle.tell()
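        # skip over this file's delta chunks so the stream is positioned at
        # the next filelog header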
        for chunk in iter(lambda: bundle.deltachunk(None), {}):
            pass
    return bundlefilespos

class bundlerepository(localrepo.localrepository):
    def __init__(self, ui, path, bundlename):
        def _writetempbundle(read, suffix, header=''):
            """Write a temporary file to disk

            This is a closure because we need to make sure it is tracked by
            self.tempfile for cleanup purposes."""
            fdtemp, temp = self.vfs.mkstemp(prefix="hg-bundle-",
                                            suffix=".hg10un")
            self.tempfile = temp

            with os.fdopen(fdtemp, pycompat.sysstr('wb')) as fptemp:
                fptemp.write(header)
                while True:
                    chunk = read(2**18)
                    if not chunk:
                        break
                    fptemp.write(chunk)

            return self.vfs.open(self.tempfile, mode="rb")
        self._tempparent = None
        try:
            localrepo.localrepository.__init__(self, ui, path)
        except error.RepoError:
            self._tempparent = tempfile.mkdtemp()
            localrepo.instance(ui, self._tempparent, 1)
            localrepo.localrepository.__init__(self, ui, self._tempparent)
        self.ui.setconfig('phases', 'publish', False, 'bundlerepo')

        if path:
            self._url = 'bundle:' + util.expandpath(path) + '+' + bundlename
        else:
            self._url = 'bundle:' + bundlename

        self.tempfile = None
        f = util.posixfile(bundlename, "rb")
        self.bundlefile = self.bundle = exchange.readbundle(ui, f, bundlename)

        if isinstance(self.bundle, bundle2.unbundle20):
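            # bundle2: locate the single changegroup part, check that its
            # version is supported, and spool it to an uncompressed temp file
            # if the bundle is compressed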
            cgstream = None
            for part in self.bundle.iterparts():
                if part.type == 'changegroup':
                    if cgstream is not None:
                        raise NotImplementedError("can't process "
                                                  "multiple changegroups")
                    cgstream = part
                    version = part.params.get('version', '01')
                    legalcgvers = changegroup.supportedincomingversions(self)
                    if version not in legalcgvers:
                        msg = _('Unsupported changegroup version: %s')
                        raise error.Abort(msg % version)
                    if self.bundle.compressed():
                        cgstream = _writetempbundle(part.read,
                                                    ".cg%sun" % version)

            if cgstream is None:
                raise error.Abort(_('No changegroups found'))
            cgstream.seek(0)

            self.bundle = changegroup.getunbundler(version, cgstream, 'UN')

        elif self.bundle.compressed():
            f = _writetempbundle(self.bundle.read, '.hg10un', header='HG10UN')
            self.bundlefile = self.bundle = exchange.readbundle(ui, f,
                                                                bundlename,
                                                                self.vfs)

        # dict with the mapping 'filename' -> position in the bundle
        self.bundlefilespos = {}

        self.firstnewrev = self.changelog.repotiprev + 1
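        # treat changesets that only exist in the bundle as draft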
        phases.retractboundary(self, None, phases.draft,
                               [ctx.node() for ctx in self[self.firstnewrev:]])

    @localrepo.unfilteredpropertycache
    def _phasecache(self):
        return bundlephasecache(self, self._phasedefaults)

    @localrepo.unfilteredpropertycache
    def changelog(self):
        # consume the header if it exists
        self.bundle.changelogheader()
        c = bundlechangelog(self.svfs, self.bundle)
        self.manstart = self.bundle.tell()
        return c

    def _constructmanifest(self):
        self.bundle.seek(self.manstart)
        # consume the header if it exists
        self.bundle.manifestheader()
        linkmapper = self.unfiltered().changelog.rev
        m = bundlemanifest(self.svfs, self.bundle, linkmapper)
        self.filestart = self.bundle.tell()
        return m

    @localrepo.unfilteredpropertycache
    def manstart(self):
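        # accessing the changelog propertycache constructs the bundle
        # changelog and, as a side effect, records the manifest start offset
        # in self.manstart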
        self.changelog
        return self.manstart

    @localrepo.unfilteredpropertycache
    def filestart(self):
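        # accessing manifestlog ends up calling _constructmanifest(), which
        # records the filelog start offset in self.filestart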
        self.manifestlog
        return self.filestart

    def url(self):
        return self._url

    def file(self, f):
        if not self.bundlefilespos:
            self.bundle.seek(self.filestart)
            self.bundlefilespos = _getfilestarts(self.bundle)

        if f in self.bundlefilespos:
            self.bundle.seek(self.bundlefilespos[f])
            linkmapper = self.unfiltered().changelog.rev
            return bundlefilelog(self.svfs, f, self.bundle, linkmapper)
        else:
            return filelog.filelog(self.svfs, f)

    def close(self):
        """Close assigned bundle file immediately."""
        self.bundlefile.close()
        if self.tempfile is not None:
            self.vfs.unlink(self.tempfile)
        if self._tempparent:
            shutil.rmtree(self._tempparent, True)

    def cancopy(self):
        return False

    def peer(self):
        return bundlepeer(self)

    def getcwd(self):
        return pycompat.getcwd() # always outside the repo

    # Check if parents exist in localrepo before setting
    def setparents(self, p1, p2=nullid):
        p1rev = self.changelog.rev(p1)
        p2rev = self.changelog.rev(p2)
        msg = _("setting parent to node %s that only exists in the bundle\n")
        if self.changelog.repotiprev < p1rev:
            self.ui.warn(msg % nodemod.hex(p1))
        if self.changelog.repotiprev < p2rev:
            self.ui.warn(msg % nodemod.hex(p2))
        return super(bundlerepository, self).setparents(p1, p2)

def instance(ui, path, create):
    if create:
        raise error.Abort(_('cannot create new bundle repository'))
    # internal config: bundle.mainreporoot
    parentpath = ui.config("bundle", "mainreporoot", "")
    if not parentpath:
        # try to find the correct path to the working directory repo
        parentpath = cmdutil.findrepo(pycompat.getcwd())
        if parentpath is None:
            parentpath = ''
    if parentpath:
        # Try to make the full path relative so we get a nice, short URL.
        # In particular, we don't want temp dir names in test outputs.
        cwd = pycompat.getcwd()
        if parentpath == cwd:
            parentpath = ''
        else:
            cwd = pathutil.normasprefix(cwd)
            if parentpath.startswith(cwd):
                parentpath = parentpath[len(cwd):]
    u = util.url(path)
    path = u.localpath()
    if u.scheme == 'bundle':
        s = path.split("+", 1)
        if len(s) == 1:
            repopath, bundlename = parentpath, s[0]
        else:
            repopath, bundlename = s
    else:
        repopath, bundlename = parentpath, path
    return bundlerepository(ui, repopath, bundlename)

class bundletransactionmanager(object):
    def transaction(self):
        return None

    def close(self):
        raise NotImplementedError

    def release(self):
        raise NotImplementedError

def getremotechanges(ui, repo, other, onlyheads=None, bundlename=None,
                     force=False):
    '''obtains a bundle of changes incoming from other

    "onlyheads" restricts the returned changes to those reachable from the
      specified heads.
    "bundlename", if given, stores the bundle to this file path permanently;
      otherwise it's stored to a temp file and gets deleted again when you call
      the returned "cleanupfn".
    "force" indicates whether to proceed on unrelated repos.

    Returns a tuple (local, csets, cleanupfn):

    "local" is a local repo from which to obtain the actual incoming
      changesets; it is a bundlerepo for the obtained bundle when the
      original "other" is remote.
    "csets" lists the incoming changeset node ids.
    "cleanupfn" must be called without arguments when you're done processing
      the changes; it closes both the original "other" and the one returned
      here.
    '''
    tmp = discovery.findcommonincoming(repo, other, heads=onlyheads,
                                       force=force)
    common, incoming, rheads = tmp
    if not incoming:
        try:
            if bundlename:
                os.unlink(bundlename)
        except OSError:
            pass
        return repo, [], other.close

    commonset = set(common)
    rheads = [x for x in rheads if x not in commonset]

    bundle = None
    bundlerepo = None
    localrepo = other.local()
    if bundlename or not localrepo:
        # create a bundle (uncompressed if other repo is not local)

        # developer config: devel.legacy.exchange
        legexc = ui.configlist('devel', 'legacy.exchange')
        forcebundle1 = 'bundle2' not in legexc and 'bundle1' in legexc
        canbundle2 = (not forcebundle1
                      and other.capable('getbundle')
                      and other.capable('bundle2'))
        if canbundle2:
            kwargs = {}
            kwargs['common'] = common
            kwargs['heads'] = rheads
            kwargs['bundlecaps'] = exchange.caps20to10(repo)
            kwargs['cg'] = True
            b2 = other.getbundle('incoming', **kwargs)
            fname = bundle = changegroup.writechunks(ui, b2._forwardchunks(),
                                                     bundlename)
        else:
            if other.capable('getbundle'):
                cg = other.getbundle('incoming', common=common, heads=rheads)
            elif onlyheads is None and not other.capable('changegroupsubset'):
                # compat with older servers when pulling all remote heads
                cg = other.changegroup(incoming, "incoming")
                rheads = None
            else:
                cg = other.changegroupsubset(incoming, rheads, 'incoming')
            if localrepo:
                bundletype = "HG10BZ"
            else:
                bundletype = "HG10UN"
            fname = bundle = bundle2.writebundle(ui, cg, bundlename,
                                                 bundletype)
        # keep written bundle?
        if bundlename:
            bundle = None
        if not localrepo:
            # use the created uncompressed bundlerepo
            localrepo = bundlerepo = bundlerepository(repo.baseui, repo.root,
                                                      fname)
            # this repo contains local and other now, so filter out local again
            common = repo.heads()
    if localrepo:
        # Part of common may be remotely filtered, so use an unfiltered
        # version. The discovery process probably needs cleanup to avoid that.
        localrepo = localrepo.unfiltered()

    csets = localrepo.changelog.findmissing(common, rheads)

    if bundlerepo:
        reponodes = [ctx.node() for ctx in bundlerepo[bundlerepo.firstnewrev:]]
        remotephases = other.listkeys('phases')

        pullop = exchange.pulloperation(bundlerepo, other, heads=reponodes)
        pullop.trmanager = bundletransactionmanager()
        exchange._pullapplyphases(pullop, remotephases)

    def cleanup():
        if bundlerepo:
            bundlerepo.close()
        if bundle:
            os.unlink(bundle)
        other.close()

    return (localrepo, csets, cleanup)