view mercurial/exchangev2.py @ 46568:0d840b9d200d

copies-rust: track "overwrites" directly within CopySource

Overwrites are "rare enough" that explicitly keeping track of them is going
to be "cheap", or at least much cheaper than issuing many `is_ancestor`
calls. Even a simple implementation using no specific optimization (e.g.
using the generic HashSet type) yields good results in most cases. There are
interesting optimizations we can do on top of that; we will implement them in
later changesets.

We tried different approaches to speed up the overwrite detection and this
one seems the most promising. Without further optimization, we already see
sizable speedups in various cases:

Repo         Case                                    Source-Rev   Dest-Rev     # of revisions  old time    new time    Difference   Factor  time per rev
---------------------------------------------------------------------------------------------------------------------------------------------------------
mozilla-try  x00000_revs_x_added_0_copies            6a320851d377 1ebb79acd503 : 363753 revs, 5.138169 s, 4.482399 s, -0.655770 s, × 0.8724, 12 µs/rev
mozilla-try  x00000_revs_x_added_x_copies            5173c4b6f97c 95d83ee7242d : 362229 revs, 5.127809 s, 4.480366 s, -0.647443 s, × 0.8737, 12 µs/rev
mozilla-try  x00000_revs_x000_added_x_copies         9126823d0e9c ca82787bb23c : 359344 revs, 4.971136 s, 4.369070 s, -0.602066 s, × 0.8789, 12 µs/rev
mozilla-try  x00000_revs_x0000_added_x0000_copies    8d3fafa80d4b eb884023b810 : 192665 revs, 1.741678 s, 1.592506 s, -0.149172 s, × 0.9144, 8 µs/rev

However, some of the cases that do a lot of overwrites get significantly
slower. The ones with a really problematic slowdown are the special "head
reducing" merges in mozilla-try, so I am not too worried about them. In
addition, further changesets are going to improve the performance of all
this.

Repo         Case                                    Source-Rev   Dest-Rev     # of revisions  old time     new time     Difference    Factor  time per rev
-----------------------------------------------------------------------------------------------------------------------------------------------------------
mozilla-try  x0000_revs_xx000_added_x000_copies      89294cd501d9 7ccb2fc7ccb5 : 97052 revs, 1.343373 s, 2.119204 s, +0.775831 s, × 1.5775, 21 µs/rev
mozilla-try  x00000_revs_x00000_added_x0000_copies   1b661134e2ca 1ae03d022d6d : 228985 revs, 40.314822 s, 87.824489 s, +47.509667 s, × 2.1785, 383 µs/rev
mozilla-try  x00000_revs_x00000_added_x000_copies    9b2a99adc05e 8e29777b48e6 : 382065 revs, 20.048029 s, 43.304637 s, +23.256608 s, × 2.1600, 113 µs/rev

Full benchmark below:

Repo             Case                                    Source-Rev   Dest-Rev     # of revisions  old time     new time     Difference    Factor  time per rev
----------------------------------------------------------------------------------------------------------------------------------------------------------------
mercurial        x_revs_x_added_0_copies                 ad6b123de1c7 39cfcef4f463 : 1 revs, 0.000042 s, 0.000043 s, +0.000001 s, × 1.0238, 43 µs/rev
mercurial        x_revs_x_added_x_copies                 2b1c78674230 0c1d10351869 : 6 revs, 0.000110 s, 0.000114 s, +0.000004 s, × 1.0364, 19 µs/rev
mercurial        x000_revs_x000_added_x_copies           81f8ff2a9bf2 dd3267698d84 : 1032 revs, 0.004945 s, 0.004937 s, -0.000008 s, × 0.9984, 4 µs/rev
pypy             x_revs_x_added_0_copies                 aed021ee8ae8 099ed31b181b : 9 revs, 0.000192 s, 0.000339 s, +0.000147 s, × 1.7656, 37 µs/rev
pypy             x_revs_x000_added_0_copies              4aa4e1f8e19a 359343b9ac0e : 1 revs, 0.000049 s, 0.000049 s, +0.000000 s, × 1.0000, 49 µs/rev
pypy             x_revs_x_added_x_copies                 ac52eb7bbbb0 72e022663155 : 7 revs, 0.000112 s, 0.000202 s, +0.000090 s, × 1.8036, 28 µs/rev
pypy             x_revs_x00_added_x_copies               c3b14617fbd7 ace7255d9a26 : 1 revs, 0.000323 s, 0.000409 s, +0.000086 s, × 1.2663, 409 µs/rev
pypy             x_revs_x000_added_x000_copies           df6f7a526b60 a83dc6a2d56f : 6 revs, 0.010042 s, 0.011984 s, +0.001942 s, × 1.1934, 1997 µs/rev
pypy             x000_revs_xx00_added_0_copies           89a76aede314 2f22446ff07e : 4785 revs, 0.049813 s, 0.050820 s, +0.001007 s, × 1.0202, 10 µs/rev
pypy             x000_revs_x000_added_x_copies           8a3b5bfd266e 2c68e87c3efe : 6780 revs, 0.079937 s, 0.087953 s, +0.008016 s, × 1.1003, 12 µs/rev
pypy             x000_revs_x000_added_x000_copies        89a76aede314 7b3dda341c84 : 5441 revs, 0.059412 s, 0.062902 s, +0.003490 s, × 1.0587, 11 µs/rev
pypy             x0000_revs_x_added_0_copies             d1defd0dc478 c9cb1334cc78 : 43645 revs, 0.533769 s, 0.679234 s, +0.145465 s, × 1.2725, 15 µs/rev
pypy             x0000_revs_xx000_added_0_copies         bf2c629d0071 4ffed77c095c : 2 revs, 0.013147 s, 0.013095 s, -0.000052 s, × 0.9960, 6547 µs/rev
pypy             x0000_revs_xx000_added_x000_copies      08ea3258278e d9fa043f30c0 : 11316 revs, 0.110680 s, 0.120910 s, +0.010230 s, × 1.0924, 10 µs/rev
netbeans         x_revs_x_added_0_copies                 fb0955ffcbcd a01e9239f9e7 : 2 revs, 0.000085 s, 0.000087 s, +0.000002 s, × 1.0235, 43 µs/rev
netbeans         x_revs_x000_added_0_copies              6f360122949f 20eb231cc7d0 : 2 revs, 0.000107 s, 0.000107 s, +0.000000 s, × 1.0000, 53 µs/rev
netbeans         x_revs_x_added_x_copies                 1ada3faf6fb6 5a39d12eecf4 : 3 revs, 0.000175 s, 0.000186 s, +0.000011 s, × 1.0629, 62 µs/rev
netbeans         x_revs_x00_added_x_copies               35be93ba1e2c 9eec5e90c05f : 9 revs, 0.000720 s, 0.000754 s, +0.000034 s, × 1.0472, 83 µs/rev
netbeans         x000_revs_xx00_added_0_copies           eac3045b4fdd 51d4ae7f1290 : 1421 revs, 0.010019 s, 0.010443 s, +0.000424 s, × 1.0423, 7 µs/rev
netbeans         x000_revs_x000_added_x_copies           e2063d266acd 6081d72689dc : 1533 revs, 0.015602 s, 0.015697 s, +0.000095 s, × 1.0061, 10 µs/rev
netbeans         x000_revs_x000_added_x000_copies        ff453e9fee32 411350406ec2 : 5750 revs, 0.058759 s, 0.063528 s, +0.004769 s, × 1.0812, 11 µs/rev
netbeans         x0000_revs_xx000_added_x000_copies      588c2d1ced70 1aad62e59ddd : 66949 revs, 0.491550 s, 0.545515 s, +0.053965 s, × 1.1098, 8 µs/rev
mozilla-central  x_revs_x_added_0_copies                 3697f962bb7b 7015fcdd43a2 : 2 revs, 0.000087 s, 0.000089 s, +0.000002 s, × 1.0230, 44 µs/rev
mozilla-central  x_revs_x000_added_0_copies              dd390860c6c9 40d0c5bed75d : 8 revs, 0.000268 s, 0.000265 s, -0.000003 s, × 0.9888, 33 µs/rev
mozilla-central  x_revs_x_added_x_copies                 8d198483ae3b 14207ffc2b2f : 9 revs, 0.000181 s, 0.000381 s, +0.000200 s, × 2.1050, 42 µs/rev
mozilla-central  x_revs_x00_added_x_copies               98cbc58cc6bc 446a150332c3 : 7 revs, 0.000661 s, 0.000672 s, +0.000011 s, × 1.0166, 96 µs/rev
mozilla-central  x_revs_x000_added_x000_copies           3c684b4b8f68 0a5e72d1b479 : 3 revs, 0.003256 s, 0.003497 s, +0.000241 s, × 1.0740, 1165 µs/rev
mozilla-central  x_revs_x0000_added_x0000_copies         effb563bb7e5 c07a39dc4e80 : 6 revs, 0.066749 s, 0.073204 s, +0.006455 s, × 1.0967, 12200 µs/rev
mozilla-central  x000_revs_xx00_added_0_copies           6100d773079a 04a55431795e : 1593 revs, 0.006462 s, 0.006482 s, +0.000020 s, × 1.0031, 4 µs/rev
mozilla-central  x000_revs_x000_added_x_copies           9f17a6fc04f9 2d37b966abed : 41 revs, 0.004919 s, 0.005066 s, +0.000147 s, × 1.0299, 123 µs/rev
mozilla-central  x000_revs_x000_added_x000_copies        7c97034feb78 4407bd0c6330 : 7839 revs, 0.062421 s, 0.065707 s, +0.003286 s, × 1.0526, 8 µs/rev
mozilla-central  x0000_revs_xx000_added_0_copies         9eec5917337d 67118cc6dcad : 615 revs, 0.026633 s, 0.026800 s, +0.000167 s, × 1.0063, 43 µs/rev
mozilla-central  x0000_revs_xx000_added_x000_copies      f78c615a656c 96a38b690156 : 30263 revs, 0.197792 s, 0.203856 s, +0.006064 s, × 1.0307, 6 µs/rev
mozilla-central  x00000_revs_x0000_added_x0000_copies    6832ae71433c 4c222a1d9a00 : 153721 revs, 1.259970 s, 1.293394 s, +0.033424 s, × 1.0265, 8 µs/rev
mozilla-central  x00000_revs_x00000_added_x000_copies    76caed42cf7c 1daa622bbe42 : 204976 revs, 1.689184 s, 1.698239 s, +0.009055 s, × 1.0054, 8 µs/rev
mozilla-try      x_revs_x_added_0_copies                 aaf6dde0deb8 9790f499805a : 2 revs, 0.000865 s, 0.000875 s, +0.000010 s, × 1.0116, 437 µs/rev
mozilla-try      x_revs_x000_added_0_copies              d8d0222927b4 5bb8ce8c7450 : 2 revs, 0.000893 s, 0.000891 s, -0.000002 s, × 0.9978, 445 µs/rev
mozilla-try      x_revs_x_added_x_copies                 092fcca11bdb 936255a0384a : 4 revs, 0.000172 s, 0.000292 s, +0.000120 s, × 1.6977, 73 µs/rev
mozilla-try      x_revs_x00_added_x_copies               b53d2fadbdb5 017afae788ec : 2 revs, 0.001159 s, 0.003939 s, +0.002780 s, × 3.3986, 1969 µs/rev
mozilla-try      x_revs_x000_added_x000_copies           20408ad61ce5 6f0ee96e21ad : 1 revs, 0.031621 s, 0.033027 s, +0.001406 s, × 1.0445, 33027 µs/rev
mozilla-try      x_revs_x0000_added_x0000_copies         effb563bb7e5 c07a39dc4e80 : 6 revs, 0.068571 s, 0.073703 s, +0.005132 s, × 1.0748, 12283 µs/rev
mozilla-try      x000_revs_xx00_added_0_copies           6100d773079a 04a55431795e : 1593 revs, 0.006452 s, 0.006469 s, +0.000017 s, × 1.0026, 4 µs/rev
mozilla-try      x000_revs_x000_added_x_copies           9f17a6fc04f9 2d37b966abed : 41 revs, 0.005443 s, 0.005278 s, -0.000165 s, × 0.9697, 128 µs/rev
mozilla-try      x000_revs_x000_added_x000_copies        1346fd0130e4 4c65cbdabc1f : 6657 revs, 0.063180 s, 0.064995 s, +0.001815 s, × 1.0287, 9 µs/rev
mozilla-try      x0000_revs_x_added_0_copies             63519bfd42ee a36a2a865d92 : 40314 revs, 0.293564 s, 0.301041 s, +0.007477 s, × 1.0255, 7 µs/rev
mozilla-try      x0000_revs_x_added_x_copies             9fe69ff0762d bcabf2a78927 : 38690 revs, 0.286595 s, 0.285575 s, -0.001020 s, × 0.9964, 7 µs/rev
mozilla-try      x0000_revs_xx000_added_x_copies         156f6e2674f2 4d0f2c178e66 : 8598 revs, 0.083256 s, 0.085597 s, +0.002341 s, × 1.0281, 9 µs/rev
mozilla-try      x0000_revs_xx000_added_0_copies         9eec5917337d 67118cc6dcad : 615 revs, 0.027282 s, 0.027118 s, -0.000164 s, × 0.9940, 44 µs/rev
mozilla-try      x0000_revs_xx000_added_x000_copies      89294cd501d9 7ccb2fc7ccb5 : 97052 revs, 1.343373 s, 2.119204 s, +0.775831 s, × 1.5775, 21 µs/rev
mozilla-try      x0000_revs_x0000_added_x0000_copies     e928c65095ed e951f4ad123a : 52031 revs, 0.665737 s, 0.701479 s, +0.035742 s, × 1.0537, 13 µs/rev
mozilla-try      x00000_revs_x_added_0_copies            6a320851d377 1ebb79acd503 : 363753 revs, 5.138169 s, 4.482399 s, -0.655770 s, × 0.8724, 12 µs/rev
mozilla-try      x00000_revs_x00000_added_0_copies       dc8a3ca7010e d16fde900c9c : 34414 revs, 0.573276 s, 0.574082 s, +0.000806 s, × 1.0014, 16 µs/rev
mozilla-try      x00000_revs_x_added_x_copies            5173c4b6f97c 95d83ee7242d : 362229 revs, 5.127809 s, 4.480366 s, -0.647443 s, × 0.8737, 12 µs/rev
mozilla-try      x00000_revs_x000_added_x_copies         9126823d0e9c ca82787bb23c : 359344 revs, 4.971136 s, 4.369070 s, -0.602066 s, × 0.8789, 12 µs/rev
mozilla-try      x00000_revs_x0000_added_x0000_copies    8d3fafa80d4b eb884023b810 : 192665 revs, 1.741678 s, 1.592506 s, -0.149172 s, × 0.9144, 8 µs/rev
mozilla-try      x00000_revs_x00000_added_x0000_copies   1b661134e2ca 1ae03d022d6d : 228985 revs, 40.314822 s, 87.824489 s, +47.509667 s, × 2.1785, 383 µs/rev
mozilla-try      x00000_revs_x00000_added_x000_copies    9b2a99adc05e 8e29777b48e6 : 382065 revs, 20.048029 s, 43.304637 s, +23.256608 s, × 2.1600, 113 µs/rev
private : 459513 revs, 37.179470 s, 33.853687 s, -3.325783 s, × 0.9105, 73 µs/rev

Differential Revision: https://phab.mercurial-scm.org/D9644
author Pierre-Yves David <pierre-yves.david@octobus.net>
date Tue, 15 Dec 2020 18:04:23 +0100
parents 7a93b7b3dc2d
children ee91966aec0f

# exchangev2.py - repository exchange for wire protocol version 2
#
# Copyright 2018 Gregory Szorc <gregory.szorc@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import collections
import weakref

from .i18n import _
from .node import (
    nullid,
    short,
)
from . import (
    bookmarks,
    error,
    mdiff,
    narrowspec,
    phases,
    pycompat,
    setdiscovery,
)
from .interfaces import repository


def pull(pullop):
    """Pull using wire protocol version 2."""
    repo = pullop.repo
    remote = pullop.remote

    usingrawchangelogandmanifest = _checkuserawstorefiledata(pullop)

    # If this is a clone and it was requested to perform a "stream clone",
    # we obtain the raw files data from the remote then fall back to an
    # incremental pull. This is somewhat hacky and is not nearly robust enough
    # for long-term usage.
    if usingrawchangelogandmanifest:
        with repo.transaction(b'clone'):
            _fetchrawstorefiles(repo, remote)
            repo.invalidate(clearfilecache=True)

    tr = pullop.trmanager.transaction()

    # We don't use the repo's narrow matcher here because the patterns passed
    # to exchange.pull() could be different.
    narrowmatcher = narrowspec.match(
        repo.root,
        # An empty include set maps to nevermatcher, so always
        # provide includes if they are missing.
        pullop.includepats or {b'path:.'},
        pullop.excludepats,
    )

    if pullop.includepats or pullop.excludepats:
        pathfilter = {}
        if pullop.includepats:
            pathfilter[b'include'] = sorted(pullop.includepats)
        if pullop.excludepats:
            pathfilter[b'exclude'] = sorted(pullop.excludepats)
    else:
        pathfilter = None
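    # For example (hypothetical patterns), pathfilter could end up as
    # {b'include': [b'path:dir'], b'exclude': [b'path:dir/tests']}.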

    # Figure out what needs to be fetched.
    common, fetch, remoteheads = _pullchangesetdiscovery(
        repo, remote, pullop.heads, abortwhenunrelated=pullop.force
    )

    # And fetch the data.
    pullheads = pullop.heads or remoteheads
    csetres = _fetchchangesets(repo, tr, remote, common, fetch, pullheads)

    # New revisions are written to the changelog. But all other updates
    # are deferred. Do those now.

    # Ensure all new changesets are draft by default. If the repo is
    # publishing, the phase will be adjusted by the loop below.
    if csetres[b'added']:
        phases.registernew(
            repo, tr, phases.draft, [repo[n].rev() for n in csetres[b'added']]
        )

    # And adjust the phase of all changesets accordingly.
    for phasenumber, phase in phases.phasenames.items():
        if phase == b'secret' or not csetres[b'nodesbyphase'][phase]:
            continue

        phases.advanceboundary(
            repo,
            tr,
            phasenumber,
            csetres[b'nodesbyphase'][phase],
        )

    # Write bookmark updates.
    bookmarks.updatefromremote(
        repo.ui,
        repo,
        csetres[b'bookmarks'],
        remote.url(),
        pullop.gettransaction,
        explicit=pullop.explicitbookmarks,
    )

    manres = _fetchmanifests(repo, tr, remote, csetres[b'manifestnodes'])

    # We don't properly support shallow changesets and manifests yet. So we
    # apply depth limiting locally.
    if pullop.depth:
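        # In revset terms, ancestors(heads, depth) includes the heads
        # themselves at depth 0, hence ``pullop.depth - 1`` below keeps
        # exactly ``depth`` changesets along each path.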
        relevantcsetnodes = set()
        clnode = repo.changelog.node

        for rev in repo.revs(
            b'ancestors(%ln, %s)', pullheads, pullop.depth - 1
        ):
            relevantcsetnodes.add(clnode(rev))

        csetrelevantfilter = lambda n: n in relevantcsetnodes

    else:
        csetrelevantfilter = lambda n: True

    # If obtaining the raw store files, we need to scan the full repo to
    # derive all the changesets, manifests, and linkrevs.
    if usingrawchangelogandmanifest:
        csetsforfiles = []
        mnodesforfiles = []
        manifestlinkrevs = {}

        for rev in repo:
            ctx = repo[rev]
            node = ctx.node()

            if not csetrelevantfilter(node):
                continue

            mnode = ctx.manifestnode()

            csetsforfiles.append(node)
            mnodesforfiles.append(mnode)
            manifestlinkrevs[mnode] = rev

    else:
        csetsforfiles = [n for n in csetres[b'added'] if csetrelevantfilter(n)]
        mnodesforfiles = manres[b'added']
        manifestlinkrevs = manres[b'linkrevs']

    # Find all file nodes referenced by added manifests and fetch those
    # revisions.
    fnodes = _derivefilesfrommanifests(repo, narrowmatcher, mnodesforfiles)
    _fetchfilesfromcsets(
        repo,
        tr,
        remote,
        pathfilter,
        fnodes,
        csetsforfiles,
        manifestlinkrevs,
        shallow=bool(pullop.depth),
    )


def _checkuserawstorefiledata(pullop):
    """Check whether we should use rawstorefiledata command to retrieve data."""

    repo = pullop.repo
    remote = pullop.remote

    # Command to obtain raw store data isn't available.
    if b'rawstorefiledata' not in remote.apidescriptor[b'commands']:
        return False

    # Only honor if user requested stream clone operation.
    if not pullop.streamclonerequested:
        return False

    # Only works on empty repos.
    if len(repo):
        return False

    # TODO This is super hacky. There needs to be a storage API for this. We
    # also need to check for compatibility with the remote.
    if b'revlogv1' not in repo.requirements:
        return False

    return True


def _fetchrawstorefiles(repo, remote):
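    # The response stream consumed below has the following shape
    # (hypothetical values):
    #
    #   {b'totalsize': 42}                                  # overall header
    #   {b'location': b'store', b'path': ..., b'size': 42}  # file metadata
    #   <chunk> <chunk> ... <chunk with islast=True>        # raw file data
    #
    # with one metadata/data pair emitted per requested file.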
    with remote.commandexecutor() as e:
        objs = e.callcommand(
            b'rawstorefiledata',
            {
                b'files': [b'changelog', b'manifestlog'],
            },
        ).result()

        # First object is a summary of files data that follows.
        overall = next(objs)

        progress = repo.ui.makeprogress(
            _(b'clone'), total=overall[b'totalsize'], unit=_(b'bytes')
        )
        with progress:
            progress.update(0)

            # Next are pairs of file metadata, data.
            while True:
                try:
                    filemeta = next(objs)
                except StopIteration:
                    break

                for k in (b'location', b'path', b'size'):
                    if k not in filemeta:
                        raise error.Abort(
                            _(b'remote file data missing key: %s') % k
                        )

                if filemeta[b'location'] == b'store':
                    vfs = repo.svfs
                else:
                    raise error.Abort(
                        _(b'invalid location for raw file data: %s')
                        % filemeta[b'location']
                    )

                bytesremaining = filemeta[b'size']

                with vfs.open(filemeta[b'path'], b'wb') as fh:
                    while True:
                        try:
                            chunk = next(objs)
                        except StopIteration:
                            break

                        bytesremaining -= len(chunk)

                        if bytesremaining < 0:
                            raise error.Abort(
                                _(
                                    b'received invalid number of bytes for file '
                                    b'data; expected %d, got extra'
                                )
                                % filemeta[b'size']
                            )

                        progress.increment(step=len(chunk))
                        fh.write(chunk)

                        try:
                            if chunk.islast:
                                break
                        except AttributeError:
                            raise error.Abort(
                                _(
                                    b'did not receive indefinite length bytestring '
                                    b'for file data'
                                )
                            )

                if bytesremaining:
                    raise error.Abort(
                        _(
                            b'received invalid number of bytes for '
                            b'file data; expected %d, got %d'
                        )
                        % (
                            filemeta[b'size'],
                            filemeta[b'size'] - bytesremaining,
                        )
                    )


def _pullchangesetdiscovery(repo, remote, heads, abortwhenunrelated=True):
    """Determine which changesets need to be pulled."""

    if heads:
        knownnode = repo.changelog.hasnode
        if all(knownnode(head) for head in heads):
            return heads, False, heads

    # TODO wire protocol version 2 is capable of more efficient discovery
    # than setdiscovery. Consider implementing something better.
    common, fetch, remoteheads = setdiscovery.findcommonheads(
        repo.ui, repo, remote, abortwhenunrelated=abortwhenunrelated
    )

    common = set(common)
    remoteheads = set(remoteheads)

    # If a remote head is filtered locally, put it back in the common set.
    # See the comment in exchange._pulldiscoverychangegroup() for more.

    if fetch and remoteheads:
        has_node = repo.unfiltered().changelog.index.has_node

        common |= {head for head in remoteheads if has_node(head)}

        if set(remoteheads).issubset(common):
            fetch = []

    common.discard(nullid)

    return common, fetch, remoteheads


def _fetchchangesets(repo, tr, remote, common, fetch, remoteheads):
    # TODO consider adding a step here where we obtain the DAG shape first
    # (or ask the server to slice changesets into chunks for us) so that
    # we can perform multiple fetches in batches. This will facilitate
    # resuming interrupted clones, higher server-side cache hit rates due
    # to smaller segments, etc.
    with remote.commandexecutor() as e:
        objs = e.callcommand(
            b'changesetdata',
            {
                b'revisions': [
                    {
                        b'type': b'changesetdagrange',
                        b'roots': sorted(common),
                        b'heads': sorted(remoteheads),
                    }
                ],
                b'fields': {b'bookmarks', b'parents', b'phase', b'revision'},
            },
        ).result()

        # The context manager waits on all response data when exiting. So
        # we need to remain in the context manager in order to stream data.
        return _processchangesetdata(repo, tr, objs)


def _processchangesetdata(repo, tr, objs):
    repo.hook(b'prechangegroup', throw=True, **pycompat.strkwargs(tr.hookargs))

    urepo = repo.unfiltered()
    cl = urepo.changelog

    cl.delayupdate(tr)

    # The first emitted object is a header describing the data that
    # follows.
    meta = next(objs)

    progress = repo.ui.makeprogress(
        _(b'changesets'), unit=_(b'chunks'), total=meta.get(b'totalitems')
    )

    manifestnodes = {}
    added = []

    def linkrev(node):
        repo.ui.debug(b'add changeset %s\n' % short(node))
        # Linkrev for changelog is always self.
        return len(cl)

    def ondupchangeset(cl, rev):
        added.append(cl.node(rev))

    def onchangeset(cl, rev):
        progress.increment()

        revision = cl.changelogrevision(rev)
        added.append(cl.node(rev))

        # We need to preserve the mapping of changelog revision to node
        # so we can set the linkrev accordingly when manifests are added.
        manifestnodes[rev] = revision.manifest

        repo.register_changeset(rev, revision)

    nodesbyphase = {phase: set() for phase in phases.phasenames.values()}
    remotebookmarks = {}

    # addgroup() expects a 7-tuple describing revisions. This normalizes
    # the wire data to that format.
    #
    # This loop also aggregates non-revision metadata, such as phase
    # data.
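    # Each yielded tuple is, in order:
    #   (node, p1node, p2node, linknode, deltabasenode, delta, flags)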
    def iterrevisions():
        for cset in objs:
            node = cset[b'node']

            if b'phase' in cset:
                nodesbyphase[cset[b'phase']].add(node)

            for mark in cset.get(b'bookmarks', []):
                remotebookmarks[mark] = node

            # TODO add mechanism for extensions to examine records so they
            # can siphon off custom data fields.

            extrafields = {}

            for field, size in cset.get(b'fieldsfollowing', []):
                extrafields[field] = next(objs)

            # Some entries might be metadata-only updates.
            if b'revision' not in extrafields:
                continue

            data = extrafields[b'revision']

            yield (
                node,
                cset[b'parents'][0],
                cset[b'parents'][1],
                # Linknode is always itself for changesets.
                cset[b'node'],
                # We always send full revisions. So delta base is not set.
                nullid,
                mdiff.trivialdiffheader(len(data)) + data,
                # Flags not yet supported.
                0,
            )

    cl.addgroup(
        iterrevisions(),
        linkrev,
        weakref.proxy(tr),
        alwayscache=True,
        addrevisioncb=onchangeset,
        duplicaterevisioncb=ondupchangeset,
    )

    progress.complete()

    return {
        b'added': added,
        b'nodesbyphase': nodesbyphase,
        b'bookmarks': remotebookmarks,
        b'manifestnodes': manifestnodes,
    }


def _fetchmanifests(repo, tr, remote, manifestnodes):
    rootmanifest = repo.manifestlog.getstorage(b'')

    # Some manifests can be shared between changesets. Filter out revisions
    # we already know about.
    fetchnodes = []
    linkrevs = {}
    seen = set()

    for clrev, node in sorted(pycompat.iteritems(manifestnodes)):
        if node in seen:
            continue

        try:
            rootmanifest.rev(node)
        except error.LookupError:
            fetchnodes.append(node)
            linkrevs[node] = clrev

        seen.add(node)

    # TODO handle tree manifests

    # addgroup() expects a 7-tuple describing revisions. This normalizes
    # the wire data to that format.
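    # Each yielded tuple is, in order:
    #   (node, p1node, p2node, linknode, deltabasenode, delta, flags)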
    def iterrevisions(objs, progress):
        for manifest in objs:
            node = manifest[b'node']

            extrafields = {}

            for field, size in manifest.get(b'fieldsfollowing', []):
                extrafields[field] = next(objs)

            if b'delta' in extrafields:
                basenode = manifest[b'deltabasenode']
                delta = extrafields[b'delta']
            elif b'revision' in extrafields:
                basenode = nullid
                revision = extrafields[b'revision']
                delta = mdiff.trivialdiffheader(len(revision)) + revision
            else:
                continue

            yield (
                node,
                manifest[b'parents'][0],
                manifest[b'parents'][1],
                # This value is given to the lookup function passed to
                # addgroup(). We already have a map of manifest node to
                # changelog revision number, so we pass the manifest node
                # here and use linkrevs.__getitem__ as the resolution
                # function.
                node,
                basenode,
                delta,
                # Flags not yet supported.
                0,
            )

            progress.increment()

    progress = repo.ui.makeprogress(
        _(b'manifests'), unit=_(b'chunks'), total=len(fetchnodes)
    )

    commandmeta = remote.apidescriptor[b'commands'][b'manifestdata']
    batchsize = commandmeta.get(b'recommendedbatchsize', 10000)
    # TODO make size configurable on client?

    # We send commands one at a time to the remote. This is not the most
    # efficient because we incur a round trip at the end of each batch.
    # However, with multiple commands in flight, the existing frame-based
    # reactor keeps consuming server data in the background, and this
    # results in response data buffering in memory. That can consume
    # gigabytes of memory.
    # TODO send multiple commands in a request once background buffering
    # issues are resolved.

    added = []

    for i in pycompat.xrange(0, len(fetchnodes), batchsize):
        batch = fetchnodes[i : i + batchsize]
        if not batch:
            continue

        with remote.commandexecutor() as e:
            objs = e.callcommand(
                b'manifestdata',
                {
                    b'tree': b'',
                    b'nodes': batch,
                    b'fields': {b'parents', b'revision'},
                    b'haveparents': True,
                },
            ).result()

            # Chomp off header object.
            next(objs)

            def onchangeset(cl, rev):
                added.append(cl.node(rev))

            rootmanifest.addgroup(
                iterrevisions(objs, progress),
                linkrevs.__getitem__,
                weakref.proxy(tr),
                addrevisioncb=onchangeset,
                duplicaterevisioncb=onchangeset,
            )

    progress.complete()

    return {
        b'added': added,
        b'linkrevs': linkrevs,
    }


def _derivefilesfrommanifests(repo, matcher, manifestnodes):
    """Determine what file nodes are relevant given a set of manifest nodes.

    Returns a dict mapping file paths to dicts of file node to first manifest
    node.
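
    For example (hypothetical nodes)::

        {b'dir/file.py': {fnode1: manifestnode1, fnode2: manifestnode2}}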
    """
    ml = repo.manifestlog
    fnodes = collections.defaultdict(dict)

    progress = repo.ui.makeprogress(
        _(b'scanning manifests'), total=len(manifestnodes)
    )

    with progress:
        for manifestnode in manifestnodes:
            m = ml.get(b'', manifestnode)

            # TODO this will pull in unwanted nodes because it takes the storage
            # delta into consideration. What we really want is something that
            # takes the delta between the manifest's parents. And ideally we
            # would ignore file nodes that are known locally. For now, ignore
            # both these limitations. This will result in incremental fetches
            # requesting data we already have. So this is far from ideal.
            md = m.readfast()

            for path, fnode in md.items():
                if matcher(path):
                    fnodes[path].setdefault(fnode, manifestnode)

            progress.increment()

    return fnodes


def _fetchfiles(repo, tr, remote, fnodes, linkrevs):
    """Fetch file data from explicit file revisions."""

    def iterrevisions(objs, progress):
        for filerevision in objs:
            node = filerevision[b'node']

            extrafields = {}

            for field, size in filerevision.get(b'fieldsfollowing', []):
                extrafields[field] = next(objs)

            if b'delta' in extrafields:
                basenode = filerevision[b'deltabasenode']
                delta = extrafields[b'delta']
            elif b'revision' in extrafields:
                basenode = nullid
                revision = extrafields[b'revision']
                delta = mdiff.trivialdiffheader(len(revision)) + revision
            else:
                continue

            yield (
                node,
                filerevision[b'parents'][0],
                filerevision[b'parents'][1],
                node,
                basenode,
                delta,
                # Flags not yet supported.
                0,
            )

            progress.increment()

    progress = repo.ui.makeprogress(
        _(b'files'),
        unit=_(b'chunks'),
        total=sum(len(v) for v in pycompat.itervalues(fnodes)),
    )

    # TODO make batch size configurable
    batchsize = 10000
    fnodeslist = sorted(fnodes.items())

    for i in pycompat.xrange(0, len(fnodeslist), batchsize):
        batch = fnodeslist[i : i + batchsize]
        if not batch:
            continue

        with remote.commandexecutor() as e:
            fs = []
            locallinkrevs = {}

            for path, nodes in batch:
                fs.append(
                    (
                        path,
                        e.callcommand(
                            b'filedata',
                            {
                                b'path': path,
                                b'nodes': sorted(nodes),
                                b'fields': {b'parents', b'revision'},
                                b'haveparents': True,
                            },
                        ),
                    )
                )

                locallinkrevs[path] = {
                    node: linkrevs[manifestnode]
                    for node, manifestnode in pycompat.iteritems(nodes)
                }

            for path, f in fs:
                objs = f.result()

                # Chomp off header objects.
                next(objs)

                store = repo.file(path)
                store.addgroup(
                    iterrevisions(objs, progress),
                    locallinkrevs[path].__getitem__,
                    weakref.proxy(tr),
                )


def _fetchfilesfromcsets(
    repo, tr, remote, pathfilter, fnodes, csets, manlinkrevs, shallow=False
):
    """Fetch file data from explicit changeset revisions."""

    def iterrevisions(objs, remaining, progress):
        while remaining:
            filerevision = next(objs)

            node = filerevision[b'node']

            extrafields = {}

            for field, size in filerevision.get(b'fieldsfollowing', []):
                extrafields[field] = next(objs)

            if b'delta' in extrafields:
                basenode = filerevision[b'deltabasenode']
                delta = extrafields[b'delta']
            elif b'revision' in extrafields:
                basenode = nullid
                revision = extrafields[b'revision']
                delta = mdiff.trivialdiffheader(len(revision)) + revision
            else:
                continue

            if b'linknode' in filerevision:
                linknode = filerevision[b'linknode']
            else:
                linknode = node

            yield (
                node,
                filerevision[b'parents'][0],
                filerevision[b'parents'][1],
                linknode,
                basenode,
                delta,
                # Flags not yet supported.
                0,
            )

            progress.increment()
            remaining -= 1

    progress = repo.ui.makeprogress(
        _(b'files'),
        unit=_(b'chunks'),
        total=sum(len(v) for v in pycompat.itervalues(fnodes)),
    )

    commandmeta = remote.apidescriptor[b'commands'][b'filesdata']
    batchsize = commandmeta.get(b'recommendedbatchsize', 50000)

    shallowfiles = repository.REPO_FEATURE_SHALLOW_FILE_STORAGE in repo.features
    fields = {b'parents', b'revision'}
    clrev = repo.changelog.rev

    # There are no guarantees that we'll have ancestor revisions if
    # a) this repo has shallow file storage or b) shallow data fetching is
    # enabled. Force the remote to not delta against possibly unknown
    # revisions when these conditions hold.
    haveparents = not (shallowfiles or shallow)

    # Similarly, we may not have calculated linkrevs for all incoming file
    # revisions. Ask the remote to do work for us in this case.
    if not haveparents:
        fields.add(b'linknode')

    for i in pycompat.xrange(0, len(csets), batchsize):
        batch = csets[i : i + batchsize]
        if not batch:
            continue

        with remote.commandexecutor() as e:
            args = {
                b'revisions': [
                    {
                        b'type': b'changesetexplicit',
                        b'nodes': batch,
                    }
                ],
                b'fields': fields,
                b'haveparents': haveparents,
            }

            if pathfilter:
                args[b'pathfilter'] = pathfilter

            objs = e.callcommand(b'filesdata', args).result()

            # First object is an overall header.
            overall = next(objs)

            # We have overall['totalpaths'] segments.
            for _ in pycompat.xrange(overall[b'totalpaths']):
                header = next(objs)

                path = header[b'path']
                store = repo.file(path)

                linkrevs = {
                    fnode: manlinkrevs[mnode]
                    for fnode, mnode in pycompat.iteritems(fnodes[path])
                }

                def getlinkrev(node):
                    if node in linkrevs:
                        return linkrevs[node]
                    else:
                        return clrev(node)

                store.addgroup(
                    iterrevisions(objs, header[b'totalitems'], progress),
                    getlinkrev,
                    weakref.proxy(tr),
                    maybemissingparents=shallow,
                )