view mercurial/sshpeer.py @ 49658:523cacdfd324

delta-find: set the default candidate chunk size to 10

I ran performance and storage tests on repositories of various sizes and
shapes for the following values of the config: 5, 10, 20, 50, 100,
no-chunking.

The performance tests do not show any statistical impact on computation times
for large pushes and pulls. For searching for an individual delta, this can
provide a significant performance improvement with a minor degradation of the
space-quality of the result (see data at the end of the commit).

For overall store size, the change:

- does not have any impact on many small repositories,
- has an observable, but very negligible impact on most larger repositories,
- one private repository we use for testing sees a small increase in size (1%)
  in the narrower version. We will try to get more numbers on a larger version
  of that repository to make sure nothing pathological happens.

We pick "10" as the limit, as "5" seems a bit more risky. There is room to
improve the current code, by using more aggressive filtering and better (i.e.
any) sorting of the candidates. However, this is already a large improvement
for pathological cases, with little impact in the common situations.

The initial motivation for this change is to fix the performance of delta
computation for a file where the previous code ended up testing 20 000
possible candidate-bases in one go, which is… slow. This affected about ½ of
the file revisions, leading to atrocious performance, especially during some
push/pull operations.

Details about individual delta finding timing:
----------------------------------------------

The vast majority of benchmark cases are unchanged, except for the three
below. The first two do not see any impact on the final delta. The last one
sees a change in delta-size that is negligible compared to the full text size.

### data-env-vars.name = mozilla-try-2019-02-18-zstd-sparse-revlog
  # benchmark.name = perf-delta-find
  # benchmark.variants.rev = manifest-snapshot-many-tries-a (revision 756096)

  ∞:   5.844783
  5:   4.473523 (-23.46%)
  10:  4.970053 (-14.97%)
  20:  5.770386 (-1.27%)
  50:  5.821358
  100: 5.834887

  MANIFESTLOG: rev=756096: (no limit) delta-base=301840 search-rounds=6 try-count=60 delta-type=snapshot snap-depth=7 delta-size=179
  MANIFESTLOG: rev=756096: (limit=10) delta-base=301840 search-rounds=9 try-count=51 delta-type=snapshot snap-depth=7 delta-size=179

### data-env-vars.name = mozilla-try-2019-02-18-zstd-sparse-revlog
  # benchmark.name = perf-delta-find
  # benchmark.variants.rev = manifest-snapshot-many-tries-d (revision 754060)

  ∞:   5.017663
  5:   3.655931 (-27.14%)
  10:  4.095436 (-18.38%)
  20:  4.828949 (-3.76%)
  50:  4.987574
  100: 4.994889

  MANIFESTLOG: rev=754060: (no limit) delta-base=301840 search-rounds=5 try-count=53 delta-type=snapshot snap-depth=7 delta-size=179
  MANIFESTLOG: rev=754060: (limit=10) delta-base=301840 search-rounds=8 try-count=45 delta-type=snapshot snap-depth=7 delta-size=179

### data-env-vars.name = mozilla-try-2019-02-18-zstd-sparse-revlog
  # benchmark.name = perf-delta-find
  # bin-env-vars.hg.flavor = rust
  # benchmark.variants.rev = manifest-snapshot-many-tries-e (revision 693368)

  ∞:   4.869282
  5:   2.039732 (-58.11%)
  10:  2.413537 (-50.43%)
  20:  4.449639 (-8.62%)
  50:  4.865863
  100: 4.882649

  MANIFESTLOG: rev=693368: delta-base=693336 search-rounds=6 try-count=53 delta-type=snapshot snap-depth=6 full-test-size=131065 delta-size=199
  MANIFESTLOG: rev=693368: delta-base=278023 search-rounds=5 try-count=21 delta-type=snapshot snap-depth=4 full-test-size=131065 delta-size=278

Raw data for store size (in bytes) for various chunk size values below:
------------------------------------------------------------------------

  store size      chunk size  repository
  440 134 384       5         pypy/.hg/store/
  440 134 384      10         pypy/.hg/store/
  440 134 384      20         pypy/.hg/store/
  440 134 384      50         pypy/.hg/store/
  440 134 384     100         pypy/.hg/store/
  440 134 384     ...         pypy/.hg/store/

  666 987 471       5         netbsd-xsrc-2022-11-15/.hg/store/
  666 987 471      10         netbsd-xsrc-2022-11-15/.hg/store/
  666 987 471      20         netbsd-xsrc-2022-11-15/.hg/store/
  666 987 471      50         netbsd-xsrc-2022-11-15/.hg/store/
  666 987 471     100         netbsd-xsrc-2022-11-15/.hg/store/
  666 987 471     ...         netbsd-xsrc-2022-11-15/.hg/store/

  852 844 884       5         netbsd-pkgsrc-2022-11-15/.hg/store/
  852 844 884      10         netbsd-pkgsrc-2022-11-15/.hg/store/
  852 844 884      20         netbsd-pkgsrc-2022-11-15/.hg/store/
  852 844 884      50         netbsd-pkgsrc-2022-11-15/.hg/store/
  852 844 884     100         netbsd-pkgsrc-2022-11-15/.hg/store/
  852 844 884     ...         netbsd-pkgsrc-2022-11-15/.hg/store/

  1 504 227 981     5         netbeans-2018-08-01-sparse-zstd/.hg/store/
  1 504 227 871    10         netbeans-2018-08-01-sparse-zstd/.hg/store/
  1 504 227 813    20         netbeans-2018-08-01-sparse-zstd/.hg/store/
  1 504 227 813    50         netbeans-2018-08-01-sparse-zstd/.hg/store/
  1 504 227 813   100         netbeans-2018-08-01-sparse-zstd/.hg/store/
  1 504 227 813   ...         netbeans-2018-08-01-sparse-zstd/.hg/store/

  3 875 801 068     5         netbsd-src-2022-11-15/.hg/store/
  3 875 696 767    10         netbsd-src-2022-11-15/.hg/store/
  3 875 696 757    20         netbsd-src-2022-11-15/.hg/store/
  3 875 696 653    50         netbsd-src-2022-11-15/.hg/store/
  3 875 696 653   100         netbsd-src-2022-11-15/.hg/store/
  3 875 696 653   ...         netbsd-src-2022-11-15/.hg/store/

  4 531 441 314     5         mozilla-central/.hg/store/
  4 531 435 157    10         mozilla-central/.hg/store/
  4 531 432 045    20         mozilla-central/.hg/store/
  4 531 429 119    50         mozilla-central/.hg/store/
  4 531 429 119   100         mozilla-central/.hg/store/
  4 531 429 119   ...         mozilla-central/.hg/store/

  4 875 861 390     5         mozilla-unified/.hg/store/
  4 875 855 155    10         mozilla-unified/.hg/store/
  4 875 852 027    20         mozilla-unified/.hg/store/
  4 875 848 851    50         mozilla-unified/.hg/store/
  4 875 848 851   100         mozilla-unified/.hg/store/
  4 875 848 851   ...         mozilla-unified/.hg/store/

  11 498 764 601    5         mozilla-try/.hg/store/
  11 497 968 858   10         mozilla-try/.hg/store/
  11 497 958 730   20         mozilla-try/.hg/store/
  11 497 927 156   50         mozilla-try/.hg/store/
  11 497 925 963  100         mozilla-try/.hg/store/
  11 497 923 428  ...         mozilla-try/.hg/store/

  10 047 914 031    5         private-repo
   9 969 132 101   10         private-repo
   9 944 745 015   20         private-repo
   9 939 756 703   50         private-repo
   9 939 833 016  100         private-repo
   9 939 822 035  ...         private-repo
author Pierre-Yves David <pierre-yves.david@octobus.net>
date Wed, 23 Nov 2022 19:08:27 +0100
parents 642e31cb55f0
children 78af51ba73c5

# sshpeer.py - ssh repository proxy class for mercurial
#
# Copyright 2005, 2006 Olivia Mackall <olivia@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.


import re
import uuid

from .i18n import _
from .pycompat import getattr
from . import (
    error,
    pycompat,
    util,
    wireprototypes,
    wireprotov1peer,
    wireprotov1server,
)
from .utils import (
    procutil,
    stringutil,
    urlutil,
)


def _serverquote(s):
    """quote a string for the remote shell ... which we assume is sh"""
    if not s:
        return s
    if re.match(b'[a-zA-Z0-9@%_+=:,./-]*$', s):
        return s
    return b"'%s'" % s.replace(b"'", b"'\\''")


def _forwardoutput(ui, pipe, warn=False):
    """display all data currently available on pipe as remote output.

    This is non-blocking."""
    if pipe and not pipe.closed:
        s = procutil.readpipe(pipe)
        if s:
            display = ui.warn if warn else ui.status
            for l in s.splitlines():
                display(_(b"remote: "), l, b'\n')


class doublepipe:
    """Operate a side-channel pipe in addition of a main one

    The side-channel pipe contains server output to be forwarded to the user
    input. The double pipe will behave as the "main" pipe, but will ensure the
    content of the "side" pipe is properly processed while we wait for blocking
    call on the "main" pipe.

    If large amounts of data are read from "main", forwarding will cease after
    the first bytes start to appear. This simplifies the implementation
    without affecting the actual output of sshpeer too much, as we rarely
    issue large reads for data not yet emitted by the server.

    The main pipe is expected to be a 'bufferedinputpipe' from the util module
    that handles all the OS-specific bits. This class lives in this module
    because it focuses on behavior specific to the SSH protocol."""

    def __init__(self, ui, main, side):
        self._ui = ui
        self._main = main
        self._side = side

    def _wait(self):
        """wait until some data are available on main or side

        return a pair of boolean (ismainready, issideready)

        (This will only wait for data if the setup is supported by `util.poll`)
        """
        if (
            isinstance(self._main, util.bufferedinputpipe)
            and self._main.hasbuffer
        ):
            # Main has data. Assume side is worth poking at.
            return True, True

        fds = [self._main.fileno(), self._side.fileno()]
        try:
            act = util.poll(fds)
        except NotImplementedError:
            # not supported yet; assume all have data.
            act = fds
        return (self._main.fileno() in act, self._side.fileno() in act)

    def write(self, data):
        return self._call(b'write', data)

    def read(self, size):
        r = self._call(b'read', size)
        if size != 0 and not r:
            # We've observed a condition that indicates the
            # stdout closed unexpectedly. Check stderr one
            # more time and snag anything that's there before
            # letting anyone know the main part of the pipe
            # closed prematurely.
            _forwardoutput(self._ui, self._side)
        return r

    def unbufferedread(self, size):
        r = self._call(b'unbufferedread', size)
        if size != 0 and not r:
            # We've observed a condition that indicates the
            # stdout closed unexpectedly. Check stderr one
            # more time and snag anything that's there before
            # letting anyone know the main part of the pipe
            # closed prematurely.
            _forwardoutput(self._ui, self._side)
        return r

    def readline(self):
        return self._call(b'readline')

    def _call(self, methname, data=None):
        """call <methname> on "main", forward output of "side" while blocking"""
        # data can be '' or 0
        if (data is not None and not data) or self._main.closed:
            _forwardoutput(self._ui, self._side)
            return b''
        while True:
            mainready, sideready = self._wait()
            if sideready:
                _forwardoutput(self._ui, self._side)
            if mainready:
                meth = getattr(self._main, methname)
                if data is None:
                    return meth()
                else:
                    return meth(data)

    def close(self):
        return self._main.close()

    @property
    def closed(self):
        return self._main.closed

    def flush(self):
        return self._main.flush()


def _cleanuppipes(ui, pipei, pipeo, pipee, warn):
    """Clean up pipes used by an SSH connection."""
    didsomething = False
    if pipeo and not pipeo.closed:
        didsomething = True
        pipeo.close()
    if pipei and not pipei.closed:
        didsomething = True
        pipei.close()

    if pipee and not pipee.closed:
        didsomething = True
        # Try to read from the err descriptor until EOF.
        try:
            for l in pipee:
                ui.status(_(b'remote: '), l)
        except (IOError, ValueError):
            pass

        pipee.close()

    if didsomething and warn is not None:
        # Encourage explicit close of sshpeers. Closing via __del__ is
        # not very predictable when exceptions are thrown, which has led
        # to deadlocks due to a peer getting gc'ed in a fork.
        # We add our own stack trace, because the stacktrace when called
        # from __del__ is useless.
        ui.develwarn(b'missing close on SSH connection created at:\n%s' % warn)


def _makeconnection(ui, sshcmd, args, remotecmd, path, sshenv=None):
    """Create an SSH connection to a server.

    Returns a tuple of (process, stdin, stdout, stderr) for the
    spawned process.
    """
    cmd = b'%s %s %s' % (
        sshcmd,
        args,
        procutil.shellquote(
            b'%s -R %s serve --stdio'
            % (_serverquote(remotecmd), _serverquote(path))
        ),
    )
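    # For illustration only (assuming POSIX-style shell quoting and
    # hypothetical values): with sshcmd=b'ssh', args=b'user@example.com',
    # remotecmd=b'hg' and path=b'repo', cmd ends up as:
    #   ssh user@example.com 'hg -R repo serve --stdio'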

    ui.debug(b'running %s\n' % cmd)

    # no buffering allows the use of 'select'
    # feel free to remove buffering and select usage when we ultimately
    # move to threading.
    stdin, stdout, stderr, proc = procutil.popen4(cmd, bufsize=0, env=sshenv)

    return proc, stdin, stdout, stderr


def _clientcapabilities():
    """Return list of capabilities of this client.

    Returns a list of capabilities that are supported by this client.
    """
    protoparams = {b'partial-pull'}
    comps = [
        e.wireprotosupport().name
        for e in util.compengines.supportedwireengines(util.CLIENTROLE)
    ]
    protoparams.add(b'comp=%s' % b','.join(comps))
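    # Illustration only: at this point protoparams typically looks something
    # like {b'partial-pull', b'comp=zstd,zlib,none'}; the exact compression
    # engine list depends on how this client was built.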
    return protoparams


def _performhandshake(ui, stdin, stdout, stderr):
    def badresponse():
        # Flush any output on stderr. In general, the stderr contains errors
        # from the remote (ssh errors, some hg errors), and status indications
        # (like "adding changes"), with no current way to tell them apart.
        # Here we failed so early that it's almost certainly only errors, so
        # use warn=True so -q doesn't hide them.
        _forwardoutput(ui, stderr, warn=True)

        msg = _(b'no suitable response from remote hg')
        hint = ui.config(b'ui', b'ssherrorhint')
        raise error.RepoError(msg, hint=hint)

    # The handshake consists of sending wire protocol commands in reverse
    # order of protocol implementation and then sniffing for a response
    # to one of them.
    #
    # Those commands (from oldest to newest) are:
    #
    # ``between``
    #   Asks for the set of revisions between a pair of revisions. Command
    #   present in all Mercurial server implementations.
    #
    # ``hello``
    #   Instructs the server to advertise its capabilities. Introduced in
    #   Mercurial 0.9.1.
    #
    # ``upgrade``
    #   Requests upgrade from default transport protocol version 1 to
    #   a newer version. Introduced in Mercurial 4.6 as an experimental
    #   feature.
    #
    # The ``between`` command is issued with a request for the null
    # range. If the remote is a Mercurial server, this request will
    # generate a specific response: ``1\n\n``. This represents the
    # wire protocol encoded value for ``\n``. We look for ``1\n\n``
    # in the output stream and know this is the response to ``between``
    # and we're at the end of our handshake reply.
    #
    # The response to the ``hello`` command will be a line with the
    # length of the value returned by that command followed by that
    # value. If the server doesn't support ``hello`` (which should be
    # rare), that line will be ``0\n``. Otherwise, the value will contain
    # RFC 822 like lines. Of these, the ``capabilities:`` line contains
    # the capabilities of the server.
    #
    # The ``upgrade`` command isn't really a command in the traditional
    # sense of version 1 of the transport because it isn't using the
    # proper mechanism for formatting arguments: instead, it just encodes
    # arguments on the line, delimited by spaces.
    #
    # The ``upgrade`` line looks like ``upgrade <token> <capabilities>``.
    # If the server doesn't support protocol upgrades, it will reply to
    # this line with ``0\n``. Otherwise, it emits an
    # ``upgraded <token> <protocol>`` line to both stdout and stderr.
    # Content immediately following this line describes additional
    # protocol and server state.
    #
    # In addition to the responses to our command requests, the server
    # may emit "banner" output on stdout. SSH servers are allowed to
    # print messages to stdout on login. Issuing commands on connection
    # allows us to flush this banner output from the server by scanning
    # for output to our well-known ``between`` command. Of course, if
    # the banner contains ``1\n\n``, this will throw off our detection.
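    #
    # For illustration only (not part of the original comments), a successful
    # handshake against a server that supports ``hello`` could look roughly
    # like this, assuming no SSH banner noise:
    #
    #   client -> server:
    #       hello\n
    #       between\n
    #       pairs 81\n
    #       <40 zeroes>-<40 zeroes>
    #   server -> client:
    #       <N>\n                     (length of the ``hello`` payload)
    #       capabilities: ...\n       (the ``hello`` payload)
    #       1\n
    #       \n                        (reply to ``between`` on the null range)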

    requestlog = ui.configbool(b'devel', b'debug.peer-request')

    # Generate a random token to help identify responses to the version 2
    # upgrade request.
    token = pycompat.sysbytes(str(uuid.uuid4()))

    try:
        pairsarg = b'%s-%s' % (b'0' * 40, b'0' * 40)
        handshake = [
            b'hello\n',
            b'between\n',
            b'pairs %d\n' % len(pairsarg),
            pairsarg,
        ]

        if requestlog:
            ui.debug(b'devel-peer-request: hello+between\n')
            ui.debug(b'devel-peer-request:   pairs: %d bytes\n' % len(pairsarg))
        ui.debug(b'sending hello command\n')
        ui.debug(b'sending between command\n')

        stdin.write(b''.join(handshake))
        stdin.flush()
    except IOError:
        badresponse()

    # Assume version 1 of wire protocol by default.
    protoname = wireprototypes.SSHV1
    reupgraded = re.compile(b'^upgraded %s (.*)$' % stringutil.reescape(token))

    lines = [b'', b'dummy']
    max_noise = 500
    while lines[-1] and max_noise:
        try:
            l = stdout.readline()
            _forwardoutput(ui, stderr, warn=True)

            # Look for reply to protocol upgrade request. It has a token
            # in it, so there should be no false positives.
            m = reupgraded.match(l)
            if m:
                protoname = m.group(1)
                ui.debug(b'protocol upgraded to %s\n' % protoname)
                # If an upgrade was handled, the ``hello`` and ``between``
                # requests are ignored. The next output belongs to the
                # protocol, so stop scanning lines.
                break

            # Otherwise it could be a banner, or a ``0\n`` response if the
            # server doesn't support upgrade.

            if lines[-1] == b'1\n' and l == b'\n':
                break
            if l:
                ui.debug(b'remote: ', l)
            lines.append(l)
            max_noise -= 1
        except IOError:
            badresponse()
    else:
        badresponse()

    caps = set()

    # For version 1, we should see a ``capabilities`` line in response to the
    # ``hello`` command.
    if protoname == wireprototypes.SSHV1:
        for l in reversed(lines):
            # Look for response to ``hello`` command. Scan from the back so
            # we don't misinterpret banner output as the command reply.
            if l.startswith(b'capabilities:'):
                caps.update(l[:-1].split(b':')[1].split())
                break

    # Error if we couldn't find capabilities. This means:
    #
    # 1. Remote isn't a Mercurial server
    # 2. Remote is a <0.9.1 Mercurial server
    # 3. Remote is a future Mercurial server that dropped ``hello``
    #    and other attempted handshake mechanisms.
    if not caps:
        badresponse()

    # Flush any output on stderr before proceeding.
    _forwardoutput(ui, stderr, warn=True)

    return protoname, caps


class sshv1peer(wireprotov1peer.wirepeer):
    def __init__(
        self, ui, url, proc, stdin, stdout, stderr, caps, autoreadstderr=True
    ):
        """Create a peer from an existing SSH connection.

        ``proc`` is a handle on the underlying SSH process.
        ``stdin``, ``stdout``, and ``stderr`` are handles on the stdio
        pipes for that process.
        ``caps`` is a set of capabilities supported by the remote.
        ``autoreadstderr`` denotes whether to automatically read from
        stderr and to forward its output.
        """
        self._url = url
        self.ui = ui
        # self._subprocess is unused. Keeping a handle on the process
        # holds a reference and prevents it from being garbage collected.
        self._subprocess = proc

        # And we hook up our "doublepipe" wrapper to allow querying
        # stderr any time we perform I/O.
        if autoreadstderr:
            stdout = doublepipe(ui, util.bufferedinputpipe(stdout), stderr)
            stdin = doublepipe(ui, stdin, stderr)

        self._pipeo = stdin
        self._pipei = stdout
        self._pipee = stderr
        self._caps = caps
        self._autoreadstderr = autoreadstderr
        self._initstack = b''.join(util.getstackframes(1))

    # Commands that have a "framed" response where the first line of the
    # response contains the length of that response.
    _FRAMED_COMMANDS = {
        b'batch',
    }

    # Begin of ipeerconnection interface.

    def url(self):
        return self._url

    def local(self):
        return None

    def peer(self):
        return self

    def canpush(self):
        return True

    def close(self):
        self._cleanup()

    # End of ipeerconnection interface.

    # Begin of ipeercommands interface.

    def capabilities(self):
        return self._caps

    # End of ipeercommands interface.

    def _readerr(self):
        _forwardoutput(self.ui, self._pipee)

    def _abort(self, exception):
        self._cleanup()
        raise exception

    def _cleanup(self, warn=None):
        _cleanuppipes(self.ui, self._pipei, self._pipeo, self._pipee, warn=warn)

    def __del__(self):
        self._cleanup(warn=self._initstack)

    def _sendrequest(self, cmd, args, framed=False):
        if self.ui.debugflag and self.ui.configbool(
            b'devel', b'debug.peer-request'
        ):
            dbg = self.ui.debug
            line = b'devel-peer-request: %s\n'
            dbg(line % cmd)
            for key, value in sorted(args.items()):
                if not isinstance(value, dict):
                    dbg(line % b'  %s: %d bytes' % (key, len(value)))
                else:
                    for dk, dv in sorted(value.items()):
                        dbg(line % b'  %s-%s: %d' % (key, dk, len(dv)))
        self.ui.debug(b"sending %s command\n" % cmd)
        self._pipeo.write(b"%s\n" % cmd)
        _func, names = wireprotov1server.commands[cmd]
        keys = names.split()
        wireargs = {}
        for k in keys:
            if k == b'*':
                wireargs[b'*'] = args
                break
            else:
                wireargs[k] = args[k]
                del args[k]
        for k, v in sorted(wireargs.items()):
            self._pipeo.write(b"%s %d\n" % (k, len(v)))
            if isinstance(v, dict):
                for dk, dv in v.items():
                    self._pipeo.write(b"%s %d\n" % (dk, len(dv)))
                    self._pipeo.write(dv)
            else:
                self._pipeo.write(v)
        self._pipeo.flush()
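        # Illustration only (not part of the original code): for a call such
        # as _sendrequest(b'lookup', {b'key': b'tip'}), the bytes written
        # above would be:
        #   lookup\n
        #   key 3\n
        #   tip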

        # We know exactly how many bytes are in the response. So return a proxy
        # around the raw output stream that allows reading exactly this many
        # bytes. Callers then can read() without fear of overrunning the
        # response.
        if framed:
            amount = self._getamount()
            return util.cappedreader(self._pipei, amount)

        return self._pipei

    def _callstream(self, cmd, **args):
        args = pycompat.byteskwargs(args)
        return self._sendrequest(cmd, args, framed=cmd in self._FRAMED_COMMANDS)

    def _callcompressable(self, cmd, **args):
        args = pycompat.byteskwargs(args)
        return self._sendrequest(cmd, args, framed=cmd in self._FRAMED_COMMANDS)

    def _call(self, cmd, **args):
        args = pycompat.byteskwargs(args)
        return self._sendrequest(cmd, args, framed=True).read()

    def _callpush(self, cmd, fp, **args):
        # The server responds with an empty frame if the client should
        # continue submitting the payload.
        r = self._call(cmd, **args)
        if r:
            return b'', r

        # The payload consists of frames with content followed by an empty
        # frame.
        for d in iter(lambda: fp.read(4096), b''):
            self._writeframed(d)
        self._writeframed(b"", flush=True)

        # In case of success, there is an empty frame and a frame containing
        # the integer result (as a string).
        # In case of error, there is a non-empty frame containing the error.
        r = self._readframed()
        if r:
            return b'', r
        return self._readframed(), b''

    def _calltwowaystream(self, cmd, fp, **args):
        # The server responds with an empty frame if the client should
        # continue submitting the payload.
        r = self._call(cmd, **args)
        if r:
            # XXX needs to be made better
            raise error.Abort(_(b'unexpected remote reply: %s') % r)

        # The payload consists of frames with content followed by an empty
        # frame.
        for d in iter(lambda: fp.read(4096), b''):
            self._writeframed(d)
        self._writeframed(b"", flush=True)

        return self._pipei

    def _getamount(self):
        l = self._pipei.readline()
        if l == b'\n':
            if self._autoreadstderr:
                self._readerr()
            msg = _(b'check previous remote output')
            self._abort(error.OutOfBandError(hint=msg))
        if self._autoreadstderr:
            self._readerr()
        try:
            return int(l)
        except ValueError:
            self._abort(error.ResponseError(_(b"unexpected response:"), l))

    def _readframed(self):
        size = self._getamount()
        if not size:
            return b''

        return self._pipei.read(size)

    def _writeframed(self, data, flush=False):
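        # Illustration only: a frame is the payload length in decimal ASCII
        # followed by a newline and then the payload itself, e.g.
        # _writeframed(b'abc') emits b'3\nabc', and an empty frame is just
        # b'0\n'.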
        self._pipeo.write(b"%d\n" % len(data))
        if data:
            self._pipeo.write(data)
        if flush:
            self._pipeo.flush()
        if self._autoreadstderr:
            self._readerr()


def makepeer(ui, path, proc, stdin, stdout, stderr, autoreadstderr=True):
    """Make a peer instance from existing pipes.

    ``path`` and ``proc`` are stored on the eventual peer instance and may
    not be used for anything meaningful.

    ``stdin``, ``stdout``, and ``stderr`` are the pipes connected to the
    SSH server's stdio handles.

    This function is factored out to allow creating peers that don't
    actually spawn a new process. It is useful for starting SSH protocol
    servers and clients via non-standard means, which can be useful for
    testing.
    """
    try:
        protoname, caps = _performhandshake(ui, stdin, stdout, stderr)
    except Exception:
        _cleanuppipes(ui, stdout, stdin, stderr, warn=None)
        raise

    if protoname == wireprototypes.SSHV1:
        return sshv1peer(
            ui,
            path,
            proc,
            stdin,
            stdout,
            stderr,
            caps,
            autoreadstderr=autoreadstderr,
        )
    else:
        _cleanuppipes(ui, stdout, stdin, stderr, warn=None)
        raise error.RepoError(
            _(b'unknown version of SSH protocol: %s') % protoname
        )


def instance(ui, path, create, intents=None, createopts=None):
    """Create an SSH peer.

    The returned object conforms to the ``wireprotov1peer.wirepeer`` interface.
    """
    u = urlutil.url(path, parsequery=False, parsefragment=False)
    if u.scheme != b'ssh' or not u.host or u.path is None:
        raise error.RepoError(_(b"couldn't parse location %s") % path)

    urlutil.checksafessh(path)

    if u.passwd is not None:
        raise error.RepoError(_(b'password in URL not supported'))

    sshcmd = ui.config(b'ui', b'ssh')
    remotecmd = ui.config(b'ui', b'remotecmd')
    sshaddenv = dict(ui.configitems(b'sshenv'))
    sshenv = procutil.shellenviron(sshaddenv)
    remotepath = u.path or b'.'
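    # Illustration only: for b'ssh://user@example.com/some/repo' u.path is
    # b'some/repo' (relative to the remote user's home directory), while
    # b'ssh://user@example.com//absolute/repo' yields b'/absolute/repo'.
    # With no path component at all, we fall back to b'.'.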

    args = procutil.sshargs(sshcmd, u.host, u.user, u.port)

    if create:
        # We /could/ pass createopts along, but only if the remote init
        # command knows how to handle them. We don't yet make any assumptions
        # about that. And without querying the remote, there's no way of
        # knowing if the remote even supports the requested features.
        if createopts:
            raise error.RepoError(
                _(
                    b'cannot create remote SSH repositories '
                    b'with extra options'
                )
            )

        cmd = b'%s %s %s' % (
            sshcmd,
            args,
            procutil.shellquote(
                b'%s init %s'
                % (_serverquote(remotecmd), _serverquote(remotepath))
            ),
        )
        ui.debug(b'running %s\n' % cmd)
        res = ui.system(cmd, blockedtag=b'sshpeer', environ=sshenv)
        if res != 0:
            raise error.RepoError(_(b'could not create remote repo'))

    proc, stdin, stdout, stderr = _makeconnection(
        ui, sshcmd, args, remotecmd, remotepath, sshenv
    )

    peer = makepeer(ui, path, proc, stdin, stdout, stderr)

    # Finally, if supported by the server, notify it about our own
    # capabilities.
    if b'protocaps' in peer.capabilities():
        try:
            peer._call(
                b"protocaps", caps=b' '.join(sorted(_clientcapabilities()))
            )
        except IOError:
            peer._cleanup()
            raise error.RepoError(_(b'capability exchange failed'))

    return peer