contrib/hgperf
author Gregory Szorc <gregory.szorc@gmail.com>
Mon, 08 Oct 2018 17:10:59 -0700
changeset 40132 e67522413ca8
parent 34533 163fa0aea71e
child 43703 99e231afc29c
permissions -rwxr-xr-x
wireprotov2: define and use stream encoders

Now that we have basic support for defining stream encoding, it is time
to start doing something with it.

We define various classes implementing stream encoders/decoders for the
defined encoding profiles. This is relatively straightforward.

We teach the inputstream and outputstream classes how to encode, decode,
and flush data. We then teach the clientreactor how to filter received
data through the inputstream decoder.

One of the features of the framing format is that streams can span
requests. This is a differentiating feature from, say, HTTP/2, which
associates streams with requests. By allowing streams to span requests,
we can reuse compression context data across requests/responses. But in
order to do this, we need a mechanism to "flush" the encoder at logical
boundaries so that receivers receive all data where it is expected. A
"flush" event is distinct from a "finish" event from the perspective of
certain compressors, because a "flush" retains compression context state
whereas a "finish" operation does not. This is why encoders have both a
flush() and a finish(), and each uses specific flushing semantics on the
underlying compressor.

The added tests verify various behavior of decoders via clientreactor.
These tests do exercise some compression behavior via use of
outputstream. But for all intents and purposes, server reactor support
for encoding is not yet implemented.

Differential Revision: https://phab.mercurial-scm.org/D4921
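
To make the flush()/finish() distinction concrete, here is a minimal
sketch using the standard library's zlib module as a stand-in for the
actual wireprotov2 encoder classes (whose APIs are not shown here): a
sync flush emits everything buffered so far while keeping the
compression context alive for later chunks, whereas finishing the
stream emits the trailer and discards that context::

    import zlib

    compressor = zlib.compressobj()

    # First logical boundary: flush so the receiver can decode all data
    # sent so far, but keep the compression context for later chunks.
    part1 = compressor.compress(b'data for request 1')
    part1 += compressor.flush(zlib.Z_SYNC_FLUSH)

    # A later chunk on the same stream still reuses the shared context.
    part2 = compressor.compress(b'data for request 2')
    part2 += compressor.flush(zlib.Z_SYNC_FLUSH)

    # End of stream: finishing emits the trailer; the compressor cannot
    # be used afterwards.
    trailer = compressor.flush(zlib.Z_FINISH)

    # A single decompressor consumes the whole stream across boundaries.
    decompressor = zlib.decompressobj()
    assert decompressor.decompress(part1 + part2 + trailer) == (
        b'data for request 1' b'data for request 2')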

#!/usr/bin/env python
#
# hgperf - measure performance of Mercurial commands
#
# Copyright 2014 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''measure performance of Mercurial commands

Using ``hgperf`` instead of ``hg`` measures performance of the target
Mercurial command. For example, the execution below measures
performance of :hg:`heads --topo`::

    $ hgperf heads --topo

All command output via ``ui`` is suppressed, and only the measurement
result is displayed; see also the "perf" extension in "contrib".

Costs of processing before dispatching to the command function, such
as the following, are not measured::

    - parsing command line (e.g. option validity check)
    - reading in configuration files

However, invocation of ``pre-`` and ``post-`` hooks for the target
command is measured, even though they run before or after dispatching
to the command function, because they may be required to repeat
execution of the target command correctly.
'''

import os
import sys

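# The LIBDIR placeholder below is replaced with the path to Mercurial's
# libraries at install time. The split string literal in the comparison
# keeps that substitution from rewriting the check itself, so the
# sys.path adjustment is skipped when running from a source tree.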
libdir = '@LIBDIR@'

if libdir != '@' 'LIBDIR' '@':
    if not os.path.isabs(libdir):
        libdir = os.path.join(os.path.dirname(os.path.realpath(__file__)),
                              libdir)
        libdir = os.path.abspath(libdir)
    sys.path.insert(0, libdir)

# enable importing on demand to reduce startup time
try:
    from mercurial import demandimport; demandimport.enable()
except ImportError:
    sys.stderr.write("abort: couldn't find mercurial libraries in [%s]\n" %
                     ' '.join(sys.path))
    sys.stderr.write("(check your install and PYTHONPATH)\n")
    sys.exit(-1)

from mercurial import (
    dispatch,
    util,
)

def timer(func, title=None):
    results = []
    begin = util.timer()
    count = 0
    while True:
        ostart = os.times()
        cstart = util.timer()
        r = func()
        cstop = util.timer()
        ostop = os.times()
        count += 1
        a, b = ostart, ostop
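        # record (wall clock, user CPU, system CPU) deltas for this run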
        results.append((cstop - cstart, b[0] - a[0], b[1]-a[1]))
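        # stop once there are enough samples: 100+ runs after 3 seconds,
        # or 3+ runs after 10 seconds for slow commands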
        if cstop - begin > 3 and count >= 100:
            break
        if cstop - begin > 10 and count >= 3:
            break
    if title:
        sys.stderr.write("! %s\n" % title)
    if r:
        sys.stderr.write("! result: %s\n" % r)
    m = min(results)
    sys.stderr.write("! wall %f comb %f user %f sys %f (best of %d)\n"
                     % (m[0], m[1] + m[2], m[1], m[2], count))

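# Wrap dispatch.runcommand so the target command is executed under timer()
# with all ui output buffered and discarded; only the measurements written
# to stderr reach the user.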
orgruncommand = dispatch.runcommand

def runcommand(lui, repo, cmd, fullargs, ui, options, d, cmdpats, cmdoptions):
    ui.pushbuffer()
    lui.pushbuffer()
    timer(lambda : orgruncommand(lui, repo, cmd, fullargs, ui,
                                 options, d, cmdpats, cmdoptions))
    ui.popbuffer()
    lui.popbuffer()

dispatch.runcommand = runcommand

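# Enter Mercurial's normal dispatch machinery, which ends up calling the
# wrapped runcommand() above for the command given on the command line.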
dispatch.run()