perf: add command to benchmark bundle reading
Upcoming commits will refactor bundle2 I/O code.
This commit establishes a `hg perfbundleread` command that measures
how long it takes to read a bundle using various mechanisms.
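The command is pointed at a bundle file on disk and times several read
strategies against it; an invocation looks roughly like
`hg perfbundleread path/to/bundle.hg` (with the perf extension from
contrib/perf.py enabled).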
As a baseline, here's output from an uncompressed bundle1
bundle of my Firefox repo (7,098,622,890 bytes):
! read(8k)
! wall 0.763481 comb 0.760000 user 0.160000 sys 0.600000 (best of 6)
! read(16k)
! wall 0.644512 comb 0.640000 user 0.110000 sys 0.530000 (best of 16)
! read(32k)
! wall 0.581172 comb 0.590000 user 0.060000 sys 0.530000 (best of 18)
! read(128k)
! wall 0.535183 comb 0.530000 user 0.010000 sys 0.520000 (best of 19)
! cg1 deltaiter()
! wall 0.873500 comb 0.880000 user 0.840000 sys 0.040000 (best of 12)
! cg1 getchunks()
! wall 6.283797 comb 6.270000 user 5.570000 sys 0.700000 (best of 3)
! cg1 read(8k)
! wall 1.097173 comb 1.100000 user 0.400000 sys 0.700000 (best of 10)
! cg1 read(16k)
! wall 0.810750 comb 0.800000 user 0.200000 sys 0.600000 (best of 13)
! cg1 read(32k)
! wall 0.671215 comb 0.670000 user 0.110000 sys 0.560000 (best of 15)
! cg1 read(128k)
! wall 0.597857 comb 0.600000 user 0.020000 sys 0.580000 (best of 15)
And from an uncompressed bundle2 bundle (6,070,036,163 bytes):
! read(8k)
! wall 0.676997 comb 0.680000 user 0.160000 sys 0.520000 (best of 15)
! read(16k)
! wall 0.592706 comb 0.590000 user 0.080000 sys 0.510000 (best of 17)
! read(32k)
! wall 0.529395 comb 0.530000 user 0.050000 sys 0.480000 (best of 16)
! read(128k)
! wall 0.491270 comb 0.490000 user 0.010000 sys 0.480000 (best of 19)
! bundle2 forwardchunks()
! wall 2.997131 comb 2.990000 user 2.270000 sys 0.720000 (best of 4)
! bundle2 iterparts()
! wall 12.247197 comb 10.670000 user 8.170000 sys 2.500000 (best of 3)
! bundle2 part seek()
! wall 11.761675 comb 10.500000 user 8.240000 sys 2.260000 (best of 3)
! bundle2 part read(8k)
! wall 9.116163 comb 9.110000 user 8.240000 sys 0.870000 (best of 3)
! bundle2 part read(16k)
! wall 8.984362 comb 8.970000 user 8.110000 sys 0.860000 (best of 3)
! bundle2 part read(32k)
! wall 8.758364 comb 8.740000 user 7.860000 sys 0.880000 (best of 3)
! bundle2 part read(128k)
! wall 8.749040 comb 8.730000 user 7.830000 sys 0.900000 (best of 3)
We already see some interesting data. Notably, bundle2 has significant
overhead compared to bundle1. This matters for e.g. stream clone bundles,
which can be applied at >1 Gbps.
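To put that overhead in perspective (derived from the timings above): a raw
128k read() gets through the 6,070,036,163 byte bundle2 file in ~0.49s
(over 12 GB/s), while iterparts() needs ~12.2s (roughly 0.5 GB/s, i.e. about
4 Gbps), so part iteration rather than raw I/O becomes the limiting factor
once the network is fast enough.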
Differential Revision: https://phab.mercurial-scm.org/D1385
# demandimportpy3 - global demand-loading of modules for Mercurial
#
# Copyright 2017 Facebook Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Lazy loading for Python 3.6 and above.
This uses the new importlib finder/loader functionality available in Python 3.5
and up. The code reuses most of the mechanics implemented inside importlib.util,
but with a few additions:
* Allow excluding certain modules from lazy imports.
* Expose an interface that's substantially the same as demandimport for
Python 2.
This also has some limitations compared to the Python 2 implementation:
* Much of the logic is per-package, not per-module, so any packages loaded
before demandimport is enabled will not be lazily imported in the future. In
practice, we only expect builtins to be loaded before demandimport is
enabled.
"""
# This line is unnecessary, but it satisfies test-check-py3-compat.t.
from __future__ import absolute_import
import contextlib
import importlib.abc
import importlib.machinery
import importlib.util
import sys
_deactivated = False
class _lazyloaderex(importlib.util.LazyLoader):
    """This is a LazyLoader except it also follows the _deactivated global and
    the ignore list.
    """
    def exec_module(self, module):
        """Make the module load lazily."""
        if _deactivated or module.__name__ in ignore:
            self.loader.exec_module(module)
        else:
            super().exec_module(module)
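# For reference: importlib.util.LazyLoader, which _lazyloaderex extends, defers
# executing a module's body until an attribute is first accessed. A minimal
# standalone sketch of that behaviour, following the stdlib LazyLoader recipe
# ('json' is just an arbitrary example module):
#
#   import importlib.util
#   import sys
#
#   spec = importlib.util.find_spec('json')
#   loader = importlib.util.LazyLoader(spec.loader)
#   spec.loader = loader
#   module = importlib.util.module_from_spec(spec)
#   sys.modules['json'] = module
#   loader.exec_module(module)  # the module body has not run yet
#   module.dumps({})            # first attribute access triggers execution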
# This is 3.6+ because with Python 3.5 it isn't possible to lazily load
# extensions. See the discussion in https://python.org/sf/26186 for more.
_extensions_loader = _lazyloaderex.factory(
    importlib.machinery.ExtensionFileLoader)
_bytecode_loader = _lazyloaderex.factory(
    importlib.machinery.SourcelessFileLoader)
_source_loader = _lazyloaderex.factory(importlib.machinery.SourceFileLoader)

def _makefinder(path):
    return importlib.machinery.FileFinder(
        path,
        # This is the order in which loaders are passed in in core Python.
        (_extensions_loader, importlib.machinery.EXTENSION_SUFFIXES),
        (_source_loader, importlib.machinery.SOURCE_SUFFIXES),
        (_bytecode_loader, importlib.machinery.BYTECODE_SUFFIXES),
    )
ignore = []
def init(ignorelist):
    global ignore
    ignore = ignorelist

def isenabled():
    return _makefinder in sys.path_hooks and not _deactivated

def disable():
    try:
        while True:
            sys.path_hooks.remove(_makefinder)
    except ValueError:
        pass

def enable():
    sys.path_hooks.insert(0, _makefinder)
@contextlib.contextmanager
def deactivated():
    # This implementation is a bit different from Python 2's. Python 3
    # maintains a per-package finder cache in sys.path_importer_cache (see
    # PEP 302). This means that we can't just call disable + enable.
    # If we do that, in situations like:
    #
    #   demandimport.enable()
    #   ...
    #   from foo.bar import mod1
    #   with demandimport.deactivated():
    #       from foo.bar import mod2
    #
    # mod2 will be imported lazily. (The converse also holds -- whatever finder
    # first gets cached will be used.)
    #
    # Instead, have a global flag the LazyLoader can use.
    global _deactivated
    demandenabled = isenabled()
    if demandenabled:
        _deactivated = True
    try:
        yield
    finally:
        if demandenabled:
            _deactivated = False
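# A minimal usage sketch of this module's API (hypothetical standalone caller;
# the module and ignore-list names below are placeholders, mirroring the
# foo.bar example in deactivated() above):
#
#   import demandimportpy3 as demandimport
#
#   demandimport.init(['slow_to_defer'])   # names to always import eagerly
#   demandimport.enable()
#   from foo import bar        # bar's body runs on first attribute access
#   with demandimport.deactivated():
#       from foo import baz    # imported eagerly inside this block
#   demandimport.disable()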