view tests/test-parseindex.t @ 44118:f81c17ec303c
hgdemandimport: apply lazy module loading to sys.meta_path finders
Python's `sys.meta_path` finders are the primary objects responsible for
finding a module at import time. When `import` is called, Python
iterates the objects in this list and calls `o.find_spec(...)` to obtain
a `ModuleSpec` (or `None` if the module couldn't be found by that
finder). If no meta path finder can find a module, the import fails.
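As an illustration only (not code from this change), that lookup roughly
amounts to the following:

```
import sys

def find_module_spec(name):
    # Roughly what the import machinery does: ask each registered meta
    # path finder for a ModuleSpec until one of them answers.
    for finder in sys.meta_path:
        spec = finder.find_spec(name, None)  # path=None for a top-level module
        if spec is not None:
            return spec
    raise ModuleNotFoundError('No module named %r' % name)

print(find_module_spec('json'))
```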
One of the default meta path finders is `PathFinder`. Its job is to
import modules from the filesystem, and it is probably the most important
importer. This finder looks at `sys.path` and `sys.path_hooks` to do
its job.
The `ModuleSpec` returned by a meta path finder's `find_spec()` has a
`loader` attribute, which defines the concrete module loader to use.
`sys.path_hooks` is a hook point for teaching `PathFinder` to
instantiate custom loader types.
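For instance (an illustrative snippet, not part of this change), the spec
for a filesystem-backed module carries the loader `PathFinder` picked, and
`sys.path_hooks` holds the factories it consulted:

```
import importlib.util
import sys

spec = importlib.util.find_spec('json')
print(spec.loader)     # typically a SourceFileLoader picked by PathFinder
print(sys.path_hooks)  # the factories PathFinder uses to build per-path finders
```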
Previously, we injected a custom entry into `sys.path_hooks` that told
`PathFinder` to wrap the default loaders with a loader that creates lazy
module objects.
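A hypothetical, simplified sketch of that kind of hook (source modules
only, using `importlib.util.LazyLoader` as the wrapping loader; the names
here are illustrative, not the actual hgdemandimport code):

```
import importlib.machinery
import importlib.util
import sys

# The stock path hook for source files, as PathFinder would normally use it.
_default_hook = importlib.machinery.FileFinder.path_hook(
    (importlib.machinery.SourceFileLoader,
     importlib.machinery.SOURCE_SUFFIXES))

def _lazy_path_hook(path):
    finder = _default_hook(path)  # raises ImportError for non-directory entries
    real_find_spec = finder.find_spec

    def find_spec(name, target=None):
        spec = real_find_spec(name, target)
        if spec is not None and spec.loader is not None:
            # Defer executing the module until an attribute is first accessed.
            spec.loader = importlib.util.LazyLoader(spec.loader)
        return spec

    finder.find_spec = find_spec
    return finder

# Installed ahead of the default hooks so PathFinder picks it first:
# sys.path_hooks.insert(0, _lazy_path_hook)
# sys.path_importer_cache.clear()
```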
This approach worked. But its main limitation was that it only applied
to the `PathFinder` meta path importer. Other meta path importers are
also registered. And in the case of PyOxidizer loading modules from
memory, `PathFinder` doesn't come into play at all, since PyOxidizer's
own meta path importer handles all imports.
This commit changes our approach to lazy module loading by proxying
all meta path importers. Specifically, we overload the `find_spec()`
method to swap in a wrapped loader on the `ModuleSpec` before it
is returned. The end result is that all meta path importers should
be lazy.
As much as I would have loved to utilize `.__class__` manipulation to
achieve this, some meta path importers are implemented in C/Rust
in such a way that they cannot be monkeypatched. This is why we
use `__getattribute__` to define a proxy.
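A minimal sketch of such a proxy (illustrative names, not the exact
hgdemandimport implementation): every attribute access is forwarded to
the real finder, except `find_spec`, which returns a wrapper that swaps
a lazy loader onto the spec:

```
import importlib.util
import sys

class _LazyFinderProxy(object):
    def __init__(self, finder):
        self._finder = finder

    def __getattribute__(self, name):
        finder = object.__getattribute__(self, '_finder')
        if name != 'find_spec':
            # Forward everything else to the real finder, which may be a
            # C/Rust object that we cannot modify directly.
            return getattr(finder, name)

        def find_spec(fullname, path, target=None):
            spec = finder.find_spec(fullname, path, target)
            if spec is not None and spec.loader is not None:
                spec.loader = importlib.util.LazyLoader(spec.loader)
            return spec

        return find_spec

# Wrapping every registered finder makes all of them hand out lazy modules:
# sys.meta_path = [_LazyFinderProxy(f) for f in sys.meta_path]
```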
Also, this change could theoretically open us up to regressions in
meta path importers whose loaders create module objects that can't
be monkeypatched. But I'm not aware of any of these in the wild,
so I think we'll be safe.
According to hyperfine, this change yields a decent startup time win of
5-6ms:
```
Benchmark #1: ~/.pyenv/versions/3.6.10/bin/python ./hg version
  Time (mean ± σ):      86.8 ms ±  0.5 ms    [User: 78.0 ms, System: 8.7 ms]
  Range (min … max):    86.0 ms … 89.1 ms    50 runs

  Time (mean ± σ):      81.1 ms ±  2.7 ms    [User: 74.5 ms, System: 6.5 ms]
  Range (min … max):    77.8 ms … 90.5 ms    50 runs

Benchmark #2: ~/.pyenv/versions/3.7.6/bin/python ./hg version
  Time (mean ± σ):      78.9 ms ±  0.6 ms    [User: 70.2 ms, System: 8.7 ms]
  Range (min … max):    78.1 ms … 81.2 ms    50 runs

  Time (mean ± σ):      73.4 ms ±  0.6 ms    [User: 65.3 ms, System: 8.0 ms]
  Range (min … max):    72.4 ms … 75.7 ms    50 runs

Benchmark #3: ~/.pyenv/versions/3.8.1/bin/python ./hg version
  Time (mean ± σ):      78.1 ms ±  0.6 ms    [User: 70.2 ms, System: 7.9 ms]
  Range (min … max):    77.4 ms … 80.9 ms    50 runs

  Time (mean ± σ):      72.1 ms ±  0.4 ms    [User: 64.4 ms, System: 7.6 ms]
  Range (min … max):    71.4 ms … 74.1 ms    50 runs
```
Differential Revision: https://phab.mercurial-scm.org/D7954
author    Gregory Szorc <gregory.szorc@gmail.com>
date      Mon, 20 Jan 2020 23:51:25 -0800
parents   3518da504303
children  61e7464477ac
revlog.parseindex must be able to parse the index file even if
an index entry is split between two 64k blocks.  The ideal test
would be to create an index file with inline data where
64k < size < 64k + 64 (64k is the size of the read buffer, 64 is
the size of an index entry) and with an index entry starting right
before the 64k block boundary, and try to read it.
We approximate that by reducing the read buffer to 1 byte.

  $ hg init a
  $ cd a
  $ echo abc > foo
  $ hg add foo
  $ hg commit -m 'add foo'
  $ echo >> foo
  $ hg commit -m 'change foo'
  $ hg log -r 0:
  changeset:   0:7c31755bf9b5
  user:        test
  date:        Thu Jan 01 00:00:00 1970 +0000
  summary:     add foo

  changeset:   1:26333235a41c
  tag:         tip
  user:        test
  date:        Thu Jan 01 00:00:00 1970 +0000
  summary:     change foo


  $ cat >> test.py << EOF
  > from __future__ import print_function
  > from mercurial import changelog, node, pycompat, vfs
  >
  > class singlebyteread(object):
  >     def __init__(self, real):
  >         self.real = real
  >
  >     def read(self, size=-1):
  >         if size == 65536:
  >             size = 1
  >         return self.real.read(size)
  >
  >     def __getattr__(self, key):
  >         return getattr(self.real, key)
  >
  >     def __enter__(self):
  >         self.real.__enter__()
  >         return self
  >
  >     def __exit__(self, *args, **kwargs):
  >         return self.real.__exit__(*args, **kwargs)
  >
  > def opener(*args):
  >     o = vfs.vfs(*args)
  >     def wrapper(*a, **kwargs):
  >         f = o(*a, **kwargs)
  >         return singlebyteread(f)
  >     wrapper.options = o.options
  >     return wrapper
  >
  > cl = changelog.changelog(opener(b'.hg/store'))
  > print(len(cl), 'revisions:')
  > for r in cl:
  >     print(pycompat.sysstr(node.short(cl.node(r))))
  > EOF
  $ "$PYTHON" test.py
  2 revisions:
  7c31755bf9b5
  26333235a41c

  $ cd ..

#if no-pure

Test SEGV caused by bad revision passed to reachableroots() (issue4775):

  $ cd a

  $ "$PYTHON" <<EOF
  > from __future__ import print_function
  > from mercurial import changelog, vfs
  > cl = changelog.changelog(vfs.vfs(b'.hg/store'))
  > print('good heads:')
  > for head in [0, len(cl) - 1, -1]:
  >     print('%s: %r' % (head, cl.reachableroots(0, [head], [0])))
  > print('bad heads:')
  > for head in [len(cl), 10000, -2, -10000, None]:
  >     print('%s:' % head, end=' ')
  >     try:
  >         cl.reachableroots(0, [head], [0])
  >         print('uncaught buffer overflow?')
  >     except (IndexError, TypeError) as inst:
  >         print(inst)
  > print('good roots:')
  > for root in [0, len(cl) - 1, -1]:
  >     print('%s: %r' % (root, cl.reachableroots(root, [len(cl) - 1], [root])))
  > print('out-of-range roots are ignored:')
  > for root in [len(cl), 10000, -2, -10000]:
  >     print('%s: %r' % (root, cl.reachableroots(root, [len(cl) - 1], [root])))
  > print('bad roots:')
  > for root in [None]:
  >     print('%s:' % root, end=' ')
  >     try:
  >         cl.reachableroots(root, [len(cl) - 1], [root])
  >         print('uncaught error?')
  >     except TypeError as inst:
  >         print(inst)
  > EOF
  good heads:
  0: [0]
  1: [0]
  -1: []
  bad heads:
  2: head out of range
  10000: head out of range
  -2: head out of range
  -10000: head out of range
  None: an integer is required( .got type NoneType.)? (re)
  good roots:
  0: [0]
  1: [1]
  -1: [-1]
  out-of-range roots are ignored:
  2: []
  10000: []
  -2: []
  -10000: []
  bad roots:
  None: an integer is required( .got type NoneType.)? (re)

  $ cd ..
Test corrupted p1/p2 fields that could cause SEGV at parsers.c:

  $ mkdir invalidparent
  $ cd invalidparent
  $ hg clone --pull -q --config phases.publish=False ../a limit --config format.sparse-revlog=no
  $ hg clone --pull -q --config phases.publish=False ../a neglimit --config format.sparse-revlog=no
  $ hg clone --pull -q --config phases.publish=False ../a segv --config format.sparse-revlog=no
  $ rm -R limit/.hg/cache neglimit/.hg/cache segv/.hg/cache

  $ "$PYTHON" <<EOF
  > data = open("limit/.hg/store/00changelog.i", "rb").read()
  > poisons = [
  >     (b'limit', b'\0\0\0\x02'),
  >     (b'neglimit', b'\xff\xff\xff\xfe'),
  >     (b'segv', b'\0\x01\0\0'),
  > ]
  > for n, p in poisons:
  >     # corrupt p1 at rev0 and p2 at rev1
  >     d = data[:24] + p + data[28:127 + 28] + p + data[127 + 32:]
  >     open(n + b"/.hg/store/00changelog.i", "wb").write(d)
  > EOF

  $ hg -R limit debugrevlogindex -f1 -c
     rev flag   size   link     p1     p2       nodeid
       0 0000     62      0      2     -1 7c31755bf9b5
       1 0000     65      1      0      2 26333235a41c

  $ hg -R limit debugdeltachain -c
      rev  chain# chainlen     prev   delta       size    rawsize  chainsize     ratio   lindist extradist extraratio
        0       1        1       -1    base         63         62         63   1.01613        63         0    0.00000
        1       2        1       -1    base         66         65         66   1.01538        66         0    0.00000

  $ hg -R neglimit debugrevlogindex -f1 -c
     rev flag   size   link     p1     p2       nodeid
       0 0000     62      0     -2     -1 7c31755bf9b5
       1 0000     65      1      0     -2 26333235a41c

  $ hg -R segv debugrevlogindex -f1 -c
     rev flag   size   link     p1     p2       nodeid
       0 0000     62      0  65536     -1 7c31755bf9b5
       1 0000     65      1      0  65536 26333235a41c

  $ hg -R segv debugdeltachain -c
      rev  chain# chainlen     prev   delta       size    rawsize  chainsize     ratio   lindist extradist extraratio
        0       1        1       -1    base         63         62         63   1.01613        63         0    0.00000
        1       2        1       -1    base         66         65         66   1.01538        66         0    0.00000

  $ cat <<EOF > test.py
  > from __future__ import print_function
  > import sys
  > from mercurial import changelog, pycompat, vfs
  > cl = changelog.changelog(vfs.vfs(pycompat.fsencode(sys.argv[1])))
  > n0, n1 = cl.node(0), cl.node(1)
  > ops = [
  >     ('reachableroots',
  >      lambda: cl.index.reachableroots2(0, [1], [0], False)),
  >     ('compute_phases_map_sets', lambda: cl.computephases([[0], []])),
  >     ('index_headrevs', lambda: cl.headrevs()),
  >     ('find_gca_candidates', lambda: cl.commonancestorsheads(n0, n1)),
  >     ('find_deepest', lambda: cl.ancestor(n0, n1)),
  > ]
  > for l, f in ops:
  >     print(l + ':', end=' ')
  >     try:
  >         f()
  >         print('uncaught buffer overflow?')
  >     except ValueError as inst:
  >         print(inst)
  > EOF

  $ "$PYTHON" test.py limit/.hg/store
  reachableroots: parent out of range
  compute_phases_map_sets: parent out of range
  index_headrevs: parent out of range
  find_gca_candidates: parent out of range
  find_deepest: parent out of range
  $ "$PYTHON" test.py neglimit/.hg/store
  reachableroots: parent out of range
  compute_phases_map_sets: parent out of range
  index_headrevs: parent out of range
  find_gca_candidates: parent out of range
  find_deepest: parent out of range
  $ "$PYTHON" test.py segv/.hg/store
  reachableroots: parent out of range
  compute_phases_map_sets: parent out of range
  index_headrevs: parent out of range
  find_gca_candidates: parent out of range
  find_deepest: parent out of range

  $ cd ..

#endif