view tests/test-remotefilelog-http.t @ 44363:f7459da77f23
nodemap: introduce an option to use mmap to read the nodemap mapping
The performance and memory benefits are much greater if we do not have to copy
all the data into memory for every lookup, so we introduce an option (on by
default) to read the data using mmap.
This changeset is the last one defining the API that an index must implement to
support persistent nodemap data (the index has to be able to use the mmapped
data).
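As an illustration of the technique only (a hedged sketch in Python, not the
actual Mercurial implementation; the function names and file handling below are
made up for the example), reading a data file through mmap avoids copying its
whole content into a bytes object up front:

  import mmap

  def load_nodemap_data(path):
      # Illustrative only: map the (hypothetical) nodemap data file into
      # memory. Slicing the returned mmap object later only faults in the
      # pages that are actually touched; nothing is copied eagerly.
      with open(path, 'rb') as fd:
          return mmap.mmap(fd.fileno(), 0, access=mmap.ACCESS_READ)

  def load_nodemap_data_copy(path):
      # The plain read() alternative copies the entire file immediately.
      with open(path, 'rb') as fd:
          return fd.read()

Both functions return an object that supports slicing and the buffer protocol,
which is one way an index implementation could consume either form
transparently.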
Below are some benchmarks comparing the best we currently have in 5.3 with the
final step of this series (using the persistent nodemap implementation in
Rust). The benchmarks run `hg perfindex` with various revsets and the following
variants:
Before:
* do not use the persistent nodemap
* use the CPython implementation of the index for nodemap
* use mmapping of the changelog index
After:
* use the MixedIndex Rust code, with the NodeTree object for nodemap access
(still in review)
* use the persistent nodemap data from disk
* access the persistent nodemap data through mmap
* use mmapping of the changelog index
The persistent nodemap greatly speeds up most operations on very large
repositories. Some of the previously very fast lookups end up a bit slower
because the persistent nodemap has to be set up first. However, the absolute
slowdown is very small and does not matter in the big picture.
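To make the trade-off concrete, below is a minimal, self-contained timing
sketch (an illustration with an arbitrary 64 MiB file and access pattern, not
the `hg perfindex` benchmark used above): copying a whole large file just to
look at a few bytes is much slower than mapping it, while the mapping itself
carries a small fixed setup cost, which is the kind of constant overhead that
shows up in the fastest revsets.

  import mmap
  import os
  import tempfile
  import timeit

  # Stand-in for a large on-disk index/nodemap file (size is arbitrary).
  path = os.path.join(tempfile.mkdtemp(), 'data.bin')
  with open(path, 'wb') as fd:
      fd.write(b'\x00' * (64 * 1024 * 1024))

  def lookup_with_read():
      # Copies all 64 MiB before touching 8 bytes of it.
      with open(path, 'rb') as fd:
          return fd.read()[1024:1032]

  def lookup_with_mmap():
      # Maps the file; only the touched pages are actually read.
      with open(path, 'rb') as fd:
          with mmap.mmap(fd.fileno(), 0, access=mmap.ACCESS_READ) as m:
              return m[1024:1032]

  print('read():', timeit.timeit(lookup_with_read, number=20))
  print('mmap  :', timeit.timeit(lookup_with_mmap, number=20))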
Here are some numbers (in seconds) for the reference copy of mozilla-try:
Revset               Before      After       abs-change    speedup
-10000:              0.004622    0.005532     0.000910     ×    0.83
-10:                 0.000050    0.000132     0.000082     ×    0.37
tip                  0.000052    0.000085     0.000033     ×    0.61
0 + (-10000:)        0.028222    0.005337    -0.022885     ×    5.29
0                    0.023521    0.000084    -0.023437     ×  280.01
(-10000:) + 0        0.235539    0.005308    -0.230231     ×   44.37
(-10:) + :9          0.232883    0.000180    -0.232703     × 1293.79
(-10000:) + (:99)    0.238735    0.005358    -0.233377     ×   44.55
:99 + (-10000:)      0.317942    0.005593    -0.312349     ×   56.84
:9 + (-10:)          0.313372    0.000179    -0.313193     × 1750.68
:9                   0.316450    0.000143    -0.316307     × 2212.93
On smaller repositories, the cost of nodemap-related operations is not as big,
so the win is much more modest. Yet it helps shave off a handful of
milliseconds here and there.
Here are some numbers (in seconds) for the reference copy of mercurial:
Revset               Before      After       abs-change    speedup
-10:                 0.000065    0.000097     0.000032     ×  0.67
tip                  0.000063    0.000078     0.000015     ×  0.80
0                    0.000561    0.000079    -0.000482     ×  7.10
-10000:              0.004609    0.003648    -0.000961     ×  1.26
0 + (-10000:)        0.005023    0.003715    -0.001307     ×  1.35
(-10:) + :9          0.002187    0.000108    -0.002079     × 20.25
(-10000:) + 0        0.006252    0.003716    -0.002536     ×  1.68
(-10000:) + (:99)    0.006367    0.003707    -0.002660     ×  1.71
:9 + (-10:)          0.003846    0.000110    -0.003736     × 34.96
:9                   0.003854    0.000099    -0.003755     × 38.92
:99 + (-10000:)      0.007644    0.003778    -0.003866     ×  2.02
Differential Revision: https://phab.mercurial-scm.org/D7894
author    Pierre-Yves David <pierre-yves.david@octobus.net>
date      Tue, 11 Feb 2020 11:18:52 +0100
parents   a495435d980e
children  1d075b857c90
#require no-windows

  $ . "$TESTDIR/remotefilelog-library.sh"

  $ hg init master
  $ cd master
  $ cat >> .hg/hgrc <<EOF
  > [remotefilelog]
  > server=True
  > EOF
  $ echo x > x
  $ echo y > y
  $ hg commit -qAm x
  $ hg serve -p $HGPORT -d --pid-file=../hg1.pid -E ../error.log -A ../access.log

Build a query string for later use:
  $ GET=`hg debugdata -m 0 | $PYTHON -c \
  > 'import sys ; print([("?cmd=x_rfl_getfile&file=%s&node=%s" % tuple(s.split("\0"))) for s in sys.stdin.read().splitlines()][0])'`

  $ cd ..
  $ cat hg1.pid >> $DAEMON_PIDS

  $ hgcloneshallow http://localhost:$HGPORT/ shallow -q
  2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)

  $ grep getfile access.log
  * "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=x_rfl_getfile+*node%3D1406e74118627694268417491f018a4a883152f0* (glob)

Clear filenode cache so we can test fetching with a modified batch size

  $ rm -r $TESTTMP/hgcache

Now do a fetch with a large batch size so we're sure it works

  $ hgcloneshallow http://localhost:$HGPORT/ shallow-large-batch \
  >   --config remotefilelog.batchsize=1000 -q
  2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)

The 'remotefilelog' capability should *not* be exported over http(s),
as the getfile method it offers doesn't work with http.

  $ get-with-headers.py localhost:$HGPORT '?cmd=capabilities' | grep lookup | identifyrflcaps
  x_rfl_getfile
  x_rfl_getflogheads

  $ get-with-headers.py localhost:$HGPORT '?cmd=hello' | grep lookup | identifyrflcaps
  x_rfl_getfile
  x_rfl_getflogheads

  $ get-with-headers.py localhost:$HGPORT '?cmd=this-command-does-not-exist' | head -n 1
  400 no such method: this-command-does-not-exist
  $ get-with-headers.py localhost:$HGPORT '?cmd=x_rfl_getfiles' | head -n 1
  400 no such method: x_rfl_getfiles

Verify serving from a shallow clone doesn't allow for remotefile
fetches. This also serves to test the error handling for our batchable
getfile RPC.

  $ cd shallow
  $ hg serve -p $HGPORT1 -d --pid-file=../hg2.pid -E ../error2.log
  $ cd ..
  $ cat hg2.pid >> $DAEMON_PIDS

This GET should work, because this server is serving master, which is
a full clone.

  $ get-with-headers.py localhost:$HGPORT "$GET"
  200 Script output follows
  0\x00x\x9c3b\xa8\xe0\x12a{\xee(\x91T6E\xadE\xdcS\x9e\xb1\xcb\xab\xc30\xe8\x03\x03\x91 \xe4\xc6\xfb\x99J,\x17\x0c\x9f-\xcb\xfcR7c\xf3c\x97r\xbb\x10\x06\x00\x96m\x121 (no-eol) (esc)

This GET should fail using the in-band signalling mechanism, because
it's not a full clone. Note that it's also plausible for servers to
refuse to serve file contents for other reasons, like the file
contents not being visible to the current user.

  $ get-with-headers.py localhost:$HGPORT1 "$GET"
  200 Script output follows
  1\x00cannot fetch remote files from shallow repo (no-eol) (esc)

Clones should work with httppostargs turned on

  $ cd master
  $ hg --config experimental.httppostargs=1 serve -p $HGPORT2 -d --pid-file=../hg3.pid -E ../error3.log
  $ cd ..
  $ cat hg3.pid >> $DAEMON_PIDS

Clear filenode cache so we can test fetching with a modified batch size

  $ rm -r $TESTTMP/hgcache

  $ hgcloneshallow http://localhost:$HGPORT2/ shallow-postargs -q
  2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)

All error logs should be empty:

  $ cat error.log
  $ cat error2.log
  $ cat error3.log