view tests/test-remotefilelog-http.t @ 42743:8c9a6adec67a

rust-discovery: using the children cache in add_missing

The DAG range computation often needs to get back to very old revisions, and
turns out to be disproportionately long given that the end goal is to remove
the descendants of the given missing revisions from the undecided set. The
fast iteration capabilities available in the Rust case make it possible to
avoid the DAG range entirely, at the cost of precomputing the children cache,
and to simply iterate on children of the given missing revisions. This is a
case where staying on the same side of the interface between the two
languages has clear benefits.

On discoveries with initial undecided sets small enough to bypass sampling
entirely, the total cost of computing the children cache and the subsequent
iteration becomes better than the Python + C counterpart, which relies on
reachableroots2. For example, on a repo with more than one million revisions
and an initial undecided set of 11 elements, we get these figures:

Rust version with simple iteration:
    addcommons: 57.287us
    first undecided computation: 184.278334ms
    first children cache computation: 131.056us
    addmissings iteration: 42.766us
    first addinfo total: 185.24 ms

Python + C version:
    first addcommons: 0.29 ms
    addcommons: 0.21 ms
    first undecided computation: 191.35 ms
    addmissings: 45.75 ms
    first addinfo total: 237.77 ms

On discoveries with large undecided sets, the initial price paid makes the
first addinfo slower than the Python + C version, but that's more than
compensated by the gain in sampling and subsequent iterations. Here's an
extreme example with an undecided set of a million revisions:

Rust version:
    first undecided computation: 293.842629ms
    first children cache computation: 407.911297ms
    addmissings iteration: 34.312869ms
    first addinfo total: 776.02 ms
    taking initial sample
    query 2: sampling time: 1318.38 ms
    query 2; still undecided: 1005013, sample size is: 200
    addmissings: 143.062us

Python + C version:
    first undecided computation: 298.13 ms
    addmissings: 80.13 ms
    first addinfo total: 399.62 ms
    taking initial sample
    query 2: sampling time: 3957.23 ms
    query 2; still undecided: 1005013, sample size is: 200
    addmissings: 52.88 ms

Differential Revision: https://phab.mercurial-scm.org/D6428
author Georges Racinet <georges.racinet@octobus.net>
date Tue, 16 Apr 2019 01:16:39 +0200
parents a495435d980e
children 1d075b857c90
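As an illustration of the strategy described in the commit message above
(a sketch in Python for readability, not the actual Rust code): once the
children cache is precomputed, removing the missing revisions and all their
descendants from the undecided set is a plain breadth-first walk, with no
DAG range computation involved:

    from collections import deque

    # Sketch of add_missing's iteration strategy (illustration only):
    # walk the precomputed children cache from the missing revisions and
    # drop every revision reached from the undecided set.
    def add_missing(children, missing, undecided):
        """children: dict mapping each revision to a list of its children.
        missing: iterable of known-missing revisions.
        undecided: set of revisions, shrunk in place."""
        seen = set(missing)
        queue = deque(missing)
        while queue:
            rev = queue.popleft()
            undecided.discard(rev)
            for child in children.get(rev, ()):
                if child not in seen:
                    seen.add(child)
                    queue.append(child)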

#require no-windows

  $ . "$TESTDIR/remotefilelog-library.sh"

  $ hg init master
  $ cd master
  $ cat >> .hg/hgrc <<EOF
  > [remotefilelog]
  > server=True
  > EOF
  $ echo x > x
  $ echo y > y
  $ hg commit -qAm x
  $ hg serve -p $HGPORT -d --pid-file=../hg1.pid -E ../error.log -A ../access.log

Build a query string for later use:
  $ GET=`hg debugdata -m 0 | $PYTHON -c \
  > 'import sys ; print([("?cmd=x_rfl_getfile&file=%s&node=%s" % tuple(s.split("\0"))) for s in sys.stdin.read().splitlines()][0])'`
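For reference, each line of the `hg debugdata -m 0` output has the form
"path\0hexnode", so the one-liner above turns the first manifest entry into
a getfile query string. A standalone illustration of that transformation
(the filenode is the one that shows up in the access log below):

    # Illustration of what the snippet above computes (not part of the test).
    line = "x" + "\x00" + "1406e74118627694268417491f018a4a883152f0"
    path, node = line.split("\x00")
    print("?cmd=x_rfl_getfile&file=%s&node=%s" % (path, node))
    # ?cmd=x_rfl_getfile&file=x&node=1406e74118627694268417491f018a4a883152f0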

  $ cd ..
  $ cat hg1.pid >> $DAEMON_PIDS

  $ hgcloneshallow http://localhost:$HGPORT/ shallow -q
  2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)

  $ grep getfile access.log
  * "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=x_rfl_getfile+*node%3D1406e74118627694268417491f018a4a883152f0* (glob)

Clear filenode cache so we can test fetching with a modified batch size
  $ rm -r $TESTTMP/hgcache
Now do a fetch with a large batch size so we're sure it works
  $ hgcloneshallow http://localhost:$HGPORT/ shallow-large-batch \
  >    --config remotefilelog.batchsize=1000 -q
  2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
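A sketch of what the batch size bounds (an illustration under the assumption
that remotefilelog.batchsize caps how many file requests go into one wire
batch; this is not remotefilelog's actual code):

    # Hypothetical grouping of pending file requests into wire batches of
    # at most `batchsize` entries each.
    def batches(requests, batchsize):
        for i in range(0, len(requests), batchsize):
            yield requests[i:i + batchsize]

    # With batchsize=1000 the two files of this repo fit in a single batch:
    assert list(batches(["x", "y"], 1000)) == [["x", "y"]]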

The 'remotefilelog' capability should *not* be exported over http(s),
as the getfile method it offers doesn't work with http.
  $ get-with-headers.py localhost:$HGPORT '?cmd=capabilities' | grep lookup | identifyrflcaps
  x_rfl_getfile
  x_rfl_getflogheads

  $ get-with-headers.py localhost:$HGPORT '?cmd=hello' | grep lookup | identifyrflcaps
  x_rfl_getfile
  x_rfl_getflogheads

  $ get-with-headers.py localhost:$HGPORT '?cmd=this-command-does-not-exist' | head -n 1
  400 no such method: this-command-does-not-exist
  $ get-with-headers.py localhost:$HGPORT '?cmd=x_rfl_getfiles' | head -n 1
  400 no such method: x_rfl_getfiles

Verify that serving from a shallow clone doesn't allow remotefilelog
fetches. This also serves to test the error handling of our batchable
getfile RPC.

  $ cd shallow
  $ hg serve -p $HGPORT1 -d --pid-file=../hg2.pid -E ../error2.log
  $ cd ..
  $ cat hg2.pid >> $DAEMON_PIDS

This GET should work, because this server is serving master, which is
a full clone.

  $ get-with-headers.py localhost:$HGPORT "$GET"
  200 Script output follows
  
  0\x00x\x9c3b\xa8\xe0\x12a{\xee(\x91T6E\xadE\xdcS\x9e\xb1\xcb\xab\xc30\xe8\x03\x03\x91 \xe4\xc6\xfb\x99J,\x17\x0c\x9f-\xcb\xfcR7c\xf3c\x97r\xbb\x10\x06\x00\x96m\x121 (no-eol) (esc)

This GET should fail using the in-band signalling mechanism, because
it's not a full clone. Note that it's also plausible for servers to
refuse to serve file contents for other reasons, like the file
contents not being visible to the current user.

  $ get-with-headers.py localhost:$HGPORT1 "$GET"
  200 Script output follows
  
  1\x00cannot fetch remote files from shallow repo (no-eol) (esc)
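Both responses above share the same in-band framing: the body is
"<code>\0<payload>", where code "0" precedes the zlib-compressed file blob
(the "x\x9c" bytes in the success case are a zlib header) and code "1"
precedes an error message. A minimal client-side sketch of that framing,
assuming nothing beyond what the two responses show:

    import zlib

    # Minimal sketch of the in-band signalling seen in the two GETs above:
    # b"0\0" + zlib-compressed blob on success, b"1\0" + error text on failure.
    def parse_getfile_response(body):
        code, payload = body.split(b"\x00", 1)
        if code == b"0":
            return zlib.decompress(payload)
        raise Exception(payload.decode("utf-8", "replace"))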

Clones should work with httppostargs turned on

  $ cd master
  $ hg --config experimental.httppostargs=1 serve -p $HGPORT2 -d --pid-file=../hg3.pid -E ../error3.log

  $ cd ..
  $ cat hg3.pid >> $DAEMON_PIDS
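For context, a hypothetical sketch (not the actual client code) of the
default argument encoding that httppostargs replaces: without it, hgweb
command arguments travel in numbered x-hgarg-N request headers, as in the
access-log line checked earlier; with experimental.httppostargs=1 the client
sends them in the POST body instead, sidestepping header size limits.

    # Hypothetical picture of the header-side encoding: an urlencoded
    # argument string split across numbered x-hgarg-N headers of at most
    # `limit` bytes each (the limit value here is made up).
    def header_args(args, limit=1024):
        return {
            "x-hgarg-%d" % (i + 1): args[i * limit:(i + 1) * limit]
            for i in range((len(args) + limit - 1) // limit)
        }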

Clear filenode cache so the clone over httppostargs actually fetches files
  $ rm -r $TESTTMP/hgcache

  $ hgcloneshallow http://localhost:$HGPORT2/ shallow-postargs -q
  2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)

All error logs should be empty:
  $ cat error.log
  $ cat error2.log
  $ cat error3.log