view tests/test-narrow-clone-non-narrow-server.t @ 37631:2f626233859b

wireproto: implement batching on peer executor interface

This is a bit more complicated than non-batch requests because we need to
buffer sends until the last request arrives *and* we need to support
resolving futures as data arrives from the remote.

In a classical concurrent.futures executor model, the future "starts" as
soon as it is submitted. However, we have nothing to start until the last
command is submitted. If we did nothing, calling result() would deadlock,
since the future hasn't "started." So in the case where we queue the
command, we return a special future type whose result() will trigger
sendcommands(). This eliminates the deadlock potential. It also serves as a
check against callers who may be calling result() prematurely, as it will
prevent any subsequent callcommands() from working. This behavior is
slightly annoying and a bit restrictive. But it's the world that
half-duplex connections force on us.

In order to support streaming responses, we were previously using a
generator. But with a futures-based API, we're using futures and not
generators. So in order to get streaming, we need a background thread to
read data from the server. The approach taken in this patch is to leverage
the ThreadPoolExecutor from concurrent.futures for managing a background
thread. We create an executor and future that resolves when all response
data is processed (or an error occurs). When exiting the context manager,
we wait on that background reading before returning.

I was hoping we could manually spin up a threading.Thread and this would be
simple. But I ran into a few deadlocks when implementing. After looking at
the source code to concurrent.futures, I figured it would just be easier to
use a ThreadPoolExecutor than implement all the code needed to manually
manage a thread.

To prove this works, a use of the batch API in discovery has been updated.

Differential Revision: https://phab.mercurial-scm.org/D3269
author Gregory Szorc <gregory.szorc@gmail.com>
date Fri, 13 Apr 2018 11:02:34 -0700
parents f4c7dc24e889
children afe624d78d43
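
The commit message above describes two cooperating pieces: a special future
for queued commands whose result() triggers sendcommands(), and a
ThreadPoolExecutor-managed background thread that resolves futures as
response data arrives. The sketch below is a minimal, self-contained
illustration of that pattern only; the names (commandexecutor,
queuedcommandfuture, callcommand, roundtrip) are hypothetical stand-ins,
not the actual Mercurial wireproto code.

  import concurrent.futures as futures


  class queuedcommandfuture(futures.Future):
      """Future for a command that is queued but not yet sent.

      result() triggers sendcommands() on the owning executor, so a caller
      cannot deadlock waiting on a request that was never put on the wire.
      """

      def result(self, timeout=None):
          if self.done():
              return futures.Future.result(self, timeout)

          self._executor.sendcommands()
          # sendcommands() rewrote __class__ to a plain Future, so this call
          # now blocks normally until the reader thread resolves us.
          return self.result(timeout)


  class commandexecutor:
      """Toy executor that batches commands over a half-duplex connection."""

      def __init__(self, roundtrip):
          # roundtrip(command, args) stands in for the real wire I/O.
          self._roundtrip = roundtrip
          self._calls = []
          self._sent = False
          self._readerpool = futures.ThreadPoolExecutor(max_workers=1)
          self._readfuture = None

      def __enter__(self):
          return self

      def __exit__(self, *exc):
          # Flush anything still queued, then wait on the background read
          # before returning, as the commit message describes.
          self.sendcommands()
          if self._readfuture is not None:
              self._readfuture.result()
          self._readerpool.shutdown()

      def callcommand(self, command, args):
          if self._sent:
              raise RuntimeError('sendcommands() already called')
          f = queuedcommandfuture()
          f._executor = self
          self._calls.append((command, args, f))
          return f

      def sendcommands(self):
          if self._sent:
              return
          self._sent = True
          # Queued futures become ordinary futures once the request is sent.
          for _command, _args, f in self._calls:
              f.__class__ = futures.Future
          # Resolve futures from a background thread as responses arrive.
          self._readfuture = self._readerpool.submit(self._readresponses)

      def _readresponses(self):
          for command, args, f in self._calls:
              f.set_result(self._roundtrip(command, args))


  # Both commands are buffered; the first result() call (or leaving the
  # context manager) triggers the actual send.
  with commandexecutor(lambda cmd, args: (cmd, args)) as e:
      f1 = e.callcommand('heads', {})
      f2 = e.callcommand('known', {'nodes': []})
  print(f1.result(), f2.result())

Rewriting __class__ in sendcommands() is what turns result() into an
ordinary blocking wait once the request is on the wire; before that point,
result() is what forces the send, which is also why callcommand() must
reject anything submitted after sendcommands() has run.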

Test attempting a narrow clone against a server that doesn't support narrowhg.

  $ . "$TESTDIR/narrow-library.sh"

  $ hg init master
  $ cd master

  $ for x in `$TESTDIR/seq.py 10`; do
  >   echo $x > "f$x"
  >   hg add "f$x"
  >   hg commit -m "Add $x"
  > done

  $ hg serve -a localhost -p $HGPORT1 --config extensions.narrow=! -d \
  >    --pid-file=hg.pid
  $ cat hg.pid >> "$DAEMON_PIDS"
  $ hg serve -a localhost -p $HGPORT2 -d --pid-file=hg.pid
  $ cat hg.pid >> "$DAEMON_PIDS"

Verify that narrow is advertised in the bundle2 capabilities:
  $ echo hello | hg -R . serve --stdio | \
  >   $PYTHON -c "from __future__ import print_function; import sys, urllib; print(urllib.unquote_plus(list(sys.stdin)[1]))" | grep narrow
  narrow=v0

  $ cd ..

  $ hg clone --narrow --include f1 http://localhost:$HGPORT1/ narrowclone
  requesting all changes
  abort: server doesn't support narrow clones
  [255]

Make a narrow clone (via HGPORT2), then try to narrow and widen it (against
HGPORT1) to show that narrowing succeeds even against a non-narrow server,
while widening fails gracefully:
  $ hg clone -r 0 --narrow --include f1 http://localhost:$HGPORT2/ narrowclone
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets * (glob)
  updating to branch default
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ cd narrowclone
  $ hg tracked --addexclude f2 http://localhost:$HGPORT1/
  comparing with http://localhost:$HGPORT1/
  searching for changes
  looking for local changes to affected paths
  $ hg tracked --addinclude f1 http://localhost:$HGPORT1/
  comparing with http://localhost:$HGPORT1/
  searching for changes
  no changes found
  abort: server doesn't support narrow clones
  [255]