view tests/test-serve.t @ 37631:2f626233859b

wireproto: implement batching on peer executor interface

This is a bit more complicated than non-batch requests because we need to buffer sends until the last request arrives *and* we need to support resolving futures as data arrives from the remote.

In a classical concurrent.futures executor model, the future "starts" as soon as it is submitted. However, we have nothing to start until the last command is submitted. If we did nothing, calling result() would deadlock, since the future hasn't "started." So in the case where we queue the command, we return a special future type whose result() will trigger sendcommands(). This eliminates the deadlock potential. It also serves as a check against callers who may be calling result() prematurely, as it will prevent any subsequent callcommands() from working.

This behavior is slightly annoying and a bit restrictive. But it's the world that half-duplex connections force on us.

In order to support streaming responses, we were previously using a generator. But with a futures-based API, we're using futures and not generators. So in order to get streaming, we need a background thread to read data from the server. The approach taken in this patch is to leverage the ThreadPoolExecutor from concurrent.futures for managing a background thread. We create an executor and future that resolves when all response data is processed (or an error occurs). When exiting the context manager, we wait on that background reading before returning.

I was hoping we could manually spin up a threading.Thread and this would be simple. But I ran into a few deadlocks when implementing. After looking at the source code to concurrent.futures, I figured it would just be easier to use a ThreadPoolExecutor than implement all the code needed to manually manage a thread.

To prove this works, a use of the batch API in discovery has been updated.

Differential Revision: https://phab.mercurial-scm.org/D3269
author Gregory Szorc <gregory.szorc@gmail.com>
date Fri, 13 Apr 2018 11:02:34 -0700
parents 5e92ba77793c
children 7de7bd407251
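As a quick illustration of the pattern the description above refers to, here is a minimal sketch of how a caller drives the batched executor. The peer object, command names, and arguments are placeholders; only the commandexecutor()/callcommand()/result() shape follows the interface this series describes.

    def fetch_heads_and_known(peer, nodes):
        with peer.commandexecutor() as e:
            # Commands are queued, not sent: on a half-duplex connection
            # nothing goes over the wire until the last command has been
            # submitted (or result() is called early, which triggers
            # sendcommands() and blocks further callcommand() calls).
            fheads = e.callcommand('heads', {})
            fknown = e.callcommand('known', {'nodes': nodes})
        # Leaving the context manager waits for the background reader
        # (a concurrent.futures ThreadPoolExecutor worker) to finish
        # processing responses, so the futures are resolved here.
        return fheads.result(), fknown.result()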

#require serve

  $ hgserve()
  > {
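  >    # Start hg serve in the background with verbose output, normalize
  >    # the port numbers and hostname in that output, record the pid,
  >    # dump any errors, and kill the daemon so each caller gets a
  >    # clean run.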
  >    hg serve -a localhost -d --pid-file=hg.pid -E errors.log -v $@ \
  >        | sed -e "s/:$HGPORT1\\([^0-9]\\)/:HGPORT1\1/g" \
  >              -e "s/:$HGPORT2\\([^0-9]\\)/:HGPORT2\1/g" \
  >              -e 's/http:\/\/[^/]*\//http:\/\/localhost\//'
  >    cat hg.pid >> "$DAEMON_PIDS"
  >    echo % errors
  >    cat errors.log
  >    killdaemons.py hg.pid
  > }

  $ hg init test
  $ cd test
  $ echo '[web]' > .hg/hgrc
  $ echo 'accesslog = access.log' >> .hg/hgrc
  $ echo "port = $HGPORT1" >> .hg/hgrc

Without -v

  $ hg serve -a localhost -p $HGPORT -d --pid-file=hg.pid -E errors.log
  $ cat hg.pid >> "$DAEMON_PIDS"
  $ if [ -f access.log ]; then
  >     echo 'access log created - .hg/hgrc respected'
  > fi
  access log created - .hg/hgrc respected

errors

  $ cat errors.log

With -v

  $ hgserve
  listening at http://localhost/ (bound to *$LOCALIP*:HGPORT1) (glob) (?)
  % errors

With -v and -p HGPORT2

  $ hgserve -p "$HGPORT2"
  listening at http://localhost/ (bound to *$LOCALIP*:HGPORT2) (glob) (?)
  % errors

With -v and -p daytime (should fail because low port)

#if no-root no-windows
  $ KILLQUIETLY=Y
  $ hgserve -p daytime
  abort: cannot start server at 'localhost:13': Permission denied
  abort: child process failed to start
  % errors
  $ KILLQUIETLY=N
#endif

With --prefix foo

  $ hgserve --prefix foo
  listening at http://localhost/foo/ (bound to *$LOCALIP*:HGPORT1) (glob) (?)
  % errors

With --prefix /foo

  $ hgserve --prefix /foo
  listening at http://localhost/foo/ (bound to *$LOCALIP*:HGPORT1) (glob) (?)
  % errors

With --prefix foo/

  $ hgserve --prefix foo/
  listening at http://localhost/foo/ (bound to *$LOCALIP*:HGPORT1) (glob) (?)
  % errors

With --prefix /foo/

  $ hgserve --prefix /foo/
  listening at http://localhost/foo/ (bound to *$LOCALIP*:HGPORT1) (glob) (?)
  % errors

  $ cd ..