This test doesn't yet work because of the way fsmonitor is integrated with the
test runner, so it is skipped unconditionally (exit code 80 is how a test tells
the Mercurial test runner that it was skipped)

  $ exit 80

Test sparse interaction with other extensions

  $ hg init myrepo
  $ cd myrepo
  $ cat > .hg/hgrc <<EOF
  > [extensions]
  > sparse=
  > strip=
  > EOF
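
As a sanity check (an illustrative addition, not part of the original test;
the exact output assumes both bundled extensions are enabled with empty
values, as written above), the enabled extensions can be read back:

  $ hg config extensions
  extensions.sparse=
  extensions.strip=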

Test fsmonitor integration (if available)
TODO: make a fully isolated integration test à la https://github.com/facebook/watchman/blob/master/tests/integration/WatchmanInstance.py
(this test currently uses the system-wide watchman instance)

  $ touch .watchmanconfig
  $ echo "ignoredir1/" >> .hgignore
  $ hg commit -Am ignoredir1
  adding .hgignore
  $ echo "ignoredir2/" >> .hgignore
  $ hg commit -m ignoredir2
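
The empty .watchmanconfig above is all watchman needs; its presence marks the
repository as a project root. A non-empty config (purely illustrative, not
used by this test) could tune watchman, e.g. to skip the .hg directory:

  $ cat <<EOF
  > {"ignore_dirs": [".hg"]}
  > EOF
  {"ignore_dirs": [".hg"]}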

  $ hg sparse --reset
  $ hg sparse -I ignoredir1 -I ignoredir2 -I dir1

  $ mkdir ignoredir1 ignoredir2 dir1
  $ touch ignoredir1/file ignoredir2/file dir1/file

Run status twice to compensate for a quirk in fsmonitor: the second time it
runs, it will check ignored files regardless of previous state (ask @sid0)
  $ hg status --config extensions.fsmonitor=
  ? dir1/file
  $ hg status --config extensions.fsmonitor=
  ? dir1/file
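
For comparison (an illustrative addition, not part of the original test),
plain status without fsmonitor reports the same result, which is what the
second fsmonitor run converges to:

  $ hg status
  ? dir1/file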

Test that fsmonitor's ignore-file hash check picks up changes to .hgignore

  $ hg up -q ".^"
  $ hg status --config extensions.fsmonitor=
  ? dir1/file
  ? ignoredir2/file
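
Going back to tip restores the previous .hgignore, so both directories should
be ignored again (an illustrative extension of the test; the expected output
mirrors the earlier status runs at this revision):

  $ hg up -q tip
  $ hg status --config extensions.fsmonitor=
  ? dir1/file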