tests/test-narrow-clone-stream.t
author Manuel Jacob <me@manueljacob.de>
Sun, 22 May 2022 03:50:34 +0200
changeset 49269 395f28064826
parent 48669 7ee07e1a25c0
child 49826 c84844cd523a
permissions -rw-r--r--
worker: avoid potential partial write of pickled data

Previously, the code wrote the pickled data using os.write(). However, os.write() can write fewer bytes than it is passed. To trigger the problem, the pickled data had to be larger than 2147479552 bytes on my system.

Instead, open a file object and pass it to pickle.dump(). This also has the advantage that it doesn't buffer the whole pickled data in memory.

Note that the opened file must be buffered, because pickle doesn't support unbuffered streams: an unbuffered stream's write() method might write fewer bytes than passed to it (like os.write()), while pickle.dump() relies on all bytes being written (see https://github.com/python/cpython/issues/93050).

A side effect of using a file object and a with statement is that wfd is now closed explicitly, whereas before it was apparently only closed implicitly at process exit.
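A minimal sketch of the pattern described above, not the actual worker.py change; the helper name send_result and the pipe round-trip are hypothetical. The write end of the descriptor is wrapped in a default-buffered binary file object and pickle.dump() writes into it, instead of a single os.write() of pickle.dumps():

  import os
  import pickle

  def send_result(wfd, obj):
      # os.fdopen() returns a buffered binary file object by default
      # (pickle requires a buffered stream), and pickle.dump() writes into
      # it instead of building one large bytes object for a single
      # os.write() that could be cut short. The with statement also closes
      # wfd explicitly.
      with os.fdopen(wfd, 'wb') as wf:
          pickle.dump(obj, wf)

  # Round-trip through a pipe with a small payload:
  rfd, wfd = os.pipe()
  send_result(wfd, {'status': 'ok'})
  with os.fdopen(rfd, 'rb') as rf:
      print(pickle.load(rf))  # -> {'status': 'ok'}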

#testcases tree flat-fncache flat-nofncache

Tests narrow stream clones

  $ . "$TESTDIR/narrow-library.sh"

#if tree
  $ cat << EOF >> $HGRCPATH
  > [experimental]
  > treemanifest = 1
  > EOF
#endif

#if flat-nofncache
  $ cat << EOF >> $HGRCPATH
  > [format]
  > usefncache = 0
  > EOF
#endif

Server setup

  $ hg init master
  $ cd master
  $ mkdir dir
  $ mkdir dir/src
  $ cd dir/src
  $ for x in `$TESTDIR/seq.py 20`; do echo $x > "F$x"; hg add "F$x"; hg commit -m "Commit src $x"; done

  $ cd ..
  $ mkdir tests
  $ cd tests
  $ for x in `$TESTDIR/seq.py 20`; do echo $x > "F$x"; hg add "F$x"; hg commit -m "Commit src $x"; done
  $ cd ../../..

Trying to stream clone when the server does not support it

  $ hg clone --narrow ssh://user@dummy/master narrow --noupdate --include "dir/src/F10" --stream
  streaming all changes
  remote: abort: server does not support narrow stream clones
  abort: pull failed on remote
  [100]

Enable stream clone on the server

  $ echo "[experimental]" >> master/.hg/hgrc
  $ echo "server.stream-narrow-clones=True" >> master/.hg/hgrc

Cloning a specific file when stream clone is supported

  $ hg clone --narrow ssh://user@dummy/master narrow --noupdate --include "dir/src/F10" --stream
  streaming all changes
  * files to transfer, * KB of data (glob)
  transferred * KB in * seconds (* */sec) (glob)

  $ cd narrow
  $ ls -A
  .hg
  $ hg tracked
  I path:dir/src/F10

Making sure we have the correct set of requirements

  $ hg debugrequires
  dotencode (tree !)
  dotencode (flat-fncache !)
  dirstate-v2 (dirstate-v2 !)
  fncache (tree !)
  fncache (flat-fncache !)
  generaldelta
  narrowhg-experimental
  persistent-nodemap (rust !)
  revlog-compression-zstd (zstd !)
  revlogv1
  share-safe
  sparserevlog
  store
  treemanifest (tree !)

Making sure the store has the required files

  $ ls .hg/store/
  00changelog.i
  00manifest.i
  data
  fncache (tree !)
  fncache (flat-fncache !)
  meta (tree !)
  narrowspec
  requires
  undo
  undo.backupfiles
  undo.narrowspec
  undo.phaseroots

Checking that the repository has all the required data and is not broken

  $ hg verify
  checking changesets
  checking manifests
  checking directory manifests (tree !)
  crosschecking files in changesets and manifests
  checking files
  checked 40 changesets with 1 changes to 1 files