tests/test-largefiles-small-disk.t

Test how the largefiles extension aborts when the disk runs full

  $ cat > criple.py <<EOF
  > import os, errno, shutil
  > from mercurial import util
  > #
  > # this makes the original largefiles code abort:
  > def copyfileobj(fsrc, fdst, length=16*1024):
  >     fdst.write(fsrc.read(4))
  >     raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))
  > shutil.copyfileobj = copyfileobj
  > #
  > # this makes the rewritten code abort:
  > def filechunkiter(f, size=131072, limit=None):
  >     yield f.read(4)
  >     raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))
  > util.filechunkiter = filechunkiter
  > #
  > def oslink(src, dest):
  >     raise OSError("no hardlinks, try copying instead")
  > util.oslink = oslink
  > EOF
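
The filechunkiter replaced above normally just streams a file in fixed-size
chunks, so yielding four bytes and then raising ENOSPC simulates the disk
filling up part way through a copy. For reference, a minimal sketch of the
usual behaviour (the real implementation lives in mercurial.util and may
differ in detail):

  def filechunkiter(f, size=131072, limit=None):
      # Yield chunks of up to `size` bytes from `f` until EOF, or until
      # `limit` bytes have been returned when a limit is given.
      while True:
          if limit is not None:
              size = min(size, limit)
          chunk = f.read(size)
          if not chunk:
              break
          if limit is not None:
              limit -= len(chunk)
          yield chunk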

  $ echo "[extensions]" >> $HGRCPATH
  $ echo "largefiles =" >> $HGRCPATH

  $ hg init alice
  $ cd alice
  $ echo "this is a very big file" > big
  $ hg add --large big
  $ hg commit --config extensions.criple=$TESTTMP/criple.py -m big
  abort: No space left on device
  [255]

The largefile is not created in .hg/largefiles:

  $ ls .hg/largefiles
  dirstate

The user cache is not even created:

  >>> import os; os.path.exists("$HOME/.cache/largefiles/")
  False

Retry the commit with space on the device (without loading the criple extension):

  $ hg commit -m big

Now make a clone with a full disk, and make sure the lfutil.link function
falls back to making copies instead of hardlinks:
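
The fallback being exercised here is the usual "hardlink if possible,
otherwise copy" pattern; with util.oslink crippled, the copy path runs and
then hits the crippled chunk iterator. A minimal sketch of that pattern
(the function name and details are illustrative, not the exact lfutil.link
code):

  import os
  from mercurial import util

  def link_or_copy(src, dest):
      # Prefer a hardlink; if the platform or filesystem refuses (or our
      # crippled util.oslink raises), fall back to a chunked copy.
      try:
          util.oslink(src, dest)
      except OSError:
          with open(src, 'rb') as srcf, open(dest, 'wb') as dstf:
              for chunk in util.filechunkiter(srcf):
                  dstf.write(chunk)
          os.chmod(dest, os.stat(src).st_mode)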

  $ cd ..
  $ hg --config extensions.criple=$TESTTMP/criple.py clone --pull alice bob
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  updating to branch default
  getting changed largefiles
  abort: No space left on device
  [255]

The largefile is not created in .hg/largefiles:

  $ ls bob/.hg/largefiles
  dirstate