tests/test-largefiles-small-disk.t

Test how largefiles aborts when the disk runs full

  $ cat > criple.py <<EOF
  > from __future__ import absolute_import
  > import errno
  > import os
  > import shutil
  > from mercurial import util
  > #
  > # this makes the original largefiles code abort:
  > _origcopyfileobj = shutil.copyfileobj
  > def copyfileobj(fsrc, fdst, length=16*1024):
  >     # allow journal files (used by transaction) to be written
  >     if b'journal.' in fdst.name:
  >         return _origcopyfileobj(fsrc, fdst, length)
  >     fdst.write(fsrc.read(4))
  >     raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))
  > shutil.copyfileobj = copyfileobj
  > #
  > # this makes the rewritten code abort:
  > def filechunkiter(f, size=131072, limit=None):
  >     yield f.read(4)
  >     raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))
  > util.filechunkiter = filechunkiter
  > #
  > # this makes lfutil.link fall back to copying:
  > def oslink(src, dest):
  >     raise OSError("no hardlinks, try copying instead")
  > util.oslink = oslink
  > EOF
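
The errno injected above is what produces the exact abort message checked
below; as a standalone illustration (not part of the original test, and
assuming a POSIX-style strerror table):

  >>> import errno, os
  >>> os.strerror(errno.ENOSPC)
  'No space left on device'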

  $ echo "[extensions]" >> $HGRCPATH
  $ echo "largefiles =" >> $HGRCPATH

  $ hg init alice
  $ cd alice
  $ echo "this is a very big file" > big
  $ hg add --large big
  $ hg commit --config extensions.criple=$TESTTMP/criple.py -m big
  abort: No space left on device
  [255]

The largefile is not created in .hg/largefiles:

  $ ls .hg/largefiles
  dirstate
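
Had the copy succeeded, the blob would appear here under the hex sha1 of
its contents, which is how largefiles names entries in its store (a
standalone sketch of that naming scheme; `storename` is an illustrative
variable, not part of the test):

  >>> import hashlib
  >>> storename = hashlib.sha1(b"this is a very big file\n").hexdigest()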

The user cache is not even created:

  >>> import os; os.path.exists("$HOME/.cache/largefiles/")
  False
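
For reference, largefiles derives that user-wide cache location roughly as
follows on POSIX (a sketch; the real lookup lives in lfutil.usercachepath
and can be overridden with the largefiles.usercache config option):

  >>> import os
  >>> cachedir = os.environ.get("XDG_CACHE_HOME") or os.path.expanduser("~/.cache")
  >>> usercache = os.path.join(cachedir, "largefiles")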

Make the commit with space on the device (criple.py not loaded this time):

  $ hg commit -m big

Now make a clone with a full disk, and make sure the lfutil.link function
makes copies instead of hardlinks (util.oslink is patched above to always fail):

  $ cd ..
  $ hg --config extensions.criple=$TESTTMP/criple.py clone --pull alice bob
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 390cf214e9ac
  updating to branch default
  getting changed largefiles
  abort: No space left on device
  [255]

The largefile is not created in .hg/largefiles:

  $ ls bob/.hg/largefiles
  dirstate
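
For reference, the behaviour exercised by this clone (try a hardlink first,
fall back to copying the data) follows this pattern; a minimal sketch with
illustrative names, not the actual lfutil.link code. Here the patched
util.oslink forced the copy path, and the copy then hit the fake ENOSPC:

  >>> import os, shutil
  >>> def link_or_copy(src, dest):
  ...     try:
  ...         os.link(src, dest)          # cheap hardlink when the filesystem allows it
  ...     except OSError:
  ...         shutil.copyfile(src, dest)  # otherwise copy the bytes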