Test how the largefiles extension aborts when the disk runs full

  $ cat > criple.py <<EOF
  > from __future__ import absolute_import
  > import errno
  > import os
  > import shutil
  > from mercurial import util
  > #
  > # this makes the original largefiles code abort:
  > _origcopyfileobj = shutil.copyfileobj
  > def copyfileobj(fsrc, fdst, length=16 * 1024):
  >     # allow journal files (used by transaction) to be written
  >     if b'journal.' in fdst.name:
  >         return _origcopyfileobj(fsrc, fdst, length)
  >     fdst.write(fsrc.read(4))
  >     raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))
  > shutil.copyfileobj = copyfileobj
  > #
  > # this makes the rewritten code abort:
  > def filechunkiter(f, size=131072, limit=None):
  >     yield f.read(4)
  >     raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))
  > util.filechunkiter = filechunkiter
  > #
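  > # deny hardlinks, so that lfutil.link has to fall back to copying: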
  > def oslink(src, dest):
  >     raise OSError("no hardlinks, try copying instead")
  > util.oslink = oslink
  > EOF

  $ echo "[extensions]" >> $HGRCPATH
  $ echo "largefiles =" >> $HGRCPATH

  $ hg init alice
  $ cd alice
  $ echo "this is a very big file" > big
  $ hg add --large big
  $ hg commit --config extensions.criple=$TESTTMP/criple.py -m big
  abort: No space left on device
  [255]

The largefile is not created in .hg/largefiles:

  $ ls .hg/largefiles
  dirstate

The user cache is not even created:

  >>> import os; os.path.exists("$HOME/.cache/largefiles/")
  False

Make the commit again, this time with space available on the device:

  $ hg commit -m big

Now make a clone with a full disk, and make sure that the lfutil.link function
falls back to copying instead of creating hardlinks:
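
To make the intent of the check clearer, here is a rough sketch of the
hardlink-then-copy fallback this exercises. It is only an approximation of what
lfutil.link does, not the real implementation; the file name and function body
are illustrative and the file is never imported by the test:

  $ cat > link-sketch.py <<EOF
  > # Approximate shape of the hardlink-with-copy-fallback pattern that
  > # lfutil.link is expected to follow (names here are illustrative only):
  > from mercurial import util
  > def link(src, dest):
  >     try:
  >         util.oslink(src, dest)  # hardlink when the platform allows it
  >     except OSError:
  >         # hardlinking failed (the crippled oslink above always raises),
  >         # so stream the file contents into a copy instead
  >         with open(src, 'rb') as srcf, open(dest, 'wb') as destf:
  >             for chunk in util.filechunkiter(srcf):
  >                 destf.write(chunk)
  > EOF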

  $ cd ..
  $ hg --config extensions.criple=$TESTTMP/criple.py clone --pull alice bob
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 390cf214e9ac
  updating to branch default
  getting changed largefiles
  abort: No space left on device
  [255]

The largefile is not created in .hg/largefiles:

  $ ls bob/.hg/largefiles
  dirstate