
Test how largefiles aborts when the disk runs full

  $ cat > criple.py <<EOF
  > import os, errno, shutil
  > from mercurial import util
  > #
  > # this makes the original largefiles code abort:
  > def copyfileobj(fsrc, fdst, length=16*1024):
  >     fdst.write(fsrc.read(4))
  >     raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))
  > shutil.copyfileobj = copyfileobj
  > #
  > # this makes the rewritten code abort:
  > def filechunkiter(f, size=65536, limit=None):
  >     yield f.read(4)
  >     raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))
  > util.filechunkiter = filechunkiter
  > #
  > # this makes hardlinking fail so that a copy is attempted instead:
  > def oslink(src, dest):
  >     raise OSError("no hardlinks, try copying instead")
  > util.oslink = oslink
  > EOF
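
For reference, these monkeypatches bite because largefiles streams large files
in chunks rather than reading them whole. A minimal sketch of such a chunked
copy, not executed by this test and only an approximation of what lfutil does,
shows how the simulated ENOSPC surfaces and why no partial file may be left
behind:

  import errno
  import os

  def copy_in_chunks(srcpath, dstpath, chunksize=65536):
      # hypothetical helper, for illustration only
      try:
          with open(srcpath, 'rb') as src, open(dstpath, 'wb') as dst:
              while True:
                  chunk = src.read(chunksize)
                  if not chunk:
                      break
                  dst.write(chunk)  # fails with ENOSPC when the disk is full
      except (IOError, OSError) as err:
          if err.errno == errno.ENOSPC and os.path.exists(dstpath):
              os.unlink(dstpath)  # leave no partial file, as the test checks
          raise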

  $ echo "[extensions]" >> $HGRCPATH
  $ echo "largefiles =" >> $HGRCPATH

  $ hg init alice
  $ cd alice
  $ echo "this is a very big file" > big
  $ hg add --large big
  $ hg commit --config extensions.criple=$TESTTMP/criple.py -m big
  abort: No space left on device
  [255]

The largefile is not created in .hg/largefiles:

  $ ls .hg/largefiles
  dirstate
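
For context, largefiles keys its local store by content hash: a successfully
committed largefile would show up in .hg/largefiles as a 40-character SHA-1
hex name next to 'dirstate'. A rough sketch of that path derivation, using a
hypothetical helper name:

  import hashlib
  import os

  def storepath(repo_root, content):
      # largefiles blobs are named after the SHA-1 hex digest of their content
      digest = hashlib.sha1(content).hexdigest()
      return os.path.join(repo_root, '.hg', 'largefiles', digest)

  # e.g. storepath('alice', b'this is a very big file\n') would name the blob
  # that the aborted commit never got to write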

The user cache is not even created:

  >>> import os; os.path.exists("$HOME/.cache/largefiles/")
  False
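
The path checked above matches the usual default for the largefiles user cache
on Linux; assuming that default ($XDG_CACHE_HOME/largefiles, falling back to
~/.cache/largefiles), it could be derived like this:

  import os

  def default_usercache():
      # assumed Linux default for the largefiles user cache; other platforms
      # use different locations
      base = os.environ.get('XDG_CACHE_HOME') or os.path.expanduser('~/.cache')
      return os.path.join(base, 'largefiles')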

Make the commit again, this time with space on the device:

  $ hg commit -m big

Now make a clone with a full disk, and make sure the lfutil.link function
makes copies instead of hardlinks (a sketch of that fallback follows the
clone output below):

  $ cd ..
  $ hg --config extensions.criple=$TESTTMP/criple.py clone --pull alice bob
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  updating to branch default
  getting changed largefiles
  abort: No space left on device
  [255]
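
The abort happens in the copy fallback: criple.py makes util.oslink raise, so
lfutil.link cannot hardlink the largefile into the clone and has to copy it,
and that copy then hits the simulated ENOSPC. A minimal sketch of the
link-or-copy pattern being exercised (an approximation, not the actual lfutil
code):

  import os
  import shutil

  def link_or_copy(src, dst):
      try:
          os.link(src, dst)       # cheap hardlink when the filesystem allows it
      except OSError:
          shutil.copy2(src, dst)  # fall back to a real copy (may hit ENOSPC)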

The largefile is not created in the clone's .hg/largefiles either:

  $ ls bob/.hg/largefiles
  dirstate