tests/test-largefiles-small-disk.t @ 41304:76873548b051 (stable)
partialdiscovery: do not trigger `undecided`-related computation sooner than necessary
Changeset 1d30be90c moved the update of the `undecided` set into the
`partialdiscovery` object in order to clarify the API.
In 1d30be90c the update to the `undecided` set was unconditional, and the
first access to the `self.undecided` property triggered the initial
computation of the set of undecided revisions. As a result, the set was
computed much earlier, at a time when less information was available, and
was then immediately updated to remove common revisions.
To fix this regression, we skip the `undecided`-related logic in
`addcommons` when the `undecided` set has not been computed yet. Code that
actually needs the `undecided` set will trigger its computation later.
The change has no effect on semantics because the initial computation of
the `undecided` set takes all known `common` revisions into account.
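
As an illustration of that laziness, here is a minimal, self-contained
sketch of the pattern. The graph model, the `_ancestors` helper and the
`_commonbases`/`_undecided` attributes are simplified stand-ins for this
example, not Mercurial's actual discovery internals; only the shape of
`addcommons` and the `undecided` property mirrors the idea of the fix.

  class partialdiscovery(object):
      """sketch: track undecided revisions, computing them lazily"""

      def __init__(self, parents, targetheads):
          self._parents = parents          # rev -> parent revs (dict)
          self._targetheads = set(targetheads)
          self._commonbases = set()
          self._undecided = None           # not computed yet

      def _ancestors(self, revs):
          """return `revs` plus all of their ancestors (naive walk)"""
          seen = set()
          stack = list(revs)
          while stack:
              rev = stack.pop()
              if rev not in seen:
                  seen.add(rev)
                  stack.extend(self._parents.get(rev, ()))
          return seen

      def addcommons(self, commons):
          """register revisions known to be common"""
          self._commonbases.update(commons)
          if self._undecided is None:
              # nothing computed yet: the future initial computation of
              # `undecided` will take these commons into account anyway
              return
          self._undecided -= self._ancestors(commons)

      @property
      def undecided(self):
          if self._undecided is None:
              # first access: ancestors of the target heads, minus
              # everything already known to be common
              self._undecided = (self._ancestors(self._targetheads)
                                 - self._ancestors(self._commonbases))
          return self._undecided

With this shape, repeated `addcommons` calls made before anyone reads
`undecided` stay cheap; the expensive ancestor walk happens once, on first
access, when the most information is available.
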
Example timings of `hg debugdiscovery` on a pypy repository missing 10
changesets:

870a89c6909d: 52.3ms (parent of the regression)
1d30be90c9dc: 72.0ms (regression)
5a5f504a7175: 64.8ms (parent of this fix)
this fix: 52.6ms
author:   Boris Feld <boris.feld@octobus.net>
date:     Wed, 23 Jan 2019 18:07:42 -0500
parents:  556984ae0005
children: c70bdd222dcd
Test how largefiles abort in case the disk runs full

  $ cat > criple.py <<EOF
  > from __future__ import absolute_import
  > import errno
  > import os
  > import shutil
  > from mercurial import util
  > #
  > # this makes the original largefiles code abort:
  > _origcopyfileobj = shutil.copyfileobj
  > def copyfileobj(fsrc, fdst, length=16*1024):
  >     # allow journal files (used by transaction) to be written
  >     if b'journal.' in fdst.name:
  >         return _origcopyfileobj(fsrc, fdst, length)
  >     fdst.write(fsrc.read(4))
  >     raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))
  > shutil.copyfileobj = copyfileobj
  > #
  > # this makes the rewritten code abort:
  > def filechunkiter(f, size=131072, limit=None):
  >     yield f.read(4)
  >     raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))
  > util.filechunkiter = filechunkiter
  > #
  > def oslink(src, dest):
  >     raise OSError("no hardlinks, try copying instead")
  > util.oslink = oslink
  > EOF

  $ echo "[extensions]" >> $HGRCPATH
  $ echo "largefiles =" >> $HGRCPATH

  $ hg init alice
  $ cd alice
  $ echo "this is a very big file" > big
  $ hg add --large big
  $ hg commit --config extensions.criple=$TESTTMP/criple.py -m big
  abort: No space left on device
  [255]

The largefile is not created in .hg/largefiles:

  $ ls .hg/largefiles
  dirstate

The user cache is not even created:

  >>> import os; os.path.exists("$HOME/.cache/largefiles/")
  False

Make the commit with space on the device:

  $ hg commit -m big

Now make a clone with a full disk, and make sure lfutil.link function makes
copies instead of hardlinks:

  $ cd ..
  $ hg --config extensions.criple=$TESTTMP/criple.py clone --pull alice bob
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 390cf214e9ac
  updating to branch default
  getting changed largefiles
  abort: No space left on device
  [255]

The largefile is not created in .hg/largefiles:

  $ ls bob/.hg/largefiles
  dirstate
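
The clone above also exercises the link-or-copy fallback mentioned for
lfutil.link: criple.py replaces util.oslink with a function that raises
OSError, so largefiles must fall back to copying. Below is a minimal sketch
of that fallback pattern; the name `linkorcopy` is illustrative only and is
not largefiles' exact helper, whose real code lives in
hgext/largefiles/lfutil.py and differs in detail.

  import os
  import shutil

  def linkorcopy(src, dest):
      """hardlink src to dest, falling back to a plain copy (sketch)"""
      try:
          os.link(src, dest)
      except OSError:
          # e.g. cross-device link, filesystem without hardlink support,
          # or the monkeypatched util.oslink in the test above
          shutil.copyfile(src, dest)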