tests/test-clone-uncompressed.t @ 29917:f32f8bf5dc4c
streamclone: force @filecache properties to be reloaded from file
Before this patch, consumev1() invokes repo.invalidate() after closing
the transaction, to force @filecache properties to be reloaded from
files at the next access, because streamclone writes data into files
directly. But this doesn't work as expected in the case below:

1. at closing the transaction, repo._refreshfilecachestats() refreshes
   the file stat of each @filecache property against the streamclone-ed
   files. This means that the in-memory properties are treated as valid.
2. but streamclone doesn't change the in-memory properties. This means
   that the in-memory properties are actually invalid.
3. repo.invalidate() just forces the file stat of each @filecache
   property to be examined at the first access after it. Such
   examination concludes that reloading from file isn't needed, because
   the file stat was already refreshed at (1).

Therefore, the invalid in-memory cached properties (2) are
unintentionally treated as valid (1).
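To make the failure concrete, here is a minimal, self-contained model of
a stat-validated file cache that reproduces steps (1)-(3). Everything in
it (the FileCache class, its method names, the (mtime, size) fingerprint)
is an illustrative invention, not Mercurial's real @filecache machinery:

    import os

    class FileCache(object):
        """Toy stat-validated cache; a sketch, not Mercurial's code."""

        def __init__(self, path, loader):
            self.path = path
            self.loader = loader
            self.value = None
            self.stat = None  # file stat recorded when value was (re)loaded

        def refresh(self):
            # Step (1): record the current file stat, which declares the
            # in-memory value valid without actually reloading it.
            st = os.stat(self.path)
            self.stat = (st.st_mtime, st.st_size)

        def invalidate(self, clearfilecache=False):
            if clearfilecache:
                # The fix: forget the stat, forcing a reload on next access.
                self.stat = None

        def get(self):
            # Step (3): reload only if the file stat differs from the one
            # recorded by refresh(). After streamclone rewrote the file
            # *and* refresh() ran at transaction close, the stats match,
            # so the stale in-memory value (step 2) is returned as valid.
            st = os.stat(self.path)
            if self.stat != (st.st_mtime, st.st_size):
                self.value = self.loader(self.path)
                self.refresh()
            return self.value

With this model, invalidate() after refresh() is effectively a no-op for
a file that was rewritten behind the cache's back, while
invalidate(clearfilecache=True) makes the next get() reload from disk.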
This patch invokes repo.invalidate() with clearfilecache=True, to force
@filecache properties to be reloaded from file at the next access.
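As a sketch (only the invalidate() call and its new argument come from
this patch; the surrounding consumev1() code is paraphrased from the
description above, not quoted):

    # mercurial/streamclone.py, consumev1() -- abbreviated, hypothetical sketch
    with repo.transaction('clone'):
        ...  # write the streamed file data directly into the store

    # After the transaction closes: drop the cached @filecache values
    # themselves instead of merely re-examining their (already
    # refreshed) file stats at the next access.
    repo.invalidate(clearfilecache=True)  # before this patch: repo.invalidate()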
By the way, it is accidental that repo.invalidate() without
clearfilecache=True seemed to work as expected in the streamclone case
before this patch.

If the transaction is started via a "filtered repo" object,
repo._refreshfilecachestats() tries to refresh the file stat of each
@filecache property on the "filtered repo" object, even though all of
them are stored in the "unfiltered repo" object. In this case,
repo._refreshfilecachestats() unintentionally does nothing, and this
unexpected behavior is what caused @filecache properties to be reloaded
after repo.invalidate().

This is the reason why this patch should be applied before making
_refreshfilecachestats() correctly refresh the file stat of @filecache
properties.
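A stripped-down model of the filtered-view proxying makes the accident
visible (these classes are simplifications, not repoview.py's real
code): attribute reads on a view fall through to the unfiltered repo,
but the cached properties never land in the view's own __dict__, so a
refresh pass keyed on self.__dict__ refreshes nothing:

    class UnfilteredRepo(object):
        def __init__(self):
            # @filecache properties are stored on this object only.
            self._filecache = {'changelog': object()}

    class RepoView(object):
        """Filtered view: delegates attribute reads to the unfiltered repo."""
        def __init__(self, unfiltered):
            self.__dict__['_unfiltered'] = unfiltered

        def __getattr__(self, name):  # invoked only on lookup misses
            return getattr(self.__dict__['_unfiltered'], name)

    repo = UnfilteredRepo()
    view = RepoView(repo)
    assert 'changelog' in view._filecache    # the read delegates upward...
    assert 'changelog' not in view.__dict__  # ...but the view's dict is bare,
    # so a refresh pass that only touches entries also present in
    # self.__dict__ skips them all when run on the view -- the accident
    # that masked the stale-cache bug before this patch.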
author    FUJIWARA Katsunori <foozy@lares.dti.ne.jp>
date      Mon, 12 Sep 2016 03:06:28 +0900
parents   9dc27a334fb1
children  e7a35f18d91f
#require serve

Initialize repository

the status call is to check for issue5130

  $ hg init server
  $ cd server
  $ touch foo
  $ hg -q commit -A -m initial
  >>> for i in range(1024):
  ...     with open(str(i), 'wb') as fh:
  ...         fh.write(str(i))
  $ hg -q commit -A -m 'add a lot of files'
  $ hg st
  $ hg serve -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid >> $DAEMON_PIDS
  $ cd ..

Basic clone

  $ hg clone --uncompressed -U http://localhost:$HGPORT clone1
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*/sec) (glob)
  searching for changes
  no changes found

Clone with background file closing enabled

  $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --uncompressed -U http://localhost:$HGPORT clone-background | grep -v adding
  using http://localhost:$HGPORT/
  sending capabilities command
  sending branchmap command
  streaming all changes
  sending stream_out command
  1027 files to transfer, 96.3 KB of data
  starting 4 threads for background file closing
  transferred 96.3 KB in * seconds (*/sec) (glob)
  query 1; heads
  sending batch command
  searching for changes
  all remote heads known locally
  no changes found
  sending getbundle command
  bundle2-input-bundle: with-transaction
  bundle2-input-part: "listkeys" (params: 1 mandatory) supported
  bundle2-input-part: total payload size 58
  bundle2-input-part: "listkeys" (params: 1 mandatory) supported
  bundle2-input-bundle: 1 parts total
  checking for updated bookmarks

Stream clone while repo is changing:

  $ mkdir changing
  $ cd changing

extension for delaying the server process so we reliably
can modify the repo while cloning

  $ cat > delayer.py <<EOF
  > import time
  > from mercurial import extensions, scmutil
  > def __call__(orig, self, path, *args, **kwargs):
  >     if path == 'data/f1.i':
  >         time.sleep(2)
  >     return orig(self, path, *args, **kwargs)
  > extensions.wrapfunction(scmutil.vfs, '__call__', __call__)
  > EOF

prepare repo with small and big file to cover both code paths in emitrevlogdata

  $ hg init repo
  $ touch repo/f1
  $ $TESTDIR/seq.py 50000 > repo/f2
  $ hg -R repo ci -Aqm "0"
  $ hg -R repo serve -p $HGPORT1 -d --pid-file=hg.pid --config extensions.delayer=delayer.py
  $ cat hg.pid >> $DAEMON_PIDS

clone while modifying the repo between stating file with write lock and
actually serving file content

  $ hg clone -q --uncompressed -U http://localhost:$HGPORT1 clone &
  $ sleep 1
  $ echo >> repo/f1
  $ echo >> repo/f2
  $ hg -R repo ci -m "1"
  $ wait
  $ hg -R clone id
  000000000000