view tests/test-mq-qclone-http.t @ 29917:f32f8bf5dc4c
streamclone: force @filecache properties to be reloaded from file
Before this patch, consumev1() invokes repo.invalidate() after closing
the transaction, to force @filecache properties to be reloaded from
their files at the next access, because streamclone writes data into
the files directly. But this doesn't work as expected in the case
below:

  1. At transaction close, repo._refreshfilecachestats() refreshes the
     file stat of each @filecache property against the streamclone-ed
     files. This means the in-memory properties are treated as valid.

  2. But streamclone doesn't change the in-memory properties. This
     means the in-memory properties are actually invalid.

  3. repo.invalidate() merely forces the file stat of each @filecache
     property to be examined at the first access afterwards. That
     examination concludes that reloading from file isn't needed,
     because the file stat was already refreshed at (1).

Therefore, the invalid in-memory properties (2) are unintentionally
treated as valid (1), as the sketch below illustrates.
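The following minimal Python sketch models that interaction; it is not
Mercurial's real @filecache code, and the names filecacheentry,
refreshstat() and get() are made up purely for illustration:

  import os

  class filecacheentry(object):
      # Stand-in for one stat-guarded @filecache property.
      def __init__(self, path, compute):
          self.path = path
          self.compute = compute        # reloads the property from its file
          self.value = None
          self.stat = None              # stat recorded for self.value

      def refreshstat(self):
          # step (1): at transaction close, record the current on-disk stat
          # without touching the in-memory value
          self.stat = os.stat(self.path)

      def invalidate(self, clearfilecache=False):
          if clearfilecache:
              # drop the entry entirely, so the next access must reload
              self.value = None
              self.stat = None
          # without clearfilecache, reloading is left to the stat check in get()

      def get(self):
          st = os.stat(self.path)
          if self.stat is not None and \
             (st.st_mtime, st.st_size) == (self.stat.st_mtime, self.stat.st_size):
              return self.value         # stat matches: the stale value survives
          self.value = self.compute()   # stat differs: reload from the file
          self.stat = st
          return self.value

In this model, step (1) corresponds to refreshstat(), so a later
invalidate() without clearfilecache leaves get() returning the stale
value, while invalidate(clearfilecache=True) forces compute() to run
again.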
This patch invokes repo.invalidate() with clearfilecache=True, to
force @filecache properties to be reloaded from their files at the
next access.
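For orientation, a rough sketch of where the change lands; only the
repo.invalidate(clearfilecache=True) call reflects the patch itself,
and the rest of the consumev1() body below is a placeholder, not the
upstream source:

  def consumev1(repo, fp, filecount, bytecount):
      with repo.transaction('clone'):
          pass  # write the streamed data straight into the store files
      # The transaction close has already refreshed the recorded file stats,
      # so a bare repo.invalidate() would keep the stale in-memory properties;
      # clearing the file cache forces a reload at the next access.
      repo.invalidate(clearfilecache=True)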
BTW, it is only by accident that repo.invalidate() without
clearfilecache=True seems to work as expected for streamclone before
this patch.

If the transaction is started via the "filtered repo" object,
repo._refreshfilecachestats() tries to refresh the file stat of each
@filecache property on the "filtered repo" object, even though all of
them are stored on the "unfiltered repo" object. In this case,
repo._refreshfilecachestats() unintentionally does nothing, and this
unexpected behavior is what causes @filecache properties to be
reloaded after repo.invalidate().

This is the reason why this patch should be applied before making
_refreshfilecachestats() correctly refresh the file stat of
@filecache properties.
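A small, hypothetical illustration of why such a refresh ends up doing
nothing when it runs on the filtered view; the class and method names
below are invented for the example, and the real repoview delegation
is more involved:

  class unfilteredrepo(object):
      def __init__(self):
          self._filecache = {'changelog': 'entry', 'dirstate': 'entry'}
          self.changelog = 'cached value'       # stored on the unfiltered repo

  class filteredview(object):
      def __init__(self, unfi):
          self.__dict__['_unfi'] = unfi

      def __getattr__(self, name):              # delegate to the unfiltered repo
          return getattr(self._unfi, name)

      def refreshfilecachestats(self):
          refreshed = []
          for k in self._filecache:
              if k not in self.__dict__:        # always true on the view ...
                  continue                      # ... so every entry is skipped
              refreshed.append(k)
          return refreshed

  view = filteredview(unfilteredrepo())
  print(view.refreshfilecachestats())           # [] -- nothing was refreshed

Because nothing is refreshed, the stat check at the next access still
sees the old file stats and reloads the properties, which is what
masked the problem before this patch.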
author   | FUJIWARA Katsunori <foozy@lares.dti.ne.jp>
date     | Mon, 12 Sep 2016 03:06:28 +0900
parents  | 8c14f87bd0ae
children | eb586ed5d8ce
#require killdaemons

hide outer repo
  $ hg init

  $ echo "[extensions]" >> $HGRCPATH
  $ echo "mq=" >> $HGRCPATH
  $ mkdir webdir
  $ cd webdir
  $ hg init a
  $ hg --cwd a qinit -c
  $ echo a > a/a
  $ hg --cwd a ci -A -m a
  adding a
  $ echo b > a/b
  $ hg --cwd a addremove
  adding b
  $ hg --cwd a qnew -f b.patch
  $ hg --cwd a qcommit -m b.patch
  $ hg --cwd a log --template "{desc}\n"
  [mq]: b.patch
  a
  $ hg --cwd a/.hg/patches log --template "{desc}\n"
  b.patch
  $ root=`pwd`
  $ cd ..

test with recursive collection

  $ cat > collections.conf <<EOF
  > [paths]
  > /=$root/**
  > EOF
  $ hg serve -p $HGPORT -d --pid-file=hg.pid --webdir-conf collections.conf \
  >     -A access-paths.log -E error-paths-1.log
  $ cat hg.pid >> $DAEMON_PIDS

  $ get-with-headers.py localhost:$HGPORT '?style=raw'
  200 Script output follows

  /a/
  /a/.hg/patches/

  $ hg qclone http://localhost:$HGPORT/a b
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 3 changes to 3 files
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg --cwd b log --template "{desc}\n"
  a
  $ hg --cwd b qpush -a
  applying b.patch
  now at: b.patch
  $ hg --cwd b log --template "{desc}\n"
  imported patch b.patch
  a

test with normal collection

  $ cat > collections1.conf <<EOF
  > [paths]
  > /=$root/*
  > EOF
  $ hg serve -p $HGPORT1 -d --pid-file=hg.pid --webdir-conf collections1.conf \
  >     -A access-paths.log -E error-paths-1.log
  $ cat hg.pid >> $DAEMON_PIDS

  $ get-with-headers.py localhost:$HGPORT1 '?style=raw'
  200 Script output follows

  /a/
  /a/.hg/patches/

  $ hg qclone http://localhost:$HGPORT1/a c
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 3 changes to 3 files
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg --cwd c log --template "{desc}\n"
  a
  $ hg --cwd c qpush -a
  applying b.patch
  now at: b.patch
  $ hg --cwd c log --template "{desc}\n"
  imported patch b.patch
  a

test with old-style collection

  $ cat > collections2.conf <<EOF
  > [collections]
  > $root=$root
  > EOF
  $ hg serve -p $HGPORT2 -d --pid-file=hg.pid --webdir-conf collections2.conf \
  >     -A access-paths.log -E error-paths-1.log
  $ cat hg.pid >> $DAEMON_PIDS

  $ get-with-headers.py localhost:$HGPORT2 '?style=raw'
  200 Script output follows

  /a/
  /a/.hg/patches/

  $ hg qclone http://localhost:$HGPORT2/a d
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 3 changes to 3 files
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg --cwd d log --template "{desc}\n"
  a
  $ hg --cwd d qpush -a
  applying b.patch
  now at: b.patch
  $ hg --cwd d log --template "{desc}\n"
  imported patch b.patch
  a

test --mq works and uses correct repository config

  $ hg --cwd d outgoing --mq
  comparing with http://localhost:$HGPORT2/a/.hg/patches
  searching for changes
  no changes found
  [1]
  $ hg --cwd d log --mq --template '{rev} {desc|firstline}\n'
  0 b.patch

  $ killdaemons.py