tests/test-clone-uncompressed.t @ 30190:56b930238036 (stable)
largefiles: safer handling of interruptions while updating modifications

Largefiles are fragile because the design requires dirstate and lfdirstate to
be kept in sync.

To be less fragile, mark all clean largefiles as unsure ("normallookup")
before updating standins. After the standins have been updated and we know
exactly which largefile standins actually were changed, mark the unchanged
largefiles back to clean ("normal").

This makes the failure mode safer: if interrupted, the next command will
perform extra hashing of all largefiles, so any largefile that is out of sync
with its standin is marked dirty, shows up in status, and can be cleaned with
update --clean.
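The ordering matters: the pessimistic "unsure" state must be recorded before
any standin is touched. A minimal, self-contained Python sketch of that
ordering (not the actual largefiles code; the dict-based lfdirstate and the
update_standin callback are hypothetical stand-ins for the extension's
internals):

    def safe_update_standins(lfdirstate, largefiles, update_standin):
        # 1. Before touching any standin, downgrade every clean largefile
        #    to "unsure" (normallookup). If we are interrupted after this
        #    point, the next command re-hashes these files instead of
        #    trusting stale state.
        clean = [f for f in largefiles if lfdirstate.get(f) == 'normal']
        for f in clean:
            lfdirstate[f] = 'normallookup'

        # 2. Update the standins, remembering which ones actually changed.
        #    An interruption anywhere in this loop is now recoverable.
        changed = set(f for f in largefiles if update_standin(f))

        # 3. Only files whose standins did not change are provably clean
        #    again; mark them back to "normal".
        for f in clean:
            if f not in changed:
                lfdirstate[f] = 'normal'

    # Example: f2's standin changes, so only f1 returns to "normal".
    state = {'f1': 'normal', 'f2': 'normal'}
    safe_update_standins(state, ['f1', 'f2'], lambda f: f == 'f2')
    assert state == {'f1': 'normal', 'f2': 'normallookup'}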
| author | Mads Kiilerich <madski@unity3d.com> |
| --- | --- |
| date | Sun, 16 Oct 2016 02:29:45 +0200 |
| parents | 9dc27a334fb1 |
| children | e7a35f18d91f |
#require serve

Initialize repository
the status call is to check for issue5130

  $ hg init server
  $ cd server
  $ touch foo
  $ hg -q commit -A -m initial
  >>> for i in range(1024):
  ...     with open(str(i), 'wb') as fh:
  ...         fh.write(str(i))
  $ hg -q commit -A -m 'add a lot of files'
  $ hg st
  $ hg serve -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid >> $DAEMON_PIDS
  $ cd ..

Basic clone

  $ hg clone --uncompressed -U http://localhost:$HGPORT clone1
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*/sec) (glob)
  searching for changes
  no changes found

Clone with background file closing enabled

  $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --uncompressed -U http://localhost:$HGPORT clone-background | grep -v adding
  using http://localhost:$HGPORT/
  sending capabilities command
  sending branchmap command
  streaming all changes
  sending stream_out command
  1027 files to transfer, 96.3 KB of data
  starting 4 threads for background file closing
  transferred 96.3 KB in * seconds (*/sec) (glob)
  query 1; heads
  sending batch command
  searching for changes
  all remote heads known locally
  no changes found
  sending getbundle command
  bundle2-input-bundle: with-transaction
  bundle2-input-part: "listkeys" (params: 1 mandatory) supported
  bundle2-input-part: total payload size 58
  bundle2-input-part: "listkeys" (params: 1 mandatory) supported
  bundle2-input-bundle: 1 parts total
  checking for updated bookmarks

Stream clone while repo is changing:

  $ mkdir changing
  $ cd changing

extension for delaying the server process so we reliably
can modify the repo while cloning

  $ cat > delayer.py <<EOF
  > import time
  > from mercurial import extensions, scmutil
  > def __call__(orig, self, path, *args, **kwargs):
  >     if path == 'data/f1.i':
  >         time.sleep(2)
  >     return orig(self, path, *args, **kwargs)
  > extensions.wrapfunction(scmutil.vfs, '__call__', __call__)
  > EOF

prepare repo with small and big file to cover both code paths in emitrevlogdata

  $ hg init repo
  $ touch repo/f1
  $ $TESTDIR/seq.py 50000 > repo/f2
  $ hg -R repo ci -Aqm "0"
  $ hg -R repo serve -p $HGPORT1 -d --pid-file=hg.pid --config extensions.delayer=delayer.py
  $ cat hg.pid >> $DAEMON_PIDS

clone while modifying the repo between stating file with write lock and
actually serving file content

  $ hg clone -q --uncompressed -U http://localhost:$HGPORT1 clone &
  $ sleep 1
  $ echo >> repo/f1
  $ echo >> repo/f2
  $ hg -R repo ci -m "1"
  $ wait
  $ hg -R clone id
  000000000000
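For readers unfamiliar with the hook that delayer.py uses: a minimal,
self-contained sketch of the wrapfunction idea (this mimics the real
mercurial.extensions helper rather than quoting it; vfslike and delayed are
toy stand-ins for scmutil.vfs and the delayer):

    def wrapfunction(container, funcname, wrapper):
        # Replace container.funcname with a closure that passes the
        # original callable to the wrapper as its first argument.
        origfn = getattr(container, funcname)
        def wrap(*args, **kwargs):
            return wrapper(origfn, *args, **kwargs)
        setattr(container, funcname, wrap)
        return origfn

    class vfslike(object):
        def __call__(self, path):
            return 'opened %s' % path

    def delayed(orig, self, path):
        # The real delayer sleeps here when path == 'data/f1.i' before
        # delegating, widening the race window the test needs.
        return orig(self, path)

    wrapfunction(vfslike, '__call__', delayed)
    assert vfslike()('data/f1.i') == 'opened data/f1.i'

In the test above, delayer.py wraps scmutil.vfs.__call__ the same way, so
serving data/f1.i pauses for two seconds, long enough for the commit of "1"
to land while the clone is still streaming.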