view tests/test-clone-uncompressed.t @ 28639:64ed9f904532
tests: fix for failure of test-convert-p4-filetypes.t
Before this patch, test-convert-p4-filetypes.t fails (at least with
the 2015.2/1366233 version of p4/p4d), because the files below are
omitted from the expected output for revision 1.
- file_tempobj
- file_xtempobj
These files are:
- 'add'-ed at revision 0, and
- 'edit'-ed at revision 1
According to the Perforce command reference below, the file types 'tempobj'
and 'xtempobj' imply the '+S' modifier, which indicates that "only the head
revision is stored". This means that these files should appear only in
the most recent revision (= revision 1).
https://www.perforce.com/perforce/doc.current/manuals/cmdref/file.types.html
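
For illustration, a history like the one described above could be produced
with a Perforce sequence along these lines (a sketch only; the submit
descriptions are hypothetical and not taken from the test):

  $ p4 add -t tempobj file_tempobj    # revision 0: added with a type implying '+S'
  $ p4 submit -d 'add tempobj file'
  $ p4 edit file_tempobj              # revision 1: the same file edited again
  $ p4 submit -d 'edit tempobj file'

Because '+S' keeps only the head revision's contents on the server, the
converted Mercurial history is expected to list file_tempobj only in
revision 1.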
BTW, test-convert-p4-filetypes.t with the 2015.2/1366233 version of p4/p4d
also fails in the same way at recent 2015 revisions of hgext/convert/p4.py.
Therefore, this patch should be reviewed by a Perforce guru, to examine
whether this failure depends on the version (and/or configuration) of
p4/p4d or not.
author    FUJIWARA Katsunori <foozy@lares.dti.ne.jp>
date      Sat, 26 Mar 2016 12:55:52 +0900
parents   aa440c3d7c5d
children  9dc27a334fb1
#require serve

Initialize repository
the status call is to check for issue5130

  $ hg init server
  $ cd server
  $ touch foo
  $ hg -q commit -A -m initial
  >>> for i in range(1024):
  ...     with open(str(i), 'wb') as fh:
  ...         fh.write(str(i))
  $ hg -q commit -A -m 'add a lot of files'
  $ hg st
  $ hg serve -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid >> $DAEMON_PIDS
  $ cd ..

Basic clone

  $ hg clone --uncompressed -U http://localhost:$HGPORT clone1
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*/sec) (glob)
  searching for changes
  no changes found

Clone with background file closing enabled

  $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --uncompressed -U http://localhost:$HGPORT clone-background | grep -v adding
  using http://localhost:$HGPORT/
  sending capabilities command
  sending branchmap command
  streaming all changes
  sending stream_out command
  1027 files to transfer, 96.3 KB of data
  starting 4 threads for background file closing
  transferred 96.3 KB in * seconds (*/sec) (glob)
  query 1; heads
  sending batch command
  searching for changes
  all remote heads known locally
  no changes found
  sending getbundle command
  bundle2-input-bundle: with-transaction
  bundle2-input-part: "listkeys" (params: 1 mandatory) supported
  bundle2-input-part: "listkeys" (params: 1 mandatory) supported
  bundle2-input-bundle: 1 parts total
  checking for updated bookmarks
  preparing listkeys for "phases"
  sending listkeys command
  received listkey for "phases": 58 bytes

Stream clone while repo is changing:

  $ mkdir changing
  $ cd changing

extension for delaying the server process so we reliably
can modify the repo while cloning

  $ cat > delayer.py <<EOF
  > import time
  > from mercurial import extensions, scmutil
  > def __call__(orig, self, path, *args, **kwargs):
  >     if path == 'data/f1.i':
  >         time.sleep(2)
  >     return orig(self, path, *args, **kwargs)
  > extensions.wrapfunction(scmutil.vfs, '__call__', __call__)
  > EOF

prepare repo with small and big file to cover both code paths in emitrevlogdata

  $ hg init repo
  $ touch repo/f1
  $ $TESTDIR/seq.py 50000 > repo/f2
  $ hg -R repo ci -Aqm "0"
  $ hg -R repo serve -p $HGPORT1 -d --pid-file=hg.pid --config extensions.delayer=delayer.py
  $ cat hg.pid >> $DAEMON_PIDS

clone while modifying the repo between stating file with write lock and
actually serving file content

  $ hg clone -q --uncompressed -U http://localhost:$HGPORT1 clone &
  $ sleep 1
  $ echo >> repo/f1
  $ echo >> repo/f2
  $ hg -R repo ci -m "1"
  $ wait
  $ hg -R clone id
  000000000000
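
For reference, a test like this one is normally exercised with Mercurial's
test runner from a source checkout; a minimal invocation might look like the
following (a sketch, assuming the standard tests/run-tests.py runner with no
extra flags):

  $ cd tests
  $ python run-tests.py test-clone-uncompressed.t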