diff tests/test-largefiles-wireproto.t @ 21424:d13b4ecdb680

test: split test-largefiles.t into multiple files

The `test-largefiles.t` unified test is significantly longer (by about 30%) than
any other test in the Mercurial test suite. As a result, it is always the last
test my test runner waits for at the end of a run. In practice, this means that
`test-largefiles.t` wastes half a minute of my life every time I run the
Mercurial test suite. By now that probably adds up to more than a few days.
I've finally decided to split it into multiple smaller tests to bring it back
to a reasonable length.

This changeset extracts the independent test cases into two files: one
dedicated to wire protocol testing, and another dedicated to all other tests
that could be independently extracted. No test case was altered in the making
of this changeset.

Various timings are available below. All timings were taken with 90 jobs on a
64-core machine. Similar results are seen on firefly (20 jobs on 12 cores).

General timing of the whole run
-------------------------------

We see a 25% real-time improvement with no significant CPU-time impact.

Before split:

  real    2m1.149s
  user    58m4.662s
  sys     11m28.563s

After split:

  real    1m31.977s
  user    57m45.993s
  sys     11m33.634s

Last tests to finish (using run-tests.py --time)
------------------------------------------------

test-largefiles.t now finishes at about the same time as the other slow tests.

Before split:

  Time     Test
  119.280  test-largefiles.t
   93.995  test-mq.t
   89.897  test-subrepo.t
   86.920  test-glog.t
   85.508  test-rename-merge2.t
   83.594  test-revset.t
   79.824  test-keyword.t
   78.077  test-mq-header-date.t

After split:

  Time     Test
   90.414  test-mq.t
   88.594  test-largefiles.t
   85.363  test-subrepo.t
   81.059  test-glog.t
   78.927  test-rename-merge2.t
   78.021  test-revset.t
   77.777  test-command-template.t

Timing of the largefiles tests themselves
-----------------------------------------

Running only the tests prefixed with "test-largefiles", there is no significant
change in cumulated time.

Before:

  Time    Test
  58.673  test-largefiles.t
   2.931  test-largefiles-cache.t
   0.583  test-largefiles-small-disk.t

After:

  Time    Test
  31.754  test-largefiles.t
  17.460  test-largefiles-misc.t
   8.888  test-largefiles-wireproto.t
   2.864  test-largefiles-cache.t
   0.580  test-largefiles-small-disk.t
author Pierre-Yves David <pierre-yves.david@fb.com>
date Fri, 16 May 2014 13:18:57 -0700
parents
children e53f6b72a0e4
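To reproduce the timing comparison in the description, the test runner's --time
report can be used. A minimal sketch, assuming a run from the tests/ directory;
the job count and test selection are taken from the description above, not from
the changeset itself:

  $ cd tests
  $ time python run-tests.py -j 90 --time           # whole suite; `time` yields the real/user/sys figures
  $ python run-tests.py --time test-largefiles*.t   # per-test timings for the largefiles tests only
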
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tests/test-largefiles-wireproto.t	Fri May 16 13:18:57 2014 -0700
@@ -0,0 +1,293 @@
+This file contains testcases that are mostly related to the wire protocol part of
+the largefiles extension.
+
+  $ USERCACHE="$TESTTMP/cache"; export USERCACHE
+  $ mkdir "${USERCACHE}"
+  $ cat >> $HGRCPATH <<EOF
+  > [extensions]
+  > largefiles=
+  > purge=
+  > rebase=
+  > transplant=
+  > [phases]
+  > publish=False
+  > [largefiles]
+  > minsize=2
+  > patterns=glob:**.dat
+  > usercache=${USERCACHE}
+  > [hooks]
+  > precommit=sh -c "echo \\"Invoking status precommit hook\\"; hg status"
+  > EOF
+
+
+#if serve
+vanilla clients not locked out from largefiles servers on vanilla repos
+  $ mkdir r1
+  $ cd r1
+  $ hg init
+  $ echo c1 > f1
+  $ hg add f1
+  $ hg commit -m "m1"
+  Invoking status precommit hook
+  A f1
+  $ cd ..
+  $ hg serve -R r1 -d -p $HGPORT --pid-file hg.pid
+  $ cat hg.pid >> $DAEMON_PIDS
+  $ hg --config extensions.largefiles=! clone http://localhost:$HGPORT r2
+  requesting all changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files
+  updating to branch default
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+
+largefiles clients still work with vanilla servers
+  $ hg --config extensions.largefiles=! serve -R r1 -d -p $HGPORT1 --pid-file hg.pid
+  $ cat hg.pid >> $DAEMON_PIDS
+  $ hg clone http://localhost:$HGPORT1 r3
+  requesting all changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files
+  updating to branch default
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+#endif
+
+vanilla clients locked out from largefiles http repos
+  $ mkdir r4
+  $ cd r4
+  $ hg init
+  $ echo c1 > f1
+  $ hg add --large f1
+  $ hg commit -m "m1"
+  Invoking status precommit hook
+  A f1
+  $ cd ..
+
+largefiles can be pushed locally (issue3583)
+  $ hg init dest
+  $ cd r4
+  $ hg outgoing ../dest
+  comparing with ../dest
+  searching for changes
+  changeset:   0:639881c12b4c
+  tag:         tip
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     m1
+  
+  $ hg push ../dest
+  pushing to ../dest
+  searching for changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files
+
+exit code with nothing outgoing (issue3611)
+  $ hg outgoing ../dest
+  comparing with ../dest
+  searching for changes
+  no changes found
+  [1]
+  $ cd ..
+
+#if serve
+  $ hg serve -R r4 -d -p $HGPORT2 --pid-file hg.pid
+  $ cat hg.pid >> $DAEMON_PIDS
+  $ hg --config extensions.largefiles=! clone http://localhost:$HGPORT2 r5
+  abort: remote error:
+  
+  This repository uses the largefiles extension.
+  
+  Please enable it in your Mercurial config file.
+  [255]
+
+used all HGPORTs, kill all daemons
+  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
+#endif
+
+vanilla clients locked out from largefiles ssh repos
+  $ hg --config extensions.largefiles=! clone -e "python \"$TESTDIR/dummyssh\"" ssh://user@dummy/r4 r5
+  abort: remote error:
+  
+  This repository uses the largefiles extension.
+  
+  Please enable it in your Mercurial config file.
+  [255]
+
+#if serve
+
+largefiles clients refuse to push largefiles repos to vanilla servers
+  $ mkdir r6
+  $ cd r6
+  $ hg init
+  $ echo c1 > f1
+  $ hg add f1
+  $ hg commit -m "m1"
+  Invoking status precommit hook
+  A f1
+  $ cat >> .hg/hgrc <<!
+  > [web]
+  > push_ssl = false
+  > allow_push = *
+  > !
+  $ cd ..
+  $ hg clone r6 r7
+  updating to branch default
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  $ cd r7
+  $ echo c2 > f2
+  $ hg add --large f2
+  $ hg commit -m "m2"
+  Invoking status precommit hook
+  A f2
+  $ hg --config extensions.largefiles=! -R ../r6 serve -d -p $HGPORT --pid-file ../hg.pid
+  $ cat ../hg.pid >> $DAEMON_PIDS
+  $ hg push http://localhost:$HGPORT
+  pushing to http://localhost:$HGPORT/
+  searching for changes
+  abort: http://localhost:$HGPORT/ does not appear to be a largefile store
+  [255]
+  $ cd ..
+
+putlfile errors are shown (issue3123)
+Corrupt the cached largefile in r7 and move it out of the server's usercache
+  $ mv r7/.hg/largefiles/4cdac4d8b084d0b599525cf732437fb337d422a8 .
+  $ echo 'client side corruption' > r7/.hg/largefiles/4cdac4d8b084d0b599525cf732437fb337d422a8
+  $ rm "$USERCACHE/4cdac4d8b084d0b599525cf732437fb337d422a8"
+  $ hg init empty
+  $ hg serve -R empty -d -p $HGPORT1 --pid-file hg.pid \
+  >   --config 'web.allow_push=*' --config web.push_ssl=False
+  $ cat hg.pid >> $DAEMON_PIDS
+  $ hg push -R r7 http://localhost:$HGPORT1
+  pushing to http://localhost:$HGPORT1/
+  searching for changes
+  remote: largefiles: failed to put 4cdac4d8b084d0b599525cf732437fb337d422a8 into store: largefile contents do not match hash
+  abort: remotestore: could not put $TESTTMP/r7/.hg/largefiles/4cdac4d8b084d0b599525cf732437fb337d422a8 to remote store http://localhost:$HGPORT1/ (glob)
+  [255]
+  $ mv 4cdac4d8b084d0b599525cf732437fb337d422a8 r7/.hg/largefiles/4cdac4d8b084d0b599525cf732437fb337d422a8
+Push of file that exists on server but is corrupted - magic healing would be nice ... but too magic
+  $ echo "server side corruption" > empty/.hg/largefiles/4cdac4d8b084d0b599525cf732437fb337d422a8
+  $ hg push -R r7 http://localhost:$HGPORT1
+  pushing to http://localhost:$HGPORT1/
+  searching for changes
+  remote: adding changesets
+  remote: adding manifests
+  remote: adding file changes
+  remote: added 2 changesets with 2 changes to 2 files
+  $ cat empty/.hg/largefiles/4cdac4d8b084d0b599525cf732437fb337d422a8
+  server side corruption
+  $ rm -rf empty
+
+Push a largefiles repository to a served empty repository
+  $ hg init r8
+  $ echo c3 > r8/f1
+  $ hg add --large r8/f1 -R r8
+  $ hg commit -m "m1" -R r8
+  Invoking status precommit hook
+  A f1
+  $ hg init empty
+  $ hg serve -R empty -d -p $HGPORT2 --pid-file hg.pid \
+  >   --config 'web.allow_push=*' --config web.push_ssl=False
+  $ cat hg.pid >> $DAEMON_PIDS
+  $ rm "${USERCACHE}"/*
+  $ hg push -R r8 http://localhost:$HGPORT2/#default
+  pushing to http://localhost:$HGPORT2/
+  searching for changes
+  remote: adding changesets
+  remote: adding manifests
+  remote: adding file changes
+  remote: added 1 changesets with 1 changes to 1 files
+  $ [ -f "${USERCACHE}"/02a439e5c31c526465ab1a0ca1f431f76b827b90 ]
+  $ [ -f empty/.hg/largefiles/02a439e5c31c526465ab1a0ca1f431f76b827b90 ]
+
+Clone over http, no largefiles pulled on clone.
+
+  $ hg clone http://localhost:$HGPORT2/#default http-clone -U
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files
+
+test 'verify' with remotestore:
+
+  $ rm "${USERCACHE}"/02a439e5c31c526465ab1a0ca1f431f76b827b90
+  $ mv empty/.hg/largefiles/02a439e5c31c526465ab1a0ca1f431f76b827b90 .
+  $ hg -R http-clone verify --large --lfa
+  checking changesets
+  checking manifests
+  crosschecking files in changesets and manifests
+  checking files
+  1 files, 1 changesets, 1 total revisions
+  searching 1 changesets for largefiles
+  changeset 0:cf03e5bb9936: f1 missing
+  verified existence of 1 revisions of 1 largefiles
+  [1]
+  $ mv 02a439e5c31c526465ab1a0ca1f431f76b827b90 empty/.hg/largefiles/
+  $ hg -R http-clone -q verify --large --lfa
+
+largefiles pulled on update - a largefile missing on the server:
+  $ mv empty/.hg/largefiles/02a439e5c31c526465ab1a0ca1f431f76b827b90 .
+  $ hg -R http-clone up --config largefiles.usercache=http-clone-usercache
+  getting changed largefiles
+  f1: largefile 02a439e5c31c526465ab1a0ca1f431f76b827b90 not available from http://localhost:$HGPORT2/
+  0 largefiles updated, 0 removed
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  $ hg -R http-clone st
+  ! f1
+  $ hg -R http-clone up -Cqr null
+
+largefiles pulled on update - a largefile corrupted on the server:
+  $ echo corruption > empty/.hg/largefiles/02a439e5c31c526465ab1a0ca1f431f76b827b90
+  $ hg -R http-clone up --config largefiles.usercache=http-clone-usercache
+  getting changed largefiles
+  f1: data corruption (expected 02a439e5c31c526465ab1a0ca1f431f76b827b90, got 6a7bb2556144babe3899b25e5428123735bb1e27)
+  0 largefiles updated, 0 removed
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  $ hg -R http-clone st
+  ! f1
+  $ [ ! -f http-clone/.hg/largefiles/02a439e5c31c526465ab1a0ca1f431f76b827b90 ]
+  $ [ ! -f http-clone/f1 ]
+  $ [ ! -f http-clone-usercache ]
+  $ hg -R http-clone verify --large --lfc
+  checking changesets
+  checking manifests
+  crosschecking files in changesets and manifests
+  checking files
+  1 files, 1 changesets, 1 total revisions
+  searching 1 changesets for largefiles
+  verified contents of 1 revisions of 1 largefiles
+  $ hg -R http-clone up -Cqr null
+
+largefiles pulled on update - no server side problems:
+  $ mv 02a439e5c31c526465ab1a0ca1f431f76b827b90 empty/.hg/largefiles/
+  $ hg -R http-clone --debug up --config largefiles.usercache=http-clone-usercache
+  resolving manifests
+   branchmerge: False, force: False, partial: False
+   ancestor: 000000000000, local: 000000000000+, remote: cf03e5bb9936
+   .hglf/f1: remote created -> g
+  getting .hglf/f1
+  updating: .hglf/f1 1/1 files (100.00%)
+  getting changed largefiles
+  using http://localhost:$HGPORT2/
+  sending capabilities command
+  sending batch command
+  getting largefiles: 0/1 lfile (0.00%)
+  getting f1:02a439e5c31c526465ab1a0ca1f431f76b827b90
+  sending getlfile command
+  found 02a439e5c31c526465ab1a0ca1f431f76b827b90 in store
+  1 largefiles updated, 0 removed
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+
+  $ ls http-clone-usercache/*
+  http-clone-usercache/02a439e5c31c526465ab1a0ca1f431f76b827b90
+
+  $ rm -rf empty http-clone*
+
+used all HGPORTs, kill all daemons
+  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
+
+#endif