
Corrupt an hg repo with two pulls.
create one repo with a long history

  $ hg init source1
  $ cd source1
  $ touch foo
  $ hg add foo
  $ for i in 1 2 3 4 5 6 7 8 9 10; do
  >     echo $i >> foo
  >     hg ci -m $i
  > done
  $ cd ..
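
as a quick sanity check, source1 should now end at revision 9, whose
description is the last loop value:

  $ hg log -R source1 -r tip -T '{rev} {desc}\n'
  9 10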

create one repo with a shorter history

  $ hg clone -r 0 source1 source2
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  updating to branch default
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ cd source2
  $ echo a >> foo
  $ hg ci -m a
  $ cd ..
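
likewise, source2 should now consist of the shared root plus the "a" commit:

  $ hg log -R source2 -r tip -T '{rev} {desc}\n'
  1 a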

create a third repo to pull both of the other repos into

  $ hg init corrupted
  $ cd corrupted

use a hook to make the second pull start while the first one is still running

  $ echo '[hooks]' >> .hg/hgrc
  $ echo 'prechangegroup = sleep 5' >> .hg/hgrc
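
the hgrc of the new repo should now contain just that hook:

  $ cat .hg/hgrc
  [hooks]
  prechangegroup = sleep 5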

start a pull...

  $ hg pull ../source1 > pull.out 2>&1 &

... and start another pull before the first one has finished

  $ sleep 1
  $ hg pull ../source2 2>/dev/null
  pulling from ../source2
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ cat pull.out
  pulling from ../source1
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 10 changesets with 10 changes to 1 files
  (run 'hg update' to get a working copy)

wait for the background pull to finish, then check that the repository is intact

  $ wait
  $ hg verify
  checking changesets
  checking manifests
  crosschecking files in changesets and manifests
  checking files
  1 files, 11 changesets, 11 total revisions
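
with both changegroups applied there are eleven changesets numbered 0 through
10, so the tip should be revision 10:

  $ hg log -r tip -T '{rev}\n'
  10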

  $ cd ..