largefiles: don't allow corruption to propagate after detection
basestore.get uses util.atomictempfile when checking and receiving a new
largefile ... but the close/discard logic was too clever for largefiles.
Largefiles relied on being able to discard the file and thus prevent it from
being written to the store. That was, however, too brittle: lfutil.copyandhash
closes the file it writes to ... with a 'blecch' comment. The discard was thus
a silent noop, and as a result corruption would be detected ... but the
corrupted files would be used anyway.
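To illustrate, a simplified sketch of the old flow (not the verbatim code):
util.atomictempfile commits its contents on close() and only drops them on
discard(), so a discard issued after close changes nothing.

    tmpfile = util.atomictempfile(storefilename)
    gothash = lfutil.copyandhash(instream, tmpfile)
    # copyandhash has already close()d tmpfile (the 'blecch'),
    # which renamed the temp file into the store
    if gothash != expecthash:
        tmpfile.discard()  # silent noop: the corrupt largefile
                           # is already in place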
Instead, we now use a tmp file and rename or unlink it after validating it.
A better solution should be implemented ... but not now.
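In outline, the new flow looks like this: a minimal sketch with illustrative
names (storefilename, instream, expecthash), not the exact code.

    import os, tempfile

    fd, tmpname = tempfile.mkstemp(dir=os.path.dirname(storefilename))
    tmpfile = os.fdopen(fd, 'wb')
    gothash = lfutil.copyandhash(instream, tmpfile)  # also closes tmpfile
    if gothash != expecthash:
        os.unlink(tmpname)  # corrupt download: never expose it
    else:
        os.rename(tmpname, storefilename)  # validated: move into place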
$ "$TESTDIR/hghave" killdaemons || exit 80
Test wire protocol unbundle with hashed heads (capability: unbundlehash)
Create a remote repository.
$ hg init remote
$ hg serve -R remote --config web.push_ssl=False --config web.allow_push=* -p $HGPORT -d --pid-file=hg1.pid -E error.log -A access.log
$ cat hg1.pid >> $DAEMON_PIDS
Clone the repository and push a change.
$ hg clone http://localhost:$HGPORT/ local
no changes found
updating to branch default
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ touch local/README
$ hg ci -R local -A -m hoge
adding README
$ hg push -R local
pushing to http://localhost:$HGPORT/
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files
Ensure hashed heads format is used.
The hash here is always the same since the remote repository only has the null head.
$ cat access.log | grep unbundle
* - - [*] "POST /?cmd=unbundle HTTP/1.1" 200 - x-hgarg-1:heads=686173686564+6768033e216468247bd031a0a2d9876d79818f8f (glob)
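The asserted argument can be reproduced by hand. A minimal Python sketch
(outside the test): with the unbundlehash capability the client sends, instead
of the raw heads, the hex encoding of the literal string 'hashed' followed by
the SHA-1 of the sorted binary heads joined together; the '+' in the log is
the URL-encoded space between the two fields.

    import hashlib

    heads = [b'\x00' * 20]  # the remote only has the null head
    digest = hashlib.sha1(b''.join(sorted(heads))).hexdigest()
    print(b'hashed'.hex(), digest)
    # 686173686564 6768033e216468247bd031a0a2d9876d79818f8f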
Explicitly kill daemons to let the test exit on Windows
$ "$TESTDIR/killdaemons.py" $DAEMON_PIDS