lfs: fix the stall and corruption issue when concurrently uploading blobs
We've avoided the issue up to this point by gating worker usage with an
experimental config. See 10e62d5efa73 and the thread linked there for some of
the initial diagnosis; essentially, some data was read from the blob before an
error occurred, and `keepalive` retried without rewinding the file pointer. So
the leading data was missing from the blob on the server, and the connection
stalled, trying to send more data than was available.
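
As an illustration of the failure mode, here is a minimal sketch that uses
io.BytesIO as a stand-in for the blob's file pointer (illustrative only; none
of these names are lfs internals):

    import io

    blob = io.BytesIO(b'x' * 1000)   # the blob being uploaded

    # First attempt: some bytes are read before the connection errors out.
    blob.read(400)

    # Retry without rewinding: the Content-Length header still advertises the
    # full size, but only the remaining bytes can be read, so the server keeps
    # waiting for data that never arrives and the stored blob is missing its
    # leading bytes.
    remaining = blob.read()
    assert len(remaining) == 600     # short by exactly the consumed prefix
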
I was unable to recreate this when uploading from Windows to CentOS 7, but it
reproduced every time when uploading from one CentOS 7 machine to another over
https.
I found recent fixes in the Facebook repo to address this[1][2]. The commit
message for the first is:
The KeepAlive HTTP implementation is bugged in its retry logic: it supports
reading from a file pointer, but doesn't support rewinding of the seek cursor
when it performs a retry. So it can happen that an upload fails for whatever
reason and will then 'hang' on the retry event.
The sequence of events that get triggered are:
- Upload file A, goes OK. Keep-Alive caches connection.
- Upload file B, fails due to (for example) failing Keep-Alive, but LFS file
pointer has been consumed for the upload and fd has been closed.
- Retry for file B starts, sets the Content-Length properly to the expected
file size, but since file pointer has been consumed no data will be uploaded,
causing the server to wait for the uploaded data until either client or
server reaches a timeout, making it seem as if our mercurial process hangs.
This is just a stop-gap measure to prevent this behavior from blocking Mercurial
(LFS has retry logic). A proper solution needs to be built on top of this
stop-gap measure: for uploads from file pointers, we should support fseek() on
the interface. Since we always expect to consume the whole file anyway, this
should be safe. This way we can seek back to the beginning on a retry.
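
That proposed fix amounts to rewinding before every attempt. A minimal sketch,
assuming a hypothetical send() callable and retry count (this is not the
actual keepalive.py code):

    def upload_with_retry(fp, send, attempts=2):
        for attempt in range(attempts):
            fp.seek(0)     # rewind so every attempt sends the whole blob
            try:
                return send(fp)
            except OSError:
                if attempt == attempts - 1:
                    raise
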
I ported those two patches, and it works. But I see that `url._sendfile()` does
a rewind on `httpsendfile` objects[3] (sketched below), so maybe it's better to
keep this all in one place and avoid a second seek. We may still want the first
Facebook patch
as extra protection for this problem in general. The other two uses of
`httpsendfile` are in the wire protocol to upload bundles, and to upload
largefiles. Neither of these appears to use a worker, and I'm not sure why
workers seem to trigger this, or if this could have happened without a worker.
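
The rewind mentioned above looks roughly like this (a paraphrase of the
pattern behind [3] with simplified names; not the verbatim Mercurial code):

    def sendfile(conn, data, sendchunk, chunksize=65536):
        # stream a file-like object in chunks, rewinding first in case the
        # request is being resent over a reused connection
        if hasattr(data, 'seek'):
            data.seek(0)
            while True:
                chunk = data.read(chunksize)
                if not chunk:
                    break
                sendchunk(conn, chunk)
        else:
            sendchunk(conn, data)
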
Since `httpsendfile` already has a `close()` method, that is dropped. That
class also explicitly says there's no `__len__` attribute, so that is removed
too. The `read()` override is necessary to avoid the per-file progress bar
usage.
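
A hypothetical sketch of that kind of wrapper (illustrative names only, not
the actual lfs code): read() goes straight to the underlying file so no
per-file progress is reported.

    class uploadwrapper(object):
        """wrap an open blob so read() skips per-file progress updates"""

        def __init__(self, fp):
            self._fp = fp
            self.seek = fp.seek
            self.close = fp.close

        def read(self, size=-1):
            # plain read, no progress-bar bookkeeping for this file
            return self._fp.read(size)
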
[1] https://github.com/facebookexperimental/eden/commit/c350d6536d90c044c837abdd3675185644481469
[2] https://github.com/facebookexperimental/eden/commit/77f0d3fd0415e81b63e317e457af9c55c46103ee
[3] https://www.mercurial-scm.org/repo/hg/file/5.2.2/mercurial/url.py#l176
Differential Revision: https://phab.mercurial-scm.org/D7962
This test makes sure that we don't mark a file as merged with its ancestor
when we do a merge.
$ cat <<EOF > merge
> from __future__ import print_function
> import sys, os
> print("merging for", os.path.basename(sys.argv[1]))
> EOF
$ HGMERGE="\"$PYTHON\" ../merge"; export HGMERGE
Creating base:
$ hg init a
$ cd a
$ echo 1 > foo
$ echo 1 > bar
$ echo 1 > baz
$ echo 1 > quux
$ hg add foo bar baz quux
$ hg commit -m "base"
$ cd ..
$ hg clone a b
updating to branch default
4 files updated, 0 files merged, 0 files removed, 0 files unresolved
Creating branch a:
$ cd a
$ echo 2a > foo
$ echo 2a > bar
$ hg commit -m "branch a"
Creating branch b:
$ cd ..
$ cd b
$ echo 2b > foo
$ echo 2b > baz
$ hg commit -m "branch b"
We shouldn't have anything but n state here:
$ hg debugstate --no-dates | grep -v "^n"
[1]
Merging:
$ hg pull ../a
pulling from ../a
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 2 changes to 2 files (+1 heads)
new changesets bdd988058d16
(run 'hg heads' to see heads, 'hg merge' to merge)
$ hg merge -v
resolving manifests
getting bar
merging foo
merging for foo
1 files updated, 1 files merged, 0 files removed, 0 files unresolved
(branch merge, don't forget to commit)
$ echo 2m > foo
$ echo 2b > baz
$ echo new > quux
$ hg ci -m "merge"
main: we should have a merge here:
$ hg debugindex --changelog
rev linkrev nodeid p1 p2
0 0 cdca01651b96 000000000000 000000000000
1 1 f6718a9cb7f3 cdca01651b96 000000000000
2 2 bdd988058d16 cdca01651b96 000000000000
3 3 d8a521142a3c f6718a9cb7f3 bdd988058d16
log should show foo and quux changed:
$ hg log -v -r tip
changeset: 3:d8a521142a3c
tag: tip
parent: 1:f6718a9cb7f3
parent: 2:bdd988058d16
user: test
date: Thu Jan 01 00:00:00 1970 +0000
files: foo quux
description:
merge
foo: we should have a merge here:
$ hg debugindex foo
rev linkrev nodeid p1 p2
0 0 b8e02f643373 000000000000 000000000000
1 1 2ffeddde1b65 b8e02f643373 000000000000
2 2 33d1fb69067a b8e02f643373 000000000000
3 3 aa27919ee430 2ffeddde1b65 33d1fb69067a
bar: we should not have a merge here:
$ hg debugindex bar
rev linkrev nodeid p1 p2
0 0 b8e02f643373 000000000000 000000000000
1 2 33d1fb69067a b8e02f643373 000000000000
baz: we should not have a merge here:
$ hg debugindex baz
rev linkrev nodeid p1 p2
0 0 b8e02f643373 000000000000 000000000000
1 1 2ffeddde1b65 b8e02f643373 000000000000
quux: we should not have a merge here:
$ hg debugindex quux
rev linkrev nodeid p1 p2
0 0 b8e02f643373 000000000000 000000000000
1 3 6128c0f33108 b8e02f643373 000000000000
Manifest entries should match tips of all files:
$ hg manifest --debug
33d1fb69067a0139622a3fa3b7ba1cdb1367972e 644 bar
2ffeddde1b65b4827f6746174a145474129fa2ce 644 baz
aa27919ee4303cfd575e1fb932dd64d75aa08be4 644 foo
6128c0f33108e8cfbb4e0824d13ae48b466d7280 644 quux
Everything should be clean now:
$ hg status
$ hg verify
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
checked 4 changesets with 10 changes to 4 files
$ cd ..