lfs: fix the stall and corruption issue when concurrently uploading blobs
We've avoided the issue up to this point by gating worker usage behind an
experimental config. See 10e62d5efa73 and the thread linked there for some of
the initial diagnosis. Essentially, some data was read from the blob before an
error occurred, and `keepalive` retried the request without rewinding the file
pointer. The leading data was therefore missing from the blob on the server,
and the connection stalled, trying to send more data than remained available.
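The essence of the fix is to seek back to the start of the file object before
each attempt. A minimal sketch of the idea (the `upload` callback and
`retries` count here are illustrative, not the actual keepalive API):

  import io

  def send_with_retry(fileobj, upload, retries=2):
      """Send the contents of fileobj, rewinding on every attempt.

      Without the seek(0), a retry after a partial read sends only the
      remaining bytes while still advertising the full Content-Length,
      so the server waits for data that never arrives.
      """
      for attempt in range(retries + 1):
          fileobj.seek(0)  # rewind so each attempt sends the whole payload
          try:
              return upload(fileobj)
          except IOError:
              if attempt == retries:
                  raise

  # Usage sketch: here upload() simply drains the stream.
  send_with_retry(io.BytesIO(b'blob contents'), lambda f: f.read())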
In trying to recreate this, I was unable to do so when uploading from Windows
to CentOS 7. But it reproduced every time when going from one CentOS 7 machine
to another over https.
I found recent fixes in the Facebook repo that address this[1][2]. The commit
message for the first is:
The KeepAlive HTTP implementation is bugged in its retry logic: it supports
reading from a file pointer, but doesn't support rewinding the seek cursor
when it performs a retry. So it can happen that an upload fails for whatever
reason and then 'hangs' on the retry event.
The sequence of events that gets triggered is:
- Upload file A, goes OK. Keep-Alive caches connection.
- Upload file B, fails due to (for example) a failing Keep-Alive, but the LFS
  file pointer has been consumed for the upload and the fd has been closed.
- Retry for file B starts, sets the Content-Length properly to the expected
  file size, but since the file pointer has been consumed no data will be
  uploaded, causing the server to wait for the uploaded data until either
  client or server reaches a timeout, making it seem as if our Mercurial
  process hangs.
This is just a stop-gap measure to prevent this behavior from blocking
Mercurial (LFS has retry logic). A proper solution needs to be built on top of
this stop-gap measure: for uploads from file pointers, we should support
fseek() on the interface. Since we always expect to consume the whole file
anyway, this should be safe. That way we can seek back to the beginning on a
retry.
I ported those two patches, and it works. But I see that `url._sendfile()`
already does a rewind on `httpsendfile` objects[3], so maybe it's better to
keep this all in one place and avoid a second seek. We may still want the
first Facebook patch as extra protection for this problem in general. The
other two uses of `httpsendfile` are in the wire protocol to upload bundles,
and to upload largefiles. Neither of these appears to use a worker, and I'm
not sure why workers seem to trigger this, or whether this could have happened
without a worker.
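For context, the rewind in `url._sendfile()` looks roughly like this
(paraphrased from the code at [3], not a verbatim copy; `httpconnectionmod`,
`util.filechunkiter`, and `keepalive` are the surrounding Mercurial modules):

  def _sendfile(self, data):
      # httpsendfile objects are seekable: rewind before sending, since
      # some data may already have been sent (e.g. on an auth retry)
      if isinstance(data, httpconnectionmod.httpsendfile):
          data.seek(0)
          for chunk in util.filechunkiter(data):
              self.send(chunk)
      else:
          keepalive.HTTPConnection.send(self, data)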
Since `httpsendfile` already has a `close()` method, the one here is dropped.
That class also explicitly says there's no `__len__` attribute, so that is
removed too. The override for `read()` is still necessary, to avoid the
per-file progress bar usage.
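The resulting wrapper is a thin subclass along these lines (a standalone
sketch, not the exact patch; the base class here is a stub mimicking
`httpsendfile`, and the `lfsuploadfile` name is illustrative):

  import io

  class httpsendfile(object):
      """Stub of mercurial.httpconnection.httpsendfile: a seekable file
      wrapper whose read() reports upload progress."""

      def __init__(self, fileobj):
          self._fp = fileobj

      def read(self, size=-1):
          data = self._fp.read(size)
          print('progress: +%d bytes' % len(data))  # per-read progress bar
          return data

      def seek(self, offset, whence=0):
          return self._fp.seek(offset, whence)

      def close(self):
          return self._fp.close()

  class lfsuploadfile(httpsendfile):
      """LFS upload wrapper: inherit close() and seek(), but override
      read() so concurrent workers don't each drive a progress bar."""

      def read(self, size=-1):
          return self._fp.read(size)

  f = lfsuploadfile(io.BytesIO(b'blob contents'))
  assert f.read() == b'blob contents'  # no progress output
  f.seek(0)
  f.close()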
[1] https://github.com/facebookexperimental/eden/commit/c350d6536d90c044c837abdd3675185644481469
[2] https://github.com/facebookexperimental/eden/commit/77f0d3fd0415e81b63e317e457af9c55c46103ee
[3] https://www.mercurial-scm.org/repo/hg/file/5.2.2/mercurial/url.py#l176
Differential Revision: https://phab.mercurial-scm.org/D7962
#!/usr/bin/env python
"""
Utility for inspecting files in various ways.
This tool is like the collection of tools found in a unix environment but are
cross platform and stable and suitable for our needs in the test suite.
This can be used instead of tools like:
[
dd
find
head
hexdump
ls
md5sum
readlink
sha1sum
stat
tail
test
readlink.py
md5sum.py
"""
from __future__ import absolute_import

import binascii
import glob
import hashlib
import optparse
import os
import re
import sys

# Python 3 adapters
ispy3 = sys.version_info[0] >= 3

if ispy3:

    def iterbytes(s):
        for i in range(len(s)):
            yield s[i : i + 1]


else:
    iterbytes = iter
def visit(opts, filenames, outfile):
    """Process filenames in the way specified in opts, writing output to
    outfile."""
    for f in sorted(filenames):
        isstdin = f == '-'
        if not isstdin and not os.path.lexists(f):
            outfile.write(b'%s: file not found\n' % f.encode('utf-8'))
            continue
        quiet = (opts.quiet and not opts.recurse) or isstdin
        isdir = os.path.isdir(f)
        islink = os.path.islink(f)
        isfile = os.path.isfile(f) and not islink
        dirfiles = None
        content = None
        facts = []
        if isfile:
            if opts.type:
                facts.append(b'file')
            if any((opts.hexdump, opts.dump, opts.md5, opts.sha1, opts.sha256)):
                with open(f, 'rb') as fobj:
                    content = fobj.read()
        elif islink:
            if opts.type:
                facts.append(b'link')
            content = os.readlink(f).encode('utf8')
        elif isstdin:
            content = getattr(sys.stdin, 'buffer', sys.stdin).read()
            if opts.size:
                facts.append(b'size=%d' % len(content))
        elif isdir:
            if opts.recurse or opts.type:
                dirfiles = glob.glob(f + '/*')
                facts.append(b'directory with %d files' % len(dirfiles))
        elif opts.type:
            facts.append(b'type unknown')
        if not isstdin:
            stat = os.lstat(f)
            if opts.size and not isdir:
                facts.append(b'size=%d' % stat.st_size)
            if opts.mode and not islink:
                facts.append(b'mode=%o' % (stat.st_mode & 0o777))
            if opts.links:
                facts.append(b'links=%d' % stat.st_nlink)
            if opts.newer:
                # mtime might be in whole seconds so newer file might be same
                if stat.st_mtime >= os.stat(opts.newer).st_mtime:
                    facts.append(
                        b'newer than %s' % opts.newer.encode('utf8', 'replace')
                    )
                else:
                    facts.append(
                        b'older than %s' % opts.newer.encode('utf8', 'replace')
                    )
        if opts.md5 and content is not None:
            h = hashlib.md5(content)
            facts.append(b'md5=%s' % binascii.hexlify(h.digest())[: opts.bytes])
        if opts.sha1 and content is not None:
            h = hashlib.sha1(content)
            facts.append(
                b'sha1=%s' % binascii.hexlify(h.digest())[: opts.bytes]
            )
        if opts.sha256 and content is not None:
            h = hashlib.sha256(content)
            facts.append(
                b'sha256=%s' % binascii.hexlify(h.digest())[: opts.bytes]
            )
        if isstdin:
            outfile.write(b', '.join(facts) + b'\n')
        elif facts:
            outfile.write(b'%s: %s\n' % (f.encode('utf-8'), b', '.join(facts)))
        elif not quiet:
            outfile.write(b'%s:\n' % f.encode('utf-8'))
        if content is not None:
            chunk = content
            if not islink:
                if opts.lines:
                    if opts.lines >= 0:
                        chunk = b''.join(chunk.splitlines(True)[: opts.lines])
                    else:
                        chunk = b''.join(chunk.splitlines(True)[opts.lines :])
                if opts.bytes:
                    if opts.bytes >= 0:
                        chunk = chunk[: opts.bytes]
                    else:
                        chunk = chunk[opts.bytes :]
            if opts.hexdump:
                for i in range(0, len(chunk), 16):
                    s = chunk[i : i + 16]
                    outfile.write(
                        b'%04x: %-47s |%s|\n'
                        % (
                            i,
                            b' '.join(b'%02x' % ord(c) for c in iterbytes(s)),
                            re.sub(b'[^ -~]', b'.', s),
                        )
                    )
            if opts.dump:
                if not quiet:
                    outfile.write(b'>>>\n')
                outfile.write(chunk)
                if not quiet:
                    if chunk.endswith(b'\n'):
                        outfile.write(b'<<<\n')
                    else:
                        outfile.write(b'\n<<< no trailing newline\n')
        if opts.recurse and dirfiles:
            assert not isstdin
            visit(opts, dirfiles, outfile)
if __name__ == "__main__":
    parser = optparse.OptionParser("%prog [options] [filenames]")
    parser.add_option(
        "-t",
        "--type",
        action="store_true",
        help="show file type (file or directory)",
    )
    parser.add_option(
        "-m", "--mode", action="store_true", help="show file mode"
    )
    parser.add_option(
        "-l", "--links", action="store_true", help="show number of links"
    )
    parser.add_option(
        "-s", "--size", action="store_true", help="show size of file"
    )
    parser.add_option(
        "-n", "--newer", action="store", help="check if file is newer (or same)"
    )
    parser.add_option(
        "-r", "--recurse", action="store_true", help="recurse into directories"
    )
    parser.add_option(
        "-S",
        "--sha1",
        action="store_true",
        help="show sha1 hash of the content",
    )
    parser.add_option(
        "",
        "--sha256",
        action="store_true",
        help="show sha256 hash of the content",
    )
    parser.add_option(
        "-M", "--md5", action="store_true", help="show md5 hash of the content"
    )
    parser.add_option(
        "-D", "--dump", action="store_true", help="dump file content"
    )
    parser.add_option(
        "-H", "--hexdump", action="store_true", help="hexdump file content"
    )
    parser.add_option(
        "-B", "--bytes", type="int", help="number of characters to dump"
    )
    parser.add_option(
        "-L", "--lines", type="int", help="number of lines to dump"
    )
    parser.add_option(
        "-q", "--quiet", action="store_true", help="no default output"
    )
    (opts, filenames) = parser.parse_args(sys.argv[1:])
    if not filenames:
        filenames = ['-']
    visit(opts, filenames, getattr(sys.stdout, 'buffer', sys.stdout))