lfs: verify lfs object content when transferring to and from the remote store
This avoids inserting corrupt files into the usercache and the local and remote
stores. One downside is that the bad file won't be available locally for
forensic purposes after a remote download. I'm thinking about adding an
'incoming' directory to the local lfs store to handle the download, and then
moving the file to the 'objects' directory once it passes verification. That
would have the additional benefit of not concatenating each transfer chunk in
memory until the full file is transferred, as in the sketch below.
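
A minimal sketch of that idea (the directory layout, the function name, and
the 'chunks' iterable are hypothetical, not the code in this patch):

    import hashlib
    import os

    def downloadtoincoming(incomingdir, objectsdir, oid, chunks):
        # Stream each chunk to disk while hashing it, instead of
        # holding the whole blob in memory.
        tmppath = os.path.join(incomingdir, oid)
        sha = hashlib.sha256()
        with open(tmppath, 'wb') as fp:
            for chunk in chunks:
                sha.update(chunk)
                fp.write(chunk)
        if sha.hexdigest() != oid:
            # Leave the bad file in 'incoming' for forensics.
            raise ValueError('corrupt lfs object: %s' % oid)
        os.rename(tmppath, os.path.join(objectsdir, oid))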

Verification isn't needed when the data is passed back through the revlog
interface or when the oid was just calculated, but otherwise it is on by
default. The additional overhead should be well worth it, given the problems
it avoids with file-based remote stores or buggy lfs servers.
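
For the in-memory case, the check is just a sha256 comparison against the
oid; a minimal sketch (the function name and error type are illustrative):

    import hashlib

    def verifyblob(oid, data):
        # 'oid' is the expected sha256 hex digest of 'data'.
        if hashlib.sha256(data).hexdigest() != oid:
            raise ValueError('detected corrupt lfs object: %s' % oid)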

Having two different verify functions is a little sad, but the full data of the
blob is mostly passed around in memory, because that's what the revlog interface
wants. The upload function, however, chunks up the data. It would be ideal if
that were how the content was always handled, but that's probably a huge
project.
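
For the chunked case, the hash can be folded over the chunks as they stream
by; a hypothetical wrapper to show the shape:

    import hashlib

    def verifychunks(oid, chunks):
        # Yield each chunk unchanged while hashing it, and check the
        # digest once the iterator is exhausted.
        sha = hashlib.sha256()
        for chunk in chunks:
            sha.update(chunk)
            yield chunk
        if sha.hexdigest() != oid:
            raise ValueError('detected corrupt lfs object: %s' % oid)

Handling all content this way would avoid ever building the full blob in
memory, which is the larger project mentioned above.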

I don't really like printing the long hash, but `hg debugdata` isn't a public
interface and is the only way to get it. The filelog and revision info are
nowhere near this code, so recommending `hg verify` is the easiest thing to do.
# dirstatenonnormalcheck.py - extension to check the consistency of the
# dirstate's non-normal map
#
# For most operations on dirstate, this extension checks that the nonnormalset
# contains the right entries.
# It compares the dirstate's cached nonnormalset to a set rebuilt from the map
# of all the files in the dirstate, to check that they contain the same files.
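#
# A hypothetical hgrc snippet to enable the checks (the two config knobs
# below are the ones read in extsetup(); the extension path is made up):
#
#   [extensions]
#   dirstatenonnormalcheck = /path/to/dirstatenonnormalcheck.py
#   [devel]
#   all-warnings = true
#   [experimental]
#   nonnormalparanoidcheck = true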
from __future__ import absolute_import
from mercurial import (
dirstate,
extensions,
)

def nonnormalentries(dmap):
    """Compute nonnormal entries from dirstate's dmap"""
    res = set()
    for f, e in dmap.iteritems():
        # dirstate entries are (state, mode, size, mtime) tuples; a file
        # is nonnormal if its state isn't 'n' or its mtime is unset (-1)
        if e[0] != 'n' or e[3] == -1:
            res.add(f)
    return res

def checkconsistency(ui, orig, dmap, _nonnormalset, label):
"""Compute nonnormalset from dmap, check that it matches _nonnormalset"""
nonnormalcomputedmap = nonnormalentries(dmap)
if _nonnormalset != nonnormalcomputedmap:
ui.develwarn("%s call to %s\n" % (label, orig), config='dirstate')
ui.develwarn("inconsistency in nonnormalset\n", config='dirstate')
ui.develwarn("[nonnormalset] %s\n" % _nonnormalset, config='dirstate')
ui.develwarn("[map] %s\n" % nonnormalcomputedmap, config='dirstate')

def _checkdirstate(orig, self, arg):
"""Check nonnormal set consistency before and after the call to orig"""
checkconsistency(self._ui, orig, self._map, self._map.nonnormalset,
"before")
r = orig(self, arg)
checkconsistency(self._ui, orig, self._map, self._map.nonnormalset, "after")
return r

def extsetup(ui):
"""Wrap functions modifying dirstate to check nonnormalset consistency"""
dirstatecl = dirstate.dirstate
devel = ui.configbool('devel', 'all-warnings')
paranoid = ui.configbool('experimental', 'nonnormalparanoidcheck')
if devel:
extensions.wrapfunction(dirstatecl, '_writedirstate', _checkdirstate)
if paranoid:
        # We don't do all these checks when paranoid mode is disabled, as
        # they would make the extension run very slowly on large repos
extensions.wrapfunction(dirstatecl, 'normallookup', _checkdirstate)
extensions.wrapfunction(dirstatecl, 'otherparent', _checkdirstate)
extensions.wrapfunction(dirstatecl, 'normal', _checkdirstate)
extensions.wrapfunction(dirstatecl, 'write', _checkdirstate)
extensions.wrapfunction(dirstatecl, 'add', _checkdirstate)
extensions.wrapfunction(dirstatecl, 'remove', _checkdirstate)
extensions.wrapfunction(dirstatecl, 'merge', _checkdirstate)
extensions.wrapfunction(dirstatecl, 'drop', _checkdirstate)