largefiles: handle that a found standin file doesn't exist when removing it
I somehow ended up in a situation where hg crashed on an unlink I introduced in
328545c7d8a1.
I don't know how it happened and can't reproduce it. It seems like it can only
happen when the file is removed between the time of check (a working directory
context walk that finds a standin file) and the time of use (when we try to
remove the standin because the corresponding largefile doesn't exist).
But better safe than sorry: replace the plain unlink with unlinkpath with
ignoremissing=True. That will also remove remaining empty directories, which
arguably is more correct.
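As a rough illustration (not the actual largefiles code), the change amounts to
swapping a plain os.unlink for mercurial.util.unlinkpath at the call site; the
removestandin wrapper and the standinpath name below are hypothetical:

from mercurial import util

def removestandin(standinpath):
    # A plain os.unlink(standinpath) raises if the standin disappeared between
    # the working directory walk (time of check) and this call (time of use).
    # unlinkpath with ignoremissing=True tolerates the missing file and also
    # prunes parent directories that become empty.
    util.unlinkpath(standinpath, ignoremissing=True)
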
from __future__ import absolute_import

import cffi
import os

ffi = cffi.FFI()

# Hand the full C source of mpatch.c to cffi so the generated _mpatch_cffi
# extension module carries its own copy of the patching code.
mpatch_c = os.path.join(os.path.join(os.path.dirname(__file__), 'mercurial',
                                     'mpatch.c'))
ffi.set_source("_mpatch_cffi", open(mpatch_c).read(),
               include_dirs=["mercurial"])

# Declarations exposed on the generated lib object.  The 'extern "Python"'
# line declares a callback that Python code registers with @ffi.def_extern().
ffi.cdef("""
struct mpatch_frag {
    int start, end, len;
    const char *data;
};

struct mpatch_flist {
    struct mpatch_frag *base, *head, *tail;
};

extern "Python" struct mpatch_flist* cffi_get_next_item(void*, ssize_t);

int mpatch_decode(const char *bin, ssize_t len, struct mpatch_flist** res);
ssize_t mpatch_calcsize(size_t len, struct mpatch_flist *l);
void mpatch_lfree(struct mpatch_flist *a);
static int mpatch_apply(char *buf, const char *orig, size_t len,
                        struct mpatch_flist *l);
struct mpatch_flist *mpatch_fold(void *bins,
                                 struct mpatch_flist* (*get_next_item)(void*,
                                                                       ssize_t),
                                 ssize_t start, ssize_t end);
""")

if __name__ == '__main__':
    ffi.compile()
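
For context, a hedged sketch of how the generated _mpatch_cffi module could be
consumed: the callback name cffi_get_next_item and the lib functions come from
the cdef above, while the applypatches driver and its error handling are
illustrative, not Mercurial's actual implementation.

from _mpatch_cffi import ffi, lib


@ffi.def_extern()
def cffi_get_next_item(arg, pos):
    # Decode delta number pos into an mpatch_flist; NULL signals an error.
    keepalive, bins = ffi.from_handle(arg)
    container = ffi.new("struct mpatch_flist*[1]")
    cbin = ffi.new("char[]", bins[pos])
    keepalive.append(cbin)  # the fragment list points into this buffer
    if lib.mpatch_decode(cbin, len(bins[pos]), container) < 0:
        return ffi.NULL
    return container[0]


def applypatches(text, bins):
    # Fold all deltas into a single fragment list, then apply it to text.
    keepalive = []
    arg = ffi.new_handle((keepalive, bins))
    patch = lib.mpatch_fold(arg, lib.cffi_get_next_item, 0, len(bins))
    if patch == ffi.NULL:
        raise ValueError("error folding patches")
    try:
        outlen = lib.mpatch_calcsize(len(text), patch)
        if outlen < 0:
            raise ValueError("invalid patch chain")
        buf = ffi.new("char[]", outlen)
        if lib.mpatch_apply(buf, text, len(text), patch) < 0:
            raise ValueError("error applying patches")
        return ffi.buffer(buf, outlen)[:]
    finally:
        lib.mpatch_lfree(patch)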