view tests/md5sum.py @ 30190:56b930238036 stable

largefiles: more safe handling of interruptions while updating modifications

Largefiles are fragile with the design where dirstate and lfdirstate must be
kept in sync. To be less fragile, mark all clean largefiles as unsure
("normallookup") before updating standins. After standins have been updated
and we know exactly which largefile standins actually were changed, mark the
unchanged largefiles back to clean ("normal").

This makes the failure mode safer. If interrupted, the next command will
continue by performing extra hashing of all largefiles. That ensures that any
largefile that is out of sync with its standin is marked dirty, shows up in
status, and can be cleaned with update --clean.
author Mads Kiilerich <madski@unity3d.com>
date Sun, 16 Oct 2016 02:29:45 +0200
parents 6a98f9408a50
children 8d1cdee372e6
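The description above hinges on ordering: flip clean entries to "unsure" before the interruptible work, and flip only the provably unchanged ones back to "clean" afterwards. Below is a minimal, hypothetical Python sketch of that pattern; `update_standins_safely`, the `state` dict and the `update_standins` callback are illustrative names only, not the actual largefiles/lfdirstate API.

    # Hypothetical sketch -- not the actual Mercurial largefiles code.
    # It only illustrates the ordering described in the changeset message.

    def update_standins_safely(state, clean_files, update_standins):
        """state maps filename -> 'clean' | 'unsure' | 'dirty' (hypothetical).

        update_standins() performs the interruptible work and returns the
        set of files whose standins actually changed (hypothetical callback).
        """
        # 1. Pessimistically mark every clean largefile as "unsure"
        #    ("normallookup" in dirstate terms). If we are interrupted after
        #    this point, the next command re-hashes these files instead of
        #    trusting a possibly stale "clean" record.
        for f in clean_files:
            state[f] = 'unsure'

        # 2. The step that may be interrupted.
        changed = update_standins()

        # 3. Only files whose standins did not change are known to still
        #    match; mark those back to "clean" ("normal"). Changed ones stay
        #    "unsure" until they are re-hashed.
        for f in clean_files:
            if f not in changed:
                state[f] = 'clean'
        return state

    if __name__ == '__main__':
        # Toy demonstration of the ordering.
        files = {'a.bin': 'clean', 'b.bin': 'clean'}
        print(update_standins_safely(files, ['a.bin', 'b.bin'],
                                     lambda: {'b.bin'}))
        # -> {'a.bin': 'clean', 'b.bin': 'unsure'}

The design choice is that failing toward "unsure" only costs an extra hash on the next command, whereas failing toward "clean" could silently hide a modified largefile.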

#!/usr/bin/env python
#
# Based on python's Tools/scripts/md5sum.py
#
# This software may be used and distributed according to the terms
# of the PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2, which is
# GPL-compatible.

from __future__ import absolute_import

import os
import sys

# Prefer hashlib; fall back to the old md5 module on Python versions
# that predate hashlib (before 2.5).
try:
    import hashlib
    md5 = hashlib.md5
except ImportError:
    import md5
    md5 = md5.md5

# On Windows, put stdout/stderr into binary mode so output is not mangled
# by newline translation; msvcrt does not exist on other platforms.
try:
    import msvcrt
    msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
    msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
except ImportError:
    pass

for filename in sys.argv[1:]:
    try:
        fp = open(filename, 'rb')
    except IOError as msg:
        sys.stderr.write('%s: Can\'t open: %s\n' % (filename, msg))
        sys.exit(1)

    m = md5()
    try:
        while True:
            data = fp.read(8192)  # hash the file in 8k chunks
            if not data:
                break
            m.update(data)
    except IOError as msg:
        sys.stderr.write('%s: I/O error: %s\n' % (filename, msg))
        sys.exit(1)
    sys.stdout.write('%s  %s\n' % (m.hexdigest(), filename))

sys.exit(0)