tests/md5sum.py @ 24228:542c891274b2

lazymanifest: use a binary search to do an insertion

This makes insertions log(n) plus some memmove() overhead, rather than doing an append followed by an n*log(n) sort. There's probably a lot of performance to be gained by adding a batch-add method, which could be smarter about the memmove()s performed. Includes a couple of extra tests that should help prevent bugs. Thanks to Martin for some significant pre-mail cleanup of this change.
author Augie Fackler <augie@google.com>
date Fri, 30 Jan 2015 21:30:40 -0800
parents 1ffeeb91c55d
children 328739ea70c3
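
The changeset description above concerns Mercurial's C lazymanifest code, not this test script. As a rough illustration only, here is a minimal Python sketch of the binary-search insertion idea it describes, using the standard bisect module on a plain sorted list; the names insert_sorted and entries are hypothetical and are not part of the real lazymanifest implementation. The md5sum.py script being viewed follows after the sketch.

import bisect

def insert_sorted(entries, item):
    """Insert item into the already-sorted list 'entries'.

    bisect_left locates the insertion point in O(log n) comparisons;
    list.insert then shifts the tail of the list, which is the
    memmove()-style cost the changeset description mentions.
    """
    pos = bisect.bisect_left(entries, item)
    # Skip the insert if an equal entry is already present.
    if pos == len(entries) or entries[pos] != item:
        entries.insert(pos, item)
    return pos

# Example: each insertion keeps the list sorted without re-sorting it.
names = ['a.txt', 'm.txt', 'z.txt']
insert_sorted(names, 'f.txt')
print(names)   # ['a.txt', 'f.txt', 'm.txt', 'z.txt']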

#!/usr/bin/env python
#
# Based on python's Tools/scripts/md5sum.py
#
# This software may be used and distributed according to the terms
# of the PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2, which is
# GPL-compatible.

import sys, os

# hashlib is available on Python 2.5+; older interpreters only have
# the deprecated md5 module.
try:
    from hashlib import md5
except ImportError:
    from md5 import md5

# On Windows, force stdout/stderr into binary mode so output is
# byte-for-byte identical across platforms; elsewhere msvcrt is
# absent and the ImportError is ignored.
try:
    import msvcrt
    msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
    msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
except ImportError:
    pass

# Hash every file named on the command line.
for filename in sys.argv[1:]:
    try:
        fp = open(filename, 'rb')
    except IOError as msg:
        sys.stderr.write('%s: Can\'t open: %s\n' % (filename, msg))
        sys.exit(1)

    # Read the file in 8 KB chunks so arbitrarily large files can be
    # hashed without loading them entirely into memory.
    m = md5()
    try:
        while True:
            data = fp.read(8192)
            if not data:
                break
            m.update(data)
    except IOError as msg:
        sys.stderr.write('%s: I/O error: %s\n' % (filename, msg))
        sys.exit(1)
    sys.stdout.write('%s  %s\n' % (m.hexdigest(), filename))

sys.exit(0)