view tests/md5sum.py @ 31454:a5bad127128d

branchmap: handle nullrev in setcachedata

906be86990 recently switched from:

    self._rbcrevs[rbcrevidx:rbcrevidx + _rbcrecsize] = rec

to:

    pack_into(_rbcrecfmt, self._rbcrevs, rbcrevidx, node, branchidx)

This causes an exception if rbcrevidx is -1 (i.e. the nullrev). The old code
handled this because Python handles out-of-bounds slice assignments to arrays
gracefully. The new code throws because the self._rbcrevs buffer isn't long
enough to write 8 bytes into. Normally it would have been resized by the
immediately preceding line, but because the 0-length buffer is greater than
the idx (-1) times the record size, no resize happens.

Setting the branch for the nullrev doesn't make sense anyway, so let's skip
it. This was caught by external tests in the Facebook extensions repo, but
I've added a test here that catches the issue.

author    Durham Goode <durham@fb.com>
date      Wed, 15 Mar 2017 15:48:57 -0700
parents   8d1cdee372e6
children  3a64ac39b893
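
The failure described above is easy to reproduce in isolation. The sketch below is a minimal stand-in, not the real branchmap code: the '>4sI' record format, the empty rbcrevs array, and the zero-filled record are assumptions chosen to mirror the 8-byte cache record the message describes. It shows that an out-of-range slice assignment is silently clamped by Python, while struct.pack_into refuses to write outside the buffer and raises struct.error; skipping the write for the nullrev, as this changeset does, avoids ever reaching pack_into with a negative index.

    from array import array
    import struct

    _rbcrecfmt = '>4sI'                        # assumed 8-byte record: 4-byte node prefix + branch index
    _rbcrecsize = struct.calcsize(_rbcrecfmt)  # 8

    nullrev = -1
    rbcrevidx = nullrev * _rbcrecsize          # -8, i.e. "before" the start of the buffer

    # Old code: the out-of-range slice is clamped by Python, so the record is
    # quietly inserted at position 0 of the empty array instead of raising.
    rbcrevs = array('B')
    rbcrevs[rbcrevidx:rbcrevidx + _rbcrecsize] = array('B', b'\x00' * _rbcrecsize)

    # New code: pack_into requires the buffer to already be large enough at the
    # given offset, so the same index raises struct.error on the empty buffer.
    try:
        struct.pack_into(_rbcrecfmt, bytearray(), rbcrevidx, b'\x00' * 4, 0)
    except struct.error as err:
        print('pack_into failed: %s' % err)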

#!/usr/bin/env python
#
# Based on python's Tools/scripts/md5sum.py
#
# This software may be used and distributed according to the terms
# of the PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2, which is
# GPL-compatible.

from __future__ import absolute_import

import os
import sys

try:
    import hashlib
    md5 = hashlib.md5
except ImportError:
    import md5
    md5 = md5.md5

try:
    import msvcrt
    msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
    msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
except ImportError:
    pass

for filename in sys.argv[1:]:
    try:
        fp = open(filename, 'rb')
    except IOError as msg:
        sys.stderr.write("%s: Can't open: %s\n" % (filename, msg))
        sys.exit(1)

    m = md5()
    try:
        # the file is opened in binary mode, so use a bytes EOF sentinel
        for data in iter(lambda: fp.read(8192), b''):
            m.update(data)
    except IOError as msg:
        sys.stderr.write('%s: I/O error: %s\n' % (filename, msg))
        sys.exit(1)
    sys.stdout.write('%s  %s\n' % (m.hexdigest(), filename))

sys.exit(0)