dirstate.walk: don't keep track of normalized files in parallel
Rev 2bb13f2b778c changed the semantics of the work list to store (normalized,
non-normalized) pairs. All the tuple creation and destruction hurts perf: on a
large repo on OS X, 'hg status' went from 3.62 seconds to 3.78.
It is also unnecessary in most cases:
- it is clearly unnecessary on case-sensitive filesystems.
- it is also unnecessary when filenames have been read from disk rather than
  being supplied by the user.
The only case where the non-normalized form is required at all is when the
file is unknown.
To eliminate most of the perf cost, keep track of whether the directory needs
to be normalized at all with a boolean called 'alreadynormed', and pay the
cost of directory normalization only when necessary, as in the sketch below.
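A minimal sketch of the idea in Python, assuming hypothetical names ('walk',
'normcase', 'user_dirs') that stand in for the real dirstate.walk machinery,
which also handles matchers, symlinks, and the dirstate map:

import os

def normcase(path):
    # Stand-in for the expensive per-directory normalization that
    # dirstate performs on case-insensitive filesystems.
    return os.path.normcase(path)

def walk(root, user_dirs):
    # Each work item is (directory, alreadynormed) rather than a
    # (normalized, non-normalized) tuple. Directories discovered by
    # reading the disk are normalized by construction and carry True;
    # only user-supplied paths start out False.
    work = [(d, False) for d in user_dirs] or [(root, True)]
    results = []
    while work:
        nd, alreadynormed = work.pop()
        # Pay the normalization cost only when the flag demands it.
        nd = nd if alreadynormed else normcase(nd)
        for name in sorted(os.listdir(nd)):
            path = os.path.join(nd, name)
            if os.path.isdir(path):
                # Subdirectories come straight off the disk, so they
                # are queued with alreadynormed=True.
                work.append((path, True))
            else:
                results.append(path)
    return results

Since everything read off the disk is queued with alreadynormed=True, the
normalization branch is never taken on case-sensitive filesystems or for
paths the user did not supply.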
With this change, 'hg status' on the above large repo goes back down to 3.63
seconds.
#!/usr/bin/env python
#
# Based on python's Tools/scripts/md5sum.py
#
# This software may be used and distributed according to the terms
# of the PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2, which is
# GPL-compatible.

import sys, os

try:
    from hashlib import md5
except ImportError:
    from md5 import md5

try:
    # On Windows, put stdout/stderr into binary mode so output is not
    # mangled by newline translation.
    import msvcrt
    msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
    msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
except ImportError:
    pass

for filename in sys.argv[1:]:
    try:
        fp = open(filename, 'rb')
    except IOError as msg:
        sys.stderr.write('%s: Can\'t open: %s\n' % (filename, msg))
        sys.exit(1)

    m = md5()
    try:
        # Hash the file in 8k chunks to keep memory use constant.
        while True:
            data = fp.read(8192)
            if not data:
                break
            m.update(data)
    except IOError as msg:
        sys.stderr.write('%s: I/O error: %s\n' % (filename, msg))
        sys.exit(1)
    sys.stdout.write('%s %s\n' % (m.hexdigest(), filename))

sys.exit(0)
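Run as 'python md5sum.py FILE [FILE ...]': the script prints the hex digest
and filename for each argument, and exits non-zero as soon as a file cannot
be opened or read.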