Mercurial (hg-stable): contrib/byteify-strings.py @ 49773:523cacdfd324
delta-find: set the default candidate chunk size to 10
I ran performance and storage tests on repositories of various sizes and shapes
for the following values of this config: 5, 10, 20, 50, 100 and no-chunking.
The performance tests do not show any statistically significant impact on
computation times for large pushes and pulls.
When searching for an individual delta, this can provide a significant
performance improvement, with only a minor degradation of space-quality in the
result (see the data at the end of this message).
For overall store size, the change:
- does not have any impact on many small repositories,
- has an observable but negligible impact on most larger repositories,
- results in a small size increase (about 1%) for one private repository we
  use for testing, in its narrower version.
We will try to get more numbers on a larger version of that repository to make
sure nothing pathological happens.
We pick "10" as the default limit, as "5" seems a bit riskier.
There is room to improve the current code, for example with more aggressive
filtering and better (i.e. any) sorting of the candidates. However, this is
already a large improvement for pathological cases, with little impact on
common situations.
The initial motivation for this change is to fix the performance of delta
computation for a file where the previous code ended up testing 20 000 possible
candidate bases in one go, which is… slow. This affected about half of that
file's revisions, leading to atrocious performance, especially during some
push/pull operations.
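
To illustrate the chunking idea, here is a small, self-contained sketch. It is
not the actual delta-find code: chunked, find_delta_base, compute_delta_size
and good_enough are stand-ins invented for this example. The point is that each
search round only evaluates a bounded group of candidate bases and can stop as
soon as an acceptable delta has been found, instead of computing a delta
against every candidate up front.

def chunked(candidates, chunk_size):
    """Yield candidate revisions in groups of at most chunk_size.

    chunk_size=None reproduces the previous behaviour: a single big group.
    """
    if chunk_size is None:
        group = list(candidates)
        if group:
            yield group
        return
    group = []
    for rev in candidates:
        group.append(rev)
        if len(group) >= chunk_size:
            yield group
            group = []
    if group:
        yield group


def find_delta_base(candidates, compute_delta_size, good_enough, chunk_size=10):
    """Return (base, delta_size) for the best base found, chunk by chunk."""
    best = None
    for group in chunked(candidates, chunk_size):
        for base in group:
            size = compute_delta_size(base)
            if best is None or size < best[1]:
                best = (base, size)
        # Stop after a whole chunk once the current best is acceptable,
        # instead of testing the (possibly thousands of) remaining candidates.
        if best is not None and good_enough(best[1]):
            break
    return best


if __name__ == '__main__':
    # Toy run: 20 000 candidates; an acceptable base is found after testing
    # only the first chunk of 10 instead of the whole list.
    best = find_delta_base(
        range(20000),
        compute_delta_size=lambda rev: 20000 - rev,
        good_enough=lambda size: size < 19995,
    )
    print(best)  # -> (9, 19991)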
Details about individual delta finding timing:
----------------------------------------------
The vast majority of benchmark cases are unchanged; only the three below
differ. The first two do not see any impact on the final delta. The last one
sees a change in delta size that is negligible compared to the full-text size.
### data-env-vars.name = mozilla-try-2019-02-18-zstd-sparse-revlog
# benchmark.name = perf-delta-find
# benchmark.variants.rev = manifest-snapshot-many-tries-a (revision 756096)
∞: 5.844783
5: 4.473523 (-23.46%)
10: 4.970053 (-14.97%)
20: 5.770386 (-1.27%)
50: 5.821358
100: 5.834887
MANIFESTLOG: rev = 756096: (no-limit)
delta-base = 301840
search-rounds = 6
try-count = 60
delta-type = snapshot
snap-depth = 7
delta-size = 179
MANIFESTLOG: rev=756096: (limit = 10)
delta-base=301840
search-rounds=9
try-count=51
delta-type=snapshot
snap-depth=7
delta-size=179
### data-env-vars.name = mozilla-try-2019-02-18-zstd-sparse-revlog
# benchmark.name = perf-delta-find
# benchmark.variants.rev = manifest-snapshot-many-tries-d (revision 754060)
∞: 5.017663
5: 3.655931 (-27.14%)
10: 4.095436 (-18.38%)
20: 4.828949 (-3.76%)
50: 4.987574
100: 4.994889
MANIFESTLOG: rev=754060: (no limit)
delta-base=301840
search-rounds=5
try-count=53
delta-type=snapshot
snap-depth=7
delta-size = 179
MANIFESTLOG: rev=754060: (limit = 10)
delta-base=301840
search-rounds=8
try-count=45
delta-type=snapshot
snap-depth=7
delta-size = 179
### data-env-vars.name = mozilla-try-2019-02-18-zstd-sparse-revlog
# benchmark.name = perf-delta-find
# bin-env-vars.hg.flavor = rust
# benchmark.variants.rev = manifest-snapshot-many-tries-e (revision 693368)
∞: 4.869282
5: 2.039732 (-58.11%)
10: 2.413537 (-50.43%)
20: 4.449639 (-8.62%)
50: 4.865863
100: 4.882649
MANIFESTLOG: rev=693368: (no limit)
delta-base=693336
search-rounds=6
try-count=53
delta-type=snapshot
snap-depth=6
full-test-size=131065
delta-size=199
MANIFESTLOG: rev=693368: (limit = 10)
delta-base=278023
search-rounds=5
try-count=21
delta-type=snapshot
snap-depth=4
full-test-size=131065
delta-size=278
Raw data for store size (in bytes) for the various chunk-size values
(columns: store size, chunk-size limit, repository):
---------------------------------------------------------------------
440 134 384 5 pypy/.hg/store/
440 134 384 10 pypy/.hg/store/
440 134 384 20 pypy/.hg/store/
440 134 384 50 pypy/.hg/store/
440 134 384 100 pypy/.hg/store/
440 134 384 ... pypy/.hg/store/
666 987 471 5 netbsd-xsrc-2022-11-15/.hg/store/
666 987 471 10 netbsd-xsrc-2022-11-15/.hg/store/
666 987 471 20 netbsd-xsrc-2022-11-15/.hg/store/
666 987 471 50 netbsd-xsrc-2022-11-15/.hg/store/
666 987 471 100 netbsd-xsrc-2022-11-15/.hg/store/
666 987 471 ... netbsd-xsrc-2022-11-15/.hg/store/
852 844 884 5 netbsd-pkgsrc-2022-11-15/.hg/store/
852 844 884 10 netbsd-pkgsrc-2022-11-15/.hg/store/
852 844 884 20 netbsd-pkgsrc-2022-11-15/.hg/store/
852 844 884 50 netbsd-pkgsrc-2022-11-15/.hg/store/
852 844 884 100 netbsd-pkgsrc-2022-11-15/.hg/store/
852 844 884 ... netbsd-pkgsrc-2022-11-15/.hg/store/
1 504 227 981 5 netbeans-2018-08-01-sparse-zstd/.hg/store/
1 504 227 871 10 netbeans-2018-08-01-sparse-zstd/.hg/store/
1 504 227 813 20 netbeans-2018-08-01-sparse-zstd/.hg/store/
1 504 227 813 50 netbeans-2018-08-01-sparse-zstd/.hg/store/
1 504 227 813 100 netbeans-2018-08-01-sparse-zstd/.hg/store/
1 504 227 813 ... netbeans-2018-08-01-sparse-zstd/.hg/store/
3 875 801 068 5 netbsd-src-2022-11-15/.hg/store/
3 875 696 767 10 netbsd-src-2022-11-15/.hg/store/
3 875 696 757 20 netbsd-src-2022-11-15/.hg/store/
3 875 696 653 50 netbsd-src-2022-11-15/.hg/store/
3 875 696 653 100 netbsd-src-2022-11-15/.hg/store/
3 875 696 653 ... netbsd-src-2022-11-15/.hg/store/
4 531 441 314 5 mozilla-central/.hg/store/
4 531 435 157 10 mozilla-central/.hg/store/
4 531 432 045 20 mozilla-central/.hg/store/
4 531 429 119 50 mozilla-central/.hg/store/
4 531 429 119 100 mozilla-central/.hg/store/
4 531 429 119 ... mozilla-central/.hg/store/
4 875 861 390 5 mozilla-unified/.hg/store/
4 875 855 155 10 mozilla-unified/.hg/store/
4 875 852 027 20 mozilla-unified/.hg/store/
4 875 848 851 50 mozilla-unified/.hg/store/
4 875 848 851 100 mozilla-unified/.hg/store/
4 875 848 851 ... mozilla-unified/.hg/store/
11 498 764 601 5 mozilla-try/.hg/store/
11 497 968 858 10 mozilla-try/.hg/store/
11 497 958 730 20 mozilla-try/.hg/store/
11 497 927 156 50 mozilla-try/.hg/store/
11 497 925 963 100 mozilla-try/.hg/store/
11 497 923 428 ... mozilla-try/.hg/store/
10 047 914 031 5 private-repo
9 969 132 101 10 private-repo
9 944 745 015 20 private-repo
9 939 756 703 50 private-repo
9 939 833 016 100 private-repo
9 939 822 035 ... private-repo
author      Pierre-Yves David <pierre-yves.david@octobus.net>
date        Wed, 23 Nov 2022 19:08:27 +0100
parents     6000f5b25c9b
children    8250ecb53f30
line source
#!/usr/bin/env python3
#
# byteify-strings.py - transform string literals to be Python 3 safe
#
# Copyright 2015 Gregory Szorc <gregory.szorc@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

import argparse
import contextlib
import errno
import os
import sys
import tempfile
import token
import tokenize


def adjusttokenpos(t, ofs):
    """Adjust start/end column of the given token"""
    return t._replace(
        start=(t.start[0], t.start[1] + ofs), end=(t.end[0], t.end[1] + ofs)
    )


def replacetokens(tokens, opts):
    """Transform a stream of tokens from raw to Python 3.

    Returns a generator of possibly rewritten tokens.

    The input token list may be mutated as part of processing. However,
    its changes do not necessarily match the output token stream.
    """
    sysstrtokens = set()

    # The following utility functions access the tokens list and i index of
    # the for i, t enumerate(tokens) loop below
    def _isop(j, *o):
        """Assert that tokens[j] is an OP with one of the given values"""
        try:
            return tokens[j].type == token.OP and tokens[j].string in o
        except IndexError:
            return False

    def _findargnofcall(n):
        """Find arg n of a call expression (start at 0)

        Returns index of the first token of that argument, or None if
        there is not that many arguments.

        Assumes that token[i + 1] is '('.
        """
        nested = 0
        for j in range(i + 2, len(tokens)):
            if _isop(j, ')', ']', '}'):
                # end of call, tuple, subscription or dict / set
                nested -= 1
                if nested < 0:
                    return None
            elif n == 0:
                # this is the starting position of arg
                return j
            elif _isop(j, '(', '[', '{'):
                nested += 1
            elif _isop(j, ',') and nested == 0:
                n -= 1

        return None

    def _ensuresysstr(j):
        """Make sure the token at j is a system string

        Remember the given token so the string transformer won't add
        the byte prefix.

        Ignores tokens that are not strings. Assumes bounds checking has
        already been done.
        """
        k = j
        currtoken = tokens[k]
        while currtoken.type in (token.STRING, token.NEWLINE, tokenize.NL):
            k += 1
            if currtoken.type == token.STRING and currtoken.string.startswith(
                ("'", '"')
            ):
                sysstrtokens.add(currtoken)
            try:
                currtoken = tokens[k]
            except IndexError:
                break

    def _isitemaccess(j):
        """Assert the next tokens form an item access on `tokens[j]` and that
        `tokens[j]` is a name.
        """
        try:
            return (
                tokens[j].type == token.NAME
                and _isop(j + 1, '[')
                and tokens[j + 2].type == token.STRING
                and _isop(j + 3, ']')
            )
        except IndexError:
            return False

    def _ismethodcall(j, *methodnames):
        """Assert the next tokens form a call to `methodname` with a string
        as first argument on `tokens[j]` and that `tokens[j]` is a name.
        """
        try:
            return (
                tokens[j].type == token.NAME
                and _isop(j + 1, '.')
                and tokens[j + 2].type == token.NAME
                and tokens[j + 2].string in methodnames
                and _isop(j + 3, '(')
                and tokens[j + 4].type == token.STRING
            )
        except IndexError:
            return False

    coldelta = 0  # column increment for new opening parens
    coloffset = -1  # column offset for the current line (-1: TBD)
    parens = [(0, 0, 0, -1)]  # stack of (line, end-column, column-offset, type)
    ignorenextline = False  # don't transform the next line
    insideignoreblock = False  # don't transform until turned off

    for i, t in enumerate(tokens):
        # Compute the column offset for the current line, such that
        # the current line will be aligned to the last opening paren
        # as before.
        if coloffset < 0:
            lastparen = parens[-1]
            if t.start[1] == lastparen[1]:
                coloffset = lastparen[2]
            elif t.start[1] + 1 == lastparen[1] and lastparen[3] not in (
                token.NEWLINE,
                tokenize.NL,
            ):
                # fix misaligned indent of s/util.Abort/error.Abort/
                coloffset = lastparen[2] + (lastparen[1] - t.start[1])
            else:
                coloffset = 0

        # Reset per-line attributes at EOL.
        if t.type in (token.NEWLINE, tokenize.NL):
            yield adjusttokenpos(t, coloffset)
            coldelta = 0
            coloffset = -1
            if not insideignoreblock:
                ignorenextline = (
                    tokens[i - 1].type == token.COMMENT
                    and tokens[i - 1].string == "# no-py3-transform"
                )
            continue

        if t.type == token.COMMENT:
            if t.string == "# py3-transform: off":
                insideignoreblock = True
            if t.string == "# py3-transform: on":
                insideignoreblock = False

        if ignorenextline or insideignoreblock:
            yield adjusttokenpos(t, coloffset)
            continue

        # Remember the last paren position.
        if _isop(i, '(', '[', '{'):
            parens.append(t.end + (coloffset + coldelta, tokens[i + 1].type))
        elif _isop(i, ')', ']', '}'):
            parens.pop()

        # Convert most string literals to byte literals. String literals
        # in Python 2 are bytes. String literals in Python 3 are unicode.
        # Most strings in Mercurial are bytes and unicode strings are rare.
        # Rather than rewrite all string literals to use ``b''`` to indicate
        # byte strings, we apply this token transformer to insert the ``b``
        # prefix nearly everywhere.
        if t.type == token.STRING and t not in sysstrtokens:
            s = t.string

            # Preserve docstrings as string literals. This is inconsistent
            # with regular unprefixed strings. However, the
            # "from __future__" parsing (which allows a module docstring to
            # exist before it) doesn't properly handle the docstring if it
            # is b''' prefixed, leading to a SyntaxError. We leave all
            # docstrings as unprefixed to avoid this. This means Mercurial
            # components touching docstrings need to handle unicode,
            # unfortunately.
            if s[0:3] in ("'''", '"""'):
                # If it's assigned to something, it's not a docstring
                if not _isop(i - 1, '='):
                    yield adjusttokenpos(t, coloffset)
                    continue

            # If the first character isn't a quote, it is likely a string
            # prefixing character (such as 'b', 'u', or 'r'. Ignore.
            if s[0] not in ("'", '"'):
                yield adjusttokenpos(t, coloffset)
                continue

            # String literal. Prefix to make a b'' string.
            yield adjusttokenpos(
                t._replace(string='b%s' % t.string), coloffset
            )
            coldelta += 1
            continue

        # This looks like a function call.
        if t.type == token.NAME and _isop(i + 1, '('):
            fn = t.string

            # *attr() builtins don't accept byte strings to 2nd argument.
            if (
                fn
                in (
                    'getattr',
                    'setattr',
                    'hasattr',
                    'safehasattr',
                    'wrapfunction',
                    'wrapclass',
                    'addattr',
                )
                and (opts['allow-attr-methods'] or not _isop(i - 1, '.'))
            ):
                arg1idx = _findargnofcall(1)
                if arg1idx is not None:
                    _ensuresysstr(arg1idx)

            # .encode() and .decode() on str/bytes/unicode don't accept
            # byte strings on Python 3.
            elif fn in ('encode', 'decode') and _isop(i - 1, '.'):
                for argn in range(2):
                    argidx = _findargnofcall(argn)
                    if argidx is not None:
                        _ensuresysstr(argidx)

            # It changes iteritems/values to items/values as they are not
            # present in Python 3 world.
            elif opts['dictiter'] and fn in ('iteritems', 'itervalues'):
                yield adjusttokenpos(t._replace(string=fn[4:]), coloffset)
                continue

        if t.type == token.NAME and t.string in opts['treat-as-kwargs']:
            if _isitemaccess(i):
                _ensuresysstr(i + 2)
            if _ismethodcall(i, 'get', 'pop', 'setdefault', 'popitem'):
                _ensuresysstr(i + 4)

        # Looks like "if __name__ == '__main__'".
        if (
            t.type == token.NAME
            and t.string == '__name__'
            and _isop(i + 1, '==')
        ):
            _ensuresysstr(i + 2)

        # Emit unmodified token.
        yield adjusttokenpos(t, coloffset)


def process(fin, fout, opts):
    tokens = tokenize.tokenize(fin.readline)
    tokens = replacetokens(list(tokens), opts)
    fout.write(tokenize.untokenize(tokens))


def tryunlink(fname):
    try:
        os.unlink(fname)
    except OSError as err:
        if err.errno != errno.ENOENT:
            raise


@contextlib.contextmanager
def editinplace(fname):
    n = os.path.basename(fname)
    d = os.path.dirname(fname)
    fp = tempfile.NamedTemporaryFile(
        prefix='.%s-' % n, suffix='~', dir=d, delete=False
    )
    try:
        yield fp
        fp.close()
        if os.name == 'nt':
            tryunlink(fname)
        os.rename(fp.name, fname)
    finally:
        fp.close()
        tryunlink(fp.name)


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument(
        '--version', action='version', version='Byteify strings 1.0'
    )
    ap.add_argument(
        '-i',
        '--inplace',
        action='store_true',
        default=False,
        help='edit files in place',
    )
    ap.add_argument(
        '--dictiter',
        action='store_true',
        default=False,
        help='rewrite iteritems() and itervalues()',
    ),
    ap.add_argument(
        '--allow-attr-methods',
        action='store_true',
        default=False,
        help='also handle attr*() when they are methods',
    ),
    ap.add_argument(
        '--treat-as-kwargs',
        nargs="+",
        default=[],
        help="ignore kwargs-like objects",
    ),
    ap.add_argument('files', metavar='FILE', nargs='+', help='source file')
    args = ap.parse_args()
    opts = {
        'dictiter': args.dictiter,
        'treat-as-kwargs': set(args.treat_as_kwargs),
        'allow-attr-methods': args.allow_attr_methods,
    }
    for fname in args.files:
        fname = os.path.realpath(fname)
        if args.inplace:
            with editinplace(fname) as fout:
                with open(fname, 'rb') as fin:
                    process(fin, fout, opts)
        else:
            with open(fname, 'rb') as fin:
                fout = sys.stdout.buffer
                process(fin, fout, opts)


if __name__ == '__main__':
    if sys.version_info[0:2] < (3, 7):
        print('This script must be run under Python 3.7+')
        sys.exit(3)
    main()
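
A quick usage sketch (not part of the script itself): the process() helper
above can also be driven with in-memory buffers, assuming the functions above
are defined or imported in the current namespace.

import io

opts = {
    'dictiter': False,
    'treat-as-kwargs': set(),
    'allow-attr-methods': False,
}
fin = io.BytesIO(b"greeting = 'hello'\n")
fout = io.BytesIO()
process(fin, fout, opts)
print(fout.getvalue())  # b"greeting = b'hello'\n" -- the string literal gained a b prefix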