view tests/check-perf-code.py @ 38732:be4984261611

merge: mark file gets as not thread safe (issue5933)

In default installs, this has the effect of disabling the thread-based worker
on Windows when manifesting files in the working directory.

My measurements have shown that with revlog-based repositories, Mercurial
spends a lot of CPU time in revlog code resolving file data. This ends up
incurring a lot of context switching across threads and slows down
`hg update` operations when going from an empty working directory to the tip
of the repo.

On mozilla-unified (246,351 files) on an i7-6700K (4+4 CPUs):

before: 487s wall
after:  360s wall (equivalent to worker.enabled=false)
cpus=2: 379s wall

Even with only 2 threads, the thread pool is still slower.

The introduction of the thread-based worker (02b36e860e0b) states that it
resulted in a "~50%" speedup for `hg sparse --enable-profile` and
`hg sparse --disable-profile`. This disagrees with my measurement above. I
theorize a few reasons for this:

1) Removal of files from the working directory is I/O - not CPU - bound and
   should benefit from a thread pool (unless I/O is insanely fast and the GIL
   release is near instantaneous). So tests like `hg sparse --enable-profile`
   may exercise deletion throughput and aren't good benchmarks for worker
   tasks that are CPU heavy.

2) The patch was authored by someone at Facebook, and the results were likely
   measured against a repository using remotefilelog. Revision retrieval
   during working directory updates with remotefilelog will often use a
   remote store and is therefore I/O - not CPU - bound. This probably
   resulted in an overstated performance gain.

Since there appears to be a need to enable the thread-based worker with some
stores, I've made the flagging of file gets as thread safe configurable. I've
made it experimental because I don't want to formalize a boolean flag for
this option and because this attribute is best captured against the store
implementation. But we don't have a proper store API for this yet, and I'd
rather cross this bridge later.

It is possible that there are revlog-based repositories that do benefit from
a thread-based worker; I didn't do very comprehensive testing. If there are,
we may want to devise a better algorithm for deciding whether to use the
thread-based worker, possibly including config options to limit the number of
threads used. But until I see evidence that justifies the complexity,
simplicity wins.

Differential Revision: https://phab.mercurial-scm.org/D3963
author Gregory Szorc <gregory.szorc@gmail.com>
date Wed, 18 Jul 2018 09:49:34 -0700
parents bd872f64a8ba
children eb8a8af4cbd0

#!/usr/bin/env python
#
# check-perf-code - (historical) portability checker for contrib/perf.py

from __future__ import absolute_import

import os
import sys

# write static check patterns here
perfpypats = [
  [
    (r'(branchmap|repoview)\.subsettable',
     "use getbranchmapsubsettable() for early Mercurial"),
    (r'\.(vfs|svfs|opener|sopener)',
     "use getvfs()/getsvfs() for early Mercurial"),
    (r'ui\.configint',
     "use getint() instead of ui.configint() for early Mercurial"),
  ],
  # warnings
  [
  ]
]
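
# Illustration only: a line in contrib/perf.py such as
#
#     timer = ui.configint(b"perf", b"stub")
#
# would be flagged by the ui.configint pattern above with the message
# "use getint() instead of ui.configint() for early Mercurial". The variable
# name and arguments here are hypothetical.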

def modulewhitelist(names):
    replacement = [('.py', ''), ('.c', ''), # trim suffix
                   ('mercurial%s' % (os.sep), ''), # trim "mercurial/" path
                  ]
    ignored = {'__init__'}
    modules = {}

    # convert from file name to module name, and count # of appearances
    for name in names:
        name = name.strip()
        for old, new in replacement:
            name = name.replace(old, new)
        if name not in ignored:
            modules[name] = modules.get(name, 0) + 1

    # collect module names that appear multiple times
    whitelist = []
    for name, count in modules.items():
        if count > 1:
            whitelist.append(name)

    return whitelist
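
# Illustration only (hypothetical "hg files" output, POSIX path separators
# assumed): a module must appear more than once across the listed revisions
# to make the whitelist, and "__init__" is always ignored.
#
#   >>> names = ['mercurial/node.py\n', 'mercurial/util.py\n',
#   ...          'mercurial/node.py\n', 'mercurial/util.py\n',
#   ...          'mercurial/__init__.py\n', 'mercurial/cext/parsers.c\n']
#   >>> sorted(modulewhitelist(names))
#   ['node', 'util']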

if __name__ == "__main__":
    # in this case, it is assumed that result of "hg files" at
    # multiple revisions is given via stdin
    whitelist = modulewhitelist(sys.stdin)
    assert whitelist, "module whitelist is empty"

    # build up module whitelist check from file names given at runtime
    perfpypats[0].append(
        # for simplicity, this pattern assumes that modules are imported
        # from the "mercurial" package in the style shown below
        #
        #    from mercurial import (
        #        foo,
        #        bar,
        #        baz
        #    )
        ((r'from mercurial import [(][a-z0-9, \n#]*\n(?! *%s,|^[ #]*\n|[)])'
          % ',| *'.join(whitelist)),
         "import newer module separately in try clause for early Mercurial"
         ))
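
    # Illustration only (hypothetical whitelist): with a whitelist of
    # ['node', 'util'], the assembled pattern flags an import block that
    # pulls in a module missing from the whitelist and leaves a fully
    # whitelisted one alone:
    #
    #   >>> import re
    #   >>> pat = (r'from mercurial import [(][a-z0-9, \n#]*\n'
    #   ...        r'(?! *%s,|^[ #]*\n|[)])' % ',| *'.join(['node', 'util']))
    #   >>> bool(re.search(pat, 'from mercurial import (\n    node,\n)\n'))
    #   False
    #   >>> bool(re.search(pat,
    #   ...                'from mercurial import (\n    registrar,\n)\n'))
    #   True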

    # import contrib/check-code.py as checkcode
    assert 'RUNTESTDIR' in os.environ, "use check-perf-code.py in *.t script"
    contribpath = os.path.join(os.environ['RUNTESTDIR'], '..', 'contrib')
    sys.path.insert(0, contribpath)
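    # a plain "import" statement cannot spell the hyphenated module name
    # "check-code", hence __import__()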
    checkcode = __import__('check-code')

    # register perf.py specific entry with "checks" in check-code.py
    checkcode.checks.append(('perf.py', r'contrib/perf.py$', '',
                             checkcode.pyfilters, perfpypats))

    sys.exit(checkcode.main())