view tests/generate-working-copy-states.py @ 38732:be4984261611

merge: mark file gets as not thread safe (issue5933)

In default installs, this has the effect of disabling the thread-based
worker on Windows when manifesting files in the working directory.

My measurements have shown that with revlog-based repositories, Mercurial
spends a lot of CPU time in revlog code resolving file data. This ends up
incurring a lot of context switching across threads and slows down
`hg update` operations when going from an empty working directory to the
tip of the repo.

On mozilla-unified (246,351 files) on an i7-6700K (4+4 CPUs):

before: 487s wall
after:  360s wall (equivalent to worker.enabled=false)
cpus=2: 379s wall

Even with only 2 threads, the thread pool is still slower.

The introduction of the thread-based worker (02b36e860e0b) states that it
resulted in a "~50%" speedup for `hg sparse --enable-profile` and
`hg sparse --disable-profile`. This disagrees with my measurement above.
I theorize a few reasons for this:

1) Removal of files from the working directory is I/O - not CPU - bound
and should benefit from a thread pool (unless I/O is insanely fast and the
GIL release is near instantaneous). So tests like
`hg sparse --enable-profile` may exercise deletion throughput and aren't
good benchmarks for worker tasks that are CPU heavy.

2) The patch was authored by someone at Facebook. The results were likely
measured against a repository using remotefilelog. And I believe that
revision retrieval during working directory updates with remotefilelog
will often use a remote store, thus being I/O and not CPU bound. This
probably resulted in an overstated performance gain.

Since there appears to be a need to enable the thread-based worker with
some stores, I've made the flagging of file gets as thread safe
configurable. I've made it experimental because I don't want to formalize
a boolean flag for this option and because this attribute is best captured
against the store implementation. But we don't have a proper store API for
this yet. I'd rather cross this bridge later.

It is possible there are revlog-based repositories that do benefit from a
thread-based worker. I didn't do very comprehensive testing. If there are,
we may want to devise a more proper algorithm for whether to use the
thread-based worker, including possibly config options to limit the number
of threads to use. But until I see evidence that justifies complexity,
simplicity wins.

Differential Revision: https://phab.mercurial-scm.org/D3963
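A minimal hgrc sketch of the two knobs involved (worker.enabled is quoted
directly in the message above; the experimental option name,
worker.wdir-get-thread-safe, is taken from this changeset's diff and should
be treated as an assumption here, not as documented API):

  [worker]
  # Disable the worker outright; per the message above this matches the
  # "after: 360s" measurement.
  enabled = false

  [experimental]
  # Alternatively, with this change applied, declare file gets thread safe
  # again, e.g. for remote stores that are I/O rather than CPU bound.
  worker.wdir-get-thread-safe = true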
author Gregory Szorc <gregory.szorc@gmail.com>
date Wed, 18 Jul 2018 09:49:34 -0700
parents ed46d48453e8
children 2372284d9457

# Helper script used for generating history and working copy files and content.
# The file's name corresponds to its history. The number of changesets can
# be specified on the command line. With 2 changesets, files with names like
# content1_content2_content1-untracked are generated. The first two filename
# segments describe the contents in the two changesets. The third segment
# ("content1-untracked") describes the state in the working copy, i.e.
# the file has content "content1" and is untracked (since it was previously
# tracked, it has been forgotten).
#
# This script generates the filenames and their content, but it's up to the
# caller to tell hg about the state.
#
# There are two subcommands:
#   filelist <numchangesets>
#   state <numchangesets> (<changeset>|wc)
#
# Typical usage:
#
# $ python $TESTDIR/generate-working-copy-states.py state 2 1
# $ hg addremove --similarity 0
# $ hg commit -m 'first'
#
# $ python $TESTDIR/generate-working-copy-states.py state 2 2
# $ hg addremove --similarity 0
# $ hg commit -m 'second'
#
# $ python $TESTDIR/generate-working-copy-states.py state 2 wc
# $ hg addremove --similarity 0
# $ hg forget *_*_*-untracked
# $ rm *_*_missing-*
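#
# For illustration (not part of the typical usage above), the "filelist"
# subcommand prints one generated name per line in sorted order; with
# 2 changesets the output runs from the first name shown below to the last:
#
# $ python $TESTDIR/generate-working-copy-states.py filelist 2
# content1_content1_content1-tracked
# content1_content1_content1-untracked
# ...
# missing_missing_missing-untracked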

from __future__ import absolute_import, print_function

import os
import sys

# Generates pairs of (filename, contents), where 'contents' is a list
# describing the file's content at each revision (or in the working copy).
# At each revision, it is either None or the file's actual content. When not
# None, it may be either new content or the same content as in an earlier
# revision, so all of (modified,clean,added,removed) can be tested.
def generatestates(maxchangesets, parentcontents):
    depth = len(parentcontents)
    if depth == maxchangesets + 1:
        for tracked in (b'untracked', b'tracked'):
            filename = b"_".join([(content is None and b'missing' or content)
                                for content in parentcontents]) + b"-" + tracked
            yield (filename, parentcontents)
    else:
        for content in ({None, b'content' + (b"%d" % (depth + 1))} |
                      set(parentcontents)):
            for combination in generatestates(maxchangesets,
                                              parentcontents + [content]):
                yield combination
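
# For example (illustrative only, not exercised by the tests): with
# maxchangesets=1 the generator yields ten (filename, contents) pairs,
# among them:
#   (b'content1_content1-untracked', [b'content1', b'content1'])
#   (b'missing_content2-tracked', [None, b'content2'])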

# retrieve the command line arguments
target = sys.argv[1]
maxchangesets = int(sys.argv[2])
if target == 'state':
    depth = sys.argv[3]

# sort to make sure we have stable output
combinations = sorted(generatestates(maxchangesets, []))

# compute file content
content = []
for filename, states in combinations:
    if target == 'filelist':
        print(filename.decode('ascii'))
    elif target == 'state':
        if depth == 'wc':
            # Make sure there is content so the file gets written and can be
            # tracked. It will be deleted outside of this script.
            content.append((filename, states[maxchangesets] or b'TOBEDELETED'))
        else:
            content.append((filename, states[int(depth) - 1]))
    else:
        print("unknown target:", target, file=sys.stderr)
        sys.exit(1)
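
# For example (illustrative): with "state 2 1", the file
# content1_content2_content1-untracked is assigned b'content1' (its content
# in the first changeset). With "state 2 wc", content1_content2_missing-tracked
# is assigned the b'TOBEDELETED' placeholder, which the typical usage above
# then removes with "rm *_*_missing-*".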

# write actual content
for filename, data in content:
    if data is not None:
        with open(filename, 'wb') as f:
            f.write(data + b'\n')
    elif os.path.exists(filename):
        os.remove(filename)