view mercurial/pushkey.py @ 38732:be4984261611

merge: mark file gets as not thread safe (issue5933)

In default installs, this has the effect of disabling the thread-based worker on Windows when manifesting files in the working directory.

My measurements have shown that with revlog-based repositories, Mercurial spends a lot of CPU time in revlog code resolving file data. This ends up incurring a lot of context switching across threads and slows down `hg update` operations when going from an empty working directory to the tip of the repo.

On mozilla-unified (246,351 files) on an i7-6700K (4+4 CPUs):

before: 487s wall
after: 360s wall (equivalent to worker.enabled=false)
cpus=2: 379s wall

Even with only 2 threads, the thread pool is still slower.

The introduction of the thread-based worker (02b36e860e0b) states that it resulted in a "~50%" speedup for `hg sparse --enable-profile` and `hg sparse --disable-profile`. This disagrees with my measurement above. I theorize a few reasons for this:

1) Removal of files from the working directory is I/O - not CPU - bound and should benefit from a thread pool (unless I/O is insanely fast and the GIL release is near instantaneous). So tests like `hg sparse --enable-profile` may exercise deletion throughput and aren't good benchmarks for worker tasks that are CPU heavy.

2) The patch was authored by someone at Facebook. The results were likely measured against a repository using remotefilelog. And I believe that revision retrieval during working directory updates with remotefilelog will often use a remote store, thus being I/O and not CPU bound. This probably resulted in an overstated performance gain.

Since there appears to be a need to enable the thread-based worker with some stores, I've made the flagging of file gets as thread safe configurable. I've made it experimental because I don't want to formalize a boolean flag for this option and because this attribute is best captured against the store implementation. But we don't have a proper store API for this yet. I'd rather cross this bridge later.

It is possible there are revlog-based repositories that do benefit from a thread-based worker. I didn't do very comprehensive testing. If there are, we may want to devise a more proper algorithm for whether to use the thread-based worker, including possibly config options to limit the number of threads to use. But until I see evidence that justifies complexity, simplicity wins.

Differential Revision: https://phab.mercurial-scm.org/D3963
author Gregory Szorc <gregory.szorc@gmail.com>
date Wed, 18 Jul 2018 09:49:34 -0700
parents 7b200566e474
children 57875cf423c9
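
For reference, the worker behaviour compared in the changeset description above is driven by the [worker] section of an hgrc. A minimal sketch using the documented worker.enabled and worker.numcpus options (the per-operation thread-safety flag this change introduces is experimental and not shown here):

[worker]
# disable the thread/process-based worker entirely ("worker.enabled=false" above)
enabled = false
# or instead cap the number of workers, roughly the "cpus=2" measurement above
# numcpus = 2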

# pushkey.py - dispatching for pushing and pulling keys
#
# Copyright 2010 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

from . import (
    bookmarks,
    encoding,
    obsolete,
    phases,
)

def _nslist(repo):
    n = {}
    for k in _namespaces:
        n[k] = ""
    if not obsolete.isenabled(repo, obsolete.exchangeopt):
        n.pop('obsolete')
    return n

_namespaces = {"namespaces": (lambda *x: False, _nslist),
               "bookmarks": (bookmarks.pushbookmark, bookmarks.listbookmarks),
               "phases": (phases.pushphase, phases.listphases),
               "obsolete": (obsolete.pushmarker, obsolete.listmarkers),
              }
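
# (illustrative, not part of the original module) listing the special
# 'namespaces' namespace advertises the registered namespaces, each mapped to
# an empty string, e.g.:
#   list(repo, 'namespaces') ->
#     {'namespaces': '', 'bookmarks': '', 'phases': '', 'obsolete': ''}
# with 'obsolete' dropped when obsolescence-marker exchange is disabled.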

def register(namespace, pushkey, listkeys):
    _namespaces[namespace] = (pushkey, listkeys)
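
# (illustrative, not part of the original module) extensions add their own
# namespaces through register(); a hypothetical caller might do:
#   pushkey.register('mynamespace', pushfn, listfn)
# where pushfn(repo, key, old, new) returns a truthy value on success and
# listfn(repo) returns a dict mapping keys to values.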

def _get(namespace):
    return _namespaces.get(namespace, (lambda *x: False, lambda *x: {}))

def push(repo, namespace, key, old, new):
    '''should succeed iff value was old'''
    pk = _get(namespace)[0]
    return pk(repo, key, old, new)

def list(repo, namespace):
    '''return a dict'''
    lk = _get(namespace)[1]
    return lk(repo)
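
# (illustrative, not part of the original module) the wire protocol and
# bundle2 handlers dispatch through these helpers; for example, publishing a
# draft changeset on the server ends up as something like
#   push(repo, 'phases', nodehex, str(phases.draft), str(phases.public))
# and enumerating bookmarks as
#   list(repo, 'bookmarks')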

encode = encoding.fromlocal

decode = encoding.tolocal

def encodekeys(keys):
    """encode the content of a pushkey namespace for exchange over the wire"""
    return '\n'.join(['%s\t%s' % (encode(k), encode(v)) for k, v in keys])

def decodekeys(data):
    """decode the content of a pushkey namespace from exchange over the wire"""
    result = {}
    for l in data.splitlines():
        k, v = l.split('\t')
        result[decode(k)] = decode(v)
    return result
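
The tab/newline wire format produced by encodekeys and consumed by decodekeys above can be illustrated with a small standalone sketch. It deliberately omits the encoding.fromlocal/tolocal conversion the real module applies, and the _sketch names are hypothetical, so treat it as an approximation rather than the module's behaviour:

def _encodekeys_sketch(keys):
    # one "key<TAB>value" pair per line, as encodekeys does above
    return '\n'.join('%s\t%s' % (k, v) for k, v in keys)

def _decodekeys_sketch(data):
    # split into lines, then split each line on the tab, as decodekeys does above
    result = {}
    for line in data.splitlines():
        k, v = line.split('\t')
        result[k] = v
    return result

wire = _encodekeys_sketch([('stable', '1234abcd'), ('default', 'feedface')])
assert _decodekeys_sketch(wire) == {'stable': '1234abcd', 'default': 'feedface'}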