view mercurial/logexchange.py @ 38732:be4984261611

merge: mark file gets as not thread safe (issue5933)

In default installs, this has the effect of disabling the thread-based worker
on Windows when manifesting files in the working directory.

My measurements have shown that with revlog-based repositories, Mercurial
spends a lot of CPU time in revlog code resolving file data. This ends up
incurring a lot of context switching across threads and slows down `hg update`
operations when going from an empty working directory to the tip of the repo.

On mozilla-unified (246,351 files) on an i7-6700K (4+4 CPUs):

before: 487s wall
after:  360s wall (equivalent to worker.enabled=false)
cpus=2: 379s wall

Even with only 2 threads, the thread pool is still slower.

The introduction of the thread-based worker (02b36e860e0b) states that it
resulted in a "~50%" speedup for `hg sparse --enable-profile` and
`hg sparse --disable-profile`. This disagrees with my measurement above.
I theorize a few reasons for this:

1) Removal of files from the working directory is I/O - not CPU - bound and
should benefit from a thread pool (unless I/O is insanely fast and the GIL
release is near instantaneous). So tests like `hg sparse --enable-profile`
may exercise deletion throughput and aren't good benchmarks for worker tasks
that are CPU heavy.

2) The patch was authored by someone at Facebook. The results were likely
measured against a repository using remotefilelog. And I believe that revision
retrieval during working directory updates with remotefilelog will often use a
remote store, thus being I/O and not CPU bound. This probably resulted in an
overstated performance gain.

Since there appears to be a need to enable the thread-based worker with some
stores, I've made the flagging of file gets as thread safe configurable. I've
made it experimental because I don't want to formalize a boolean flag for this
option and because this attribute is best captured against the store
implementation. But we don't have a proper store API for this yet. I'd rather
cross this bridge later.

It is possible there are revlog-based repositories that do benefit from a
thread-based worker. I didn't do very comprehensive testing. If there are, we
may want to devise a more proper algorithm for whether to use the thread-based
worker, including possibly config options to limit the number of threads to
use. But until I see evidence that justifies complexity, simplicity wins.

Differential Revision: https://phab.mercurial-scm.org/D3963
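(A minimal sketch of the configuration described above, assuming the
experimental option is named worker.wdir-get-thread-safe; the name is an
assumption inferred from this description, not quoted from the patch:

    [experimental]
    worker.wdir-get-thread-safe = true

worker.enabled=false, referenced in the measurements, disables the worker
entirely.)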
author Gregory Szorc <gregory.szorc@gmail.com>
date Wed, 18 Jul 2018 09:49:34 -0700
parents bbdc1bc56e58
children 94c0421d67a0
line source

# logexchange.py
#
# Copyright 2017 Augie Fackler <raf@durin42.com>
# Copyright 2017 Sean Farley <sean@farley.io>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

from .node import hex

from . import (
    util,
    vfs as vfsmod,
)

# directory name in .hg/ in which remotenames files will be present
remotenamedir = 'logexchange'

def readremotenamefile(repo, filename):
    """
    reads a file from the .hg/logexchange/ directory and yields its content
    filename: the file to be read
    yields a tuple (node, remotepath, name) for each entry
    """

    vfs = vfsmod.vfs(repo.vfs.join(remotenamedir))
    if not vfs.exists(filename):
        return
    f = vfs(filename)
    lineno = 0
    for line in f:
        line = line.strip()
        if not line:
            continue
        # the first non-empty line contains the storage version number;
        # skip it rather than trying to parse it as an entry
        if lineno == 0:
            lineno += 1
            continue
        try:
            node, remote, rname = line.split('\0')
            yield node, remote, rname
        except ValueError:
            # ignore malformed lines
            pass

    f.close()

def readremotenames(repo):
    """
    reads the details about the remotenames stored in .hg/logexchange/ and
    yields a tuple (node, remotepath, name) for each entry. It does not
    indicate whether a yielded entry is a branch or a bookmark. To get that
    information, call the respective functions.
    """

    for bmentry in readremotenamefile(repo, 'bookmarks'):
        yield bmentry
    for branchentry in readremotenamefile(repo, 'branches'):
        yield branchentry

def writeremotenamefile(repo, remotepath, names, nametype):
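    """
    write the remote names of type `nametype` ('bookmarks' or 'branches')
    recorded for `remotepath` into the corresponding file under
    .hg/logexchange/, preserving entries previously saved for other remotes.

    `names` maps a name to a node ('bookmarks') or to a list of nodes
    ('branches').
    """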
    vfs = vfsmod.vfs(repo.vfs.join(remotenamedir))
    f = vfs(nametype, 'w', atomictemp=True)
    # write the storage version info at the top of the file
    # version '0' represents the very initial version of the storage format
    f.write('0\n\n')

    olddata = set(readremotenamefile(repo, nametype))
    # re-save the entries recorded for remotes other than this one
    for node, oldpath, rname in sorted(olddata):
        if oldpath != remotepath:
            f.write('%s\0%s\0%s\n' % (node, oldpath, rname))

    for name, node in sorted(names.iteritems()):
        if nametype == "branches":
            for n in node:
                f.write('%s\0%s\0%s\n' % (n, remotepath, name))
        elif nametype == "bookmarks":
            if node:
                f.write('%s\0%s\0%s\n' % (node, remotepath, name))

    f.close()

def saveremotenames(repo, remotepath, branches=None, bookmarks=None):
    """
    save remotenames, i.e. remote bookmarks and remote branches, in their
    respective files under the ".hg/logexchange/" directory.
    """
    with repo.wlock():
        if bookmarks:
            writeremotenamefile(repo, remotepath, bookmarks, 'bookmarks')
        if branches:
            writeremotenamefile(repo, remotepath, branches, 'branches')

def activepath(repo, remote):
    """returns the remote path, using a name from [paths] if one matches"""
    # is the remote a local peer?
    local = remote.local()

    # determine the remote path from the repo, if possible; else just
    # use the string given to us
    rpath = remote
    if local:
        rpath = remote._repo.root
    elif not isinstance(remote, bytes):
        rpath = remote._url

    # represent the remote path with a user-defined path name if one exists
    for path, url in repo.ui.configitems('paths'):
        # remove auth info from user defined url
        noauthurl = util.removeauth(url)
        if url == rpath or noauthurl == rpath:
            rpath = path
            break

    return rpath

def pullremotenames(localrepo, remoterepo):
    """
    pulls bookmark and branch information from the remote repo during a
    pull or clone operation.
    localrepo is our local repository
    remoterepo is the peer instance
    """
    remotepath = activepath(localrepo, remoterepo)

    with remoterepo.commandexecutor() as e:
        bookmarks = e.callcommand('listkeys', {
            'namespace': 'bookmarks',
        }).result()

    # on a push, we don't want to keep obsolete heads since
    # they won't show up as heads on the next pull, so we
    # remove them here otherwise we would require the user
    # to issue a pull to refresh the storage
    bmap = {}
    repo = localrepo.unfiltered()

    with remoterepo.commandexecutor() as e:
        branchmap = e.callcommand('branchmap', {}).result()

    for branch, nodes in branchmap.iteritems():
        bmap[branch] = []
        for node in nodes:
            if node in repo and not repo[node].obsolete():
                bmap[branch].append(hex(node))

    saveremotenames(localrepo, remotepath, bmap, bookmarks)