view tests/killdaemons.py @ 51681:522b4d729e89

mmap: populate the mapping by default

Without pre-population, accessing all data through an mmap can result in many page faults, reducing performance significantly. If the mmap is pre-populated, performance can no longer get slower than a full read. (See benchmark numbers below.)

In some cases where very little data is read, pre-populating can be overkill and slower than populating on access (through page faults). That behavior can therefore be controlled by the caller when it can pre-determine the best approach. (See benchmark numbers below.)

In addition, testing with population done in a secondary thread yielded great results, combining the best of each approach. This might be implemented in later changesets.

In all cases, using mmap has a great effect on memory usage when many processes run in parallel on the same machine.

### Benchmarks

# What did I run

A couple of months back I ran a large benchmark campaign to assess the impact of various approaches to using mmap with the revlog (and other files). It highlighted a few benchmarks that capture the impact of the changes well. So to validate this change I checked the following:

- log command displaying various revisions (reads the changelog index)
- log command displaying the patch of the listed revisions (reads the changelog index, the manifest index and a few file indexes)
- unbundling a few revisions (reads and writes the changelog, manifest and a few file indexes, and walks the graph to update some caches)
- pushing a few revisions (reads and writes the changelog, manifest and a few file indexes, walks the graph to update some caches, and performs various accesses locally and remotely during discovery)

Benchmarks were run using the default module policy (c+py) and the rust one. No significant difference was found between the two implementations, so results are presented using the default policy (unless otherwise specified).

I ran them on a few repositories:

- mercurial: a "public changeset only" copy of mercurial from 2018-08-01, using zstd compression and sparse-revlog
- pypy: a copy of pypy from 2018-08-01, using zstd compression and sparse-revlog
- netbeans: a copy of netbeans from 2018-08-01, using zstd compression and sparse-revlog
- mozilla-try: a copy of mozilla-try from 2019-02-18, using zstd compression and sparse-revlog
- mozilla-try persistent-nodemap: same as the above but with a persistent nodemap; used for the log --patch benchmark only

# Results

For the smaller repositories (mercurial, pypy), the impact of mmap is almost imperceptible, other costs dominating the operation. The impact of pre-populating is indiscernible in the benchmarks we ran.

For larger repositories, the benchmarks support the explanation given above. On netbeans, log can be about 1% faster without pre-population (for a difference < 100ms), but unbundle becomes a bit slower, even for small bundles.

### data-env-vars.name = netbeans-2018-08-01-zstd-sparse-revlog
# benchmark.name = hg.command.unbundle
# benchmark.variants.issue6528 = disabled
# benchmark.variants.reuse-external-delta-parent = yes
# benchmark.variants.revs = any-1-extra-rev
# benchmark.variants.source = unbundle
# benchmark.variants.verbosity = quiet
with-populate: 0.240157
no-populate:   0.265087 (+10.38%, +0.02)

# benchmark.variants.revs = any-100-extra-rev
with-populate: 1.459518
no-populate:   1.481290 (+1.49%, +0.02)

## benchmark.name = hg.command.push
# benchmark.variants.explicit-rev = none
# benchmark.variants.issue6528 = disabled
# benchmark.variants.protocol = ssh
# benchmark.variants.reuse-external-delta-parent = yes
# benchmark.variants.revs = any-1-extra-rev
with-populate: 0.771919
no-populate:   0.792025 (+2.60%, +0.02)

# benchmark.variants.revs = any-100-extra-rev
with-populate: 1.459518
no-populate:   1.481290 (+1.49%, +0.02)

For mozilla-try, the "slow down" from pre-populating on small `hg log` invocations is more visible, but still small in absolute time (using the rust flavor so that the persistent-nodemap numbers are relevant).

### data-env-vars.name = mozilla-try-2019-02-18-ds2-pnm
# benchmark.name = hg.command.log
# bin-env-vars.hg.flavor = rust
# benchmark.variants.patch = yes
# benchmark.variants.limit-rev = 1
with-populate: 0.237813
no-populate:   0.229452 (-3.52%, -0.01)

# benchmark.variants.limit-rev = 10
# benchmark.variants.patch = yes
with-populate: 1.213578
no-populate:   1.205189

### data-env-vars.name = mozilla-try-2019-02-18-zstd-sparse-revlog
# benchmark.variants.limit-rev = 1000
# benchmark.variants.patch = no
# benchmark.variants.rev = tip
with-populate: 0.198607
no-populate:   0.195038 (-1.80%, -0.00)

However, pre-populating provides a significant boost on more complex operations like unbundle or push:

### data-env-vars.name = mozilla-try-2019-02-18-zstd-sparse-revlog
# benchmark.name = hg.command.push
# benchmark.variants.explicit-rev = none
# benchmark.variants.issue6528 = disabled
# benchmark.variants.protocol = ssh
# benchmark.variants.reuse-external-delta-parent = yes
# benchmark.variants.revs = any-1-extra-rev
with-populate: 4.798632
no-populate:   4.953295 (+3.22%, +0.15)

# benchmark.variants.revs = any-100-extra-rev
with-populate: 4.903618
no-populate:   5.014963 (+2.27%, +0.11)

## benchmark.name = hg.command.unbundle
# benchmark.variants.revs = any-1-extra-rev
with-populate: 1.423411
no-populate:   1.585365 (+11.38%, +0.16)

# benchmark.variants.revs = any-100-extra-rev
with-populate: 1.537909
no-populate:   1.688489 (+9.79%, +0.15)
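To illustrate the trade-off described above, here is a minimal sketch (not the code from this changeset) of mapping a revlog-style file with and without pre-population. It assumes Linux and Python 3.10+, where the mmap module exposes MAP_POPULATE; the map_index helper and the file path are purely illustrative.

    # Minimal sketch of the populate / no-populate trade-off discussed above.
    # Assumes Linux and Python >= 3.10 (mmap.MAP_POPULATE); falls back to a
    # plain mapping elsewhere.
    import mmap
    import os

    def map_index(path, populate=True):
        """Map `path` read-only, optionally pre-faulting every page."""
        fd = os.open(path, os.O_RDONLY)
        try:
            flags = mmap.MAP_SHARED
            if populate and hasattr(mmap, 'MAP_POPULATE'):
                # Ask the kernel to fault the whole mapping in up front, so
                # later sequential reads never stall on page faults.
                flags |= mmap.MAP_POPULATE
            # length=0 maps the whole file
            return mmap.mmap(fd, 0, flags=flags, prot=mmap.PROT_READ)
        finally:
            os.close(fd)

    # Reading everything (e.g. walking a whole index) favors populate=True;
    # touching only a handful of entries favors populate=False.
    with map_index('.hg/store/00changelog.i') as data:
        first_entry = data[:64]  # revlog v1 index entries are 64 bytes

This mirrors the behavior change of the patch at the system-call level: pre-population trades a slightly more expensive mapping step for the guarantee that the operation is never slower than a full read, which is why it pays off on unbundle and push but not on a one-revision log.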
author Pierre-Yves David <pierre-yves.david@octobus.net>
date Thu, 11 Apr 2024 00:02:07 +0200
parents d54b213c4380
children 493034cc3265
line source

#!/usr/bin/env python3

import os
import signal
import sys
import time

if os.name == 'nt':
    import ctypes

    _BOOL = ctypes.c_long
    _DWORD = ctypes.c_ulong
    _UINT = ctypes.c_uint
    _HANDLE = ctypes.c_void_p

    ctypes.windll.kernel32.CloseHandle.argtypes = [_HANDLE]
    ctypes.windll.kernel32.CloseHandle.restype = _BOOL

    ctypes.windll.kernel32.GetLastError.argtypes = []
    ctypes.windll.kernel32.GetLastError.restype = _DWORD

    ctypes.windll.kernel32.OpenProcess.argtypes = [_DWORD, _BOOL, _DWORD]
    ctypes.windll.kernel32.OpenProcess.restype = _HANDLE

    ctypes.windll.kernel32.TerminateProcess.argtypes = [_HANDLE, _UINT]
    ctypes.windll.kernel32.TerminateProcess.restype = _BOOL

    ctypes.windll.kernel32.WaitForSingleObject.argtypes = [_HANDLE, _DWORD]
    ctypes.windll.kernel32.WaitForSingleObject.restype = _DWORD

    def _check(ret, expectederr=None):
        if ret == 0:
            winerrno = ctypes.GetLastError()
            if winerrno == expectederr:
                return True
            raise ctypes.WinError(winerrno)

    def kill(pid, logfn, tryhard=True):
        logfn('# Killing daemon process %d' % pid)
        PROCESS_TERMINATE = 1
        PROCESS_QUERY_INFORMATION = 0x400
        SYNCHRONIZE = 0x00100000
        WAIT_OBJECT_0 = 0
        WAIT_TIMEOUT = 258
        WAIT_FAILED = _DWORD(0xFFFFFFFF).value
        handle = ctypes.windll.kernel32.OpenProcess(
            PROCESS_TERMINATE | SYNCHRONIZE | PROCESS_QUERY_INFORMATION,
            False,
            pid,
        )
        if handle is None:
            _check(0, 87)  # err 87 when process not found
            return  # process not found, already finished
        try:
            r = ctypes.windll.kernel32.WaitForSingleObject(handle, 100)
            if r == WAIT_OBJECT_0:
                pass  # terminated, but process handle still available
            elif r == WAIT_TIMEOUT:
                _check(ctypes.windll.kernel32.TerminateProcess(handle, -1))
            elif r == WAIT_FAILED:
                _check(0)  # err stored in GetLastError()

            # TODO?: forcefully kill when timeout
            #        and ?shorter waiting time? when tryhard==True
            r = ctypes.windll.kernel32.WaitForSingleObject(handle, 100)
            # timeout = 100 ms
            if r == WAIT_OBJECT_0:
                pass  # process is terminated
            elif r == WAIT_TIMEOUT:
                logfn('# Daemon process %d is stuck' % pid)
            elif r == WAIT_FAILED:
                _check(0)  # err stored in GetLastError()
        except:  # re-raises
            ctypes.windll.kernel32.CloseHandle(handle)  # no _check, keep error
            raise
        _check(ctypes.windll.kernel32.CloseHandle(handle))


else:

    def kill(pid, logfn, tryhard=True):
        try:
            os.kill(pid, 0)
            logfn('# Killing daemon process %d' % pid)
            os.kill(pid, signal.SIGTERM)
            if tryhard:
                for i in range(10):
                    time.sleep(0.05)
                    os.kill(pid, 0)
            else:
                time.sleep(0.1)
                os.kill(pid, 0)
            logfn('# Daemon process %d is stuck - really killing it' % pid)
            os.kill(pid, signal.SIGKILL)
        except ProcessLookupError:
            pass


def killdaemons(pidfile, tryhard=True, remove=False, logfn=None):
    if not logfn:
        logfn = lambda s: s
    # Kill off any leftover daemon processes
    try:
        pids = []
        with open(pidfile) as fp:
            for line in fp:
                try:
                    pid = int(line)
                    if pid <= 0:
                        raise ValueError
                except ValueError:
                    logfn(
                        '# Not killing daemon process %s - invalid pid'
                        % line.rstrip()
                    )
                    continue
                pids.append(pid)
        for pid in pids:
            kill(pid, logfn, tryhard)
        if remove:
            os.unlink(pidfile)
    except IOError:
        pass


if __name__ == '__main__':
    if len(sys.argv) > 1:
        (path,) = sys.argv[1:]
    else:
        path = os.environ["DAEMON_PIDS"]

    killdaemons(path, remove=True)