tests/killdaemons.py
author Gregory Szorc <gregory.szorc@gmail.com>
Sat, 18 Jul 2015 10:57:20 -0700
changeset 25823 2406e2baa937
parent 25473 123c99034cb6
child 28942 05cb9c6f310e
permissions -rwxr-xr-x
changegroup: compute seen files as changesets are added (issue4750)

Before this patch, addchangegroup() would walk the changelog and compute the
set of seen files between applying changesets and applying manifests. When
cloning large repositories such as mozilla-central, this consumed a
non-trivial amount of time. On my MBP, this walk takes ~10s. On a dainty EC2
instance, this was measured to take ~125s! On the latter machine, this delay
was enough for the Mercurial server to disconnect the client, thinking it had
timed out, thus causing a clone to abort.

This patch enables the changelog to compute the set of changed files as new
revisions are added. By doing so, we:

* avoid a potentially heavy computation between changelog and manifest
  processing by spreading the computation across all changelog additions
* avoid extra reads from the changelog by operating on the data as it is added

The downside of this is that the add revision callback does result in extra
I/O. Before, we would perform a flush (and subsequent read to construct the
full revision) when new delta chains were created. For changelogs, this is
typically every 2-4 revisions. Using the callback guarantees there will be a
flush after every added revision *and* an open + read of the changelog to
obtain the full revision in order to read the added files. So, this increases
the frequency of these operations by the average chain length. In the future,
the revlog should be smart enough to know how to read revisions that haven't
been flushed yet, thus eliminating this extra I/O.

On my MBP, the total CPU times for an `hg unbundle` with a local
mozilla-central gzip bundle containing 251,934 changesets and 211,065 files
did not have a statistically significant change with this patch, holding
steady around 360s. So, the increased revlog flushing did not have an effect.

With this patch, there is no longer a visible pause between applying changeset
and manifest data. Before, it sure felt like Mercurial was lethargic making
this transition. Now, the transition is nearly instantaneous, giving the
impression that Mercurial is faster. Of course, eliminating this pause means
that the potential for network disconnect due to channel inactivity during the
changelog walk is eliminated as well. And that is the impetus behind this
change.
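
As an illustration of the callback approach described above, here is a
minimal, self-contained sketch (not Mercurial's actual changegroup/revlog
code; the names collect_changed_files and addrevision_cb are hypothetical):

    def collect_changed_files(added_revisions):
        """Accumulate the set of changed files while revisions are added,
        rather than re-walking the changelog afterwards."""
        changedfiles = set()

        def addrevision_cb(files):
            # called once per added changeset revision
            changedfiles.update(files)

        for files in added_revisions:
            addrevision_cb(files)
        return changedfiles

    # Two changesets touching overlapping files yield three distinct files.
    print(sorted(collect_changed_files([['a', 'b'], ['b', 'c']])))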

#!/usr/bin/env python

import os, sys, time, errno, signal

if os.name == 'nt':
    import ctypes

    # Check a Win32 API boolean return value: on failure (ret == 0) raise
    # WinError, unless GetLastError() matches an expected, tolerated error
    # code (e.g. "process not found").
    def _check(ret, expectederr=None):
        if ret == 0:
            winerrno = ctypes.GetLastError()
            if winerrno == expectederr:
                return True
            raise ctypes.WinError(winerrno)

    def kill(pid, logfn, tryhard=True):
        logfn('# Killing daemon process %d' % pid)
        PROCESS_TERMINATE = 1
        PROCESS_QUERY_INFORMATION = 0x400
        SYNCHRONIZE = 0x00100000
        WAIT_OBJECT_0 = 0
        WAIT_TIMEOUT = 258
        handle = ctypes.windll.kernel32.OpenProcess(
                PROCESS_TERMINATE|SYNCHRONIZE|PROCESS_QUERY_INFORMATION,
                False, pid)
        if handle == 0:
            _check(0, 87) # err 87 when process not found
            return # process not found, already finished
        try:
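            # If the process has already exited, the first wait returns
            # WAIT_OBJECT_0 immediately; on WAIT_TIMEOUT it is still running,
            # so terminate it and wait again below for it to go away.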
            r = ctypes.windll.kernel32.WaitForSingleObject(handle, 100)
            if r == WAIT_OBJECT_0:
                pass # terminated, but process handle still available
            elif r == WAIT_TIMEOUT:
                _check(ctypes.windll.kernel32.TerminateProcess(handle, -1))
            else:
                _check(r)

            # TODO?: forcefully kill when timeout
            #        and ?shorter waiting time? when tryhard==True
            r = ctypes.windll.kernel32.WaitForSingleObject(handle, 100)
                                                       # timeout = 100 ms
            if r == WAIT_OBJECT_0:
                pass # process is terminated
            elif r == WAIT_TIMEOUT:
                logfn('# Daemon process %d is stuck' % pid)
            else:
                _check(r) # any error
        except: #re-raises
            ctypes.windll.kernel32.CloseHandle(handle) # no _check, keep error
            raise
        _check(ctypes.windll.kernel32.CloseHandle(handle))

else:
    def kill(pid, logfn, tryhard=True):
        try:
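            # signal 0 does not deliver anything; it only probes whether the
            # process still exists and raises OSError(ESRCH) once it is gone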
            os.kill(pid, 0)
            logfn('# Killing daemon process %d' % pid)
            os.kill(pid, signal.SIGTERM)
            if tryhard:
                for i in range(10):
                    time.sleep(0.05)
                    os.kill(pid, 0)
            else:
                time.sleep(0.1)
                os.kill(pid, 0)
            logfn('# Daemon process %d is stuck - really killing it' % pid)
            os.kill(pid, signal.SIGKILL)
        except OSError as err:
            if err.errno != errno.ESRCH:
                raise

def killdaemons(pidfile, tryhard=True, remove=False, logfn=None):
    if not logfn:
        logfn = lambda s: s
    # Kill off any leftover daemon processes
    try:
        fp = open(pidfile)
        for line in fp:
            try:
                pid = int(line)
            except ValueError:
                continue
            kill(pid, logfn, tryhard)
        fp.close()
        if remove:
            os.unlink(pidfile)
    except IOError:
        pass
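
# Typical programmatic usage (illustrative only; 'daemon.pids' is a
# hypothetical path): kill everything listed in a pid file, one pid per line,
# then remove the file:
#
#   killdaemons('daemon.pids', tryhard=True, remove=True,
#               logfn=lambda s: sys.stderr.write(s + '\n'))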

if __name__ == '__main__':
    if len(sys.argv) > 1:
        path, = sys.argv[1:]
    else:
        path = os.environ["DAEMON_PIDS"]

    killdaemons(path)