view tests/killdaemons.py @ 29021:92d37fb3f1aa stable

verify: don't init subrepo when missing one is referenced (issue5128) (API)

Initializing a subrepo when one doesn't exist is the right thing to do when the parent is being updated, but in few other cases. Unfortunately, there isn't enough context in the subrepo module to distinguish this case. This same issue can be caused with other subrepo aware commands, so there is a general issue here beyond the scope of this fix.

A simpler attempt I tried was to add an '_updating' boolean to localrepo, and set/clear it around the call to mergemod.update() in hg.updaterepo(). That mostly worked, but doesn't handle the case where archive will clone the subrepo if it is missing. (I vaguely recall that there may be other commands that will clone if needed like this, but certainly not all do. It seems both handy, and a bit surprising for what should be a read only operation. It might be nice if all commands did this consistently, but we probably need Angel's subrepo caching first, to not make a mess of the working directory.)

I originally handled 'Exception' in order to pick up the Aborts raised in subrepo.state(), but this turns out to be unnecessary because that is called once and cached by ctx.sub() when iterating the subrepos.

It was suggested in the bug discussion to skip looking at the subrepo links unless -S is specified. I don't really like that idea because missing a subrepo or (less likely, but worse) a corrupt .hgsubstate is a problem of the parent repo when checking out a revision. The -S option seems like a better fit for functionality that would recurse into each subrepo and do a full verification.

Ultimately, the default value for 'allowcreate' should probably be flipped, but since the default behavior was to allow creation, this is less risky for now.
author Matt Harbison <matt_harbison@yahoo.com>
date Wed, 27 Apr 2016 22:45:52 -0400
parents 05cb9c6f310e
children 4ddfb730789d

#!/usr/bin/env python

from __future__ import absolute_import
import errno
import os
import signal
import sys
import time

if os.name == 'nt':
    import ctypes

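    # Check a Win32 API return value: a zero return signals failure, in which
    # case GetLastError() is compared against an optional expected error code
    # (returning True on a match) and any other error is raised as WinError.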
    def _check(ret, expectederr=None):
        if ret == 0:
            winerrno = ctypes.GetLastError()
            if winerrno == expectederr:
                return True
            raise ctypes.WinError(winerrno)

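    # Terminate a daemon process via the Win32 API: open a handle with
    # terminate/query/synchronize rights, wait briefly in case the process
    # has already exited, call TerminateProcess() if it is still running,
    # then wait again to confirm that it actually went away.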
    def kill(pid, logfn, tryhard=True):
        logfn('# Killing daemon process %d' % pid)
        PROCESS_TERMINATE = 1
        PROCESS_QUERY_INFORMATION = 0x400
        SYNCHRONIZE = 0x00100000
        WAIT_OBJECT_0 = 0
        WAIT_TIMEOUT = 258
        handle = ctypes.windll.kernel32.OpenProcess(
                PROCESS_TERMINATE|SYNCHRONIZE|PROCESS_QUERY_INFORMATION,
                False, pid)
        if handle == 0:
            _check(0, 87) # err 87 when process not found
            return # process not found, already finished
        try:
            r = ctypes.windll.kernel32.WaitForSingleObject(handle, 100)
            if r == WAIT_OBJECT_0:
                pass # terminated, but process handle still available
            elif r == WAIT_TIMEOUT:
                _check(ctypes.windll.kernel32.TerminateProcess(handle, -1))
            else:
                _check(r)

            # TODO?: forcefully kill when timeout
            #        and ?shorter waiting time? when tryhard==True
            # timeout = 100 ms
            r = ctypes.windll.kernel32.WaitForSingleObject(handle, 100)
            if r == WAIT_OBJECT_0:
                pass # process is terminated
            elif r == WAIT_TIMEOUT:
                logfn('# Daemon process %d is stuck' % pid)
            else:
                _check(r) # any error
        except: # re-raises
            ctypes.windll.kernel32.CloseHandle(handle) # no _check, keep error
            raise
        _check(ctypes.windll.kernel32.CloseHandle(handle))

else:
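    # POSIX: probe the process with signal 0, send SIGTERM, then poll for a
    # short while; if the process is still alive afterwards, escalate to
    # SIGKILL. A process that has already exited (ESRCH) is silently ignored.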
    def kill(pid, logfn, tryhard=True):
        try:
            os.kill(pid, 0)
            logfn('# Killing daemon process %d' % pid)
            os.kill(pid, signal.SIGTERM)
            if tryhard:
                for i in range(10):
                    time.sleep(0.05)
                    os.kill(pid, 0)
            else:
                time.sleep(0.1)
                os.kill(pid, 0)
            logfn('# Daemon process %d is stuck - really killing it' % pid)
            os.kill(pid, signal.SIGKILL)
        except OSError as err:
            if err.errno != errno.ESRCH:
                raise

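# Read one pid per line from pidfile and kill each listed daemon; lines that
# are not valid integers are skipped. The pidfile is removed afterwards when
# remove=True, and a missing or unreadable pidfile is silently ignored.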
def killdaemons(pidfile, tryhard=True, remove=False, logfn=None):
    if not logfn:
        logfn = lambda s: s
    # Kill off any leftover daemon processes
    try:
        fp = open(pidfile)
        for line in fp:
            try:
                pid = int(line)
            except ValueError:
                continue
            kill(pid, logfn, tryhard)
        fp.close()
        if remove:
            os.unlink(pidfile)
    except IOError:
        pass

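# Command line use: the pidfile path is taken from the single optional
# argument, falling back to the DAEMON_PIDS environment variable.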
if __name__ == '__main__':
    if len(sys.argv) > 1:
        path, = sys.argv[1:]
    else:
        path = os.environ["DAEMON_PIDS"]

    killdaemons(path)