tests/killdaemons.py @ 21934:0cb34b3991f8 (stable)
largefiles: use "normallookup" on "lfdirstate" while reverting
Before this patch, largefiles taken from revisions other than the
parent of the working directory at "hg revert" unexpectedly become
"clean", via the steps below:
1. "repo.status()" is invoked (for status check before reverting)
1-1 "dirstate" entry for standinfile SF is "normal"-ed
(1-2 "lfdirstate" entry of largefile LF (for SF) is "normal"-ed)
2. "cmdutil.revert()" is invoked
2-1 standinfile SF is updated in the working directory
2-2 "dirstate" entry for SF is NOT updated
3. "lfcommands.updatelfiles()" is invoked (by "overrides.overriderevert()")
3-1 largefile LF (for SF) is updated in the working directory
3-2 "dirstate" returns "n" and valid timestamp for SF (by 1-1 and 2-2)
3-3 "lfdirstate" entry for LF is "normal"-ed
3-4 "lfdirstate" is written into ".hg/largefiles/dirstate", and
timestamp of LF is stored into "lfdirstate" file (by 3-3)
(ASSUMPTION: the timestamp of LF differs from that of the "lfdirstate" file)
Then, "hs status" treats LF as "clean", even though LF is updated by
"other" revision (by 3-1), because "lfilesrepo.status()" always treats
"normal"-ed files (by 3-3 and 3-4) as "clean".
When largefiles are reverted, they should be forcibly
"normallookup"-ed.
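
To make the distinction concrete, here is a toy model (illustrative
only; Mercurial's real dirstate is considerably more involved):
"normal" caches the file's current size and mtime, so a later status
check that sees the same values reports "clean" without comparing
contents, while "normallookup" records an unusable mtime, so the next
status check must re-examine the file.

```python
import os

class ToyDirstate(object):
    """Illustrative model of "normal" vs "normallookup" entries."""
    def __init__(self):
        self._map = {}

    def normal(self, f):
        # cache current size/mtime: a later check seeing the same
        # values reports "clean" without comparing contents
        st = os.lstat(f)
        self._map[f] = ('n', st.st_size, int(st.st_mtime))

    def normallookup(self, f):
        # record an mtime that can never match, forcing the next
        # status check to actually examine the file
        st = os.lstat(f)
        self._map[f] = ('n', st.st_size, -1)

    def status(self, f):
        state, size, mtime = self._map[f]
        st = os.lstat(f)
        if state == 'n' and size == st.st_size and mtime == int(st.st_mtime):
            return 'clean'   # cached entry still matches
        return 'lookup'      # must compare file contents
```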
This patch uses "normallookup" on "lfdirstate" while reverting, by
passing "True" to the newly added "normallookup" argument.
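
A minimal sketch of the shape of this change, with the update loop
heavily abbreviated (the real patch lives in "lfcommands.updatelfiles()"
and its caller "overrides.overriderevert()"; treat the names and
structure here as approximations, not the actual diff):

```python
from hgext.largefiles import lfutil  # within Mercurial's codebase

def updatelfiles(ui, repo, filelist=None, printmessage=True,
                 normallookup=False):             # newly added argument
    lfdirstate = lfutil.openlfdirstate(ui, repo)
    for lfile in filelist:
        # ... update the largefile in the working directory from the
        # content named by its standin ...
        if normallookup:
            # reverted file: force a real check on the next status
            lfdirstate.normallookup(lfile)
        else:
            lfdirstate.normal(lfile)
    lfdirstate.write()

# overrides.overriderevert() then requests the forcible lookup:
#     lfcommands.updatelfiles(ui, repo, filelist, printmessage=False,
#                             normallookup=True)
```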
Forcible "normallookup"-ing is not particularly expensive, because the
list of target largefiles is explicitly specified in this case.
This patch uses the "[debug] dirstate.delaywrite" feature in the test
to ensure that the timestamp of the largefile taken from the "other"
revision is stored into ".hg/largefiles/dirstate" (for the ASSUMPTION
at 3-4); a config sketch follows.
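
For reference, the knob can be enabled in a test hgrc along these
lines (a sketch; the 2-second value is illustrative, not necessarily
what the test uses):

```ini
[debug]
# wait while writing the dirstate so that just-updated files get a
# timestamp old enough to be recorded as valid instead of ambiguous
dirstate.delaywrite = 2
```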
| author | FUJIWARA Katsunori <foozy@lares.dti.ne.jp> |
|---|---|
| date | Wed, 23 Jul 2014 00:10:24 +0900 |
| parents | 476069509e72 |
| children | 0adc22a0b6b3 |
line source
```python
#!/usr/bin/env python

import os, sys, time, errno, signal

if os.name == 'nt':
    import ctypes

    def _check(ret, expectederr=None):
        if ret == 0:
            winerrno = ctypes.GetLastError()
            if winerrno == expectederr:
                return True
            raise ctypes.WinError(winerrno)

    def kill(pid, logfn, tryhard=True):
        logfn('# Killing daemon process %d' % pid)
        PROCESS_TERMINATE = 1
        PROCESS_QUERY_INFORMATION = 0x400
        SYNCHRONIZE = 0x00100000
        WAIT_OBJECT_0 = 0
        WAIT_TIMEOUT = 258
        handle = ctypes.windll.kernel32.OpenProcess(
            PROCESS_TERMINATE | SYNCHRONIZE | PROCESS_QUERY_INFORMATION,
            False, pid)
        if handle == 0:
            _check(0, 87) # err 87 when process not found
            return # process not found, already finished
        try:
            r = ctypes.windll.kernel32.WaitForSingleObject(handle, 100)
            if r == WAIT_OBJECT_0:
                pass # terminated, but process handle still available
            elif r == WAIT_TIMEOUT:
                _check(ctypes.windll.kernel32.TerminateProcess(handle, -1))
            else:
                _check(r)

            # TODO?: forcefully kill when timeout
            #        and ?shorter waiting time? when tryhard==True
            r = ctypes.windll.kernel32.WaitForSingleObject(handle, 100)
                                                        # timeout = 100 ms
            if r == WAIT_OBJECT_0:
                pass # process is terminated
            elif r == WAIT_TIMEOUT:
                logfn('# Daemon process %d is stuck' % pid)
            else:
                _check(r) # any error
        except: # re-raises
            ctypes.windll.kernel32.CloseHandle(handle) # no _check, keep error
            raise
        _check(ctypes.windll.kernel32.CloseHandle(handle))

else:
    def kill(pid, logfn, tryhard=True):
        try:
            os.kill(pid, 0) # raises OSError(ESRCH) if pid is already gone
            logfn('# Killing daemon process %d' % pid)
            os.kill(pid, signal.SIGTERM)
            if tryhard:
                for i in range(10):
                    time.sleep(0.05)
                    os.kill(pid, 0)
            else:
                time.sleep(0.1)
                os.kill(pid, 0)
            logfn('# Daemon process %d is stuck - really killing it' % pid)
            os.kill(pid, signal.SIGKILL)
        except OSError, err:
            if err.errno != errno.ESRCH:
                raise

def killdaemons(pidfile, tryhard=True, remove=False, logfn=None):
    if not logfn:
        logfn = lambda s: s
    # Kill off any leftover daemon processes
    try:
        fp = open(pidfile)
        for line in fp:
            try:
                pid = int(line)
            except ValueError:
                continue
            kill(pid, logfn, tryhard)
        fp.close()
        if remove:
            os.unlink(pidfile)
    except IOError:
        pass

if __name__ == '__main__':
    path, = sys.argv[1:]
    killdaemons(path)
```
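
Run directly, the script takes the pid-file path as its single
argument ("python tests/killdaemons.py <pidfile>"). A usage sketch as
a module, assuming a pid file with one pid per line ("daemon.pids" is
an illustrative path):

```python
import sys
from killdaemons import killdaemons

# kill every daemon listed in the pid file, logging progress to
# stderr, and remove the pid file afterwards
killdaemons('daemon.pids', tryhard=True, remove=True,
            logfn=lambda s: sys.stderr.write(s + '\n'))
```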