view contrib/showstack.py @ 45095:8e04607023e5

procutil: ensure that procutil.std{out,err}.write() writes all bytes

Python 3 offers different kinds of streams, and it’s not guaranteed for all of them that calling write() writes all bytes.

When Python is started in unbuffered mode, sys.std{out,err}.buffer are instances of io.FileIO, whose write() can write fewer bytes for platform-specific reasons (e.g. Linux has a maximum of 0x7ffff000 bytes per write and could write fewer if interrupted by a signal; when writing to Windows consoles, writes are limited to 32767 bytes to avoid the "not enough space" error). This can lead to silent loss of data, both when using sys.std{out,err}.buffer (which may in fact not be a buffered stream) and when using the text streams sys.std{out,err} (I’ve created a CPython bug report for that: https://bugs.python.org/issue41221). Python may fix the problem at some point. For now, we implement our own wrapper for procutil.std{out,err} that calls the raw stream’s write() method until all bytes have been written (a minimal sketch of that retry loop follows the changeset header below).

We don’t use sys.std{out,err} for larger writes, so I think it’s not worth the effort to patch them.
author Manuel Jacob <me@manueljacob.de>
date Fri, 10 Jul 2020 12:27:58 +0200
parents 2372284d9457
children 6000f5b25c9b
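The fix described in the changeset message boils down to a thin wrapper whose write() keeps calling the underlying raw stream until every byte has gone out. The sketch below illustrates that idea only; the class name, the delegation via __getattr__, and the usage line are assumptions for this example, not the actual code added to mercurial/utils/procutil.py.

import sys


class WriteAllWrapper(object):
    """Retry the wrapped stream's write() until all bytes are written.

    Illustrative sketch only, not the actual procutil implementation.
    """

    def __init__(self, orig):
        self.orig = orig  # e.g. an unbuffered io.FileIO for fd 1 or 2

    def __getattr__(self, attr):
        # Delegate flush(), fileno(), close(), etc. to the wrapped stream.
        return getattr(self.orig, attr)

    def write(self, s):
        m = memoryview(s)  # slice the remainder without copying on each retry
        total_to_write = len(s)
        total_written = 0
        while total_written < total_to_write:
            # A raw stream may report a short write (e.g. Linux's
            # 0x7ffff000-byte cap or the 32767-byte Windows console limit);
            # keep writing the remainder until everything is out.
            total_written += self.orig.write(m[total_written:])
        return total_written


# Hypothetical usage: wrap the binary stream backing sys.stdout.
stdout = WriteAllWrapper(getattr(sys.stdout, 'buffer', sys.stdout))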

# showstack.py - extension to dump a Python stack trace on signal
#
# binds to both SIGQUIT (Ctrl-\) and SIGINFO (Ctrl-T on BSDs)
r"""dump stack trace when receiving SIGQUIT (Ctrl-\) or SIGINFO (Ctrl-T on BSDs)
"""

from __future__ import absolute_import, print_function
import signal
import sys
import traceback


def sigshow(*args):
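    # the signal module calls handlers as handler(signum, frame),
    # so args[1] is the frame that was interrupted by the signal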
    sys.stderr.write("\n")
    traceback.print_stack(args[1], limit=10, file=sys.stderr)
    sys.stderr.write("----\n")


def sigexit(*args):
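    # bound to SIGALRM below: dump the stack, announce, then abort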
    sigshow(*args)
    print('alarm!')
    sys.exit(1)


def extsetup(ui):
    signal.signal(signal.SIGQUIT, sigshow)
    signal.signal(signal.SIGALRM, sigexit)
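    # SIGINFO exists only on BSDs and macOS; ignore it elsewhere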
    try:
        signal.signal(signal.SIGINFO, sigshow)
    except AttributeError:
        pass
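
The extension is typically enabled by pointing an hgrc at this file (e.g. showstack = /path/to/contrib/showstack.py under the [extensions] section) and then pressing Ctrl-\ (SIGQUIT) or Ctrl-T (SIGINFO on BSDs) while hg is running. The standalone snippet below is not part of the extension; it demonstrates the same handler pattern by signalling its own process and assumes a POSIX platform where SIGQUIT is available.

# demo_showstack.py - standalone demonstration of the same pattern
# (illustrative only; the module name and details are assumptions)
import os
import signal
import sys
import traceback


def dump_stack(signum, frame):
    # mirror sigshow(): print the interrupted frame's stack to stderr
    sys.stderr.write("\n")
    traceback.print_stack(frame, limit=10, file=sys.stderr)
    sys.stderr.write("----\n")


if __name__ == '__main__':
    signal.signal(signal.SIGQUIT, dump_stack)
    # deliver SIGQUIT to ourselves; equivalent to pressing Ctrl-\ in a terminal
    os.kill(os.getpid(), signal.SIGQUIT)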