tests/filterpyflakes.py

author       Manuel Jacob <me@manueljacob.de>
date         Sun, 22 May 2022 03:50:34 +0200
changeset    49269:395f28064826
parent       48875:6000f5b25c9b
child        50452:e07dc1e7a454
permissions  -rwxr-xr-x
worker: avoid potential partial write of pickled data

Previously, the code wrote the pickled data using os.write(). However,
os.write() can write fewer bytes than it is passed. To trigger the problem,
the pickled data had to be larger than 2147479552 bytes on my system.

Instead, open a file object and pass it to pickle.dump(). This also has the
advantage that it doesn't buffer the whole pickled data in memory.

Note that the opened file must be buffered: pickle doesn't support unbuffered
streams, because an unbuffered stream's write() method may write fewer bytes
than it is passed (like os.write()), whereas pickle.dump() relies on all bytes
being written (see https://github.com/python/cpython/issues/93050).

A side effect of using a file object and a with statement is that wfd is now
closed explicitly, whereas before it seems to have been closed only implicitly
at process exit.
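
The fix described above boils down to wrapping the write-side file descriptor
in a buffered binary file object and letting pickle stream into it. A minimal
sketch of that pattern, assuming a pipe descriptor wfd and a picklable object
obj (illustrative names, not the actual mercurial/worker.py code):

import os
import pickle

def send_pickled(obj, wfd):
    # os.fdopen() returns a *buffered* binary writer by default; pickle.dump()
    # assumes every write() call consumes all of the bytes it is given, which
    # raw os.write() and unbuffered streams do not guarantee.
    with os.fdopen(wfd, 'wb') as wf:
        pickle.dump(obj, wf)
    # Leaving the with block closes wf (and thus wfd) explicitly, matching the
    # side effect noted in the changeset description.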

#!/usr/bin/env python3

# Filter output by pyflakes to control which warnings we check
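# (pyflakes messages are read from stdin; the ones we keep are written to
# stdout)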


import re
import sys

lines = []
for line in sys.stdin:
    # We blacklist pyflakes warnings that are too noisy for us
    pats = [
        r"undefined name 'WindowsError'",
        r"redefinition of unused '[^']+' from line",
        # for cffi, allow re-exports from pure.*
        r"cffi/[^:]*:.*\bimport \*' used",
        r"cffi/[^:]*:.*\*' imported but unused",
    ]

    keep = True
    for pat in pats:
        if re.search(pat, line):
            keep = False
            break  # pattern matches
    if keep:
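        # A pyflakes message looks like "path/to/file.py:<lineno>: <message>",
        # so everything before the first ':' is the offending file's path.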
        fn = line.split(':', 1)[0]
        with open(fn) as f:
            data = f.read()
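        # Skip warnings for files that opt out via the no-check-code marker;
        # the marker is spelled as two adjacent string literals so that this
        # script does not itself contain it.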
        if 'no-' 'check-code' in data:
            continue
        lines.append(line)

for line in lines:
    sys.stdout.write(line)
print()