zstandard: vendor python-zstandard 0.13.0
author Gregory Szorc <gregory.szorc@gmail.com>
date Sat, 28 Dec 2019 09:55:45 -0800
changeset 43994 de7838053207
parent 43993 873d0fecb9a3
child 43995 801b8d314791
zstandard: vendor python-zstandard 0.13.0

Version 0.13.0 of the package was just released. It contains an upgraded zstd C library (which can result in some performance wins), official support for Python 3.8, and a blackened code base.

There were no meaningful code or functionality changes in this release of python-zstandard: just reformatting and an upgraded zstd library version. So the diff appears much larger than it actually is.

Files were added without modifications.

The clang-format-ignorelist file was updated to reflect a new header file in the zstd distribution.

# no-check-commit because 3rd party code has different style guidelines

Differential Revision: https://phab.mercurial-scm.org/D7770
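A quick way to sanity-check a vendor bump like this is to confirm the reported version and exercise a compress/decompress round trip. The sketch below is not part of the changeset; it assumes the package is installed and importable as "zstandard" (for example, by building contrib/python-zstandard directly), and it only uses the package's documented API.

    # Minimal sanity check after upgrading the vendored package (illustrative only).
    import zstandard as zstd

    # The version string mirrors PYTHON_ZSTANDARD_VERSION in c-ext/python-zstandard.h.
    assert zstd.__version__ == "0.13.0"

    # Round-trip some data; the default compressor embeds the content size,
    # so the one-shot decompress() call works without an explicit size.
    data = b"data to compress" * 64
    compressed = zstd.ZstdCompressor(level=3).compress(data)
    assert zstd.ZstdDecompressor().decompress(compressed) == data

    # Reports which backend was loaded: the C extension ("cext") or cffi.
    print(zstd.backend)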
contrib/clang-format-ignorelist
contrib/python-zstandard/NEWS.rst
contrib/python-zstandard/README.rst
contrib/python-zstandard/c-ext/python-zstandard.h
contrib/python-zstandard/make_cffi.py
contrib/python-zstandard/setup.py
contrib/python-zstandard/setup_zstd.py
contrib/python-zstandard/tests/common.py
contrib/python-zstandard/tests/test_buffer_util.py
contrib/python-zstandard/tests/test_compressor.py
contrib/python-zstandard/tests/test_compressor_fuzzing.py
contrib/python-zstandard/tests/test_data_structures.py
contrib/python-zstandard/tests/test_data_structures_fuzzing.py
contrib/python-zstandard/tests/test_decompressor.py
contrib/python-zstandard/tests/test_decompressor_fuzzing.py
contrib/python-zstandard/tests/test_estimate_sizes.py
contrib/python-zstandard/tests/test_module_attributes.py
contrib/python-zstandard/tests/test_train_dictionary.py
contrib/python-zstandard/zstandard/__init__.py
contrib/python-zstandard/zstandard/cffi.py
contrib/python-zstandard/zstd.c
contrib/python-zstandard/zstd/common/bitstream.h
contrib/python-zstandard/zstd/common/compiler.h
contrib/python-zstandard/zstd/common/fse.h
contrib/python-zstandard/zstd/common/fse_decompress.c
contrib/python-zstandard/zstd/common/mem.h
contrib/python-zstandard/zstd/common/pool.c
contrib/python-zstandard/zstd/common/threading.c
contrib/python-zstandard/zstd/common/threading.h
contrib/python-zstandard/zstd/common/zstd_internal.h
contrib/python-zstandard/zstd/compress/zstd_compress.c
contrib/python-zstandard/zstd/compress/zstd_compress_internal.h
contrib/python-zstandard/zstd/compress/zstd_compress_literals.c
contrib/python-zstandard/zstd/compress/zstd_compress_literals.h
contrib/python-zstandard/zstd/compress/zstd_compress_sequences.c
contrib/python-zstandard/zstd/compress/zstd_compress_sequences.h
contrib/python-zstandard/zstd/compress/zstd_cwksp.h
contrib/python-zstandard/zstd/compress/zstd_double_fast.c
contrib/python-zstandard/zstd/compress/zstd_fast.c
contrib/python-zstandard/zstd/compress/zstd_lazy.c
contrib/python-zstandard/zstd/compress/zstd_ldm.c
contrib/python-zstandard/zstd/compress/zstd_opt.c
contrib/python-zstandard/zstd/compress/zstdmt_compress.c
contrib/python-zstandard/zstd/decompress/huf_decompress.c
contrib/python-zstandard/zstd/decompress/zstd_decompress.c
contrib/python-zstandard/zstd/decompress/zstd_decompress_block.c
contrib/python-zstandard/zstd/deprecated/zbuff.h
contrib/python-zstandard/zstd/dictBuilder/cover.c
contrib/python-zstandard/zstd/dictBuilder/zdict.c
contrib/python-zstandard/zstd/zstd.h
--- a/contrib/clang-format-ignorelist	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/clang-format-ignorelist	Sat Dec 28 09:55:45 2019 -0800
@@ -52,6 +52,7 @@
 contrib/python-zstandard/zstd/compress/zstd_compress_literals.h
 contrib/python-zstandard/zstd/compress/zstd_compress_sequences.c
 contrib/python-zstandard/zstd/compress/zstd_compress_sequences.h
+contrib/python-zstandard/zstd/compress/zstd_cwksp.h
 contrib/python-zstandard/zstd/compress/zstd_double_fast.c
 contrib/python-zstandard/zstd/compress/zstd_double_fast.h
 contrib/python-zstandard/zstd/compress/zstd_fast.c
--- a/contrib/python-zstandard/NEWS.rst	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/NEWS.rst	Sat Dec 28 09:55:45 2019 -0800
@@ -43,13 +43,18 @@
 * Support modifying compression parameters mid operation when supported by
   zstd API.
 * Expose ``ZSTD_CLEVEL_DEFAULT`` constant.
+* Expose ``ZSTD_SRCSIZEHINT_{MIN,MAX}`` constants.
 * Support ``ZSTD_p_forceAttachDict`` compression parameter.
-* Support ``ZSTD_c_literalCompressionMode `` compression parameter.
+* Support ``ZSTD_dictForceLoad`` dictionary compression parameter.
+* Support ``ZSTD_c_targetCBlockSize`` compression parameter.
+* Support ``ZSTD_c_literalCompressionMode`` compression parameter.
+* Support ``ZSTD_c_srcSizeHint`` compression parameter.
 * Use ``ZSTD_CCtx_getParameter()``/``ZSTD_CCtxParam_getParameter()`` for retrieving
   compression parameters.
 * Consider exposing ``ZSTDMT_toFlushNow()``.
 * Expose ``ZDICT_trainFromBuffer_fastCover()``,
   ``ZDICT_optimizeTrainFromBuffer_fastCover``.
+* Expose ``ZSTD_Sequence`` struct and related ``ZSTD_getSequences()`` API.
 * Expose and enforce ``ZSTD_minCLevel()`` for minimum compression level.
 * Consider a ``chunker()`` API for decompression.
 * Consider stats for ``chunker()`` API, including finding the last consumed
@@ -67,6 +72,20 @@
 * API for ensuring max memory ceiling isn't exceeded.
 * Move off nose for testing.
 
+0.13.0 (released 2019-12-28)
+============================
+
+Changes
+-------
+
+* ``pytest-xdist`` ``pytest`` extension is now installed so tests can be
+  run in parallel.
+* CI now builds ``manylinux2010`` and ``manylinux2014`` binary wheels
+  instead of a mix of ``manylinux2010`` and ``manylinux1``.
+* Official support for Python 3.8 has been added.
+* Bundled zstandard library upgraded from 1.4.3 to 1.4.4.
+* Python code has been reformatted with black.
+
 0.12.0 (released 2019-09-15)
 ============================
 
--- a/contrib/python-zstandard/README.rst	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/README.rst	Sat Dec 28 09:55:45 2019 -0800
@@ -20,7 +20,7 @@
 Requirements
 ============
 
-This extension is designed to run with Python 2.7, 3.4, 3.5, 3.6, and 3.7
+This extension is designed to run with Python 2.7, 3.5, 3.6, 3.7, and 3.8
 on common platforms (Linux, Windows, and OS X). On PyPy (both PyPy2 and PyPy3) we support version 6.0.0 and above. 
 x86 and x86_64 are well-tested on Windows. Only x86_64 is well-tested on Linux and macOS.
 
--- a/contrib/python-zstandard/c-ext/python-zstandard.h	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/c-ext/python-zstandard.h	Sat Dec 28 09:55:45 2019 -0800
@@ -16,7 +16,7 @@
 #include <zdict.h>
 
 /* Remember to change the string in zstandard/__init__ as well */
-#define PYTHON_ZSTANDARD_VERSION "0.12.0"
+#define PYTHON_ZSTANDARD_VERSION "0.13.0"
 
 typedef enum {
 	compressorobj_flush_finish,
--- a/contrib/python-zstandard/make_cffi.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/make_cffi.py	Sat Dec 28 09:55:45 2019 -0800
@@ -16,80 +16,82 @@
 
 HERE = os.path.abspath(os.path.dirname(__file__))
 
-SOURCES = ['zstd/%s' % p for p in (
-    'common/debug.c',
-    'common/entropy_common.c',
-    'common/error_private.c',
-    'common/fse_decompress.c',
-    'common/pool.c',
-    'common/threading.c',
-    'common/xxhash.c',
-    'common/zstd_common.c',
-    'compress/fse_compress.c',
-    'compress/hist.c',
-    'compress/huf_compress.c',
-    'compress/zstd_compress.c',
-    'compress/zstd_compress_literals.c',
-    'compress/zstd_compress_sequences.c',
-    'compress/zstd_double_fast.c',
-    'compress/zstd_fast.c',
-    'compress/zstd_lazy.c',
-    'compress/zstd_ldm.c',
-    'compress/zstd_opt.c',
-    'compress/zstdmt_compress.c',
-    'decompress/huf_decompress.c',
-    'decompress/zstd_ddict.c',
-    'decompress/zstd_decompress.c',
-    'decompress/zstd_decompress_block.c',
-    'dictBuilder/cover.c',
-    'dictBuilder/fastcover.c',
-    'dictBuilder/divsufsort.c',
-    'dictBuilder/zdict.c',
-)]
+SOURCES = [
+    "zstd/%s" % p
+    for p in (
+        "common/debug.c",
+        "common/entropy_common.c",
+        "common/error_private.c",
+        "common/fse_decompress.c",
+        "common/pool.c",
+        "common/threading.c",
+        "common/xxhash.c",
+        "common/zstd_common.c",
+        "compress/fse_compress.c",
+        "compress/hist.c",
+        "compress/huf_compress.c",
+        "compress/zstd_compress.c",
+        "compress/zstd_compress_literals.c",
+        "compress/zstd_compress_sequences.c",
+        "compress/zstd_double_fast.c",
+        "compress/zstd_fast.c",
+        "compress/zstd_lazy.c",
+        "compress/zstd_ldm.c",
+        "compress/zstd_opt.c",
+        "compress/zstdmt_compress.c",
+        "decompress/huf_decompress.c",
+        "decompress/zstd_ddict.c",
+        "decompress/zstd_decompress.c",
+        "decompress/zstd_decompress_block.c",
+        "dictBuilder/cover.c",
+        "dictBuilder/fastcover.c",
+        "dictBuilder/divsufsort.c",
+        "dictBuilder/zdict.c",
+    )
+]
 
 # Headers whose preprocessed output will be fed into cdef().
-HEADERS = [os.path.join(HERE, 'zstd', *p) for p in (
-    ('zstd.h',),
-    ('dictBuilder', 'zdict.h'),
-)]
+HEADERS = [
+    os.path.join(HERE, "zstd", *p) for p in (("zstd.h",), ("dictBuilder", "zdict.h"),)
+]
 
-INCLUDE_DIRS = [os.path.join(HERE, d) for d in (
-    'zstd',
-    'zstd/common',
-    'zstd/compress',
-    'zstd/decompress',
-    'zstd/dictBuilder',
-)]
+INCLUDE_DIRS = [
+    os.path.join(HERE, d)
+    for d in (
+        "zstd",
+        "zstd/common",
+        "zstd/compress",
+        "zstd/decompress",
+        "zstd/dictBuilder",
+    )
+]
 
 # cffi can't parse some of the primitives in zstd.h. So we invoke the
 # preprocessor and feed its output into cffi.
 compiler = distutils.ccompiler.new_compiler()
 
 # Needed for MSVC.
-if hasattr(compiler, 'initialize'):
+if hasattr(compiler, "initialize"):
     compiler.initialize()
 
 # Distutils doesn't set compiler.preprocessor, so invoke the preprocessor
 # manually.
-if compiler.compiler_type == 'unix':
-    args = list(compiler.executables['compiler'])
-    args.extend([
-        '-E',
-        '-DZSTD_STATIC_LINKING_ONLY',
-        '-DZDICT_STATIC_LINKING_ONLY',
-    ])
-elif compiler.compiler_type == 'msvc':
+if compiler.compiler_type == "unix":
+    args = list(compiler.executables["compiler"])
+    args.extend(
+        ["-E", "-DZSTD_STATIC_LINKING_ONLY", "-DZDICT_STATIC_LINKING_ONLY",]
+    )
+elif compiler.compiler_type == "msvc":
     args = [compiler.cc]
-    args.extend([
-        '/EP',
-        '/DZSTD_STATIC_LINKING_ONLY',
-        '/DZDICT_STATIC_LINKING_ONLY',
-    ])
+    args.extend(
+        ["/EP", "/DZSTD_STATIC_LINKING_ONLY", "/DZDICT_STATIC_LINKING_ONLY",]
+    )
 else:
-    raise Exception('unsupported compiler type: %s' % compiler.compiler_type)
+    raise Exception("unsupported compiler type: %s" % compiler.compiler_type)
+
 
 def preprocess(path):
-    with open(path, 'rb') as fh:
+    with open(path, "rb") as fh:
         lines = []
         it = iter(fh)
 
@@ -104,32 +106,44 @@
             # We define ZSTD_STATIC_LINKING_ONLY, which is redundant with the inline
             # #define in zstdmt_compress.h and results in a compiler warning. So drop
             # the inline #define.
-            if l.startswith((b'#include <stddef.h>',
-                             b'#include "zstd.h"',
-                             b'#define ZSTD_STATIC_LINKING_ONLY')):
+            if l.startswith(
+                (
+                    b"#include <stddef.h>",
+                    b'#include "zstd.h"',
+                    b"#define ZSTD_STATIC_LINKING_ONLY",
+                )
+            ):
                 continue
 
+            # The preprocessor environment on Windows doesn't define include
+            # paths, so the #include of limits.h fails. We work around this
+            # by removing that import and defining INT_MAX ourselves. This is
+            # a bit hacky. But it gets the job done.
+            # TODO make limits.h work on Windows so we ensure INT_MAX is
+            # correct.
+            if l.startswith(b"#include <limits.h>"):
+                l = b"#define INT_MAX 2147483647\n"
+
             # ZSTDLIB_API may not be defined if we dropped zstd.h. It isn't
             # important so just filter it out.
-            if l.startswith(b'ZSTDLIB_API'):
-                l = l[len(b'ZSTDLIB_API '):]
+            if l.startswith(b"ZSTDLIB_API"):
+                l = l[len(b"ZSTDLIB_API ") :]
 
             lines.append(l)
 
-    fd, input_file = tempfile.mkstemp(suffix='.h')
-    os.write(fd, b''.join(lines))
+    fd, input_file = tempfile.mkstemp(suffix=".h")
+    os.write(fd, b"".join(lines))
     os.close(fd)
 
     try:
         env = dict(os.environ)
-        if getattr(compiler, '_paths', None):
-            env['PATH'] = compiler._paths
-        process = subprocess.Popen(args + [input_file], stdout=subprocess.PIPE,
-                                   env=env)
+        if getattr(compiler, "_paths", None):
+            env["PATH"] = compiler._paths
+        process = subprocess.Popen(args + [input_file], stdout=subprocess.PIPE, env=env)
         output = process.communicate()[0]
         ret = process.poll()
         if ret:
-            raise Exception('preprocessor exited with error')
+            raise Exception("preprocessor exited with error")
 
         return output
     finally:
@@ -141,16 +155,16 @@
     for line in output.splitlines():
         # CFFI's parser doesn't like __attribute__ on UNIX compilers.
         if line.startswith(b'__attribute__ ((visibility ("default"))) '):
-            line = line[len(b'__attribute__ ((visibility ("default"))) '):]
+            line = line[len(b'__attribute__ ((visibility ("default"))) ') :]
 
-        if line.startswith(b'__attribute__((deprecated('):
+        if line.startswith(b"__attribute__((deprecated("):
             continue
-        elif b'__declspec(deprecated(' in line:
+        elif b"__declspec(deprecated(" in line:
             continue
 
         lines.append(line)
 
-    return b'\n'.join(lines)
+    return b"\n".join(lines)
 
 
 ffi = cffi.FFI()
@@ -159,18 +173,22 @@
 # *_DISABLE_DEPRECATE_WARNINGS prevents the compiler from emitting a warning
 # when cffi uses the function. Since we statically link against zstd, even
 # if we use the deprecated functions it shouldn't be a huge problem.
-ffi.set_source('_zstd_cffi', '''
+ffi.set_source(
+    "_zstd_cffi",
+    """
 #define MIN(a,b) ((a)<(b) ? (a) : (b))
 #define ZSTD_STATIC_LINKING_ONLY
 #include <zstd.h>
 #define ZDICT_STATIC_LINKING_ONLY
 #define ZDICT_DISABLE_DEPRECATE_WARNINGS
 #include <zdict.h>
-''', sources=SOURCES,
-     include_dirs=INCLUDE_DIRS,
-     extra_compile_args=['-DZSTD_MULTITHREAD'])
+""",
+    sources=SOURCES,
+    include_dirs=INCLUDE_DIRS,
+    extra_compile_args=["-DZSTD_MULTITHREAD"],
+)
 
-DEFINE = re.compile(b'^\\#define ([a-zA-Z0-9_]+) ')
+DEFINE = re.compile(b"^\\#define ([a-zA-Z0-9_]+) ")
 
 sources = []
 
@@ -181,27 +199,27 @@
 
     # #define's are effectively erased as part of going through preprocessor.
     # So perform a manual pass to re-add those to the cdef source.
-    with open(header, 'rb') as fh:
+    with open(header, "rb") as fh:
         for line in fh:
             line = line.strip()
             m = DEFINE.match(line)
             if not m:
                 continue
 
-            if m.group(1) == b'ZSTD_STATIC_LINKING_ONLY':
+            if m.group(1) == b"ZSTD_STATIC_LINKING_ONLY":
                 continue
 
             # The parser doesn't like some constants with complex values.
-            if m.group(1) in (b'ZSTD_LIB_VERSION', b'ZSTD_VERSION_STRING'):
+            if m.group(1) in (b"ZSTD_LIB_VERSION", b"ZSTD_VERSION_STRING"):
                 continue
 
             # The ... is magic syntax by the cdef parser to resolve the
             # value at compile time.
-            sources.append(m.group(0) + b' ...')
+            sources.append(m.group(0) + b" ...")
 
-cdeflines = b'\n'.join(sources).splitlines()
+cdeflines = b"\n".join(sources).splitlines()
 cdeflines = [l for l in cdeflines if l.strip()]
-ffi.cdef(b'\n'.join(cdeflines).decode('latin1'))
+ffi.cdef(b"\n".join(cdeflines).decode("latin1"))
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     ffi.compile()
--- a/contrib/python-zstandard/setup.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/setup.py	Sat Dec 28 09:55:45 2019 -0800
@@ -16,7 +16,7 @@
 # (like memoryview).
 # Need feature in 1.11 for ffi.gc() to declare size of objects so we avoid
 # garbage collection pitfalls.
-MINIMUM_CFFI_VERSION = '1.11'
+MINIMUM_CFFI_VERSION = "1.11"
 
 try:
     import cffi
@@ -26,9 +26,11 @@
     # out the CFFI version here and reject CFFI if it is too old.
     cffi_version = LooseVersion(cffi.__version__)
     if cffi_version < LooseVersion(MINIMUM_CFFI_VERSION):
-        print('CFFI 1.11 or newer required (%s found); '
-              'not building CFFI backend' % cffi_version,
-              file=sys.stderr)
+        print(
+            "CFFI 1.11 or newer required (%s found); "
+            "not building CFFI backend" % cffi_version,
+            file=sys.stderr,
+        )
         cffi = None
 
 except ImportError:
@@ -40,73 +42,77 @@
 SYSTEM_ZSTD = False
 WARNINGS_AS_ERRORS = False
 
-if os.environ.get('ZSTD_WARNINGS_AS_ERRORS', ''):
+if os.environ.get("ZSTD_WARNINGS_AS_ERRORS", ""):
     WARNINGS_AS_ERRORS = True
 
-if '--legacy' in sys.argv:
+if "--legacy" in sys.argv:
     SUPPORT_LEGACY = True
-    sys.argv.remove('--legacy')
+    sys.argv.remove("--legacy")
 
-if '--system-zstd' in sys.argv:
+if "--system-zstd" in sys.argv:
     SYSTEM_ZSTD = True
-    sys.argv.remove('--system-zstd')
+    sys.argv.remove("--system-zstd")
 
-if '--warnings-as-errors' in sys.argv:
+if "--warnings-as-errors" in sys.argv:
     WARNINGS_AS_ERRORS = True
-    sys.argv.remove('--warning-as-errors')
+    sys.argv.remove("--warning-as-errors")
 
 # Code for obtaining the Extension instance is in its own module to
 # facilitate reuse in other projects.
 extensions = [
-    setup_zstd.get_c_extension(name='zstd',
-                               support_legacy=SUPPORT_LEGACY,
-                               system_zstd=SYSTEM_ZSTD,
-                               warnings_as_errors=WARNINGS_AS_ERRORS),
+    setup_zstd.get_c_extension(
+        name="zstd",
+        support_legacy=SUPPORT_LEGACY,
+        system_zstd=SYSTEM_ZSTD,
+        warnings_as_errors=WARNINGS_AS_ERRORS,
+    ),
 ]
 
 install_requires = []
 
 if cffi:
     import make_cffi
+
     extensions.append(make_cffi.ffi.distutils_extension())
-    install_requires.append('cffi>=%s' % MINIMUM_CFFI_VERSION)
+    install_requires.append("cffi>=%s" % MINIMUM_CFFI_VERSION)
 
 version = None
 
-with open('c-ext/python-zstandard.h', 'r') as fh:
+with open("c-ext/python-zstandard.h", "r") as fh:
     for line in fh:
-        if not line.startswith('#define PYTHON_ZSTANDARD_VERSION'):
+        if not line.startswith("#define PYTHON_ZSTANDARD_VERSION"):
             continue
 
         version = line.split()[2][1:-1]
         break
 
 if not version:
-    raise Exception('could not resolve package version; '
-                    'this should never happen')
+    raise Exception("could not resolve package version; " "this should never happen")
 
 setup(
-    name='zstandard',
+    name="zstandard",
     version=version,
-    description='Zstandard bindings for Python',
-    long_description=open('README.rst', 'r').read(),
-    url='https://github.com/indygreg/python-zstandard',
-    author='Gregory Szorc',
-    author_email='gregory.szorc@gmail.com',
-    license='BSD',
+    description="Zstandard bindings for Python",
+    long_description=open("README.rst", "r").read(),
+    url="https://github.com/indygreg/python-zstandard",
+    author="Gregory Szorc",
+    author_email="gregory.szorc@gmail.com",
+    license="BSD",
     classifiers=[
-        'Development Status :: 4 - Beta',
-        'Intended Audience :: Developers',
-        'License :: OSI Approved :: BSD License',
-        'Programming Language :: C',
-        'Programming Language :: Python :: 2.7',
-        'Programming Language :: Python :: 3.5',
-        'Programming Language :: Python :: 3.6',
-        'Programming Language :: Python :: 3.7',
+        "Development Status :: 4 - Beta",
+        "Intended Audience :: Developers",
+        "License :: OSI Approved :: BSD License",
+        "Programming Language :: C",
+        "Programming Language :: Python :: 2.7",
+        "Programming Language :: Python :: 3.5",
+        "Programming Language :: Python :: 3.6",
+        "Programming Language :: Python :: 3.7",
+        "Programming Language :: Python :: 3.8",
     ],
-    keywords='zstandard zstd compression',
-    packages=['zstandard'],
+    keywords="zstandard zstd compression",
+    packages=["zstandard"],
     ext_modules=extensions,
-    test_suite='tests',
+    test_suite="tests",
     install_requires=install_requires,
+    tests_require=["hypothesis"],
 )
--- a/contrib/python-zstandard/setup_zstd.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/setup_zstd.py	Sat Dec 28 09:55:45 2019 -0800
@@ -10,97 +10,110 @@
 from distutils.extension import Extension
 
 
-zstd_sources = ['zstd/%s' % p for p in (
-    'common/debug.c',
-    'common/entropy_common.c',
-    'common/error_private.c',
-    'common/fse_decompress.c',
-    'common/pool.c',
-    'common/threading.c',
-    'common/xxhash.c',
-    'common/zstd_common.c',
-    'compress/fse_compress.c',
-    'compress/hist.c',
-    'compress/huf_compress.c',
-    'compress/zstd_compress_literals.c',
-    'compress/zstd_compress_sequences.c',
-    'compress/zstd_compress.c',
-    'compress/zstd_double_fast.c',
-    'compress/zstd_fast.c',
-    'compress/zstd_lazy.c',
-    'compress/zstd_ldm.c',
-    'compress/zstd_opt.c',
-    'compress/zstdmt_compress.c',
-    'decompress/huf_decompress.c',
-    'decompress/zstd_ddict.c',
-    'decompress/zstd_decompress.c',
-    'decompress/zstd_decompress_block.c',
-    'dictBuilder/cover.c',
-    'dictBuilder/divsufsort.c',
-    'dictBuilder/fastcover.c',
-    'dictBuilder/zdict.c',
-)]
+zstd_sources = [
+    "zstd/%s" % p
+    for p in (
+        "common/debug.c",
+        "common/entropy_common.c",
+        "common/error_private.c",
+        "common/fse_decompress.c",
+        "common/pool.c",
+        "common/threading.c",
+        "common/xxhash.c",
+        "common/zstd_common.c",
+        "compress/fse_compress.c",
+        "compress/hist.c",
+        "compress/huf_compress.c",
+        "compress/zstd_compress_literals.c",
+        "compress/zstd_compress_sequences.c",
+        "compress/zstd_compress.c",
+        "compress/zstd_double_fast.c",
+        "compress/zstd_fast.c",
+        "compress/zstd_lazy.c",
+        "compress/zstd_ldm.c",
+        "compress/zstd_opt.c",
+        "compress/zstdmt_compress.c",
+        "decompress/huf_decompress.c",
+        "decompress/zstd_ddict.c",
+        "decompress/zstd_decompress.c",
+        "decompress/zstd_decompress_block.c",
+        "dictBuilder/cover.c",
+        "dictBuilder/divsufsort.c",
+        "dictBuilder/fastcover.c",
+        "dictBuilder/zdict.c",
+    )
+]
 
-zstd_sources_legacy = ['zstd/%s' % p for p in (
-    'deprecated/zbuff_common.c',
-    'deprecated/zbuff_compress.c',
-    'deprecated/zbuff_decompress.c',
-    'legacy/zstd_v01.c',
-    'legacy/zstd_v02.c',
-    'legacy/zstd_v03.c',
-    'legacy/zstd_v04.c',
-    'legacy/zstd_v05.c',
-    'legacy/zstd_v06.c',
-    'legacy/zstd_v07.c'
-)]
+zstd_sources_legacy = [
+    "zstd/%s" % p
+    for p in (
+        "deprecated/zbuff_common.c",
+        "deprecated/zbuff_compress.c",
+        "deprecated/zbuff_decompress.c",
+        "legacy/zstd_v01.c",
+        "legacy/zstd_v02.c",
+        "legacy/zstd_v03.c",
+        "legacy/zstd_v04.c",
+        "legacy/zstd_v05.c",
+        "legacy/zstd_v06.c",
+        "legacy/zstd_v07.c",
+    )
+]
 
 zstd_includes = [
-    'zstd',
-    'zstd/common',
-    'zstd/compress',
-    'zstd/decompress',
-    'zstd/dictBuilder',
+    "zstd",
+    "zstd/common",
+    "zstd/compress",
+    "zstd/decompress",
+    "zstd/dictBuilder",
 ]
 
 zstd_includes_legacy = [
-    'zstd/deprecated',
-    'zstd/legacy',
+    "zstd/deprecated",
+    "zstd/legacy",
 ]
 
 ext_includes = [
-    'c-ext',
-    'zstd/common',
+    "c-ext",
+    "zstd/common",
 ]
 
 ext_sources = [
-    'zstd/common/pool.c',
-    'zstd/common/threading.c',
-    'zstd.c',
-    'c-ext/bufferutil.c',
-    'c-ext/compressiondict.c',
-    'c-ext/compressobj.c',
-    'c-ext/compressor.c',
-    'c-ext/compressoriterator.c',
-    'c-ext/compressionchunker.c',
-    'c-ext/compressionparams.c',
-    'c-ext/compressionreader.c',
-    'c-ext/compressionwriter.c',
-    'c-ext/constants.c',
-    'c-ext/decompressobj.c',
-    'c-ext/decompressor.c',
-    'c-ext/decompressoriterator.c',
-    'c-ext/decompressionreader.c',
-    'c-ext/decompressionwriter.c',
-    'c-ext/frameparams.c',
+    "zstd/common/error_private.c",
+    "zstd/common/pool.c",
+    "zstd/common/threading.c",
+    "zstd/common/zstd_common.c",
+    "zstd.c",
+    "c-ext/bufferutil.c",
+    "c-ext/compressiondict.c",
+    "c-ext/compressobj.c",
+    "c-ext/compressor.c",
+    "c-ext/compressoriterator.c",
+    "c-ext/compressionchunker.c",
+    "c-ext/compressionparams.c",
+    "c-ext/compressionreader.c",
+    "c-ext/compressionwriter.c",
+    "c-ext/constants.c",
+    "c-ext/decompressobj.c",
+    "c-ext/decompressor.c",
+    "c-ext/decompressoriterator.c",
+    "c-ext/decompressionreader.c",
+    "c-ext/decompressionwriter.c",
+    "c-ext/frameparams.c",
 ]
 
 zstd_depends = [
-    'c-ext/python-zstandard.h',
+    "c-ext/python-zstandard.h",
 ]
 
 
-def get_c_extension(support_legacy=False, system_zstd=False, name='zstd',
-                    warnings_as_errors=False, root=None):
+def get_c_extension(
+    support_legacy=False,
+    system_zstd=False,
+    name="zstd",
+    warnings_as_errors=False,
+    root=None,
+):
     """Obtain a distutils.extension.Extension for the C extension.
 
     ``support_legacy`` controls whether to compile in legacy zstd format support.
@@ -125,17 +138,16 @@
     if not system_zstd:
         sources.update([os.path.join(actual_root, p) for p in zstd_sources])
         if support_legacy:
-            sources.update([os.path.join(actual_root, p)
-                            for p in zstd_sources_legacy])
+            sources.update([os.path.join(actual_root, p) for p in zstd_sources_legacy])
     sources = list(sources)
 
     include_dirs = set([os.path.join(actual_root, d) for d in ext_includes])
     if not system_zstd:
-        include_dirs.update([os.path.join(actual_root, d)
-                             for d in zstd_includes])
+        include_dirs.update([os.path.join(actual_root, d) for d in zstd_includes])
         if support_legacy:
-            include_dirs.update([os.path.join(actual_root, d)
-                                 for d in zstd_includes_legacy])
+            include_dirs.update(
+                [os.path.join(actual_root, d) for d in zstd_includes_legacy]
+            )
     include_dirs = list(include_dirs)
 
     depends = [os.path.join(actual_root, p) for p in zstd_depends]
@@ -143,41 +155,40 @@
     compiler = distutils.ccompiler.new_compiler()
 
     # Needed for MSVC.
-    if hasattr(compiler, 'initialize'):
+    if hasattr(compiler, "initialize"):
         compiler.initialize()
 
-    if compiler.compiler_type == 'unix':
-        compiler_type = 'unix'
-    elif compiler.compiler_type == 'msvc':
-        compiler_type = 'msvc'
-    elif compiler.compiler_type == 'mingw32':
-        compiler_type = 'mingw32'
+    if compiler.compiler_type == "unix":
+        compiler_type = "unix"
+    elif compiler.compiler_type == "msvc":
+        compiler_type = "msvc"
+    elif compiler.compiler_type == "mingw32":
+        compiler_type = "mingw32"
     else:
-        raise Exception('unhandled compiler type: %s' %
-                        compiler.compiler_type)
+        raise Exception("unhandled compiler type: %s" % compiler.compiler_type)
 
-    extra_args = ['-DZSTD_MULTITHREAD']
+    extra_args = ["-DZSTD_MULTITHREAD"]
 
     if not system_zstd:
-        extra_args.append('-DZSTDLIB_VISIBILITY=')
-        extra_args.append('-DZDICTLIB_VISIBILITY=')
-        extra_args.append('-DZSTDERRORLIB_VISIBILITY=')
+        extra_args.append("-DZSTDLIB_VISIBILITY=")
+        extra_args.append("-DZDICTLIB_VISIBILITY=")
+        extra_args.append("-DZSTDERRORLIB_VISIBILITY=")
 
-        if compiler_type == 'unix':
-            extra_args.append('-fvisibility=hidden')
+        if compiler_type == "unix":
+            extra_args.append("-fvisibility=hidden")
 
     if not system_zstd and support_legacy:
-        extra_args.append('-DZSTD_LEGACY_SUPPORT=1')
+        extra_args.append("-DZSTD_LEGACY_SUPPORT=1")
 
     if warnings_as_errors:
-        if compiler_type in ('unix', 'mingw32'):
-            extra_args.append('-Werror')
-        elif compiler_type == 'msvc':
-            extra_args.append('/WX')
+        if compiler_type in ("unix", "mingw32"):
+            extra_args.append("-Werror")
+        elif compiler_type == "msvc":
+            extra_args.append("/WX")
         else:
             assert False
 
-    libraries = ['zstd'] if system_zstd else []
+    libraries = ["zstd"] if system_zstd else []
 
     # Python 3.7 doesn't like absolute paths. So normalize to relative.
     sources = [os.path.relpath(p, root) for p in sources]
@@ -185,8 +196,11 @@
     depends = [os.path.relpath(p, root) for p in depends]
 
     # TODO compile with optimizations.
-    return Extension(name, sources,
-                     include_dirs=include_dirs,
-                     depends=depends,
-                     extra_compile_args=extra_args,
-                     libraries=libraries)
+    return Extension(
+        name,
+        sources,
+        include_dirs=include_dirs,
+        depends=depends,
+        extra_compile_args=extra_args,
+        libraries=libraries,
+    )
--- a/contrib/python-zstandard/tests/common.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/tests/common.py	Sat Dec 28 09:55:45 2019 -0800
@@ -3,6 +3,7 @@
 import io
 import os
 import types
+import unittest
 
 try:
     import hypothesis
@@ -10,39 +11,46 @@
     hypothesis = None
 
 
+class TestCase(unittest.TestCase):
+    if not getattr(unittest.TestCase, "assertRaisesRegex", False):
+        assertRaisesRegex = unittest.TestCase.assertRaisesRegexp
+
+
 def make_cffi(cls):
     """Decorator to add CFFI versions of each test method."""
 
     # The module containing this class definition should
     # `import zstandard as zstd`. Otherwise things may blow up.
     mod = inspect.getmodule(cls)
-    if not hasattr(mod, 'zstd'):
+    if not hasattr(mod, "zstd"):
         raise Exception('test module does not contain "zstd" symbol')
 
-    if not hasattr(mod.zstd, 'backend'):
-        raise Exception('zstd symbol does not have "backend" attribute; did '
-                        'you `import zstandard as zstd`?')
+    if not hasattr(mod.zstd, "backend"):
+        raise Exception(
+            'zstd symbol does not have "backend" attribute; did '
+            "you `import zstandard as zstd`?"
+        )
 
     # If `import zstandard` already chose the cffi backend, there is nothing
     # for us to do: we only add the cffi variation if the default backend
     # is the C extension.
-    if mod.zstd.backend == 'cffi':
+    if mod.zstd.backend == "cffi":
         return cls
 
     old_env = dict(os.environ)
-    os.environ['PYTHON_ZSTANDARD_IMPORT_POLICY'] = 'cffi'
+    os.environ["PYTHON_ZSTANDARD_IMPORT_POLICY"] = "cffi"
     try:
         try:
-            mod_info = imp.find_module('zstandard')
-            mod = imp.load_module('zstandard_cffi', *mod_info)
+            mod_info = imp.find_module("zstandard")
+            mod = imp.load_module("zstandard_cffi", *mod_info)
         except ImportError:
             return cls
     finally:
         os.environ.clear()
         os.environ.update(old_env)
 
-    if mod.backend != 'cffi':
-        raise Exception('got the zstandard %s backend instead of cffi' % mod.backend)
+    if mod.backend != "cffi":
+        raise Exception("got the zstandard %s backend instead of cffi" % mod.backend)
 
     # If CFFI version is available, dynamically construct test methods
     # that use it.
@@ -52,27 +60,31 @@
         if not inspect.ismethod(fn) and not inspect.isfunction(fn):
             continue
 
-        if not fn.__name__.startswith('test_'):
+        if not fn.__name__.startswith("test_"):
             continue
 
-        name = '%s_cffi' % fn.__name__
+        name = "%s_cffi" % fn.__name__
 
         # Replace the "zstd" symbol with the CFFI module instance. Then copy
         # the function object and install it in a new attribute.
         if isinstance(fn, types.FunctionType):
             globs = dict(fn.__globals__)
-            globs['zstd'] = mod
-            new_fn = types.FunctionType(fn.__code__, globs, name,
-                                        fn.__defaults__, fn.__closure__)
+            globs["zstd"] = mod
+            new_fn = types.FunctionType(
+                fn.__code__, globs, name, fn.__defaults__, fn.__closure__
+            )
             new_method = new_fn
         else:
             globs = dict(fn.__func__.func_globals)
-            globs['zstd'] = mod
-            new_fn = types.FunctionType(fn.__func__.func_code, globs, name,
-                                        fn.__func__.func_defaults,
-                                        fn.__func__.func_closure)
-            new_method = types.UnboundMethodType(new_fn, fn.im_self,
-                                                 fn.im_class)
+            globs["zstd"] = mod
+            new_fn = types.FunctionType(
+                fn.__func__.func_code,
+                globs,
+                name,
+                fn.__func__.func_defaults,
+                fn.__func__.func_closure,
+            )
+            new_method = types.UnboundMethodType(new_fn, fn.im_self, fn.im_class)
 
         setattr(cls, name, new_method)
 
@@ -84,6 +96,7 @@
 
     This allows us to access written data after close().
     """
+
     def __init__(self, *args, **kwargs):
         super(NonClosingBytesIO, self).__init__(*args, **kwargs)
         self._saved_buffer = None
@@ -135,7 +148,7 @@
         dirs[:] = list(sorted(dirs))
         for f in sorted(files):
             try:
-                with open(os.path.join(root, f), 'rb') as fh:
+                with open(os.path.join(root, f), "rb") as fh:
                     data = fh.read()
                     if data:
                         _source_files.append(data)
@@ -154,11 +167,11 @@
 
 def generate_samples():
     inputs = [
-        b'foo',
-        b'bar',
-        b'abcdef',
-        b'sometext',
-        b'baz',
+        b"foo",
+        b"bar",
+        b"abcdef",
+        b"sometext",
+        b"baz",
     ]
 
     samples = []
@@ -173,13 +186,12 @@
 
 if hypothesis:
     default_settings = hypothesis.settings(deadline=10000)
-    hypothesis.settings.register_profile('default', default_settings)
+    hypothesis.settings.register_profile("default", default_settings)
 
     ci_settings = hypothesis.settings(deadline=20000, max_examples=1000)
-    hypothesis.settings.register_profile('ci', ci_settings)
+    hypothesis.settings.register_profile("ci", ci_settings)
 
     expensive_settings = hypothesis.settings(deadline=None, max_examples=10000)
-    hypothesis.settings.register_profile('expensive', expensive_settings)
+    hypothesis.settings.register_profile("expensive", expensive_settings)
 
-    hypothesis.settings.load_profile(
-        os.environ.get('HYPOTHESIS_PROFILE', 'default'))
+    hypothesis.settings.load_profile(os.environ.get("HYPOTHESIS_PROFILE", "default"))
--- a/contrib/python-zstandard/tests/test_buffer_util.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/tests/test_buffer_util.py	Sat Dec 28 09:55:45 2019 -0800
@@ -3,104 +3,114 @@
 
 import zstandard as zstd
 
-ss = struct.Struct('=QQ')
+from .common import TestCase
+
+ss = struct.Struct("=QQ")
 
 
-class TestBufferWithSegments(unittest.TestCase):
+class TestBufferWithSegments(TestCase):
     def test_arguments(self):
-        if not hasattr(zstd, 'BufferWithSegments'):
-            self.skipTest('BufferWithSegments not available')
+        if not hasattr(zstd, "BufferWithSegments"):
+            self.skipTest("BufferWithSegments not available")
 
         with self.assertRaises(TypeError):
             zstd.BufferWithSegments()
 
         with self.assertRaises(TypeError):
-            zstd.BufferWithSegments(b'foo')
+            zstd.BufferWithSegments(b"foo")
 
         # Segments data should be a multiple of 16.
-        with self.assertRaisesRegexp(ValueError, 'segments array size is not a multiple of 16'):
-            zstd.BufferWithSegments(b'foo', b'\x00\x00')
+        with self.assertRaisesRegex(
+            ValueError, "segments array size is not a multiple of 16"
+        ):
+            zstd.BufferWithSegments(b"foo", b"\x00\x00")
 
     def test_invalid_offset(self):
-        if not hasattr(zstd, 'BufferWithSegments'):
-            self.skipTest('BufferWithSegments not available')
+        if not hasattr(zstd, "BufferWithSegments"):
+            self.skipTest("BufferWithSegments not available")
 
-        with self.assertRaisesRegexp(ValueError, 'offset within segments array references memory'):
-            zstd.BufferWithSegments(b'foo', ss.pack(0, 4))
+        with self.assertRaisesRegex(
+            ValueError, "offset within segments array references memory"
+        ):
+            zstd.BufferWithSegments(b"foo", ss.pack(0, 4))
 
     def test_invalid_getitem(self):
-        if not hasattr(zstd, 'BufferWithSegments'):
-            self.skipTest('BufferWithSegments not available')
+        if not hasattr(zstd, "BufferWithSegments"):
+            self.skipTest("BufferWithSegments not available")
 
-        b = zstd.BufferWithSegments(b'foo', ss.pack(0, 3))
+        b = zstd.BufferWithSegments(b"foo", ss.pack(0, 3))
 
-        with self.assertRaisesRegexp(IndexError, 'offset must be non-negative'):
+        with self.assertRaisesRegex(IndexError, "offset must be non-negative"):
             test = b[-10]
 
-        with self.assertRaisesRegexp(IndexError, 'offset must be less than 1'):
+        with self.assertRaisesRegex(IndexError, "offset must be less than 1"):
             test = b[1]
 
-        with self.assertRaisesRegexp(IndexError, 'offset must be less than 1'):
+        with self.assertRaisesRegex(IndexError, "offset must be less than 1"):
             test = b[2]
 
     def test_single(self):
-        if not hasattr(zstd, 'BufferWithSegments'):
-            self.skipTest('BufferWithSegments not available')
+        if not hasattr(zstd, "BufferWithSegments"):
+            self.skipTest("BufferWithSegments not available")
 
-        b = zstd.BufferWithSegments(b'foo', ss.pack(0, 3))
+        b = zstd.BufferWithSegments(b"foo", ss.pack(0, 3))
         self.assertEqual(len(b), 1)
         self.assertEqual(b.size, 3)
-        self.assertEqual(b.tobytes(), b'foo')
+        self.assertEqual(b.tobytes(), b"foo")
 
         self.assertEqual(len(b[0]), 3)
         self.assertEqual(b[0].offset, 0)
-        self.assertEqual(b[0].tobytes(), b'foo')
+        self.assertEqual(b[0].tobytes(), b"foo")
 
     def test_multiple(self):
-        if not hasattr(zstd, 'BufferWithSegments'):
-            self.skipTest('BufferWithSegments not available')
+        if not hasattr(zstd, "BufferWithSegments"):
+            self.skipTest("BufferWithSegments not available")
 
-        b = zstd.BufferWithSegments(b'foofooxfooxy', b''.join([ss.pack(0, 3),
-                                                               ss.pack(3, 4),
-                                                               ss.pack(7, 5)]))
+        b = zstd.BufferWithSegments(
+            b"foofooxfooxy", b"".join([ss.pack(0, 3), ss.pack(3, 4), ss.pack(7, 5)])
+        )
         self.assertEqual(len(b), 3)
         self.assertEqual(b.size, 12)
-        self.assertEqual(b.tobytes(), b'foofooxfooxy')
+        self.assertEqual(b.tobytes(), b"foofooxfooxy")
 
-        self.assertEqual(b[0].tobytes(), b'foo')
-        self.assertEqual(b[1].tobytes(), b'foox')
-        self.assertEqual(b[2].tobytes(), b'fooxy')
+        self.assertEqual(b[0].tobytes(), b"foo")
+        self.assertEqual(b[1].tobytes(), b"foox")
+        self.assertEqual(b[2].tobytes(), b"fooxy")
 
 
-class TestBufferWithSegmentsCollection(unittest.TestCase):
+class TestBufferWithSegmentsCollection(TestCase):
     def test_empty_constructor(self):
-        if not hasattr(zstd, 'BufferWithSegmentsCollection'):
-            self.skipTest('BufferWithSegmentsCollection not available')
+        if not hasattr(zstd, "BufferWithSegmentsCollection"):
+            self.skipTest("BufferWithSegmentsCollection not available")
 
-        with self.assertRaisesRegexp(ValueError, 'must pass at least 1 argument'):
+        with self.assertRaisesRegex(ValueError, "must pass at least 1 argument"):
             zstd.BufferWithSegmentsCollection()
 
     def test_argument_validation(self):
-        if not hasattr(zstd, 'BufferWithSegmentsCollection'):
-            self.skipTest('BufferWithSegmentsCollection not available')
+        if not hasattr(zstd, "BufferWithSegmentsCollection"):
+            self.skipTest("BufferWithSegmentsCollection not available")
 
-        with self.assertRaisesRegexp(TypeError, 'arguments must be BufferWithSegments'):
+        with self.assertRaisesRegex(TypeError, "arguments must be BufferWithSegments"):
             zstd.BufferWithSegmentsCollection(None)
 
-        with self.assertRaisesRegexp(TypeError, 'arguments must be BufferWithSegments'):
-            zstd.BufferWithSegmentsCollection(zstd.BufferWithSegments(b'foo', ss.pack(0, 3)),
-                                              None)
+        with self.assertRaisesRegex(TypeError, "arguments must be BufferWithSegments"):
+            zstd.BufferWithSegmentsCollection(
+                zstd.BufferWithSegments(b"foo", ss.pack(0, 3)), None
+            )
 
-        with self.assertRaisesRegexp(ValueError, 'ZstdBufferWithSegments cannot be empty'):
-            zstd.BufferWithSegmentsCollection(zstd.BufferWithSegments(b'', b''))
+        with self.assertRaisesRegex(
+            ValueError, "ZstdBufferWithSegments cannot be empty"
+        ):
+            zstd.BufferWithSegmentsCollection(zstd.BufferWithSegments(b"", b""))
 
     def test_length(self):
-        if not hasattr(zstd, 'BufferWithSegmentsCollection'):
-            self.skipTest('BufferWithSegmentsCollection not available')
+        if not hasattr(zstd, "BufferWithSegmentsCollection"):
+            self.skipTest("BufferWithSegmentsCollection not available")
 
-        b1 = zstd.BufferWithSegments(b'foo', ss.pack(0, 3))
-        b2 = zstd.BufferWithSegments(b'barbaz', b''.join([ss.pack(0, 3),
-                                                          ss.pack(3, 3)]))
+        b1 = zstd.BufferWithSegments(b"foo", ss.pack(0, 3))
+        b2 = zstd.BufferWithSegments(
+            b"barbaz", b"".join([ss.pack(0, 3), ss.pack(3, 3)])
+        )
 
         c = zstd.BufferWithSegmentsCollection(b1)
         self.assertEqual(len(c), 1)
@@ -115,21 +125,22 @@
         self.assertEqual(c.size(), 9)
 
     def test_getitem(self):
-        if not hasattr(zstd, 'BufferWithSegmentsCollection'):
-            self.skipTest('BufferWithSegmentsCollection not available')
+        if not hasattr(zstd, "BufferWithSegmentsCollection"):
+            self.skipTest("BufferWithSegmentsCollection not available")
 
-        b1 = zstd.BufferWithSegments(b'foo', ss.pack(0, 3))
-        b2 = zstd.BufferWithSegments(b'barbaz', b''.join([ss.pack(0, 3),
-                                                          ss.pack(3, 3)]))
+        b1 = zstd.BufferWithSegments(b"foo", ss.pack(0, 3))
+        b2 = zstd.BufferWithSegments(
+            b"barbaz", b"".join([ss.pack(0, 3), ss.pack(3, 3)])
+        )
 
         c = zstd.BufferWithSegmentsCollection(b1, b2)
 
-        with self.assertRaisesRegexp(IndexError, 'offset must be less than 3'):
+        with self.assertRaisesRegex(IndexError, "offset must be less than 3"):
             c[3]
 
-        with self.assertRaisesRegexp(IndexError, 'offset must be less than 3'):
+        with self.assertRaisesRegex(IndexError, "offset must be less than 3"):
             c[4]
 
-        self.assertEqual(c[0].tobytes(), b'foo')
-        self.assertEqual(c[1].tobytes(), b'bar')
-        self.assertEqual(c[2].tobytes(), b'baz')
+        self.assertEqual(c[0].tobytes(), b"foo")
+        self.assertEqual(c[1].tobytes(), b"bar")
+        self.assertEqual(c[2].tobytes(), b"baz")
--- a/contrib/python-zstandard/tests/test_compressor.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/tests/test_compressor.py	Sat Dec 28 09:55:45 2019 -0800
@@ -13,6 +13,7 @@
     make_cffi,
     NonClosingBytesIO,
     OpCountingBytesIO,
+    TestCase,
 )
 
 
@@ -23,14 +24,13 @@
 
 
 def multithreaded_chunk_size(level, source_size=0):
-    params = zstd.ZstdCompressionParameters.from_level(level,
-                                                       source_size=source_size)
+    params = zstd.ZstdCompressionParameters.from_level(level, source_size=source_size)
 
     return 1 << (params.window_log + 2)
 
 
 @make_cffi
-class TestCompressor(unittest.TestCase):
+class TestCompressor(TestCase):
     def test_level_bounds(self):
         with self.assertRaises(ValueError):
             zstd.ZstdCompressor(level=23)
@@ -41,11 +41,11 @@
 
 
 @make_cffi
-class TestCompressor_compress(unittest.TestCase):
+class TestCompressor_compress(TestCase):
     def test_compress_empty(self):
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
-        result = cctx.compress(b'')
-        self.assertEqual(result, b'\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00')
+        result = cctx.compress(b"")
+        self.assertEqual(result, b"\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00")
         params = zstd.get_frame_parameters(result)
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
         self.assertEqual(params.window_size, 524288)
@@ -53,21 +53,21 @@
         self.assertFalse(params.has_checksum, 0)
 
         cctx = zstd.ZstdCompressor()
-        result = cctx.compress(b'')
-        self.assertEqual(result, b'\x28\xb5\x2f\xfd\x20\x00\x01\x00\x00')
+        result = cctx.compress(b"")
+        self.assertEqual(result, b"\x28\xb5\x2f\xfd\x20\x00\x01\x00\x00")
         params = zstd.get_frame_parameters(result)
         self.assertEqual(params.content_size, 0)
 
     def test_input_types(self):
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
-        expected = b'\x28\xb5\x2f\xfd\x00\x00\x19\x00\x00\x66\x6f\x6f'
+        expected = b"\x28\xb5\x2f\xfd\x00\x00\x19\x00\x00\x66\x6f\x6f"
 
         mutable_array = bytearray(3)
-        mutable_array[:] = b'foo'
+        mutable_array[:] = b"foo"
 
         sources = [
-            memoryview(b'foo'),
-            bytearray(b'foo'),
+            memoryview(b"foo"),
+            bytearray(b"foo"),
             mutable_array,
         ]
 
@@ -77,43 +77,46 @@
     def test_compress_large(self):
         chunks = []
         for i in range(255):
-            chunks.append(struct.Struct('>B').pack(i) * 16384)
+            chunks.append(struct.Struct(">B").pack(i) * 16384)
 
         cctx = zstd.ZstdCompressor(level=3, write_content_size=False)
-        result = cctx.compress(b''.join(chunks))
+        result = cctx.compress(b"".join(chunks))
         self.assertEqual(len(result), 999)
-        self.assertEqual(result[0:4], b'\x28\xb5\x2f\xfd')
+        self.assertEqual(result[0:4], b"\x28\xb5\x2f\xfd")
 
         # This matches the test for read_to_iter() below.
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
-        result = cctx.compress(b'f' * zstd.COMPRESSION_RECOMMENDED_INPUT_SIZE + b'o')
-        self.assertEqual(result, b'\x28\xb5\x2f\xfd\x00\x40\x54\x00\x00'
-                                 b'\x10\x66\x66\x01\x00\xfb\xff\x39\xc0'
-                                 b'\x02\x09\x00\x00\x6f')
+        result = cctx.compress(b"f" * zstd.COMPRESSION_RECOMMENDED_INPUT_SIZE + b"o")
+        self.assertEqual(
+            result,
+            b"\x28\xb5\x2f\xfd\x00\x40\x54\x00\x00"
+            b"\x10\x66\x66\x01\x00\xfb\xff\x39\xc0"
+            b"\x02\x09\x00\x00\x6f",
+        )
 
     def test_negative_level(self):
         cctx = zstd.ZstdCompressor(level=-4)
-        result = cctx.compress(b'foo' * 256)
+        result = cctx.compress(b"foo" * 256)
 
     def test_no_magic(self):
-        params = zstd.ZstdCompressionParameters.from_level(
-            1, format=zstd.FORMAT_ZSTD1)
+        params = zstd.ZstdCompressionParameters.from_level(1, format=zstd.FORMAT_ZSTD1)
         cctx = zstd.ZstdCompressor(compression_params=params)
-        magic = cctx.compress(b'foobar')
+        magic = cctx.compress(b"foobar")
 
         params = zstd.ZstdCompressionParameters.from_level(
-            1, format=zstd.FORMAT_ZSTD1_MAGICLESS)
+            1, format=zstd.FORMAT_ZSTD1_MAGICLESS
+        )
         cctx = zstd.ZstdCompressor(compression_params=params)
-        no_magic = cctx.compress(b'foobar')
+        no_magic = cctx.compress(b"foobar")
 
-        self.assertEqual(magic[0:4], b'\x28\xb5\x2f\xfd')
+        self.assertEqual(magic[0:4], b"\x28\xb5\x2f\xfd")
         self.assertEqual(magic[4:], no_magic)
 
     def test_write_checksum(self):
         cctx = zstd.ZstdCompressor(level=1)
-        no_checksum = cctx.compress(b'foobar')
+        no_checksum = cctx.compress(b"foobar")
         cctx = zstd.ZstdCompressor(level=1, write_checksum=True)
-        with_checksum = cctx.compress(b'foobar')
+        with_checksum = cctx.compress(b"foobar")
 
         self.assertEqual(len(with_checksum), len(no_checksum) + 4)
 
@@ -125,9 +128,9 @@
 
     def test_write_content_size(self):
         cctx = zstd.ZstdCompressor(level=1)
-        with_size = cctx.compress(b'foobar' * 256)
+        with_size = cctx.compress(b"foobar" * 256)
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
-        no_size = cctx.compress(b'foobar' * 256)
+        no_size = cctx.compress(b"foobar" * 256)
 
         self.assertEqual(len(with_size), len(no_size) + 1)
 
@@ -139,17 +142,17 @@
     def test_no_dict_id(self):
         samples = []
         for i in range(128):
-            samples.append(b'foo' * 64)
-            samples.append(b'bar' * 64)
-            samples.append(b'foobar' * 64)
+            samples.append(b"foo" * 64)
+            samples.append(b"bar" * 64)
+            samples.append(b"foobar" * 64)
 
         d = zstd.train_dictionary(1024, samples)
 
         cctx = zstd.ZstdCompressor(level=1, dict_data=d)
-        with_dict_id = cctx.compress(b'foobarfoobar')
+        with_dict_id = cctx.compress(b"foobarfoobar")
 
         cctx = zstd.ZstdCompressor(level=1, dict_data=d, write_dict_id=False)
-        no_dict_id = cctx.compress(b'foobarfoobar')
+        no_dict_id = cctx.compress(b"foobarfoobar")
 
         self.assertEqual(len(with_dict_id), len(no_dict_id) + 4)
 
@@ -161,23 +164,23 @@
     def test_compress_dict_multiple(self):
         samples = []
         for i in range(128):
-            samples.append(b'foo' * 64)
-            samples.append(b'bar' * 64)
-            samples.append(b'foobar' * 64)
+            samples.append(b"foo" * 64)
+            samples.append(b"bar" * 64)
+            samples.append(b"foobar" * 64)
 
         d = zstd.train_dictionary(8192, samples)
 
         cctx = zstd.ZstdCompressor(level=1, dict_data=d)
 
         for i in range(32):
-            cctx.compress(b'foo bar foobar foo bar foobar')
+            cctx.compress(b"foo bar foobar foo bar foobar")
 
     def test_dict_precompute(self):
         samples = []
         for i in range(128):
-            samples.append(b'foo' * 64)
-            samples.append(b'bar' * 64)
-            samples.append(b'foobar' * 64)
+            samples.append(b"foo" * 64)
+            samples.append(b"bar" * 64)
+            samples.append(b"foobar" * 64)
 
         d = zstd.train_dictionary(8192, samples)
         d.precompute_compress(level=1)
@@ -185,11 +188,11 @@
         cctx = zstd.ZstdCompressor(level=1, dict_data=d)
 
         for i in range(32):
-            cctx.compress(b'foo bar foobar foo bar foobar')
+            cctx.compress(b"foo bar foobar foo bar foobar")
 
     def test_multithreaded(self):
         chunk_size = multithreaded_chunk_size(1)
-        source = b''.join([b'x' * chunk_size, b'y' * chunk_size])
+        source = b"".join([b"x" * chunk_size, b"y" * chunk_size])
 
         cctx = zstd.ZstdCompressor(level=1, threads=2)
         compressed = cctx.compress(source)
@@ -205,73 +208,72 @@
     def test_multithreaded_dict(self):
         samples = []
         for i in range(128):
-            samples.append(b'foo' * 64)
-            samples.append(b'bar' * 64)
-            samples.append(b'foobar' * 64)
+            samples.append(b"foo" * 64)
+            samples.append(b"bar" * 64)
+            samples.append(b"foobar" * 64)
 
         d = zstd.train_dictionary(1024, samples)
 
         cctx = zstd.ZstdCompressor(dict_data=d, threads=2)
 
-        result = cctx.compress(b'foo')
-        params = zstd.get_frame_parameters(result);
-        self.assertEqual(params.content_size, 3);
+        result = cctx.compress(b"foo")
+        params = zstd.get_frame_parameters(result)
+        self.assertEqual(params.content_size, 3)
         self.assertEqual(params.dict_id, d.dict_id())
 
-        self.assertEqual(result,
-                         b'\x28\xb5\x2f\xfd\x23\x8f\x55\x0f\x70\x03\x19\x00\x00'
-                         b'\x66\x6f\x6f')
+        self.assertEqual(
+            result,
+            b"\x28\xb5\x2f\xfd\x23\x8f\x55\x0f\x70\x03\x19\x00\x00" b"\x66\x6f\x6f",
+        )
 
     def test_multithreaded_compression_params(self):
         params = zstd.ZstdCompressionParameters.from_level(0, threads=2)
         cctx = zstd.ZstdCompressor(compression_params=params)
 
-        result = cctx.compress(b'foo')
-        params = zstd.get_frame_parameters(result);
-        self.assertEqual(params.content_size, 3);
+        result = cctx.compress(b"foo")
+        params = zstd.get_frame_parameters(result)
+        self.assertEqual(params.content_size, 3)
 
-        self.assertEqual(result,
-                         b'\x28\xb5\x2f\xfd\x20\x03\x19\x00\x00\x66\x6f\x6f')
+        self.assertEqual(result, b"\x28\xb5\x2f\xfd\x20\x03\x19\x00\x00\x66\x6f\x6f")
 
 
 @make_cffi
-class TestCompressor_compressobj(unittest.TestCase):
+class TestCompressor_compressobj(TestCase):
     def test_compressobj_empty(self):
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
         cobj = cctx.compressobj()
-        self.assertEqual(cobj.compress(b''), b'')
-        self.assertEqual(cobj.flush(),
-                         b'\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00')
+        self.assertEqual(cobj.compress(b""), b"")
+        self.assertEqual(cobj.flush(), b"\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00")
 
     def test_input_types(self):
-        expected = b'\x28\xb5\x2f\xfd\x00\x48\x19\x00\x00\x66\x6f\x6f'
+        expected = b"\x28\xb5\x2f\xfd\x00\x48\x19\x00\x00\x66\x6f\x6f"
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
 
         mutable_array = bytearray(3)
-        mutable_array[:] = b'foo'
+        mutable_array[:] = b"foo"
 
         sources = [
-            memoryview(b'foo'),
-            bytearray(b'foo'),
+            memoryview(b"foo"),
+            bytearray(b"foo"),
             mutable_array,
         ]
 
         for source in sources:
             cobj = cctx.compressobj()
-            self.assertEqual(cobj.compress(source), b'')
+            self.assertEqual(cobj.compress(source), b"")
             self.assertEqual(cobj.flush(), expected)
 
     def test_compressobj_large(self):
         chunks = []
         for i in range(255):
-            chunks.append(struct.Struct('>B').pack(i) * 16384)
+            chunks.append(struct.Struct(">B").pack(i) * 16384)
 
         cctx = zstd.ZstdCompressor(level=3)
         cobj = cctx.compressobj()
 
-        result = cobj.compress(b''.join(chunks)) + cobj.flush()
+        result = cobj.compress(b"".join(chunks)) + cobj.flush()
         self.assertEqual(len(result), 999)
-        self.assertEqual(result[0:4], b'\x28\xb5\x2f\xfd')
+        self.assertEqual(result[0:4], b"\x28\xb5\x2f\xfd")
 
         params = zstd.get_frame_parameters(result)
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
@@ -282,10 +284,10 @@
     def test_write_checksum(self):
         cctx = zstd.ZstdCompressor(level=1)
         cobj = cctx.compressobj()
-        no_checksum = cobj.compress(b'foobar') + cobj.flush()
+        no_checksum = cobj.compress(b"foobar") + cobj.flush()
         cctx = zstd.ZstdCompressor(level=1, write_checksum=True)
         cobj = cctx.compressobj()
-        with_checksum = cobj.compress(b'foobar') + cobj.flush()
+        with_checksum = cobj.compress(b"foobar") + cobj.flush()
 
         no_params = zstd.get_frame_parameters(no_checksum)
         with_params = zstd.get_frame_parameters(with_checksum)
@@ -300,11 +302,11 @@
 
     def test_write_content_size(self):
         cctx = zstd.ZstdCompressor(level=1)
-        cobj = cctx.compressobj(size=len(b'foobar' * 256))
-        with_size = cobj.compress(b'foobar' * 256) + cobj.flush()
+        cobj = cctx.compressobj(size=len(b"foobar" * 256))
+        with_size = cobj.compress(b"foobar" * 256) + cobj.flush()
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
-        cobj = cctx.compressobj(size=len(b'foobar' * 256))
-        no_size = cobj.compress(b'foobar' * 256) + cobj.flush()
+        cobj = cctx.compressobj(size=len(b"foobar" * 256))
+        no_size = cobj.compress(b"foobar" * 256) + cobj.flush()
 
         no_params = zstd.get_frame_parameters(no_size)
         with_params = zstd.get_frame_parameters(with_size)
@@ -321,48 +323,53 @@
         cctx = zstd.ZstdCompressor()
         cobj = cctx.compressobj()
 
-        cobj.compress(b'foo')
+        cobj.compress(b"foo")
         cobj.flush()
 
-        with self.assertRaisesRegexp(zstd.ZstdError, r'cannot call compress\(\) after compressor'):
-            cobj.compress(b'foo')
+        with self.assertRaisesRegex(
+            zstd.ZstdError, r"cannot call compress\(\) after compressor"
+        ):
+            cobj.compress(b"foo")
 
-        with self.assertRaisesRegexp(zstd.ZstdError, 'compressor object already finished'):
+        with self.assertRaisesRegex(
+            zstd.ZstdError, "compressor object already finished"
+        ):
             cobj.flush()
 
     def test_flush_block_repeated(self):
         cctx = zstd.ZstdCompressor(level=1)
         cobj = cctx.compressobj()
 
-        self.assertEqual(cobj.compress(b'foo'), b'')
-        self.assertEqual(cobj.flush(zstd.COMPRESSOBJ_FLUSH_BLOCK),
-                         b'\x28\xb5\x2f\xfd\x00\x48\x18\x00\x00foo')
-        self.assertEqual(cobj.compress(b'bar'), b'')
+        self.assertEqual(cobj.compress(b"foo"), b"")
+        self.assertEqual(
+            cobj.flush(zstd.COMPRESSOBJ_FLUSH_BLOCK),
+            b"\x28\xb5\x2f\xfd\x00\x48\x18\x00\x00foo",
+        )
+        self.assertEqual(cobj.compress(b"bar"), b"")
         # 3 byte header plus content.
-        self.assertEqual(cobj.flush(zstd.COMPRESSOBJ_FLUSH_BLOCK),
-                         b'\x18\x00\x00bar')
-        self.assertEqual(cobj.flush(), b'\x01\x00\x00')
+        self.assertEqual(cobj.flush(zstd.COMPRESSOBJ_FLUSH_BLOCK), b"\x18\x00\x00bar")
+        self.assertEqual(cobj.flush(), b"\x01\x00\x00")
 
     def test_flush_empty_block(self):
         cctx = zstd.ZstdCompressor(write_checksum=True)
         cobj = cctx.compressobj()
 
-        cobj.compress(b'foobar')
+        cobj.compress(b"foobar")
         cobj.flush(zstd.COMPRESSOBJ_FLUSH_BLOCK)
         # No-op if no block is active (this is internal to zstd).
-        self.assertEqual(cobj.flush(zstd.COMPRESSOBJ_FLUSH_BLOCK), b'')
+        self.assertEqual(cobj.flush(zstd.COMPRESSOBJ_FLUSH_BLOCK), b"")
 
         trailing = cobj.flush()
         # 3 bytes block header + 4 bytes frame checksum
         self.assertEqual(len(trailing), 7)
         header = trailing[0:3]
-        self.assertEqual(header, b'\x01\x00\x00')
+        self.assertEqual(header, b"\x01\x00\x00")
 
     def test_multithreaded(self):
         source = io.BytesIO()
-        source.write(b'a' * 1048576)
-        source.write(b'b' * 1048576)
-        source.write(b'c' * 1048576)
+        source.write(b"a" * 1048576)
+        source.write(b"b" * 1048576)
+        source.write(b"c" * 1048576)
         source.seek(0)
 
         cctx = zstd.ZstdCompressor(level=1, threads=2)
@@ -378,9 +385,9 @@
 
         chunks.append(cobj.flush())
 
-        compressed = b''.join(chunks)
+        compressed = b"".join(chunks)
 
-        self.assertEqual(len(compressed), 295)
+        self.assertEqual(len(compressed), 119)
 
     def test_frame_progression(self):
         cctx = zstd.ZstdCompressor()
@@ -389,7 +396,7 @@
 
         cobj = cctx.compressobj()
 
-        cobj.compress(b'foobar')
+        cobj.compress(b"foobar")
         self.assertEqual(cctx.frame_progression(), (6, 0, 0))
 
         cobj.flush()
@@ -399,20 +406,20 @@
         cctx = zstd.ZstdCompressor()
 
         cobj = cctx.compressobj(size=2)
-        with self.assertRaisesRegexp(zstd.ZstdError, 'Src size is incorrect'):
-            cobj.compress(b'foo')
+        with self.assertRaisesRegex(zstd.ZstdError, "Src size is incorrect"):
+            cobj.compress(b"foo")
 
         # Try another operation on this instance.
-        with self.assertRaisesRegexp(zstd.ZstdError, 'Src size is incorrect'):
-            cobj.compress(b'aa')
+        with self.assertRaisesRegex(zstd.ZstdError, "Src size is incorrect"):
+            cobj.compress(b"aa")
 
         # Try another operation on the compressor.
         cctx.compressobj(size=4)
-        cctx.compress(b'foobar')
+        cctx.compress(b"foobar")
 
 
 @make_cffi
-class TestCompressor_copy_stream(unittest.TestCase):
+class TestCompressor_copy_stream(TestCase):
     def test_no_read(self):
         source = object()
         dest = io.BytesIO()
@@ -438,13 +445,12 @@
         self.assertEqual(int(r), 0)
         self.assertEqual(w, 9)
 
-        self.assertEqual(dest.getvalue(),
-                         b'\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00')
+        self.assertEqual(dest.getvalue(), b"\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00")
 
     def test_large_data(self):
         source = io.BytesIO()
         for i in range(255):
-            source.write(struct.Struct('>B').pack(i) * 16384)
+            source.write(struct.Struct(">B").pack(i) * 16384)
         source.seek(0)
 
         dest = io.BytesIO()
@@ -461,7 +467,7 @@
         self.assertFalse(params.has_checksum)
 
     def test_write_checksum(self):
-        source = io.BytesIO(b'foobar')
+        source = io.BytesIO(b"foobar")
         no_checksum = io.BytesIO()
 
         cctx = zstd.ZstdCompressor(level=1)
@@ -472,8 +478,7 @@
         cctx = zstd.ZstdCompressor(level=1, write_checksum=True)
         cctx.copy_stream(source, with_checksum)
 
-        self.assertEqual(len(with_checksum.getvalue()),
-                         len(no_checksum.getvalue()) + 4)
+        self.assertEqual(len(with_checksum.getvalue()), len(no_checksum.getvalue()) + 4)
 
         no_params = zstd.get_frame_parameters(no_checksum.getvalue())
         with_params = zstd.get_frame_parameters(with_checksum.getvalue())
@@ -485,7 +490,7 @@
         self.assertTrue(with_params.has_checksum)
 
     def test_write_content_size(self):
-        source = io.BytesIO(b'foobar' * 256)
+        source = io.BytesIO(b"foobar" * 256)
         no_size = io.BytesIO()
 
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
@@ -497,16 +502,14 @@
         cctx.copy_stream(source, with_size)
 
         # Source content size is unknown, so no content size written.
-        self.assertEqual(len(with_size.getvalue()),
-                         len(no_size.getvalue()))
+        self.assertEqual(len(with_size.getvalue()), len(no_size.getvalue()))
 
         source.seek(0)
         with_size = io.BytesIO()
         cctx.copy_stream(source, with_size, size=len(source.getvalue()))
 
         # We specified source size, so content size header is present.
-        self.assertEqual(len(with_size.getvalue()),
-                         len(no_size.getvalue()) + 1)
+        self.assertEqual(len(with_size.getvalue()), len(no_size.getvalue()) + 1)
 
         no_params = zstd.get_frame_parameters(no_size.getvalue())
         with_params = zstd.get_frame_parameters(with_size.getvalue())
@@ -518,7 +521,7 @@
         self.assertFalse(with_params.has_checksum)
 
     def test_read_write_size(self):
-        source = OpCountingBytesIO(b'foobarfoobar')
+        source = OpCountingBytesIO(b"foobarfoobar")
         dest = OpCountingBytesIO()
         cctx = zstd.ZstdCompressor()
         r, w = cctx.copy_stream(source, dest, read_size=1, write_size=1)
@@ -530,16 +533,16 @@
 
     def test_multithreaded(self):
         source = io.BytesIO()
-        source.write(b'a' * 1048576)
-        source.write(b'b' * 1048576)
-        source.write(b'c' * 1048576)
+        source.write(b"a" * 1048576)
+        source.write(b"b" * 1048576)
+        source.write(b"c" * 1048576)
         source.seek(0)
 
         dest = io.BytesIO()
         cctx = zstd.ZstdCompressor(threads=2, write_content_size=False)
         r, w = cctx.copy_stream(source, dest)
         self.assertEqual(r, 3145728)
-        self.assertEqual(w, 295)
+        self.assertEqual(w, 111)
 
         params = zstd.get_frame_parameters(dest.getvalue())
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
@@ -559,15 +562,15 @@
 
     def test_bad_size(self):
         source = io.BytesIO()
-        source.write(b'a' * 32768)
-        source.write(b'b' * 32768)
+        source.write(b"a" * 32768)
+        source.write(b"b" * 32768)
         source.seek(0)
 
         dest = io.BytesIO()
 
         cctx = zstd.ZstdCompressor()
 
-        with self.assertRaisesRegexp(zstd.ZstdError, 'Src size is incorrect'):
+        with self.assertRaisesRegex(zstd.ZstdError, "Src size is incorrect"):
             cctx.copy_stream(source, dest, size=42)
 
         # Try another operation on this compressor.
@@ -577,31 +580,31 @@
 
 
 @make_cffi
-class TestCompressor_stream_reader(unittest.TestCase):
+class TestCompressor_stream_reader(TestCase):
     def test_context_manager(self):
         cctx = zstd.ZstdCompressor()
 
-        with cctx.stream_reader(b'foo') as reader:
-            with self.assertRaisesRegexp(ValueError, 'cannot __enter__ multiple times'):
+        with cctx.stream_reader(b"foo") as reader:
+            with self.assertRaisesRegex(ValueError, "cannot __enter__ multiple times"):
                 with reader as reader2:
                     pass
 
     def test_no_context_manager(self):
         cctx = zstd.ZstdCompressor()
 
-        reader = cctx.stream_reader(b'foo')
+        reader = cctx.stream_reader(b"foo")
         reader.read(4)
         self.assertFalse(reader.closed)
 
         reader.close()
         self.assertTrue(reader.closed)
-        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+        with self.assertRaisesRegex(ValueError, "stream is closed"):
             reader.read(1)
 
     def test_not_implemented(self):
         cctx = zstd.ZstdCompressor()
 
-        with cctx.stream_reader(b'foo' * 60) as reader:
+        with cctx.stream_reader(b"foo" * 60) as reader:
             with self.assertRaises(io.UnsupportedOperation):
                 reader.readline()
 
@@ -618,12 +621,12 @@
                 reader.writelines([])
 
             with self.assertRaises(OSError):
-                reader.write(b'foo')
+                reader.write(b"foo")
 
     def test_constant_methods(self):
         cctx = zstd.ZstdCompressor()
 
-        with cctx.stream_reader(b'boo') as reader:
+        with cctx.stream_reader(b"boo") as reader:
             self.assertTrue(reader.readable())
             self.assertFalse(reader.writable())
             self.assertFalse(reader.seekable())
@@ -637,27 +640,29 @@
     def test_read_closed(self):
         cctx = zstd.ZstdCompressor()
 
-        with cctx.stream_reader(b'foo' * 60) as reader:
+        with cctx.stream_reader(b"foo" * 60) as reader:
             reader.close()
             self.assertTrue(reader.closed)
-            with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+            with self.assertRaisesRegex(ValueError, "stream is closed"):
                 reader.read(10)
 
     def test_read_sizes(self):
         cctx = zstd.ZstdCompressor()
-        foo = cctx.compress(b'foo')
+        foo = cctx.compress(b"foo")
 
-        with cctx.stream_reader(b'foo') as reader:
-            with self.assertRaisesRegexp(ValueError, 'cannot read negative amounts less than -1'):
+        with cctx.stream_reader(b"foo") as reader:
+            with self.assertRaisesRegex(
+                ValueError, "cannot read negative amounts less than -1"
+            ):
                 reader.read(-2)
 
-            self.assertEqual(reader.read(0), b'')
+            self.assertEqual(reader.read(0), b"")
             self.assertEqual(reader.read(), foo)
 
     def test_read_buffer(self):
         cctx = zstd.ZstdCompressor()
 
-        source = b''.join([b'foo' * 60, b'bar' * 60, b'baz' * 60])
+        source = b"".join([b"foo" * 60, b"bar" * 60, b"baz" * 60])
         frame = cctx.compress(source)
 
         with cctx.stream_reader(source) as reader:
@@ -667,13 +672,13 @@
             result = reader.read(8192)
             self.assertEqual(result, frame)
             self.assertEqual(reader.tell(), len(result))
-            self.assertEqual(reader.read(), b'')
+            self.assertEqual(reader.read(), b"")
             self.assertEqual(reader.tell(), len(result))
 
     def test_read_buffer_small_chunks(self):
         cctx = zstd.ZstdCompressor()
 
-        source = b'foo' * 60
+        source = b"foo" * 60
         chunks = []
 
         with cctx.stream_reader(source) as reader:
@@ -687,12 +692,12 @@
                 chunks.append(chunk)
                 self.assertEqual(reader.tell(), sum(map(len, chunks)))
 
-        self.assertEqual(b''.join(chunks), cctx.compress(source))
+        self.assertEqual(b"".join(chunks), cctx.compress(source))
 
     def test_read_stream(self):
         cctx = zstd.ZstdCompressor()
 
-        source = b''.join([b'foo' * 60, b'bar' * 60, b'baz' * 60])
+        source = b"".join([b"foo" * 60, b"bar" * 60, b"baz" * 60])
         frame = cctx.compress(source)
 
         with cctx.stream_reader(io.BytesIO(source), size=len(source)) as reader:
@@ -701,13 +706,13 @@
             chunk = reader.read(8192)
             self.assertEqual(chunk, frame)
             self.assertEqual(reader.tell(), len(chunk))
-            self.assertEqual(reader.read(), b'')
+            self.assertEqual(reader.read(), b"")
             self.assertEqual(reader.tell(), len(chunk))
 
     def test_read_stream_small_chunks(self):
         cctx = zstd.ZstdCompressor()
 
-        source = b'foo' * 60
+        source = b"foo" * 60
         chunks = []
 
         with cctx.stream_reader(io.BytesIO(source), size=len(source)) as reader:
@@ -721,25 +726,25 @@
                 chunks.append(chunk)
                 self.assertEqual(reader.tell(), sum(map(len, chunks)))
 
-        self.assertEqual(b''.join(chunks), cctx.compress(source))
+        self.assertEqual(b"".join(chunks), cctx.compress(source))
 
     def test_read_after_exit(self):
         cctx = zstd.ZstdCompressor()
 
-        with cctx.stream_reader(b'foo' * 60) as reader:
+        with cctx.stream_reader(b"foo" * 60) as reader:
             while reader.read(8192):
                 pass
 
-        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+        with self.assertRaisesRegex(ValueError, "stream is closed"):
             reader.read(10)
 
     def test_bad_size(self):
         cctx = zstd.ZstdCompressor()
 
-        source = io.BytesIO(b'foobar')
+        source = io.BytesIO(b"foobar")
 
         with cctx.stream_reader(source, size=2) as reader:
-            with self.assertRaisesRegexp(zstd.ZstdError, 'Src size is incorrect'):
+            with self.assertRaisesRegex(zstd.ZstdError, "Src size is incorrect"):
                 reader.read(10)
 
         # Try another compression operation.
@@ -748,36 +753,36 @@
 
     def test_readall(self):
         cctx = zstd.ZstdCompressor()
-        frame = cctx.compress(b'foo' * 1024)
+        frame = cctx.compress(b"foo" * 1024)
 
-        reader = cctx.stream_reader(b'foo' * 1024)
+        reader = cctx.stream_reader(b"foo" * 1024)
         self.assertEqual(reader.readall(), frame)
 
     def test_readinto(self):
         cctx = zstd.ZstdCompressor()
-        foo = cctx.compress(b'foo')
+        foo = cctx.compress(b"foo")
 
-        reader = cctx.stream_reader(b'foo')
+        reader = cctx.stream_reader(b"foo")
         with self.assertRaises(Exception):
-            reader.readinto(b'foobar')
+            reader.readinto(b"foobar")
 
         # readinto() with sufficiently large destination.
         b = bytearray(1024)
-        reader = cctx.stream_reader(b'foo')
+        reader = cctx.stream_reader(b"foo")
         self.assertEqual(reader.readinto(b), len(foo))
-        self.assertEqual(b[0:len(foo)], foo)
+        self.assertEqual(b[0 : len(foo)], foo)
         self.assertEqual(reader.readinto(b), 0)
-        self.assertEqual(b[0:len(foo)], foo)
+        self.assertEqual(b[0 : len(foo)], foo)
 
         # readinto() with small reads.
         b = bytearray(1024)
-        reader = cctx.stream_reader(b'foo', read_size=1)
+        reader = cctx.stream_reader(b"foo", read_size=1)
         self.assertEqual(reader.readinto(b), len(foo))
-        self.assertEqual(b[0:len(foo)], foo)
+        self.assertEqual(b[0 : len(foo)], foo)
 
         # Too small destination buffer.
         b = bytearray(2)
-        reader = cctx.stream_reader(b'foo')
+        reader = cctx.stream_reader(b"foo")
         self.assertEqual(reader.readinto(b), 2)
         self.assertEqual(b[:], foo[0:2])
         self.assertEqual(reader.readinto(b), 2)
@@ -787,41 +792,41 @@
 
     def test_readinto1(self):
         cctx = zstd.ZstdCompressor()
-        foo = b''.join(cctx.read_to_iter(io.BytesIO(b'foo')))
+        foo = b"".join(cctx.read_to_iter(io.BytesIO(b"foo")))
 
-        reader = cctx.stream_reader(b'foo')
+        reader = cctx.stream_reader(b"foo")
         with self.assertRaises(Exception):
-            reader.readinto1(b'foobar')
+            reader.readinto1(b"foobar")
 
         b = bytearray(1024)
-        source = OpCountingBytesIO(b'foo')
+        source = OpCountingBytesIO(b"foo")
         reader = cctx.stream_reader(source)
         self.assertEqual(reader.readinto1(b), len(foo))
-        self.assertEqual(b[0:len(foo)], foo)
+        self.assertEqual(b[0 : len(foo)], foo)
         self.assertEqual(source._read_count, 2)
 
         # readinto1() with small reads.
         b = bytearray(1024)
-        source = OpCountingBytesIO(b'foo')
+        source = OpCountingBytesIO(b"foo")
         reader = cctx.stream_reader(source, read_size=1)
         self.assertEqual(reader.readinto1(b), len(foo))
-        self.assertEqual(b[0:len(foo)], foo)
+        self.assertEqual(b[0 : len(foo)], foo)
         self.assertEqual(source._read_count, 4)
 
     def test_read1(self):
         cctx = zstd.ZstdCompressor()
-        foo = b''.join(cctx.read_to_iter(io.BytesIO(b'foo')))
+        foo = b"".join(cctx.read_to_iter(io.BytesIO(b"foo")))
 
-        b = OpCountingBytesIO(b'foo')
+        b = OpCountingBytesIO(b"foo")
         reader = cctx.stream_reader(b)
 
         self.assertEqual(reader.read1(), foo)
         self.assertEqual(b._read_count, 2)
 
-        b = OpCountingBytesIO(b'foo')
+        b = OpCountingBytesIO(b"foo")
         reader = cctx.stream_reader(b)
 
-        self.assertEqual(reader.read1(0), b'')
+        self.assertEqual(reader.read1(0), b"")
         self.assertEqual(reader.read1(2), foo[0:2])
         self.assertEqual(b._read_count, 2)
         self.assertEqual(reader.read1(2), foo[2:4])
@@ -829,7 +834,7 @@
 
 
 @make_cffi
-class TestCompressor_stream_writer(unittest.TestCase):
+class TestCompressor_stream_writer(TestCase):
     def test_io_api(self):
         buffer = io.BytesIO()
         cctx = zstd.ZstdCompressor()
@@ -899,7 +904,7 @@
         self.assertFalse(writer.closed)
 
     def test_fileno_file(self):
-        with tempfile.TemporaryFile('wb') as tf:
+        with tempfile.TemporaryFile("wb") as tf:
             cctx = zstd.ZstdCompressor()
             writer = cctx.stream_writer(tf)
 
@@ -910,33 +915,35 @@
         cctx = zstd.ZstdCompressor(level=1)
         writer = cctx.stream_writer(buffer)
 
-        writer.write(b'foo' * 1024)
+        writer.write(b"foo" * 1024)
         self.assertFalse(writer.closed)
         self.assertFalse(buffer.closed)
         writer.close()
         self.assertTrue(writer.closed)
         self.assertTrue(buffer.closed)
 
-        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
-            writer.write(b'foo')
+        with self.assertRaisesRegex(ValueError, "stream is closed"):
+            writer.write(b"foo")
 
-        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+        with self.assertRaisesRegex(ValueError, "stream is closed"):
             writer.flush()
 
-        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+        with self.assertRaisesRegex(ValueError, "stream is closed"):
             with writer:
                 pass
 
-        self.assertEqual(buffer.getvalue(),
-                         b'\x28\xb5\x2f\xfd\x00\x48\x55\x00\x00\x18\x66\x6f'
-                         b'\x6f\x01\x00\xfa\xd3\x77\x43')
+        self.assertEqual(
+            buffer.getvalue(),
+            b"\x28\xb5\x2f\xfd\x00\x48\x55\x00\x00\x18\x66\x6f"
+            b"\x6f\x01\x00\xfa\xd3\x77\x43",
+        )
 
         # Context manager exit should close stream.
         buffer = io.BytesIO()
         writer = cctx.stream_writer(buffer)
 
         with writer:
-            writer.write(b'foo')
+            writer.write(b"foo")
 
         self.assertTrue(writer.closed)
 
@@ -944,10 +951,10 @@
         buffer = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
         with cctx.stream_writer(buffer) as compressor:
-            compressor.write(b'')
+            compressor.write(b"")
 
         result = buffer.getvalue()
-        self.assertEqual(result, b'\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00')
+        self.assertEqual(result, b"\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00")
 
         params = zstd.get_frame_parameters(result)
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
@@ -958,11 +965,11 @@
         # Test without context manager.
         buffer = io.BytesIO()
         compressor = cctx.stream_writer(buffer)
-        self.assertEqual(compressor.write(b''), 0)
-        self.assertEqual(buffer.getvalue(), b'')
+        self.assertEqual(compressor.write(b""), 0)
+        self.assertEqual(buffer.getvalue(), b"")
         self.assertEqual(compressor.flush(zstd.FLUSH_FRAME), 9)
         result = buffer.getvalue()
-        self.assertEqual(result, b'\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00')
+        self.assertEqual(result, b"\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00")
 
         params = zstd.get_frame_parameters(result)
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
@@ -972,18 +979,18 @@
 
         # Test write_return_read=True
         compressor = cctx.stream_writer(buffer, write_return_read=True)
-        self.assertEqual(compressor.write(b''), 0)
+        self.assertEqual(compressor.write(b""), 0)
 
     def test_input_types(self):
-        expected = b'\x28\xb5\x2f\xfd\x00\x48\x19\x00\x00\x66\x6f\x6f'
+        expected = b"\x28\xb5\x2f\xfd\x00\x48\x19\x00\x00\x66\x6f\x6f"
         cctx = zstd.ZstdCompressor(level=1)
 
         mutable_array = bytearray(3)
-        mutable_array[:] = b'foo'
+        mutable_array[:] = b"foo"
 
         sources = [
-            memoryview(b'foo'),
-            bytearray(b'foo'),
+            memoryview(b"foo"),
+            bytearray(b"foo"),
             mutable_array,
         ]
 
@@ -1001,51 +1008,55 @@
         buffer = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=5)
         with cctx.stream_writer(buffer) as compressor:
-            self.assertEqual(compressor.write(b'foo'), 0)
-            self.assertEqual(compressor.write(b'bar'), 0)
-            self.assertEqual(compressor.write(b'x' * 8192), 0)
+            self.assertEqual(compressor.write(b"foo"), 0)
+            self.assertEqual(compressor.write(b"bar"), 0)
+            self.assertEqual(compressor.write(b"x" * 8192), 0)
 
         result = buffer.getvalue()
-        self.assertEqual(result,
-                         b'\x28\xb5\x2f\xfd\x00\x58\x75\x00\x00\x38\x66\x6f'
-                         b'\x6f\x62\x61\x72\x78\x01\x00\xfc\xdf\x03\x23')
+        self.assertEqual(
+            result,
+            b"\x28\xb5\x2f\xfd\x00\x58\x75\x00\x00\x38\x66\x6f"
+            b"\x6f\x62\x61\x72\x78\x01\x00\xfc\xdf\x03\x23",
+        )
 
         # Test without context manager.
         buffer = io.BytesIO()
         compressor = cctx.stream_writer(buffer)
-        self.assertEqual(compressor.write(b'foo'), 0)
-        self.assertEqual(compressor.write(b'bar'), 0)
-        self.assertEqual(compressor.write(b'x' * 8192), 0)
+        self.assertEqual(compressor.write(b"foo"), 0)
+        self.assertEqual(compressor.write(b"bar"), 0)
+        self.assertEqual(compressor.write(b"x" * 8192), 0)
         self.assertEqual(compressor.flush(zstd.FLUSH_FRAME), 23)
         result = buffer.getvalue()
-        self.assertEqual(result,
-                         b'\x28\xb5\x2f\xfd\x00\x58\x75\x00\x00\x38\x66\x6f'
-                         b'\x6f\x62\x61\x72\x78\x01\x00\xfc\xdf\x03\x23')
+        self.assertEqual(
+            result,
+            b"\x28\xb5\x2f\xfd\x00\x58\x75\x00\x00\x38\x66\x6f"
+            b"\x6f\x62\x61\x72\x78\x01\x00\xfc\xdf\x03\x23",
+        )
 
         # Test with write_return_read=True.
         compressor = cctx.stream_writer(buffer, write_return_read=True)
-        self.assertEqual(compressor.write(b'foo'), 3)
-        self.assertEqual(compressor.write(b'barbiz'), 6)
-        self.assertEqual(compressor.write(b'x' * 8192), 8192)
+        self.assertEqual(compressor.write(b"foo"), 3)
+        self.assertEqual(compressor.write(b"barbiz"), 6)
+        self.assertEqual(compressor.write(b"x" * 8192), 8192)
 
     def test_dictionary(self):
         samples = []
         for i in range(128):
-            samples.append(b'foo' * 64)
-            samples.append(b'bar' * 64)
-            samples.append(b'foobar' * 64)
+            samples.append(b"foo" * 64)
+            samples.append(b"bar" * 64)
+            samples.append(b"foobar" * 64)
 
         d = zstd.train_dictionary(8192, samples)
 
         h = hashlib.sha1(d.as_bytes()).hexdigest()
-        self.assertEqual(h, '7a2e59a876db958f74257141045af8f912e00d4e')
+        self.assertEqual(h, "7a2e59a876db958f74257141045af8f912e00d4e")
 
         buffer = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=9, dict_data=d)
         with cctx.stream_writer(buffer) as compressor:
-            self.assertEqual(compressor.write(b'foo'), 0)
-            self.assertEqual(compressor.write(b'bar'), 0)
-            self.assertEqual(compressor.write(b'foo' * 16384), 0)
+            self.assertEqual(compressor.write(b"foo"), 0)
+            self.assertEqual(compressor.write(b"bar"), 0)
+            self.assertEqual(compressor.write(b"foo" * 16384), 0)
 
         compressed = buffer.getvalue()
 
@@ -1056,14 +1067,15 @@
         self.assertFalse(params.has_checksum)
 
         h = hashlib.sha1(compressed).hexdigest()
-        self.assertEqual(h, '0a7c05635061f58039727cdbe76388c6f4cfef06')
+        self.assertEqual(h, "0a7c05635061f58039727cdbe76388c6f4cfef06")
 
-        source = b'foo' + b'bar' + (b'foo' * 16384)
+        source = b"foo" + b"bar" + (b"foo" * 16384)
 
         dctx = zstd.ZstdDecompressor(dict_data=d)
 
-        self.assertEqual(dctx.decompress(compressed, max_output_size=len(source)),
-                         source)
+        self.assertEqual(
+            dctx.decompress(compressed, max_output_size=len(source)), source
+        )
 
     def test_compression_params(self):
         params = zstd.ZstdCompressionParameters(
@@ -1073,14 +1085,15 @@
             min_match=5,
             search_log=4,
             target_length=10,
-            strategy=zstd.STRATEGY_FAST)
+            strategy=zstd.STRATEGY_FAST,
+        )
 
         buffer = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(compression_params=params)
         with cctx.stream_writer(buffer) as compressor:
-            self.assertEqual(compressor.write(b'foo'), 0)
-            self.assertEqual(compressor.write(b'bar'), 0)
-            self.assertEqual(compressor.write(b'foobar' * 16384), 0)
+            self.assertEqual(compressor.write(b"foo"), 0)
+            self.assertEqual(compressor.write(b"bar"), 0)
+            self.assertEqual(compressor.write(b"foobar" * 16384), 0)
 
         compressed = buffer.getvalue()
 
@@ -1091,18 +1104,18 @@
         self.assertFalse(params.has_checksum)
 
         h = hashlib.sha1(compressed).hexdigest()
-        self.assertEqual(h, 'dd4bb7d37c1a0235b38a2f6b462814376843ef0b')
+        self.assertEqual(h, "dd4bb7d37c1a0235b38a2f6b462814376843ef0b")
 
     def test_write_checksum(self):
         no_checksum = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=1)
         with cctx.stream_writer(no_checksum) as compressor:
-            self.assertEqual(compressor.write(b'foobar'), 0)
+            self.assertEqual(compressor.write(b"foobar"), 0)
 
         with_checksum = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=1, write_checksum=True)
         with cctx.stream_writer(with_checksum) as compressor:
-            self.assertEqual(compressor.write(b'foobar'), 0)
+            self.assertEqual(compressor.write(b"foobar"), 0)
 
         no_params = zstd.get_frame_parameters(no_checksum.getvalue())
         with_params = zstd.get_frame_parameters(with_checksum.getvalue())
@@ -1113,29 +1126,27 @@
         self.assertFalse(no_params.has_checksum)
         self.assertTrue(with_params.has_checksum)
 
-        self.assertEqual(len(with_checksum.getvalue()),
-                         len(no_checksum.getvalue()) + 4)
+        self.assertEqual(len(with_checksum.getvalue()), len(no_checksum.getvalue()) + 4)
 
     def test_write_content_size(self):
         no_size = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
         with cctx.stream_writer(no_size) as compressor:
-            self.assertEqual(compressor.write(b'foobar' * 256), 0)
+            self.assertEqual(compressor.write(b"foobar" * 256), 0)
 
         with_size = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=1)
         with cctx.stream_writer(with_size) as compressor:
-            self.assertEqual(compressor.write(b'foobar' * 256), 0)
+            self.assertEqual(compressor.write(b"foobar" * 256), 0)
 
         # Source size is not known in streaming mode, so header not
         # written.
-        self.assertEqual(len(with_size.getvalue()),
-                         len(no_size.getvalue()))
+        self.assertEqual(len(with_size.getvalue()), len(no_size.getvalue()))
 
         # Declaring size will write the header.
         with_size = NonClosingBytesIO()
-        with cctx.stream_writer(with_size, size=len(b'foobar' * 256)) as compressor:
-            self.assertEqual(compressor.write(b'foobar' * 256), 0)
+        with cctx.stream_writer(with_size, size=len(b"foobar" * 256)) as compressor:
+            self.assertEqual(compressor.write(b"foobar" * 256), 0)
 
         no_params = zstd.get_frame_parameters(no_size.getvalue())
         with_params = zstd.get_frame_parameters(with_size.getvalue())
@@ -1146,31 +1157,30 @@
         self.assertFalse(no_params.has_checksum)
         self.assertFalse(with_params.has_checksum)
 
-        self.assertEqual(len(with_size.getvalue()),
-                         len(no_size.getvalue()) + 1)
+        self.assertEqual(len(with_size.getvalue()), len(no_size.getvalue()) + 1)
 
     def test_no_dict_id(self):
         samples = []
         for i in range(128):
-            samples.append(b'foo' * 64)
-            samples.append(b'bar' * 64)
-            samples.append(b'foobar' * 64)
+            samples.append(b"foo" * 64)
+            samples.append(b"bar" * 64)
+            samples.append(b"foobar" * 64)
 
         d = zstd.train_dictionary(1024, samples)
 
         with_dict_id = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=1, dict_data=d)
         with cctx.stream_writer(with_dict_id) as compressor:
-            self.assertEqual(compressor.write(b'foobarfoobar'), 0)
+            self.assertEqual(compressor.write(b"foobarfoobar"), 0)
 
-        self.assertEqual(with_dict_id.getvalue()[4:5], b'\x03')
+        self.assertEqual(with_dict_id.getvalue()[4:5], b"\x03")
 
         cctx = zstd.ZstdCompressor(level=1, dict_data=d, write_dict_id=False)
         no_dict_id = NonClosingBytesIO()
         with cctx.stream_writer(no_dict_id) as compressor:
-            self.assertEqual(compressor.write(b'foobarfoobar'), 0)
+            self.assertEqual(compressor.write(b"foobarfoobar"), 0)
 
-        self.assertEqual(no_dict_id.getvalue()[4:5], b'\x00')
+        self.assertEqual(no_dict_id.getvalue()[4:5], b"\x00")
 
         no_params = zstd.get_frame_parameters(no_dict_id.getvalue())
         with_params = zstd.get_frame_parameters(with_dict_id.getvalue())
@@ -1181,14 +1191,13 @@
         self.assertFalse(no_params.has_checksum)
         self.assertFalse(with_params.has_checksum)
 
-        self.assertEqual(len(with_dict_id.getvalue()),
-                         len(no_dict_id.getvalue()) + 4)
+        self.assertEqual(len(with_dict_id.getvalue()), len(no_dict_id.getvalue()) + 4)
 
     def test_memory_size(self):
         cctx = zstd.ZstdCompressor(level=3)
         buffer = io.BytesIO()
         with cctx.stream_writer(buffer) as compressor:
-            compressor.write(b'foo')
+            compressor.write(b"foo")
             size = compressor.memory_size()
 
         self.assertGreater(size, 100000)
@@ -1197,9 +1206,9 @@
         cctx = zstd.ZstdCompressor(level=3)
         dest = OpCountingBytesIO()
         with cctx.stream_writer(dest, write_size=1) as compressor:
-            self.assertEqual(compressor.write(b'foo'), 0)
-            self.assertEqual(compressor.write(b'bar'), 0)
-            self.assertEqual(compressor.write(b'foobar'), 0)
+            self.assertEqual(compressor.write(b"foo"), 0)
+            self.assertEqual(compressor.write(b"bar"), 0)
+            self.assertEqual(compressor.write(b"foobar"), 0)
 
         self.assertEqual(len(dest.getvalue()), dest._write_count)
 
@@ -1207,15 +1216,15 @@
         cctx = zstd.ZstdCompressor(level=3)
         dest = OpCountingBytesIO()
         with cctx.stream_writer(dest) as compressor:
-            self.assertEqual(compressor.write(b'foo'), 0)
+            self.assertEqual(compressor.write(b"foo"), 0)
             self.assertEqual(dest._write_count, 0)
             self.assertEqual(compressor.flush(), 12)
             self.assertEqual(dest._write_count, 1)
-            self.assertEqual(compressor.write(b'bar'), 0)
+            self.assertEqual(compressor.write(b"bar"), 0)
             self.assertEqual(dest._write_count, 1)
             self.assertEqual(compressor.flush(), 6)
             self.assertEqual(dest._write_count, 2)
-            self.assertEqual(compressor.write(b'baz'), 0)
+            self.assertEqual(compressor.write(b"baz"), 0)
 
         self.assertEqual(dest._write_count, 3)
 
@@ -1223,7 +1232,7 @@
         cctx = zstd.ZstdCompressor(level=3, write_checksum=True)
         dest = OpCountingBytesIO()
         with cctx.stream_writer(dest) as compressor:
-            self.assertEqual(compressor.write(b'foobar' * 8192), 0)
+            self.assertEqual(compressor.write(b"foobar" * 8192), 0)
             count = dest._write_count
             offset = dest.tell()
             self.assertEqual(compressor.flush(), 23)
@@ -1238,41 +1247,43 @@
         self.assertEqual(len(trailing), 7)
 
         header = trailing[0:3]
-        self.assertEqual(header, b'\x01\x00\x00')
+        self.assertEqual(header, b"\x01\x00\x00")
 
     def test_flush_frame(self):
         cctx = zstd.ZstdCompressor(level=3)
         dest = OpCountingBytesIO()
 
         with cctx.stream_writer(dest) as compressor:
-            self.assertEqual(compressor.write(b'foobar' * 8192), 0)
+            self.assertEqual(compressor.write(b"foobar" * 8192), 0)
             self.assertEqual(compressor.flush(zstd.FLUSH_FRAME), 23)
-            compressor.write(b'biz' * 16384)
+            compressor.write(b"biz" * 16384)
 
-        self.assertEqual(dest.getvalue(),
-                         # Frame 1.
-                         b'\x28\xb5\x2f\xfd\x00\x58\x75\x00\x00\x30\x66\x6f\x6f'
-                         b'\x62\x61\x72\x01\x00\xf7\xbf\xe8\xa5\x08'
-                         # Frame 2.
-                         b'\x28\xb5\x2f\xfd\x00\x58\x5d\x00\x00\x18\x62\x69\x7a'
-                         b'\x01\x00\xfa\x3f\x75\x37\x04')
+        self.assertEqual(
+            dest.getvalue(),
+            # Frame 1.
+            b"\x28\xb5\x2f\xfd\x00\x58\x75\x00\x00\x30\x66\x6f\x6f"
+            b"\x62\x61\x72\x01\x00\xf7\xbf\xe8\xa5\x08"
+            # Frame 2.
+            b"\x28\xb5\x2f\xfd\x00\x58\x5d\x00\x00\x18\x62\x69\x7a"
+            b"\x01\x00\xfa\x3f\x75\x37\x04",
+        )
 
     def test_bad_flush_mode(self):
         cctx = zstd.ZstdCompressor()
         dest = io.BytesIO()
         with cctx.stream_writer(dest) as compressor:
-            with self.assertRaisesRegexp(ValueError, 'unknown flush_mode: 42'):
+            with self.assertRaisesRegex(ValueError, "unknown flush_mode: 42"):
                 compressor.flush(flush_mode=42)
 
     def test_multithreaded(self):
         dest = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(threads=2)
         with cctx.stream_writer(dest) as compressor:
-            compressor.write(b'a' * 1048576)
-            compressor.write(b'b' * 1048576)
-            compressor.write(b'c' * 1048576)
+            compressor.write(b"a" * 1048576)
+            compressor.write(b"b" * 1048576)
+            compressor.write(b"c" * 1048576)
 
-        self.assertEqual(len(dest.getvalue()), 295)
+        self.assertEqual(len(dest.getvalue()), 111)
 
     def test_tell(self):
         dest = io.BytesIO()
@@ -1281,7 +1292,7 @@
             self.assertEqual(compressor.tell(), 0)
 
             for i in range(256):
-                compressor.write(b'foo' * (i + 1))
+                compressor.write(b"foo" * (i + 1))
                 self.assertEqual(compressor.tell(), dest.tell())
 
     def test_bad_size(self):
@@ -1289,9 +1300,9 @@
 
         dest = io.BytesIO()
 
-        with self.assertRaisesRegexp(zstd.ZstdError, 'Src size is incorrect'):
+        with self.assertRaisesRegex(zstd.ZstdError, "Src size is incorrect"):
             with cctx.stream_writer(dest, size=2) as compressor:
-                compressor.write(b'foo')
+                compressor.write(b"foo")
 
         # Test another operation.
         with cctx.stream_writer(dest, size=42):
@@ -1301,20 +1312,20 @@
         dest = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor()
         with cctx.stream_writer(dest) as compressor:
-            with tarfile.open('tf', mode='w|', fileobj=compressor) as tf:
-                tf.add(__file__, 'test_compressor.py')
+            with tarfile.open("tf", mode="w|", fileobj=compressor) as tf:
+                tf.add(__file__, "test_compressor.py")
 
         dest = io.BytesIO(dest.getvalue())
 
         dctx = zstd.ZstdDecompressor()
         with dctx.stream_reader(dest) as reader:
-            with tarfile.open(mode='r|', fileobj=reader) as tf:
+            with tarfile.open(mode="r|", fileobj=reader) as tf:
                 for member in tf:
-                    self.assertEqual(member.name, 'test_compressor.py')
+                    self.assertEqual(member.name, "test_compressor.py")
 
 
 @make_cffi
-class TestCompressor_read_to_iter(unittest.TestCase):
+class TestCompressor_read_to_iter(TestCase):
     def test_type_validation(self):
         cctx = zstd.ZstdCompressor()
 
@@ -1323,10 +1334,10 @@
             pass
 
         # Buffer protocol works.
-        for chunk in cctx.read_to_iter(b'foobar'):
+        for chunk in cctx.read_to_iter(b"foobar"):
             pass
 
-        with self.assertRaisesRegexp(ValueError, 'must pass an object with a read'):
+        with self.assertRaisesRegex(ValueError, "must pass an object with a read"):
             for chunk in cctx.read_to_iter(True):
                 pass
 
@@ -1337,22 +1348,22 @@
         it = cctx.read_to_iter(source)
         chunks = list(it)
         self.assertEqual(len(chunks), 1)
-        compressed = b''.join(chunks)
-        self.assertEqual(compressed, b'\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00')
+        compressed = b"".join(chunks)
+        self.assertEqual(compressed, b"\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00")
 
         # And again with the buffer protocol.
-        it = cctx.read_to_iter(b'')
+        it = cctx.read_to_iter(b"")
         chunks = list(it)
         self.assertEqual(len(chunks), 1)
-        compressed2 = b''.join(chunks)
+        compressed2 = b"".join(chunks)
         self.assertEqual(compressed2, compressed)
 
     def test_read_large(self):
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
 
         source = io.BytesIO()
-        source.write(b'f' * zstd.COMPRESSION_RECOMMENDED_INPUT_SIZE)
-        source.write(b'o')
+        source.write(b"f" * zstd.COMPRESSION_RECOMMENDED_INPUT_SIZE)
+        source.write(b"o")
         source.seek(0)
 
         # Creating an iterator should not perform any compression until
@@ -1380,9 +1391,9 @@
             next(it)
 
         # We should get the same output as the one-shot compression mechanism.
-        self.assertEqual(b''.join(chunks), cctx.compress(source.getvalue()))
+        self.assertEqual(b"".join(chunks), cctx.compress(source.getvalue()))
 
-        params = zstd.get_frame_parameters(b''.join(chunks))
+        params = zstd.get_frame_parameters(b"".join(chunks))
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
         self.assertEqual(params.window_size, 262144)
         self.assertEqual(params.dict_id, 0)
@@ -1393,16 +1404,16 @@
         chunks = list(it)
         self.assertEqual(len(chunks), 2)
 
-        params = zstd.get_frame_parameters(b''.join(chunks))
+        params = zstd.get_frame_parameters(b"".join(chunks))
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
-        #self.assertEqual(params.window_size, 262144)
+        # self.assertEqual(params.window_size, 262144)
         self.assertEqual(params.dict_id, 0)
         self.assertFalse(params.has_checksum)
 
-        self.assertEqual(b''.join(chunks), cctx.compress(source.getvalue()))
+        self.assertEqual(b"".join(chunks), cctx.compress(source.getvalue()))
 
     def test_read_write_size(self):
-        source = OpCountingBytesIO(b'foobarfoobar')
+        source = OpCountingBytesIO(b"foobarfoobar")
         cctx = zstd.ZstdCompressor(level=3)
         for chunk in cctx.read_to_iter(source, read_size=1, write_size=1):
             self.assertEqual(len(chunk), 1)
@@ -1411,42 +1422,42 @@
 
     def test_multithreaded(self):
         source = io.BytesIO()
-        source.write(b'a' * 1048576)
-        source.write(b'b' * 1048576)
-        source.write(b'c' * 1048576)
+        source.write(b"a" * 1048576)
+        source.write(b"b" * 1048576)
+        source.write(b"c" * 1048576)
         source.seek(0)
 
         cctx = zstd.ZstdCompressor(threads=2)
 
-        compressed = b''.join(cctx.read_to_iter(source))
-        self.assertEqual(len(compressed), 295)
+        compressed = b"".join(cctx.read_to_iter(source))
+        self.assertEqual(len(compressed), 111)
 
     def test_bad_size(self):
         cctx = zstd.ZstdCompressor()
 
-        source = io.BytesIO(b'a' * 42)
+        source = io.BytesIO(b"a" * 42)
 
-        with self.assertRaisesRegexp(zstd.ZstdError, 'Src size is incorrect'):
-            b''.join(cctx.read_to_iter(source, size=2))
+        with self.assertRaisesRegex(zstd.ZstdError, "Src size is incorrect"):
+            b"".join(cctx.read_to_iter(source, size=2))
 
         # Test another operation on errored compressor.
-        b''.join(cctx.read_to_iter(source))
+        b"".join(cctx.read_to_iter(source))
 
 
 @make_cffi
-class TestCompressor_chunker(unittest.TestCase):
+class TestCompressor_chunker(TestCase):
     def test_empty(self):
         cctx = zstd.ZstdCompressor(write_content_size=False)
         chunker = cctx.chunker()
 
-        it = chunker.compress(b'')
+        it = chunker.compress(b"")
 
         with self.assertRaises(StopIteration):
             next(it)
 
         it = chunker.finish()
 
-        self.assertEqual(next(it), b'\x28\xb5\x2f\xfd\x00\x58\x01\x00\x00')
+        self.assertEqual(next(it), b"\x28\xb5\x2f\xfd\x00\x58\x01\x00\x00")
 
         with self.assertRaises(StopIteration):
             next(it)
@@ -1455,21 +1466,23 @@
         cctx = zstd.ZstdCompressor()
         chunker = cctx.chunker()
 
-        it = chunker.compress(b'foobar')
+        it = chunker.compress(b"foobar")
 
         with self.assertRaises(StopIteration):
             next(it)
 
-        it = chunker.compress(b'baz' * 30)
+        it = chunker.compress(b"baz" * 30)
 
         with self.assertRaises(StopIteration):
             next(it)
 
         it = chunker.finish()
 
-        self.assertEqual(next(it),
-                         b'\x28\xb5\x2f\xfd\x00\x58\x7d\x00\x00\x48\x66\x6f'
-                         b'\x6f\x62\x61\x72\x62\x61\x7a\x01\x00\xe4\xe4\x8e')
+        self.assertEqual(
+            next(it),
+            b"\x28\xb5\x2f\xfd\x00\x58\x7d\x00\x00\x48\x66\x6f"
+            b"\x6f\x62\x61\x72\x62\x61\x7a\x01\x00\xe4\xe4\x8e",
+        )
 
         with self.assertRaises(StopIteration):
             next(it)
@@ -1478,57 +1491,60 @@
         cctx = zstd.ZstdCompressor()
         chunker = cctx.chunker(size=1024)
 
-        it = chunker.compress(b'x' * 1000)
+        it = chunker.compress(b"x" * 1000)
 
         with self.assertRaises(StopIteration):
             next(it)
 
-        it = chunker.compress(b'y' * 24)
+        it = chunker.compress(b"y" * 24)
 
         with self.assertRaises(StopIteration):
             next(it)
 
         chunks = list(chunker.finish())
 
-        self.assertEqual(chunks, [
-            b'\x28\xb5\x2f\xfd\x60\x00\x03\x65\x00\x00\x18\x78\x78\x79\x02\x00'
-            b'\xa0\x16\xe3\x2b\x80\x05'
-        ])
+        self.assertEqual(
+            chunks,
+            [
+                b"\x28\xb5\x2f\xfd\x60\x00\x03\x65\x00\x00\x18\x78\x78\x79\x02\x00"
+                b"\xa0\x16\xe3\x2b\x80\x05"
+            ],
+        )
 
         dctx = zstd.ZstdDecompressor()
 
-        self.assertEqual(dctx.decompress(b''.join(chunks)),
-                         (b'x' * 1000) + (b'y' * 24))
+        self.assertEqual(dctx.decompress(b"".join(chunks)), (b"x" * 1000) + (b"y" * 24))
 
     def test_small_chunk_size(self):
         cctx = zstd.ZstdCompressor()
         chunker = cctx.chunker(chunk_size=1)
 
-        chunks = list(chunker.compress(b'foo' * 1024))
+        chunks = list(chunker.compress(b"foo" * 1024))
         self.assertEqual(chunks, [])
 
         chunks = list(chunker.finish())
         self.assertTrue(all(len(chunk) == 1 for chunk in chunks))
 
         self.assertEqual(
-            b''.join(chunks),
-            b'\x28\xb5\x2f\xfd\x00\x58\x55\x00\x00\x18\x66\x6f\x6f\x01\x00'
-            b'\xfa\xd3\x77\x43')
+            b"".join(chunks),
+            b"\x28\xb5\x2f\xfd\x00\x58\x55\x00\x00\x18\x66\x6f\x6f\x01\x00"
+            b"\xfa\xd3\x77\x43",
+        )
 
         dctx = zstd.ZstdDecompressor()
-        self.assertEqual(dctx.decompress(b''.join(chunks),
-                                         max_output_size=10000),
-                         b'foo' * 1024)
+        self.assertEqual(
+            dctx.decompress(b"".join(chunks), max_output_size=10000), b"foo" * 1024
+        )
 
     def test_input_types(self):
         cctx = zstd.ZstdCompressor()
 
         mutable_array = bytearray(3)
-        mutable_array[:] = b'foo'
+        mutable_array[:] = b"foo"
 
         sources = [
-            memoryview(b'foo'),
-            bytearray(b'foo'),
+            memoryview(b"foo"),
+            bytearray(b"foo"),
             mutable_array,
         ]
 
@@ -1536,28 +1552,32 @@
             chunker = cctx.chunker()
 
             self.assertEqual(list(chunker.compress(source)), [])
-            self.assertEqual(list(chunker.finish()), [
-                b'\x28\xb5\x2f\xfd\x00\x58\x19\x00\x00\x66\x6f\x6f'
-            ])
+            self.assertEqual(
+                list(chunker.finish()),
+                [b"\x28\xb5\x2f\xfd\x00\x58\x19\x00\x00\x66\x6f\x6f"],
+            )
 
     def test_flush(self):
         cctx = zstd.ZstdCompressor()
         chunker = cctx.chunker()
 
-        self.assertEqual(list(chunker.compress(b'foo' * 1024)), [])
-        self.assertEqual(list(chunker.compress(b'bar' * 1024)), [])
+        self.assertEqual(list(chunker.compress(b"foo" * 1024)), [])
+        self.assertEqual(list(chunker.compress(b"bar" * 1024)), [])
 
         chunks1 = list(chunker.flush())
 
-        self.assertEqual(chunks1, [
-            b'\x28\xb5\x2f\xfd\x00\x58\x8c\x00\x00\x30\x66\x6f\x6f\x62\x61\x72'
-            b'\x02\x00\xfa\x03\xfe\xd0\x9f\xbe\x1b\x02'
-        ])
+        self.assertEqual(
+            chunks1,
+            [
+                b"\x28\xb5\x2f\xfd\x00\x58\x8c\x00\x00\x30\x66\x6f\x6f\x62\x61\x72"
+                b"\x02\x00\xfa\x03\xfe\xd0\x9f\xbe\x1b\x02"
+            ],
+        )
 
         self.assertEqual(list(chunker.flush()), [])
         self.assertEqual(list(chunker.flush()), [])
 
-        self.assertEqual(list(chunker.compress(b'baz' * 1024)), [])
+        self.assertEqual(list(chunker.compress(b"baz" * 1024)), [])
 
         chunks2 = list(chunker.flush())
         self.assertEqual(len(chunks2), 1)
@@ -1567,53 +1587,56 @@
 
         dctx = zstd.ZstdDecompressor()
 
-        self.assertEqual(dctx.decompress(b''.join(chunks1 + chunks2 + chunks3),
-                                         max_output_size=10000),
-                         (b'foo' * 1024) + (b'bar' * 1024) + (b'baz' * 1024))
+        self.assertEqual(
+            dctx.decompress(
+                b"".join(chunks1 + chunks2 + chunks3), max_output_size=10000
+            ),
+            (b"foo" * 1024) + (b"bar" * 1024) + (b"baz" * 1024),
+        )
 
     def test_compress_after_finish(self):
         cctx = zstd.ZstdCompressor()
         chunker = cctx.chunker()
 
-        list(chunker.compress(b'foo'))
+        list(chunker.compress(b"foo"))
         list(chunker.finish())
 
-        with self.assertRaisesRegexp(
-                zstd.ZstdError,
-                r'cannot call compress\(\) after compression finished'):
-            list(chunker.compress(b'foo'))
+        with self.assertRaisesRegex(
+            zstd.ZstdError, r"cannot call compress\(\) after compression finished"
+        ):
+            list(chunker.compress(b"foo"))
 
     def test_flush_after_finish(self):
         cctx = zstd.ZstdCompressor()
         chunker = cctx.chunker()
 
-        list(chunker.compress(b'foo'))
+        list(chunker.compress(b"foo"))
         list(chunker.finish())
 
-        with self.assertRaisesRegexp(
-                zstd.ZstdError,
-                r'cannot call flush\(\) after compression finished'):
+        with self.assertRaisesRegex(
+            zstd.ZstdError, r"cannot call flush\(\) after compression finished"
+        ):
             list(chunker.flush())
 
     def test_finish_after_finish(self):
         cctx = zstd.ZstdCompressor()
         chunker = cctx.chunker()
 
-        list(chunker.compress(b'foo'))
+        list(chunker.compress(b"foo"))
         list(chunker.finish())
 
-        with self.assertRaisesRegexp(
-                zstd.ZstdError,
-                r'cannot call finish\(\) after compression finished'):
+        with self.assertRaisesRegex(
+            zstd.ZstdError, r"cannot call finish\(\) after compression finished"
+        ):
             list(chunker.finish())
 
 
-class TestCompressor_multi_compress_to_buffer(unittest.TestCase):
+class TestCompressor_multi_compress_to_buffer(TestCase):
     def test_invalid_inputs(self):
         cctx = zstd.ZstdCompressor()
 
-        if not hasattr(cctx, 'multi_compress_to_buffer'):
-            self.skipTest('multi_compress_to_buffer not available')
+        if not hasattr(cctx, "multi_compress_to_buffer"):
+            self.skipTest("multi_compress_to_buffer not available")
 
         with self.assertRaises(TypeError):
             cctx.multi_compress_to_buffer(True)
@@ -1621,28 +1644,28 @@
         with self.assertRaises(TypeError):
             cctx.multi_compress_to_buffer((1, 2))
 
-        with self.assertRaisesRegexp(TypeError, 'item 0 not a bytes like object'):
-            cctx.multi_compress_to_buffer([u'foo'])
+        with self.assertRaisesRegex(TypeError, "item 0 not a bytes like object"):
+            cctx.multi_compress_to_buffer([u"foo"])
 
     def test_empty_input(self):
         cctx = zstd.ZstdCompressor()
 
-        if not hasattr(cctx, 'multi_compress_to_buffer'):
-            self.skipTest('multi_compress_to_buffer not available')
+        if not hasattr(cctx, "multi_compress_to_buffer"):
+            self.skipTest("multi_compress_to_buffer not available")
 
-        with self.assertRaisesRegexp(ValueError, 'no source elements found'):
+        with self.assertRaisesRegex(ValueError, "no source elements found"):
             cctx.multi_compress_to_buffer([])
 
-        with self.assertRaisesRegexp(ValueError, 'source elements are empty'):
-            cctx.multi_compress_to_buffer([b'', b'', b''])
+        with self.assertRaisesRegex(ValueError, "source elements are empty"):
+            cctx.multi_compress_to_buffer([b"", b"", b""])
 
     def test_list_input(self):
         cctx = zstd.ZstdCompressor(write_checksum=True)
 
-        if not hasattr(cctx, 'multi_compress_to_buffer'):
-            self.skipTest('multi_compress_to_buffer not available')
+        if not hasattr(cctx, "multi_compress_to_buffer"):
+            self.skipTest("multi_compress_to_buffer not available")
 
-        original = [b'foo' * 12, b'bar' * 6]
+        original = [b"foo" * 12, b"bar" * 6]
         frames = [cctx.compress(c) for c in original]
         b = cctx.multi_compress_to_buffer(original)
 
@@ -1657,15 +1680,16 @@
     def test_buffer_with_segments_input(self):
         cctx = zstd.ZstdCompressor(write_checksum=True)
 
-        if not hasattr(cctx, 'multi_compress_to_buffer'):
-            self.skipTest('multi_compress_to_buffer not available')
+        if not hasattr(cctx, "multi_compress_to_buffer"):
+            self.skipTest("multi_compress_to_buffer not available")
 
-        original = [b'foo' * 4, b'bar' * 6]
+        original = [b"foo" * 4, b"bar" * 6]
         frames = [cctx.compress(c) for c in original]
 
-        offsets = struct.pack('=QQQQ', 0, len(original[0]),
-                                       len(original[0]), len(original[1]))
-        segments = zstd.BufferWithSegments(b''.join(original), offsets)
+        offsets = struct.pack(
+            "=QQQQ", 0, len(original[0]), len(original[0]), len(original[1])
+        )
+        segments = zstd.BufferWithSegments(b"".join(original), offsets)
 
         result = cctx.multi_compress_to_buffer(segments)
 
@@ -1678,28 +1702,39 @@
     def test_buffer_with_segments_collection_input(self):
         cctx = zstd.ZstdCompressor(write_checksum=True)
 
-        if not hasattr(cctx, 'multi_compress_to_buffer'):
-            self.skipTest('multi_compress_to_buffer not available')
+        if not hasattr(cctx, "multi_compress_to_buffer"):
+            self.skipTest("multi_compress_to_buffer not available")
 
         original = [
-            b'foo1',
-            b'foo2' * 2,
-            b'foo3' * 3,
-            b'foo4' * 4,
-            b'foo5' * 5,
+            b"foo1",
+            b"foo2" * 2,
+            b"foo3" * 3,
+            b"foo4" * 4,
+            b"foo5" * 5,
         ]
 
         frames = [cctx.compress(c) for c in original]
 
-        b = b''.join([original[0], original[1]])
-        b1 = zstd.BufferWithSegments(b, struct.pack('=QQQQ',
-                                                    0, len(original[0]),
-                                                    len(original[0]), len(original[1])))
-        b = b''.join([original[2], original[3], original[4]])
-        b2 = zstd.BufferWithSegments(b, struct.pack('=QQQQQQ',
-                                                    0, len(original[2]),
-                                                    len(original[2]), len(original[3]),
-                                                    len(original[2]) + len(original[3]), len(original[4])))
+        b = b"".join([original[0], original[1]])
+        b1 = zstd.BufferWithSegments(
+            b,
+            struct.pack(
+                "=QQQQ", 0, len(original[0]), len(original[0]), len(original[1])
+            ),
+        )
+        b = b"".join([original[2], original[3], original[4]])
+        b2 = zstd.BufferWithSegments(
+            b,
+            struct.pack(
+                "=QQQQQQ",
+                0,
+                len(original[2]),
+                len(original[2]),
+                len(original[3]),
+                len(original[2]) + len(original[3]),
+                len(original[4]),
+            ),
+        )
 
         c = zstd.BufferWithSegmentsCollection(b1, b2)
 
@@ -1714,16 +1749,16 @@
         # threads argument will cause multi-threaded ZSTD APIs to be used, which will
         # make output different.
         refcctx = zstd.ZstdCompressor(write_checksum=True)
-        reference = [refcctx.compress(b'x' * 64), refcctx.compress(b'y' * 64)]
+        reference = [refcctx.compress(b"x" * 64), refcctx.compress(b"y" * 64)]
 
         cctx = zstd.ZstdCompressor(write_checksum=True)
 
-        if not hasattr(cctx, 'multi_compress_to_buffer'):
-            self.skipTest('multi_compress_to_buffer not available')
+        if not hasattr(cctx, "multi_compress_to_buffer"):
+            self.skipTest("multi_compress_to_buffer not available")
 
         frames = []
-        frames.extend(b'x' * 64 for i in range(256))
-        frames.extend(b'y' * 64 for i in range(256))
+        frames.extend(b"x" * 64 for i in range(256))
+        frames.extend(b"y" * 64 for i in range(256))
 
         result = cctx.multi_compress_to_buffer(frames, threads=-1)
 
--- a/contrib/python-zstandard/tests/test_compressor_fuzzing.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/tests/test_compressor_fuzzing.py	Sat Dec 28 09:55:45 2019 -0800
@@ -6,28 +6,31 @@
     import hypothesis
     import hypothesis.strategies as strategies
 except ImportError:
-    raise unittest.SkipTest('hypothesis not available')
+    raise unittest.SkipTest("hypothesis not available")
 
 import zstandard as zstd
 
-from . common import (
+from .common import (
     make_cffi,
     NonClosingBytesIO,
     random_input_data,
+    TestCase,
 )
 
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
 @make_cffi
-class TestCompressor_stream_reader_fuzzing(unittest.TestCase):
+class TestCompressor_stream_reader_fuzzing(TestCase):
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_size=strategies.integers(-1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
-    def test_stream_source_read(self, original, level, source_read_size,
-                                read_size):
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_size=strategies.integers(-1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE),
+    )
+    def test_stream_source_read(self, original, level, source_read_size, read_size):
         if read_size == 0:
             read_size = -1
 
@@ -35,8 +38,9 @@
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(io.BytesIO(original), size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            io.BytesIO(original), size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 chunk = reader.read(read_size)
@@ -45,16 +49,18 @@
 
                 chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_size=strategies.integers(-1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
-    def test_buffer_source_read(self, original, level, source_read_size,
-                                read_size):
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_size=strategies.integers(-1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE),
+    )
+    def test_buffer_source_read(self, original, level, source_read_size, read_size):
         if read_size == 0:
             read_size = -1
 
@@ -62,8 +68,9 @@
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(original, size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            original, size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 chunk = reader.read(read_size)
@@ -72,22 +79,30 @@
 
                 chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_sizes=strategies.data())
-    def test_stream_source_read_variance(self, original, level, source_read_size,
-                                         read_sizes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_sizes=strategies.data(),
+    )
+    def test_stream_source_read_variance(
+        self, original, level, source_read_size, read_sizes
+    ):
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(io.BytesIO(original), size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            io.BytesIO(original), size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 read_size = read_sizes.draw(strategies.integers(-1, 16384))
@@ -97,23 +112,31 @@
 
                 chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_sizes=strategies.data())
-    def test_buffer_source_read_variance(self, original, level, source_read_size,
-                                         read_sizes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_sizes=strategies.data(),
+    )
+    def test_buffer_source_read_variance(
+        self, original, level, source_read_size, read_sizes
+    ):
 
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(original, size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            original, size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 read_size = read_sizes.draw(strategies.integers(-1, 16384))
@@ -123,22 +146,25 @@
 
                 chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_size=strategies.integers(1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
-    def test_stream_source_readinto(self, original, level,
-                                    source_read_size, read_size):
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_size=strategies.integers(1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE),
+    )
+    def test_stream_source_readinto(self, original, level, source_read_size, read_size):
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(io.BytesIO(original), size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            io.BytesIO(original), size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 b = bytearray(read_size)
@@ -149,23 +175,26 @@
 
                 chunks.append(bytes(b[0:count]))
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_size=strategies.integers(1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
-    def test_buffer_source_readinto(self, original, level,
-                                    source_read_size, read_size):
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_size=strategies.integers(1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE),
+    )
+    def test_buffer_source_readinto(self, original, level, source_read_size, read_size):
 
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(original, size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            original, size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 b = bytearray(read_size)
@@ -176,22 +205,30 @@
 
                 chunks.append(bytes(b[0:count]))
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_sizes=strategies.data())
-    def test_stream_source_readinto_variance(self, original, level,
-                                             source_read_size, read_sizes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_sizes=strategies.data(),
+    )
+    def test_stream_source_readinto_variance(
+        self, original, level, source_read_size, read_sizes
+    ):
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(io.BytesIO(original), size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            io.BytesIO(original), size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 read_size = read_sizes.draw(strategies.integers(1, 16384))
@@ -203,23 +240,31 @@
 
                 chunks.append(bytes(b[0:count]))
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_sizes=strategies.data())
-    def test_buffer_source_readinto_variance(self, original, level,
-                                             source_read_size, read_sizes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_sizes=strategies.data(),
+    )
+    def test_buffer_source_readinto_variance(
+        self, original, level, source_read_size, read_sizes
+    ):
 
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(original, size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            original, size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 read_size = read_sizes.draw(strategies.integers(1, 16384))
@@ -231,16 +276,18 @@
 
                 chunks.append(bytes(b[0:count]))
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_size=strategies.integers(-1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
-    def test_stream_source_read1(self, original, level, source_read_size,
-                                 read_size):
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_size=strategies.integers(-1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE),
+    )
+    def test_stream_source_read1(self, original, level, source_read_size, read_size):
         if read_size == 0:
             read_size = -1
 
@@ -248,8 +295,9 @@
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(io.BytesIO(original), size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            io.BytesIO(original), size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 chunk = reader.read1(read_size)
@@ -258,16 +306,18 @@
 
                 chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_size=strategies.integers(-1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
-    def test_buffer_source_read1(self, original, level, source_read_size,
-                                 read_size):
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_size=strategies.integers(-1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE),
+    )
+    def test_buffer_source_read1(self, original, level, source_read_size, read_size):
         if read_size == 0:
             read_size = -1
 
@@ -275,8 +325,9 @@
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(original, size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            original, size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 chunk = reader.read1(read_size)
@@ -285,22 +336,30 @@
 
                 chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_sizes=strategies.data())
-    def test_stream_source_read1_variance(self, original, level, source_read_size,
-                                          read_sizes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_sizes=strategies.data(),
+    )
+    def test_stream_source_read1_variance(
+        self, original, level, source_read_size, read_sizes
+    ):
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(io.BytesIO(original), size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            io.BytesIO(original), size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 read_size = read_sizes.draw(strategies.integers(-1, 16384))
@@ -310,23 +369,31 @@
 
                 chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_sizes=strategies.data())
-    def test_buffer_source_read1_variance(self, original, level, source_read_size,
-                                          read_sizes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_sizes=strategies.data(),
+    )
+    def test_buffer_source_read1_variance(
+        self, original, level, source_read_size, read_sizes
+    ):
 
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(original, size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            original, size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 read_size = read_sizes.draw(strategies.integers(-1, 16384))
@@ -336,17 +403,20 @@
 
                 chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), ref_frame)
-
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_size=strategies.integers(1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
-    def test_stream_source_readinto1(self, original, level, source_read_size,
-                                     read_size):
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_size=strategies.integers(1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE),
+    )
+    def test_stream_source_readinto1(
+        self, original, level, source_read_size, read_size
+    ):
         if read_size == 0:
             read_size = -1
 
@@ -354,8 +424,9 @@
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(io.BytesIO(original), size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            io.BytesIO(original), size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 b = bytearray(read_size)
@@ -366,16 +437,20 @@
 
                 chunks.append(bytes(b[0:count]))
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_size=strategies.integers(1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
-    def test_buffer_source_readinto1(self, original, level, source_read_size,
-                                     read_size):
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_size=strategies.integers(1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE),
+    )
+    def test_buffer_source_readinto1(
+        self, original, level, source_read_size, read_size
+    ):
         if read_size == 0:
             read_size = -1
 
@@ -383,8 +458,9 @@
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(original, size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            original, size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 b = bytearray(read_size)
@@ -395,22 +471,30 @@
 
                 chunks.append(bytes(b[0:count]))
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_sizes=strategies.data())
-    def test_stream_source_readinto1_variance(self, original, level, source_read_size,
-                                              read_sizes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_sizes=strategies.data(),
+    )
+    def test_stream_source_readinto1_variance(
+        self, original, level, source_read_size, read_sizes
+    ):
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(io.BytesIO(original), size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            io.BytesIO(original), size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 read_size = read_sizes.draw(strategies.integers(1, 16384))
@@ -422,23 +506,31 @@
 
                 chunks.append(bytes(b[0:count]))
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
-                      read_sizes=strategies.data())
-    def test_buffer_source_readinto1_variance(self, original, level, source_read_size,
-                                              read_sizes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 16384),
+        read_sizes=strategies.data(),
+    )
+    def test_buffer_source_readinto1_variance(
+        self, original, level, source_read_size, read_sizes
+    ):
 
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        with cctx.stream_reader(original, size=len(original),
-                                read_size=source_read_size) as reader:
+        with cctx.stream_reader(
+            original, size=len(original), read_size=source_read_size
+        ) as reader:
             chunks = []
             while True:
                 read_size = read_sizes.draw(strategies.integers(1, 16384))
@@ -450,35 +542,40 @@
 
                 chunks.append(bytes(b[0:count]))
 
-        self.assertEqual(b''.join(chunks), ref_frame)
-
+        self.assertEqual(b"".join(chunks), ref_frame)
 
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
 @make_cffi
-class TestCompressor_stream_writer_fuzzing(unittest.TestCase):
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                        level=strategies.integers(min_value=1, max_value=5),
-                        write_size=strategies.integers(min_value=1, max_value=1048576))
+class TestCompressor_stream_writer_fuzzing(TestCase):
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        write_size=strategies.integers(min_value=1, max_value=1048576),
+    )
     def test_write_size_variance(self, original, level, write_size):
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
         b = NonClosingBytesIO()
-        with cctx.stream_writer(b, size=len(original), write_size=write_size) as compressor:
+        with cctx.stream_writer(
+            b, size=len(original), write_size=write_size
+        ) as compressor:
             compressor.write(original)
 
         self.assertEqual(b.getvalue(), ref_frame)
 
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
 @make_cffi
-class TestCompressor_copy_stream_fuzzing(unittest.TestCase):
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      read_size=strategies.integers(min_value=1, max_value=1048576),
-                      write_size=strategies.integers(min_value=1, max_value=1048576))
+class TestCompressor_copy_stream_fuzzing(TestCase):
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        read_size=strategies.integers(min_value=1, max_value=1048576),
+        write_size=strategies.integers(min_value=1, max_value=1048576),
+    )
     def test_read_write_size_variance(self, original, level, read_size, write_size):
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
@@ -487,20 +584,27 @@
         source = io.BytesIO(original)
         dest = io.BytesIO()
 
-        cctx.copy_stream(source, dest, size=len(original), read_size=read_size,
-                         write_size=write_size)
+        cctx.copy_stream(
+            source, dest, size=len(original), read_size=read_size, write_size=write_size
+        )
 
         self.assertEqual(dest.getvalue(), ref_frame)
 
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
 @make_cffi
-class TestCompressor_compressobj_fuzzing(unittest.TestCase):
+class TestCompressor_compressobj_fuzzing(TestCase):
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      chunk_sizes=strategies.data())
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        chunk_sizes=strategies.data(),
+    )
     def test_random_input_sizes(self, original, level, chunk_sizes):
         refctx = zstd.ZstdCompressor(level=level)
         ref_frame = refctx.compress(original)
@@ -512,7 +616,7 @@
         i = 0
         while True:
             chunk_size = chunk_sizes.draw(strategies.integers(1, 4096))
-            source = original[i:i + chunk_size]
+            source = original[i : i + chunk_size]
             if not source:
                 break
 
@@ -521,14 +625,20 @@
 
         chunks.append(cobj.flush())
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      chunk_sizes=strategies.data(),
-                      flushes=strategies.data())
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        chunk_sizes=strategies.data(),
+        flushes=strategies.data(),
+    )
     def test_flush_block(self, original, level, chunk_sizes, flushes):
         cctx = zstd.ZstdCompressor(level=level)
         cobj = cctx.compressobj()
@@ -541,7 +651,7 @@
         i = 0
         while True:
             input_size = chunk_sizes.draw(strategies.integers(1, 4096))
-            source = original[i:i + input_size]
+            source = original[i : i + input_size]
             if not source:
                 break
 
@@ -558,24 +668,28 @@
             compressed_chunks.append(chunk)
             decompressed_chunks.append(dobj.decompress(chunk))
 
-            self.assertEqual(b''.join(decompressed_chunks), original[0:i])
+            self.assertEqual(b"".join(decompressed_chunks), original[0:i])
 
         chunk = cobj.flush(zstd.COMPRESSOBJ_FLUSH_FINISH)
         compressed_chunks.append(chunk)
         decompressed_chunks.append(dobj.decompress(chunk))
 
-        self.assertEqual(dctx.decompress(b''.join(compressed_chunks),
-                                         max_output_size=len(original)),
-                         original)
-        self.assertEqual(b''.join(decompressed_chunks), original)
+        self.assertEqual(
+            dctx.decompress(b"".join(compressed_chunks), max_output_size=len(original)),
+            original,
+        )
+        self.assertEqual(b"".join(decompressed_chunks), original)
+
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
 @make_cffi
-class TestCompressor_read_to_iter_fuzzing(unittest.TestCase):
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      read_size=strategies.integers(min_value=1, max_value=4096),
-                      write_size=strategies.integers(min_value=1, max_value=4096))
+class TestCompressor_read_to_iter_fuzzing(TestCase):
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        read_size=strategies.integers(min_value=1, max_value=4096),
+        write_size=strategies.integers(min_value=1, max_value=4096),
+    )
     def test_read_write_size_variance(self, original, level, read_size, write_size):
         refcctx = zstd.ZstdCompressor(level=level)
         ref_frame = refcctx.compress(original)
@@ -583,32 +697,35 @@
         source = io.BytesIO(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        chunks = list(cctx.read_to_iter(source, size=len(original),
-                                        read_size=read_size,
-                                        write_size=write_size))
+        chunks = list(
+            cctx.read_to_iter(
+                source, size=len(original), read_size=read_size, write_size=write_size
+            )
+        )
 
-        self.assertEqual(b''.join(chunks), ref_frame)
+        self.assertEqual(b"".join(chunks), ref_frame)
 
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
-class TestCompressor_multi_compress_to_buffer_fuzzing(unittest.TestCase):
-    @hypothesis.given(original=strategies.lists(strategies.sampled_from(random_input_data()),
-                                                min_size=1, max_size=1024),
-                        threads=strategies.integers(min_value=1, max_value=8),
-                        use_dict=strategies.booleans())
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
+class TestCompressor_multi_compress_to_buffer_fuzzing(TestCase):
+    @hypothesis.given(
+        original=strategies.lists(
+            strategies.sampled_from(random_input_data()), min_size=1, max_size=1024
+        ),
+        threads=strategies.integers(min_value=1, max_value=8),
+        use_dict=strategies.booleans(),
+    )
     def test_data_equivalence(self, original, threads, use_dict):
         kwargs = {}
 
         # Use a content dictionary because it is cheap to create.
         if use_dict:
-            kwargs['dict_data'] = zstd.ZstdCompressionDict(original[0])
+            kwargs["dict_data"] = zstd.ZstdCompressionDict(original[0])
 
-        cctx = zstd.ZstdCompressor(level=1,
-                                   write_checksum=True,
-                                   **kwargs)
+        cctx = zstd.ZstdCompressor(level=1, write_checksum=True, **kwargs)
 
-        if not hasattr(cctx, 'multi_compress_to_buffer'):
-            self.skipTest('multi_compress_to_buffer not available')
+        if not hasattr(cctx, "multi_compress_to_buffer"):
+            self.skipTest("multi_compress_to_buffer not available")
 
         result = cctx.multi_compress_to_buffer(original, threads=-1)
 
@@ -624,17 +741,21 @@
             self.assertEqual(dctx.decompress(frame), original[i])
 
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
 @make_cffi
-class TestCompressor_chunker_fuzzing(unittest.TestCase):
+class TestCompressor_chunker_fuzzing(TestCase):
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      chunk_size=strategies.integers(
-                          min_value=1,
-                          max_value=32 * 1048576),
-                      input_sizes=strategies.data())
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        chunk_size=strategies.integers(min_value=1, max_value=32 * 1048576),
+        input_sizes=strategies.data(),
+    )
     def test_random_input_sizes(self, original, level, chunk_size, input_sizes):
         cctx = zstd.ZstdCompressor(level=level)
         chunker = cctx.chunker(chunk_size=chunk_size)
@@ -643,7 +764,7 @@
         i = 0
         while True:
             input_size = input_sizes.draw(strategies.integers(1, 4096))
-            source = original[i:i + input_size]
+            source = original[i : i + input_size]
             if not source:
                 break
 
@@ -654,23 +775,26 @@
 
         dctx = zstd.ZstdDecompressor()
 
-        self.assertEqual(dctx.decompress(b''.join(chunks),
-                                         max_output_size=len(original)),
-                         original)
+        self.assertEqual(
+            dctx.decompress(b"".join(chunks), max_output_size=len(original)), original
+        )
 
         self.assertTrue(all(len(chunk) == chunk_size for chunk in chunks[:-1]))
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      chunk_size=strategies.integers(
-                          min_value=1,
-                          max_value=32 * 1048576),
-                      input_sizes=strategies.data(),
-                      flushes=strategies.data())
-    def test_flush_block(self, original, level, chunk_size, input_sizes,
-                         flushes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        chunk_size=strategies.integers(min_value=1, max_value=32 * 1048576),
+        input_sizes=strategies.data(),
+        flushes=strategies.data(),
+    )
+    def test_flush_block(self, original, level, chunk_size, input_sizes, flushes):
         cctx = zstd.ZstdCompressor(level=level)
         chunker = cctx.chunker(chunk_size=chunk_size)
 
@@ -682,7 +806,7 @@
         i = 0
         while True:
             input_size = input_sizes.draw(strategies.integers(1, 4096))
-            source = original[i:i + input_size]
+            source = original[i : i + input_size]
             if not source:
                 break
 
@@ -690,22 +814,23 @@
 
             chunks = list(chunker.compress(source))
             compressed_chunks.extend(chunks)
-            decompressed_chunks.append(dobj.decompress(b''.join(chunks)))
+            decompressed_chunks.append(dobj.decompress(b"".join(chunks)))
 
             if not flushes.draw(strategies.booleans()):
                 continue
 
             chunks = list(chunker.flush())
             compressed_chunks.extend(chunks)
-            decompressed_chunks.append(dobj.decompress(b''.join(chunks)))
+            decompressed_chunks.append(dobj.decompress(b"".join(chunks)))
 
-            self.assertEqual(b''.join(decompressed_chunks), original[0:i])
+            self.assertEqual(b"".join(decompressed_chunks), original[0:i])
 
         chunks = list(chunker.finish())
         compressed_chunks.extend(chunks)
-        decompressed_chunks.append(dobj.decompress(b''.join(chunks)))
+        decompressed_chunks.append(dobj.decompress(b"".join(chunks)))
 
-        self.assertEqual(dctx.decompress(b''.join(compressed_chunks),
-                                         max_output_size=len(original)),
-                         original)
-        self.assertEqual(b''.join(decompressed_chunks), original)
\ No newline at end of file
+        self.assertEqual(
+            dctx.decompress(b"".join(compressed_chunks), max_output_size=len(original)),
+            original,
+        )
+        self.assertEqual(b"".join(decompressed_chunks), original)
--- a/contrib/python-zstandard/tests/test_data_structures.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/tests/test_data_structures.py	Sat Dec 28 09:55:45 2019 -0800
@@ -3,29 +3,34 @@
 
 import zstandard as zstd
 
-from . common import (
+from .common import (
     make_cffi,
+    TestCase,
 )
 
 
 @make_cffi
-class TestCompressionParameters(unittest.TestCase):
+class TestCompressionParameters(TestCase):
     def test_bounds(self):
-        zstd.ZstdCompressionParameters(window_log=zstd.WINDOWLOG_MIN,
-                                       chain_log=zstd.CHAINLOG_MIN,
-                                       hash_log=zstd.HASHLOG_MIN,
-                                       search_log=zstd.SEARCHLOG_MIN,
-                                       min_match=zstd.MINMATCH_MIN + 1,
-                                       target_length=zstd.TARGETLENGTH_MIN,
-                                       strategy=zstd.STRATEGY_FAST)
+        zstd.ZstdCompressionParameters(
+            window_log=zstd.WINDOWLOG_MIN,
+            chain_log=zstd.CHAINLOG_MIN,
+            hash_log=zstd.HASHLOG_MIN,
+            search_log=zstd.SEARCHLOG_MIN,
+            min_match=zstd.MINMATCH_MIN + 1,
+            target_length=zstd.TARGETLENGTH_MIN,
+            strategy=zstd.STRATEGY_FAST,
+        )
 
-        zstd.ZstdCompressionParameters(window_log=zstd.WINDOWLOG_MAX,
-                                       chain_log=zstd.CHAINLOG_MAX,
-                                       hash_log=zstd.HASHLOG_MAX,
-                                       search_log=zstd.SEARCHLOG_MAX,
-                                       min_match=zstd.MINMATCH_MAX - 1,
-                                       target_length=zstd.TARGETLENGTH_MAX,
-                                       strategy=zstd.STRATEGY_BTULTRA2)
+        zstd.ZstdCompressionParameters(
+            window_log=zstd.WINDOWLOG_MAX,
+            chain_log=zstd.CHAINLOG_MAX,
+            hash_log=zstd.HASHLOG_MAX,
+            search_log=zstd.SEARCHLOG_MAX,
+            min_match=zstd.MINMATCH_MAX - 1,
+            target_length=zstd.TARGETLENGTH_MAX,
+            strategy=zstd.STRATEGY_BTULTRA2,
+        )
 
     def test_from_level(self):
         p = zstd.ZstdCompressionParameters.from_level(1)
@@ -37,13 +42,15 @@
         self.assertEqual(p.window_log, 19)
 
     def test_members(self):
-        p = zstd.ZstdCompressionParameters(window_log=10,
-                                           chain_log=6,
-                                           hash_log=7,
-                                           search_log=4,
-                                           min_match=5,
-                                           target_length=8,
-                                           strategy=1)
+        p = zstd.ZstdCompressionParameters(
+            window_log=10,
+            chain_log=6,
+            hash_log=7,
+            search_log=4,
+            min_match=5,
+            target_length=8,
+            strategy=1,
+        )
         self.assertEqual(p.window_log, 10)
         self.assertEqual(p.chain_log, 6)
         self.assertEqual(p.hash_log, 7)
@@ -58,8 +65,7 @@
         p = zstd.ZstdCompressionParameters(threads=4)
         self.assertEqual(p.threads, 4)
 
-        p = zstd.ZstdCompressionParameters(threads=2, job_size=1048576,
-                                           overlap_log=6)
+        p = zstd.ZstdCompressionParameters(threads=2, job_size=1048576, overlap_log=6)
         self.assertEqual(p.threads, 2)
         self.assertEqual(p.job_size, 1048576)
         self.assertEqual(p.overlap_log, 6)
@@ -91,20 +97,25 @@
         self.assertEqual(p.ldm_hash_rate_log, 8)
 
     def test_estimated_compression_context_size(self):
-        p = zstd.ZstdCompressionParameters(window_log=20,
-                                           chain_log=16,
-                                           hash_log=17,
-                                           search_log=1,
-                                           min_match=5,
-                                           target_length=16,
-                                           strategy=zstd.STRATEGY_DFAST)
+        p = zstd.ZstdCompressionParameters(
+            window_log=20,
+            chain_log=16,
+            hash_log=17,
+            search_log=1,
+            min_match=5,
+            target_length=16,
+            strategy=zstd.STRATEGY_DFAST,
+        )
 
         # 32-bit has slightly different values from 64-bit.
-        self.assertAlmostEqual(p.estimated_compression_context_size(), 1294144,
-                               delta=250)
+        self.assertAlmostEqual(
+            p.estimated_compression_context_size(), 1294464, delta=400
+        )
 
     def test_strategy(self):
-        with self.assertRaisesRegexp(ValueError, 'cannot specify both compression_strategy'):
+        with self.assertRaisesRegex(
+            ValueError, "cannot specify both compression_strategy"
+        ):
             zstd.ZstdCompressionParameters(strategy=0, compression_strategy=0)
 
         p = zstd.ZstdCompressionParameters(strategy=2)
@@ -114,7 +125,9 @@
         self.assertEqual(p.compression_strategy, 3)
 
     def test_ldm_hash_rate_log(self):
-        with self.assertRaisesRegexp(ValueError, 'cannot specify both ldm_hash_rate_log'):
+        with self.assertRaisesRegex(
+            ValueError, "cannot specify both ldm_hash_rate_log"
+        ):
             zstd.ZstdCompressionParameters(ldm_hash_rate_log=8, ldm_hash_every_log=4)
 
         p = zstd.ZstdCompressionParameters(ldm_hash_rate_log=8)
@@ -124,7 +137,7 @@
         self.assertEqual(p.ldm_hash_every_log, 16)
 
     def test_overlap_log(self):
-        with self.assertRaisesRegexp(ValueError, 'cannot specify both overlap_log'):
+        with self.assertRaisesRegex(ValueError, "cannot specify both overlap_log"):
             zstd.ZstdCompressionParameters(overlap_log=1, overlap_size_log=9)
 
         p = zstd.ZstdCompressionParameters(overlap_log=2)
@@ -137,7 +150,7 @@
 
 
 @make_cffi
-class TestFrameParameters(unittest.TestCase):
+class TestFrameParameters(TestCase):
     def test_invalid_type(self):
         with self.assertRaises(TypeError):
             zstd.get_frame_parameters(None)
@@ -145,71 +158,71 @@
         # Python 3 doesn't appear to convert unicode to Py_buffer.
         if sys.version_info[0] >= 3:
             with self.assertRaises(TypeError):
-                zstd.get_frame_parameters(u'foobarbaz')
+                zstd.get_frame_parameters(u"foobarbaz")
         else:
             # CPython will convert unicode to Py_buffer. But CFFI won't.
-            if zstd.backend == 'cffi':
+            if zstd.backend == "cffi":
                 with self.assertRaises(TypeError):
-                    zstd.get_frame_parameters(u'foobarbaz')
+                    zstd.get_frame_parameters(u"foobarbaz")
             else:
                 with self.assertRaises(zstd.ZstdError):
-                    zstd.get_frame_parameters(u'foobarbaz')
+                    zstd.get_frame_parameters(u"foobarbaz")
 
     def test_invalid_input_sizes(self):
-        with self.assertRaisesRegexp(zstd.ZstdError, 'not enough data for frame'):
-            zstd.get_frame_parameters(b'')
+        with self.assertRaisesRegex(zstd.ZstdError, "not enough data for frame"):
+            zstd.get_frame_parameters(b"")
 
-        with self.assertRaisesRegexp(zstd.ZstdError, 'not enough data for frame'):
+        with self.assertRaisesRegex(zstd.ZstdError, "not enough data for frame"):
             zstd.get_frame_parameters(zstd.FRAME_HEADER)
 
     def test_invalid_frame(self):
-        with self.assertRaisesRegexp(zstd.ZstdError, 'Unknown frame descriptor'):
-            zstd.get_frame_parameters(b'foobarbaz')
+        with self.assertRaisesRegex(zstd.ZstdError, "Unknown frame descriptor"):
+            zstd.get_frame_parameters(b"foobarbaz")
 
     def test_attributes(self):
-        params = zstd.get_frame_parameters(zstd.FRAME_HEADER + b'\x00\x00')
+        params = zstd.get_frame_parameters(zstd.FRAME_HEADER + b"\x00\x00")
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
         self.assertEqual(params.window_size, 1024)
         self.assertEqual(params.dict_id, 0)
         self.assertFalse(params.has_checksum)
 
         # Lowest 2 bits indicate a dictionary and length. Here, the dict id is 1 byte.
-        params = zstd.get_frame_parameters(zstd.FRAME_HEADER + b'\x01\x00\xff')
+        params = zstd.get_frame_parameters(zstd.FRAME_HEADER + b"\x01\x00\xff")
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
         self.assertEqual(params.window_size, 1024)
         self.assertEqual(params.dict_id, 255)
         self.assertFalse(params.has_checksum)
 
         # Lowest 3rd bit indicates if checksum is present.
-        params = zstd.get_frame_parameters(zstd.FRAME_HEADER + b'\x04\x00')
+        params = zstd.get_frame_parameters(zstd.FRAME_HEADER + b"\x04\x00")
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
         self.assertEqual(params.window_size, 1024)
         self.assertEqual(params.dict_id, 0)
         self.assertTrue(params.has_checksum)
 
         # Upper 2 bits indicate content size.
-        params = zstd.get_frame_parameters(zstd.FRAME_HEADER + b'\x40\x00\xff\x00')
+        params = zstd.get_frame_parameters(zstd.FRAME_HEADER + b"\x40\x00\xff\x00")
         self.assertEqual(params.content_size, 511)
         self.assertEqual(params.window_size, 1024)
         self.assertEqual(params.dict_id, 0)
         self.assertFalse(params.has_checksum)
 
         # Window descriptor is 2nd byte after frame header.
-        params = zstd.get_frame_parameters(zstd.FRAME_HEADER + b'\x00\x40')
+        params = zstd.get_frame_parameters(zstd.FRAME_HEADER + b"\x00\x40")
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
         self.assertEqual(params.window_size, 262144)
         self.assertEqual(params.dict_id, 0)
         self.assertFalse(params.has_checksum)
 
         # Set multiple things.
-        params = zstd.get_frame_parameters(zstd.FRAME_HEADER + b'\x45\x40\x0f\x10\x00')
+        params = zstd.get_frame_parameters(zstd.FRAME_HEADER + b"\x45\x40\x0f\x10\x00")
         self.assertEqual(params.content_size, 272)
         self.assertEqual(params.window_size, 262144)
         self.assertEqual(params.dict_id, 15)
         self.assertTrue(params.has_checksum)
 
     def test_input_types(self):
-        v = zstd.FRAME_HEADER + b'\x00\x00'
+        v = zstd.FRAME_HEADER + b"\x00\x00"
 
         mutable_array = bytearray(len(v))
         mutable_array[:] = v
--- a/contrib/python-zstandard/tests/test_data_structures_fuzzing.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/tests/test_data_structures_fuzzing.py	Sat Dec 28 09:55:45 2019 -0800
@@ -7,70 +7,99 @@
     import hypothesis
     import hypothesis.strategies as strategies
 except ImportError:
-    raise unittest.SkipTest('hypothesis not available')
+    raise unittest.SkipTest("hypothesis not available")
 
 import zstandard as zstd
 
 from .common import (
     make_cffi,
+    TestCase,
+)
+
+
+s_windowlog = strategies.integers(
+    min_value=zstd.WINDOWLOG_MIN, max_value=zstd.WINDOWLOG_MAX
+)
+s_chainlog = strategies.integers(
+    min_value=zstd.CHAINLOG_MIN, max_value=zstd.CHAINLOG_MAX
+)
+s_hashlog = strategies.integers(min_value=zstd.HASHLOG_MIN, max_value=zstd.HASHLOG_MAX)
+s_searchlog = strategies.integers(
+    min_value=zstd.SEARCHLOG_MIN, max_value=zstd.SEARCHLOG_MAX
+)
+s_minmatch = strategies.integers(
+    min_value=zstd.MINMATCH_MIN, max_value=zstd.MINMATCH_MAX
+)
+s_targetlength = strategies.integers(
+    min_value=zstd.TARGETLENGTH_MIN, max_value=zstd.TARGETLENGTH_MAX
+)
+s_strategy = strategies.sampled_from(
+    (
+        zstd.STRATEGY_FAST,
+        zstd.STRATEGY_DFAST,
+        zstd.STRATEGY_GREEDY,
+        zstd.STRATEGY_LAZY,
+        zstd.STRATEGY_LAZY2,
+        zstd.STRATEGY_BTLAZY2,
+        zstd.STRATEGY_BTOPT,
+        zstd.STRATEGY_BTULTRA,
+        zstd.STRATEGY_BTULTRA2,
+    )
 )
 
 
-s_windowlog = strategies.integers(min_value=zstd.WINDOWLOG_MIN,
-                                    max_value=zstd.WINDOWLOG_MAX)
-s_chainlog = strategies.integers(min_value=zstd.CHAINLOG_MIN,
-                                    max_value=zstd.CHAINLOG_MAX)
-s_hashlog = strategies.integers(min_value=zstd.HASHLOG_MIN,
-                                max_value=zstd.HASHLOG_MAX)
-s_searchlog = strategies.integers(min_value=zstd.SEARCHLOG_MIN,
-                                    max_value=zstd.SEARCHLOG_MAX)
-s_minmatch = strategies.integers(min_value=zstd.MINMATCH_MIN,
-                                 max_value=zstd.MINMATCH_MAX)
-s_targetlength = strategies.integers(min_value=zstd.TARGETLENGTH_MIN,
-                                     max_value=zstd.TARGETLENGTH_MAX)
-s_strategy = strategies.sampled_from((zstd.STRATEGY_FAST,
-                                        zstd.STRATEGY_DFAST,
-                                        zstd.STRATEGY_GREEDY,
-                                        zstd.STRATEGY_LAZY,
-                                        zstd.STRATEGY_LAZY2,
-                                        zstd.STRATEGY_BTLAZY2,
-                                        zstd.STRATEGY_BTOPT,
-                                        zstd.STRATEGY_BTULTRA,
-                                        zstd.STRATEGY_BTULTRA2))
-
+@make_cffi
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
+class TestCompressionParametersHypothesis(TestCase):
+    @hypothesis.given(
+        s_windowlog,
+        s_chainlog,
+        s_hashlog,
+        s_searchlog,
+        s_minmatch,
+        s_targetlength,
+        s_strategy,
+    )
+    def test_valid_init(
+        self, windowlog, chainlog, hashlog, searchlog, minmatch, targetlength, strategy
+    ):
+        zstd.ZstdCompressionParameters(
+            window_log=windowlog,
+            chain_log=chainlog,
+            hash_log=hashlog,
+            search_log=searchlog,
+            min_match=minmatch,
+            target_length=targetlength,
+            strategy=strategy,
+        )
 
-@make_cffi
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
-class TestCompressionParametersHypothesis(unittest.TestCase):
-    @hypothesis.given(s_windowlog, s_chainlog, s_hashlog, s_searchlog,
-                        s_minmatch, s_targetlength, s_strategy)
-    def test_valid_init(self, windowlog, chainlog, hashlog, searchlog,
-                        minmatch, targetlength, strategy):
-        zstd.ZstdCompressionParameters(window_log=windowlog,
-                                       chain_log=chainlog,
-                                       hash_log=hashlog,
-                                       search_log=searchlog,
-                                       min_match=minmatch,
-                                       target_length=targetlength,
-                                       strategy=strategy)
-
-    @hypothesis.given(s_windowlog, s_chainlog, s_hashlog, s_searchlog,
-                      s_minmatch, s_targetlength, s_strategy)
-    def test_estimated_compression_context_size(self, windowlog, chainlog,
-                                                hashlog, searchlog,
-                                                minmatch, targetlength,
-                                                strategy):
-        if minmatch == zstd.MINMATCH_MIN and strategy in (zstd.STRATEGY_FAST, zstd.STRATEGY_GREEDY):
+    @hypothesis.given(
+        s_windowlog,
+        s_chainlog,
+        s_hashlog,
+        s_searchlog,
+        s_minmatch,
+        s_targetlength,
+        s_strategy,
+    )
+    def test_estimated_compression_context_size(
+        self, windowlog, chainlog, hashlog, searchlog, minmatch, targetlength, strategy
+    ):
+        if minmatch == zstd.MINMATCH_MIN and strategy in (
+            zstd.STRATEGY_FAST,
+            zstd.STRATEGY_GREEDY,
+        ):
             minmatch += 1
         elif minmatch == zstd.MINMATCH_MAX and strategy != zstd.STRATEGY_FAST:
             minmatch -= 1
 
-        p = zstd.ZstdCompressionParameters(window_log=windowlog,
-                                           chain_log=chainlog,
-                                           hash_log=hashlog,
-                                           search_log=searchlog,
-                                           min_match=minmatch,
-                                           target_length=targetlength,
-                                           strategy=strategy)
+        p = zstd.ZstdCompressionParameters(
+            window_log=windowlog,
+            chain_log=chainlog,
+            hash_log=hashlog,
+            search_log=searchlog,
+            min_match=minmatch,
+            target_length=targetlength,
+            strategy=strategy,
+        )
         size = p.estimated_compression_context_size()
-
--- a/contrib/python-zstandard/tests/test_decompressor.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/tests/test_decompressor.py	Sat Dec 28 09:55:45 2019 -0800
@@ -13,6 +13,7 @@
     make_cffi,
     NonClosingBytesIO,
     OpCountingBytesIO,
+    TestCase,
 )
 
 
@@ -23,62 +24,67 @@
 
 
 @make_cffi
-class TestFrameHeaderSize(unittest.TestCase):
+class TestFrameHeaderSize(TestCase):
     def test_empty(self):
-        with self.assertRaisesRegexp(
-            zstd.ZstdError, 'could not determine frame header size: Src size '
-                            'is incorrect'):
-            zstd.frame_header_size(b'')
+        with self.assertRaisesRegex(
+            zstd.ZstdError,
+            "could not determine frame header size: Src size " "is incorrect",
+        ):
+            zstd.frame_header_size(b"")
 
     def test_too_small(self):
-        with self.assertRaisesRegexp(
-            zstd.ZstdError, 'could not determine frame header size: Src size '
-                            'is incorrect'):
-            zstd.frame_header_size(b'foob')
+        with self.assertRaisesRegex(
+            zstd.ZstdError,
+            "could not determine frame header size: Src size " "is incorrect",
+        ):
+            zstd.frame_header_size(b"foob")
 
     def test_basic(self):
         # It doesn't matter that it isn't a valid frame.
-        self.assertEqual(zstd.frame_header_size(b'long enough but no magic'), 6)
+        self.assertEqual(zstd.frame_header_size(b"long enough but no magic"), 6)
 
 
 @make_cffi
-class TestFrameContentSize(unittest.TestCase):
+class TestFrameContentSize(TestCase):
     def test_empty(self):
-        with self.assertRaisesRegexp(zstd.ZstdError,
-                                     'error when determining content size'):
-            zstd.frame_content_size(b'')
+        with self.assertRaisesRegex(
+            zstd.ZstdError, "error when determining content size"
+        ):
+            zstd.frame_content_size(b"")
 
     def test_too_small(self):
-        with self.assertRaisesRegexp(zstd.ZstdError,
-                                     'error when determining content size'):
-            zstd.frame_content_size(b'foob')
+        with self.assertRaisesRegex(
+            zstd.ZstdError, "error when determining content size"
+        ):
+            zstd.frame_content_size(b"foob")
 
     def test_bad_frame(self):
-        with self.assertRaisesRegexp(zstd.ZstdError,
-                                     'error when determining content size'):
-            zstd.frame_content_size(b'invalid frame header')
+        with self.assertRaisesRegex(
+            zstd.ZstdError, "error when determining content size"
+        ):
+            zstd.frame_content_size(b"invalid frame header")
 
     def test_unknown(self):
         cctx = zstd.ZstdCompressor(write_content_size=False)
-        frame = cctx.compress(b'foobar')
+        frame = cctx.compress(b"foobar")
 
         self.assertEqual(zstd.frame_content_size(frame), -1)
 
     def test_empty(self):
         cctx = zstd.ZstdCompressor()
-        frame = cctx.compress(b'')
+        frame = cctx.compress(b"")
 
         self.assertEqual(zstd.frame_content_size(frame), 0)
 
     def test_basic(self):
         cctx = zstd.ZstdCompressor()
-        frame = cctx.compress(b'foobar')
+        frame = cctx.compress(b"foobar")
 
         self.assertEqual(zstd.frame_content_size(frame), 6)
 
 
 @make_cffi
-class TestDecompressor(unittest.TestCase):
+class TestDecompressor(TestCase):
     def test_memory_size(self):
         dctx = zstd.ZstdDecompressor()
 
@@ -86,22 +92,26 @@
 
 
 @make_cffi
-class TestDecompressor_decompress(unittest.TestCase):
+class TestDecompressor_decompress(TestCase):
     def test_empty_input(self):
         dctx = zstd.ZstdDecompressor()
 
-        with self.assertRaisesRegexp(zstd.ZstdError, 'error determining content size from frame header'):
-            dctx.decompress(b'')
+        with self.assertRaisesRegex(
+            zstd.ZstdError, "error determining content size from frame header"
+        ):
+            dctx.decompress(b"")
 
     def test_invalid_input(self):
         dctx = zstd.ZstdDecompressor()
 
-        with self.assertRaisesRegexp(zstd.ZstdError, 'error determining content size from frame header'):
-            dctx.decompress(b'foobar')
+        with self.assertRaisesRegex(
+            zstd.ZstdError, "error determining content size from frame header"
+        ):
+            dctx.decompress(b"foobar")
 
     def test_input_types(self):
         cctx = zstd.ZstdCompressor(level=1)
-        compressed = cctx.compress(b'foo')
+        compressed = cctx.compress(b"foo")
 
         mutable_array = bytearray(len(compressed))
         mutable_array[:] = compressed
@@ -114,36 +124,38 @@
 
         dctx = zstd.ZstdDecompressor()
         for source in sources:
-            self.assertEqual(dctx.decompress(source), b'foo')
+            self.assertEqual(dctx.decompress(source), b"foo")
 
     def test_no_content_size_in_frame(self):
         cctx = zstd.ZstdCompressor(write_content_size=False)
-        compressed = cctx.compress(b'foobar')
+        compressed = cctx.compress(b"foobar")
 
         dctx = zstd.ZstdDecompressor()
-        with self.assertRaisesRegexp(zstd.ZstdError, 'could not determine content size in frame header'):
+        with self.assertRaisesRegex(
+            zstd.ZstdError, "could not determine content size in frame header"
+        ):
             dctx.decompress(compressed)
 
     def test_content_size_present(self):
         cctx = zstd.ZstdCompressor()
-        compressed = cctx.compress(b'foobar')
+        compressed = cctx.compress(b"foobar")
 
         dctx = zstd.ZstdDecompressor()
         decompressed = dctx.decompress(compressed)
-        self.assertEqual(decompressed, b'foobar')
+        self.assertEqual(decompressed, b"foobar")
 
     def test_empty_roundtrip(self):
         cctx = zstd.ZstdCompressor()
-        compressed = cctx.compress(b'')
+        compressed = cctx.compress(b"")
 
         dctx = zstd.ZstdDecompressor()
         decompressed = dctx.decompress(compressed)
 
-        self.assertEqual(decompressed, b'')
+        self.assertEqual(decompressed, b"")
 
     def test_max_output_size(self):
         cctx = zstd.ZstdCompressor(write_content_size=False)
-        source = b'foobar' * 256
+        source = b"foobar" * 256
         compressed = cctx.compress(source)
 
         dctx = zstd.ZstdDecompressor()
@@ -152,8 +164,9 @@
         self.assertEqual(decompressed, source)
 
         # Input size - 1 fails
-        with self.assertRaisesRegexp(zstd.ZstdError,
-                'decompression error: did not decompress full frame'):
+        with self.assertRaisesRegex(
+            zstd.ZstdError, "decompression error: did not decompress full frame"
+        ):
             dctx.decompress(compressed, max_output_size=len(source) - 1)
 
         # Input size + 1 works
@@ -166,24 +179,24 @@
 
     def test_stupidly_large_output_buffer(self):
         cctx = zstd.ZstdCompressor(write_content_size=False)
-        compressed = cctx.compress(b'foobar' * 256)
+        compressed = cctx.compress(b"foobar" * 256)
         dctx = zstd.ZstdDecompressor()
 
         # Will get OverflowError on some Python distributions that can't
         # handle really large integers.
         with self.assertRaises((MemoryError, OverflowError)):
-            dctx.decompress(compressed, max_output_size=2**62)
+            dctx.decompress(compressed, max_output_size=2 ** 62)
 
     def test_dictionary(self):
         samples = []
         for i in range(128):
-            samples.append(b'foo' * 64)
-            samples.append(b'bar' * 64)
-            samples.append(b'foobar' * 64)
+            samples.append(b"foo" * 64)
+            samples.append(b"bar" * 64)
+            samples.append(b"foobar" * 64)
 
         d = zstd.train_dictionary(8192, samples)
 
-        orig = b'foobar' * 16384
+        orig = b"foobar" * 16384
         cctx = zstd.ZstdCompressor(level=1, dict_data=d)
         compressed = cctx.compress(orig)
 
@@ -195,13 +208,13 @@
     def test_dictionary_multiple(self):
         samples = []
         for i in range(128):
-            samples.append(b'foo' * 64)
-            samples.append(b'bar' * 64)
-            samples.append(b'foobar' * 64)
+            samples.append(b"foo" * 64)
+            samples.append(b"bar" * 64)
+            samples.append(b"foobar" * 64)
 
         d = zstd.train_dictionary(8192, samples)
 
-        sources = (b'foobar' * 8192, b'foo' * 8192, b'bar' * 8192)
+        sources = (b"foobar" * 8192, b"foo" * 8192, b"bar" * 8192)
         compressed = []
         cctx = zstd.ZstdCompressor(level=1, dict_data=d)
         for source in sources:
@@ -213,7 +226,7 @@
             self.assertEqual(decompressed, sources[i])
 
     def test_max_window_size(self):
-        with open(__file__, 'rb') as fh:
+        with open(__file__, "rb") as fh:
             source = fh.read()
 
         # If we write a content size, the decompressor engages single pass
@@ -221,15 +234,16 @@
         cctx = zstd.ZstdCompressor(write_content_size=False)
         frame = cctx.compress(source)
 
-        dctx = zstd.ZstdDecompressor(max_window_size=2**zstd.WINDOWLOG_MIN)
+        dctx = zstd.ZstdDecompressor(max_window_size=2 ** zstd.WINDOWLOG_MIN)
 
-        with self.assertRaisesRegexp(
-            zstd.ZstdError, 'decompression error: Frame requires too much memory'):
+        with self.assertRaisesRegex(
+            zstd.ZstdError, "decompression error: Frame requires too much memory"
+        ):
             dctx.decompress(frame, max_output_size=len(source))
 
 
 @make_cffi
-class TestDecompressor_copy_stream(unittest.TestCase):
+class TestDecompressor_copy_stream(TestCase):
     def test_no_read(self):
         source = object()
         dest = io.BytesIO()
@@ -256,12 +270,12 @@
 
         self.assertEqual(r, 0)
         self.assertEqual(w, 0)
-        self.assertEqual(dest.getvalue(), b'')
+        self.assertEqual(dest.getvalue(), b"")
 
     def test_large_data(self):
         source = io.BytesIO()
         for i in range(255):
-            source.write(struct.Struct('>B').pack(i) * 16384)
+            source.write(struct.Struct(">B").pack(i) * 16384)
         source.seek(0)
 
         compressed = io.BytesIO()
@@ -277,33 +291,32 @@
         self.assertEqual(w, len(source.getvalue()))
 
     def test_read_write_size(self):
-        source = OpCountingBytesIO(zstd.ZstdCompressor().compress(
-            b'foobarfoobar'))
+        source = OpCountingBytesIO(zstd.ZstdCompressor().compress(b"foobarfoobar"))
 
         dest = OpCountingBytesIO()
         dctx = zstd.ZstdDecompressor()
         r, w = dctx.copy_stream(source, dest, read_size=1, write_size=1)
 
         self.assertEqual(r, len(source.getvalue()))
-        self.assertEqual(w, len(b'foobarfoobar'))
+        self.assertEqual(w, len(b"foobarfoobar"))
         self.assertEqual(source._read_count, len(source.getvalue()) + 1)
         self.assertEqual(dest._write_count, len(dest.getvalue()))
 
 
 @make_cffi
-class TestDecompressor_stream_reader(unittest.TestCase):
+class TestDecompressor_stream_reader(TestCase):
     def test_context_manager(self):
         dctx = zstd.ZstdDecompressor()
 
-        with dctx.stream_reader(b'foo') as reader:
-            with self.assertRaisesRegexp(ValueError, 'cannot __enter__ multiple times'):
+        with dctx.stream_reader(b"foo") as reader:
+            with self.assertRaisesRegex(ValueError, "cannot __enter__ multiple times"):
                 with reader as reader2:
                     pass
 
     def test_not_implemented(self):
         dctx = zstd.ZstdDecompressor()
 
-        with dctx.stream_reader(b'foo') as reader:
+        with dctx.stream_reader(b"foo") as reader:
             with self.assertRaises(io.UnsupportedOperation):
                 reader.readline()
 
@@ -317,7 +330,7 @@
                 next(reader)
 
             with self.assertRaises(io.UnsupportedOperation):
-                reader.write(b'foo')
+                reader.write(b"foo")
 
             with self.assertRaises(io.UnsupportedOperation):
                 reader.writelines([])
@@ -325,7 +338,7 @@
     def test_constant_methods(self):
         dctx = zstd.ZstdDecompressor()
 
-        with dctx.stream_reader(b'foo') as reader:
+        with dctx.stream_reader(b"foo") as reader:
             self.assertFalse(reader.closed)
             self.assertTrue(reader.readable())
             self.assertFalse(reader.writable())
@@ -340,29 +353,31 @@
     def test_read_closed(self):
         dctx = zstd.ZstdDecompressor()
 
-        with dctx.stream_reader(b'foo') as reader:
+        with dctx.stream_reader(b"foo") as reader:
             reader.close()
             self.assertTrue(reader.closed)
-            with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+            with self.assertRaisesRegex(ValueError, "stream is closed"):
                 reader.read(1)
 
     def test_read_sizes(self):
         cctx = zstd.ZstdCompressor()
-        foo = cctx.compress(b'foo')
+        foo = cctx.compress(b"foo")
 
         dctx = zstd.ZstdDecompressor()
 
         with dctx.stream_reader(foo) as reader:
-            with self.assertRaisesRegexp(ValueError, 'cannot read negative amounts less than -1'):
+            with self.assertRaisesRegex(
+                ValueError, "cannot read negative amounts less than -1"
+            ):
                 reader.read(-2)
 
-            self.assertEqual(reader.read(0), b'')
-            self.assertEqual(reader.read(), b'foo')
+            self.assertEqual(reader.read(0), b"")
+            self.assertEqual(reader.read(), b"foo")
 
     def test_read_buffer(self):
         cctx = zstd.ZstdCompressor()
 
-        source = b''.join([b'foo' * 60, b'bar' * 60, b'baz' * 60])
+        source = b"".join([b"foo" * 60, b"bar" * 60, b"baz" * 60])
         frame = cctx.compress(source)
 
         dctx = zstd.ZstdDecompressor()
@@ -376,14 +391,14 @@
             self.assertEqual(reader.tell(), len(source))
 
             # Read after EOF should return empty bytes.
-            self.assertEqual(reader.read(1), b'')
+            self.assertEqual(reader.read(1), b"")
             self.assertEqual(reader.tell(), len(result))
 
         self.assertTrue(reader.closed)
 
     def test_read_buffer_small_chunks(self):
         cctx = zstd.ZstdCompressor()
-        source = b''.join([b'foo' * 60, b'bar' * 60, b'baz' * 60])
+        source = b"".join([b"foo" * 60, b"bar" * 60, b"baz" * 60])
         frame = cctx.compress(source)
 
         dctx = zstd.ZstdDecompressor()
@@ -398,11 +413,11 @@
                 chunks.append(chunk)
                 self.assertEqual(reader.tell(), sum(map(len, chunks)))
 
-        self.assertEqual(b''.join(chunks), source)
+        self.assertEqual(b"".join(chunks), source)
 
     def test_read_stream(self):
         cctx = zstd.ZstdCompressor()
-        source = b''.join([b'foo' * 60, b'bar' * 60, b'baz' * 60])
+        source = b"".join([b"foo" * 60, b"bar" * 60, b"baz" * 60])
         frame = cctx.compress(source)
 
         dctx = zstd.ZstdDecompressor()
@@ -412,7 +427,7 @@
             chunk = reader.read(8192)
             self.assertEqual(chunk, source)
             self.assertEqual(reader.tell(), len(source))
-            self.assertEqual(reader.read(1), b'')
+            self.assertEqual(reader.read(1), b"")
             self.assertEqual(reader.tell(), len(source))
             self.assertFalse(reader.closed)
 
@@ -420,7 +435,7 @@
 
     def test_read_stream_small_chunks(self):
         cctx = zstd.ZstdCompressor()
-        source = b''.join([b'foo' * 60, b'bar' * 60, b'baz' * 60])
+        source = b"".join([b"foo" * 60, b"bar" * 60, b"baz" * 60])
         frame = cctx.compress(source)
 
         dctx = zstd.ZstdDecompressor()
@@ -435,11 +450,11 @@
                 chunks.append(chunk)
                 self.assertEqual(reader.tell(), sum(map(len, chunks)))
 
-        self.assertEqual(b''.join(chunks), source)
+        self.assertEqual(b"".join(chunks), source)
 
     def test_read_after_exit(self):
         cctx = zstd.ZstdCompressor()
-        frame = cctx.compress(b'foo' * 60)
+        frame = cctx.compress(b"foo" * 60)
 
         dctx = zstd.ZstdDecompressor()
 
@@ -449,45 +464,46 @@
 
         self.assertTrue(reader.closed)
 
-        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+        with self.assertRaisesRegex(ValueError, "stream is closed"):
             reader.read(10)
 
     def test_illegal_seeks(self):
         cctx = zstd.ZstdCompressor()
-        frame = cctx.compress(b'foo' * 60)
+        frame = cctx.compress(b"foo" * 60)
 
         dctx = zstd.ZstdDecompressor()
 
         with dctx.stream_reader(frame) as reader:
-            with self.assertRaisesRegexp(ValueError,
-                                         'cannot seek to negative position'):
+            with self.assertRaisesRegex(ValueError, "cannot seek to negative position"):
                 reader.seek(-1, os.SEEK_SET)
 
             reader.read(1)
 
-            with self.assertRaisesRegexp(
-                ValueError, 'cannot seek zstd decompression stream backwards'):
+            with self.assertRaisesRegex(
+                ValueError, "cannot seek zstd decompression stream backwards"
+            ):
                 reader.seek(0, os.SEEK_SET)
 
-            with self.assertRaisesRegexp(
-                ValueError, 'cannot seek zstd decompression stream backwards'):
+            with self.assertRaisesRegex(
+                ValueError, "cannot seek zstd decompression stream backwards"
+            ):
                 reader.seek(-1, os.SEEK_CUR)
 
-            with self.assertRaisesRegexp(
-                ValueError,
-                'zstd decompression streams cannot be seeked with SEEK_END'):
+            with self.assertRaisesRegex(
+                ValueError, "zstd decompression streams cannot be seeked with SEEK_END"
+            ):
                 reader.seek(0, os.SEEK_END)
 
             reader.close()
 
-            with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+            with self.assertRaisesRegex(ValueError, "stream is closed"):
                 reader.seek(4, os.SEEK_SET)
 
-        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+        with self.assertRaisesRegex(ValueError, "stream is closed"):
             reader.seek(0)
 
     def test_seek(self):
-        source = b'foobar' * 60
+        source = b"foobar" * 60
         cctx = zstd.ZstdCompressor()
         frame = cctx.compress(source)
 
@@ -495,32 +511,32 @@
 
         with dctx.stream_reader(frame) as reader:
             reader.seek(3)
-            self.assertEqual(reader.read(3), b'bar')
+            self.assertEqual(reader.read(3), b"bar")
 
             reader.seek(4, os.SEEK_CUR)
-            self.assertEqual(reader.read(2), b'ar')
+            self.assertEqual(reader.read(2), b"ar")
 
     def test_no_context_manager(self):
-        source = b'foobar' * 60
+        source = b"foobar" * 60
         cctx = zstd.ZstdCompressor()
         frame = cctx.compress(source)
 
         dctx = zstd.ZstdDecompressor()
         reader = dctx.stream_reader(frame)
 
-        self.assertEqual(reader.read(6), b'foobar')
-        self.assertEqual(reader.read(18), b'foobar' * 3)
+        self.assertEqual(reader.read(6), b"foobar")
+        self.assertEqual(reader.read(18), b"foobar" * 3)
         self.assertFalse(reader.closed)
 
         # Calling close prevents subsequent use.
         reader.close()
         self.assertTrue(reader.closed)
 
-        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+        with self.assertRaisesRegex(ValueError, "stream is closed"):
             reader.read(6)
 
     def test_read_after_error(self):
-        source = io.BytesIO(b'')
+        source = io.BytesIO(b"")
         dctx = zstd.ZstdDecompressor()
 
         reader = dctx.stream_reader(source)
@@ -529,7 +545,7 @@
             reader.read(0)
 
         with reader:
-            with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+            with self.assertRaisesRegex(ValueError, "stream is closed"):
                 reader.read(100)
 
     def test_partial_read(self):
@@ -553,87 +569,87 @@
         cctx = zstd.ZstdCompressor()
         source = io.BytesIO()
         writer = cctx.stream_writer(source)
-        writer.write(b'foo')
+        writer.write(b"foo")
         writer.flush(zstd.FLUSH_FRAME)
-        writer.write(b'bar')
+        writer.write(b"bar")
         writer.flush(zstd.FLUSH_FRAME)
 
         dctx = zstd.ZstdDecompressor()
 
         reader = dctx.stream_reader(source.getvalue())
-        self.assertEqual(reader.read(2), b'fo')
-        self.assertEqual(reader.read(2), b'o')
-        self.assertEqual(reader.read(2), b'ba')
-        self.assertEqual(reader.read(2), b'r')
+        self.assertEqual(reader.read(2), b"fo")
+        self.assertEqual(reader.read(2), b"o")
+        self.assertEqual(reader.read(2), b"ba")
+        self.assertEqual(reader.read(2), b"r")
 
         source.seek(0)
         reader = dctx.stream_reader(source)
-        self.assertEqual(reader.read(2), b'fo')
-        self.assertEqual(reader.read(2), b'o')
-        self.assertEqual(reader.read(2), b'ba')
-        self.assertEqual(reader.read(2), b'r')
+        self.assertEqual(reader.read(2), b"fo")
+        self.assertEqual(reader.read(2), b"o")
+        self.assertEqual(reader.read(2), b"ba")
+        self.assertEqual(reader.read(2), b"r")
 
         reader = dctx.stream_reader(source.getvalue())
-        self.assertEqual(reader.read(3), b'foo')
-        self.assertEqual(reader.read(3), b'bar')
+        self.assertEqual(reader.read(3), b"foo")
+        self.assertEqual(reader.read(3), b"bar")
 
         source.seek(0)
         reader = dctx.stream_reader(source)
-        self.assertEqual(reader.read(3), b'foo')
-        self.assertEqual(reader.read(3), b'bar')
+        self.assertEqual(reader.read(3), b"foo")
+        self.assertEqual(reader.read(3), b"bar")
 
         reader = dctx.stream_reader(source.getvalue())
-        self.assertEqual(reader.read(4), b'foo')
-        self.assertEqual(reader.read(4), b'bar')
+        self.assertEqual(reader.read(4), b"foo")
+        self.assertEqual(reader.read(4), b"bar")
 
         source.seek(0)
         reader = dctx.stream_reader(source)
-        self.assertEqual(reader.read(4), b'foo')
-        self.assertEqual(reader.read(4), b'bar')
+        self.assertEqual(reader.read(4), b"foo")
+        self.assertEqual(reader.read(4), b"bar")
 
         reader = dctx.stream_reader(source.getvalue())
-        self.assertEqual(reader.read(128), b'foo')
-        self.assertEqual(reader.read(128), b'bar')
+        self.assertEqual(reader.read(128), b"foo")
+        self.assertEqual(reader.read(128), b"bar")
 
         source.seek(0)
         reader = dctx.stream_reader(source)
-        self.assertEqual(reader.read(128), b'foo')
-        self.assertEqual(reader.read(128), b'bar')
+        self.assertEqual(reader.read(128), b"foo")
+        self.assertEqual(reader.read(128), b"bar")
 
         # Now tests for reads spanning frames.
         reader = dctx.stream_reader(source.getvalue(), read_across_frames=True)
-        self.assertEqual(reader.read(3), b'foo')
-        self.assertEqual(reader.read(3), b'bar')
+        self.assertEqual(reader.read(3), b"foo")
+        self.assertEqual(reader.read(3), b"bar")
 
         source.seek(0)
         reader = dctx.stream_reader(source, read_across_frames=True)
-        self.assertEqual(reader.read(3), b'foo')
-        self.assertEqual(reader.read(3), b'bar')
+        self.assertEqual(reader.read(3), b"foo")
+        self.assertEqual(reader.read(3), b"bar")
 
         reader = dctx.stream_reader(source.getvalue(), read_across_frames=True)
-        self.assertEqual(reader.read(6), b'foobar')
+        self.assertEqual(reader.read(6), b"foobar")
 
         source.seek(0)
         reader = dctx.stream_reader(source, read_across_frames=True)
-        self.assertEqual(reader.read(6), b'foobar')
+        self.assertEqual(reader.read(6), b"foobar")
 
         reader = dctx.stream_reader(source.getvalue(), read_across_frames=True)
-        self.assertEqual(reader.read(7), b'foobar')
+        self.assertEqual(reader.read(7), b"foobar")
 
         source.seek(0)
         reader = dctx.stream_reader(source, read_across_frames=True)
-        self.assertEqual(reader.read(7), b'foobar')
+        self.assertEqual(reader.read(7), b"foobar")
 
         reader = dctx.stream_reader(source.getvalue(), read_across_frames=True)
-        self.assertEqual(reader.read(128), b'foobar')
+        self.assertEqual(reader.read(128), b"foobar")
 
         source.seek(0)
         reader = dctx.stream_reader(source, read_across_frames=True)
-        self.assertEqual(reader.read(128), b'foobar')
+        self.assertEqual(reader.read(128), b"foobar")
 
     def test_readinto(self):
         cctx = zstd.ZstdCompressor()
-        foo = cctx.compress(b'foo')
+        foo = cctx.compress(b"foo")
 
         dctx = zstd.ZstdDecompressor()
 
@@ -641,116 +657,116 @@
         # The exact exception varies based on the backend.
         reader = dctx.stream_reader(foo)
         with self.assertRaises(Exception):
-            reader.readinto(b'foobar')
+            reader.readinto(b"foobar")
 
         # readinto() with sufficiently large destination.
         b = bytearray(1024)
         reader = dctx.stream_reader(foo)
         self.assertEqual(reader.readinto(b), 3)
-        self.assertEqual(b[0:3], b'foo')
+        self.assertEqual(b[0:3], b"foo")
         self.assertEqual(reader.readinto(b), 0)
-        self.assertEqual(b[0:3], b'foo')
+        self.assertEqual(b[0:3], b"foo")
 
         # readinto() with small reads.
         b = bytearray(1024)
         reader = dctx.stream_reader(foo, read_size=1)
         self.assertEqual(reader.readinto(b), 3)
-        self.assertEqual(b[0:3], b'foo')
+        self.assertEqual(b[0:3], b"foo")
 
         # Too small destination buffer.
         b = bytearray(2)
         reader = dctx.stream_reader(foo)
         self.assertEqual(reader.readinto(b), 2)
-        self.assertEqual(b[:], b'fo')
+        self.assertEqual(b[:], b"fo")
 
     def test_readinto1(self):
         cctx = zstd.ZstdCompressor()
-        foo = cctx.compress(b'foo')
+        foo = cctx.compress(b"foo")
 
         dctx = zstd.ZstdDecompressor()
 
         reader = dctx.stream_reader(foo)
         with self.assertRaises(Exception):
-            reader.readinto1(b'foobar')
+            reader.readinto1(b"foobar")
 
         # Sufficiently large destination.
         b = bytearray(1024)
         reader = dctx.stream_reader(foo)
         self.assertEqual(reader.readinto1(b), 3)
-        self.assertEqual(b[0:3], b'foo')
+        self.assertEqual(b[0:3], b"foo")
         self.assertEqual(reader.readinto1(b), 0)
-        self.assertEqual(b[0:3], b'foo')
+        self.assertEqual(b[0:3], b"foo")
 
         # readinto() with small reads.
         b = bytearray(1024)
         reader = dctx.stream_reader(foo, read_size=1)
         self.assertEqual(reader.readinto1(b), 3)
-        self.assertEqual(b[0:3], b'foo')
+        self.assertEqual(b[0:3], b"foo")
 
         # Too small destination buffer.
         b = bytearray(2)
         reader = dctx.stream_reader(foo)
         self.assertEqual(reader.readinto1(b), 2)
-        self.assertEqual(b[:], b'fo')
+        self.assertEqual(b[:], b"fo")
 
     def test_readall(self):
         cctx = zstd.ZstdCompressor()
-        foo = cctx.compress(b'foo')
+        foo = cctx.compress(b"foo")
 
         dctx = zstd.ZstdDecompressor()
         reader = dctx.stream_reader(foo)
 
-        self.assertEqual(reader.readall(), b'foo')
+        self.assertEqual(reader.readall(), b"foo")
 
     def test_read1(self):
         cctx = zstd.ZstdCompressor()
-        foo = cctx.compress(b'foo')
+        foo = cctx.compress(b"foo")
 
         dctx = zstd.ZstdDecompressor()
 
         b = OpCountingBytesIO(foo)
         reader = dctx.stream_reader(b)
 
-        self.assertEqual(reader.read1(), b'foo')
+        self.assertEqual(reader.read1(), b"foo")
         self.assertEqual(b._read_count, 1)
 
         b = OpCountingBytesIO(foo)
         reader = dctx.stream_reader(b)
 
-        self.assertEqual(reader.read1(0), b'')
-        self.assertEqual(reader.read1(2), b'fo')
+        self.assertEqual(reader.read1(0), b"")
+        self.assertEqual(reader.read1(2), b"fo")
         self.assertEqual(b._read_count, 1)
-        self.assertEqual(reader.read1(1), b'o')
+        self.assertEqual(reader.read1(1), b"o")
         self.assertEqual(b._read_count, 1)
-        self.assertEqual(reader.read1(1), b'')
+        self.assertEqual(reader.read1(1), b"")
         self.assertEqual(b._read_count, 2)
 
     def test_read_lines(self):
         cctx = zstd.ZstdCompressor()
-        source = b'\n'.join(('line %d' % i).encode('ascii') for i in range(1024))
+        source = b"\n".join(("line %d" % i).encode("ascii") for i in range(1024))
 
         frame = cctx.compress(source)
 
         dctx = zstd.ZstdDecompressor()
         reader = dctx.stream_reader(frame)
-        tr = io.TextIOWrapper(reader, encoding='utf-8')
+        tr = io.TextIOWrapper(reader, encoding="utf-8")
 
         lines = []
         for line in tr:
-            lines.append(line.encode('utf-8'))
+            lines.append(line.encode("utf-8"))
 
         self.assertEqual(len(lines), 1024)
-        self.assertEqual(b''.join(lines), source)
+        self.assertEqual(b"".join(lines), source)
 
         reader = dctx.stream_reader(frame)
-        tr = io.TextIOWrapper(reader, encoding='utf-8')
+        tr = io.TextIOWrapper(reader, encoding="utf-8")
 
         lines = tr.readlines()
         self.assertEqual(len(lines), 1024)
-        self.assertEqual(''.join(lines).encode('utf-8'), source)
+        self.assertEqual("".join(lines).encode("utf-8"), source)
 
         reader = dctx.stream_reader(frame)
-        tr = io.TextIOWrapper(reader, encoding='utf-8')
+        tr = io.TextIOWrapper(reader, encoding="utf-8")
 
         lines = []
         while True:
@@ -758,26 +774,26 @@
             if not line:
                 break
 
-            lines.append(line.encode('utf-8'))
+            lines.append(line.encode("utf-8"))
 
         self.assertEqual(len(lines), 1024)
-        self.assertEqual(b''.join(lines), source)
+        self.assertEqual(b"".join(lines), source)
 
 
 @make_cffi
-class TestDecompressor_decompressobj(unittest.TestCase):
+class TestDecompressor_decompressobj(TestCase):
     def test_simple(self):
-        data = zstd.ZstdCompressor(level=1).compress(b'foobar')
+        data = zstd.ZstdCompressor(level=1).compress(b"foobar")
 
         dctx = zstd.ZstdDecompressor()
         dobj = dctx.decompressobj()
-        self.assertEqual(dobj.decompress(data), b'foobar')
+        self.assertEqual(dobj.decompress(data), b"foobar")
         self.assertIsNone(dobj.flush())
         self.assertIsNone(dobj.flush(10))
         self.assertIsNone(dobj.flush(length=100))
 
     def test_input_types(self):
-        compressed = zstd.ZstdCompressor(level=1).compress(b'foo')
+        compressed = zstd.ZstdCompressor(level=1).compress(b"foo")
 
         dctx = zstd.ZstdDecompressor()
 
@@ -795,28 +811,28 @@
             self.assertIsNone(dobj.flush())
             self.assertIsNone(dobj.flush(10))
             self.assertIsNone(dobj.flush(length=100))
-            self.assertEqual(dobj.decompress(source), b'foo')
+            self.assertEqual(dobj.decompress(source), b"foo")
             self.assertIsNone(dobj.flush())
 
     def test_reuse(self):
-        data = zstd.ZstdCompressor(level=1).compress(b'foobar')
+        data = zstd.ZstdCompressor(level=1).compress(b"foobar")
 
         dctx = zstd.ZstdDecompressor()
         dobj = dctx.decompressobj()
         dobj.decompress(data)
 
-        with self.assertRaisesRegexp(zstd.ZstdError, 'cannot use a decompressobj'):
+        with self.assertRaisesRegex(zstd.ZstdError, "cannot use a decompressobj"):
             dobj.decompress(data)
             self.assertIsNone(dobj.flush())
 
     def test_bad_write_size(self):
         dctx = zstd.ZstdDecompressor()
 
-        with self.assertRaisesRegexp(ValueError, 'write_size must be positive'):
+        with self.assertRaisesRegex(ValueError, "write_size must be positive"):
             dctx.decompressobj(write_size=0)
 
     def test_write_size(self):
-        source = b'foo' * 64 + b'bar' * 128
+        source = b"foo" * 64 + b"bar" * 128
         data = zstd.ZstdCompressor(level=1).compress(source)
 
         dctx = zstd.ZstdDecompressor()
@@ -836,7 +852,7 @@
 
 
 @make_cffi
-class TestDecompressor_stream_writer(unittest.TestCase):
+class TestDecompressor_stream_writer(TestCase):
     def test_io_api(self):
         buffer = io.BytesIO()
         dctx = zstd.ZstdDecompressor()
@@ -908,14 +924,14 @@
             writer.fileno()
 
     def test_fileno_file(self):
-        with tempfile.TemporaryFile('wb') as tf:
+        with tempfile.TemporaryFile("wb") as tf:
             dctx = zstd.ZstdDecompressor()
             writer = dctx.stream_writer(tf)
 
             self.assertEqual(writer.fileno(), tf.fileno())
 
     def test_close(self):
-        foo = zstd.ZstdCompressor().compress(b'foo')
+        foo = zstd.ZstdCompressor().compress(b"foo")
 
         buffer = NonClosingBytesIO()
         dctx = zstd.ZstdDecompressor()
@@ -928,17 +944,17 @@
         self.assertTrue(writer.closed)
         self.assertTrue(buffer.closed)
 
-        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
-            writer.write(b'')
+        with self.assertRaisesRegex(ValueError, "stream is closed"):
+            writer.write(b"")
 
-        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+        with self.assertRaisesRegex(ValueError, "stream is closed"):
             writer.flush()
 
-        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+        with self.assertRaisesRegex(ValueError, "stream is closed"):
             with writer:
                 pass
 
-        self.assertEqual(buffer.getvalue(), b'foo')
+        self.assertEqual(buffer.getvalue(), b"foo")
 
         # Context manager exit should close stream.
         buffer = NonClosingBytesIO()
@@ -948,7 +964,7 @@
             writer.write(foo)
 
         self.assertTrue(writer.closed)
-        self.assertEqual(buffer.getvalue(), b'foo')
+        self.assertEqual(buffer.getvalue(), b"foo")
 
     def test_flush(self):
         buffer = OpCountingBytesIO()
@@ -962,12 +978,12 @@
 
     def test_empty_roundtrip(self):
         cctx = zstd.ZstdCompressor()
-        empty = cctx.compress(b'')
-        self.assertEqual(decompress_via_writer(empty), b'')
+        empty = cctx.compress(b"")
+        self.assertEqual(decompress_via_writer(empty), b"")
 
     def test_input_types(self):
         cctx = zstd.ZstdCompressor(level=1)
-        compressed = cctx.compress(b'foo')
+        compressed = cctx.compress(b"foo")
 
         mutable_array = bytearray(len(compressed))
         mutable_array[:] = compressed
@@ -984,25 +1000,25 @@
 
             decompressor = dctx.stream_writer(buffer)
             decompressor.write(source)
-            self.assertEqual(buffer.getvalue(), b'foo')
+            self.assertEqual(buffer.getvalue(), b"foo")
 
             buffer = NonClosingBytesIO()
 
             with dctx.stream_writer(buffer) as decompressor:
                 self.assertEqual(decompressor.write(source), 3)
 
-            self.assertEqual(buffer.getvalue(), b'foo')
+            self.assertEqual(buffer.getvalue(), b"foo")
 
             buffer = io.BytesIO()
             writer = dctx.stream_writer(buffer, write_return_read=True)
             self.assertEqual(writer.write(source), len(source))
-            self.assertEqual(buffer.getvalue(), b'foo')
+            self.assertEqual(buffer.getvalue(), b"foo")
 
     def test_large_roundtrip(self):
         chunks = []
         for i in range(255):
-            chunks.append(struct.Struct('>B').pack(i) * 16384)
-        orig = b''.join(chunks)
+            chunks.append(struct.Struct(">B").pack(i) * 16384)
+        orig = b"".join(chunks)
         cctx = zstd.ZstdCompressor()
         compressed = cctx.compress(orig)
 
@@ -1012,9 +1028,9 @@
         chunks = []
         for i in range(255):
             for j in range(255):
-                chunks.append(struct.Struct('>B').pack(j) * i)
+                chunks.append(struct.Struct(">B").pack(j) * i)
 
-        orig = b''.join(chunks)
+        orig = b"".join(chunks)
         cctx = zstd.ZstdCompressor()
         compressed = cctx.compress(orig)
 
@@ -1042,13 +1058,13 @@
     def test_dictionary(self):
         samples = []
         for i in range(128):
-            samples.append(b'foo' * 64)
-            samples.append(b'bar' * 64)
-            samples.append(b'foobar' * 64)
+            samples.append(b"foo" * 64)
+            samples.append(b"bar" * 64)
+            samples.append(b"foobar" * 64)
 
         d = zstd.train_dictionary(8192, samples)
 
-        orig = b'foobar' * 16384
+        orig = b"foobar" * 16384
         buffer = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(dict_data=d)
         with cctx.stream_writer(buffer) as compressor:
@@ -1083,22 +1099,22 @@
         self.assertGreater(size, 100000)
 
     def test_write_size(self):
-        source = zstd.ZstdCompressor().compress(b'foobarfoobar')
+        source = zstd.ZstdCompressor().compress(b"foobarfoobar")
         dest = OpCountingBytesIO()
         dctx = zstd.ZstdDecompressor()
         with dctx.stream_writer(dest, write_size=1) as decompressor:
-            s = struct.Struct('>B')
+            s = struct.Struct(">B")
             for c in source:
                 if not isinstance(c, str):
                     c = s.pack(c)
                 decompressor.write(c)
 
-        self.assertEqual(dest.getvalue(), b'foobarfoobar')
+        self.assertEqual(dest.getvalue(), b"foobarfoobar")
         self.assertEqual(dest._write_count, len(dest.getvalue()))
 
 
 @make_cffi
-class TestDecompressor_read_to_iter(unittest.TestCase):
+class TestDecompressor_read_to_iter(TestCase):
     def test_type_validation(self):
         dctx = zstd.ZstdDecompressor()
 
@@ -1106,10 +1122,10 @@
         dctx.read_to_iter(io.BytesIO())
 
         # Buffer protocol works.
-        dctx.read_to_iter(b'foobar')
+        dctx.read_to_iter(b"foobar")
 
-        with self.assertRaisesRegexp(ValueError, 'must pass an object with a read'):
-            b''.join(dctx.read_to_iter(True))
+        with self.assertRaisesRegex(ValueError, "must pass an object with a read"):
+            b"".join(dctx.read_to_iter(True))
 
     def test_empty_input(self):
         dctx = zstd.ZstdDecompressor()
@@ -1120,25 +1136,25 @@
         with self.assertRaises(StopIteration):
             next(it)
 
-        it = dctx.read_to_iter(b'')
+        it = dctx.read_to_iter(b"")
         with self.assertRaises(StopIteration):
             next(it)
 
     def test_invalid_input(self):
         dctx = zstd.ZstdDecompressor()
 
-        source = io.BytesIO(b'foobar')
+        source = io.BytesIO(b"foobar")
         it = dctx.read_to_iter(source)
-        with self.assertRaisesRegexp(zstd.ZstdError, 'Unknown frame descriptor'):
+        with self.assertRaisesRegex(zstd.ZstdError, "Unknown frame descriptor"):
             next(it)
 
-        it = dctx.read_to_iter(b'foobar')
-        with self.assertRaisesRegexp(zstd.ZstdError, 'Unknown frame descriptor'):
+        it = dctx.read_to_iter(b"foobar")
+        with self.assertRaisesRegex(zstd.ZstdError, "Unknown frame descriptor"):
             next(it)
 
     def test_empty_roundtrip(self):
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
-        empty = cctx.compress(b'')
+        empty = cctx.compress(b"")
 
         source = io.BytesIO(empty)
         source.seek(0)
@@ -1157,24 +1173,28 @@
     def test_skip_bytes_too_large(self):
         dctx = zstd.ZstdDecompressor()
 
-        with self.assertRaisesRegexp(ValueError, 'skip_bytes must be smaller than read_size'):
-            b''.join(dctx.read_to_iter(b'', skip_bytes=1, read_size=1))
+        with self.assertRaisesRegex(
+            ValueError, "skip_bytes must be smaller than read_size"
+        ):
+            b"".join(dctx.read_to_iter(b"", skip_bytes=1, read_size=1))
 
-        with self.assertRaisesRegexp(ValueError, 'skip_bytes larger than first input chunk'):
-            b''.join(dctx.read_to_iter(b'foobar', skip_bytes=10))
+        with self.assertRaisesRegex(
+            ValueError, "skip_bytes larger than first input chunk"
+        ):
+            b"".join(dctx.read_to_iter(b"foobar", skip_bytes=10))
 
     def test_skip_bytes(self):
         cctx = zstd.ZstdCompressor(write_content_size=False)
-        compressed = cctx.compress(b'foobar')
+        compressed = cctx.compress(b"foobar")
 
         dctx = zstd.ZstdDecompressor()
-        output = b''.join(dctx.read_to_iter(b'hdr' + compressed, skip_bytes=3))
-        self.assertEqual(output, b'foobar')
+        output = b"".join(dctx.read_to_iter(b"hdr" + compressed, skip_bytes=3))
+        self.assertEqual(output, b"foobar")
 
     def test_large_output(self):
         source = io.BytesIO()
-        source.write(b'f' * zstd.DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE)
-        source.write(b'o')
+        source.write(b"f" * zstd.DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE)
+        source.write(b"o")
         source.seek(0)
 
         cctx = zstd.ZstdCompressor(level=1)
@@ -1191,7 +1211,7 @@
         with self.assertRaises(StopIteration):
             next(it)
 
-        decompressed = b''.join(chunks)
+        decompressed = b"".join(chunks)
         self.assertEqual(decompressed, source.getvalue())
 
         # And again with buffer protocol.
@@ -1203,12 +1223,12 @@
         with self.assertRaises(StopIteration):
             next(it)
 
-        decompressed = b''.join(chunks)
+        decompressed = b"".join(chunks)
         self.assertEqual(decompressed, source.getvalue())
 
-    @unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
+    @unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
     def test_large_input(self):
-        bytes = list(struct.Struct('>B').pack(i) for i in range(256))
+        bytes = list(struct.Struct(">B").pack(i) for i in range(256))
         compressed = NonClosingBytesIO()
         input_size = 0
         cctx = zstd.ZstdCompressor(level=1)
@@ -1217,14 +1237,18 @@
                 compressor.write(random.choice(bytes))
                 input_size += 1
 
-                have_compressed = len(compressed.getvalue()) > zstd.DECOMPRESSION_RECOMMENDED_INPUT_SIZE
+                have_compressed = (
+                    len(compressed.getvalue())
+                    > zstd.DECOMPRESSION_RECOMMENDED_INPUT_SIZE
+                )
                 have_raw = input_size > zstd.DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE * 2
                 if have_compressed and have_raw:
                     break
 
         compressed = io.BytesIO(compressed.getvalue())
-        self.assertGreater(len(compressed.getvalue()),
-                           zstd.DECOMPRESSION_RECOMMENDED_INPUT_SIZE)
+        self.assertGreater(
+            len(compressed.getvalue()), zstd.DECOMPRESSION_RECOMMENDED_INPUT_SIZE
+        )
 
         dctx = zstd.ZstdDecompressor()
         it = dctx.read_to_iter(compressed)
@@ -1237,7 +1261,7 @@
         with self.assertRaises(StopIteration):
             next(it)
 
-        decompressed = b''.join(chunks)
+        decompressed = b"".join(chunks)
         self.assertEqual(len(decompressed), input_size)
 
         # And again with buffer protocol.
@@ -1251,7 +1275,7 @@
         with self.assertRaises(StopIteration):
             next(it)
 
-        decompressed = b''.join(chunks)
+        decompressed = b"".join(chunks)
         self.assertEqual(len(decompressed), input_size)
 
     def test_interesting(self):
@@ -1263,22 +1287,23 @@
         compressed = NonClosingBytesIO()
         with cctx.stream_writer(compressed) as compressor:
             for i in range(256):
-                chunk = b'\0' * 1024
+                chunk = b"\0" * 1024
                 compressor.write(chunk)
                 source.write(chunk)
 
         dctx = zstd.ZstdDecompressor()
 
-        simple = dctx.decompress(compressed.getvalue(),
-                                 max_output_size=len(source.getvalue()))
+        simple = dctx.decompress(
+            compressed.getvalue(), max_output_size=len(source.getvalue())
+        )
         self.assertEqual(simple, source.getvalue())
 
         compressed = io.BytesIO(compressed.getvalue())
-        streamed = b''.join(dctx.read_to_iter(compressed))
+        streamed = b"".join(dctx.read_to_iter(compressed))
         self.assertEqual(streamed, source.getvalue())
 
     def test_read_write_size(self):
-        source = OpCountingBytesIO(zstd.ZstdCompressor().compress(b'foobarfoobar'))
+        source = OpCountingBytesIO(zstd.ZstdCompressor().compress(b"foobarfoobar"))
         dctx = zstd.ZstdDecompressor()
         for chunk in dctx.read_to_iter(source, read_size=1, write_size=1):
             self.assertEqual(len(chunk), 1)
@@ -1287,97 +1312,110 @@
 
     def test_magic_less(self):
         params = zstd.CompressionParameters.from_level(
-            1, format=zstd.FORMAT_ZSTD1_MAGICLESS)
+            1, format=zstd.FORMAT_ZSTD1_MAGICLESS
+        )
         cctx = zstd.ZstdCompressor(compression_params=params)
-        frame = cctx.compress(b'foobar')
+        frame = cctx.compress(b"foobar")
 
-        self.assertNotEqual(frame[0:4], b'\x28\xb5\x2f\xfd')
+        self.assertNotEqual(frame[0:4], b"\x28\xb5\x2f\xfd")
 
         dctx = zstd.ZstdDecompressor()
-        with self.assertRaisesRegexp(
-            zstd.ZstdError, 'error determining content size from frame header'):
+        with self.assertRaisesRegex(
+            zstd.ZstdError, "error determining content size from frame header"
+        ):
             dctx.decompress(frame)
 
         dctx = zstd.ZstdDecompressor(format=zstd.FORMAT_ZSTD1_MAGICLESS)
-        res = b''.join(dctx.read_to_iter(frame))
-        self.assertEqual(res, b'foobar')
+        res = b"".join(dctx.read_to_iter(frame))
+        self.assertEqual(res, b"foobar")
 
 
 @make_cffi
-class TestDecompressor_content_dict_chain(unittest.TestCase):
+class TestDecompressor_content_dict_chain(TestCase):
     def test_bad_inputs_simple(self):
         dctx = zstd.ZstdDecompressor()
 
         with self.assertRaises(TypeError):
-            dctx.decompress_content_dict_chain(b'foo')
+            dctx.decompress_content_dict_chain(b"foo")
 
         with self.assertRaises(TypeError):
-            dctx.decompress_content_dict_chain((b'foo', b'bar'))
+            dctx.decompress_content_dict_chain((b"foo", b"bar"))
 
-        with self.assertRaisesRegexp(ValueError, 'empty input chain'):
+        with self.assertRaisesRegex(ValueError, "empty input chain"):
             dctx.decompress_content_dict_chain([])
 
-        with self.assertRaisesRegexp(ValueError, 'chunk 0 must be bytes'):
-            dctx.decompress_content_dict_chain([u'foo'])
+        with self.assertRaisesRegex(ValueError, "chunk 0 must be bytes"):
+            dctx.decompress_content_dict_chain([u"foo"])
 
-        with self.assertRaisesRegexp(ValueError, 'chunk 0 must be bytes'):
+        with self.assertRaisesRegex(ValueError, "chunk 0 must be bytes"):
             dctx.decompress_content_dict_chain([True])
 
-        with self.assertRaisesRegexp(ValueError, 'chunk 0 is too small to contain a zstd frame'):
+        with self.assertRaisesRegex(
+            ValueError, "chunk 0 is too small to contain a zstd frame"
+        ):
             dctx.decompress_content_dict_chain([zstd.FRAME_HEADER])
 
-        with self.assertRaisesRegexp(ValueError, 'chunk 0 is not a valid zstd frame'):
-            dctx.decompress_content_dict_chain([b'foo' * 8])
+        with self.assertRaisesRegex(ValueError, "chunk 0 is not a valid zstd frame"):
+            dctx.decompress_content_dict_chain([b"foo" * 8])
 
-        no_size = zstd.ZstdCompressor(write_content_size=False).compress(b'foo' * 64)
+        no_size = zstd.ZstdCompressor(write_content_size=False).compress(b"foo" * 64)
 
-        with self.assertRaisesRegexp(ValueError, 'chunk 0 missing content size in frame'):
+        with self.assertRaisesRegex(
+            ValueError, "chunk 0 missing content size in frame"
+        ):
             dctx.decompress_content_dict_chain([no_size])
 
         # Corrupt first frame.
-        frame = zstd.ZstdCompressor().compress(b'foo' * 64)
+        frame = zstd.ZstdCompressor().compress(b"foo" * 64)
         frame = frame[0:12] + frame[15:]
-        with self.assertRaisesRegexp(zstd.ZstdError,
-                                     'chunk 0 did not decompress full frame'):
+        with self.assertRaisesRegex(
+            zstd.ZstdError, "chunk 0 did not decompress full frame"
+        ):
             dctx.decompress_content_dict_chain([frame])
 
     def test_bad_subsequent_input(self):
-        initial = zstd.ZstdCompressor().compress(b'foo' * 64)
+        initial = zstd.ZstdCompressor().compress(b"foo" * 64)
 
         dctx = zstd.ZstdDecompressor()
 
-        with self.assertRaisesRegexp(ValueError, 'chunk 1 must be bytes'):
-            dctx.decompress_content_dict_chain([initial, u'foo'])
+        with self.assertRaisesRegex(ValueError, "chunk 1 must be bytes"):
+            dctx.decompress_content_dict_chain([initial, u"foo"])
 
-        with self.assertRaisesRegexp(ValueError, 'chunk 1 must be bytes'):
+        with self.assertRaisesRegex(ValueError, "chunk 1 must be bytes"):
             dctx.decompress_content_dict_chain([initial, None])
 
-        with self.assertRaisesRegexp(ValueError, 'chunk 1 is too small to contain a zstd frame'):
+        with self.assertRaisesRegex(
+            ValueError, "chunk 1 is too small to contain a zstd frame"
+        ):
             dctx.decompress_content_dict_chain([initial, zstd.FRAME_HEADER])
 
-        with self.assertRaisesRegexp(ValueError, 'chunk 1 is not a valid zstd frame'):
-            dctx.decompress_content_dict_chain([initial, b'foo' * 8])
+        with self.assertRaisesRegex(ValueError, "chunk 1 is not a valid zstd frame"):
+            dctx.decompress_content_dict_chain([initial, b"foo" * 8])
 
-        no_size = zstd.ZstdCompressor(write_content_size=False).compress(b'foo' * 64)
+        no_size = zstd.ZstdCompressor(write_content_size=False).compress(b"foo" * 64)
 
-        with self.assertRaisesRegexp(ValueError, 'chunk 1 missing content size in frame'):
+        with self.assertRaisesRegex(
+            ValueError, "chunk 1 missing content size in frame"
+        ):
             dctx.decompress_content_dict_chain([initial, no_size])
 
         # Corrupt second frame.
-        cctx = zstd.ZstdCompressor(dict_data=zstd.ZstdCompressionDict(b'foo' * 64))
-        frame = cctx.compress(b'bar' * 64)
+        cctx = zstd.ZstdCompressor(dict_data=zstd.ZstdCompressionDict(b"foo" * 64))
+        frame = cctx.compress(b"bar" * 64)
         frame = frame[0:12] + frame[15:]
 
-        with self.assertRaisesRegexp(zstd.ZstdError, 'chunk 1 did not decompress full frame'):
+        with self.assertRaisesRegex(
+            zstd.ZstdError, "chunk 1 did not decompress full frame"
+        ):
             dctx.decompress_content_dict_chain([initial, frame])
 
     def test_simple(self):
         original = [
-            b'foo' * 64,
-            b'foobar' * 64,
-            b'baz' * 64,
-            b'foobaz' * 64,
-            b'foobarbaz' * 64,
+            b"foo" * 64,
+            b"foobar" * 64,
+            b"baz" * 64,
+            b"foobaz" * 64,
+            b"foobarbaz" * 64,
         ]
 
         chunks = []
@@ -1396,12 +1434,12 @@
 
 
 # TODO enable for CFFI
-class TestDecompressor_multi_decompress_to_buffer(unittest.TestCase):
+class TestDecompressor_multi_decompress_to_buffer(TestCase):
     def test_invalid_inputs(self):
         dctx = zstd.ZstdDecompressor()
 
-        if not hasattr(dctx, 'multi_decompress_to_buffer'):
-            self.skipTest('multi_decompress_to_buffer not available')
+        if not hasattr(dctx, "multi_decompress_to_buffer"):
+            self.skipTest("multi_decompress_to_buffer not available")
 
         with self.assertRaises(TypeError):
             dctx.multi_decompress_to_buffer(True)
@@ -1409,22 +1447,24 @@
         with self.assertRaises(TypeError):
             dctx.multi_decompress_to_buffer((1, 2))
 
-        with self.assertRaisesRegexp(TypeError, 'item 0 not a bytes like object'):
-            dctx.multi_decompress_to_buffer([u'foo'])
+        with self.assertRaisesRegex(TypeError, "item 0 not a bytes like object"):
+            dctx.multi_decompress_to_buffer([u"foo"])
 
-        with self.assertRaisesRegexp(ValueError, 'could not determine decompressed size of item 0'):
-            dctx.multi_decompress_to_buffer([b'foobarbaz'])
+        with self.assertRaisesRegex(
+            ValueError, "could not determine decompressed size of item 0"
+        ):
+            dctx.multi_decompress_to_buffer([b"foobarbaz"])
 
     def test_list_input(self):
         cctx = zstd.ZstdCompressor()
 
-        original = [b'foo' * 4, b'bar' * 6]
+        original = [b"foo" * 4, b"bar" * 6]
         frames = [cctx.compress(d) for d in original]
 
         dctx = zstd.ZstdDecompressor()
 
-        if not hasattr(dctx, 'multi_decompress_to_buffer'):
-            self.skipTest('multi_decompress_to_buffer not available')
+        if not hasattr(dctx, "multi_decompress_to_buffer"):
+            self.skipTest("multi_decompress_to_buffer not available")
 
         result = dctx.multi_decompress_to_buffer(frames)
 
@@ -1442,14 +1482,14 @@
     def test_list_input_frame_sizes(self):
         cctx = zstd.ZstdCompressor()
 
-        original = [b'foo' * 4, b'bar' * 6, b'baz' * 8]
+        original = [b"foo" * 4, b"bar" * 6, b"baz" * 8]
         frames = [cctx.compress(d) for d in original]
-        sizes = struct.pack('=' + 'Q' * len(original), *map(len, original))
+        sizes = struct.pack("=" + "Q" * len(original), *map(len, original))
 
         dctx = zstd.ZstdDecompressor()
 
-        if not hasattr(dctx, 'multi_decompress_to_buffer'):
-            self.skipTest('multi_decompress_to_buffer not available')
+        if not hasattr(dctx, "multi_decompress_to_buffer"):
+            self.skipTest("multi_decompress_to_buffer not available")
 
         result = dctx.multi_decompress_to_buffer(frames, decompressed_sizes=sizes)
 
@@ -1462,16 +1502,18 @@
     def test_buffer_with_segments_input(self):
         cctx = zstd.ZstdCompressor()
 
-        original = [b'foo' * 4, b'bar' * 6]
+        original = [b"foo" * 4, b"bar" * 6]
         frames = [cctx.compress(d) for d in original]
 
         dctx = zstd.ZstdDecompressor()
 
-        if not hasattr(dctx, 'multi_decompress_to_buffer'):
-            self.skipTest('multi_decompress_to_buffer not available')
+        if not hasattr(dctx, "multi_decompress_to_buffer"):
+            self.skipTest("multi_decompress_to_buffer not available")
 
-        segments = struct.pack('=QQQQ', 0, len(frames[0]), len(frames[0]), len(frames[1]))
-        b = zstd.BufferWithSegments(b''.join(frames), segments)
+        segments = struct.pack(
+            "=QQQQ", 0, len(frames[0]), len(frames[0]), len(frames[1])
+        )
+        b = zstd.BufferWithSegments(b"".join(frames), segments)
 
         result = dctx.multi_decompress_to_buffer(b)
 
@@ -1483,19 +1525,25 @@
 
     def test_buffer_with_segments_sizes(self):
         cctx = zstd.ZstdCompressor(write_content_size=False)
-        original = [b'foo' * 4, b'bar' * 6, b'baz' * 8]
+        original = [b"foo" * 4, b"bar" * 6, b"baz" * 8]
         frames = [cctx.compress(d) for d in original]
-        sizes = struct.pack('=' + 'Q' * len(original), *map(len, original))
+        sizes = struct.pack("=" + "Q" * len(original), *map(len, original))
 
         dctx = zstd.ZstdDecompressor()
 
-        if not hasattr(dctx, 'multi_decompress_to_buffer'):
-            self.skipTest('multi_decompress_to_buffer not available')
+        if not hasattr(dctx, "multi_decompress_to_buffer"):
+            self.skipTest("multi_decompress_to_buffer not available")
 
-        segments = struct.pack('=QQQQQQ', 0, len(frames[0]),
-                               len(frames[0]), len(frames[1]),
-                               len(frames[0]) + len(frames[1]), len(frames[2]))
-        b = zstd.BufferWithSegments(b''.join(frames), segments)
+        segments = struct.pack(
+            "=QQQQQQ",
+            0,
+            len(frames[0]),
+            len(frames[0]),
+            len(frames[1]),
+            len(frames[0]) + len(frames[1]),
+            len(frames[2]),
+        )
+        b = zstd.BufferWithSegments(b"".join(frames), segments)
 
         result = dctx.multi_decompress_to_buffer(b, decompressed_sizes=sizes)
 
@@ -1509,15 +1557,15 @@
         cctx = zstd.ZstdCompressor()
 
         original = [
-            b'foo0' * 2,
-            b'foo1' * 3,
-            b'foo2' * 4,
-            b'foo3' * 5,
-            b'foo4' * 6,
+            b"foo0" * 2,
+            b"foo1" * 3,
+            b"foo2" * 4,
+            b"foo3" * 5,
+            b"foo4" * 6,
         ]
 
-        if not hasattr(cctx, 'multi_compress_to_buffer'):
-            self.skipTest('multi_compress_to_buffer not available')
+        if not hasattr(cctx, "multi_compress_to_buffer"):
+            self.skipTest("multi_compress_to_buffer not available")
 
         frames = cctx.multi_compress_to_buffer(original)
 
@@ -1532,16 +1580,24 @@
             self.assertEqual(data, decompressed[i].tobytes())
 
         # And a manual mode.
-        b = b''.join([frames[0].tobytes(), frames[1].tobytes()])
-        b1 = zstd.BufferWithSegments(b, struct.pack('=QQQQ',
-                                                    0, len(frames[0]),
-                                                    len(frames[0]), len(frames[1])))
+        b = b"".join([frames[0].tobytes(), frames[1].tobytes()])
+        b1 = zstd.BufferWithSegments(
+            b, struct.pack("=QQQQ", 0, len(frames[0]), len(frames[0]), len(frames[1]))
+        )
 
-        b = b''.join([frames[2].tobytes(), frames[3].tobytes(), frames[4].tobytes()])
-        b2 = zstd.BufferWithSegments(b, struct.pack('=QQQQQQ',
-                                                    0, len(frames[2]),
-                                                    len(frames[2]), len(frames[3]),
-                                                    len(frames[2]) + len(frames[3]), len(frames[4])))
+        b = b"".join([frames[2].tobytes(), frames[3].tobytes(), frames[4].tobytes()])
+        b2 = zstd.BufferWithSegments(
+            b,
+            struct.pack(
+                "=QQQQQQ",
+                0,
+                len(frames[2]),
+                len(frames[2]),
+                len(frames[3]),
+                len(frames[2]) + len(frames[3]),
+                len(frames[4]),
+            ),
+        )
 
         c = zstd.BufferWithSegmentsCollection(b1, b2)
 
@@ -1560,8 +1616,8 @@
 
         dctx = zstd.ZstdDecompressor(dict_data=d)
 
-        if not hasattr(dctx, 'multi_decompress_to_buffer'):
-            self.skipTest('multi_decompress_to_buffer not available')
+        if not hasattr(dctx, "multi_decompress_to_buffer"):
+            self.skipTest("multi_decompress_to_buffer not available")
 
         result = dctx.multi_decompress_to_buffer(frames)
 
@@ -1571,41 +1627,44 @@
         cctx = zstd.ZstdCompressor()
 
         frames = []
-        frames.extend(cctx.compress(b'x' * 64) for i in range(256))
-        frames.extend(cctx.compress(b'y' * 64) for i in range(256))
+        frames.extend(cctx.compress(b"x" * 64) for i in range(256))
+        frames.extend(cctx.compress(b"y" * 64) for i in range(256))
 
         dctx = zstd.ZstdDecompressor()
 
-        if not hasattr(dctx, 'multi_decompress_to_buffer'):
-            self.skipTest('multi_decompress_to_buffer not available')
+        if not hasattr(dctx, "multi_decompress_to_buffer"):
+            self.skipTest("multi_decompress_to_buffer not available")
 
         result = dctx.multi_decompress_to_buffer(frames, threads=-1)
 
         self.assertEqual(len(result), len(frames))
         self.assertEqual(result.size(), 2 * 64 * 256)
-        self.assertEqual(result[0].tobytes(), b'x' * 64)
-        self.assertEqual(result[256].tobytes(), b'y' * 64)
+        self.assertEqual(result[0].tobytes(), b"x" * 64)
+        self.assertEqual(result[256].tobytes(), b"y" * 64)
 
     def test_item_failure(self):
         cctx = zstd.ZstdCompressor()
-        frames = [cctx.compress(b'x' * 128), cctx.compress(b'y' * 128)]
+        frames = [cctx.compress(b"x" * 128), cctx.compress(b"y" * 128)]
 
-        frames[1] = frames[1][0:15] + b'extra' + frames[1][15:]
+        frames[1] = frames[1][0:15] + b"extra" + frames[1][15:]
 
         dctx = zstd.ZstdDecompressor()
 
-        if not hasattr(dctx, 'multi_decompress_to_buffer'):
-            self.skipTest('multi_decompress_to_buffer not available')
+        if not hasattr(dctx, "multi_decompress_to_buffer"):
+            self.skipTest("multi_decompress_to_buffer not available")
 
-        with self.assertRaisesRegexp(zstd.ZstdError,
-                                     'error decompressing item 1: ('
-                                     'Corrupted block|'
-                                     'Destination buffer is too small)'):
+        with self.assertRaisesRegex(
+            zstd.ZstdError,
+            "error decompressing item 1: ("
+            "Corrupted block|"
+            "Destination buffer is too small)",
+        ):
             dctx.multi_decompress_to_buffer(frames)
 
-        with self.assertRaisesRegexp(zstd.ZstdError,
-                            'error decompressing item 1: ('
-                            'Corrupted block|'
-                            'Destination buffer is too small)'):
+        with self.assertRaisesRegex(
+            zstd.ZstdError,
+            "error decompressing item 1: ("
+            "Corrupted block|"
+            "Destination buffer is too small)",
+        ):
             dctx.multi_decompress_to_buffer(frames, threads=2)
-
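
The tests above exercise the content-dict-chain and batch decompression APIs. A minimal sketch of how those calls are used follows; it is illustrative only, not part of the vendored diff, and the chunk contents are made up:

    import struct

    import zstandard as zstd

    # Chain compression: chunk 0 is a plain frame; each later chunk is
    # compressed using the previous uncompressed chunk as a dictionary.
    chunks = [b"foo" * 64, b"foobar" * 64]
    compressed = [zstd.ZstdCompressor().compress(chunks[0])]
    cctx = zstd.ZstdCompressor(dict_data=zstd.ZstdCompressionDict(chunks[0]))
    compressed.append(cctx.compress(chunks[1]))

    dctx = zstd.ZstdDecompressor()
    # Returns the uncompressed data of the last chunk in the chain.
    assert dctx.decompress_content_dict_chain(compressed) == chunks[-1]

    # Batch decompression (C extension only, hence the hasattr() guards above).
    if hasattr(dctx, "multi_decompress_to_buffer"):
        frames = [zstd.ZstdCompressor().compress(d) for d in chunks]
        result = dctx.multi_decompress_to_buffer(frames, threads=2)
        assert result[0].tobytes() == chunks[0]

        # Frames can also be packed into one buffer described by
        # (offset, length) pairs of native-endian 64-bit integers.
        segments = struct.pack(
            "=QQQQ", 0, len(frames[0]), len(frames[0]), len(frames[1])
        )
        packed = zstd.BufferWithSegments(b"".join(frames), segments)
        result = dctx.multi_decompress_to_buffer(packed)
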
--- a/contrib/python-zstandard/tests/test_decompressor_fuzzing.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/tests/test_decompressor_fuzzing.py	Sat Dec 28 09:55:45 2019 -0800
@@ -6,29 +6,37 @@
     import hypothesis
     import hypothesis.strategies as strategies
 except ImportError:
-    raise unittest.SkipTest('hypothesis not available')
+    raise unittest.SkipTest("hypothesis not available")
 
 import zstandard as zstd
 
-from . common import (
+from .common import (
     make_cffi,
     NonClosingBytesIO,
     random_input_data,
+    TestCase,
 )
 
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
 @make_cffi
-class TestDecompressor_stream_reader_fuzzing(unittest.TestCase):
+class TestDecompressor_stream_reader_fuzzing(TestCase):
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      streaming=strategies.booleans(),
-                      source_read_size=strategies.integers(1, 1048576),
-                      read_sizes=strategies.data())
-    def test_stream_source_read_variance(self, original, level, streaming,
-                                         source_read_size, read_sizes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        streaming=strategies.booleans(),
+        source_read_size=strategies.integers(1, 1048576),
+        read_sizes=strategies.data(),
+    )
+    def test_stream_source_read_variance(
+        self, original, level, streaming, source_read_size, read_sizes
+    ):
         cctx = zstd.ZstdCompressor(level=level)
 
         if streaming:
@@ -53,18 +61,22 @@
 
                 chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), original)
+        self.assertEqual(b"".join(chunks), original)
 
     # Similar to above except we have a constant read() size.
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      streaming=strategies.booleans(),
-                      source_read_size=strategies.integers(1, 1048576),
-                      read_size=strategies.integers(-1, 131072))
-    def test_stream_source_read_size(self, original, level, streaming,
-                                     source_read_size, read_size):
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        streaming=strategies.booleans(),
+        source_read_size=strategies.integers(1, 1048576),
+        read_size=strategies.integers(-1, 131072),
+    )
+    def test_stream_source_read_size(
+        self, original, level, streaming, source_read_size, read_size
+    ):
         if read_size == 0:
             read_size = 1
 
@@ -91,17 +103,24 @@
 
             chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), original)
+        self.assertEqual(b"".join(chunks), original)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      streaming=strategies.booleans(),
-                      source_read_size=strategies.integers(1, 1048576),
-                      read_sizes=strategies.data())
-    def test_buffer_source_read_variance(self, original, level, streaming,
-                                         source_read_size, read_sizes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        streaming=strategies.booleans(),
+        source_read_size=strategies.integers(1, 1048576),
+        read_sizes=strategies.data(),
+    )
+    def test_buffer_source_read_variance(
+        self, original, level, streaming, source_read_size, read_sizes
+    ):
         cctx = zstd.ZstdCompressor(level=level)
 
         if streaming:
@@ -125,18 +144,22 @@
 
                 chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), original)
+        self.assertEqual(b"".join(chunks), original)
 
     # Similar to above except we have a constant read() size.
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      streaming=strategies.booleans(),
-                      source_read_size=strategies.integers(1, 1048576),
-                      read_size=strategies.integers(-1, 131072))
-    def test_buffer_source_constant_read_size(self, original, level, streaming,
-                                              source_read_size, read_size):
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        streaming=strategies.booleans(),
+        source_read_size=strategies.integers(1, 1048576),
+        read_size=strategies.integers(-1, 131072),
+    )
+    def test_buffer_source_constant_read_size(
+        self, original, level, streaming, source_read_size, read_size
+    ):
         if read_size == 0:
             read_size = -1
 
@@ -162,16 +185,18 @@
 
             chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), original)
+        self.assertEqual(b"".join(chunks), original)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      streaming=strategies.booleans(),
-                      source_read_size=strategies.integers(1, 1048576))
-    def test_stream_source_readall(self, original, level, streaming,
-                                         source_read_size):
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        streaming=strategies.booleans(),
+        source_read_size=strategies.integers(1, 1048576),
+    )
+    def test_stream_source_readall(self, original, level, streaming, source_read_size):
         cctx = zstd.ZstdCompressor(level=level)
 
         if streaming:
@@ -190,14 +215,21 @@
         self.assertEqual(data, original)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      streaming=strategies.booleans(),
-                      source_read_size=strategies.integers(1, 1048576),
-                      read_sizes=strategies.data())
-    def test_stream_source_read1_variance(self, original, level, streaming,
-                                          source_read_size, read_sizes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        streaming=strategies.booleans(),
+        source_read_size=strategies.integers(1, 1048576),
+        read_sizes=strategies.data(),
+    )
+    def test_stream_source_read1_variance(
+        self, original, level, streaming, source_read_size, read_sizes
+    ):
         cctx = zstd.ZstdCompressor(level=level)
 
         if streaming:
@@ -222,17 +254,24 @@
 
                 chunks.append(chunk)
 
-        self.assertEqual(b''.join(chunks), original)
+        self.assertEqual(b"".join(chunks), original)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      streaming=strategies.booleans(),
-                      source_read_size=strategies.integers(1, 1048576),
-                      read_sizes=strategies.data())
-    def test_stream_source_readinto1_variance(self, original, level, streaming,
-                                          source_read_size, read_sizes):
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        streaming=strategies.booleans(),
+        source_read_size=strategies.integers(1, 1048576),
+        read_sizes=strategies.data(),
+    )
+    def test_stream_source_readinto1_variance(
+        self, original, level, streaming, source_read_size, read_sizes
+    ):
         cctx = zstd.ZstdCompressor(level=level)
 
         if streaming:
@@ -259,18 +298,24 @@
 
                 chunks.append(bytes(b[0:count]))
 
-        self.assertEqual(b''.join(chunks), original)
+        self.assertEqual(b"".join(chunks), original)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
     @hypothesis.given(
         original=strategies.sampled_from(random_input_data()),
         level=strategies.integers(min_value=1, max_value=5),
         source_read_size=strategies.integers(1, 1048576),
         seek_amounts=strategies.data(),
-        read_sizes=strategies.data())
-    def test_relative_seeks(self, original, level, source_read_size, seek_amounts,
-                            read_sizes):
+        read_sizes=strategies.data(),
+    )
+    def test_relative_seeks(
+        self, original, level, source_read_size, seek_amounts, read_sizes
+    ):
         cctx = zstd.ZstdCompressor(level=level)
         frame = cctx.compress(original)
 
@@ -288,18 +333,24 @@
                 if not chunk:
                     break
 
-                self.assertEqual(original[offset:offset + len(chunk)], chunk)
+                self.assertEqual(original[offset : offset + len(chunk)], chunk)
 
     @hypothesis.settings(
-        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
     @hypothesis.given(
         originals=strategies.data(),
         frame_count=strategies.integers(min_value=2, max_value=10),
         level=strategies.integers(min_value=1, max_value=5),
         source_read_size=strategies.integers(1, 1048576),
-        read_sizes=strategies.data())
-    def test_multiple_frames(self, originals, frame_count, level,
-                             source_read_size, read_sizes):
+        read_sizes=strategies.data(),
+    )
+    def test_multiple_frames(
+        self, originals, frame_count, level, source_read_size, read_sizes
+    ):
 
         cctx = zstd.ZstdCompressor(level=level)
         source = io.BytesIO()
@@ -314,8 +365,9 @@
 
         dctx = zstd.ZstdDecompressor()
         buffer.seek(0)
-        reader = dctx.stream_reader(buffer, read_size=source_read_size,
-                                    read_across_frames=True)
+        reader = dctx.stream_reader(
+            buffer, read_size=source_read_size, read_across_frames=True
+        )
 
         chunks = []
 
@@ -328,16 +380,24 @@
 
             chunks.append(chunk)
 
-        self.assertEqual(source.getvalue(), b''.join(chunks))
+        self.assertEqual(source.getvalue(), b"".join(chunks))
 
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
 @make_cffi
-class TestDecompressor_stream_writer_fuzzing(unittest.TestCase):
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      write_size=strategies.integers(min_value=1, max_value=8192),
-                      input_sizes=strategies.data())
+class TestDecompressor_stream_writer_fuzzing(TestCase):
+    @hypothesis.settings(
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        write_size=strategies.integers(min_value=1, max_value=8192),
+        input_sizes=strategies.data(),
+    )
     def test_write_size_variance(self, original, level, write_size, input_sizes):
         cctx = zstd.ZstdCompressor(level=level)
         frame = cctx.compress(original)
@@ -358,13 +418,21 @@
         self.assertEqual(dest.getvalue(), original)
 
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
 @make_cffi
-class TestDecompressor_copy_stream_fuzzing(unittest.TestCase):
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      read_size=strategies.integers(min_value=1, max_value=8192),
-                      write_size=strategies.integers(min_value=1, max_value=8192))
+class TestDecompressor_copy_stream_fuzzing(TestCase):
+    @hypothesis.settings(
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        read_size=strategies.integers(min_value=1, max_value=8192),
+        write_size=strategies.integers(min_value=1, max_value=8192),
+    )
     def test_read_write_size_variance(self, original, level, read_size, write_size):
         cctx = zstd.ZstdCompressor(level=level)
         frame = cctx.compress(original)
@@ -378,12 +446,20 @@
         self.assertEqual(dest.getvalue(), original)
 
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
 @make_cffi
-class TestDecompressor_decompressobj_fuzzing(unittest.TestCase):
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      chunk_sizes=strategies.data())
+class TestDecompressor_decompressobj_fuzzing(TestCase):
+    @hypothesis.settings(
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        chunk_sizes=strategies.data(),
+    )
     def test_random_input_sizes(self, original, level, chunk_sizes):
         cctx = zstd.ZstdCompressor(level=level)
         frame = cctx.compress(original)
@@ -402,13 +478,22 @@
 
             chunks.append(dobj.decompress(chunk))
 
-        self.assertEqual(b''.join(chunks), original)
+        self.assertEqual(b"".join(chunks), original)
 
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      write_size=strategies.integers(min_value=1,
-                                                     max_value=4 * zstd.DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE),
-                      chunk_sizes=strategies.data())
+    @hypothesis.settings(
+        suppress_health_check=[
+            hypothesis.HealthCheck.large_base_example,
+            hypothesis.HealthCheck.too_slow,
+        ]
+    )
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        write_size=strategies.integers(
+            min_value=1, max_value=4 * zstd.DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE
+        ),
+        chunk_sizes=strategies.data(),
+    )
     def test_random_output_sizes(self, original, level, write_size, chunk_sizes):
         cctx = zstd.ZstdCompressor(level=level)
         frame = cctx.compress(original)
@@ -427,16 +512,18 @@
 
             chunks.append(dobj.decompress(chunk))
 
-        self.assertEqual(b''.join(chunks), original)
+        self.assertEqual(b"".join(chunks), original)
 
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
 @make_cffi
-class TestDecompressor_read_to_iter_fuzzing(unittest.TestCase):
-    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
-                      level=strategies.integers(min_value=1, max_value=5),
-                      read_size=strategies.integers(min_value=1, max_value=4096),
-                      write_size=strategies.integers(min_value=1, max_value=4096))
+class TestDecompressor_read_to_iter_fuzzing(TestCase):
+    @hypothesis.given(
+        original=strategies.sampled_from(random_input_data()),
+        level=strategies.integers(min_value=1, max_value=5),
+        read_size=strategies.integers(min_value=1, max_value=4096),
+        write_size=strategies.integers(min_value=1, max_value=4096),
+    )
     def test_read_write_size_variance(self, original, level, read_size, write_size):
         cctx = zstd.ZstdCompressor(level=level)
         frame = cctx.compress(original)
@@ -444,29 +531,33 @@
         source = io.BytesIO(frame)
 
         dctx = zstd.ZstdDecompressor()
-        chunks = list(dctx.read_to_iter(source, read_size=read_size, write_size=write_size))
+        chunks = list(
+            dctx.read_to_iter(source, read_size=read_size, write_size=write_size)
+        )
 
-        self.assertEqual(b''.join(chunks), original)
+        self.assertEqual(b"".join(chunks), original)
 
 
-@unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
-class TestDecompressor_multi_decompress_to_buffer_fuzzing(unittest.TestCase):
-    @hypothesis.given(original=strategies.lists(strategies.sampled_from(random_input_data()),
-                                        min_size=1, max_size=1024),
-                threads=strategies.integers(min_value=1, max_value=8),
-                use_dict=strategies.booleans())
+@unittest.skipUnless("ZSTD_SLOW_TESTS" in os.environ, "ZSTD_SLOW_TESTS not set")
+class TestDecompressor_multi_decompress_to_buffer_fuzzing(TestCase):
+    @hypothesis.given(
+        original=strategies.lists(
+            strategies.sampled_from(random_input_data()), min_size=1, max_size=1024
+        ),
+        threads=strategies.integers(min_value=1, max_value=8),
+        use_dict=strategies.booleans(),
+    )
     def test_data_equivalence(self, original, threads, use_dict):
         kwargs = {}
         if use_dict:
-            kwargs['dict_data'] = zstd.ZstdCompressionDict(original[0])
+            kwargs["dict_data"] = zstd.ZstdCompressionDict(original[0])
 
-        cctx = zstd.ZstdCompressor(level=1,
-                                   write_content_size=True,
-                                   write_checksum=True,
-                                   **kwargs)
+        cctx = zstd.ZstdCompressor(
+            level=1, write_content_size=True, write_checksum=True, **kwargs
+        )
 
-        if not hasattr(cctx, 'multi_compress_to_buffer'):
-            self.skipTest('multi_compress_to_buffer not available')
+        if not hasattr(cctx, "multi_compress_to_buffer"):
+            self.skipTest("multi_compress_to_buffer not available")
 
         frames_buffer = cctx.multi_compress_to_buffer(original, threads=-1)
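
The fuzzing tests above drive the streaming reader in small, randomized steps. A minimal sketch of the reader API they exercise (illustrative only, with made-up input data):

    import io

    import zstandard as zstd

    cctx = zstd.ZstdCompressor(level=3)
    source = io.BytesIO(cctx.compress(b"data" * 1024) + cctx.compress(b"more" * 1024))

    dctx = zstd.ZstdDecompressor()
    chunks = []
    # read_across_frames=True keeps reading past the first frame boundary.
    with dctx.stream_reader(source, read_size=8192, read_across_frames=True) as reader:
        while True:
            chunk = reader.read(16384)
            if not chunk:
                break
            chunks.append(chunk)

    assert b"".join(chunks) == b"data" * 1024 + b"more" * 1024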
 
--- a/contrib/python-zstandard/tests/test_estimate_sizes.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/tests/test_estimate_sizes.py	Sat Dec 28 09:55:45 2019 -0800
@@ -2,14 +2,14 @@
 
 import zstandard as zstd
 
-from . common import (
+from .common import (
     make_cffi,
+    TestCase,
 )
 
 
 @make_cffi
-class TestSizes(unittest.TestCase):
+class TestSizes(TestCase):
     def test_decompression_size(self):
         size = zstd.estimate_decompression_context_size()
         self.assertGreater(size, 100000)
-
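
For reference, the API covered by this small test can be used to budget memory ahead of time; an illustrative sketch:

    import zstandard as zstd

    # Approximate memory footprint, in bytes, of a single decompression context.
    size = zstd.estimate_decompression_context_size()
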
--- a/contrib/python-zstandard/tests/test_module_attributes.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/tests/test_module_attributes.py	Sat Dec 28 09:55:45 2019 -0800
@@ -4,65 +4,66 @@
 
 import zstandard as zstd
 
-from . common import (
+from .common import (
     make_cffi,
+    TestCase,
 )
 
 
 @make_cffi
-class TestModuleAttributes(unittest.TestCase):
+class TestModuleAttributes(TestCase):
     def test_version(self):
-        self.assertEqual(zstd.ZSTD_VERSION, (1, 4, 3))
+        self.assertEqual(zstd.ZSTD_VERSION, (1, 4, 4))
 
-        self.assertEqual(zstd.__version__, '0.12.0')
+        self.assertEqual(zstd.__version__, "0.13.0")
 
     def test_constants(self):
         self.assertEqual(zstd.MAX_COMPRESSION_LEVEL, 22)
-        self.assertEqual(zstd.FRAME_HEADER, b'\x28\xb5\x2f\xfd')
+        self.assertEqual(zstd.FRAME_HEADER, b"\x28\xb5\x2f\xfd")
 
     def test_hasattr(self):
         attrs = (
-            'CONTENTSIZE_UNKNOWN',
-            'CONTENTSIZE_ERROR',
-            'COMPRESSION_RECOMMENDED_INPUT_SIZE',
-            'COMPRESSION_RECOMMENDED_OUTPUT_SIZE',
-            'DECOMPRESSION_RECOMMENDED_INPUT_SIZE',
-            'DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE',
-            'MAGIC_NUMBER',
-            'FLUSH_BLOCK',
-            'FLUSH_FRAME',
-            'BLOCKSIZELOG_MAX',
-            'BLOCKSIZE_MAX',
-            'WINDOWLOG_MIN',
-            'WINDOWLOG_MAX',
-            'CHAINLOG_MIN',
-            'CHAINLOG_MAX',
-            'HASHLOG_MIN',
-            'HASHLOG_MAX',
-            'HASHLOG3_MAX',
-            'MINMATCH_MIN',
-            'MINMATCH_MAX',
-            'SEARCHLOG_MIN',
-            'SEARCHLOG_MAX',
-            'SEARCHLENGTH_MIN',
-            'SEARCHLENGTH_MAX',
-            'TARGETLENGTH_MIN',
-            'TARGETLENGTH_MAX',
-            'LDM_MINMATCH_MIN',
-            'LDM_MINMATCH_MAX',
-            'LDM_BUCKETSIZELOG_MAX',
-            'STRATEGY_FAST',
-            'STRATEGY_DFAST',
-            'STRATEGY_GREEDY',
-            'STRATEGY_LAZY',
-            'STRATEGY_LAZY2',
-            'STRATEGY_BTLAZY2',
-            'STRATEGY_BTOPT',
-            'STRATEGY_BTULTRA',
-            'STRATEGY_BTULTRA2',
-            'DICT_TYPE_AUTO',
-            'DICT_TYPE_RAWCONTENT',
-            'DICT_TYPE_FULLDICT',
+            "CONTENTSIZE_UNKNOWN",
+            "CONTENTSIZE_ERROR",
+            "COMPRESSION_RECOMMENDED_INPUT_SIZE",
+            "COMPRESSION_RECOMMENDED_OUTPUT_SIZE",
+            "DECOMPRESSION_RECOMMENDED_INPUT_SIZE",
+            "DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE",
+            "MAGIC_NUMBER",
+            "FLUSH_BLOCK",
+            "FLUSH_FRAME",
+            "BLOCKSIZELOG_MAX",
+            "BLOCKSIZE_MAX",
+            "WINDOWLOG_MIN",
+            "WINDOWLOG_MAX",
+            "CHAINLOG_MIN",
+            "CHAINLOG_MAX",
+            "HASHLOG_MIN",
+            "HASHLOG_MAX",
+            "HASHLOG3_MAX",
+            "MINMATCH_MIN",
+            "MINMATCH_MAX",
+            "SEARCHLOG_MIN",
+            "SEARCHLOG_MAX",
+            "SEARCHLENGTH_MIN",
+            "SEARCHLENGTH_MAX",
+            "TARGETLENGTH_MIN",
+            "TARGETLENGTH_MAX",
+            "LDM_MINMATCH_MIN",
+            "LDM_MINMATCH_MAX",
+            "LDM_BUCKETSIZELOG_MAX",
+            "STRATEGY_FAST",
+            "STRATEGY_DFAST",
+            "STRATEGY_GREEDY",
+            "STRATEGY_LAZY",
+            "STRATEGY_LAZY2",
+            "STRATEGY_BTLAZY2",
+            "STRATEGY_BTOPT",
+            "STRATEGY_BTULTRA",
+            "STRATEGY_BTULTRA2",
+            "DICT_TYPE_AUTO",
+            "DICT_TYPE_RAWCONTENT",
+            "DICT_TYPE_FULLDICT",
         )
 
         for a in attrs:
--- a/contrib/python-zstandard/tests/test_train_dictionary.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/tests/test_train_dictionary.py	Sat Dec 28 09:55:45 2019 -0800
@@ -4,10 +4,11 @@
 
 import zstandard as zstd
 
-from . common import (
+from .common import (
     generate_samples,
     make_cffi,
     random_input_data,
+    TestCase,
 )
 
 if sys.version_info[0] >= 3:
@@ -17,24 +18,24 @@
 
 
 @make_cffi
-class TestTrainDictionary(unittest.TestCase):
+class TestTrainDictionary(TestCase):
     def test_no_args(self):
         with self.assertRaises(TypeError):
             zstd.train_dictionary()
 
     def test_bad_args(self):
         with self.assertRaises(TypeError):
-            zstd.train_dictionary(8192, u'foo')
+            zstd.train_dictionary(8192, u"foo")
 
         with self.assertRaises(ValueError):
-            zstd.train_dictionary(8192, [u'foo'])
+            zstd.train_dictionary(8192, [u"foo"])
 
     def test_no_params(self):
         d = zstd.train_dictionary(8192, random_input_data())
         self.assertIsInstance(d.dict_id(), int_type)
 
         # The dictionary ID may be different across platforms.
-        expected = b'\x37\xa4\x30\xec' + struct.pack('<I', d.dict_id())
+        expected = b"\x37\xa4\x30\xec" + struct.pack("<I", d.dict_id())
 
         data = d.as_bytes()
         self.assertEqual(data[0:8], expected)
@@ -44,46 +45,48 @@
         self.assertIsInstance(d.dict_id(), int_type)
 
         data = d.as_bytes()
-        self.assertEqual(data[0:4], b'\x37\xa4\x30\xec')
+        self.assertEqual(data[0:4], b"\x37\xa4\x30\xec")
 
         self.assertEqual(d.k, 64)
         self.assertEqual(d.d, 16)
 
     def test_set_dict_id(self):
-        d = zstd.train_dictionary(8192, generate_samples(), k=64, d=16,
-                                  dict_id=42)
+        d = zstd.train_dictionary(8192, generate_samples(), k=64, d=16, dict_id=42)
         self.assertEqual(d.dict_id(), 42)
 
     def test_optimize(self):
-        d = zstd.train_dictionary(8192, generate_samples(), threads=-1, steps=1,
-                                  d=16)
+        d = zstd.train_dictionary(8192, generate_samples(), threads=-1, steps=1, d=16)
 
         # This varies by platform.
         self.assertIn(d.k, (50, 2000))
         self.assertEqual(d.d, 16)
 
+
 @make_cffi
-class TestCompressionDict(unittest.TestCase):
+class TestCompressionDict(TestCase):
     def test_bad_mode(self):
-        with self.assertRaisesRegexp(ValueError, 'invalid dictionary load mode'):
-            zstd.ZstdCompressionDict(b'foo', dict_type=42)
+        with self.assertRaisesRegex(ValueError, "invalid dictionary load mode"):
+            zstd.ZstdCompressionDict(b"foo", dict_type=42)
 
     def test_bad_precompute_compress(self):
         d = zstd.train_dictionary(8192, generate_samples(), k=64, d=16)
 
-        with self.assertRaisesRegexp(ValueError, 'must specify one of level or '):
+        with self.assertRaisesRegex(ValueError, "must specify one of level or "):
             d.precompute_compress()
 
-        with self.assertRaisesRegexp(ValueError, 'must only specify one of level or '):
-            d.precompute_compress(level=3,
-                                  compression_params=zstd.CompressionParameters())
+        with self.assertRaisesRegex(ValueError, "must only specify one of level or "):
+            d.precompute_compress(
+                level=3, compression_params=zstd.CompressionParameters()
+            )
 
     def test_precompute_compress_rawcontent(self):
-        d = zstd.ZstdCompressionDict(b'dictcontent' * 64,
-                                     dict_type=zstd.DICT_TYPE_RAWCONTENT)
+        d = zstd.ZstdCompressionDict(
+            b"dictcontent" * 64, dict_type=zstd.DICT_TYPE_RAWCONTENT
+        )
         d.precompute_compress(level=1)
 
-        d = zstd.ZstdCompressionDict(b'dictcontent' * 64,
-                                     dict_type=zstd.DICT_TYPE_FULLDICT)
-        with self.assertRaisesRegexp(zstd.ZstdError, 'unable to precompute dictionary'):
+        d = zstd.ZstdCompressionDict(
+            b"dictcontent" * 64, dict_type=zstd.DICT_TYPE_FULLDICT
+        )
+        with self.assertRaisesRegex(zstd.ZstdError, "unable to precompute dictionary"):
             d.precompute_compress(level=1)
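
The dictionary tests above cover training, precomputation, and raw-content dictionaries. A minimal sketch of that workflow (illustrative; the sample data here is made up, real samples should be representative of the target corpus, and training can fail if samples are too small or too uniform):

    import zstandard as zstd

    samples = [b"record %d: " % i + b"payload" * 64 for i in range(128)]

    # Train an 8 KiB dictionary; k/d/dict_id mirror the parameters tested above.
    d = zstd.train_dictionary(8192, samples, k=64, d=16, dict_id=42)

    # Precompute compression tables once, then reuse the dictionary.
    d.precompute_compress(level=3)
    cctx = zstd.ZstdCompressor(level=3, dict_data=d)
    dctx = zstd.ZstdDecompressor(dict_data=d)
    assert dctx.decompress(cctx.compress(samples[0])) == samples[0]

    # Arbitrary bytes can also serve directly as a raw-content dictionary.
    raw = zstd.ZstdCompressionDict(
        b"dictcontent" * 64, dict_type=zstd.DICT_TYPE_RAWCONTENT
    )
    raw.precompute_compress(level=1)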
--- a/contrib/python-zstandard/zstandard/__init__.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstandard/__init__.py	Sat Dec 28 09:55:45 2019 -0800
@@ -28,38 +28,48 @@
 # defining a variable and `setup.py` could write the file with whatever
 # policy was specified at build time. Until someone needs it, we go with
 # the hacky but simple environment variable approach.
-_module_policy = os.environ.get('PYTHON_ZSTANDARD_IMPORT_POLICY', 'default')
+_module_policy = os.environ.get("PYTHON_ZSTANDARD_IMPORT_POLICY", "default")
 
-if _module_policy == 'default':
-    if platform.python_implementation() in ('CPython',):
+if _module_policy == "default":
+    if platform.python_implementation() in ("CPython",):
         from zstd import *
-        backend = 'cext'
-    elif platform.python_implementation() in ('PyPy',):
+
+        backend = "cext"
+    elif platform.python_implementation() in ("PyPy",):
         from .cffi import *
-        backend = 'cffi'
+
+        backend = "cffi"
     else:
         try:
             from zstd import *
-            backend = 'cext'
+
+            backend = "cext"
         except ImportError:
             from .cffi import *
-            backend = 'cffi'
-elif _module_policy == 'cffi_fallback':
+
+            backend = "cffi"
+elif _module_policy == "cffi_fallback":
     try:
         from zstd import *
-        backend = 'cext'
+
+        backend = "cext"
     except ImportError:
         from .cffi import *
-        backend = 'cffi'
-elif _module_policy == 'cext':
+
+        backend = "cffi"
+elif _module_policy == "cext":
     from zstd import *
-    backend = 'cext'
-elif _module_policy == 'cffi':
+
+    backend = "cext"
+elif _module_policy == "cffi":
     from .cffi import *
-    backend = 'cffi'
+
+    backend = "cffi"
 else:
-    raise ImportError('unknown module import policy: %s; use default, cffi_fallback, '
-                      'cext, or cffi' % _module_policy)
+    raise ImportError(
+        "unknown module import policy: %s; use default, cffi_fallback, "
+        "cext, or cffi" % _module_policy
+    )
 
 # Keep this in sync with python-zstandard.h.
-__version__ = '0.12.0'
+__version__ = "0.13.0"
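
The backend selection above is controlled by an environment variable that must be set before the package is first imported; an illustrative sketch:

    import os

    # One of "default", "cffi_fallback", "cext", or "cffi".
    os.environ["PYTHON_ZSTANDARD_IMPORT_POLICY"] = "cffi"

    import zstandard as zstd

    print(zstd.backend)      # "cffi" here; "cext" on CPython with the default policy
    print(zstd.__version__)  # "0.13.0" for this vendored copy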
--- a/contrib/python-zstandard/zstandard/cffi.py	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstandard/cffi.py	Sat Dec 28 09:55:45 2019 -0800
@@ -14,68 +14,67 @@
     #'BufferSegments',
     #'BufferWithSegments',
     #'BufferWithSegmentsCollection',
-    'CompressionParameters',
-    'ZstdCompressionDict',
-    'ZstdCompressionParameters',
-    'ZstdCompressor',
-    'ZstdError',
-    'ZstdDecompressor',
-    'FrameParameters',
-    'estimate_decompression_context_size',
-    'frame_content_size',
-    'frame_header_size',
-    'get_frame_parameters',
-    'train_dictionary',
-
+    "CompressionParameters",
+    "ZstdCompressionDict",
+    "ZstdCompressionParameters",
+    "ZstdCompressor",
+    "ZstdError",
+    "ZstdDecompressor",
+    "FrameParameters",
+    "estimate_decompression_context_size",
+    "frame_content_size",
+    "frame_header_size",
+    "get_frame_parameters",
+    "train_dictionary",
     # Constants.
-    'FLUSH_BLOCK',
-    'FLUSH_FRAME',
-    'COMPRESSOBJ_FLUSH_FINISH',
-    'COMPRESSOBJ_FLUSH_BLOCK',
-    'ZSTD_VERSION',
-    'FRAME_HEADER',
-    'CONTENTSIZE_UNKNOWN',
-    'CONTENTSIZE_ERROR',
-    'MAX_COMPRESSION_LEVEL',
-    'COMPRESSION_RECOMMENDED_INPUT_SIZE',
-    'COMPRESSION_RECOMMENDED_OUTPUT_SIZE',
-    'DECOMPRESSION_RECOMMENDED_INPUT_SIZE',
-    'DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE',
-    'MAGIC_NUMBER',
-    'BLOCKSIZELOG_MAX',
-    'BLOCKSIZE_MAX',
-    'WINDOWLOG_MIN',
-    'WINDOWLOG_MAX',
-    'CHAINLOG_MIN',
-    'CHAINLOG_MAX',
-    'HASHLOG_MIN',
-    'HASHLOG_MAX',
-    'HASHLOG3_MAX',
-    'MINMATCH_MIN',
-    'MINMATCH_MAX',
-    'SEARCHLOG_MIN',
-    'SEARCHLOG_MAX',
-    'SEARCHLENGTH_MIN',
-    'SEARCHLENGTH_MAX',
-    'TARGETLENGTH_MIN',
-    'TARGETLENGTH_MAX',
-    'LDM_MINMATCH_MIN',
-    'LDM_MINMATCH_MAX',
-    'LDM_BUCKETSIZELOG_MAX',
-    'STRATEGY_FAST',
-    'STRATEGY_DFAST',
-    'STRATEGY_GREEDY',
-    'STRATEGY_LAZY',
-    'STRATEGY_LAZY2',
-    'STRATEGY_BTLAZY2',
-    'STRATEGY_BTOPT',
-    'STRATEGY_BTULTRA',
-    'STRATEGY_BTULTRA2',
-    'DICT_TYPE_AUTO',
-    'DICT_TYPE_RAWCONTENT',
-    'DICT_TYPE_FULLDICT',
-    'FORMAT_ZSTD1',
-    'FORMAT_ZSTD1_MAGICLESS',
+    "FLUSH_BLOCK",
+    "FLUSH_FRAME",
+    "COMPRESSOBJ_FLUSH_FINISH",
+    "COMPRESSOBJ_FLUSH_BLOCK",
+    "ZSTD_VERSION",
+    "FRAME_HEADER",
+    "CONTENTSIZE_UNKNOWN",
+    "CONTENTSIZE_ERROR",
+    "MAX_COMPRESSION_LEVEL",
+    "COMPRESSION_RECOMMENDED_INPUT_SIZE",
+    "COMPRESSION_RECOMMENDED_OUTPUT_SIZE",
+    "DECOMPRESSION_RECOMMENDED_INPUT_SIZE",
+    "DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE",
+    "MAGIC_NUMBER",
+    "BLOCKSIZELOG_MAX",
+    "BLOCKSIZE_MAX",
+    "WINDOWLOG_MIN",
+    "WINDOWLOG_MAX",
+    "CHAINLOG_MIN",
+    "CHAINLOG_MAX",
+    "HASHLOG_MIN",
+    "HASHLOG_MAX",
+    "HASHLOG3_MAX",
+    "MINMATCH_MIN",
+    "MINMATCH_MAX",
+    "SEARCHLOG_MIN",
+    "SEARCHLOG_MAX",
+    "SEARCHLENGTH_MIN",
+    "SEARCHLENGTH_MAX",
+    "TARGETLENGTH_MIN",
+    "TARGETLENGTH_MAX",
+    "LDM_MINMATCH_MIN",
+    "LDM_MINMATCH_MAX",
+    "LDM_BUCKETSIZELOG_MAX",
+    "STRATEGY_FAST",
+    "STRATEGY_DFAST",
+    "STRATEGY_GREEDY",
+    "STRATEGY_LAZY",
+    "STRATEGY_LAZY2",
+    "STRATEGY_BTLAZY2",
+    "STRATEGY_BTOPT",
+    "STRATEGY_BTULTRA",
+    "STRATEGY_BTULTRA2",
+    "DICT_TYPE_AUTO",
+    "DICT_TYPE_RAWCONTENT",
+    "DICT_TYPE_FULLDICT",
+    "FORMAT_ZSTD1",
+    "FORMAT_ZSTD1_MAGICLESS",
 ]
 
 import io
@@ -105,10 +104,14 @@
 
 MAX_COMPRESSION_LEVEL = lib.ZSTD_maxCLevel()
 MAGIC_NUMBER = lib.ZSTD_MAGICNUMBER
-FRAME_HEADER = b'\x28\xb5\x2f\xfd'
+FRAME_HEADER = b"\x28\xb5\x2f\xfd"
 CONTENTSIZE_UNKNOWN = lib.ZSTD_CONTENTSIZE_UNKNOWN
 CONTENTSIZE_ERROR = lib.ZSTD_CONTENTSIZE_ERROR
-ZSTD_VERSION = (lib.ZSTD_VERSION_MAJOR, lib.ZSTD_VERSION_MINOR, lib.ZSTD_VERSION_RELEASE)
+ZSTD_VERSION = (
+    lib.ZSTD_VERSION_MAJOR,
+    lib.ZSTD_VERSION_MINOR,
+    lib.ZSTD_VERSION_RELEASE,
+)
 
 BLOCKSIZELOG_MAX = lib.ZSTD_BLOCKSIZELOG_MAX
 BLOCKSIZE_MAX = lib.ZSTD_BLOCKSIZE_MAX
@@ -165,9 +168,9 @@
     # Linux.
     try:
         if sys.version_info[0] == 2:
-            return os.sysconf(b'SC_NPROCESSORS_ONLN')
+            return os.sysconf(b"SC_NPROCESSORS_ONLN")
         else:
-            return os.sysconf(u'SC_NPROCESSORS_ONLN')
+            return os.sysconf("SC_NPROCESSORS_ONLN")
     except (AttributeError, ValueError):
         pass
 
@@ -183,7 +186,8 @@
     # Resolves to bytes on Python 2 and 3. We use the string for formatting
     # into error messages, which will be literal unicode. So convert it to
     # unicode.
-    return ffi.string(lib.ZSTD_getErrorName(zresult)).decode('utf-8')
+    return ffi.string(lib.ZSTD_getErrorName(zresult)).decode("utf-8")
+
 
 def _make_cctx_params(params):
     res = lib.ZSTD_createCCtxParams()
@@ -221,19 +225,20 @@
 
     return res
 
+
 class ZstdCompressionParameters(object):
     @staticmethod
     def from_level(level, source_size=0, dict_size=0, **kwargs):
         params = lib.ZSTD_getCParams(level, source_size, dict_size)
 
         args = {
-            'window_log': 'windowLog',
-            'chain_log': 'chainLog',
-            'hash_log': 'hashLog',
-            'search_log': 'searchLog',
-            'min_match': 'minMatch',
-            'target_length': 'targetLength',
-            'compression_strategy': 'strategy',
+            "window_log": "windowLog",
+            "chain_log": "chainLog",
+            "hash_log": "hashLog",
+            "search_log": "searchLog",
+            "min_match": "minMatch",
+            "target_length": "targetLength",
+            "compression_strategy": "strategy",
         }
 
         for arg, attr in args.items():
@@ -242,14 +247,33 @@
 
         return ZstdCompressionParameters(**kwargs)
 
-    def __init__(self, format=0, compression_level=0, window_log=0, hash_log=0,
-                 chain_log=0, search_log=0, min_match=0, target_length=0,
-                 strategy=-1, compression_strategy=-1,
-                 write_content_size=1, write_checksum=0,
-                 write_dict_id=0, job_size=0, overlap_log=-1,
-                 overlap_size_log=-1, force_max_window=0, enable_ldm=0,
-                 ldm_hash_log=0, ldm_min_match=0, ldm_bucket_size_log=0,
-                 ldm_hash_rate_log=-1, ldm_hash_every_log=-1, threads=0):
+    def __init__(
+        self,
+        format=0,
+        compression_level=0,
+        window_log=0,
+        hash_log=0,
+        chain_log=0,
+        search_log=0,
+        min_match=0,
+        target_length=0,
+        strategy=-1,
+        compression_strategy=-1,
+        write_content_size=1,
+        write_checksum=0,
+        write_dict_id=0,
+        job_size=0,
+        overlap_log=-1,
+        overlap_size_log=-1,
+        force_max_window=0,
+        enable_ldm=0,
+        ldm_hash_log=0,
+        ldm_min_match=0,
+        ldm_bucket_size_log=0,
+        ldm_hash_rate_log=-1,
+        ldm_hash_every_log=-1,
+        threads=0,
+    ):
 
         params = lib.ZSTD_createCCtxParams()
         if params == ffi.NULL:
@@ -267,7 +291,9 @@
         _set_compression_parameter(params, lib.ZSTD_c_nbWorkers, threads)
 
         _set_compression_parameter(params, lib.ZSTD_c_format, format)
-        _set_compression_parameter(params, lib.ZSTD_c_compressionLevel, compression_level)
+        _set_compression_parameter(
+            params, lib.ZSTD_c_compressionLevel, compression_level
+        )
         _set_compression_parameter(params, lib.ZSTD_c_windowLog, window_log)
         _set_compression_parameter(params, lib.ZSTD_c_hashLog, hash_log)
         _set_compression_parameter(params, lib.ZSTD_c_chainLog, chain_log)
@@ -276,7 +302,7 @@
         _set_compression_parameter(params, lib.ZSTD_c_targetLength, target_length)
 
         if strategy != -1 and compression_strategy != -1:
-            raise ValueError('cannot specify both compression_strategy and strategy')
+            raise ValueError("cannot specify both compression_strategy and strategy")
 
         if compression_strategy != -1:
             strategy = compression_strategy
@@ -284,13 +310,15 @@
             strategy = 0
 
         _set_compression_parameter(params, lib.ZSTD_c_strategy, strategy)
-        _set_compression_parameter(params, lib.ZSTD_c_contentSizeFlag, write_content_size)
+        _set_compression_parameter(
+            params, lib.ZSTD_c_contentSizeFlag, write_content_size
+        )
         _set_compression_parameter(params, lib.ZSTD_c_checksumFlag, write_checksum)
         _set_compression_parameter(params, lib.ZSTD_c_dictIDFlag, write_dict_id)
         _set_compression_parameter(params, lib.ZSTD_c_jobSize, job_size)
 
         if overlap_log != -1 and overlap_size_log != -1:
-            raise ValueError('cannot specify both overlap_log and overlap_size_log')
+            raise ValueError("cannot specify both overlap_log and overlap_size_log")
 
         if overlap_size_log != -1:
             overlap_log = overlap_size_log
@@ -299,13 +327,19 @@
 
         _set_compression_parameter(params, lib.ZSTD_c_overlapLog, overlap_log)
         _set_compression_parameter(params, lib.ZSTD_c_forceMaxWindow, force_max_window)
-        _set_compression_parameter(params, lib.ZSTD_c_enableLongDistanceMatching, enable_ldm)
+        _set_compression_parameter(
+            params, lib.ZSTD_c_enableLongDistanceMatching, enable_ldm
+        )
         _set_compression_parameter(params, lib.ZSTD_c_ldmHashLog, ldm_hash_log)
         _set_compression_parameter(params, lib.ZSTD_c_ldmMinMatch, ldm_min_match)
-        _set_compression_parameter(params, lib.ZSTD_c_ldmBucketSizeLog, ldm_bucket_size_log)
+        _set_compression_parameter(
+            params, lib.ZSTD_c_ldmBucketSizeLog, ldm_bucket_size_log
+        )
 
         if ldm_hash_rate_log != -1 and ldm_hash_every_log != -1:
-            raise ValueError('cannot specify both ldm_hash_rate_log and ldm_hash_every_log')
+            raise ValueError(
+                "cannot specify both ldm_hash_rate_log and ldm_hash_every_log"
+            )
 
         if ldm_hash_every_log != -1:
             ldm_hash_rate_log = ldm_hash_every_log
@@ -380,7 +414,9 @@
 
     @property
     def enable_ldm(self):
-        return _get_compression_parameter(self._params, lib.ZSTD_c_enableLongDistanceMatching)
+        return _get_compression_parameter(
+            self._params, lib.ZSTD_c_enableLongDistanceMatching
+        )
 
     @property
     def ldm_hash_log(self):
@@ -409,8 +445,10 @@
     def estimated_compression_context_size(self):
         return lib.ZSTD_estimateCCtxSize_usingCCtxParams(self._params)
 
+
 CompressionParameters = ZstdCompressionParameters
 
+
 def estimate_decompression_context_size():
     return lib.ZSTD_estimateDCtxSize()
 
@@ -418,24 +456,25 @@
 def _set_compression_parameter(params, param, value):
     zresult = lib.ZSTD_CCtxParams_setParameter(params, param, value)
     if lib.ZSTD_isError(zresult):
-        raise ZstdError('unable to set compression context parameter: %s' %
-                        _zstd_error(zresult))
+        raise ZstdError(
+            "unable to set compression context parameter: %s" % _zstd_error(zresult)
+        )
 
 
 def _get_compression_parameter(params, param):
-    result = ffi.new('int *')
+    result = ffi.new("int *")
 
     zresult = lib.ZSTD_CCtxParams_getParameter(params, param, result)
     if lib.ZSTD_isError(zresult):
-        raise ZstdError('unable to get compression context parameter: %s' %
-                        _zstd_error(zresult))
+        raise ZstdError(
+            "unable to get compression context parameter: %s" % _zstd_error(zresult)
+        )
 
     return result[0]
 
 
 class ZstdCompressionWriter(object):
-    def __init__(self, compressor, writer, source_size, write_size,
-                 write_return_read):
+    def __init__(self, compressor, writer, source_size, write_size, write_return_read):
         self._compressor = compressor
         self._writer = writer
         self._write_size = write_size
@@ -444,24 +483,22 @@
         self._closed = False
         self._bytes_compressed = 0
 
-        self._dst_buffer = ffi.new('char[]', write_size)
-        self._out_buffer = ffi.new('ZSTD_outBuffer *')
+        self._dst_buffer = ffi.new("char[]", write_size)
+        self._out_buffer = ffi.new("ZSTD_outBuffer *")
         self._out_buffer.dst = self._dst_buffer
         self._out_buffer.size = len(self._dst_buffer)
         self._out_buffer.pos = 0
 
-        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(compressor._cctx,
-                                                  source_size)
+        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(compressor._cctx, source_size)
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
+            raise ZstdError("error setting source size: %s" % _zstd_error(zresult))
 
     def __enter__(self):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         if self._entered:
-            raise ZstdError('cannot __enter__ multiple times')
+            raise ZstdError("cannot __enter__ multiple times")
 
         self._entered = True
         return self
@@ -480,11 +517,11 @@
         return lib.ZSTD_sizeof_CCtx(self._compressor._cctx)
 
     def fileno(self):
-        f = getattr(self._writer, 'fileno', None)
+        f = getattr(self._writer, "fileno", None)
         if f:
             return f()
         else:
-            raise OSError('fileno not available on underlying writer')
+            raise OSError("fileno not available on underlying writer")
 
     def close(self):
         if self._closed:
@@ -496,7 +533,7 @@
             self._closed = True
 
         # Call close() on underlying stream as well.
-        f = getattr(self._writer, 'close', None)
+        f = getattr(self._writer, "close", None)
         if f:
             f()
 
@@ -529,7 +566,7 @@
         return True
 
     def writelines(self, lines):
-        raise NotImplementedError('writelines() is not yet implemented')
+        raise NotImplementedError("writelines() is not yet implemented")
 
     def read(self, size=-1):
         raise io.UnsupportedOperation()
@@ -542,13 +579,13 @@
 
     def write(self, data):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         total_write = 0
 
         data_buffer = ffi.from_buffer(data)
 
-        in_buffer = ffi.new('ZSTD_inBuffer *')
+        in_buffer = ffi.new("ZSTD_inBuffer *")
         in_buffer.src = data_buffer
         in_buffer.size = len(data_buffer)
         in_buffer.pos = 0
@@ -557,12 +594,11 @@
         out_buffer.pos = 0
 
         while in_buffer.pos < in_buffer.size:
-            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
-                                               out_buffer, in_buffer,
-                                               lib.ZSTD_e_continue)
+            zresult = lib.ZSTD_compressStream2(
+                self._compressor._cctx, out_buffer, in_buffer, lib.ZSTD_e_continue
+            )
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError("zstd compress error: %s" % _zstd_error(zresult))
 
             if out_buffer.pos:
                 self._writer.write(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
@@ -581,28 +617,27 @@
         elif flush_mode == FLUSH_FRAME:
             flush = lib.ZSTD_e_end
         else:
-            raise ValueError('unknown flush_mode: %r' % flush_mode)
+            raise ValueError("unknown flush_mode: %r" % flush_mode)
 
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         total_write = 0
 
         out_buffer = self._out_buffer
         out_buffer.pos = 0
 
-        in_buffer = ffi.new('ZSTD_inBuffer *')
+        in_buffer = ffi.new("ZSTD_inBuffer *")
         in_buffer.src = ffi.NULL
         in_buffer.size = 0
         in_buffer.pos = 0
 
         while True:
-            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
-                                               out_buffer, in_buffer,
-                                               flush)
+            zresult = lib.ZSTD_compressStream2(
+                self._compressor._cctx, out_buffer, in_buffer, flush
+            )
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError("zstd compress error: %s" % _zstd_error(zresult))
 
             if out_buffer.pos:
                 self._writer.write(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
@@ -622,10 +657,10 @@
 class ZstdCompressionObj(object):
     def compress(self, data):
         if self._finished:
-            raise ZstdError('cannot call compress() after compressor finished')
+            raise ZstdError("cannot call compress() after compressor finished")
 
         data_buffer = ffi.from_buffer(data)
-        source = ffi.new('ZSTD_inBuffer *')
+        source = ffi.new("ZSTD_inBuffer *")
         source.src = data_buffer
         source.size = len(data_buffer)
         source.pos = 0
@@ -633,26 +668,24 @@
         chunks = []
 
         while source.pos < len(data):
-            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
-                                               self._out,
-                                               source,
-                                               lib.ZSTD_e_continue)
+            zresult = lib.ZSTD_compressStream2(
+                self._compressor._cctx, self._out, source, lib.ZSTD_e_continue
+            )
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError("zstd compress error: %s" % _zstd_error(zresult))
 
             if self._out.pos:
                 chunks.append(ffi.buffer(self._out.dst, self._out.pos)[:])
                 self._out.pos = 0
 
-        return b''.join(chunks)
+        return b"".join(chunks)
 
     def flush(self, flush_mode=COMPRESSOBJ_FLUSH_FINISH):
         if flush_mode not in (COMPRESSOBJ_FLUSH_FINISH, COMPRESSOBJ_FLUSH_BLOCK):
-            raise ValueError('flush mode not recognized')
+            raise ValueError("flush mode not recognized")
 
         if self._finished:
-            raise ZstdError('compressor object already finished')
+            raise ZstdError("compressor object already finished")
 
         if flush_mode == COMPRESSOBJ_FLUSH_BLOCK:
             z_flush_mode = lib.ZSTD_e_flush
@@ -660,11 +693,11 @@
             z_flush_mode = lib.ZSTD_e_end
             self._finished = True
         else:
-            raise ZstdError('unhandled flush mode')
+            raise ZstdError("unhandled flush mode")
 
         assert self._out.pos == 0
 
-        in_buffer = ffi.new('ZSTD_inBuffer *')
+        in_buffer = ffi.new("ZSTD_inBuffer *")
         in_buffer.src = ffi.NULL
         in_buffer.size = 0
         in_buffer.pos = 0
@@ -672,13 +705,13 @@
         chunks = []
 
         while True:
-            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
-                                               self._out,
-                                               in_buffer,
-                                               z_flush_mode)
+            zresult = lib.ZSTD_compressStream2(
+                self._compressor._cctx, self._out, in_buffer, z_flush_mode
+            )
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('error ending compression stream: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError(
+                    "error ending compression stream: %s" % _zstd_error(zresult)
+                )
 
             if self._out.pos:
                 chunks.append(ffi.buffer(self._out.dst, self._out.pos)[:])
@@ -687,19 +720,19 @@
             if not zresult:
                 break
 
-        return b''.join(chunks)
+        return b"".join(chunks)
 
 
 class ZstdCompressionChunker(object):
     def __init__(self, compressor, chunk_size):
         self._compressor = compressor
-        self._out = ffi.new('ZSTD_outBuffer *')
-        self._dst_buffer = ffi.new('char[]', chunk_size)
+        self._out = ffi.new("ZSTD_outBuffer *")
+        self._dst_buffer = ffi.new("char[]", chunk_size)
         self._out.dst = self._dst_buffer
         self._out.size = chunk_size
         self._out.pos = 0
 
-        self._in = ffi.new('ZSTD_inBuffer *')
+        self._in = ffi.new("ZSTD_inBuffer *")
         self._in.src = ffi.NULL
         self._in.size = 0
         self._in.pos = 0
@@ -707,11 +740,13 @@
 
     def compress(self, data):
         if self._finished:
-            raise ZstdError('cannot call compress() after compression finished')
+            raise ZstdError("cannot call compress() after compression finished")
 
         if self._in.src != ffi.NULL:
-            raise ZstdError('cannot perform operation before consuming output '
-                            'from previous operation')
+            raise ZstdError(
+                "cannot perform operation before consuming output "
+                "from previous operation"
+            )
 
         data_buffer = ffi.from_buffer(data)
 
@@ -723,10 +758,9 @@
         self._in.pos = 0
 
         while self._in.pos < self._in.size:
-            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
-                                               self._out,
-                                               self._in,
-                                               lib.ZSTD_e_continue)
+            zresult = lib.ZSTD_compressStream2(
+                self._compressor._cctx, self._out, self._in, lib.ZSTD_e_continue
+            )
 
             if self._in.pos == self._in.size:
                 self._in.src = ffi.NULL
@@ -734,8 +768,7 @@
                 self._in.pos = 0
 
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError("zstd compress error: %s" % _zstd_error(zresult))
 
             if self._out.pos == self._out.size:
                 yield ffi.buffer(self._out.dst, self._out.pos)[:]
@@ -743,18 +776,19 @@
 
     def flush(self):
         if self._finished:
-            raise ZstdError('cannot call flush() after compression finished')
+            raise ZstdError("cannot call flush() after compression finished")
 
         if self._in.src != ffi.NULL:
-            raise ZstdError('cannot call flush() before consuming output from '
-                            'previous operation')
+            raise ZstdError(
+                "cannot call flush() before consuming output from " "previous operation"
+            )
 
         while True:
-            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
-                                               self._out, self._in,
-                                               lib.ZSTD_e_flush)
+            zresult = lib.ZSTD_compressStream2(
+                self._compressor._cctx, self._out, self._in, lib.ZSTD_e_flush
+            )
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' % _zstd_error(zresult))
+                raise ZstdError("zstd compress error: %s" % _zstd_error(zresult))
 
             if self._out.pos:
                 yield ffi.buffer(self._out.dst, self._out.pos)[:]
@@ -765,18 +799,20 @@
 
     def finish(self):
         if self._finished:
-            raise ZstdError('cannot call finish() after compression finished')
+            raise ZstdError("cannot call finish() after compression finished")
 
         if self._in.src != ffi.NULL:
-            raise ZstdError('cannot call finish() before consuming output from '
-                            'previous operation')
+            raise ZstdError(
+                "cannot call finish() before consuming output from "
+                "previous operation"
+            )
 
         while True:
-            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
-                                               self._out, self._in,
-                                               lib.ZSTD_e_end)
+            zresult = lib.ZSTD_compressStream2(
+                self._compressor._cctx, self._out, self._in, lib.ZSTD_e_end
+            )
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' % _zstd_error(zresult))
+                raise ZstdError("zstd compress error: %s" % _zstd_error(zresult))
 
             if self._out.pos:
                 yield ffi.buffer(self._out.dst, self._out.pos)[:]
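The chunker methods above are generators that yield fixed-size compressed chunks; a usage sketch, assuming a compressor created via ZstdCompressor.chunker() as defined later in this module:

    import zstandard

    cctx = zstandard.ZstdCompressor()
    chunker = cctx.chunker(chunk_size=32768)

    out = []
    for raw in (b"block one", b"block two"):
        out.extend(chunker.compress(raw))   # yields only completely filled chunks
    out.extend(chunker.finish())            # flushes the remainder and ends the frame
    frame = b"".join(out)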
@@ -798,13 +834,13 @@
         self._finished_input = False
         self._finished_output = False
 
-        self._in_buffer = ffi.new('ZSTD_inBuffer *')
+        self._in_buffer = ffi.new("ZSTD_inBuffer *")
         # Holds a ref so backing bytes in self._in_buffer stay alive.
         self._source_buffer = None
 
     def __enter__(self):
         if self._entered:
-            raise ValueError('cannot __enter__ multiple times')
+            raise ValueError("cannot __enter__ multiple times")
 
         self._entered = True
         return self
@@ -833,10 +869,10 @@
         raise io.UnsupportedOperation()
 
     def write(self, data):
-        raise OSError('stream is not writable')
+        raise OSError("stream is not writable")
 
     def writelines(self, ignored):
-        raise OSError('stream is not writable')
+        raise OSError("stream is not writable")
 
     def isatty(self):
         return False
@@ -865,7 +901,7 @@
 
             chunks.append(chunk)
 
-        return b''.join(chunks)
+        return b"".join(chunks)
 
     def __iter__(self):
         raise io.UnsupportedOperation()
@@ -879,7 +915,7 @@
         if self._finished_input:
             return
 
-        if hasattr(self._source, 'read'):
+        if hasattr(self._source, "read"):
             data = self._source.read(self._read_size)
 
             if not data:
@@ -902,9 +938,9 @@
 
         old_pos = out_buffer.pos
 
-        zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
-                                           out_buffer, self._in_buffer,
-                                           lib.ZSTD_e_continue)
+        zresult = lib.ZSTD_compressStream2(
+            self._compressor._cctx, out_buffer, self._in_buffer, lib.ZSTD_e_continue
+        )
 
         self._bytes_compressed += out_buffer.pos - old_pos
 
@@ -914,31 +950,30 @@
             self._in_buffer.size = 0
             self._source_buffer = None
 
-            if not hasattr(self._source, 'read'):
+            if not hasattr(self._source, "read"):
                 self._finished_input = True
 
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('zstd compress error: %s',
-                            _zstd_error(zresult))
+            raise ZstdError("zstd compress error: %s", _zstd_error(zresult))
 
         return out_buffer.pos and out_buffer.pos == out_buffer.size
 
     def read(self, size=-1):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         if size < -1:
-            raise ValueError('cannot read negative amounts less than -1')
+            raise ValueError("cannot read negative amounts less than -1")
 
         if size == -1:
             return self.readall()
 
         if self._finished_output or size == 0:
-            return b''
+            return b""
 
         # Need a dedicated ref to dest buffer otherwise it gets collected.
-        dst_buffer = ffi.new('char[]', size)
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+        dst_buffer = ffi.new("char[]", size)
+        out_buffer = ffi.new("ZSTD_outBuffer *")
         out_buffer.dst = dst_buffer
         out_buffer.size = size
         out_buffer.pos = 0
@@ -955,15 +990,14 @@
         # EOF
         old_pos = out_buffer.pos
 
-        zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
-                                           out_buffer, self._in_buffer,
-                                           lib.ZSTD_e_end)
+        zresult = lib.ZSTD_compressStream2(
+            self._compressor._cctx, out_buffer, self._in_buffer, lib.ZSTD_e_end
+        )
 
         self._bytes_compressed += out_buffer.pos - old_pos
 
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('error ending compression stream: %s',
-                            _zstd_error(zresult))
+            raise ZstdError("error ending compression stream: %s", _zstd_error(zresult))
 
         if zresult == 0:
             self._finished_output = True
@@ -972,20 +1006,20 @@
 
     def read1(self, size=-1):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         if size < -1:
-            raise ValueError('cannot read negative amounts less than -1')
+            raise ValueError("cannot read negative amounts less than -1")
 
         if self._finished_output or size == 0:
-            return b''
+            return b""
 
         # -1 returns arbitrary number of bytes.
         if size == -1:
             size = COMPRESSION_RECOMMENDED_OUTPUT_SIZE
 
-        dst_buffer = ffi.new('char[]', size)
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+        dst_buffer = ffi.new("char[]", size)
+        out_buffer = ffi.new("ZSTD_outBuffer *")
         out_buffer.dst = dst_buffer
         out_buffer.size = size
         out_buffer.pos = 0
@@ -1020,15 +1054,16 @@
         # EOF.
         old_pos = out_buffer.pos
 
-        zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
-                                           out_buffer, self._in_buffer,
-                                           lib.ZSTD_e_end)
+        zresult = lib.ZSTD_compressStream2(
+            self._compressor._cctx, out_buffer, self._in_buffer, lib.ZSTD_e_end
+        )
 
         self._bytes_compressed += out_buffer.pos - old_pos
 
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('error ending compression stream: %s' %
-                            _zstd_error(zresult))
+            raise ZstdError(
+                "error ending compression stream: %s" % _zstd_error(zresult)
+            )
 
         if zresult == 0:
             self._finished_output = True
@@ -1037,15 +1072,15 @@
 
     def readinto(self, b):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         if self._finished_output:
             return 0
 
         # TODO use writable=True once we require CFFI >= 1.12.
         dest_buffer = ffi.from_buffer(b)
-        ffi.memmove(b, b'', 0)
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+        ffi.memmove(b, b"", 0)
+        out_buffer = ffi.new("ZSTD_outBuffer *")
         out_buffer.dst = dest_buffer
         out_buffer.size = len(dest_buffer)
         out_buffer.pos = 0
@@ -1060,15 +1095,14 @@
 
         # EOF.
         old_pos = out_buffer.pos
-        zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
-                                           out_buffer, self._in_buffer,
-                                           lib.ZSTD_e_end)
+        zresult = lib.ZSTD_compressStream2(
+            self._compressor._cctx, out_buffer, self._in_buffer, lib.ZSTD_e_end
+        )
 
         self._bytes_compressed += out_buffer.pos - old_pos
 
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('error ending compression stream: %s',
-                            _zstd_error(zresult))
+            raise ZstdError("error ending compression stream: %s", _zstd_error(zresult))
 
         if zresult == 0:
             self._finished_output = True
@@ -1077,16 +1111,16 @@
 
     def readinto1(self, b):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         if self._finished_output:
             return 0
 
         # TODO use writable=True once we require CFFI >= 1.12.
         dest_buffer = ffi.from_buffer(b)
-        ffi.memmove(b, b'', 0)
-
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+        ffi.memmove(b, b"", 0)
+
+        out_buffer = ffi.new("ZSTD_outBuffer *")
         out_buffer.dst = dest_buffer
         out_buffer.size = len(dest_buffer)
         out_buffer.pos = 0
@@ -1107,15 +1141,16 @@
         # EOF.
         old_pos = out_buffer.pos
 
-        zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
-                                           out_buffer, self._in_buffer,
-                                           lib.ZSTD_e_end)
+        zresult = lib.ZSTD_compressStream2(
+            self._compressor._cctx, out_buffer, self._in_buffer, lib.ZSTD_e_end
+        )
 
         self._bytes_compressed += out_buffer.pos - old_pos
 
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('error ending compression stream: %s' %
-                            _zstd_error(zresult))
+            raise ZstdError(
+                "error ending compression stream: %s" % _zstd_error(zresult)
+            )
 
         if zresult == 0:
             self._finished_output = True
@@ -1124,29 +1159,35 @@
 
 
 class ZstdCompressor(object):
-    def __init__(self, level=3, dict_data=None, compression_params=None,
-                 write_checksum=None, write_content_size=None,
-                 write_dict_id=None, threads=0):
+    def __init__(
+        self,
+        level=3,
+        dict_data=None,
+        compression_params=None,
+        write_checksum=None,
+        write_content_size=None,
+        write_dict_id=None,
+        threads=0,
+    ):
         if level > lib.ZSTD_maxCLevel():
-            raise ValueError('level must be less than %d' % lib.ZSTD_maxCLevel())
+            raise ValueError("level must be less than %d" % lib.ZSTD_maxCLevel())
 
         if threads < 0:
             threads = _cpu_count()
 
         if compression_params and write_checksum is not None:
-            raise ValueError('cannot define compression_params and '
-                             'write_checksum')
+            raise ValueError("cannot define compression_params and " "write_checksum")
 
         if compression_params and write_content_size is not None:
-            raise ValueError('cannot define compression_params and '
-                             'write_content_size')
+            raise ValueError(
+                "cannot define compression_params and " "write_content_size"
+            )
 
         if compression_params and write_dict_id is not None:
-            raise ValueError('cannot define compression_params and '
-                             'write_dict_id')
+            raise ValueError("cannot define compression_params and " "write_dict_id")
 
         if compression_params and threads:
-            raise ValueError('cannot define compression_params and threads')
+            raise ValueError("cannot define compression_params and threads")
 
         if compression_params:
             self._params = _make_cctx_params(compression_params)
@@ -1160,27 +1201,24 @@
 
             self._params = ffi.gc(params, lib.ZSTD_freeCCtxParams)
 
-            _set_compression_parameter(self._params,
-                                       lib.ZSTD_c_compressionLevel,
-                                       level)
+            _set_compression_parameter(self._params, lib.ZSTD_c_compressionLevel, level)
 
             _set_compression_parameter(
                 self._params,
                 lib.ZSTD_c_contentSizeFlag,
-                write_content_size if write_content_size is not None else 1)
-
-            _set_compression_parameter(self._params,
-                                       lib.ZSTD_c_checksumFlag,
-                                       1 if write_checksum else 0)
-
-            _set_compression_parameter(self._params,
-                                       lib.ZSTD_c_dictIDFlag,
-                                       1 if write_dict_id else 0)
+                write_content_size if write_content_size is not None else 1,
+            )
+
+            _set_compression_parameter(
+                self._params, lib.ZSTD_c_checksumFlag, 1 if write_checksum else 0
+            )
+
+            _set_compression_parameter(
+                self._params, lib.ZSTD_c_dictIDFlag, 1 if write_dict_id else 0
+            )
 
             if threads:
-                _set_compression_parameter(self._params,
-                                           lib.ZSTD_c_nbWorkers,
-                                           threads)
+                _set_compression_parameter(self._params, lib.ZSTD_c_nbWorkers, threads)
 
         cctx = lib.ZSTD_createCCtx()
         if cctx == ffi.NULL:
@@ -1194,15 +1232,16 @@
         try:
             self._setup_cctx()
         finally:
-            self._cctx = ffi.gc(cctx, lib.ZSTD_freeCCtx,
-                                size=lib.ZSTD_sizeof_CCtx(cctx))
+            self._cctx = ffi.gc(
+                cctx, lib.ZSTD_freeCCtx, size=lib.ZSTD_sizeof_CCtx(cctx)
+            )
 
     def _setup_cctx(self):
-        zresult = lib.ZSTD_CCtx_setParametersUsingCCtxParams(self._cctx,
-                                                             self._params)
+        zresult = lib.ZSTD_CCtx_setParametersUsingCCtxParams(self._cctx, self._params)
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('could not set compression parameters: %s' %
-                            _zstd_error(zresult))
+            raise ZstdError(
+                "could not set compression parameters: %s" % _zstd_error(zresult)
+            )
 
         dict_data = self._dict_data
 
@@ -1211,12 +1250,17 @@
                 zresult = lib.ZSTD_CCtx_refCDict(self._cctx, dict_data._cdict)
             else:
                 zresult = lib.ZSTD_CCtx_loadDictionary_advanced(
-                    self._cctx, dict_data.as_bytes(), len(dict_data),
-                    lib.ZSTD_dlm_byRef, dict_data._dict_type)
+                    self._cctx,
+                    dict_data.as_bytes(),
+                    len(dict_data),
+                    lib.ZSTD_dlm_byRef,
+                    dict_data._dict_type,
+                )
 
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('could not load compression dictionary: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError(
+                    "could not load compression dictionary: %s" % _zstd_error(zresult)
+                )
 
     def memory_size(self):
         return lib.ZSTD_sizeof_CCtx(self._cctx)
@@ -1227,15 +1271,14 @@
         data_buffer = ffi.from_buffer(data)
 
         dest_size = lib.ZSTD_compressBound(len(data_buffer))
-        out = new_nonzero('char[]', dest_size)
+        out = new_nonzero("char[]", dest_size)
 
         zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, len(data_buffer))
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
-
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-        in_buffer = ffi.new('ZSTD_inBuffer *')
+            raise ZstdError("error setting source size: %s" % _zstd_error(zresult))
+
+        out_buffer = ffi.new("ZSTD_outBuffer *")
+        in_buffer = ffi.new("ZSTD_inBuffer *")
 
         out_buffer.dst = out
         out_buffer.size = dest_size
@@ -1245,16 +1288,14 @@
         in_buffer.size = len(data_buffer)
         in_buffer.pos = 0
 
-        zresult = lib.ZSTD_compressStream2(self._cctx,
-                                           out_buffer,
-                                           in_buffer,
-                                           lib.ZSTD_e_end)
+        zresult = lib.ZSTD_compressStream2(
+            self._cctx, out_buffer, in_buffer, lib.ZSTD_e_end
+        )
 
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('cannot compress: %s' %
-                            _zstd_error(zresult))
+            raise ZstdError("cannot compress: %s" % _zstd_error(zresult))
         elif zresult:
-            raise ZstdError('unexpected partial frame flush')
+            raise ZstdError("unexpected partial frame flush")
 
         return ffi.buffer(out, out_buffer.pos)[:]
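compress() above performs a one-shot compression into a buffer sized with ZSTD_compressBound(); a minimal round-trip sketch using the matching decompressor:

    import zstandard

    data = b"data to compress" * 1024
    frame = zstandard.ZstdCompressor(level=3).compress(data)
    assert zstandard.ZstdDecompressor().decompress(frame) == data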
 
@@ -1266,12 +1307,11 @@
 
         zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
+            raise ZstdError("error setting source size: %s" % _zstd_error(zresult))
 
         cobj = ZstdCompressionObj()
-        cobj._out = ffi.new('ZSTD_outBuffer *')
-        cobj._dst_buffer = ffi.new('char[]', COMPRESSION_RECOMMENDED_OUTPUT_SIZE)
+        cobj._out = ffi.new("ZSTD_outBuffer *")
+        cobj._dst_buffer = ffi.new("char[]", COMPRESSION_RECOMMENDED_OUTPUT_SIZE)
         cobj._out.dst = cobj._dst_buffer
         cobj._out.size = COMPRESSION_RECOMMENDED_OUTPUT_SIZE
         cobj._out.pos = 0
@@ -1288,19 +1328,23 @@
 
         zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
+            raise ZstdError("error setting source size: %s" % _zstd_error(zresult))
 
         return ZstdCompressionChunker(self, chunk_size=chunk_size)
 
-    def copy_stream(self, ifh, ofh, size=-1,
-                    read_size=COMPRESSION_RECOMMENDED_INPUT_SIZE,
-                    write_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE):
-
-        if not hasattr(ifh, 'read'):
-            raise ValueError('first argument must have a read() method')
-        if not hasattr(ofh, 'write'):
-            raise ValueError('second argument must have a write() method')
+    def copy_stream(
+        self,
+        ifh,
+        ofh,
+        size=-1,
+        read_size=COMPRESSION_RECOMMENDED_INPUT_SIZE,
+        write_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE,
+    ):
+
+        if not hasattr(ifh, "read"):
+            raise ValueError("first argument must have a read() method")
+        if not hasattr(ofh, "write"):
+            raise ValueError("second argument must have a write() method")
 
         lib.ZSTD_CCtx_reset(self._cctx, lib.ZSTD_reset_session_only)
 
@@ -1309,13 +1353,12 @@
 
         zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-
-        dst_buffer = ffi.new('char[]', write_size)
+            raise ZstdError("error setting source size: %s" % _zstd_error(zresult))
+
+        in_buffer = ffi.new("ZSTD_inBuffer *")
+        out_buffer = ffi.new("ZSTD_outBuffer *")
+
+        dst_buffer = ffi.new("char[]", write_size)
         out_buffer.dst = dst_buffer
         out_buffer.size = write_size
         out_buffer.pos = 0
@@ -1334,13 +1377,11 @@
             in_buffer.pos = 0
 
             while in_buffer.pos < in_buffer.size:
-                zresult = lib.ZSTD_compressStream2(self._cctx,
-                                                   out_buffer,
-                                                   in_buffer,
-                                                   lib.ZSTD_e_continue)
+                zresult = lib.ZSTD_compressStream2(
+                    self._cctx, out_buffer, in_buffer, lib.ZSTD_e_continue
+                )
                 if lib.ZSTD_isError(zresult):
-                    raise ZstdError('zstd compress error: %s' %
-                                    _zstd_error(zresult))
+                    raise ZstdError("zstd compress error: %s" % _zstd_error(zresult))
 
                 if out_buffer.pos:
                     ofh.write(ffi.buffer(out_buffer.dst, out_buffer.pos))
@@ -1349,13 +1390,13 @@
 
         # We've finished reading. Flush the compressor.
         while True:
-            zresult = lib.ZSTD_compressStream2(self._cctx,
-                                               out_buffer,
-                                               in_buffer,
-                                               lib.ZSTD_e_end)
+            zresult = lib.ZSTD_compressStream2(
+                self._cctx, out_buffer, in_buffer, lib.ZSTD_e_end
+            )
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('error ending compression stream: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError(
+                    "error ending compression stream: %s" % _zstd_error(zresult)
+                )
 
             if out_buffer.pos:
                 ofh.write(ffi.buffer(out_buffer.dst, out_buffer.pos))
@@ -1367,8 +1408,9 @@
 
         return total_read, total_write
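copy_stream() pumps an input file object through the compressor into an output file object and returns the totals; a sketch with in-memory streams:

    import io
    import zstandard

    src = io.BytesIO(b"x" * 100000)
    dst = io.BytesIO()

    cctx = zstandard.ZstdCompressor()
    bytes_read, bytes_written = cctx.copy_stream(src, dst)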
 
-    def stream_reader(self, source, size=-1,
-                      read_size=COMPRESSION_RECOMMENDED_INPUT_SIZE):
+    def stream_reader(
+        self, source, size=-1, read_size=COMPRESSION_RECOMMENDED_INPUT_SIZE
+    ):
         lib.ZSTD_CCtx_reset(self._cctx, lib.ZSTD_reset_session_only)
 
         try:
@@ -1381,40 +1423,48 @@
 
         zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
+            raise ZstdError("error setting source size: %s" % _zstd_error(zresult))
 
         return ZstdCompressionReader(self, source, read_size)
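stream_reader() wraps a source (a file object or a buffer) in the read-only ZstdCompressionReader defined earlier; a usage sketch:

    import io
    import zstandard

    cctx = zstandard.ZstdCompressor()
    with cctx.stream_reader(io.BytesIO(b"payload")) as reader:
        frame = reader.read()   # read(-1) drains the whole compressed frame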
 
-    def stream_writer(self, writer, size=-1,
-                 write_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE,
-                 write_return_read=False):
-
-        if not hasattr(writer, 'write'):
-            raise ValueError('must pass an object with a write() method')
+    def stream_writer(
+        self,
+        writer,
+        size=-1,
+        write_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE,
+        write_return_read=False,
+    ):
+
+        if not hasattr(writer, "write"):
+            raise ValueError("must pass an object with a write() method")
 
         lib.ZSTD_CCtx_reset(self._cctx, lib.ZSTD_reset_session_only)
 
         if size < 0:
             size = lib.ZSTD_CONTENTSIZE_UNKNOWN
 
-        return ZstdCompressionWriter(self, writer, size, write_size,
-                                     write_return_read)
+        return ZstdCompressionWriter(self, writer, size, write_size, write_return_read)
 
     write_to = stream_writer
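stream_writer() (and its legacy alias write_to) returns the ZstdCompressionWriter defined above. A minimal sketch, assuming the FLUSH_FRAME constant referenced by the writer is exported at the package level:

    import io
    import zstandard

    dst = io.BytesIO()
    cctx = zstandard.ZstdCompressor()
    writer = cctx.stream_writer(dst)
    writer.write(b"payload")
    writer.flush(zstandard.FLUSH_FRAME)   # end the zstd frame
    frame = dst.getvalue()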
 
-    def read_to_iter(self, reader, size=-1,
-                     read_size=COMPRESSION_RECOMMENDED_INPUT_SIZE,
-                     write_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE):
-        if hasattr(reader, 'read'):
+    def read_to_iter(
+        self,
+        reader,
+        size=-1,
+        read_size=COMPRESSION_RECOMMENDED_INPUT_SIZE,
+        write_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE,
+    ):
+        if hasattr(reader, "read"):
             have_read = True
-        elif hasattr(reader, '__getitem__'):
+        elif hasattr(reader, "__getitem__"):
             have_read = False
             buffer_offset = 0
             size = len(reader)
         else:
-            raise ValueError('must pass an object with a read() method or '
-                             'conforms to buffer protocol')
+            raise ValueError(
+                "must pass an object with a read() method or "
+                "conforms to buffer protocol"
+            )
 
         lib.ZSTD_CCtx_reset(self._cctx, lib.ZSTD_reset_session_only)
 
@@ -1423,17 +1473,16 @@
 
         zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+            raise ZstdError("error setting source size: %s" % _zstd_error(zresult))
+
+        in_buffer = ffi.new("ZSTD_inBuffer *")
+        out_buffer = ffi.new("ZSTD_outBuffer *")
 
         in_buffer.src = ffi.NULL
         in_buffer.size = 0
         in_buffer.pos = 0
 
-        dst_buffer = ffi.new('char[]', write_size)
+        dst_buffer = ffi.new("char[]", write_size)
         out_buffer.dst = dst_buffer
         out_buffer.size = write_size
         out_buffer.pos = 0
@@ -1449,7 +1498,7 @@
             else:
                 remaining = len(reader) - buffer_offset
                 slice_size = min(remaining, read_size)
-                read_result = reader[buffer_offset:buffer_offset + slice_size]
+                read_result = reader[buffer_offset : buffer_offset + slice_size]
                 buffer_offset += slice_size
 
             # No new input data. Break out of the read loop.
@@ -1464,11 +1513,11 @@
             in_buffer.pos = 0
 
             while in_buffer.pos < in_buffer.size:
-                zresult = lib.ZSTD_compressStream2(self._cctx, out_buffer, in_buffer,
-                                                   lib.ZSTD_e_continue)
+                zresult = lib.ZSTD_compressStream2(
+                    self._cctx, out_buffer, in_buffer, lib.ZSTD_e_continue
+                )
                 if lib.ZSTD_isError(zresult):
-                    raise ZstdError('zstd compress error: %s' %
-                                    _zstd_error(zresult))
+                    raise ZstdError("zstd compress error: %s" % _zstd_error(zresult))
 
                 if out_buffer.pos:
                     data = ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
@@ -1484,13 +1533,13 @@
         # remains.
         while True:
             assert out_buffer.pos == 0
-            zresult = lib.ZSTD_compressStream2(self._cctx,
-                                               out_buffer,
-                                               in_buffer,
-                                               lib.ZSTD_e_end)
+            zresult = lib.ZSTD_compressStream2(
+                self._cctx, out_buffer, in_buffer, lib.ZSTD_e_end
+            )
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('error ending compression stream: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError(
+                    "error ending compression stream: %s" % _zstd_error(zresult)
+                )
 
             if out_buffer.pos:
                 data = ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
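read_to_iter() is a generator that pulls from the reader (or buffer) and yields compressed chunks as they become available; a usage sketch:

    import io
    import zstandard

    cctx = zstandard.ZstdCompressor()
    source = io.BytesIO(b"streamed input" * 1000)

    frame = b"".join(cctx.read_to_iter(source, read_size=16384))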
@@ -1522,7 +1571,7 @@
     size = lib.ZSTD_getFrameContentSize(data_buffer, len(data_buffer))
 
     if size == lib.ZSTD_CONTENTSIZE_ERROR:
-        raise ZstdError('error when determining content size')
+        raise ZstdError("error when determining content size")
     elif size == lib.ZSTD_CONTENTSIZE_UNKNOWN:
         return -1
     else:
@@ -1534,24 +1583,23 @@
 
     zresult = lib.ZSTD_frameHeaderSize(data_buffer, len(data_buffer))
     if lib.ZSTD_isError(zresult):
-        raise ZstdError('could not determine frame header size: %s' %
-                        _zstd_error(zresult))
+        raise ZstdError(
+            "could not determine frame header size: %s" % _zstd_error(zresult)
+        )
 
     return zresult
 
 
 def get_frame_parameters(data):
-    params = ffi.new('ZSTD_frameHeader *')
+    params = ffi.new("ZSTD_frameHeader *")
 
     data_buffer = ffi.from_buffer(data)
     zresult = lib.ZSTD_getFrameHeader(params, data_buffer, len(data_buffer))
     if lib.ZSTD_isError(zresult):
-        raise ZstdError('cannot get frame parameters: %s' %
-                        _zstd_error(zresult))
+        raise ZstdError("cannot get frame parameters: %s" % _zstd_error(zresult))
 
     if zresult:
-        raise ZstdError('not enough data for frame parameters; need %d bytes' %
-                        zresult)
+        raise ZstdError("not enough data for frame parameters; need %d bytes" % zresult)
 
     return FrameParameters(params[0])
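frame_content_size() and get_frame_parameters() above inspect a frame header without decompressing; a sketch, assuming the documented FrameParameters fields:

    import zstandard

    frame = zstandard.ZstdCompressor().compress(b"hello")

    print(zstandard.frame_content_size(frame))   # -1 if the header omits the size
    params = zstandard.get_frame_parameters(frame)
    print(params.content_size, params.has_checksum)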
 
@@ -1563,10 +1611,10 @@
         self.k = k
         self.d = d
 
-        if dict_type not in (DICT_TYPE_AUTO, DICT_TYPE_RAWCONTENT,
-                             DICT_TYPE_FULLDICT):
-            raise ValueError('invalid dictionary load mode: %d; must use '
-                             'DICT_TYPE_* constants')
+        if dict_type not in (DICT_TYPE_AUTO, DICT_TYPE_RAWCONTENT, DICT_TYPE_FULLDICT):
+            raise ValueError(
+                "invalid dictionary load mode: %d; must use " "DICT_TYPE_* constants"
+            )
 
         self._dict_type = dict_type
         self._cdict = None
@@ -1582,16 +1630,15 @@
 
     def precompute_compress(self, level=0, compression_params=None):
         if level and compression_params:
-            raise ValueError('must only specify one of level or '
-                             'compression_params')
+            raise ValueError("must only specify one of level or " "compression_params")
 
         if not level and not compression_params:
-            raise ValueError('must specify one of level or compression_params')
+            raise ValueError("must specify one of level or compression_params")
 
         if level:
             cparams = lib.ZSTD_getCParams(level, 0, len(self._data))
         else:
-            cparams = ffi.new('ZSTD_compressionParameters')
+            cparams = ffi.new("ZSTD_compressionParameters")
             cparams.chainLog = compression_params.chain_log
             cparams.hashLog = compression_params.hash_log
             cparams.minMatch = compression_params.min_match
@@ -1600,59 +1647,75 @@
             cparams.targetLength = compression_params.target_length
             cparams.windowLog = compression_params.window_log
 
-        cdict = lib.ZSTD_createCDict_advanced(self._data, len(self._data),
-                                              lib.ZSTD_dlm_byRef,
-                                              self._dict_type,
-                                              cparams,
-                                              lib.ZSTD_defaultCMem)
+        cdict = lib.ZSTD_createCDict_advanced(
+            self._data,
+            len(self._data),
+            lib.ZSTD_dlm_byRef,
+            self._dict_type,
+            cparams,
+            lib.ZSTD_defaultCMem,
+        )
         if cdict == ffi.NULL:
-            raise ZstdError('unable to precompute dictionary')
-
-        self._cdict = ffi.gc(cdict, lib.ZSTD_freeCDict,
-                             size=lib.ZSTD_sizeof_CDict(cdict))
+            raise ZstdError("unable to precompute dictionary")
+
+        self._cdict = ffi.gc(
+            cdict, lib.ZSTD_freeCDict, size=lib.ZSTD_sizeof_CDict(cdict)
+        )
 
     @property
     def _ddict(self):
-        ddict = lib.ZSTD_createDDict_advanced(self._data, len(self._data),
-                                              lib.ZSTD_dlm_byRef,
-                                              self._dict_type,
-                                              lib.ZSTD_defaultCMem)
+        ddict = lib.ZSTD_createDDict_advanced(
+            self._data,
+            len(self._data),
+            lib.ZSTD_dlm_byRef,
+            self._dict_type,
+            lib.ZSTD_defaultCMem,
+        )
 
         if ddict == ffi.NULL:
-            raise ZstdError('could not create decompression dict')
-
-        ddict = ffi.gc(ddict, lib.ZSTD_freeDDict,
-                       size=lib.ZSTD_sizeof_DDict(ddict))
-        self.__dict__['_ddict'] = ddict
+            raise ZstdError("could not create decompression dict")
+
+        ddict = ffi.gc(ddict, lib.ZSTD_freeDDict, size=lib.ZSTD_sizeof_DDict(ddict))
+        self.__dict__["_ddict"] = ddict
 
         return ddict
 
-def train_dictionary(dict_size, samples, k=0, d=0, notifications=0, dict_id=0,
-                     level=0, steps=0, threads=0):
+
+def train_dictionary(
+    dict_size,
+    samples,
+    k=0,
+    d=0,
+    notifications=0,
+    dict_id=0,
+    level=0,
+    steps=0,
+    threads=0,
+):
     if not isinstance(samples, list):
-        raise TypeError('samples must be a list')
+        raise TypeError("samples must be a list")
 
     if threads < 0:
         threads = _cpu_count()
 
     total_size = sum(map(len, samples))
 
-    samples_buffer = new_nonzero('char[]', total_size)
-    sample_sizes = new_nonzero('size_t[]', len(samples))
+    samples_buffer = new_nonzero("char[]", total_size)
+    sample_sizes = new_nonzero("size_t[]", len(samples))
 
     offset = 0
     for i, sample in enumerate(samples):
         if not isinstance(sample, bytes_type):
-            raise ValueError('samples must be bytes')
+            raise ValueError("samples must be bytes")
 
         l = len(sample)
         ffi.memmove(samples_buffer + offset, sample, l)
         offset += l
         sample_sizes[i] = l
 
-    dict_data = new_nonzero('char[]', dict_size)
-
-    dparams = ffi.new('ZDICT_cover_params_t *')[0]
+    dict_data = new_nonzero("char[]", dict_size)
+
+    dparams = ffi.new("ZDICT_cover_params_t *")[0]
     dparams.k = k
     dparams.d = d
     dparams.steps = steps
@@ -1661,34 +1724,51 @@
     dparams.zParams.dictID = dict_id
     dparams.zParams.compressionLevel = level
 
-    if (not dparams.k and not dparams.d and not dparams.steps
-        and not dparams.nbThreads and not dparams.zParams.notificationLevel
+    if (
+        not dparams.k
+        and not dparams.d
+        and not dparams.steps
+        and not dparams.nbThreads
+        and not dparams.zParams.notificationLevel
         and not dparams.zParams.dictID
-        and not dparams.zParams.compressionLevel):
+        and not dparams.zParams.compressionLevel
+    ):
         zresult = lib.ZDICT_trainFromBuffer(
-            ffi.addressof(dict_data), dict_size,
+            ffi.addressof(dict_data),
+            dict_size,
             ffi.addressof(samples_buffer),
-            ffi.addressof(sample_sizes, 0), len(samples))
+            ffi.addressof(sample_sizes, 0),
+            len(samples),
+        )
     elif dparams.steps or dparams.nbThreads:
         zresult = lib.ZDICT_optimizeTrainFromBuffer_cover(
-            ffi.addressof(dict_data), dict_size,
+            ffi.addressof(dict_data),
+            dict_size,
             ffi.addressof(samples_buffer),
-            ffi.addressof(sample_sizes, 0), len(samples),
-            ffi.addressof(dparams))
+            ffi.addressof(sample_sizes, 0),
+            len(samples),
+            ffi.addressof(dparams),
+        )
     else:
         zresult = lib.ZDICT_trainFromBuffer_cover(
-            ffi.addressof(dict_data), dict_size,
+            ffi.addressof(dict_data),
+            dict_size,
             ffi.addressof(samples_buffer),
-            ffi.addressof(sample_sizes, 0), len(samples),
-            dparams)
+            ffi.addressof(sample_sizes, 0),
+            len(samples),
+            dparams,
+        )
 
     if lib.ZDICT_isError(zresult):
-        msg = ffi.string(lib.ZDICT_getErrorName(zresult)).decode('utf-8')
-        raise ZstdError('cannot train dict: %s' % msg)
-
-    return ZstdCompressionDict(ffi.buffer(dict_data, zresult)[:],
-                               dict_type=DICT_TYPE_FULLDICT,
-                               k=dparams.k, d=dparams.d)
+        msg = ffi.string(lib.ZDICT_getErrorName(zresult)).decode("utf-8")
+        raise ZstdError("cannot train dict: %s" % msg)
+
+    return ZstdCompressionDict(
+        ffi.buffer(dict_data, zresult)[:],
+        dict_type=DICT_TYPE_FULLDICT,
+        k=dparams.k,
+        d=dparams.d,
+    )
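train_dictionary() wraps ZDICT's cover trainer and returns a ZstdCompressionDict that can be fed back into compressors and decompressors; a usage sketch with toy samples:

    import zstandard

    # Training can raise ZstdError if samples are too small or too uniform.
    samples = [("record %d: %s" % (i, "x" * (i % 64))).encode("ascii") for i in range(2048)]
    d = zstandard.train_dictionary(8192, samples)

    cctx = zstandard.ZstdCompressor(dict_data=d)
    dctx = zstandard.ZstdDecompressor(dict_data=d)
    assert dctx.decompress(cctx.compress(samples[0])) == samples[0]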
 
 
 class ZstdDecompressionObj(object):
@@ -1699,21 +1779,21 @@
 
     def decompress(self, data):
         if self._finished:
-            raise ZstdError('cannot use a decompressobj multiple times')
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+            raise ZstdError("cannot use a decompressobj multiple times")
+
+        in_buffer = ffi.new("ZSTD_inBuffer *")
+        out_buffer = ffi.new("ZSTD_outBuffer *")
 
         data_buffer = ffi.from_buffer(data)
 
         if len(data_buffer) == 0:
-            return b''
+            return b""
 
         in_buffer.src = data_buffer
         in_buffer.size = len(data_buffer)
         in_buffer.pos = 0
 
-        dst_buffer = ffi.new('char[]', self._write_size)
+        dst_buffer = ffi.new("char[]", self._write_size)
         out_buffer.dst = dst_buffer
         out_buffer.size = len(dst_buffer)
         out_buffer.pos = 0
@@ -1721,11 +1801,11 @@
         chunks = []
 
         while True:
-            zresult = lib.ZSTD_decompressStream(self._decompressor._dctx,
-                                                out_buffer, in_buffer)
+            zresult = lib.ZSTD_decompressStream(
+                self._decompressor._dctx, out_buffer, in_buffer
+            )
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd decompressor error: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError("zstd decompressor error: %s" % _zstd_error(zresult))
 
             if zresult == 0:
                 self._finished = True
@@ -1734,13 +1814,14 @@
             if out_buffer.pos:
                 chunks.append(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
 
-            if (zresult == 0 or
-                    (in_buffer.pos == in_buffer.size and out_buffer.pos == 0)):
+            if zresult == 0 or (
+                in_buffer.pos == in_buffer.size and out_buffer.pos == 0
+            ):
                 break
 
             out_buffer.pos = 0
 
-        return b''.join(chunks)
+        return b"".join(chunks)
 
     def flush(self, length=0):
         pass
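ZstdDecompressionObj mirrors zlib's decompressobj: decompress() accepts successive compressed chunks and flush() is a no-op. A minimal sketch:

    import zstandard

    frame = zstandard.ZstdCompressor().compress(b"hello world")

    dobj = zstandard.ZstdDecompressor().decompressobj()
    out = dobj.decompress(frame)   # b"hello world"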
@@ -1757,13 +1838,13 @@
         self._bytes_decompressed = 0
         self._finished_input = False
         self._finished_output = False
-        self._in_buffer = ffi.new('ZSTD_inBuffer *')
+        self._in_buffer = ffi.new("ZSTD_inBuffer *")
         # Holds a ref to self._in_buffer.src.
         self._source_buffer = None
 
     def __enter__(self):
         if self._entered:
-            raise ValueError('cannot __enter__ multiple times')
+            raise ValueError("cannot __enter__ multiple times")
 
         self._entered = True
         return self
@@ -1824,7 +1905,7 @@
 
             chunks.append(chunk)
 
-        return b''.join(chunks)
+        return b"".join(chunks)
 
     def __iter__(self):
         raise io.UnsupportedOperation()
@@ -1844,7 +1925,7 @@
             return
 
         # Else populate the input buffer from our source.
-        if hasattr(self._source, 'read'):
+        if hasattr(self._source, "read"):
             data = self._source.read(self._read_size)
 
             if not data:
@@ -1866,8 +1947,9 @@
 
         Returns True if data in output buffer should be emitted.
         """
-        zresult = lib.ZSTD_decompressStream(self._decompressor._dctx,
-                                            out_buffer, self._in_buffer)
+        zresult = lib.ZSTD_decompressStream(
+            self._decompressor._dctx, out_buffer, self._in_buffer
+        )
 
         if self._in_buffer.pos == self._in_buffer.size:
             self._in_buffer.src = ffi.NULL
@@ -1875,38 +1957,39 @@
             self._in_buffer.size = 0
             self._source_buffer = None
 
-            if not hasattr(self._source, 'read'):
+            if not hasattr(self._source, "read"):
                 self._finished_input = True
 
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('zstd decompress error: %s' %
-                            _zstd_error(zresult))
+            raise ZstdError("zstd decompress error: %s" % _zstd_error(zresult))
 
         # Emit data if there is data AND either:
         # a) output buffer is full (read amount is satisfied)
         # b) we're at end of a frame and not in frame spanning mode
-        return (out_buffer.pos and
-                (out_buffer.pos == out_buffer.size or
-                 zresult == 0 and not self._read_across_frames))
+        return out_buffer.pos and (
+            out_buffer.pos == out_buffer.size
+            or zresult == 0
+            and not self._read_across_frames
+        )
 
     def read(self, size=-1):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         if size < -1:
-            raise ValueError('cannot read negative amounts less than -1')
+            raise ValueError("cannot read negative amounts less than -1")
 
         if size == -1:
             # This is recursive. But it gets the job done.
             return self.readall()
 
         if self._finished_output or size == 0:
-            return b''
+            return b""
 
         # We /could/ call into readinto() here. But that introduces more
         # overhead.
-        dst_buffer = ffi.new('char[]', size)
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+        dst_buffer = ffi.new("char[]", size)
+        out_buffer = ffi.new("ZSTD_outBuffer *")
         out_buffer.dst = dst_buffer
         out_buffer.size = size
         out_buffer.pos = 0
@@ -1927,15 +2010,15 @@
 
     def readinto(self, b):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         if self._finished_output:
             return 0
 
         # TODO use writable=True once we require CFFI >= 1.12.
         dest_buffer = ffi.from_buffer(b)
-        ffi.memmove(b, b'', 0)
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+        ffi.memmove(b, b"", 0)
+        out_buffer = ffi.new("ZSTD_outBuffer *")
         out_buffer.dst = dest_buffer
         out_buffer.size = len(dest_buffer)
         out_buffer.pos = 0
@@ -1956,20 +2039,20 @@
 
     def read1(self, size=-1):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         if size < -1:
-            raise ValueError('cannot read negative amounts less than -1')
+            raise ValueError("cannot read negative amounts less than -1")
 
         if self._finished_output or size == 0:
-            return b''
+            return b""
 
         # -1 returns arbitrary number of bytes.
         if size == -1:
             size = DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE
 
-        dst_buffer = ffi.new('char[]', size)
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+        dst_buffer = ffi.new("char[]", size)
+        out_buffer = ffi.new("ZSTD_outBuffer *")
         out_buffer.dst = dst_buffer
         out_buffer.size = size
         out_buffer.pos = 0
@@ -1990,16 +2073,16 @@
 
     def readinto1(self, b):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         if self._finished_output:
             return 0
 
         # TODO use writable=True once we require CFFI >= 1.12.
         dest_buffer = ffi.from_buffer(b)
-        ffi.memmove(b, b'', 0)
-
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+        ffi.memmove(b, b"", 0)
+
+        out_buffer = ffi.new("ZSTD_outBuffer *")
         out_buffer.dst = dest_buffer
         out_buffer.size = len(dest_buffer)
         out_buffer.pos = 0
@@ -2016,33 +2099,31 @@
 
     def seek(self, pos, whence=os.SEEK_SET):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         read_amount = 0
 
         if whence == os.SEEK_SET:
             if pos < 0:
-                raise ValueError('cannot seek to negative position with SEEK_SET')
+                raise ValueError("cannot seek to negative position with SEEK_SET")
 
             if pos < self._bytes_decompressed:
-                raise ValueError('cannot seek zstd decompression stream '
-                                 'backwards')
+                raise ValueError("cannot seek zstd decompression stream " "backwards")
 
             read_amount = pos - self._bytes_decompressed
 
         elif whence == os.SEEK_CUR:
             if pos < 0:
-                raise ValueError('cannot seek zstd decompression stream '
-                                 'backwards')
+                raise ValueError("cannot seek zstd decompression stream " "backwards")
 
             read_amount = pos
         elif whence == os.SEEK_END:
-            raise ValueError('zstd decompression streams cannot be seeked '
-                             'with SEEK_END')
+            raise ValueError(
+                "zstd decompression streams cannot be seeked " "with SEEK_END"
+            )
 
         while read_amount:
-            result = self.read(min(read_amount,
-                                   DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE))
+            result = self.read(min(read_amount, DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE))
 
             if not result:
                 break
@@ -2051,6 +2132,7 @@
 
         return self._bytes_decompressed
 
+
 class ZstdDecompressionWriter(object):
     def __init__(self, decompressor, writer, write_size, write_return_read):
         decompressor._ensure_dctx()
@@ -2064,10 +2146,10 @@
 
     def __enter__(self):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         if self._entered:
-            raise ZstdError('cannot __enter__ multiple times')
+            raise ZstdError("cannot __enter__ multiple times")
 
         self._entered = True
 
@@ -2089,7 +2171,7 @@
         finally:
             self._closed = True
 
-        f = getattr(self._writer, 'close', None)
+        f = getattr(self._writer, "close", None)
         if f:
             f()
 
@@ -2098,17 +2180,17 @@
         return self._closed
 
     def fileno(self):
-        f = getattr(self._writer, 'fileno', None)
+        f = getattr(self._writer, "fileno", None)
         if f:
             return f()
         else:
-            raise OSError('fileno not available on underlying writer')
+            raise OSError("fileno not available on underlying writer")
 
     def flush(self):
         if self._closed:
-            raise ValueError('stream is closed')
-
-        f = getattr(self._writer, 'flush', None)
+            raise ValueError("stream is closed")
+
+        f = getattr(self._writer, "flush", None)
         if f:
             return f()
 
@@ -2153,19 +2235,19 @@
 
     def write(self, data):
         if self._closed:
-            raise ValueError('stream is closed')
+            raise ValueError("stream is closed")
 
         total_write = 0
 
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+        in_buffer = ffi.new("ZSTD_inBuffer *")
+        out_buffer = ffi.new("ZSTD_outBuffer *")
 
         data_buffer = ffi.from_buffer(data)
         in_buffer.src = data_buffer
         in_buffer.size = len(data_buffer)
         in_buffer.pos = 0
 
-        dst_buffer = ffi.new('char[]', self._write_size)
+        dst_buffer = ffi.new("char[]", self._write_size)
         out_buffer.dst = dst_buffer
         out_buffer.size = len(dst_buffer)
         out_buffer.pos = 0
@@ -2175,8 +2257,7 @@
         while in_buffer.pos < in_buffer.size:
             zresult = lib.ZSTD_decompressStream(dctx, out_buffer, in_buffer)
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd decompress error: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError("zstd decompress error: %s" % _zstd_error(zresult))
 
             if out_buffer.pos:
                 self._writer.write(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
@@ -2206,8 +2287,9 @@
         try:
             self._ensure_dctx()
         finally:
-            self._dctx = ffi.gc(dctx, lib.ZSTD_freeDCtx,
-                                size=lib.ZSTD_sizeof_DCtx(dctx))
+            self._dctx = ffi.gc(
+                dctx, lib.ZSTD_freeDCtx, size=lib.ZSTD_sizeof_DCtx(dctx)
+            )
 
     def memory_size(self):
         return lib.ZSTD_sizeof_DCtx(self._dctx)
@@ -2220,85 +2302,96 @@
         output_size = lib.ZSTD_getFrameContentSize(data_buffer, len(data_buffer))
 
         if output_size == lib.ZSTD_CONTENTSIZE_ERROR:
-            raise ZstdError('error determining content size from frame header')
+            raise ZstdError("error determining content size from frame header")
         elif output_size == 0:
-            return b''
+            return b""
         elif output_size == lib.ZSTD_CONTENTSIZE_UNKNOWN:
             if not max_output_size:
-                raise ZstdError('could not determine content size in frame header')
-
-            result_buffer = ffi.new('char[]', max_output_size)
+                raise ZstdError("could not determine content size in frame header")
+
+            result_buffer = ffi.new("char[]", max_output_size)
             result_size = max_output_size
             output_size = 0
         else:
-            result_buffer = ffi.new('char[]', output_size)
+            result_buffer = ffi.new("char[]", output_size)
             result_size = output_size
 
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+        out_buffer = ffi.new("ZSTD_outBuffer *")
         out_buffer.dst = result_buffer
         out_buffer.size = result_size
         out_buffer.pos = 0
 
-        in_buffer = ffi.new('ZSTD_inBuffer *')
+        in_buffer = ffi.new("ZSTD_inBuffer *")
         in_buffer.src = data_buffer
         in_buffer.size = len(data_buffer)
         in_buffer.pos = 0
 
         zresult = lib.ZSTD_decompressStream(self._dctx, out_buffer, in_buffer)
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('decompression error: %s' %
-                            _zstd_error(zresult))
+            raise ZstdError("decompression error: %s" % _zstd_error(zresult))
         elif zresult:
-            raise ZstdError('decompression error: did not decompress full frame')
+            raise ZstdError("decompression error: did not decompress full frame")
         elif output_size and out_buffer.pos != output_size:
-            raise ZstdError('decompression error: decompressed %d bytes; expected %d' %
-                            (zresult, output_size))
+            raise ZstdError(
+                "decompression error: decompressed %d bytes; expected %d"
+                % (zresult, output_size)
+            )
 
         return ffi.buffer(result_buffer, out_buffer.pos)[:]
 
-    def stream_reader(self, source, read_size=DECOMPRESSION_RECOMMENDED_INPUT_SIZE,
-                      read_across_frames=False):
+    def stream_reader(
+        self,
+        source,
+        read_size=DECOMPRESSION_RECOMMENDED_INPUT_SIZE,
+        read_across_frames=False,
+    ):
         self._ensure_dctx()
         return ZstdDecompressionReader(self, source, read_size, read_across_frames)
 
     def decompressobj(self, write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE):
         if write_size < 1:
-            raise ValueError('write_size must be positive')
+            raise ValueError("write_size must be positive")
 
         self._ensure_dctx()
         return ZstdDecompressionObj(self, write_size=write_size)
 
-    def read_to_iter(self, reader, read_size=DECOMPRESSION_RECOMMENDED_INPUT_SIZE,
-                     write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE,
-                     skip_bytes=0):
+    def read_to_iter(
+        self,
+        reader,
+        read_size=DECOMPRESSION_RECOMMENDED_INPUT_SIZE,
+        write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE,
+        skip_bytes=0,
+    ):
         if skip_bytes >= read_size:
-            raise ValueError('skip_bytes must be smaller than read_size')
-
-        if hasattr(reader, 'read'):
+            raise ValueError("skip_bytes must be smaller than read_size")
+
+        if hasattr(reader, "read"):
             have_read = True
-        elif hasattr(reader, '__getitem__'):
+        elif hasattr(reader, "__getitem__"):
             have_read = False
             buffer_offset = 0
             size = len(reader)
         else:
-            raise ValueError('must pass an object with a read() method or '
-                             'conforms to buffer protocol')
+            raise ValueError(
+                "must pass an object with a read() method or "
+                "conforms to buffer protocol"
+            )
 
         if skip_bytes:
             if have_read:
                 reader.read(skip_bytes)
             else:
                 if skip_bytes > size:
-                    raise ValueError('skip_bytes larger than first input chunk')
+                    raise ValueError("skip_bytes larger than first input chunk")
 
                 buffer_offset = skip_bytes
 
         self._ensure_dctx()
 
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-
-        dst_buffer = ffi.new('char[]', write_size)
+        in_buffer = ffi.new("ZSTD_inBuffer *")
+        out_buffer = ffi.new("ZSTD_outBuffer *")
+
+        dst_buffer = ffi.new("char[]", write_size)
         out_buffer.dst = dst_buffer
         out_buffer.size = len(dst_buffer)
         out_buffer.pos = 0
@@ -2311,7 +2404,7 @@
             else:
                 remaining = size - buffer_offset
                 slice_size = min(remaining, read_size)
-                read_result = reader[buffer_offset:buffer_offset + slice_size]
+                read_result = reader[buffer_offset : buffer_offset + slice_size]
                 buffer_offset += slice_size
 
             # No new input. Break out of read loop.
@@ -2330,8 +2423,7 @@
 
                 zresult = lib.ZSTD_decompressStream(self._dctx, out_buffer, in_buffer)
                 if lib.ZSTD_isError(zresult):
-                    raise ZstdError('zstd decompress error: %s' %
-                                    _zstd_error(zresult))
+                    raise ZstdError("zstd decompress error: %s" % _zstd_error(zresult))
 
                 if out_buffer.pos:
                     data = ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
@@ -2348,30 +2440,37 @@
 
     read_from = read_to_iter
 
-    def stream_writer(self, writer, write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE,
-                      write_return_read=False):
-        if not hasattr(writer, 'write'):
-            raise ValueError('must pass an object with a write() method')
-
-        return ZstdDecompressionWriter(self, writer, write_size,
-                                       write_return_read)
+    def stream_writer(
+        self,
+        writer,
+        write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE,
+        write_return_read=False,
+    ):
+        if not hasattr(writer, "write"):
+            raise ValueError("must pass an object with a write() method")
+
+        return ZstdDecompressionWriter(self, writer, write_size, write_return_read)
 
     write_to = stream_writer
 
-    def copy_stream(self, ifh, ofh,
-                    read_size=DECOMPRESSION_RECOMMENDED_INPUT_SIZE,
-                    write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE):
-        if not hasattr(ifh, 'read'):
-            raise ValueError('first argument must have a read() method')
-        if not hasattr(ofh, 'write'):
-            raise ValueError('second argument must have a write() method')
+    def copy_stream(
+        self,
+        ifh,
+        ofh,
+        read_size=DECOMPRESSION_RECOMMENDED_INPUT_SIZE,
+        write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE,
+    ):
+        if not hasattr(ifh, "read"):
+            raise ValueError("first argument must have a read() method")
+        if not hasattr(ofh, "write"):
+            raise ValueError("second argument must have a write() method")
 
         self._ensure_dctx()
 
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-
-        dst_buffer = ffi.new('char[]', write_size)
+        in_buffer = ffi.new("ZSTD_inBuffer *")
+        out_buffer = ffi.new("ZSTD_outBuffer *")
+
+        dst_buffer = ffi.new("char[]", write_size)
         out_buffer.dst = dst_buffer
         out_buffer.size = write_size
         out_buffer.pos = 0
@@ -2394,8 +2493,9 @@
             while in_buffer.pos < in_buffer.size:
                 zresult = lib.ZSTD_decompressStream(self._dctx, out_buffer, in_buffer)
                 if lib.ZSTD_isError(zresult):
-                    raise ZstdError('zstd decompressor error: %s' %
-                                    _zstd_error(zresult))
+                    raise ZstdError(
+                        "zstd decompressor error: %s" % _zstd_error(zresult)
+                    )
 
                 if out_buffer.pos:
                     ofh.write(ffi.buffer(out_buffer.dst, out_buffer.pos))
@@ -2408,48 +2508,47 @@
 
     def decompress_content_dict_chain(self, frames):
         if not isinstance(frames, list):
-            raise TypeError('argument must be a list')
+            raise TypeError("argument must be a list")
 
         if not frames:
-            raise ValueError('empty input chain')
+            raise ValueError("empty input chain")
 
         # First chunk should not be using a dictionary. We handle it specially.
         chunk = frames[0]
         if not isinstance(chunk, bytes_type):
-            raise ValueError('chunk 0 must be bytes')
+            raise ValueError("chunk 0 must be bytes")
 
         # All chunks should be zstd frames and should have content size set.
         chunk_buffer = ffi.from_buffer(chunk)
-        params = ffi.new('ZSTD_frameHeader *')
+        params = ffi.new("ZSTD_frameHeader *")
         zresult = lib.ZSTD_getFrameHeader(params, chunk_buffer, len(chunk_buffer))
         if lib.ZSTD_isError(zresult):
-            raise ValueError('chunk 0 is not a valid zstd frame')
+            raise ValueError("chunk 0 is not a valid zstd frame")
         elif zresult:
-            raise ValueError('chunk 0 is too small to contain a zstd frame')
+            raise ValueError("chunk 0 is too small to contain a zstd frame")
 
         if params.frameContentSize == lib.ZSTD_CONTENTSIZE_UNKNOWN:
-            raise ValueError('chunk 0 missing content size in frame')
+            raise ValueError("chunk 0 missing content size in frame")
 
         self._ensure_dctx(load_dict=False)
 
-        last_buffer = ffi.new('char[]', params.frameContentSize)
-
-        out_buffer = ffi.new('ZSTD_outBuffer *')
+        last_buffer = ffi.new("char[]", params.frameContentSize)
+
+        out_buffer = ffi.new("ZSTD_outBuffer *")
         out_buffer.dst = last_buffer
         out_buffer.size = len(last_buffer)
         out_buffer.pos = 0
 
-        in_buffer = ffi.new('ZSTD_inBuffer *')
+        in_buffer = ffi.new("ZSTD_inBuffer *")
         in_buffer.src = chunk_buffer
         in_buffer.size = len(chunk_buffer)
         in_buffer.pos = 0
 
         zresult = lib.ZSTD_decompressStream(self._dctx, out_buffer, in_buffer)
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('could not decompress chunk 0: %s' %
-                            _zstd_error(zresult))
+            raise ZstdError("could not decompress chunk 0: %s" % _zstd_error(zresult))
         elif zresult:
-            raise ZstdError('chunk 0 did not decompress full frame')
+            raise ZstdError("chunk 0 did not decompress full frame")
 
         # Special case of chain length of 1
         if len(frames) == 1:
@@ -2459,19 +2558,19 @@
         while i < len(frames):
             chunk = frames[i]
             if not isinstance(chunk, bytes_type):
-                raise ValueError('chunk %d must be bytes' % i)
+                raise ValueError("chunk %d must be bytes" % i)
 
             chunk_buffer = ffi.from_buffer(chunk)
             zresult = lib.ZSTD_getFrameHeader(params, chunk_buffer, len(chunk_buffer))
             if lib.ZSTD_isError(zresult):
-                raise ValueError('chunk %d is not a valid zstd frame' % i)
+                raise ValueError("chunk %d is not a valid zstd frame" % i)
             elif zresult:
-                raise ValueError('chunk %d is too small to contain a zstd frame' % i)
+                raise ValueError("chunk %d is too small to contain a zstd frame" % i)
 
             if params.frameContentSize == lib.ZSTD_CONTENTSIZE_UNKNOWN:
-                raise ValueError('chunk %d missing content size in frame' % i)
-
-            dest_buffer = ffi.new('char[]', params.frameContentSize)
+                raise ValueError("chunk %d missing content size in frame" % i)
+
+            dest_buffer = ffi.new("char[]", params.frameContentSize)
 
             out_buffer.dst = dest_buffer
             out_buffer.size = len(dest_buffer)
@@ -2483,10 +2582,11 @@
 
             zresult = lib.ZSTD_decompressStream(self._dctx, out_buffer, in_buffer)
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('could not decompress chunk %d: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError(
+                    "could not decompress chunk %d: %s" % _zstd_error(zresult)
+                )
             elif zresult:
-                raise ZstdError('chunk %d did not decompress full frame' % i)
+                raise ZstdError("chunk %d did not decompress full frame" % i)
 
             last_buffer = dest_buffer
             i += 1
@@ -2497,19 +2597,19 @@
         lib.ZSTD_DCtx_reset(self._dctx, lib.ZSTD_reset_session_only)
 
         if self._max_window_size:
-            zresult = lib.ZSTD_DCtx_setMaxWindowSize(self._dctx,
-                                                     self._max_window_size)
+            zresult = lib.ZSTD_DCtx_setMaxWindowSize(self._dctx, self._max_window_size)
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('unable to set max window size: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError(
+                    "unable to set max window size: %s" % _zstd_error(zresult)
+                )
 
         zresult = lib.ZSTD_DCtx_setFormat(self._dctx, self._format)
         if lib.ZSTD_isError(zresult):
-            raise ZstdError('unable to set decoding format: %s' %
-                            _zstd_error(zresult))
+            raise ZstdError("unable to set decoding format: %s" % _zstd_error(zresult))
 
         if self._dict_data and load_dict:
             zresult = lib.ZSTD_DCtx_refDDict(self._dctx, self._dict_data._ddict)
             if lib.ZSTD_isError(zresult):
-                raise ZstdError('unable to reference prepared dictionary: %s' %
-                                _zstd_error(zresult))
+                raise ZstdError(
+                    "unable to reference prepared dictionary: %s" % _zstd_error(zresult)
+                )
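
The cffi.py hunks above are a mechanical re-wrap for black; the decompression API they touch is unchanged. As a quick, illustrative sanity check of the reformatted paths (a minimal sketch, not part of the vendored files, assuming the bundled `zstandard` package is importable and using made-up sample data):

import io
import zstandard

payload = b"mercurial " * 4096          # made-up sample data
frame = zstandard.ZstdCompressor().compress(payload)

dctx = zstandard.ZstdDecompressor()

# stream_reader(): read() plus the forward-only seek() shown above.
reader = dctx.stream_reader(io.BytesIO(frame))
assert reader.read(10) == payload[:10]
reader.seek(100)                        # os.SEEK_SET; only forward seeks work
assert reader.read(5) == payload[100:105]

# copy_stream(): decompress one file object into another.
dst = io.BytesIO()
read_count, write_count = dctx.copy_stream(io.BytesIO(frame), dst)
assert dst.getvalue() == payload

# stream_writer(): feed compressed bytes, decompressed output is written out.
out = io.BytesIO()
writer = dctx.stream_writer(out)
writer.write(frame)
assert out.getvalue() == payload
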
--- a/contrib/python-zstandard/zstd.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd.c	Sat Dec 28 09:55:45 2019 -0800
@@ -210,7 +210,7 @@
 	   We detect this mismatch here and refuse to load the module if this
 	   scenario is detected.
 	*/
-	if (ZSTD_VERSION_NUMBER != 10403 || ZSTD_versionNumber() != 10403) {
+	if (ZSTD_VERSION_NUMBER != 10404 || ZSTD_versionNumber() != 10404) {
 		PyErr_SetString(PyExc_ImportError, "zstd C API mismatch; Python bindings not compiled against expected zstd version");
 		return;
 	}
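
The only substantive change to zstd.c is the version guard moving from 10403 to 10404. ZSTD_VERSION_NUMBER packs the library version as major*10000 + minor*100 + release, so 10404 corresponds to zstd 1.4.4. A short illustrative check from the Python side (assuming the bindings expose the bundled library version as a `ZSTD_VERSION` tuple, as the test suite relies on):

import zstandard

# ZSTD_VERSION_NUMBER = major * 10000 + minor * 100 + release,
# so the 10404 guard above corresponds to zstd 1.4.4.
assert 1 * 10000 + 4 * 100 + 4 == 10404

# Assumption: the bindings expose the bundled library version as a tuple.
assert zstandard.ZSTD_VERSION == (1, 4, 4)
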
--- a/contrib/python-zstandard/zstd/common/bitstream.h	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/common/bitstream.h	Sat Dec 28 09:55:45 2019 -0800
@@ -164,7 +164,7 @@
         _BitScanReverse ( &r, val );
         return (unsigned) r;
 #   elif defined(__GNUC__) && (__GNUC__ >= 3)   /* Use GCC Intrinsic */
-        return 31 - __builtin_clz (val);
+        return __builtin_clz (val) ^ 31;
 #   elif defined(__ICCARM__)    /* IAR Intrinsic */
         return 31 - __CLZ(val);
 #   else   /* Software version */
@@ -244,9 +244,9 @@
 {
     size_t const nbBytes = bitC->bitPos >> 3;
     assert(bitC->bitPos < sizeof(bitC->bitContainer) * 8);
+    assert(bitC->ptr <= bitC->endPtr);
     MEM_writeLEST(bitC->ptr, bitC->bitContainer);
     bitC->ptr += nbBytes;
-    assert(bitC->ptr <= bitC->endPtr);
     bitC->bitPos &= 7;
     bitC->bitContainer >>= nbBytes*8;
 }
@@ -260,6 +260,7 @@
 {
     size_t const nbBytes = bitC->bitPos >> 3;
     assert(bitC->bitPos < sizeof(bitC->bitContainer) * 8);
+    assert(bitC->ptr <= bitC->endPtr);
     MEM_writeLEST(bitC->ptr, bitC->bitContainer);
     bitC->ptr += nbBytes;
     if (bitC->ptr > bitC->endPtr) bitC->ptr = bitC->endPtr;
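
The bitstream.h hunk replaces `31 - __builtin_clz(val)` with `__builtin_clz(val) ^ 31`. For any non-zero 32-bit value the leading-zero count lies in [0, 31], and on that range subtracting from 31 and XOR-ing with 31 (0b11111) give the same result, so BIT_highbit32() behaves exactly as before. A small illustrative check in Python, modelling clz with `int.bit_length()`:

def clz32(v):
    # Leading-zero count of a non-zero 32-bit value.
    assert 0 < v < 2 ** 32
    return 32 - v.bit_length()

# For clz in [0, 31], 31 - clz and clz ^ 31 are the same value, so the
# rewritten BIT_highbit32() returns exactly what it did before.
for v in (1, 2, 3, 0x80, 0xFFFF, 0x7FFFFFFF, 0xFFFFFFFF):
    assert 31 - clz32(v) == clz32(v) ^ 31 == v.bit_length() - 1
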
--- a/contrib/python-zstandard/zstd/common/compiler.h	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/common/compiler.h	Sat Dec 28 09:55:45 2019 -0800
@@ -61,6 +61,13 @@
 #  define HINT_INLINE static INLINE_KEYWORD FORCE_INLINE_ATTR
 #endif
 
+/* UNUSED_ATTR tells the compiler it is okay if the function is unused. */
+#if defined(__GNUC__)
+#  define UNUSED_ATTR __attribute__((unused))
+#else
+#  define UNUSED_ATTR
+#endif
+
 /* force no inlining */
 #ifdef _MSC_VER
 #  define FORCE_NOINLINE static __declspec(noinline)
@@ -127,9 +134,14 @@
     }                                     \
 }
 
-/* vectorization */
+/* vectorization
+ * older GCC (pre gcc-4.3 picked as the cutoff) uses a different syntax */
 #if !defined(__clang__) && defined(__GNUC__)
-#  define DONT_VECTORIZE __attribute__((optimize("no-tree-vectorize")))
+#  if (__GNUC__ == 4 && __GNUC_MINOR__ > 3) || (__GNUC__ >= 5)
+#    define DONT_VECTORIZE __attribute__((optimize("no-tree-vectorize")))
+#  else
+#    define DONT_VECTORIZE _Pragma("GCC optimize(\"no-tree-vectorize\")")
+#  endif
 #else
 #  define DONT_VECTORIZE
 #endif
--- a/contrib/python-zstandard/zstd/common/fse.h	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/common/fse.h	Sat Dec 28 09:55:45 2019 -0800
@@ -308,7 +308,7 @@
 *******************************************/
 /* FSE buffer bounds */
 #define FSE_NCOUNTBOUND 512
-#define FSE_BLOCKBOUND(size) (size + (size>>7))
+#define FSE_BLOCKBOUND(size) (size + (size>>7) + 4 /* fse states */ + sizeof(size_t) /* bitContainer */)
 #define FSE_COMPRESSBOUND(size) (FSE_NCOUNTBOUND + FSE_BLOCKBOUND(size))   /* Macro version, useful for static allocation */
 
 /* It is possible to statically allocate FSE CTable/DTable as a table of FSE_CTable/FSE_DTable using below macros */
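
FSE_BLOCKBOUND() now adds a constant 4 bytes for the FSE states plus sizeof(size_t) for the bit container on top of the previous `size + (size >> 7)`, and FSE_COMPRESSBOUND() grows by the same amount. A quick illustrative model of the old and new bounds (constants copied from the macros above; a 64-bit size_t is assumed):

FSE_NCOUNTBOUND = 512
SIZEOF_SIZE_T = 8                 # assumption: 64-bit build

def fse_blockbound_old(size):
    return size + (size >> 7)

def fse_blockbound_new(size):
    # + 4 bytes for the FSE states, + sizeof(size_t) for the bit container
    return size + (size >> 7) + 4 + SIZEOF_SIZE_T

def fse_compressbound(size):
    return FSE_NCOUNTBOUND + fse_blockbound_new(size)

for size in (0, 1, 128, 4096, 1 << 17):
    assert fse_blockbound_new(size) - fse_blockbound_old(size) == 4 + SIZEOF_SIZE_T
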
--- a/contrib/python-zstandard/zstd/common/fse_decompress.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/common/fse_decompress.c	Sat Dec 28 09:55:45 2019 -0800
@@ -52,7 +52,9 @@
 #define FSE_STATIC_ASSERT(c) DEBUG_STATIC_ASSERT(c)   /* use only *after* variable declarations */
 
 /* check and forward error code */
+#ifndef CHECK_F
 #define CHECK_F(f) { size_t const e = f; if (FSE_isError(e)) return e; }
+#endif
 
 
 /* **************************************************************
--- a/contrib/python-zstandard/zstd/common/mem.h	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/common/mem.h	Sat Dec 28 09:55:45 2019 -0800
@@ -47,6 +47,79 @@
 #define MEM_STATIC_ASSERT(c)   { enum { MEM_static_assert = 1/(int)(!!(c)) }; }
 MEM_STATIC void MEM_check(void) { MEM_STATIC_ASSERT((sizeof(size_t)==4) || (sizeof(size_t)==8)); }
 
+/* detects whether we are being compiled under msan */
+#if defined (__has_feature)
+#  if __has_feature(memory_sanitizer)
+#    define MEMORY_SANITIZER 1
+#  endif
+#endif
+
+#if defined (MEMORY_SANITIZER)
+/* Not all platforms that support msan provide sanitizers/msan_interface.h.
+ * We therefore declare the functions we need ourselves, rather than trying to
+ * include the header file... */
+
+#include <stdint.h> /* intptr_t */
+
+/* Make memory region fully initialized (without changing its contents). */
+void __msan_unpoison(const volatile void *a, size_t size);
+
+/* Make memory region fully uninitialized (without changing its contents).
+   This is a legacy interface that does not update origin information. Use
+   __msan_allocated_memory() instead. */
+void __msan_poison(const volatile void *a, size_t size);
+
+/* Returns the offset of the first (at least partially) poisoned byte in the
+   memory range, or -1 if the whole range is good. */
+intptr_t __msan_test_shadow(const volatile void *x, size_t size);
+#endif
+
+/* detects whether we are being compiled under asan */
+#if defined (__has_feature)
+#  if __has_feature(address_sanitizer)
+#    define ADDRESS_SANITIZER 1
+#  endif
+#elif defined(__SANITIZE_ADDRESS__)
+#  define ADDRESS_SANITIZER 1
+#endif
+
+#if defined (ADDRESS_SANITIZER)
+/* Not all platforms that support asan provide sanitizers/asan_interface.h.
+ * We therefore declare the functions we need ourselves, rather than trying to
+ * include the header file... */
+
+/**
+ * Marks a memory region (<c>[addr, addr+size)</c>) as unaddressable.
+ *
+ * This memory must be previously allocated by your program. Instrumented
+ * code is forbidden from accessing addresses in this region until it is
+ * unpoisoned. This function is not guaranteed to poison the entire region -
+ * it could poison only a subregion of <c>[addr, addr+size)</c> due to ASan
+ * alignment restrictions.
+ *
+ * \note This function is not thread-safe because no two threads can poison or
+ * unpoison memory in the same memory region simultaneously.
+ *
+ * \param addr Start of memory region.
+ * \param size Size of memory region. */
+void __asan_poison_memory_region(void const volatile *addr, size_t size);
+
+/**
+ * Marks a memory region (<c>[addr, addr+size)</c>) as addressable.
+ *
+ * This memory must be previously allocated by your program. Accessing
+ * addresses in this region is allowed until this region is poisoned again.
+ * This function could unpoison a super-region of <c>[addr, addr+size)</c> due
+ * to ASan alignment restrictions.
+ *
+ * \note This function is not thread-safe because no two threads can
+ * poison or unpoison memory in the same memory region simultaneously.
+ *
+ * \param addr Start of memory region.
+ * \param size Size of memory region. */
+void __asan_unpoison_memory_region(void const volatile *addr, size_t size);
+#endif
+
 
 /*-**************************************************************
 *  Basic Types
--- a/contrib/python-zstandard/zstd/common/pool.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/common/pool.c	Sat Dec 28 09:55:45 2019 -0800
@@ -127,9 +127,13 @@
     ctx->queueTail = 0;
     ctx->numThreadsBusy = 0;
     ctx->queueEmpty = 1;
-    (void)ZSTD_pthread_mutex_init(&ctx->queueMutex, NULL);
-    (void)ZSTD_pthread_cond_init(&ctx->queuePushCond, NULL);
-    (void)ZSTD_pthread_cond_init(&ctx->queuePopCond, NULL);
+    {
+        int error = 0;
+        error |= ZSTD_pthread_mutex_init(&ctx->queueMutex, NULL);
+        error |= ZSTD_pthread_cond_init(&ctx->queuePushCond, NULL);
+        error |= ZSTD_pthread_cond_init(&ctx->queuePopCond, NULL);
+        if (error) { POOL_free(ctx); return NULL; }
+    }
     ctx->shutdown = 0;
     /* Allocate space for the thread handles */
     ctx->threads = (ZSTD_pthread_t*)ZSTD_malloc(numThreads * sizeof(ZSTD_pthread_t), customMem);
--- a/contrib/python-zstandard/zstd/common/threading.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/common/threading.c	Sat Dec 28 09:55:45 2019 -0800
@@ -14,6 +14,8 @@
  * This file will hold wrapper for systems, which do not support pthreads
  */
 
+#include "threading.h"
+
 /* create fake symbol to avoid empty translation unit warning */
 int g_ZSTD_threading_useless_symbol;
 
@@ -28,7 +30,6 @@
 /* ===  Dependencies  === */
 #include <process.h>
 #include <errno.h>
-#include "threading.h"
 
 
 /* ===  Implementation  === */
@@ -73,3 +74,47 @@
 }
 
 #endif   /* ZSTD_MULTITHREAD */
+
+#if defined(ZSTD_MULTITHREAD) && DEBUGLEVEL >= 1 && !defined(_WIN32)
+
+#include <stdlib.h>
+
+int ZSTD_pthread_mutex_init(ZSTD_pthread_mutex_t* mutex, pthread_mutexattr_t const* attr)
+{
+    *mutex = (pthread_mutex_t*)malloc(sizeof(pthread_mutex_t));
+    if (!*mutex)
+        return 1;
+    return pthread_mutex_init(*mutex, attr);
+}
+
+int ZSTD_pthread_mutex_destroy(ZSTD_pthread_mutex_t* mutex)
+{
+    if (!*mutex)
+        return 0;
+    {
+        int const ret = pthread_mutex_destroy(*mutex);
+        free(*mutex);
+        return ret;
+    }
+}
+
+int ZSTD_pthread_cond_init(ZSTD_pthread_cond_t* cond, pthread_condattr_t const* attr)
+{
+    *cond = (pthread_cond_t*)malloc(sizeof(pthread_cond_t));
+    if (!*cond)
+        return 1;
+    return pthread_cond_init(*cond, attr);
+}
+
+int ZSTD_pthread_cond_destroy(ZSTD_pthread_cond_t* cond)
+{
+    if (!*cond)
+        return 0;
+    {
+        int const ret = pthread_cond_destroy(*cond);
+        free(*cond);
+        return ret;
+    }
+}
+
+#endif
--- a/contrib/python-zstandard/zstd/common/threading.h	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/common/threading.h	Sat Dec 28 09:55:45 2019 -0800
@@ -13,6 +13,8 @@
 #ifndef THREADING_H_938743
 #define THREADING_H_938743
 
+#include "debug.h"
+
 #if defined (__cplusplus)
 extern "C" {
 #endif
@@ -75,10 +77,12 @@
  */
 
 
-#elif defined(ZSTD_MULTITHREAD)   /* posix assumed ; need a better detection method */
+#elif defined(ZSTD_MULTITHREAD)    /* posix assumed ; need a better detection method */
 /* ===   POSIX Systems   === */
 #  include <pthread.h>
 
+#if DEBUGLEVEL < 1
+
 #define ZSTD_pthread_mutex_t            pthread_mutex_t
 #define ZSTD_pthread_mutex_init(a, b)   pthread_mutex_init((a), (b))
 #define ZSTD_pthread_mutex_destroy(a)   pthread_mutex_destroy((a))
@@ -96,6 +100,33 @@
 #define ZSTD_pthread_create(a, b, c, d) pthread_create((a), (b), (c), (d))
 #define ZSTD_pthread_join(a, b)         pthread_join((a),(b))
 
+#else /* DEBUGLEVEL >= 1 */
+
+/* Debug implementation of threading.
+ * In this implementation we use pointers for mutexes and condition variables.
+ * This way, if we forget to init/destroy them the program will crash or ASAN
+ * will report leaks.
+ */
+
+#define ZSTD_pthread_mutex_t            pthread_mutex_t*
+int ZSTD_pthread_mutex_init(ZSTD_pthread_mutex_t* mutex, pthread_mutexattr_t const* attr);
+int ZSTD_pthread_mutex_destroy(ZSTD_pthread_mutex_t* mutex);
+#define ZSTD_pthread_mutex_lock(a)      pthread_mutex_lock(*(a))
+#define ZSTD_pthread_mutex_unlock(a)    pthread_mutex_unlock(*(a))
+
+#define ZSTD_pthread_cond_t             pthread_cond_t*
+int ZSTD_pthread_cond_init(ZSTD_pthread_cond_t* cond, pthread_condattr_t const* attr);
+int ZSTD_pthread_cond_destroy(ZSTD_pthread_cond_t* cond);
+#define ZSTD_pthread_cond_wait(a, b)    pthread_cond_wait(*(a), *(b))
+#define ZSTD_pthread_cond_signal(a)     pthread_cond_signal(*(a))
+#define ZSTD_pthread_cond_broadcast(a)  pthread_cond_broadcast(*(a))
+
+#define ZSTD_pthread_t                  pthread_t
+#define ZSTD_pthread_create(a, b, c, d) pthread_create((a), (b), (c), (d))
+#define ZSTD_pthread_join(a, b)         pthread_join((a),(b))
+
+#endif
+
 #else  /* ZSTD_MULTITHREAD not defined */
 /* No multithreading support */
 
--- a/contrib/python-zstandard/zstd/common/zstd_internal.h	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/common/zstd_internal.h	Sat Dec 28 09:55:45 2019 -0800
@@ -197,79 +197,56 @@
 static void ZSTD_copy16(void* dst, const void* src) { memcpy(dst, src, 16); }
 #define COPY16(d,s) { ZSTD_copy16(d,s); d+=16; s+=16; }
 
-#define WILDCOPY_OVERLENGTH 8
-#define VECLEN 16
+#define WILDCOPY_OVERLENGTH 32
+#define WILDCOPY_VECLEN 16
 
 typedef enum {
     ZSTD_no_overlap,
-    ZSTD_overlap_src_before_dst,
+    ZSTD_overlap_src_before_dst
     /*  ZSTD_overlap_dst_before_src, */
 } ZSTD_overlap_e;
 
 /*! ZSTD_wildcopy() :
- *  custom version of memcpy(), can overwrite up to WILDCOPY_OVERLENGTH bytes (if length==0) */
+ *  Custom version of memcpy(), can over read/write up to WILDCOPY_OVERLENGTH bytes (if length==0)
+ *  @param ovtype controls the overlap detection
+ *         - ZSTD_no_overlap: The source and destination are guaranteed to be at least WILDCOPY_VECLEN bytes apart.
+ *         - ZSTD_overlap_src_before_dst: The src and dst may overlap, but they MUST be at least 8 bytes apart.
+ *           The src buffer must be before the dst buffer.
+ */
 MEM_STATIC FORCE_INLINE_ATTR DONT_VECTORIZE
-void ZSTD_wildcopy(void* dst, const void* src, ptrdiff_t length, ZSTD_overlap_e ovtype)
+void ZSTD_wildcopy(void* dst, const void* src, ptrdiff_t length, ZSTD_overlap_e const ovtype)
 {
     ptrdiff_t diff = (BYTE*)dst - (const BYTE*)src;
     const BYTE* ip = (const BYTE*)src;
     BYTE* op = (BYTE*)dst;
     BYTE* const oend = op + length;
 
-    assert(diff >= 8 || (ovtype == ZSTD_no_overlap && diff < -8));
-    if (length < VECLEN || (ovtype == ZSTD_overlap_src_before_dst && diff < VECLEN)) {
-      do
-          COPY8(op, ip)
-      while (op < oend);
-    }
-    else {
-      if ((length & 8) == 0)
-        COPY8(op, ip);
-      do {
-        COPY16(op, ip);
-      }
-      while (op < oend);
-    }
-}
-
-/*! ZSTD_wildcopy_16min() :
- *  same semantics as ZSTD_wilcopy() except guaranteed to be able to copy 16 bytes at the start */
-MEM_STATIC FORCE_INLINE_ATTR DONT_VECTORIZE
-void ZSTD_wildcopy_16min(void* dst, const void* src, ptrdiff_t length, ZSTD_overlap_e ovtype)
-{
-    ptrdiff_t diff = (BYTE*)dst - (const BYTE*)src;
-    const BYTE* ip = (const BYTE*)src;
-    BYTE* op = (BYTE*)dst;
-    BYTE* const oend = op + length;
+    assert(diff >= 8 || (ovtype == ZSTD_no_overlap && diff <= -WILDCOPY_VECLEN));
 
-    assert(length >= 8);
-    assert(diff >= 8 || (ovtype == ZSTD_no_overlap && diff < -8));
-
-    if (ovtype == ZSTD_overlap_src_before_dst && diff < VECLEN) {
-      do
-          COPY8(op, ip)
-      while (op < oend);
-    }
-    else {
-      if ((length & 8) == 0)
-        COPY8(op, ip);
-      do {
+    if (ovtype == ZSTD_overlap_src_before_dst && diff < WILDCOPY_VECLEN) {
+        /* Handle short offset copies. */
+        do {
+            COPY8(op, ip)
+        } while (op < oend);
+    } else {
+        assert(diff >= WILDCOPY_VECLEN || diff <= -WILDCOPY_VECLEN);
+        /* Separate out the first two COPY16() calls because the copy length is
+         * almost certain to be short, so the branches have different
+         * probabilities.
+         * On gcc-9 unrolling once is +1.6%, twice is +2%, thrice is +1.8%.
+         * On clang-8 unrolling once is +1.4%, twice is +3.3%, thrice is +3%.
+         */
         COPY16(op, ip);
-      }
-      while (op < oend);
+        COPY16(op, ip);
+        if (op >= oend) return;
+        do {
+            COPY16(op, ip);
+            COPY16(op, ip);
+        }
+        while (op < oend);
     }
 }
 
-MEM_STATIC void ZSTD_wildcopy_e(void* dst, const void* src, void* dstEnd)   /* should be faster for decoding, but strangely, not verified on all platform */
-{
-    const BYTE* ip = (const BYTE*)src;
-    BYTE* op = (BYTE*)dst;
-    BYTE* const oend = (BYTE*)dstEnd;
-    do
-        COPY8(op, ip)
-    while (op < oend);
-}
-
 
 /*-*******************************************
 *  Private declarations
@@ -323,7 +300,7 @@
         _BitScanReverse(&r, val);
         return (unsigned)r;
 #   elif defined(__GNUC__) && (__GNUC__ >= 3)   /* GCC Intrinsic */
-        return 31 - __builtin_clz(val);
+        return __builtin_clz (val) ^ 31;
 #   elif defined(__ICCARM__)    /* IAR Intrinsic */
         return 31 - __CLZ(val);
 #   else   /* Software version */
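
The zstd_internal.h hunk folds ZSTD_wildcopy_16min() and ZSTD_wildcopy_e() into a single ZSTD_wildcopy() that copies in 16-byte chunks (unrolled twice), may write up to the new WILDCOPY_OVERLENGTH of 32 bytes beyond `length`, and keeps an 8-byte-step path for overlapping copies whose offset is smaller than WILDCOPY_VECLEN. The following pure-Python model (illustrative only, a simplification of the C above, not part of the vendored code) mimics that control flow on a bytearray to show the repeat-pattern behaviour of short-offset copies and the harmless over-copy:

WILDCOPY_VECLEN = 16
WILDCOPY_OVERLENGTH = 32   # callers must leave this much slack past `length`

def wildcopy(buf, dst, src, length, overlap_src_before_dst):
    # Simplified model of ZSTD_wildcopy(): chunked copy that may overrun.
    diff = dst - src
    op, ip, oend = dst, src, dst + length
    if overlap_src_before_dst and diff < WILDCOPY_VECLEN:
        # Short-offset overlapping copy: 8 bytes at a time.  Later chunks read
        # bytes this loop already wrote, replicating a `diff`-byte pattern.
        while op < oend:
            buf[op:op + 8] = bytes(buf[ip:ip + 8])
            op += 8
            ip += 8
    else:
        # No close overlap: two unrolled 16-byte copies, then 16-byte steps.
        for _ in range(2):
            buf[op:op + 16] = bytes(buf[ip:ip + 16])
            op += 16
            ip += 16
        while op < oend:
            buf[op:op + 16] = bytes(buf[ip:ip + 16])
            op += 16
            ip += 16

# Offset-8 overlapping copy behaves like replicating an LZ match.
buf = bytearray(b"zstd1234" + bytes(56))
wildcopy(buf, dst=8, src=0, length=16, overlap_src_before_dst=True)
assert bytes(buf[:24]) == b"zstd1234" * 3

# Non-overlapping copies round up to 16-byte chunks, so up to
# WILDCOPY_OVERLENGTH bytes past `length` may be written (hence the slack).
buf = bytearray(bytes(range(32)) + bytes(64))
wildcopy(buf, dst=40, src=0, length=5, overlap_src_before_dst=False)
assert bytes(buf[40:45]) == bytes(range(5))    # the requested bytes
assert bytes(buf[40:72]) == bytes(range(32))   # plus the over-copy
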
--- a/contrib/python-zstandard/zstd/compress/zstd_compress.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/compress/zstd_compress.c	Sat Dec 28 09:55:45 2019 -0800
@@ -42,15 +42,15 @@
 *  Context memory management
 ***************************************/
 struct ZSTD_CDict_s {
-    void* dictBuffer;
     const void* dictContent;
     size_t dictContentSize;
-    void* workspace;
-    size_t workspaceSize;
+    U32* entropyWorkspace; /* entropy workspace of HUF_WORKSPACE_SIZE bytes */
+    ZSTD_cwksp workspace;
     ZSTD_matchState_t matchState;
     ZSTD_compressedBlockState_t cBlockState;
     ZSTD_customMem customMem;
     U32 dictID;
+    int compressionLevel; /* 0 indicates that advanced API was used to select CDict params */
 };  /* typedef'd to ZSTD_CDict within "zstd.h" */
 
 ZSTD_CCtx* ZSTD_createCCtx(void)
@@ -84,23 +84,26 @@
 
 ZSTD_CCtx* ZSTD_initStaticCCtx(void *workspace, size_t workspaceSize)
 {
-    ZSTD_CCtx* const cctx = (ZSTD_CCtx*) workspace;
+    ZSTD_cwksp ws;
+    ZSTD_CCtx* cctx;
     if (workspaceSize <= sizeof(ZSTD_CCtx)) return NULL;  /* minimum size */
     if ((size_t)workspace & 7) return NULL;  /* must be 8-aligned */
-    memset(workspace, 0, workspaceSize);   /* may be a bit generous, could memset be smaller ? */
+    ZSTD_cwksp_init(&ws, workspace, workspaceSize);
+
+    cctx = (ZSTD_CCtx*)ZSTD_cwksp_reserve_object(&ws, sizeof(ZSTD_CCtx));
+    if (cctx == NULL) {
+        return NULL;
+    }
+    memset(cctx, 0, sizeof(ZSTD_CCtx));
+    ZSTD_cwksp_move(&cctx->workspace, &ws);
     cctx->staticSize = workspaceSize;
-    cctx->workSpace = (void*)(cctx+1);
-    cctx->workSpaceSize = workspaceSize - sizeof(ZSTD_CCtx);
 
     /* statically sized space. entropyWorkspace never moves (but prev/next block swap places) */
-    if (cctx->workSpaceSize < HUF_WORKSPACE_SIZE + 2 * sizeof(ZSTD_compressedBlockState_t)) return NULL;
-    assert(((size_t)cctx->workSpace & (sizeof(void*)-1)) == 0);   /* ensure correct alignment */
-    cctx->blockState.prevCBlock = (ZSTD_compressedBlockState_t*)cctx->workSpace;
-    cctx->blockState.nextCBlock = cctx->blockState.prevCBlock + 1;
-    {
-        void* const ptr = cctx->blockState.nextCBlock + 1;
-        cctx->entropyWorkspace = (U32*)ptr;
-    }
+    if (!ZSTD_cwksp_check_available(&cctx->workspace, HUF_WORKSPACE_SIZE + 2 * sizeof(ZSTD_compressedBlockState_t))) return NULL;
+    cctx->blockState.prevCBlock = (ZSTD_compressedBlockState_t*)ZSTD_cwksp_reserve_object(&cctx->workspace, sizeof(ZSTD_compressedBlockState_t));
+    cctx->blockState.nextCBlock = (ZSTD_compressedBlockState_t*)ZSTD_cwksp_reserve_object(&cctx->workspace, sizeof(ZSTD_compressedBlockState_t));
+    cctx->entropyWorkspace = (U32*)ZSTD_cwksp_reserve_object(
+        &cctx->workspace, HUF_WORKSPACE_SIZE);
     cctx->bmi2 = ZSTD_cpuid_bmi2(ZSTD_cpuid());
     return cctx;
 }
@@ -128,11 +131,11 @@
 {
     assert(cctx != NULL);
     assert(cctx->staticSize == 0);
-    ZSTD_free(cctx->workSpace, cctx->customMem); cctx->workSpace = NULL;
     ZSTD_clearAllDicts(cctx);
 #ifdef ZSTD_MULTITHREAD
     ZSTDMT_freeCCtx(cctx->mtctx); cctx->mtctx = NULL;
 #endif
+    ZSTD_cwksp_free(&cctx->workspace, cctx->customMem);
 }
 
 size_t ZSTD_freeCCtx(ZSTD_CCtx* cctx)
@@ -140,8 +143,13 @@
     if (cctx==NULL) return 0;   /* support free on NULL */
     RETURN_ERROR_IF(cctx->staticSize, memory_allocation,
                     "not compatible with static CCtx");
-    ZSTD_freeCCtxContent(cctx);
-    ZSTD_free(cctx, cctx->customMem);
+    {
+        int cctxInWorkspace = ZSTD_cwksp_owns_buffer(&cctx->workspace, cctx);
+        ZSTD_freeCCtxContent(cctx);
+        if (!cctxInWorkspace) {
+            ZSTD_free(cctx, cctx->customMem);
+        }
+    }
     return 0;
 }
 
@@ -160,7 +168,9 @@
 size_t ZSTD_sizeof_CCtx(const ZSTD_CCtx* cctx)
 {
     if (cctx==NULL) return 0;   /* support sizeof on NULL */
-    return sizeof(*cctx) + cctx->workSpaceSize
+    /* cctx may be in the workspace */
+    return (cctx->workspace.workspace == cctx ? 0 : sizeof(*cctx))
+           + ZSTD_cwksp_sizeof(&cctx->workspace)
            + ZSTD_sizeof_localDict(cctx->localDict)
            + ZSTD_sizeof_mtctx(cctx);
 }
@@ -229,23 +239,23 @@
     RETURN_ERROR_IF(!cctxParams, GENERIC);
     FORWARD_IF_ERROR( ZSTD_checkCParams(params.cParams) );
     memset(cctxParams, 0, sizeof(*cctxParams));
+    assert(!ZSTD_checkCParams(params.cParams));
     cctxParams->cParams = params.cParams;
     cctxParams->fParams = params.fParams;
     cctxParams->compressionLevel = ZSTD_CLEVEL_DEFAULT;   /* should not matter, as all cParams are presumed properly defined */
-    assert(!ZSTD_checkCParams(params.cParams));
     return 0;
 }
 
 /* ZSTD_assignParamsToCCtxParams() :
  * params is presumed valid at this stage */
 static ZSTD_CCtx_params ZSTD_assignParamsToCCtxParams(
-        ZSTD_CCtx_params cctxParams, ZSTD_parameters params)
+        const ZSTD_CCtx_params* cctxParams, ZSTD_parameters params)
 {
-    ZSTD_CCtx_params ret = cctxParams;
+    ZSTD_CCtx_params ret = *cctxParams;
+    assert(!ZSTD_checkCParams(params.cParams));
     ret.cParams = params.cParams;
     ret.fParams = params.fParams;
     ret.compressionLevel = ZSTD_CLEVEL_DEFAULT;   /* should not matter, as all cParams are presumed properly defined */
-    assert(!ZSTD_checkCParams(params.cParams));
     return ret;
 }
 
@@ -378,7 +388,7 @@
     case ZSTD_c_forceAttachDict:
         ZSTD_STATIC_ASSERT(ZSTD_dictDefaultAttach < ZSTD_dictForceCopy);
         bounds.lowerBound = ZSTD_dictDefaultAttach;
-        bounds.upperBound = ZSTD_dictForceCopy;       /* note : how to ensure at compile time that this is the highest value enum ? */
+        bounds.upperBound = ZSTD_dictForceLoad;       /* note : how to ensure at compile time that this is the highest value enum ? */
         return bounds;
 
     case ZSTD_c_literalCompressionMode:
@@ -392,6 +402,11 @@
         bounds.upperBound = ZSTD_TARGETCBLOCKSIZE_MAX;
         return bounds;
 
+    case ZSTD_c_srcSizeHint:
+        bounds.lowerBound = ZSTD_SRCSIZEHINT_MIN;
+        bounds.upperBound = ZSTD_SRCSIZEHINT_MAX;
+        return bounds;
+
     default:
         {   ZSTD_bounds const boundError = { ERROR(parameter_unsupported), 0, 0 };
             return boundError;
@@ -448,6 +463,7 @@
     case ZSTD_c_forceAttachDict:
     case ZSTD_c_literalCompressionMode:
     case ZSTD_c_targetCBlockSize:
+    case ZSTD_c_srcSizeHint:
     default:
         return 0;
     }
@@ -494,6 +510,7 @@
     case ZSTD_c_ldmMinMatch:
     case ZSTD_c_ldmBucketSizeLog:
     case ZSTD_c_targetCBlockSize:
+    case ZSTD_c_srcSizeHint:
         break;
 
     default: RETURN_ERROR(parameter_unsupported);
@@ -517,33 +534,33 @@
         if (value) {  /* 0 : does not change current level */
             CCtxParams->compressionLevel = value;
         }
-        if (CCtxParams->compressionLevel >= 0) return CCtxParams->compressionLevel;
+        if (CCtxParams->compressionLevel >= 0) return (size_t)CCtxParams->compressionLevel;
         return 0;  /* return type (size_t) cannot represent negative values */
     }
 
     case ZSTD_c_windowLog :
         if (value!=0)   /* 0 => use default */
             BOUNDCHECK(ZSTD_c_windowLog, value);
-        CCtxParams->cParams.windowLog = value;
+        CCtxParams->cParams.windowLog = (U32)value;
         return CCtxParams->cParams.windowLog;
 
     case ZSTD_c_hashLog :
         if (value!=0)   /* 0 => use default */
             BOUNDCHECK(ZSTD_c_hashLog, value);
-        CCtxParams->cParams.hashLog = value;
+        CCtxParams->cParams.hashLog = (U32)value;
         return CCtxParams->cParams.hashLog;
 
     case ZSTD_c_chainLog :
         if (value!=0)   /* 0 => use default */
             BOUNDCHECK(ZSTD_c_chainLog, value);
-        CCtxParams->cParams.chainLog = value;
+        CCtxParams->cParams.chainLog = (U32)value;
         return CCtxParams->cParams.chainLog;
 
     case ZSTD_c_searchLog :
         if (value!=0)   /* 0 => use default */
             BOUNDCHECK(ZSTD_c_searchLog, value);
-        CCtxParams->cParams.searchLog = value;
-        return value;
+        CCtxParams->cParams.searchLog = (U32)value;
+        return (size_t)value;
 
     case ZSTD_c_minMatch :
         if (value!=0)   /* 0 => use default */
@@ -674,6 +691,12 @@
         CCtxParams->targetCBlockSize = value;
         return CCtxParams->targetCBlockSize;
 
+    case ZSTD_c_srcSizeHint :
+        if (value!=0)    /* 0 ==> default */
+            BOUNDCHECK(ZSTD_c_srcSizeHint, value);
+        CCtxParams->srcSizeHint = value;
+        return CCtxParams->srcSizeHint;
+
     default: RETURN_ERROR(parameter_unsupported, "unknown parameter");
     }
 }
@@ -779,6 +802,9 @@
     case ZSTD_c_targetCBlockSize :
         *value = (int)CCtxParams->targetCBlockSize;
         break;
+    case ZSTD_c_srcSizeHint :
+        *value = (int)CCtxParams->srcSizeHint;
+        break;
     default: RETURN_ERROR(parameter_unsupported, "unknown parameter");
     }
     return 0;
@@ -1029,7 +1055,11 @@
 ZSTD_compressionParameters ZSTD_getCParamsFromCCtxParams(
         const ZSTD_CCtx_params* CCtxParams, U64 srcSizeHint, size_t dictSize)
 {
-    ZSTD_compressionParameters cParams = ZSTD_getCParams(CCtxParams->compressionLevel, srcSizeHint, dictSize);
+    ZSTD_compressionParameters cParams;
+    if (srcSizeHint == ZSTD_CONTENTSIZE_UNKNOWN && CCtxParams->srcSizeHint > 0) {
+      srcSizeHint = CCtxParams->srcSizeHint;
+    }
+    cParams = ZSTD_getCParams(CCtxParams->compressionLevel, srcSizeHint, dictSize);
     if (CCtxParams->ldmParams.enableLdm) cParams.windowLog = ZSTD_LDM_DEFAULT_WINDOW_LOG;
     if (CCtxParams->cParams.windowLog) cParams.windowLog = CCtxParams->cParams.windowLog;
     if (CCtxParams->cParams.hashLog) cParams.hashLog = CCtxParams->cParams.hashLog;
@@ -1049,10 +1079,19 @@
     size_t const chainSize = (cParams->strategy == ZSTD_fast) ? 0 : ((size_t)1 << cParams->chainLog);
     size_t const hSize = ((size_t)1) << cParams->hashLog;
     U32    const hashLog3 = (forCCtx && cParams->minMatch==3) ? MIN(ZSTD_HASHLOG3_MAX, cParams->windowLog) : 0;
-    size_t const h3Size = ((size_t)1) << hashLog3;
-    size_t const tableSpace = (chainSize + hSize + h3Size) * sizeof(U32);
-    size_t const optPotentialSpace = ((MaxML+1) + (MaxLL+1) + (MaxOff+1) + (1<<Litbits)) * sizeof(U32)
-                          + (ZSTD_OPT_NUM+1) * (sizeof(ZSTD_match_t)+sizeof(ZSTD_optimal_t));
+    size_t const h3Size = hashLog3 ? ((size_t)1) << hashLog3 : 0;
+    /* We don't use ZSTD_cwksp_alloc_size() here because the tables aren't
+     * surrounded by redzones in ASAN. */
+    size_t const tableSpace = chainSize * sizeof(U32)
+                            + hSize * sizeof(U32)
+                            + h3Size * sizeof(U32);
+    size_t const optPotentialSpace =
+        ZSTD_cwksp_alloc_size((MaxML+1) * sizeof(U32))
+      + ZSTD_cwksp_alloc_size((MaxLL+1) * sizeof(U32))
+      + ZSTD_cwksp_alloc_size((MaxOff+1) * sizeof(U32))
+      + ZSTD_cwksp_alloc_size((1<<Litbits) * sizeof(U32))
+      + ZSTD_cwksp_alloc_size((ZSTD_OPT_NUM+1) * sizeof(ZSTD_match_t))
+      + ZSTD_cwksp_alloc_size((ZSTD_OPT_NUM+1) * sizeof(ZSTD_optimal_t));
     size_t const optSpace = (forCCtx && (cParams->strategy >= ZSTD_btopt))
                                 ? optPotentialSpace
                                 : 0;
@@ -1069,20 +1108,23 @@
         size_t const blockSize = MIN(ZSTD_BLOCKSIZE_MAX, (size_t)1 << cParams.windowLog);
         U32    const divider = (cParams.minMatch==3) ? 3 : 4;
         size_t const maxNbSeq = blockSize / divider;
-        size_t const tokenSpace = WILDCOPY_OVERLENGTH + blockSize + 11*maxNbSeq;
-        size_t const entropySpace = HUF_WORKSPACE_SIZE;
-        size_t const blockStateSpace = 2 * sizeof(ZSTD_compressedBlockState_t);
+        size_t const tokenSpace = ZSTD_cwksp_alloc_size(WILDCOPY_OVERLENGTH + blockSize)
+                                + ZSTD_cwksp_alloc_size(maxNbSeq * sizeof(seqDef))
+                                + 3 * ZSTD_cwksp_alloc_size(maxNbSeq * sizeof(BYTE));
+        size_t const entropySpace = ZSTD_cwksp_alloc_size(HUF_WORKSPACE_SIZE);
+        size_t const blockStateSpace = 2 * ZSTD_cwksp_alloc_size(sizeof(ZSTD_compressedBlockState_t));
         size_t const matchStateSize = ZSTD_sizeof_matchState(&cParams, /* forCCtx */ 1);
 
         size_t const ldmSpace = ZSTD_ldm_getTableSize(params->ldmParams);
-        size_t const ldmSeqSpace = ZSTD_ldm_getMaxNbSeq(params->ldmParams, blockSize) * sizeof(rawSeq);
+        size_t const ldmSeqSpace = ZSTD_cwksp_alloc_size(ZSTD_ldm_getMaxNbSeq(params->ldmParams, blockSize) * sizeof(rawSeq));
 
         size_t const neededSpace = entropySpace + blockStateSpace + tokenSpace +
                                    matchStateSize + ldmSpace + ldmSeqSpace;
-
-        DEBUGLOG(5, "sizeof(ZSTD_CCtx) : %u", (U32)sizeof(ZSTD_CCtx));
-        DEBUGLOG(5, "estimate workSpace : %u", (U32)neededSpace);
-        return sizeof(ZSTD_CCtx) + neededSpace;
+        size_t const cctxSpace = ZSTD_cwksp_alloc_size(sizeof(ZSTD_CCtx));
+
+        DEBUGLOG(5, "sizeof(ZSTD_CCtx) : %u", (U32)cctxSpace);
+        DEBUGLOG(5, "estimate workspace : %u", (U32)neededSpace);
+        return cctxSpace + neededSpace;
     }
 }
 
@@ -1118,7 +1160,8 @@
         size_t const blockSize = MIN(ZSTD_BLOCKSIZE_MAX, (size_t)1 << cParams.windowLog);
         size_t const inBuffSize = ((size_t)1 << cParams.windowLog) + blockSize;
         size_t const outBuffSize = ZSTD_compressBound(blockSize) + 1;
-        size_t const streamingSize = inBuffSize + outBuffSize;
+        size_t const streamingSize = ZSTD_cwksp_alloc_size(inBuffSize)
+                                   + ZSTD_cwksp_alloc_size(outBuffSize);
 
         return CCtxSize + streamingSize;
     }
@@ -1186,17 +1229,6 @@
     return 0;   /* over-simplification; could also check if context is currently running in streaming mode, and in which case, report how many bytes are left to be flushed within output buffer */
 }
 
-
-
-static U32 ZSTD_equivalentCParams(ZSTD_compressionParameters cParams1,
-                                  ZSTD_compressionParameters cParams2)
-{
-    return (cParams1.hashLog  == cParams2.hashLog)
-         & (cParams1.chainLog == cParams2.chainLog)
-         & (cParams1.strategy == cParams2.strategy)   /* opt parser space */
-         & ((cParams1.minMatch==3) == (cParams2.minMatch==3));  /* hashlog3 space */
-}
-
 static void ZSTD_assertEqualCParams(ZSTD_compressionParameters cParams1,
                                     ZSTD_compressionParameters cParams2)
 {
@@ -1211,71 +1243,6 @@
     assert(cParams1.strategy     == cParams2.strategy);
 }
 
-/** The parameters are equivalent if ldm is not enabled in both sets or
- *  all the parameters are equivalent. */
-static U32 ZSTD_equivalentLdmParams(ldmParams_t ldmParams1,
-                                    ldmParams_t ldmParams2)
-{
-    return (!ldmParams1.enableLdm && !ldmParams2.enableLdm) ||
-           (ldmParams1.enableLdm == ldmParams2.enableLdm &&
-            ldmParams1.hashLog == ldmParams2.hashLog &&
-            ldmParams1.bucketSizeLog == ldmParams2.bucketSizeLog &&
-            ldmParams1.minMatchLength == ldmParams2.minMatchLength &&
-            ldmParams1.hashRateLog == ldmParams2.hashRateLog);
-}
-
-typedef enum { ZSTDb_not_buffered, ZSTDb_buffered } ZSTD_buffered_policy_e;
-
-/* ZSTD_sufficientBuff() :
- * check internal buffers exist for streaming if buffPol == ZSTDb_buffered .
- * Note : they are assumed to be correctly sized if ZSTD_equivalentCParams()==1 */
-static U32 ZSTD_sufficientBuff(size_t bufferSize1, size_t maxNbSeq1,
-                            size_t maxNbLit1,
-                            ZSTD_buffered_policy_e buffPol2,
-                            ZSTD_compressionParameters cParams2,
-                            U64 pledgedSrcSize)
-{
-    size_t const windowSize2 = MAX(1, (size_t)MIN(((U64)1 << cParams2.windowLog), pledgedSrcSize));
-    size_t const blockSize2 = MIN(ZSTD_BLOCKSIZE_MAX, windowSize2);
-    size_t const maxNbSeq2 = blockSize2 / ((cParams2.minMatch == 3) ? 3 : 4);
-    size_t const maxNbLit2 = blockSize2;
-    size_t const neededBufferSize2 = (buffPol2==ZSTDb_buffered) ? windowSize2 + blockSize2 : 0;
-    DEBUGLOG(4, "ZSTD_sufficientBuff: is neededBufferSize2=%u <= bufferSize1=%u",
-                (U32)neededBufferSize2, (U32)bufferSize1);
-    DEBUGLOG(4, "ZSTD_sufficientBuff: is maxNbSeq2=%u <= maxNbSeq1=%u",
-                (U32)maxNbSeq2, (U32)maxNbSeq1);
-    DEBUGLOG(4, "ZSTD_sufficientBuff: is maxNbLit2=%u <= maxNbLit1=%u",
-                (U32)maxNbLit2, (U32)maxNbLit1);
-    return (maxNbLit2 <= maxNbLit1)
-         & (maxNbSeq2 <= maxNbSeq1)
-         & (neededBufferSize2 <= bufferSize1);
-}
-
-/** Equivalence for resetCCtx purposes */
-static U32 ZSTD_equivalentParams(ZSTD_CCtx_params params1,
-                                 ZSTD_CCtx_params params2,
-                                 size_t buffSize1,
-                                 size_t maxNbSeq1, size_t maxNbLit1,
-                                 ZSTD_buffered_policy_e buffPol2,
-                                 U64 pledgedSrcSize)
-{
-    DEBUGLOG(4, "ZSTD_equivalentParams: pledgedSrcSize=%u", (U32)pledgedSrcSize);
-    if (!ZSTD_equivalentCParams(params1.cParams, params2.cParams)) {
-      DEBUGLOG(4, "ZSTD_equivalentCParams() == 0");
-      return 0;
-    }
-    if (!ZSTD_equivalentLdmParams(params1.ldmParams, params2.ldmParams)) {
-      DEBUGLOG(4, "ZSTD_equivalentLdmParams() == 0");
-      return 0;
-    }
-    if (!ZSTD_sufficientBuff(buffSize1, maxNbSeq1, maxNbLit1, buffPol2,
-                             params2.cParams, pledgedSrcSize)) {
-      DEBUGLOG(4, "ZSTD_sufficientBuff() == 0");
-      return 0;
-    }
-    return 1;
-}
-
 static void ZSTD_reset_compressedBlockState(ZSTD_compressedBlockState_t* bs)
 {
     int i;
@@ -1301,87 +1268,104 @@
     ms->dictMatchState = NULL;
 }
 
-/*! ZSTD_continueCCtx() :
- *  reuse CCtx without reset (note : requires no dictionary) */
-static size_t ZSTD_continueCCtx(ZSTD_CCtx* cctx, ZSTD_CCtx_params params, U64 pledgedSrcSize)
-{
-    size_t const windowSize = MAX(1, (size_t)MIN(((U64)1 << params.cParams.windowLog), pledgedSrcSize));
-    size_t const blockSize = MIN(ZSTD_BLOCKSIZE_MAX, windowSize);
-    DEBUGLOG(4, "ZSTD_continueCCtx: re-use context in place");
-
-    cctx->blockSize = blockSize;   /* previous block size could be different even for same windowLog, due to pledgedSrcSize */
-    cctx->appliedParams = params;
-    cctx->blockState.matchState.cParams = params.cParams;
-    cctx->pledgedSrcSizePlusOne = pledgedSrcSize+1;
-    cctx->consumedSrcSize = 0;
-    cctx->producedCSize = 0;
-    if (pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN)
-        cctx->appliedParams.fParams.contentSizeFlag = 0;
-    DEBUGLOG(4, "pledged content size : %u ; flag : %u",
-        (U32)pledgedSrcSize, cctx->appliedParams.fParams.contentSizeFlag);
-    cctx->stage = ZSTDcs_init;
-    cctx->dictID = 0;
-    if (params.ldmParams.enableLdm)
-        ZSTD_window_clear(&cctx->ldmState.window);
-    ZSTD_referenceExternalSequences(cctx, NULL, 0);
-    ZSTD_invalidateMatchState(&cctx->blockState.matchState);
-    ZSTD_reset_compressedBlockState(cctx->blockState.prevCBlock);
-    XXH64_reset(&cctx->xxhState, 0);
-    return 0;
-}
-
-typedef enum { ZSTDcrp_continue, ZSTDcrp_noMemset } ZSTD_compResetPolicy_e;
-
-typedef enum { ZSTD_resetTarget_CDict, ZSTD_resetTarget_CCtx } ZSTD_resetTarget_e;
-
-static void*
+/**
+ * Indicates whether this compression proceeds directly from user-provided
+ * source buffer to user-provided destination buffer (ZSTDb_not_buffered), or
+ * whether the context needs to buffer the input/output (ZSTDb_buffered).
+ */
+typedef enum {
+    ZSTDb_not_buffered,
+    ZSTDb_buffered
+} ZSTD_buffered_policy_e;
+
+/**
+ * Controls, for this matchState reset, whether the tables need to be cleared /
+ * prepared for the coming compression (ZSTDcrp_makeClean), or whether the
+ * tables can be left unclean (ZSTDcrp_leaveDirty), because we know that a
+ * subsequent operation will overwrite the table space anyways (e.g., copying
+ * the matchState contents in from a CDict).
+ */
+typedef enum {
+    ZSTDcrp_makeClean,
+    ZSTDcrp_leaveDirty
+} ZSTD_compResetPolicy_e;
+
+/**
+ * Controls, for this matchState reset, whether indexing can continue where it
+ * left off (ZSTDirp_continue), or whether it needs to be restarted from zero
+ * (ZSTDirp_reset).
+ */
+typedef enum {
+    ZSTDirp_continue,
+    ZSTDirp_reset
+} ZSTD_indexResetPolicy_e;
+
+typedef enum {
+    ZSTD_resetTarget_CDict,
+    ZSTD_resetTarget_CCtx
+} ZSTD_resetTarget_e;
+
+static size_t
 ZSTD_reset_matchState(ZSTD_matchState_t* ms,
-                      void* ptr,
+                      ZSTD_cwksp* ws,
                 const ZSTD_compressionParameters* cParams,
-                      ZSTD_compResetPolicy_e const crp, ZSTD_resetTarget_e const forWho)
+                const ZSTD_compResetPolicy_e crp,
+                const ZSTD_indexResetPolicy_e forceResetIndex,
+                const ZSTD_resetTarget_e forWho)
 {
     size_t const chainSize = (cParams->strategy == ZSTD_fast) ? 0 : ((size_t)1 << cParams->chainLog);
     size_t const hSize = ((size_t)1) << cParams->hashLog;
     U32    const hashLog3 = ((forWho == ZSTD_resetTarget_CCtx) && cParams->minMatch==3) ? MIN(ZSTD_HASHLOG3_MAX, cParams->windowLog) : 0;
-    size_t const h3Size = ((size_t)1) << hashLog3;
-    size_t const tableSpace = (chainSize + hSize + h3Size) * sizeof(U32);
-
-    assert(((size_t)ptr & 3) == 0);
+    size_t const h3Size = hashLog3 ? ((size_t)1) << hashLog3 : 0;
+
+    DEBUGLOG(4, "reset indices : %u", forceResetIndex == ZSTDirp_reset);
+    if (forceResetIndex == ZSTDirp_reset) {
+        memset(&ms->window, 0, sizeof(ms->window));
+        ms->window.dictLimit = 1;    /* start from 1, so that 1st position is valid */
+        ms->window.lowLimit = 1;     /* it ensures first and later CCtx usages compress the same */
+        ms->window.nextSrc = ms->window.base + 1;   /* see issue #1241 */
+        ZSTD_cwksp_mark_tables_dirty(ws);
+    }
 
     ms->hashLog3 = hashLog3;
-    memset(&ms->window, 0, sizeof(ms->window));
-    ms->window.dictLimit = 1;    /* start from 1, so that 1st position is valid */
-    ms->window.lowLimit = 1;     /* it ensures first and later CCtx usages compress the same */
-    ms->window.nextSrc = ms->window.base + 1;   /* see issue #1241 */
+
     ZSTD_invalidateMatchState(ms);
 
+    assert(!ZSTD_cwksp_reserve_failed(ws)); /* check that allocation hasn't already failed */
+
+    ZSTD_cwksp_clear_tables(ws);
+
+    DEBUGLOG(5, "reserving table space");
+    /* table Space */
+    ms->hashTable = (U32*)ZSTD_cwksp_reserve_table(ws, hSize * sizeof(U32));
+    ms->chainTable = (U32*)ZSTD_cwksp_reserve_table(ws, chainSize * sizeof(U32));
+    ms->hashTable3 = (U32*)ZSTD_cwksp_reserve_table(ws, h3Size * sizeof(U32));
+    RETURN_ERROR_IF(ZSTD_cwksp_reserve_failed(ws), memory_allocation,
+                    "failed a workspace allocation in ZSTD_reset_matchState");
+
+    DEBUGLOG(4, "reset table : %u", crp!=ZSTDcrp_leaveDirty);
+    if (crp!=ZSTDcrp_leaveDirty) {
+        /* reset tables only */
+        ZSTD_cwksp_clean_tables(ws);
+    }
+
     /* opt parser space */
     if ((forWho == ZSTD_resetTarget_CCtx) && (cParams->strategy >= ZSTD_btopt)) {
         DEBUGLOG(4, "reserving optimal parser space");
-        ms->opt.litFreq = (unsigned*)ptr;
-        ms->opt.litLengthFreq = ms->opt.litFreq + (1<<Litbits);
-        ms->opt.matchLengthFreq = ms->opt.litLengthFreq + (MaxLL+1);
-        ms->opt.offCodeFreq = ms->opt.matchLengthFreq + (MaxML+1);
-        ptr = ms->opt.offCodeFreq + (MaxOff+1);
-        ms->opt.matchTable = (ZSTD_match_t*)ptr;
-        ptr = ms->opt.matchTable + ZSTD_OPT_NUM+1;
-        ms->opt.priceTable = (ZSTD_optimal_t*)ptr;
-        ptr = ms->opt.priceTable + ZSTD_OPT_NUM+1;
+        ms->opt.litFreq = (unsigned*)ZSTD_cwksp_reserve_aligned(ws, (1<<Litbits) * sizeof(unsigned));
+        ms->opt.litLengthFreq = (unsigned*)ZSTD_cwksp_reserve_aligned(ws, (MaxLL+1) * sizeof(unsigned));
+        ms->opt.matchLengthFreq = (unsigned*)ZSTD_cwksp_reserve_aligned(ws, (MaxML+1) * sizeof(unsigned));
+        ms->opt.offCodeFreq = (unsigned*)ZSTD_cwksp_reserve_aligned(ws, (MaxOff+1) * sizeof(unsigned));
+        ms->opt.matchTable = (ZSTD_match_t*)ZSTD_cwksp_reserve_aligned(ws, (ZSTD_OPT_NUM+1) * sizeof(ZSTD_match_t));
+        ms->opt.priceTable = (ZSTD_optimal_t*)ZSTD_cwksp_reserve_aligned(ws, (ZSTD_OPT_NUM+1) * sizeof(ZSTD_optimal_t));
     }
 
-    /* table Space */
-    DEBUGLOG(4, "reset table : %u", crp!=ZSTDcrp_noMemset);
-    assert(((size_t)ptr & 3) == 0);  /* ensure ptr is properly aligned */
-    if (crp!=ZSTDcrp_noMemset) memset(ptr, 0, tableSpace);   /* reset tables only */
-    ms->hashTable = (U32*)(ptr);
-    ms->chainTable = ms->hashTable + hSize;
-    ms->hashTable3 = ms->chainTable + chainSize;
-    ptr = ms->hashTable3 + h3Size;
-
     ms->cParams = *cParams;
 
-    assert(((size_t)ptr & 3) == 0);
-    return ptr;
+    RETURN_ERROR_IF(ZSTD_cwksp_reserve_failed(ws), memory_allocation,
+                    "failed a workspace allocation in ZSTD_reset_matchState");
+
+    return 0;
 }
 
 /* ZSTD_indexTooCloseToMax() :
@@ -1397,13 +1381,6 @@
     return (size_t)(w.nextSrc - w.base) > (ZSTD_CURRENT_MAX - ZSTD_INDEXOVERFLOW_MARGIN);
 }
 
-#define ZSTD_WORKSPACETOOLARGE_FACTOR 3 /* define "workspace is too large" as this number of times larger than needed */
-#define ZSTD_WORKSPACETOOLARGE_MAXDURATION 128  /* when workspace is continuously too large
-                                         * during at least this number of times,
-                                         * context's memory usage is considered wasteful,
-                                         * because it's sized to handle a worst case scenario which rarely happens.
-                                         * In which case, resize it down to free some memory */
-
 /*! ZSTD_resetCCtx_internal() :
     note : `params` are assumed fully validated at this stage */
 static size_t ZSTD_resetCCtx_internal(ZSTD_CCtx* zc,
@@ -1412,30 +1389,12 @@
                                       ZSTD_compResetPolicy_e const crp,
                                       ZSTD_buffered_policy_e const zbuff)
 {
+    ZSTD_cwksp* const ws = &zc->workspace;
     DEBUGLOG(4, "ZSTD_resetCCtx_internal: pledgedSrcSize=%u, wlog=%u",
                 (U32)pledgedSrcSize, params.cParams.windowLog);
     assert(!ZSTD_isError(ZSTD_checkCParams(params.cParams)));
 
-    if (crp == ZSTDcrp_continue) {
-        if (ZSTD_equivalentParams(zc->appliedParams, params,
-                                  zc->inBuffSize,
-                                  zc->seqStore.maxNbSeq, zc->seqStore.maxNbLit,
-                                  zbuff, pledgedSrcSize) ) {
-            DEBUGLOG(4, "ZSTD_equivalentParams()==1 -> consider continue mode");
-            zc->workSpaceOversizedDuration += (zc->workSpaceOversizedDuration > 0);   /* if it was too large, it still is */
-            if (zc->workSpaceOversizedDuration <= ZSTD_WORKSPACETOOLARGE_MAXDURATION) {
-                DEBUGLOG(4, "continue mode confirmed (wLog1=%u, blockSize1=%zu)",
-                            zc->appliedParams.cParams.windowLog, zc->blockSize);
-                if (ZSTD_indexTooCloseToMax(zc->blockState.matchState.window)) {
-                    /* prefer a reset, faster than a rescale */
-                    ZSTD_reset_matchState(&zc->blockState.matchState,
-                                           zc->entropyWorkspace + HUF_WORKSPACE_SIZE_U32,
-                                          &params.cParams,
-                                           crp, ZSTD_resetTarget_CCtx);
-                }
-                return ZSTD_continueCCtx(zc, params, pledgedSrcSize);
-    }   }   }
-    DEBUGLOG(4, "ZSTD_equivalentParams()==0 -> reset CCtx");
+    zc->isFirstBlock = 1;
 
     if (params.ldmParams.enableLdm) {
         /* Adjust long distance matching parameters */
@@ -1449,58 +1408,74 @@
         size_t const blockSize = MIN(ZSTD_BLOCKSIZE_MAX, windowSize);
         U32    const divider = (params.cParams.minMatch==3) ? 3 : 4;
         size_t const maxNbSeq = blockSize / divider;
-        size_t const tokenSpace = WILDCOPY_OVERLENGTH + blockSize + 11*maxNbSeq;
+        size_t const tokenSpace = ZSTD_cwksp_alloc_size(WILDCOPY_OVERLENGTH + blockSize)
+                                + ZSTD_cwksp_alloc_size(maxNbSeq * sizeof(seqDef))
+                                + 3 * ZSTD_cwksp_alloc_size(maxNbSeq * sizeof(BYTE));
         size_t const buffOutSize = (zbuff==ZSTDb_buffered) ? ZSTD_compressBound(blockSize)+1 : 0;
         size_t const buffInSize = (zbuff==ZSTDb_buffered) ? windowSize + blockSize : 0;
         size_t const matchStateSize = ZSTD_sizeof_matchState(&params.cParams, /* forCCtx */ 1);
         size_t const maxNbLdmSeq = ZSTD_ldm_getMaxNbSeq(params.ldmParams, blockSize);
-        void* ptr;   /* used to partition workSpace */
-
-        /* Check if workSpace is large enough, alloc a new one if needed */
-        {   size_t const entropySpace = HUF_WORKSPACE_SIZE;
-            size_t const blockStateSpace = 2 * sizeof(ZSTD_compressedBlockState_t);
-            size_t const bufferSpace = buffInSize + buffOutSize;
+
+        ZSTD_indexResetPolicy_e needsIndexReset = ZSTDirp_continue;
+
+        if (ZSTD_indexTooCloseToMax(zc->blockState.matchState.window)) {
+            needsIndexReset = ZSTDirp_reset;
+        }
+
+        ZSTD_cwksp_bump_oversized_duration(ws, 0);
+
+        /* Check if workspace is large enough, alloc a new one if needed */
+        {   size_t const cctxSpace = zc->staticSize ? ZSTD_cwksp_alloc_size(sizeof(ZSTD_CCtx)) : 0;
+            size_t const entropySpace = ZSTD_cwksp_alloc_size(HUF_WORKSPACE_SIZE);
+            size_t const blockStateSpace = 2 * ZSTD_cwksp_alloc_size(sizeof(ZSTD_compressedBlockState_t));
+            size_t const bufferSpace = ZSTD_cwksp_alloc_size(buffInSize) + ZSTD_cwksp_alloc_size(buffOutSize);
             size_t const ldmSpace = ZSTD_ldm_getTableSize(params.ldmParams);
-            size_t const ldmSeqSpace = maxNbLdmSeq * sizeof(rawSeq);
-
-            size_t const neededSpace = entropySpace + blockStateSpace + ldmSpace +
-                                       ldmSeqSpace + matchStateSize + tokenSpace +
-                                       bufferSpace;
-
-            int const workSpaceTooSmall = zc->workSpaceSize < neededSpace;
-            int const workSpaceTooLarge = zc->workSpaceSize > ZSTD_WORKSPACETOOLARGE_FACTOR * neededSpace;
-            int const workSpaceWasteful = workSpaceTooLarge && (zc->workSpaceOversizedDuration > ZSTD_WORKSPACETOOLARGE_MAXDURATION);
-            zc->workSpaceOversizedDuration = workSpaceTooLarge ? zc->workSpaceOversizedDuration+1 : 0;
+            size_t const ldmSeqSpace = ZSTD_cwksp_alloc_size(maxNbLdmSeq * sizeof(rawSeq));
+
+            size_t const neededSpace =
+                cctxSpace +
+                entropySpace +
+                blockStateSpace +
+                ldmSpace +
+                ldmSeqSpace +
+                matchStateSize +
+                tokenSpace +
+                bufferSpace;
+
+            int const workspaceTooSmall = ZSTD_cwksp_sizeof(ws) < neededSpace;
+            int const workspaceWasteful = ZSTD_cwksp_check_wasteful(ws, neededSpace);
 
             DEBUGLOG(4, "Need %zuKB workspace, including %zuKB for match state, and %zuKB for buffers",
                         neededSpace>>10, matchStateSize>>10, bufferSpace>>10);
             DEBUGLOG(4, "windowSize: %zu - blockSize: %zu", windowSize, blockSize);
 
-            if (workSpaceTooSmall || workSpaceWasteful) {
-                DEBUGLOG(4, "Resize workSpaceSize from %zuKB to %zuKB",
-                            zc->workSpaceSize >> 10,
+            if (workspaceTooSmall || workspaceWasteful) {
+                DEBUGLOG(4, "Resize workspaceSize from %zuKB to %zuKB",
+                            ZSTD_cwksp_sizeof(ws) >> 10,
                             neededSpace >> 10);
 
                 RETURN_ERROR_IF(zc->staticSize, memory_allocation, "static cctx : no resize");
 
-                zc->workSpaceSize = 0;
-                ZSTD_free(zc->workSpace, zc->customMem);
-                zc->workSpace = ZSTD_malloc(neededSpace, zc->customMem);
-                RETURN_ERROR_IF(zc->workSpace == NULL, memory_allocation);
-                zc->workSpaceSize = neededSpace;
-                zc->workSpaceOversizedDuration = 0;
-
+                needsIndexReset = ZSTDirp_reset;
+
+                ZSTD_cwksp_free(ws, zc->customMem);
+                FORWARD_IF_ERROR(ZSTD_cwksp_create(ws, neededSpace, zc->customMem));
+
+                DEBUGLOG(5, "reserving object space");
                 /* Statically sized space.
                  * entropyWorkspace never moves,
                  * though prev/next block swap places */
-                assert(((size_t)zc->workSpace & 3) == 0);   /* ensure correct alignment */
-                assert(zc->workSpaceSize >= 2 * sizeof(ZSTD_compressedBlockState_t));
-                zc->blockState.prevCBlock = (ZSTD_compressedBlockState_t*)zc->workSpace;
-                zc->blockState.nextCBlock = zc->blockState.prevCBlock + 1;
-                ptr = zc->blockState.nextCBlock + 1;
-                zc->entropyWorkspace = (U32*)ptr;
+                assert(ZSTD_cwksp_check_available(ws, 2 * sizeof(ZSTD_compressedBlockState_t)));
+                zc->blockState.prevCBlock = (ZSTD_compressedBlockState_t*) ZSTD_cwksp_reserve_object(ws, sizeof(ZSTD_compressedBlockState_t));
+                RETURN_ERROR_IF(zc->blockState.prevCBlock == NULL, memory_allocation, "couldn't allocate prevCBlock");
+                zc->blockState.nextCBlock = (ZSTD_compressedBlockState_t*) ZSTD_cwksp_reserve_object(ws, sizeof(ZSTD_compressedBlockState_t));
+                RETURN_ERROR_IF(zc->blockState.nextCBlock == NULL, memory_allocation, "couldn't allocate nextCBlock");
+                zc->entropyWorkspace = (U32*) ZSTD_cwksp_reserve_object(ws, HUF_WORKSPACE_SIZE);
+                RETURN_ERROR_IF(zc->blockState.nextCBlock == NULL, memory_allocation, "couldn't allocate entropyWorkspace");
         }   }
 
+        ZSTD_cwksp_clear(ws);
+
         /* init params */
         zc->appliedParams = params;
         zc->blockState.matchState.cParams = params.cParams;
@@ -1519,58 +1494,58 @@
 
         ZSTD_reset_compressedBlockState(zc->blockState.prevCBlock);
 
-        ptr = ZSTD_reset_matchState(&zc->blockState.matchState,
-                                     zc->entropyWorkspace + HUF_WORKSPACE_SIZE_U32,
-                                    &params.cParams,
-                                     crp, ZSTD_resetTarget_CCtx);
+        /* ZSTD_wildcopy() is used to copy into the literals buffer,
+         * so we have to oversize the buffer by WILDCOPY_OVERLENGTH bytes.
+         */
+        zc->seqStore.litStart = ZSTD_cwksp_reserve_buffer(ws, blockSize + WILDCOPY_OVERLENGTH);
+        zc->seqStore.maxNbLit = blockSize;
+
+        /* buffers */
+        zc->inBuffSize = buffInSize;
+        zc->inBuff = (char*)ZSTD_cwksp_reserve_buffer(ws, buffInSize);
+        zc->outBuffSize = buffOutSize;
+        zc->outBuff = (char*)ZSTD_cwksp_reserve_buffer(ws, buffOutSize);
+
+        /* ldm bucketOffsets table */
+        if (params.ldmParams.enableLdm) {
+            /* TODO: avoid memset? */
+            size_t const ldmBucketSize =
+                  ((size_t)1) << (params.ldmParams.hashLog -
+                                  params.ldmParams.bucketSizeLog);
+            zc->ldmState.bucketOffsets = ZSTD_cwksp_reserve_buffer(ws, ldmBucketSize);
+            memset(zc->ldmState.bucketOffsets, 0, ldmBucketSize);
+        }
+
+        /* sequences storage */
+        ZSTD_referenceExternalSequences(zc, NULL, 0);
+        zc->seqStore.maxNbSeq = maxNbSeq;
+        zc->seqStore.llCode = ZSTD_cwksp_reserve_buffer(ws, maxNbSeq * sizeof(BYTE));
+        zc->seqStore.mlCode = ZSTD_cwksp_reserve_buffer(ws, maxNbSeq * sizeof(BYTE));
+        zc->seqStore.ofCode = ZSTD_cwksp_reserve_buffer(ws, maxNbSeq * sizeof(BYTE));
+        zc->seqStore.sequencesStart = (seqDef*)ZSTD_cwksp_reserve_aligned(ws, maxNbSeq * sizeof(seqDef));
+
+        FORWARD_IF_ERROR(ZSTD_reset_matchState(
+            &zc->blockState.matchState,
+            ws,
+            &params.cParams,
+            crp,
+            needsIndexReset,
+            ZSTD_resetTarget_CCtx));
 
         /* ldm hash table */
-        /* initialize bucketOffsets table later for pointer alignment */
         if (params.ldmParams.enableLdm) {
+            /* TODO: avoid memset? */
             size_t const ldmHSize = ((size_t)1) << params.ldmParams.hashLog;
-            memset(ptr, 0, ldmHSize * sizeof(ldmEntry_t));
-            assert(((size_t)ptr & 3) == 0); /* ensure ptr is properly aligned */
-            zc->ldmState.hashTable = (ldmEntry_t*)ptr;
-            ptr = zc->ldmState.hashTable + ldmHSize;
-            zc->ldmSequences = (rawSeq*)ptr;
-            ptr = zc->ldmSequences + maxNbLdmSeq;
+            zc->ldmState.hashTable = (ldmEntry_t*)ZSTD_cwksp_reserve_aligned(ws, ldmHSize * sizeof(ldmEntry_t));
+            memset(zc->ldmState.hashTable, 0, ldmHSize * sizeof(ldmEntry_t));
+            zc->ldmSequences = (rawSeq*)ZSTD_cwksp_reserve_aligned(ws, maxNbLdmSeq * sizeof(rawSeq));
             zc->maxNbLdmSequences = maxNbLdmSeq;
 
             memset(&zc->ldmState.window, 0, sizeof(zc->ldmState.window));
-        }
-        assert(((size_t)ptr & 3) == 0); /* ensure ptr is properly aligned */
-
-        /* sequences storage */
-        zc->seqStore.maxNbSeq = maxNbSeq;
-        zc->seqStore.sequencesStart = (seqDef*)ptr;
-        ptr = zc->seqStore.sequencesStart + maxNbSeq;
-        zc->seqStore.llCode = (BYTE*) ptr;
-        zc->seqStore.mlCode = zc->seqStore.llCode + maxNbSeq;
-        zc->seqStore.ofCode = zc->seqStore.mlCode + maxNbSeq;
-        zc->seqStore.litStart = zc->seqStore.ofCode + maxNbSeq;
-        /* ZSTD_wildcopy() is used to copy into the literals buffer,
-         * so we have to oversize the buffer by WILDCOPY_OVERLENGTH bytes.
-         */
-        zc->seqStore.maxNbLit = blockSize;
-        ptr = zc->seqStore.litStart + blockSize + WILDCOPY_OVERLENGTH;
-
-        /* ldm bucketOffsets table */
-        if (params.ldmParams.enableLdm) {
-            size_t const ldmBucketSize =
-                  ((size_t)1) << (params.ldmParams.hashLog -
-                                  params.ldmParams.bucketSizeLog);
-            memset(ptr, 0, ldmBucketSize);
-            zc->ldmState.bucketOffsets = (BYTE*)ptr;
-            ptr = zc->ldmState.bucketOffsets + ldmBucketSize;
             ZSTD_window_clear(&zc->ldmState.window);
         }
-        ZSTD_referenceExternalSequences(zc, NULL, 0);
-
-        /* buffers */
-        zc->inBuffSize = buffInSize;
-        zc->inBuff = (char*)ptr;
-        zc->outBuffSize = buffOutSize;
-        zc->outBuff = zc->inBuff + buffInSize;
+
+        DEBUGLOG(3, "wksp: finished allocating, %zd bytes remain available", ZSTD_cwksp_available_space(ws));
 
         return 0;
     }
@@ -1604,15 +1579,15 @@
 };
 
 static int ZSTD_shouldAttachDict(const ZSTD_CDict* cdict,
-                                 ZSTD_CCtx_params params,
+                                 const ZSTD_CCtx_params* params,
                                  U64 pledgedSrcSize)
 {
     size_t cutoff = attachDictSizeCutoffs[cdict->matchState.cParams.strategy];
     return ( pledgedSrcSize <= cutoff
           || pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN
-          || params.attachDictPref == ZSTD_dictForceAttach )
-        && params.attachDictPref != ZSTD_dictForceCopy
-        && !params.forceWindow; /* dictMatchState isn't correctly
+          || params->attachDictPref == ZSTD_dictForceAttach )
+        && params->attachDictPref != ZSTD_dictForceCopy
+        && !params->forceWindow; /* dictMatchState isn't correctly
                                  * handled in _enforceMaxDist */
 }
 
@@ -1630,8 +1605,8 @@
          * has its own tables. */
         params.cParams = ZSTD_adjustCParams_internal(*cdict_cParams, pledgedSrcSize, 0);
         params.cParams.windowLog = windowLog;
-        ZSTD_resetCCtx_internal(cctx, params, pledgedSrcSize,
-                                ZSTDcrp_continue, zbuff);
+        FORWARD_IF_ERROR(ZSTD_resetCCtx_internal(cctx, params, pledgedSrcSize,
+                                                 ZSTDcrp_makeClean, zbuff));
         assert(cctx->appliedParams.cParams.strategy == cdict_cParams->strategy);
     }
 
@@ -1679,30 +1654,36 @@
         /* Copy only compression parameters related to tables. */
         params.cParams = *cdict_cParams;
         params.cParams.windowLog = windowLog;
-        ZSTD_resetCCtx_internal(cctx, params, pledgedSrcSize,
-                                ZSTDcrp_noMemset, zbuff);
+        FORWARD_IF_ERROR(ZSTD_resetCCtx_internal(cctx, params, pledgedSrcSize,
+                                                 ZSTDcrp_leaveDirty, zbuff));
         assert(cctx->appliedParams.cParams.strategy == cdict_cParams->strategy);
         assert(cctx->appliedParams.cParams.hashLog == cdict_cParams->hashLog);
         assert(cctx->appliedParams.cParams.chainLog == cdict_cParams->chainLog);
     }
 
+    ZSTD_cwksp_mark_tables_dirty(&cctx->workspace);
+
     /* copy tables */
     {   size_t const chainSize = (cdict_cParams->strategy == ZSTD_fast) ? 0 : ((size_t)1 << cdict_cParams->chainLog);
         size_t const hSize =  (size_t)1 << cdict_cParams->hashLog;
-        size_t const tableSpace = (chainSize + hSize) * sizeof(U32);
-        assert((U32*)cctx->blockState.matchState.chainTable == (U32*)cctx->blockState.matchState.hashTable + hSize);  /* chainTable must follow hashTable */
-        assert((U32*)cctx->blockState.matchState.hashTable3 == (U32*)cctx->blockState.matchState.chainTable + chainSize);
-        assert((U32*)cdict->matchState.chainTable == (U32*)cdict->matchState.hashTable + hSize);  /* chainTable must follow hashTable */
-        assert((U32*)cdict->matchState.hashTable3 == (U32*)cdict->matchState.chainTable + chainSize);
-        memcpy(cctx->blockState.matchState.hashTable, cdict->matchState.hashTable, tableSpace);   /* presumes all tables follow each other */
+
+        memcpy(cctx->blockState.matchState.hashTable,
+               cdict->matchState.hashTable,
+               hSize * sizeof(U32));
+        memcpy(cctx->blockState.matchState.chainTable,
+               cdict->matchState.chainTable,
+               chainSize * sizeof(U32));
     }
 
     /* Zero the hashTable3, since the cdict never fills it */
-    {   size_t const h3Size = (size_t)1 << cctx->blockState.matchState.hashLog3;
+    {   int const h3log = cctx->blockState.matchState.hashLog3;
+        size_t const h3Size = h3log ? ((size_t)1 << h3log) : 0;
         assert(cdict->matchState.hashLog3 == 0);
         memset(cctx->blockState.matchState.hashTable3, 0, h3Size * sizeof(U32));
     }
 
+    ZSTD_cwksp_mark_tables_clean(&cctx->workspace);
+
     /* copy dictionary offsets */
     {   ZSTD_matchState_t const* srcMatchState = &cdict->matchState;
         ZSTD_matchState_t* dstMatchState = &cctx->blockState.matchState;
@@ -1724,7 +1705,7 @@
  * in-place. We decide here which strategy to use. */
 static size_t ZSTD_resetCCtx_usingCDict(ZSTD_CCtx* cctx,
                             const ZSTD_CDict* cdict,
-                            ZSTD_CCtx_params params,
+                            const ZSTD_CCtx_params* params,
                             U64 pledgedSrcSize,
                             ZSTD_buffered_policy_e zbuff)
 {
@@ -1734,10 +1715,10 @@
 
     if (ZSTD_shouldAttachDict(cdict, params, pledgedSrcSize)) {
         return ZSTD_resetCCtx_byAttachingCDict(
-            cctx, cdict, params, pledgedSrcSize, zbuff);
+            cctx, cdict, *params, pledgedSrcSize, zbuff);
     } else {
         return ZSTD_resetCCtx_byCopyingCDict(
-            cctx, cdict, params, pledgedSrcSize, zbuff);
+            cctx, cdict, *params, pledgedSrcSize, zbuff);
     }
 }
 
@@ -1763,7 +1744,7 @@
         params.cParams = srcCCtx->appliedParams.cParams;
         params.fParams = fParams;
         ZSTD_resetCCtx_internal(dstCCtx, params, pledgedSrcSize,
-                                ZSTDcrp_noMemset, zbuff);
+                                ZSTDcrp_leaveDirty, zbuff);
         assert(dstCCtx->appliedParams.cParams.windowLog == srcCCtx->appliedParams.cParams.windowLog);
         assert(dstCCtx->appliedParams.cParams.strategy == srcCCtx->appliedParams.cParams.strategy);
         assert(dstCCtx->appliedParams.cParams.hashLog == srcCCtx->appliedParams.cParams.hashLog);
@@ -1771,16 +1752,27 @@
         assert(dstCCtx->blockState.matchState.hashLog3 == srcCCtx->blockState.matchState.hashLog3);
     }
 
+    ZSTD_cwksp_mark_tables_dirty(&dstCCtx->workspace);
+
     /* copy tables */
     {   size_t const chainSize = (srcCCtx->appliedParams.cParams.strategy == ZSTD_fast) ? 0 : ((size_t)1 << srcCCtx->appliedParams.cParams.chainLog);
         size_t const hSize =  (size_t)1 << srcCCtx->appliedParams.cParams.hashLog;
-        size_t const h3Size = (size_t)1 << srcCCtx->blockState.matchState.hashLog3;
-        size_t const tableSpace = (chainSize + hSize + h3Size) * sizeof(U32);
-        assert((U32*)dstCCtx->blockState.matchState.chainTable == (U32*)dstCCtx->blockState.matchState.hashTable + hSize);  /* chainTable must follow hashTable */
-        assert((U32*)dstCCtx->blockState.matchState.hashTable3 == (U32*)dstCCtx->blockState.matchState.chainTable + chainSize);
-        memcpy(dstCCtx->blockState.matchState.hashTable, srcCCtx->blockState.matchState.hashTable, tableSpace);   /* presumes all tables follow each other */
+        int const h3log = srcCCtx->blockState.matchState.hashLog3;
+        size_t const h3Size = h3log ? ((size_t)1 << h3log) : 0;
+
+        memcpy(dstCCtx->blockState.matchState.hashTable,
+               srcCCtx->blockState.matchState.hashTable,
+               hSize * sizeof(U32));
+        memcpy(dstCCtx->blockState.matchState.chainTable,
+               srcCCtx->blockState.matchState.chainTable,
+               chainSize * sizeof(U32));
+        memcpy(dstCCtx->blockState.matchState.hashTable3,
+               srcCCtx->blockState.matchState.hashTable3,
+               h3Size * sizeof(U32));
     }
 
+    ZSTD_cwksp_mark_tables_clean(&dstCCtx->workspace);
+
     /* copy dictionary offsets */
     {
         const ZSTD_matchState_t* srcMatchState = &srcCCtx->blockState.matchState;
@@ -1831,6 +1823,20 @@
     int rowNb;
     assert((size & (ZSTD_ROWSIZE-1)) == 0);  /* multiple of ZSTD_ROWSIZE */
     assert(size < (1U<<31));   /* can be casted to int */
+
+#if defined (MEMORY_SANITIZER) && !defined (ZSTD_MSAN_DONT_POISON_WORKSPACE)
+    /* To validate that the table re-use logic is sound, and that we don't
+     * access table space that we haven't cleaned, we re-"poison" the table
+     * space every time we mark it dirty.
+     *
+     * This function however is intended to operate on those dirty tables and
+     * re-clean them. So when this function is used correctly, we can unpoison
+     * the memory it operated on. This introduces a blind spot though, since
+     * if we now try to operate on __actually__ poisoned memory, we will not
+     * detect that. */
+    __msan_unpoison(table, size * sizeof(U32));
+#endif
+
     for (rowNb=0 ; rowNb < nbRows ; rowNb++) {
         int column;
         for (column=0; column<ZSTD_ROWSIZE; column++) {
@@ -1938,7 +1944,7 @@
                                 ZSTD_entropyCTables_t* nextEntropy,
                           const ZSTD_CCtx_params* cctxParams,
                                 void* dst, size_t dstCapacity,
-                                void* workspace, size_t wkspSize,
+                                void* entropyWorkspace, size_t entropyWkspSize,
                           const int bmi2)
 {
     const int longOffsets = cctxParams->cParams.windowLog > STREAM_ACCUMULATOR_MIN;
@@ -1971,7 +1977,7 @@
                                     ZSTD_disableLiteralsCompression(cctxParams),
                                     op, dstCapacity,
                                     literals, litSize,
-                                    workspace, wkspSize,
+                                    entropyWorkspace, entropyWkspSize,
                                     bmi2);
         FORWARD_IF_ERROR(cSize);
         assert(cSize <= dstCapacity);
@@ -1981,12 +1987,17 @@
     /* Sequences Header */
     RETURN_ERROR_IF((oend-op) < 3 /*max nbSeq Size*/ + 1 /*seqHead*/,
                     dstSize_tooSmall);
-    if (nbSeq < 0x7F)
+    if (nbSeq < 128) {
         *op++ = (BYTE)nbSeq;
-    else if (nbSeq < LONGNBSEQ)
-        op[0] = (BYTE)((nbSeq>>8) + 0x80), op[1] = (BYTE)nbSeq, op+=2;
-    else
-        op[0]=0xFF, MEM_writeLE16(op+1, (U16)(nbSeq - LONGNBSEQ)), op+=3;
+    } else if (nbSeq < LONGNBSEQ) {
+        op[0] = (BYTE)((nbSeq>>8) + 0x80);
+        op[1] = (BYTE)nbSeq;
+        op+=2;
+    } else {
+        op[0]=0xFF;
+        MEM_writeLE16(op+1, (U16)(nbSeq - LONGNBSEQ));
+        op+=3;
+    }
     assert(op <= oend);
     if (nbSeq==0) {
         /* Copy the old tables over as if we repeated them */
@@ -2002,7 +2013,7 @@
     ZSTD_seqToCodes(seqStorePtr);
     /* build CTable for Literal Lengths */
     {   unsigned max = MaxLL;
-        size_t const mostFrequent = HIST_countFast_wksp(count, &max, llCodeTable, nbSeq, workspace, wkspSize);   /* can't fail */
+        size_t const mostFrequent = HIST_countFast_wksp(count, &max, llCodeTable, nbSeq, entropyWorkspace, entropyWkspSize);   /* can't fail */
         DEBUGLOG(5, "Building LL table");
         nextEntropy->fse.litlength_repeatMode = prevEntropy->fse.litlength_repeatMode;
         LLtype = ZSTD_selectEncodingType(&nextEntropy->fse.litlength_repeatMode,
@@ -2012,10 +2023,14 @@
                                         ZSTD_defaultAllowed, strategy);
         assert(set_basic < set_compressed && set_rle < set_compressed);
         assert(!(LLtype < set_compressed && nextEntropy->fse.litlength_repeatMode != FSE_repeat_none)); /* We don't copy tables */
-        {   size_t const countSize = ZSTD_buildCTable(op, (size_t)(oend - op), CTable_LitLength, LLFSELog, (symbolEncodingType_e)LLtype,
-                                                    count, max, llCodeTable, nbSeq, LL_defaultNorm, LL_defaultNormLog, MaxLL,
-                                                    prevEntropy->fse.litlengthCTable, sizeof(prevEntropy->fse.litlengthCTable),
-                                                    workspace, wkspSize);
+        {   size_t const countSize = ZSTD_buildCTable(
+                op, (size_t)(oend - op),
+                CTable_LitLength, LLFSELog, (symbolEncodingType_e)LLtype,
+                count, max, llCodeTable, nbSeq,
+                LL_defaultNorm, LL_defaultNormLog, MaxLL,
+                prevEntropy->fse.litlengthCTable,
+                sizeof(prevEntropy->fse.litlengthCTable),
+                entropyWorkspace, entropyWkspSize);
             FORWARD_IF_ERROR(countSize);
             if (LLtype == set_compressed)
                 lastNCount = op;
@@ -2024,7 +2039,8 @@
     }   }
     /* build CTable for Offsets */
     {   unsigned max = MaxOff;
-        size_t const mostFrequent = HIST_countFast_wksp(count, &max, ofCodeTable, nbSeq, workspace, wkspSize);  /* can't fail */
+        size_t const mostFrequent = HIST_countFast_wksp(
+            count, &max, ofCodeTable, nbSeq, entropyWorkspace, entropyWkspSize);  /* can't fail */
         /* We can only use the basic table if max <= DefaultMaxOff, otherwise the offsets are too large */
         ZSTD_defaultPolicy_e const defaultPolicy = (max <= DefaultMaxOff) ? ZSTD_defaultAllowed : ZSTD_defaultDisallowed;
         DEBUGLOG(5, "Building OF table");
@@ -2035,10 +2051,14 @@
                                         OF_defaultNorm, OF_defaultNormLog,
                                         defaultPolicy, strategy);
         assert(!(Offtype < set_compressed && nextEntropy->fse.offcode_repeatMode != FSE_repeat_none)); /* We don't copy tables */
-        {   size_t const countSize = ZSTD_buildCTable(op, (size_t)(oend - op), CTable_OffsetBits, OffFSELog, (symbolEncodingType_e)Offtype,
-                                                    count, max, ofCodeTable, nbSeq, OF_defaultNorm, OF_defaultNormLog, DefaultMaxOff,
-                                                    prevEntropy->fse.offcodeCTable, sizeof(prevEntropy->fse.offcodeCTable),
-                                                    workspace, wkspSize);
+        {   size_t const countSize = ZSTD_buildCTable(
+                op, (size_t)(oend - op),
+                CTable_OffsetBits, OffFSELog, (symbolEncodingType_e)Offtype,
+                count, max, ofCodeTable, nbSeq,
+                OF_defaultNorm, OF_defaultNormLog, DefaultMaxOff,
+                prevEntropy->fse.offcodeCTable,
+                sizeof(prevEntropy->fse.offcodeCTable),
+                entropyWorkspace, entropyWkspSize);
             FORWARD_IF_ERROR(countSize);
             if (Offtype == set_compressed)
                 lastNCount = op;
@@ -2047,7 +2067,8 @@
     }   }
     /* build CTable for MatchLengths */
     {   unsigned max = MaxML;
-        size_t const mostFrequent = HIST_countFast_wksp(count, &max, mlCodeTable, nbSeq, workspace, wkspSize);   /* can't fail */
+        size_t const mostFrequent = HIST_countFast_wksp(
+            count, &max, mlCodeTable, nbSeq, entropyWorkspace, entropyWkspSize);   /* can't fail */
         DEBUGLOG(5, "Building ML table (remaining space : %i)", (int)(oend-op));
         nextEntropy->fse.matchlength_repeatMode = prevEntropy->fse.matchlength_repeatMode;
         MLtype = ZSTD_selectEncodingType(&nextEntropy->fse.matchlength_repeatMode,
@@ -2056,10 +2077,14 @@
                                         ML_defaultNorm, ML_defaultNormLog,
                                         ZSTD_defaultAllowed, strategy);
         assert(!(MLtype < set_compressed && nextEntropy->fse.matchlength_repeatMode != FSE_repeat_none)); /* We don't copy tables */
-        {   size_t const countSize = ZSTD_buildCTable(op, (size_t)(oend - op), CTable_MatchLength, MLFSELog, (symbolEncodingType_e)MLtype,
-                                                    count, max, mlCodeTable, nbSeq, ML_defaultNorm, ML_defaultNormLog, MaxML,
-                                                    prevEntropy->fse.matchlengthCTable, sizeof(prevEntropy->fse.matchlengthCTable),
-                                                    workspace, wkspSize);
+        {   size_t const countSize = ZSTD_buildCTable(
+                op, (size_t)(oend - op),
+                CTable_MatchLength, MLFSELog, (symbolEncodingType_e)MLtype,
+                count, max, mlCodeTable, nbSeq,
+                ML_defaultNorm, ML_defaultNormLog, MaxML,
+                prevEntropy->fse.matchlengthCTable,
+                sizeof(prevEntropy->fse.matchlengthCTable),
+                entropyWorkspace, entropyWkspSize);
             FORWARD_IF_ERROR(countSize);
             if (MLtype == set_compressed)
                 lastNCount = op;
@@ -2107,13 +2132,13 @@
                        const ZSTD_CCtx_params* cctxParams,
                              void* dst, size_t dstCapacity,
                              size_t srcSize,
-                             void* workspace, size_t wkspSize,
+                             void* entropyWorkspace, size_t entropyWkspSize,
                              int bmi2)
 {
     size_t const cSize = ZSTD_compressSequences_internal(
                             seqStorePtr, prevEntropy, nextEntropy, cctxParams,
                             dst, dstCapacity,
-                            workspace, wkspSize, bmi2);
+                            entropyWorkspace, entropyWkspSize, bmi2);
     if (cSize == 0) return 0;
     /* When srcSize <= dstCapacity, there is enough space to write a raw uncompressed block.
      * Since we ran out of space, block must be not compressible, so fall back to raw uncompressed block.
@@ -2264,11 +2289,99 @@
     return ZSTDbss_compress;
 }
 
+static void ZSTD_copyBlockSequences(ZSTD_CCtx* zc)
+{
+    const seqStore_t* seqStore = ZSTD_getSeqStore(zc);
+    const seqDef* seqs = seqStore->sequencesStart;
+    size_t seqsSize = seqStore->sequences - seqs;
+
+    ZSTD_Sequence* outSeqs = &zc->seqCollector.seqStart[zc->seqCollector.seqIndex];
+    size_t i; size_t position; int repIdx;
+
+    assert(zc->seqCollector.seqIndex + 1 < zc->seqCollector.maxSequences);
+    for (i = 0, position = 0; i < seqsSize; ++i) {
+        outSeqs[i].offset = seqs[i].offset;
+        outSeqs[i].litLength = seqs[i].litLength;
+        outSeqs[i].matchLength = seqs[i].matchLength + MINMATCH;
+
+        if (i == seqStore->longLengthPos) {
+            if (seqStore->longLengthID == 1) {
+                outSeqs[i].litLength += 0x10000;
+            } else if (seqStore->longLengthID == 2) {
+                outSeqs[i].matchLength += 0x10000;
+            }
+        }
+
+        if (outSeqs[i].offset <= ZSTD_REP_NUM) {
+            outSeqs[i].rep = outSeqs[i].offset;
+            repIdx = (unsigned int)i - outSeqs[i].offset;
+
+            if (outSeqs[i].litLength == 0) {
+                if (outSeqs[i].offset < 3) {
+                    --repIdx;
+                } else {
+                    repIdx = (unsigned int)i - 1;
+                }
+                ++outSeqs[i].rep;
+            }
+            assert(repIdx >= -3);
+            outSeqs[i].offset = repIdx >= 0 ? outSeqs[repIdx].offset : repStartValue[-repIdx - 1];
+            if (outSeqs[i].rep == 4) {
+                --outSeqs[i].offset;
+            }
+        } else {
+            outSeqs[i].offset -= ZSTD_REP_NUM;
+        }
+
+        position += outSeqs[i].litLength;
+        outSeqs[i].matchPos = (unsigned int)position;
+        position += outSeqs[i].matchLength;
+    }
+    zc->seqCollector.seqIndex += seqsSize;
+}
+
+size_t ZSTD_getSequences(ZSTD_CCtx* zc, ZSTD_Sequence* outSeqs,
+    size_t outSeqsSize, const void* src, size_t srcSize)
+{
+    const size_t dstCapacity = ZSTD_compressBound(srcSize);
+    void* dst = ZSTD_malloc(dstCapacity, ZSTD_defaultCMem);
+    SeqCollector seqCollector;
+
+    RETURN_ERROR_IF(dst == NULL, memory_allocation);
+
+    seqCollector.collectSequences = 1;
+    seqCollector.seqStart = outSeqs;
+    seqCollector.seqIndex = 0;
+    seqCollector.maxSequences = outSeqsSize;
+    zc->seqCollector = seqCollector;
+
+    ZSTD_compress2(zc, dst, dstCapacity, src, srcSize);
+    ZSTD_free(dst, ZSTD_defaultCMem);
+    return zc->seqCollector.seqIndex;
+}
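/* Illustrative sketch (not part of the vendored sources): a minimal use of
 * the new ZSTD_getSequences() entry point added in this release. It assumes
 * ZSTD_STATIC_LINKING_ONLY, since the sequence-collection API sits in the
 * experimental section of zstd.h, and it abbreviates error handling. The
 * helper name is hypothetical; one ZSTD_Sequence per source byte is a
 * generous upper bound because every match spans at least MINMATCH bytes. */
#define ZSTD_STATIC_LINKING_ONLY
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

static void dump_sequences(const void* src, size_t srcSize)
{
    ZSTD_CCtx* const cctx = ZSTD_createCCtx();
    ZSTD_Sequence* const seqs = (ZSTD_Sequence*)malloc(srcSize * sizeof(ZSTD_Sequence));
    size_t const nbSeqs = ZSTD_getSequences(cctx, seqs, srcSize, src, srcSize);
    size_t i;
    for (i = 0; i < nbSeqs; ++i) {
        printf("litLength=%u matchLength=%u offset=%u\n",
               seqs[i].litLength, seqs[i].matchLength, seqs[i].offset);
    }
    free(seqs);
    ZSTD_freeCCtx(cctx);
}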
+
+/* Returns true if the given block is a RLE block */
+static int ZSTD_isRLE(const BYTE *ip, size_t length) {
+    size_t i;
+    if (length < 2) return 1;
+    for (i = 1; i < length; ++i) {
+        if (ip[0] != ip[i]) return 0;
+    }
+    return 1;
+}
+
 static size_t ZSTD_compressBlock_internal(ZSTD_CCtx* zc,
                                         void* dst, size_t dstCapacity,
-                                        const void* src, size_t srcSize)
+                                        const void* src, size_t srcSize, U32 frame)
 {
+    /* This is the upper bound for the length of an rle block.
+     * This isn't the actual upper bound. Finding the real threshold
+     * needs further investigation.
+     */
+    const U32 rleMaxLength = 25;
     size_t cSize;
+    const BYTE* ip = (const BYTE*)src;
+    BYTE* op = (BYTE*)dst;
     DEBUGLOG(5, "ZSTD_compressBlock_internal (dstCapacity=%u, dictLimit=%u, nextToUpdate=%u)",
                 (unsigned)dstCapacity, (unsigned)zc->blockState.matchState.window.dictLimit,
                 (unsigned)zc->blockState.matchState.nextToUpdate);
@@ -2278,6 +2391,11 @@
         if (bss == ZSTDbss_noCompress) { cSize = 0; goto out; }
     }
 
+    if (zc->seqCollector.collectSequences) {
+        ZSTD_copyBlockSequences(zc);
+        return 0;
+    }
+
     /* encode sequences and literals */
     cSize = ZSTD_compressSequences(&zc->seqStore,
             &zc->blockState.prevCBlock->entropy, &zc->blockState.nextCBlock->entropy,
@@ -2287,8 +2405,21 @@
             zc->entropyWorkspace, HUF_WORKSPACE_SIZE /* statically allocated in resetCCtx */,
             zc->bmi2);
 
+    if (frame &&
+        /* We don't want to emit our first block as a RLE even if it qualifies because
+         * doing so will cause the decoder (cli only) to throw a "should consume all input" error.
+         * This is only an issue for zstd <= v1.4.3
+         */
+        !zc->isFirstBlock &&
+        cSize < rleMaxLength &&
+        ZSTD_isRLE(ip, srcSize))
+    {
+        cSize = 1;
+        op[0] = ip[0];
+    }
+
 out:
-    if (!ZSTD_isError(cSize) && cSize != 0) {
+    if (!ZSTD_isError(cSize) && cSize > 1) {
         /* confirm repcodes and entropy tables when emitting a compressed block */
         ZSTD_compressedBlockState_t* const tmp = zc->blockState.prevCBlock;
         zc->blockState.prevCBlock = zc->blockState.nextCBlock;
@@ -2305,7 +2436,11 @@
 }
 
 
-static void ZSTD_overflowCorrectIfNeeded(ZSTD_matchState_t* ms, ZSTD_CCtx_params const* params, void const* ip, void const* iend)
+static void ZSTD_overflowCorrectIfNeeded(ZSTD_matchState_t* ms,
+                                         ZSTD_cwksp* ws,
+                                         ZSTD_CCtx_params const* params,
+                                         void const* ip,
+                                         void const* iend)
 {
     if (ZSTD_window_needOverflowCorrection(ms->window, iend)) {
         U32 const maxDist = (U32)1 << params->cParams.windowLog;
@@ -2314,7 +2449,9 @@
         ZSTD_STATIC_ASSERT(ZSTD_CHAINLOG_MAX <= 30);
         ZSTD_STATIC_ASSERT(ZSTD_WINDOWLOG_MAX_32 <= 30);
         ZSTD_STATIC_ASSERT(ZSTD_WINDOWLOG_MAX <= 31);
+        ZSTD_cwksp_mark_tables_dirty(ws);
         ZSTD_reduceIndex(ms, params, correction);
+        ZSTD_cwksp_mark_tables_clean(ws);
         if (ms->nextToUpdate < correction) ms->nextToUpdate = 0;
         else ms->nextToUpdate -= correction;
         /* invalidate dictionaries on overflow correction */
@@ -2323,7 +2460,6 @@
     }
 }
 
-
 /*! ZSTD_compress_frameChunk() :
 *   Compress a chunk of data into one or multiple blocks.
 *   All blocks will be terminated, all input will be consumed.
@@ -2357,7 +2493,8 @@
                         "not enough space to store compressed block");
         if (remaining < blockSize) blockSize = remaining;
 
-        ZSTD_overflowCorrectIfNeeded(ms, &cctx->appliedParams, ip, ip + blockSize);
+        ZSTD_overflowCorrectIfNeeded(
+            ms, &cctx->workspace, &cctx->appliedParams, ip, ip + blockSize);
         ZSTD_checkDictValidity(&ms->window, ip + blockSize, maxDist, &ms->loadedDictEnd, &ms->dictMatchState);
 
         /* Ensure hash/chain table insertion resumes no sooner than lowlimit */
@@ -2365,15 +2502,16 @@
 
         {   size_t cSize = ZSTD_compressBlock_internal(cctx,
                                 op+ZSTD_blockHeaderSize, dstCapacity-ZSTD_blockHeaderSize,
-                                ip, blockSize);
+                                ip, blockSize, 1 /* frame */);
             FORWARD_IF_ERROR(cSize);
-
             if (cSize == 0) {  /* block is not compressible */
                 cSize = ZSTD_noCompressBlock(op, dstCapacity, ip, blockSize, lastBlock);
                 FORWARD_IF_ERROR(cSize);
             } else {
-                U32 const cBlockHeader24 = lastBlock + (((U32)bt_compressed)<<1) + (U32)(cSize << 3);
-                MEM_writeLE24(op, cBlockHeader24);
+                const U32 cBlockHeader = cSize == 1 ?
+                    lastBlock + (((U32)bt_rle)<<1) + (U32)(blockSize << 3) :
+                    lastBlock + (((U32)bt_compressed)<<1) + (U32)(cSize << 3);
+                MEM_writeLE24(op, cBlockHeader);
                 cSize += ZSTD_blockHeaderSize;
             }
 
@@ -2383,6 +2521,7 @@
             op += cSize;
             assert(dstCapacity >= cSize);
             dstCapacity -= cSize;
+            cctx->isFirstBlock = 0;
             DEBUGLOG(5, "ZSTD_compress_frameChunk: adding a block of size %u",
                         (unsigned)cSize);
     }   }
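/* Illustrative sketch (not part of the vendored sources): how the 3-byte zstd
 * block header written above packs its fields -- bit 0 is the last-block flag,
 * bits 1-2 carry the block type (bt_raw / bt_rle / bt_compressed), and bits
 * 3..23 carry the size. For an RLE block the recorded size is the regenerated
 * blockSize even though only a single byte of content is emitted, matching the
 * cBlockHeader expression in ZSTD_compress_frameChunk above. The helper name
 * is hypothetical. */
static unsigned makeBlockHeader24(unsigned lastBlock, unsigned blockType, unsigned regeneratedSize)
{
    return lastBlock + (blockType << 1) + (regeneratedSize << 3);
}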
@@ -2393,25 +2532,25 @@
 
 
 static size_t ZSTD_writeFrameHeader(void* dst, size_t dstCapacity,
-                                    ZSTD_CCtx_params params, U64 pledgedSrcSize, U32 dictID)
+                                    const ZSTD_CCtx_params* params, U64 pledgedSrcSize, U32 dictID)
 {   BYTE* const op = (BYTE*)dst;
     U32   const dictIDSizeCodeLength = (dictID>0) + (dictID>=256) + (dictID>=65536);   /* 0-3 */
-    U32   const dictIDSizeCode = params.fParams.noDictIDFlag ? 0 : dictIDSizeCodeLength;   /* 0-3 */
-    U32   const checksumFlag = params.fParams.checksumFlag>0;
-    U32   const windowSize = (U32)1 << params.cParams.windowLog;
-    U32   const singleSegment = params.fParams.contentSizeFlag && (windowSize >= pledgedSrcSize);
-    BYTE  const windowLogByte = (BYTE)((params.cParams.windowLog - ZSTD_WINDOWLOG_ABSOLUTEMIN) << 3);
-    U32   const fcsCode = params.fParams.contentSizeFlag ?
+    U32   const dictIDSizeCode = params->fParams.noDictIDFlag ? 0 : dictIDSizeCodeLength;   /* 0-3 */
+    U32   const checksumFlag = params->fParams.checksumFlag>0;
+    U32   const windowSize = (U32)1 << params->cParams.windowLog;
+    U32   const singleSegment = params->fParams.contentSizeFlag && (windowSize >= pledgedSrcSize);
+    BYTE  const windowLogByte = (BYTE)((params->cParams.windowLog - ZSTD_WINDOWLOG_ABSOLUTEMIN) << 3);
+    U32   const fcsCode = params->fParams.contentSizeFlag ?
                      (pledgedSrcSize>=256) + (pledgedSrcSize>=65536+256) + (pledgedSrcSize>=0xFFFFFFFFU) : 0;  /* 0-3 */
     BYTE  const frameHeaderDescriptionByte = (BYTE)(dictIDSizeCode + (checksumFlag<<2) + (singleSegment<<5) + (fcsCode<<6) );
     size_t pos=0;
 
-    assert(!(params.fParams.contentSizeFlag && pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN));
+    assert(!(params->fParams.contentSizeFlag && pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN));
     RETURN_ERROR_IF(dstCapacity < ZSTD_FRAMEHEADERSIZE_MAX, dstSize_tooSmall);
     DEBUGLOG(4, "ZSTD_writeFrameHeader : dictIDFlag : %u ; dictID : %u ; dictIDSizeCode : %u",
-                !params.fParams.noDictIDFlag, (unsigned)dictID, (unsigned)dictIDSizeCode);
-
-    if (params.format == ZSTD_f_zstd1) {
+                !params->fParams.noDictIDFlag, (unsigned)dictID, (unsigned)dictIDSizeCode);
+
+    if (params->format == ZSTD_f_zstd1) {
         MEM_writeLE32(dst, ZSTD_MAGICNUMBER);
         pos = 4;
     }
@@ -2477,7 +2616,7 @@
                     "missing init (ZSTD_compressBegin)");
 
     if (frame && (cctx->stage==ZSTDcs_init)) {
-        fhSize = ZSTD_writeFrameHeader(dst, dstCapacity, cctx->appliedParams,
+        fhSize = ZSTD_writeFrameHeader(dst, dstCapacity, &cctx->appliedParams,
                                        cctx->pledgedSrcSizePlusOne-1, cctx->dictID);
         FORWARD_IF_ERROR(fhSize);
         assert(fhSize <= dstCapacity);
@@ -2497,13 +2636,15 @@
 
     if (!frame) {
         /* overflow check and correction for block mode */
-        ZSTD_overflowCorrectIfNeeded(ms, &cctx->appliedParams, src, (BYTE const*)src + srcSize);
+        ZSTD_overflowCorrectIfNeeded(
+            ms, &cctx->workspace, &cctx->appliedParams,
+            src, (BYTE const*)src + srcSize);
     }
 
     DEBUGLOG(5, "ZSTD_compressContinue_internal (blockSize=%u)", (unsigned)cctx->blockSize);
     {   size_t const cSize = frame ?
                              ZSTD_compress_frameChunk (cctx, dst, dstCapacity, src, srcSize, lastFrameChunk) :
-                             ZSTD_compressBlock_internal (cctx, dst, dstCapacity, src, srcSize);
+                             ZSTD_compressBlock_internal (cctx, dst, dstCapacity, src, srcSize, 0 /* frame */);
         FORWARD_IF_ERROR(cSize);
         cctx->consumedSrcSize += srcSize;
         cctx->producedCSize += (cSize + fhSize);
@@ -2550,6 +2691,7 @@
  *  @return : 0, or an error code
  */
 static size_t ZSTD_loadDictionaryContent(ZSTD_matchState_t* ms,
+                                         ZSTD_cwksp* ws,
                                          ZSTD_CCtx_params const* params,
                                          const void* src, size_t srcSize,
                                          ZSTD_dictTableLoadMethod_e dtlm)
@@ -2570,7 +2712,7 @@
         size_t const chunk = MIN(remaining, ZSTD_CHUNKSIZE_MAX);
         const BYTE* const ichunk = ip + chunk;
 
-        ZSTD_overflowCorrectIfNeeded(ms, params, ip, ichunk);
+        ZSTD_overflowCorrectIfNeeded(ms, ws, params, ip, ichunk);
 
         switch(params->cParams.strategy)
         {
@@ -2629,10 +2771,11 @@
 /*! ZSTD_loadZstdDictionary() :
  * @return : dictID, or an error code
  *  assumptions : magic number supposed already checked
- *                dictSize supposed > 8
+ *                dictSize supposed >= 8
  */
 static size_t ZSTD_loadZstdDictionary(ZSTD_compressedBlockState_t* bs,
                                       ZSTD_matchState_t* ms,
+                                      ZSTD_cwksp* ws,
                                       ZSTD_CCtx_params const* params,
                                       const void* dict, size_t dictSize,
                                       ZSTD_dictTableLoadMethod_e dtlm,
@@ -2645,7 +2788,7 @@
     size_t dictID;
 
     ZSTD_STATIC_ASSERT(HUF_WORKSPACE_SIZE >= (1<<MAX(MLFSELog,LLFSELog)));
-    assert(dictSize > 8);
+    assert(dictSize >= 8);
     assert(MEM_readLE32(dictPtr) == ZSTD_MAGIC_DICTIONARY);
 
     dictPtr += 4;   /* skip magic number */
@@ -2728,7 +2871,8 @@
         bs->entropy.fse.offcode_repeatMode = FSE_repeat_valid;
         bs->entropy.fse.matchlength_repeatMode = FSE_repeat_valid;
         bs->entropy.fse.litlength_repeatMode = FSE_repeat_valid;
-        FORWARD_IF_ERROR(ZSTD_loadDictionaryContent(ms, params, dictPtr, dictContentSize, dtlm));
+        FORWARD_IF_ERROR(ZSTD_loadDictionaryContent(
+            ms, ws, params, dictPtr, dictContentSize, dtlm));
         return dictID;
     }
 }
@@ -2738,6 +2882,7 @@
 static size_t
 ZSTD_compress_insertDictionary(ZSTD_compressedBlockState_t* bs,
                                ZSTD_matchState_t* ms,
+                               ZSTD_cwksp* ws,
                          const ZSTD_CCtx_params* params,
                          const void* dict, size_t dictSize,
                                ZSTD_dictContentType_e dictContentType,
@@ -2745,27 +2890,35 @@
                                void* workspace)
 {
     DEBUGLOG(4, "ZSTD_compress_insertDictionary (dictSize=%u)", (U32)dictSize);
-    if ((dict==NULL) || (dictSize<=8)) return 0;
+    if ((dict==NULL) || (dictSize<8)) {
+        RETURN_ERROR_IF(dictContentType == ZSTD_dct_fullDict, dictionary_wrong);
+        return 0;
+    }
 
     ZSTD_reset_compressedBlockState(bs);
 
     /* dict restricted modes */
     if (dictContentType == ZSTD_dct_rawContent)
-        return ZSTD_loadDictionaryContent(ms, params, dict, dictSize, dtlm);
+        return ZSTD_loadDictionaryContent(ms, ws, params, dict, dictSize, dtlm);
 
     if (MEM_readLE32(dict) != ZSTD_MAGIC_DICTIONARY) {
         if (dictContentType == ZSTD_dct_auto) {
             DEBUGLOG(4, "raw content dictionary detected");
-            return ZSTD_loadDictionaryContent(ms, params, dict, dictSize, dtlm);
+            return ZSTD_loadDictionaryContent(
+                ms, ws, params, dict, dictSize, dtlm);
         }
         RETURN_ERROR_IF(dictContentType == ZSTD_dct_fullDict, dictionary_wrong);
         assert(0);   /* impossible */
     }
 
     /* dict as full zstd dictionary */
-    return ZSTD_loadZstdDictionary(bs, ms, params, dict, dictSize, dtlm, workspace);
+    return ZSTD_loadZstdDictionary(
+        bs, ms, ws, params, dict, dictSize, dtlm, workspace);
 }
 
+#define ZSTD_USE_CDICT_PARAMS_SRCSIZE_CUTOFF (128 KB)
+#define ZSTD_USE_CDICT_PARAMS_DICTSIZE_MULTIPLIER (6)
+
 /*! ZSTD_compressBegin_internal() :
  * @return : 0, or an error code */
 static size_t ZSTD_compressBegin_internal(ZSTD_CCtx* cctx,
@@ -2773,23 +2926,34 @@
                                     ZSTD_dictContentType_e dictContentType,
                                     ZSTD_dictTableLoadMethod_e dtlm,
                                     const ZSTD_CDict* cdict,
-                                    ZSTD_CCtx_params params, U64 pledgedSrcSize,
+                                    const ZSTD_CCtx_params* params, U64 pledgedSrcSize,
                                     ZSTD_buffered_policy_e zbuff)
 {
-    DEBUGLOG(4, "ZSTD_compressBegin_internal: wlog=%u", params.cParams.windowLog);
+    DEBUGLOG(4, "ZSTD_compressBegin_internal: wlog=%u", params->cParams.windowLog);
     /* params are supposed to be fully validated at this point */
-    assert(!ZSTD_isError(ZSTD_checkCParams(params.cParams)));
+    assert(!ZSTD_isError(ZSTD_checkCParams(params->cParams)));
     assert(!((dict) && (cdict)));  /* either dict or cdict, not both */
-
-    if (cdict && cdict->dictContentSize>0) {
+    if ( (cdict)
+      && (cdict->dictContentSize > 0)
+      && ( pledgedSrcSize < ZSTD_USE_CDICT_PARAMS_SRCSIZE_CUTOFF
+        || pledgedSrcSize < cdict->dictContentSize * ZSTD_USE_CDICT_PARAMS_DICTSIZE_MULTIPLIER
+        || pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN
+        || cdict->compressionLevel == 0)
+      && (params->attachDictPref != ZSTD_dictForceLoad) ) {
         return ZSTD_resetCCtx_usingCDict(cctx, cdict, params, pledgedSrcSize, zbuff);
     }
 
-    FORWARD_IF_ERROR( ZSTD_resetCCtx_internal(cctx, params, pledgedSrcSize,
-                                     ZSTDcrp_continue, zbuff) );
-    {   size_t const dictID = ZSTD_compress_insertDictionary(
-                cctx->blockState.prevCBlock, &cctx->blockState.matchState,
-                &params, dict, dictSize, dictContentType, dtlm, cctx->entropyWorkspace);
+    FORWARD_IF_ERROR( ZSTD_resetCCtx_internal(cctx, *params, pledgedSrcSize,
+                                     ZSTDcrp_makeClean, zbuff) );
+    {   size_t const dictID = cdict ?
+                ZSTD_compress_insertDictionary(
+                        cctx->blockState.prevCBlock, &cctx->blockState.matchState,
+                        &cctx->workspace, params, cdict->dictContent, cdict->dictContentSize,
+                        dictContentType, dtlm, cctx->entropyWorkspace)
+              : ZSTD_compress_insertDictionary(
+                        cctx->blockState.prevCBlock, &cctx->blockState.matchState,
+                        &cctx->workspace, params, dict, dictSize,
+                        dictContentType, dtlm, cctx->entropyWorkspace);
         FORWARD_IF_ERROR(dictID);
         assert(dictID <= UINT_MAX);
         cctx->dictID = (U32)dictID;
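/* Illustrative sketch (not part of the vendored sources): the predicate the
 * new cutoff macros feed into. ZSTD_compressBegin_internal() above now reuses
 * the CDict's own parameters only when the pledged source size is small
 * relative to the dictionary (or unknown), or when the CDict reports
 * compressionLevel == 0. Helper name and parameters are hypothetical. */
#include <zstd.h>

static int wouldPreferCDictParams(unsigned long long pledgedSrcSize,
                                  size_t dictContentSize,
                                  int cdictCompressionLevel)
{
    return pledgedSrcSize < (128 * 1024)                   /* SRCSIZE_CUTOFF */
        || pledgedSrcSize < dictContentSize * 6ULL         /* DICTSIZE_MULTIPLIER */
        || pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN
        || cdictCompressionLevel == 0;
}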
@@ -2802,12 +2966,12 @@
                                     ZSTD_dictContentType_e dictContentType,
                                     ZSTD_dictTableLoadMethod_e dtlm,
                                     const ZSTD_CDict* cdict,
-                                    ZSTD_CCtx_params params,
+                                    const ZSTD_CCtx_params* params,
                                     unsigned long long pledgedSrcSize)
 {
-    DEBUGLOG(4, "ZSTD_compressBegin_advanced_internal: wlog=%u", params.cParams.windowLog);
+    DEBUGLOG(4, "ZSTD_compressBegin_advanced_internal: wlog=%u", params->cParams.windowLog);
     /* compression parameters verification and optimization */
-    FORWARD_IF_ERROR( ZSTD_checkCParams(params.cParams) );
+    FORWARD_IF_ERROR( ZSTD_checkCParams(params->cParams) );
     return ZSTD_compressBegin_internal(cctx,
                                        dict, dictSize, dictContentType, dtlm,
                                        cdict,
@@ -2822,21 +2986,21 @@
                                    ZSTD_parameters params, unsigned long long pledgedSrcSize)
 {
     ZSTD_CCtx_params const cctxParams =
-            ZSTD_assignParamsToCCtxParams(cctx->requestedParams, params);
+            ZSTD_assignParamsToCCtxParams(&cctx->requestedParams, params);
     return ZSTD_compressBegin_advanced_internal(cctx,
                                             dict, dictSize, ZSTD_dct_auto, ZSTD_dtlm_fast,
                                             NULL /*cdict*/,
-                                            cctxParams, pledgedSrcSize);
+                                            &cctxParams, pledgedSrcSize);
 }
 
 size_t ZSTD_compressBegin_usingDict(ZSTD_CCtx* cctx, const void* dict, size_t dictSize, int compressionLevel)
 {
     ZSTD_parameters const params = ZSTD_getParams(compressionLevel, ZSTD_CONTENTSIZE_UNKNOWN, dictSize);
     ZSTD_CCtx_params const cctxParams =
-            ZSTD_assignParamsToCCtxParams(cctx->requestedParams, params);
+            ZSTD_assignParamsToCCtxParams(&cctx->requestedParams, params);
     DEBUGLOG(4, "ZSTD_compressBegin_usingDict (dictSize=%u)", (unsigned)dictSize);
     return ZSTD_compressBegin_internal(cctx, dict, dictSize, ZSTD_dct_auto, ZSTD_dtlm_fast, NULL,
-                                       cctxParams, ZSTD_CONTENTSIZE_UNKNOWN, ZSTDb_not_buffered);
+                                       &cctxParams, ZSTD_CONTENTSIZE_UNKNOWN, ZSTDb_not_buffered);
 }
 
 size_t ZSTD_compressBegin(ZSTD_CCtx* cctx, int compressionLevel)
@@ -2859,7 +3023,7 @@
 
     /* special case : empty frame */
     if (cctx->stage == ZSTDcs_init) {
-        fhSize = ZSTD_writeFrameHeader(dst, dstCapacity, cctx->appliedParams, 0, 0);
+        fhSize = ZSTD_writeFrameHeader(dst, dstCapacity, &cctx->appliedParams, 0, 0);
         FORWARD_IF_ERROR(fhSize);
         dstCapacity -= fhSize;
         op += fhSize;
@@ -2920,13 +3084,13 @@
                                       ZSTD_parameters params)
 {
     ZSTD_CCtx_params const cctxParams =
-            ZSTD_assignParamsToCCtxParams(cctx->requestedParams, params);
+            ZSTD_assignParamsToCCtxParams(&cctx->requestedParams, params);
     DEBUGLOG(4, "ZSTD_compress_internal");
     return ZSTD_compress_advanced_internal(cctx,
                                            dst, dstCapacity,
                                            src, srcSize,
                                            dict, dictSize,
-                                           cctxParams);
+                                           &cctxParams);
 }
 
 size_t ZSTD_compress_advanced (ZSTD_CCtx* cctx,
@@ -2950,7 +3114,7 @@
         void* dst, size_t dstCapacity,
         const void* src, size_t srcSize,
         const void* dict,size_t dictSize,
-        ZSTD_CCtx_params params)
+        const ZSTD_CCtx_params* params)
 {
     DEBUGLOG(4, "ZSTD_compress_advanced_internal (srcSize:%u)", (unsigned)srcSize);
     FORWARD_IF_ERROR( ZSTD_compressBegin_internal(cctx,
@@ -2966,9 +3130,9 @@
                                int compressionLevel)
 {
     ZSTD_parameters const params = ZSTD_getParams(compressionLevel, srcSize + (!srcSize), dict ? dictSize : 0);
-    ZSTD_CCtx_params cctxParams = ZSTD_assignParamsToCCtxParams(cctx->requestedParams, params);
+    ZSTD_CCtx_params cctxParams = ZSTD_assignParamsToCCtxParams(&cctx->requestedParams, params);
     assert(params.fParams.contentSizeFlag == 1);
-    return ZSTD_compress_advanced_internal(cctx, dst, dstCapacity, src, srcSize, dict, dictSize, cctxParams);
+    return ZSTD_compress_advanced_internal(cctx, dst, dstCapacity, src, srcSize, dict, dictSize, &cctxParams);
 }
 
 size_t ZSTD_compressCCtx(ZSTD_CCtx* cctx,
@@ -3003,8 +3167,11 @@
         ZSTD_dictLoadMethod_e dictLoadMethod)
 {
     DEBUGLOG(5, "sizeof(ZSTD_CDict) : %u", (unsigned)sizeof(ZSTD_CDict));
-    return sizeof(ZSTD_CDict) + HUF_WORKSPACE_SIZE + ZSTD_sizeof_matchState(&cParams, /* forCCtx */ 0)
-           + (dictLoadMethod == ZSTD_dlm_byRef ? 0 : dictSize);
+    return ZSTD_cwksp_alloc_size(sizeof(ZSTD_CDict))
+         + ZSTD_cwksp_alloc_size(HUF_WORKSPACE_SIZE)
+         + ZSTD_sizeof_matchState(&cParams, /* forCCtx */ 0)
+         + (dictLoadMethod == ZSTD_dlm_byRef ? 0
+            : ZSTD_cwksp_alloc_size(ZSTD_cwksp_align(dictSize, sizeof(void *))));
 }
 
 size_t ZSTD_estimateCDictSize(size_t dictSize, int compressionLevel)
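
ZSTD_estimateCDictSize_advanced() now charges each separately reserved piece (the CDict struct, the entropy workspace, and, for a by-copy dictionary, the pointer-aligned dictionary content) through ZSTD_cwksp_alloc_size(), matching the single-workspace layout introduced below. A hedged usage sketch of the corresponding public API, pairing the estimator with the static initializer patched further down (ZSTD_initStaticCDict() from zstd.h's experimental section; the caller keeps ownership of the buffer):

    /* Sketch only, not part of the vendored patch. Requires
     * ZSTD_STATIC_LINKING_ONLY before including zstd.h. */
    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>
    #include <stdlib.h>

    static const ZSTD_CDict* make_static_cdict(const void* dict, size_t dictSize, int level)
    {
        ZSTD_compressionParameters const cParams = ZSTD_getCParams(level, 0, dictSize);
        size_t const wkspSize = ZSTD_estimateCDictSize_advanced(dictSize, cParams, ZSTD_dlm_byCopy);
        void* const wksp = malloc(wkspSize);   /* malloc() is at least 8-byte aligned on mainstream targets */
        const ZSTD_CDict* cdict;
        if (wksp == NULL) return NULL;
        cdict = ZSTD_initStaticCDict(wksp, wkspSize, dict, dictSize,
                                     ZSTD_dlm_byCopy, ZSTD_dct_auto, cParams);
        if (cdict == NULL) { free(wksp); return NULL; }   /* estimate too small or bad workspace */
        return cdict;   /* caller must keep wksp alive as long as cdict is used, then free(wksp) */
    }
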
@@ -3017,7 +3184,9 @@
 {
     if (cdict==NULL) return 0;   /* support sizeof on NULL */
     DEBUGLOG(5, "sizeof(*cdict) : %u", (unsigned)sizeof(*cdict));
-    return cdict->workspaceSize + (cdict->dictBuffer ? cdict->dictContentSize : 0) + sizeof(*cdict);
+    /* cdict may be in the workspace */
+    return (cdict->workspace.workspace == cdict ? 0 : sizeof(*cdict))
+        + ZSTD_cwksp_sizeof(&cdict->workspace);
 }
 
 static size_t ZSTD_initCDict_internal(
@@ -3031,28 +3200,29 @@
     assert(!ZSTD_checkCParams(cParams));
     cdict->matchState.cParams = cParams;
     if ((dictLoadMethod == ZSTD_dlm_byRef) || (!dictBuffer) || (!dictSize)) {
-        cdict->dictBuffer = NULL;
         cdict->dictContent = dictBuffer;
     } else {
-        void* const internalBuffer = ZSTD_malloc(dictSize, cdict->customMem);
-        cdict->dictBuffer = internalBuffer;
+         void *internalBuffer = ZSTD_cwksp_reserve_object(&cdict->workspace, ZSTD_cwksp_align(dictSize, sizeof(void*)));
+        RETURN_ERROR_IF(!internalBuffer, memory_allocation);
         cdict->dictContent = internalBuffer;
-        RETURN_ERROR_IF(!internalBuffer, memory_allocation);
         memcpy(internalBuffer, dictBuffer, dictSize);
     }
     cdict->dictContentSize = dictSize;
 
+    cdict->entropyWorkspace = (U32*)ZSTD_cwksp_reserve_object(&cdict->workspace, HUF_WORKSPACE_SIZE);
+
+
     /* Reset the state to no dictionary */
     ZSTD_reset_compressedBlockState(&cdict->cBlockState);
-    {   void* const end = ZSTD_reset_matchState(&cdict->matchState,
-                            (U32*)cdict->workspace + HUF_WORKSPACE_SIZE_U32,
-                            &cParams,
-                             ZSTDcrp_continue, ZSTD_resetTarget_CDict);
-        assert(end == (char*)cdict->workspace + cdict->workspaceSize);
-        (void)end;
-    }
+    FORWARD_IF_ERROR(ZSTD_reset_matchState(
+        &cdict->matchState,
+        &cdict->workspace,
+        &cParams,
+        ZSTDcrp_makeClean,
+        ZSTDirp_reset,
+        ZSTD_resetTarget_CDict));
     /* (Maybe) load the dictionary
-     * Skips loading the dictionary if it is <= 8 bytes.
+     * Skips loading the dictionary if it is < 8 bytes.
      */
     {   ZSTD_CCtx_params params;
         memset(&params, 0, sizeof(params));
@@ -3060,9 +3230,9 @@
         params.fParams.contentSizeFlag = 1;
         params.cParams = cParams;
         {   size_t const dictID = ZSTD_compress_insertDictionary(
-                    &cdict->cBlockState, &cdict->matchState, &params,
-                    cdict->dictContent, cdict->dictContentSize,
-                    dictContentType, ZSTD_dtlm_full, cdict->workspace);
+                    &cdict->cBlockState, &cdict->matchState, &cdict->workspace,
+                    &params, cdict->dictContent, cdict->dictContentSize,
+                    dictContentType, ZSTD_dtlm_full, cdict->entropyWorkspace);
             FORWARD_IF_ERROR(dictID);
             assert(dictID <= (size_t)(U32)-1);
             cdict->dictID = (U32)dictID;
@@ -3080,18 +3250,29 @@
     DEBUGLOG(3, "ZSTD_createCDict_advanced, mode %u", (unsigned)dictContentType);
     if (!customMem.customAlloc ^ !customMem.customFree) return NULL;
 
-    {   ZSTD_CDict* const cdict = (ZSTD_CDict*)ZSTD_malloc(sizeof(ZSTD_CDict), customMem);
-        size_t const workspaceSize = HUF_WORKSPACE_SIZE + ZSTD_sizeof_matchState(&cParams, /* forCCtx */ 0);
+    {   size_t const workspaceSize =
+            ZSTD_cwksp_alloc_size(sizeof(ZSTD_CDict)) +
+            ZSTD_cwksp_alloc_size(HUF_WORKSPACE_SIZE) +
+            ZSTD_sizeof_matchState(&cParams, /* forCCtx */ 0) +
+            (dictLoadMethod == ZSTD_dlm_byRef ? 0
+             : ZSTD_cwksp_alloc_size(ZSTD_cwksp_align(dictSize, sizeof(void*))));
         void* const workspace = ZSTD_malloc(workspaceSize, customMem);
-
-        if (!cdict || !workspace) {
-            ZSTD_free(cdict, customMem);
+        ZSTD_cwksp ws;
+        ZSTD_CDict* cdict;
+
+        if (!workspace) {
             ZSTD_free(workspace, customMem);
             return NULL;
         }
+
+        ZSTD_cwksp_init(&ws, workspace, workspaceSize);
+
+        cdict = (ZSTD_CDict*)ZSTD_cwksp_reserve_object(&ws, sizeof(ZSTD_CDict));
+        assert(cdict != NULL);
+        ZSTD_cwksp_move(&cdict->workspace, &ws);
         cdict->customMem = customMem;
-        cdict->workspace = workspace;
-        cdict->workspaceSize = workspaceSize;
+        cdict->compressionLevel = 0; /* signals advanced API usage */
+
         if (ZSTD_isError( ZSTD_initCDict_internal(cdict,
                                         dictBuffer, dictSize,
                                         dictLoadMethod, dictContentType,
@@ -3107,9 +3288,12 @@
 ZSTD_CDict* ZSTD_createCDict(const void* dict, size_t dictSize, int compressionLevel)
 {
     ZSTD_compressionParameters cParams = ZSTD_getCParams(compressionLevel, 0, dictSize);
-    return ZSTD_createCDict_advanced(dict, dictSize,
-                                     ZSTD_dlm_byCopy, ZSTD_dct_auto,
-                                     cParams, ZSTD_defaultCMem);
+    ZSTD_CDict* cdict = ZSTD_createCDict_advanced(dict, dictSize,
+                                                  ZSTD_dlm_byCopy, ZSTD_dct_auto,
+                                                  cParams, ZSTD_defaultCMem);
+    if (cdict)
+        cdict->compressionLevel = compressionLevel == 0 ? ZSTD_CLEVEL_DEFAULT : compressionLevel;
+    return cdict;
 }
 
 ZSTD_CDict* ZSTD_createCDict_byReference(const void* dict, size_t dictSize, int compressionLevel)
@@ -3124,9 +3308,11 @@
 {
     if (cdict==NULL) return 0;   /* support free on NULL */
     {   ZSTD_customMem const cMem = cdict->customMem;
-        ZSTD_free(cdict->workspace, cMem);
-        ZSTD_free(cdict->dictBuffer, cMem);
-        ZSTD_free(cdict, cMem);
+        int cdictInWorkspace = ZSTD_cwksp_owns_buffer(&cdict->workspace, cdict);
+        ZSTD_cwksp_free(&cdict->workspace, cMem);
+        if (!cdictInWorkspace) {
+            ZSTD_free(cdict, cMem);
+        }
         return 0;
     }
 }
@@ -3152,28 +3338,30 @@
                                  ZSTD_compressionParameters cParams)
 {
     size_t const matchStateSize = ZSTD_sizeof_matchState(&cParams, /* forCCtx */ 0);
-    size_t const neededSize = sizeof(ZSTD_CDict) + (dictLoadMethod == ZSTD_dlm_byRef ? 0 : dictSize)
-                            + HUF_WORKSPACE_SIZE + matchStateSize;
-    ZSTD_CDict* const cdict = (ZSTD_CDict*) workspace;
-    void* ptr;
+    size_t const neededSize = ZSTD_cwksp_alloc_size(sizeof(ZSTD_CDict))
+                            + (dictLoadMethod == ZSTD_dlm_byRef ? 0
+                               : ZSTD_cwksp_alloc_size(ZSTD_cwksp_align(dictSize, sizeof(void*))))
+                            + ZSTD_cwksp_alloc_size(HUF_WORKSPACE_SIZE)
+                            + matchStateSize;
+    ZSTD_CDict* cdict;
+
     if ((size_t)workspace & 7) return NULL;  /* 8-aligned */
+
+    {
+        ZSTD_cwksp ws;
+        ZSTD_cwksp_init(&ws, workspace, workspaceSize);
+        cdict = (ZSTD_CDict*)ZSTD_cwksp_reserve_object(&ws, sizeof(ZSTD_CDict));
+        if (cdict == NULL) return NULL;
+        ZSTD_cwksp_move(&cdict->workspace, &ws);
+    }
+
     DEBUGLOG(4, "(workspaceSize < neededSize) : (%u < %u) => %u",
         (unsigned)workspaceSize, (unsigned)neededSize, (unsigned)(workspaceSize < neededSize));
     if (workspaceSize < neededSize) return NULL;
 
-    if (dictLoadMethod == ZSTD_dlm_byCopy) {
-        memcpy(cdict+1, dict, dictSize);
-        dict = cdict+1;
-        ptr = (char*)workspace + sizeof(ZSTD_CDict) + dictSize;
-    } else {
-        ptr = cdict+1;
-    }
-    cdict->workspace = ptr;
-    cdict->workspaceSize = HUF_WORKSPACE_SIZE + matchStateSize;
-
     if (ZSTD_isError( ZSTD_initCDict_internal(cdict,
                                               dict, dictSize,
-                                              ZSTD_dlm_byRef, dictContentType,
+                                              dictLoadMethod, dictContentType,
                                               cParams) ))
         return NULL;
 
@@ -3195,7 +3383,15 @@
     DEBUGLOG(4, "ZSTD_compressBegin_usingCDict_advanced");
     RETURN_ERROR_IF(cdict==NULL, dictionary_wrong);
     {   ZSTD_CCtx_params params = cctx->requestedParams;
-        params.cParams = ZSTD_getCParamsFromCDict(cdict);
+        params.cParams = ( pledgedSrcSize < ZSTD_USE_CDICT_PARAMS_SRCSIZE_CUTOFF
+                        || pledgedSrcSize < cdict->dictContentSize * ZSTD_USE_CDICT_PARAMS_DICTSIZE_MULTIPLIER
+                        || pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN
+                        || cdict->compressionLevel == 0 )
+                      && (params.attachDictPref != ZSTD_dictForceLoad) ?
+                ZSTD_getCParamsFromCDict(cdict)
+              : ZSTD_getCParams(cdict->compressionLevel,
+                                pledgedSrcSize,
+                                cdict->dictContentSize);
         /* Increase window log to fit the entire dictionary and source if the
          * source size is known. Limit the increase to 19, which is the
          * window log for compression level 1 with the largest source size.
@@ -3209,7 +3405,7 @@
         return ZSTD_compressBegin_internal(cctx,
                                            NULL, 0, ZSTD_dct_auto, ZSTD_dtlm_fast,
                                            cdict,
-                                           params, pledgedSrcSize,
+                                           &params, pledgedSrcSize,
                                            ZSTDb_not_buffered);
     }
 }
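
Starting from a CDict no longer blindly adopts its cParams: they are kept only when the pledged source is small relative to a fixed cutoff or to the dictionary, when its size is unknown, or when the CDict came through the advanced creation path (compressionLevel == 0), and only as long as the caller has not forced a full dictionary load; otherwise fresh cParams are derived from the CDict's stored level. A rough restatement of that predicate, written as if inside zstd_compress.c (the cutoff and multiplier constants are defined earlier in the file):

    /* Hedged sketch of the decision visible above; not a new function in the patch. */
    static int use_cdict_cparams(const ZSTD_CDict* cdict,
                                 ZSTD_dictAttachPref_e attachDictPref,
                                 unsigned long long pledgedSrcSize)
    {
        int const keepCDictParams =
               pledgedSrcSize < ZSTD_USE_CDICT_PARAMS_SRCSIZE_CUTOFF
            || pledgedSrcSize < cdict->dictContentSize * ZSTD_USE_CDICT_PARAMS_DICTSIZE_MULTIPLIER
            || pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN
            || cdict->compressionLevel == 0;   /* advanced API left no level to re-derive from */
        return keepCDictParams && (attachDictPref != ZSTD_dictForceLoad);
    }
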
@@ -3300,7 +3496,7 @@
     FORWARD_IF_ERROR( ZSTD_compressBegin_internal(cctx,
                                          dict, dictSize, dictContentType, ZSTD_dtlm_fast,
                                          cdict,
-                                         params, pledgedSrcSize,
+                                         &params, pledgedSrcSize,
                                          ZSTDb_buffered) );
 
     cctx->inToCompress = 0;
@@ -3334,13 +3530,14 @@
  *  Assumption 2 : either dict, or cdict, is defined, not both */
 size_t ZSTD_initCStream_internal(ZSTD_CStream* zcs,
                     const void* dict, size_t dictSize, const ZSTD_CDict* cdict,
-                    ZSTD_CCtx_params params, unsigned long long pledgedSrcSize)
+                    const ZSTD_CCtx_params* params,
+                    unsigned long long pledgedSrcSize)
 {
     DEBUGLOG(4, "ZSTD_initCStream_internal");
     FORWARD_IF_ERROR( ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only) );
     FORWARD_IF_ERROR( ZSTD_CCtx_setPledgedSrcSize(zcs, pledgedSrcSize) );
-    assert(!ZSTD_isError(ZSTD_checkCParams(params.cParams)));
-    zcs->requestedParams = params;
+    assert(!ZSTD_isError(ZSTD_checkCParams(params->cParams)));
+    zcs->requestedParams = *params;
     assert(!((dict) && (cdict)));  /* either dict or cdict, not both */
     if (dict) {
         FORWARD_IF_ERROR( ZSTD_CCtx_loadDictionary(zcs, dict, dictSize) );
@@ -3379,7 +3576,7 @@
 /* ZSTD_initCStream_advanced() :
  * pledgedSrcSize must be exact.
  * if srcSize is not known at init time, use value ZSTD_CONTENTSIZE_UNKNOWN.
- * dict is loaded with default parameters ZSTD_dm_auto and ZSTD_dlm_byCopy. */
+ * dict is loaded with default parameters ZSTD_dct_auto and ZSTD_dlm_byCopy. */
 size_t ZSTD_initCStream_advanced(ZSTD_CStream* zcs,
                                  const void* dict, size_t dictSize,
                                  ZSTD_parameters params, unsigned long long pss)
@@ -3393,7 +3590,7 @@
     FORWARD_IF_ERROR( ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only) );
     FORWARD_IF_ERROR( ZSTD_CCtx_setPledgedSrcSize(zcs, pledgedSrcSize) );
     FORWARD_IF_ERROR( ZSTD_checkCParams(params.cParams) );
-    zcs->requestedParams = ZSTD_assignParamsToCCtxParams(zcs->requestedParams, params);
+    zcs->requestedParams = ZSTD_assignParamsToCCtxParams(&zcs->requestedParams, params);
     FORWARD_IF_ERROR( ZSTD_CCtx_loadDictionary(zcs, dict, dictSize) );
     return 0;
 }
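
ZSTD_initCStream_advanced() is now a thin layer over the newer CCtx parameter API: reset the session, record the pledged size, translate the legacy ZSTD_parameters, and load the dictionary by copy with automatic content-type detection. A hedged caller-side sketch of the same sequence using only public calls (it substitutes a compression level for the legacy parameter struct, so it is an analogy rather than the library's exact path):

    /* Sketch, not part of the patch: the modern equivalent a caller would write. */
    static size_t init_stream_like_advanced(ZSTD_CCtx* cctx,
                                            const void* dict, size_t dictSize,
                                            int level, unsigned long long pledgedSrcSize)
    {
        size_t err;
        err = ZSTD_CCtx_reset(cctx, ZSTD_reset_session_only);
        if (ZSTD_isError(err)) return err;
        err = ZSTD_CCtx_setPledgedSrcSize(cctx, pledgedSrcSize);
        if (ZSTD_isError(err)) return err;
        err = ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, level);
        if (ZSTD_isError(err)) return err;
        return ZSTD_CCtx_loadDictionary(cctx, dict, dictSize);  /* by copy, content type auto */
    }
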
@@ -3643,7 +3840,7 @@
             if (cctx->mtctx == NULL) {
                 DEBUGLOG(4, "ZSTD_compressStream2: creating new mtctx for nbWorkers=%u",
                             params.nbWorkers);
-                cctx->mtctx = ZSTDMT_createCCtx_advanced(params.nbWorkers, cctx->customMem);
+                cctx->mtctx = ZSTDMT_createCCtx_advanced((U32)params.nbWorkers, cctx->customMem);
                 RETURN_ERROR_IF(cctx->mtctx == NULL, memory_allocation);
             }
             /* mt compression */
@@ -3771,8 +3968,8 @@
     { 19, 12, 13,  1,  6,  1, ZSTD_fast    },  /* base for negative levels */
     { 19, 13, 14,  1,  7,  0, ZSTD_fast    },  /* level  1 */
     { 20, 15, 16,  1,  6,  0, ZSTD_fast    },  /* level  2 */
-    { 21, 16, 17,  1,  5,  1, ZSTD_dfast   },  /* level  3 */
-    { 21, 18, 18,  1,  5,  1, ZSTD_dfast   },  /* level  4 */
+    { 21, 16, 17,  1,  5,  0, ZSTD_dfast   },  /* level  3 */
+    { 21, 18, 18,  1,  5,  0, ZSTD_dfast   },  /* level  4 */
     { 21, 18, 19,  2,  5,  2, ZSTD_greedy  },  /* level  5 */
     { 21, 19, 19,  3,  5,  4, ZSTD_greedy  },  /* level  6 */
     { 21, 19, 19,  3,  5,  8, ZSTD_lazy    },  /* level  7 */
@@ -3796,8 +3993,8 @@
     /* W,  C,  H,  S,  L,  T, strat */
     { 18, 12, 13,  1,  5,  1, ZSTD_fast    },  /* base for negative levels */
     { 18, 13, 14,  1,  6,  0, ZSTD_fast    },  /* level  1 */
-    { 18, 14, 14,  1,  5,  1, ZSTD_dfast   },  /* level  2 */
-    { 18, 16, 16,  1,  4,  1, ZSTD_dfast   },  /* level  3 */
+    { 18, 14, 14,  1,  5,  0, ZSTD_dfast   },  /* level  2 */
+    { 18, 16, 16,  1,  4,  0, ZSTD_dfast   },  /* level  3 */
     { 18, 16, 17,  2,  5,  2, ZSTD_greedy  },  /* level  4.*/
     { 18, 18, 18,  3,  5,  2, ZSTD_greedy  },  /* level  5.*/
     { 18, 18, 19,  3,  5,  4, ZSTD_lazy    },  /* level  6.*/
@@ -3823,8 +4020,8 @@
     { 17, 12, 12,  1,  5,  1, ZSTD_fast    },  /* base for negative levels */
     { 17, 12, 13,  1,  6,  0, ZSTD_fast    },  /* level  1 */
     { 17, 13, 15,  1,  5,  0, ZSTD_fast    },  /* level  2 */
-    { 17, 15, 16,  2,  5,  1, ZSTD_dfast   },  /* level  3 */
-    { 17, 17, 17,  2,  4,  1, ZSTD_dfast   },  /* level  4 */
+    { 17, 15, 16,  2,  5,  0, ZSTD_dfast   },  /* level  3 */
+    { 17, 17, 17,  2,  4,  0, ZSTD_dfast   },  /* level  4 */
     { 17, 16, 17,  3,  4,  2, ZSTD_greedy  },  /* level  5 */
     { 17, 17, 17,  3,  4,  4, ZSTD_lazy    },  /* level  6 */
     { 17, 17, 17,  3,  4,  8, ZSTD_lazy2   },  /* level  7 */
@@ -3849,7 +4046,7 @@
     { 14, 12, 13,  1,  5,  1, ZSTD_fast    },  /* base for negative levels */
     { 14, 14, 15,  1,  5,  0, ZSTD_fast    },  /* level  1 */
     { 14, 14, 15,  1,  4,  0, ZSTD_fast    },  /* level  2 */
-    { 14, 14, 15,  2,  4,  1, ZSTD_dfast   },  /* level  3 */
+    { 14, 14, 15,  2,  4,  0, ZSTD_dfast   },  /* level  3 */
     { 14, 14, 14,  4,  4,  2, ZSTD_greedy  },  /* level  4 */
     { 14, 14, 14,  3,  4,  4, ZSTD_lazy    },  /* level  5.*/
     { 14, 14, 14,  4,  4,  8, ZSTD_lazy2   },  /* level  6 */
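
The seven columns of these rows are positional ZSTD_compressionParameters initializers; the key in each table's header comment (W, C, H, S, L, T, strat) corresponds to windowLog, chainLog, hashLog, searchLog, minMatch, targetLength, strategy. The only change in this update is targetLength dropping from 1 to 0 on the ZSTD_dfast rows. As a hedged illustration, the level-3 row just above reads as:

    /* Sketch: { 14, 14, 15,  2,  4,  0, ZSTD_dfast } expanded field by field. */
    ZSTD_compressionParameters const level3_row = {
        14,         /* windowLog    (W) */
        14,         /* chainLog     (C) */
        15,         /* hashLog      (H) */
         2,         /* searchLog    (S) */
         4,         /* minMatch     (L) */
         0,         /* targetLength (T), lowered from 1 for ZSTD_dfast in this update */
        ZSTD_dfast  /* strategy */
    };
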
--- a/contrib/python-zstandard/zstd/compress/zstd_compress_internal.h	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/compress/zstd_compress_internal.h	Sat Dec 28 09:55:45 2019 -0800
@@ -19,6 +19,7 @@
 *  Dependencies
 ***************************************/
 #include "zstd_internal.h"
+#include "zstd_cwksp.h"
 #ifdef ZSTD_MULTITHREAD
 #  include "zstdmt_compress.h"
 #endif
@@ -192,6 +193,13 @@
   size_t capacity; /* The capacity starting from `seq` pointer */
 } rawSeqStore_t;
 
+typedef struct {
+    int collectSequences;
+    ZSTD_Sequence* seqStart;
+    size_t seqIndex;
+    size_t maxSequences;
+} SeqCollector;
+
 struct ZSTD_CCtx_params_s {
     ZSTD_format_e format;
     ZSTD_compressionParameters cParams;
@@ -203,6 +211,9 @@
     size_t targetCBlockSize;   /* Tries to fit compressed block size to be around targetCBlockSize.
                                 * No target when targetCBlockSize == 0.
                                 * There is no guarantee on compressed block size */
+    int srcSizeHint;           /* User's best guess of source size.
+                                * Hint is not valid when srcSizeHint == 0.
+                                * There is no guarantee that hint is close to actual source size */
 
     ZSTD_dictAttachPref_e attachDictPref;
     ZSTD_literalCompressionMode_e literalCompressionMode;
@@ -228,9 +239,7 @@
     ZSTD_CCtx_params appliedParams;
     U32   dictID;
 
-    int workSpaceOversizedDuration;
-    void* workSpace;
-    size_t workSpaceSize;
+    ZSTD_cwksp workspace; /* manages buffer for dynamic allocations */
     size_t blockSize;
     unsigned long long pledgedSrcSizePlusOne;  /* this way, 0 (default) == unknown */
     unsigned long long consumedSrcSize;
@@ -238,6 +247,8 @@
     XXH64_state_t xxhState;
     ZSTD_customMem customMem;
     size_t staticSize;
+    SeqCollector seqCollector;
+    int isFirstBlock;
 
     seqStore_t seqStore;      /* sequences storage ptrs */
     ldmState_t ldmState;      /* long distance matching state */
@@ -337,26 +348,57 @@
     return (srcSize >> minlog) + 2;
 }
 
+/*! ZSTD_safecopyLiterals() :
+ *  memcpy() function that won't read more than WILDCOPY_OVERLENGTH bytes past ilimit_w.
+ *  Only called when the sequence ends past ilimit_w, so it only needs to be optimized for single
+ *  large copies.
+ */
+static void ZSTD_safecopyLiterals(BYTE* op, BYTE const* ip, BYTE const* const iend, BYTE const* ilimit_w) {
+    assert(iend > ilimit_w);
+    if (ip <= ilimit_w) {
+        ZSTD_wildcopy(op, ip, ilimit_w - ip, ZSTD_no_overlap);
+        op += ilimit_w - ip;
+        ip = ilimit_w;
+    }
+    while (ip < iend) *op++ = *ip++;
+}
+
 /*! ZSTD_storeSeq() :
- *  Store a sequence (literal length, literals, offset code and match length code) into seqStore_t.
- *  `offsetCode` : distance to match + 3 (values 1-3 are repCodes).
+ *  Store a sequence (litlen, litPtr, offCode and mlBase) into seqStore_t.
+ *  `offCode` : distance to match + ZSTD_REP_MOVE (values <= ZSTD_REP_MOVE are repCodes).
  *  `mlBase` : matchLength - MINMATCH
+ *  Allowed to overread literals up to litLimit.
 */
-MEM_STATIC void ZSTD_storeSeq(seqStore_t* seqStorePtr, size_t litLength, const void* literals, U32 offsetCode, size_t mlBase)
+HINT_INLINE UNUSED_ATTR
+void ZSTD_storeSeq(seqStore_t* seqStorePtr, size_t litLength, const BYTE* literals, const BYTE* litLimit, U32 offCode, size_t mlBase)
 {
+    BYTE const* const litLimit_w = litLimit - WILDCOPY_OVERLENGTH;
+    BYTE const* const litEnd = literals + litLength;
 #if defined(DEBUGLEVEL) && (DEBUGLEVEL >= 6)
     static const BYTE* g_start = NULL;
     if (g_start==NULL) g_start = (const BYTE*)literals;  /* note : index only works for compression within a single segment */
     {   U32 const pos = (U32)((const BYTE*)literals - g_start);
         DEBUGLOG(6, "Cpos%7u :%3u literals, match%4u bytes at offCode%7u",
-               pos, (U32)litLength, (U32)mlBase+MINMATCH, (U32)offsetCode);
+               pos, (U32)litLength, (U32)mlBase+MINMATCH, (U32)offCode);
     }
 #endif
     assert((size_t)(seqStorePtr->sequences - seqStorePtr->sequencesStart) < seqStorePtr->maxNbSeq);
     /* copy Literals */
     assert(seqStorePtr->maxNbLit <= 128 KB);
     assert(seqStorePtr->lit + litLength <= seqStorePtr->litStart + seqStorePtr->maxNbLit);
-    ZSTD_wildcopy(seqStorePtr->lit, literals, (ptrdiff_t)litLength, ZSTD_no_overlap);
+    assert(literals + litLength <= litLimit);
+    if (litEnd <= litLimit_w) {
+        /* Common case: we can use wildcopy.
+         * First copy 16 bytes, because literals are likely short.
+         */
+        assert(WILDCOPY_OVERLENGTH >= 16);
+        ZSTD_copy16(seqStorePtr->lit, literals);
+        if (litLength > 16) {
+            ZSTD_wildcopy(seqStorePtr->lit+16, literals+16, (ptrdiff_t)litLength-16, ZSTD_no_overlap);
+        }
+    } else {
+        ZSTD_safecopyLiterals(seqStorePtr->lit, literals, litEnd, litLimit_w);
+    }
     seqStorePtr->lit += litLength;
 
     /* literal Length */
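
ZSTD_storeSeq() now receives the literal limit so the common path can wildcopy (over-reading up to WILDCOPY_OVERLENGTH bytes) and only falls back to ZSTD_safecopyLiterals() near the end of the block. The block compressors further down in this patch call it in exactly two shapes; a condensed, hedged sketch of those call sites:

    /* Sketch of the two call shapes used by the block compressors below
     * (all arguments are the compressor's locals; iend is the litLimit). */
    static void store_rep_match(seqStore_t* seqStore, const BYTE* anchor,
                                const BYTE* iend, size_t rLength)
    {
        /* offCode values <= ZSTD_REP_MOVE select repeat offsets; no new literals here. */
        ZSTD_storeSeq(seqStore, 0 /*litLen*/, anchor, iend, 0 /*offCode*/, rLength - MINMATCH);
    }

    static void store_new_match(seqStore_t* seqStore, const BYTE* ip, const BYTE* anchor,
                                const BYTE* iend, U32 offset, size_t mLength)
    {
        /* A real offset is shifted past the repcodes; the literals since anchor are flushed. */
        ZSTD_storeSeq(seqStore, (size_t)(ip - anchor), anchor, iend,
                      offset + ZSTD_REP_MOVE, mLength - MINMATCH);
    }
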
@@ -368,7 +410,7 @@
     seqStorePtr->sequences[0].litLength = (U16)litLength;
 
     /* match offset */
-    seqStorePtr->sequences[0].offset = offsetCode + 1;
+    seqStorePtr->sequences[0].offset = offCode + 1;
 
     /* match Length */
     if (mlBase>0xFFFF) {
@@ -910,7 +952,7 @@
 size_t ZSTD_initCStream_internal(ZSTD_CStream* zcs,
                      const void* dict, size_t dictSize,
                      const ZSTD_CDict* cdict,
-                     ZSTD_CCtx_params  params, unsigned long long pledgedSrcSize);
+                     const ZSTD_CCtx_params* params, unsigned long long pledgedSrcSize);
 
 void ZSTD_resetSeqStore(seqStore_t* ssPtr);
 
@@ -925,7 +967,7 @@
                                     ZSTD_dictContentType_e dictContentType,
                                     ZSTD_dictTableLoadMethod_e dtlm,
                                     const ZSTD_CDict* cdict,
-                                    ZSTD_CCtx_params params,
+                                    const ZSTD_CCtx_params* params,
                                     unsigned long long pledgedSrcSize);
 
 /* ZSTD_compress_advanced_internal() :
@@ -934,7 +976,7 @@
                                        void* dst, size_t dstCapacity,
                                  const void* src, size_t srcSize,
                                  const void* dict,size_t dictSize,
-                                 ZSTD_CCtx_params params);
+                                 const ZSTD_CCtx_params* params);
 
 
 /* ZSTD_writeLastEmptyBlock() :
--- a/contrib/python-zstandard/zstd/compress/zstd_compress_literals.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/compress/zstd_compress_literals.c	Sat Dec 28 09:55:45 2019 -0800
@@ -70,7 +70,7 @@
                               ZSTD_strategy strategy, int disableLiteralCompression,
                               void* dst, size_t dstCapacity,
                         const void* src, size_t srcSize,
-                              void* workspace, size_t wkspSize,
+                              void* entropyWorkspace, size_t entropyWorkspaceSize,
                         const int bmi2)
 {
     size_t const minGain = ZSTD_minGain(srcSize, strategy);
@@ -99,10 +99,15 @@
     {   HUF_repeat repeat = prevHuf->repeatMode;
         int const preferRepeat = strategy < ZSTD_lazy ? srcSize <= 1024 : 0;
         if (repeat == HUF_repeat_valid && lhSize == 3) singleStream = 1;
-        cLitSize = singleStream ? HUF_compress1X_repeat(ostart+lhSize, dstCapacity-lhSize, src, srcSize, 255, 11,
-                                      workspace, wkspSize, (HUF_CElt*)nextHuf->CTable, &repeat, preferRepeat, bmi2)
-                                : HUF_compress4X_repeat(ostart+lhSize, dstCapacity-lhSize, src, srcSize, 255, 11,
-                                      workspace, wkspSize, (HUF_CElt*)nextHuf->CTable, &repeat, preferRepeat, bmi2);
+        cLitSize = singleStream ?
+            HUF_compress1X_repeat(
+                ostart+lhSize, dstCapacity-lhSize, src, srcSize,
+                255, 11, entropyWorkspace, entropyWorkspaceSize,
+                (HUF_CElt*)nextHuf->CTable, &repeat, preferRepeat, bmi2) :
+            HUF_compress4X_repeat(
+                ostart+lhSize, dstCapacity-lhSize, src, srcSize,
+                255, 11, entropyWorkspace, entropyWorkspaceSize,
+                (HUF_CElt*)nextHuf->CTable, &repeat, preferRepeat, bmi2);
         if (repeat != HUF_repeat_none) {
             /* reused the existing table */
             hType = set_repeat;
--- a/contrib/python-zstandard/zstd/compress/zstd_compress_literals.h	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/compress/zstd_compress_literals.h	Sat Dec 28 09:55:45 2019 -0800
@@ -23,7 +23,7 @@
                               ZSTD_strategy strategy, int disableLiteralCompression,
                               void* dst, size_t dstCapacity,
                         const void* src, size_t srcSize,
-                              void* workspace, size_t wkspSize,
+                              void* entropyWorkspace, size_t entropyWorkspaceSize,
                         const int bmi2);
 
 #endif /* ZSTD_COMPRESS_LITERALS_H */
--- a/contrib/python-zstandard/zstd/compress/zstd_compress_sequences.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/compress/zstd_compress_sequences.c	Sat Dec 28 09:55:45 2019 -0800
@@ -222,7 +222,7 @@
                 const BYTE* codeTable, size_t nbSeq,
                 const S16* defaultNorm, U32 defaultNormLog, U32 defaultMax,
                 const FSE_CTable* prevCTable, size_t prevCTableSize,
-                void* workspace, size_t workspaceSize)
+                void* entropyWorkspace, size_t entropyWorkspaceSize)
 {
     BYTE* op = (BYTE*)dst;
     const BYTE* const oend = op + dstCapacity;
@@ -238,7 +238,7 @@
         memcpy(nextCTable, prevCTable, prevCTableSize);
         return 0;
     case set_basic:
-        FORWARD_IF_ERROR(FSE_buildCTable_wksp(nextCTable, defaultNorm, defaultMax, defaultNormLog, workspace, workspaceSize));  /* note : could be pre-calculated */
+        FORWARD_IF_ERROR(FSE_buildCTable_wksp(nextCTable, defaultNorm, defaultMax, defaultNormLog, entropyWorkspace, entropyWorkspaceSize));  /* note : could be pre-calculated */
         return 0;
     case set_compressed: {
         S16 norm[MaxSeq + 1];
@@ -252,7 +252,7 @@
         FORWARD_IF_ERROR(FSE_normalizeCount(norm, tableLog, count, nbSeq_1, max));
         {   size_t const NCountSize = FSE_writeNCount(op, oend - op, norm, max, tableLog);   /* overflow protected */
             FORWARD_IF_ERROR(NCountSize);
-            FORWARD_IF_ERROR(FSE_buildCTable_wksp(nextCTable, norm, max, tableLog, workspace, workspaceSize));
+            FORWARD_IF_ERROR(FSE_buildCTable_wksp(nextCTable, norm, max, tableLog, entropyWorkspace, entropyWorkspaceSize));
             return NCountSize;
         }
     }
--- a/contrib/python-zstandard/zstd/compress/zstd_compress_sequences.h	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/compress/zstd_compress_sequences.h	Sat Dec 28 09:55:45 2019 -0800
@@ -35,7 +35,7 @@
                 const BYTE* codeTable, size_t nbSeq,
                 const S16* defaultNorm, U32 defaultNormLog, U32 defaultMax,
                 const FSE_CTable* prevCTable, size_t prevCTableSize,
-                void* workspace, size_t workspaceSize);
+                void* entropyWorkspace, size_t entropyWorkspaceSize);
 
 size_t ZSTD_encodeSequences(
             void* dst, size_t dstCapacity,
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/python-zstandard/zstd/compress/zstd_cwksp.h	Sat Dec 28 09:55:45 2019 -0800
@@ -0,0 +1,535 @@
+/*
+ * Copyright (c) 2016-present, Yann Collet, Facebook, Inc.
+ * All rights reserved.
+ *
+ * This source code is licensed under both the BSD-style license (found in the
+ * LICENSE file in the root directory of this source tree) and the GPLv2 (found
+ * in the COPYING file in the root directory of this source tree).
+ * You may select, at your option, one of the above-listed licenses.
+ */
+
+#ifndef ZSTD_CWKSP_H
+#define ZSTD_CWKSP_H
+
+/*-*************************************
+*  Dependencies
+***************************************/
+#include "zstd_internal.h"
+
+#if defined (__cplusplus)
+extern "C" {
+#endif
+
+/*-*************************************
+*  Constants
+***************************************/
+
+/* define "workspace is too large" as this number of times larger than needed */
+#define ZSTD_WORKSPACETOOLARGE_FACTOR 3
+
+/* when the workspace is continuously too large
+ * for at least this number of times,
+ * the context's memory usage is considered wasteful,
+ * because it's sized to handle a worst-case scenario which rarely happens.
+ * In that case, resize it down to free some memory */
+#define ZSTD_WORKSPACETOOLARGE_MAXDURATION 128
+
+/* Since the workspace is effectively its own little malloc implementation /
+ * arena, when we run under ASAN, we should similarly insert redzones between
+ * each internal element of the workspace, so ASAN will catch overruns that
+ * reach outside an object but that stay inside the workspace.
+ *
+ * This defines the size of that redzone.
+ */
+#ifndef ZSTD_CWKSP_ASAN_REDZONE_SIZE
+#define ZSTD_CWKSP_ASAN_REDZONE_SIZE 128
+#endif
+
+/*-*************************************
+*  Structures
+***************************************/
+typedef enum {
+    ZSTD_cwksp_alloc_objects,
+    ZSTD_cwksp_alloc_buffers,
+    ZSTD_cwksp_alloc_aligned
+} ZSTD_cwksp_alloc_phase_e;
+
+/**
+ * Zstd fits all its internal datastructures into a single contiguous buffer,
+ * so that it only needs to perform a single OS allocation (or so that a buffer
+ * can be provided to it and it can perform no allocations at all). This buffer
+ * is called the workspace.
+ *
+ * Several optimizations complicate that process of allocating memory ranges
+ * from this workspace for each internal datastructure:
+ *
+ * - These different internal datastructures have different setup requirements:
+ *
+ *   - The static objects need to be cleared once and can then be trivially
+ *     reused for each compression.
+ *
+ *   - Various buffers don't need to be initialized at all--they are always
+ *     written into before they're read.
+ *
+ *   - The matchstate tables have a unique requirement that they don't need
+ *     their memory to be totally cleared, but they do need the memory to have
+ *     some bound, i.e., a guarantee that all values in the memory they've been
+ *     allocated are less than some maximum value (which is the starting value
+ *     for the indices that they will then use for compression). When this
+ *     guarantee is provided to them, they can use the memory without any setup
+ *     work. When it can't, they have to clear the area.
+ *
+ * - These buffers also have different alignment requirements.
+ *
+ * - We would like to reuse the objects in the workspace for multiple
+ *   compressions without having to perform any expensive reallocation or
+ *   reinitialization work.
+ *
+ * - We would like to be able to efficiently reuse the workspace across
+ *   multiple compressions **even when the compression parameters change** and
+ *   we need to resize some of the objects (where possible).
+ *
+ * To attempt to manage this buffer, given these constraints, the ZSTD_cwksp
+ * abstraction was created. It works as follows:
+ *
+ * Workspace Layout:
+ *
+ * [                        ... workspace ...                         ]
+ * [objects][tables ... ->] free space [<- ... aligned][<- ... buffers]
+ *
+ * The various objects that live in the workspace are divided into the
+ * following categories, and are allocated separately:
+ *
+ * - Static objects: this is optionally the enclosing ZSTD_CCtx or ZSTD_CDict,
+ *   so that literally everything fits in a single buffer. Note: if present,
+ *   this must be the first object in the workspace, since ZSTD_free{CCtx,
+ *   CDict}() rely on a pointer comparison to see whether one or two frees are
+ *   required.
+ *
+ * - Fixed size objects: these are fixed-size, fixed-count objects that are
+ *   nonetheless "dynamically" allocated in the workspace so that we can
+ *   control how they're initialized separately from the broader ZSTD_CCtx.
+ *   Examples:
+ *   - Entropy Workspace
+ *   - 2 x ZSTD_compressedBlockState_t
+ *   - CDict dictionary contents
+ *
+ * - Tables: these are any of several different datastructures (hash tables,
+ *   chain tables, binary trees) that all respect a common format: they are
+ *   uint32_t arrays, all of whose values are between 0 and (nextSrc - base).
+ *   Their sizes depend on the cparams.
+ *
+ * - Aligned: these buffers are used for various purposes that require 4 byte
+ *   alignment, but don't require any initialization before they're used.
+ *
+ * - Buffers: these buffers are used for various purposes that don't require
+ *   any alignment or initialization before they're used. This means they can
+ *   be moved around at no cost for a new compression.
+ *
+ * Allocating Memory:
+ *
+ * The various types of objects must be allocated in order, so they can be
+ * correctly packed into the workspace buffer. That order is:
+ *
+ * 1. Objects
+ * 2. Buffers
+ * 3. Aligned
+ * 4. Tables
+ *
+ * Attempts to reserve objects of different types out of order will fail.
+ */
+typedef struct {
+    void* workspace;
+    void* workspaceEnd;
+
+    void* objectEnd;
+    void* tableEnd;
+    void* tableValidEnd;
+    void* allocStart;
+
+    int allocFailed;
+    int workspaceOversizedDuration;
+    ZSTD_cwksp_alloc_phase_e phase;
+} ZSTD_cwksp;
+
+/*-*************************************
+*  Functions
+***************************************/
+
+MEM_STATIC size_t ZSTD_cwksp_available_space(ZSTD_cwksp* ws);
+
+MEM_STATIC void ZSTD_cwksp_assert_internal_consistency(ZSTD_cwksp* ws) {
+    (void)ws;
+    assert(ws->workspace <= ws->objectEnd);
+    assert(ws->objectEnd <= ws->tableEnd);
+    assert(ws->objectEnd <= ws->tableValidEnd);
+    assert(ws->tableEnd <= ws->allocStart);
+    assert(ws->tableValidEnd <= ws->allocStart);
+    assert(ws->allocStart <= ws->workspaceEnd);
+}
+
+/**
+ * Align must be a power of 2.
+ */
+MEM_STATIC size_t ZSTD_cwksp_align(size_t size, size_t const align) {
+    size_t const mask = align - 1;
+    assert((align & mask) == 0);
+    return (size + mask) & ~mask;
+}
+
+/**
+ * Use this to determine how much space in the workspace we will consume to
+ * allocate this object. (Normally it should be exactly the size of the object,
+ * but under special conditions, like ASAN, where we pad each object, it might
+ * be larger.)
+ *
+ * Since tables aren't currently redzoned, you don't need to call through this
+ * to figure out how much space you need for the matchState tables. Everything
+ * else is though.
+ */
+MEM_STATIC size_t ZSTD_cwksp_alloc_size(size_t size) {
+#if defined (ADDRESS_SANITIZER) && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
+    return size + 2 * ZSTD_CWKSP_ASAN_REDZONE_SIZE;
+#else
+    return size;
+#endif
+}
+
+MEM_STATIC void ZSTD_cwksp_internal_advance_phase(
+        ZSTD_cwksp* ws, ZSTD_cwksp_alloc_phase_e phase) {
+    assert(phase >= ws->phase);
+    if (phase > ws->phase) {
+        if (ws->phase < ZSTD_cwksp_alloc_buffers &&
+                phase >= ZSTD_cwksp_alloc_buffers) {
+            ws->tableValidEnd = ws->objectEnd;
+        }
+        if (ws->phase < ZSTD_cwksp_alloc_aligned &&
+                phase >= ZSTD_cwksp_alloc_aligned) {
+            /* If unaligned allocations down from a too-large top have left us
+             * unaligned, we need to realign our alloc ptr. Technically, this
+             * can consume space that is unaccounted for in the neededSpace
+             * calculation. However, I believe this can only happen when the
+             * workspace is too large, and specifically when it is too large
+             * by a larger margin than the space that will be consumed. */
+            /* TODO: cleaner, compiler warning friendly way to do this??? */
+            ws->allocStart = (BYTE*)ws->allocStart - ((size_t)ws->allocStart & (sizeof(U32)-1));
+            if (ws->allocStart < ws->tableValidEnd) {
+                ws->tableValidEnd = ws->allocStart;
+            }
+        }
+        ws->phase = phase;
+    }
+}
+
+/**
+ * Returns whether this object/buffer/etc was allocated in this workspace.
+ */
+MEM_STATIC int ZSTD_cwksp_owns_buffer(const ZSTD_cwksp* ws, const void* ptr) {
+    return (ptr != NULL) && (ws->workspace <= ptr) && (ptr <= ws->workspaceEnd);
+}
+
+/**
+ * Internal function. Do not use directly.
+ */
+MEM_STATIC void* ZSTD_cwksp_reserve_internal(
+        ZSTD_cwksp* ws, size_t bytes, ZSTD_cwksp_alloc_phase_e phase) {
+    void* alloc;
+    void* bottom = ws->tableEnd;
+    ZSTD_cwksp_internal_advance_phase(ws, phase);
+    alloc = (BYTE *)ws->allocStart - bytes;
+
+#if defined (ADDRESS_SANITIZER) && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
+    /* over-reserve space */
+    alloc = (BYTE *)alloc - 2 * ZSTD_CWKSP_ASAN_REDZONE_SIZE;
+#endif
+
+    DEBUGLOG(5, "cwksp: reserving %p %zd bytes, %zd bytes remaining",
+        alloc, bytes, ZSTD_cwksp_available_space(ws) - bytes);
+    ZSTD_cwksp_assert_internal_consistency(ws);
+    assert(alloc >= bottom);
+    if (alloc < bottom) {
+        DEBUGLOG(4, "cwksp: alloc failed!");
+        ws->allocFailed = 1;
+        return NULL;
+    }
+    if (alloc < ws->tableValidEnd) {
+        ws->tableValidEnd = alloc;
+    }
+    ws->allocStart = alloc;
+
+#if defined (ADDRESS_SANITIZER) && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
+    /* Move alloc so there's ZSTD_CWKSP_ASAN_REDZONE_SIZE unused space on
+     * either side. */
+    alloc = (BYTE *)alloc + ZSTD_CWKSP_ASAN_REDZONE_SIZE;
+    __asan_unpoison_memory_region(alloc, bytes);
+#endif
+
+    return alloc;
+}
+
+/**
+ * Reserves and returns unaligned memory.
+ */
+MEM_STATIC BYTE* ZSTD_cwksp_reserve_buffer(ZSTD_cwksp* ws, size_t bytes) {
+    return (BYTE*)ZSTD_cwksp_reserve_internal(ws, bytes, ZSTD_cwksp_alloc_buffers);
+}
+
+/**
+ * Reserves and returns memory sized on and aligned on sizeof(unsigned).
+ */
+MEM_STATIC void* ZSTD_cwksp_reserve_aligned(ZSTD_cwksp* ws, size_t bytes) {
+    assert((bytes & (sizeof(U32)-1)) == 0);
+    return ZSTD_cwksp_reserve_internal(ws, ZSTD_cwksp_align(bytes, sizeof(U32)), ZSTD_cwksp_alloc_aligned);
+}
+
+/**
+ * Aligned on sizeof(unsigned). These buffers have the special property that
+ * their values remain constrained, allowing us to re-use them without
+ * memset()-ing them.
+ */
+MEM_STATIC void* ZSTD_cwksp_reserve_table(ZSTD_cwksp* ws, size_t bytes) {
+    const ZSTD_cwksp_alloc_phase_e phase = ZSTD_cwksp_alloc_aligned;
+    void* alloc = ws->tableEnd;
+    void* end = (BYTE *)alloc + bytes;
+    void* top = ws->allocStart;
+
+    DEBUGLOG(5, "cwksp: reserving %p table %zd bytes, %zd bytes remaining",
+        alloc, bytes, ZSTD_cwksp_available_space(ws) - bytes);
+    assert((bytes & (sizeof(U32)-1)) == 0);
+    ZSTD_cwksp_internal_advance_phase(ws, phase);
+    ZSTD_cwksp_assert_internal_consistency(ws);
+    assert(end <= top);
+    if (end > top) {
+        DEBUGLOG(4, "cwksp: table alloc failed!");
+        ws->allocFailed = 1;
+        return NULL;
+    }
+    ws->tableEnd = end;
+
+#if defined (ADDRESS_SANITIZER) && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
+    __asan_unpoison_memory_region(alloc, bytes);
+#endif
+
+    return alloc;
+}
+
+/**
+ * Aligned on sizeof(void*).
+ */
+MEM_STATIC void* ZSTD_cwksp_reserve_object(ZSTD_cwksp* ws, size_t bytes) {
+    size_t roundedBytes = ZSTD_cwksp_align(bytes, sizeof(void*));
+    void* alloc = ws->objectEnd;
+    void* end = (BYTE*)alloc + roundedBytes;
+
+#if defined (ADDRESS_SANITIZER) && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
+    /* over-reserve space */
+    end = (BYTE *)end + 2 * ZSTD_CWKSP_ASAN_REDZONE_SIZE;
+#endif
+
+    DEBUGLOG(5,
+        "cwksp: reserving %p object %zd bytes (rounded to %zd), %zd bytes remaining",
+        alloc, bytes, roundedBytes, ZSTD_cwksp_available_space(ws) - roundedBytes);
+    assert(((size_t)alloc & (sizeof(void*)-1)) == 0);
+    assert((bytes & (sizeof(void*)-1)) == 0);
+    ZSTD_cwksp_assert_internal_consistency(ws);
+    /* we must be in the first phase, no advance is possible */
+    if (ws->phase != ZSTD_cwksp_alloc_objects || end > ws->workspaceEnd) {
+        DEBUGLOG(4, "cwksp: object alloc failed!");
+        ws->allocFailed = 1;
+        return NULL;
+    }
+    ws->objectEnd = end;
+    ws->tableEnd = end;
+    ws->tableValidEnd = end;
+
+#if defined (ADDRESS_SANITIZER) && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
+    /* Move alloc so there's ZSTD_CWKSP_ASAN_REDZONE_SIZE unused space on
+     * either side. */
+    alloc = (BYTE *)alloc + ZSTD_CWKSP_ASAN_REDZONE_SIZE;
+    __asan_unpoison_memory_region(alloc, bytes);
+#endif
+
+    return alloc;
+}
+
+MEM_STATIC void ZSTD_cwksp_mark_tables_dirty(ZSTD_cwksp* ws) {
+    DEBUGLOG(4, "cwksp: ZSTD_cwksp_mark_tables_dirty");
+
+#if defined (MEMORY_SANITIZER) && !defined (ZSTD_MSAN_DONT_POISON_WORKSPACE)
+    /* To validate that the table re-use logic is sound, and that we don't
+     * access table space that we haven't cleaned, we re-"poison" the table
+     * space every time we mark it dirty. */
+    {
+        size_t size = (BYTE*)ws->tableValidEnd - (BYTE*)ws->objectEnd;
+        assert(__msan_test_shadow(ws->objectEnd, size) == -1);
+        __msan_poison(ws->objectEnd, size);
+    }
+#endif
+
+    assert(ws->tableValidEnd >= ws->objectEnd);
+    assert(ws->tableValidEnd <= ws->allocStart);
+    ws->tableValidEnd = ws->objectEnd;
+    ZSTD_cwksp_assert_internal_consistency(ws);
+}
+
+MEM_STATIC void ZSTD_cwksp_mark_tables_clean(ZSTD_cwksp* ws) {
+    DEBUGLOG(4, "cwksp: ZSTD_cwksp_mark_tables_clean");
+    assert(ws->tableValidEnd >= ws->objectEnd);
+    assert(ws->tableValidEnd <= ws->allocStart);
+    if (ws->tableValidEnd < ws->tableEnd) {
+        ws->tableValidEnd = ws->tableEnd;
+    }
+    ZSTD_cwksp_assert_internal_consistency(ws);
+}
+
+/**
+ * Zero the part of the allocated tables not already marked clean.
+ */
+MEM_STATIC void ZSTD_cwksp_clean_tables(ZSTD_cwksp* ws) {
+    DEBUGLOG(4, "cwksp: ZSTD_cwksp_clean_tables");
+    assert(ws->tableValidEnd >= ws->objectEnd);
+    assert(ws->tableValidEnd <= ws->allocStart);
+    if (ws->tableValidEnd < ws->tableEnd) {
+        memset(ws->tableValidEnd, 0, (BYTE*)ws->tableEnd - (BYTE*)ws->tableValidEnd);
+    }
+    ZSTD_cwksp_mark_tables_clean(ws);
+}
+
+/**
+ * Invalidates table allocations.
+ * All other allocations remain valid.
+ */
+MEM_STATIC void ZSTD_cwksp_clear_tables(ZSTD_cwksp* ws) {
+    DEBUGLOG(4, "cwksp: clearing tables!");
+
+#if defined (ADDRESS_SANITIZER) && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
+    {
+        size_t size = (BYTE*)ws->tableValidEnd - (BYTE*)ws->objectEnd;
+        __asan_poison_memory_region(ws->objectEnd, size);
+    }
+#endif
+
+    ws->tableEnd = ws->objectEnd;
+    ZSTD_cwksp_assert_internal_consistency(ws);
+}
+
+/**
+ * Invalidates all buffer, aligned, and table allocations.
+ * Object allocations remain valid.
+ */
+MEM_STATIC void ZSTD_cwksp_clear(ZSTD_cwksp* ws) {
+    DEBUGLOG(4, "cwksp: clearing!");
+
+#if defined (MEMORY_SANITIZER) && !defined (ZSTD_MSAN_DONT_POISON_WORKSPACE)
+    /* To validate that the context re-use logic is sound, and that we don't
+     * access stuff that this compression hasn't initialized, we re-"poison"
+     * the workspace (or at least the non-static, non-table parts of it)
+     * every time we start a new compression. */
+    {
+        size_t size = (BYTE*)ws->workspaceEnd - (BYTE*)ws->tableValidEnd;
+        __msan_poison(ws->tableValidEnd, size);
+    }
+#endif
+
+#if defined (ADDRESS_SANITIZER) && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
+    {
+        size_t size = (BYTE*)ws->workspaceEnd - (BYTE*)ws->objectEnd;
+        __asan_poison_memory_region(ws->objectEnd, size);
+    }
+#endif
+
+    ws->tableEnd = ws->objectEnd;
+    ws->allocStart = ws->workspaceEnd;
+    ws->allocFailed = 0;
+    if (ws->phase > ZSTD_cwksp_alloc_buffers) {
+        ws->phase = ZSTD_cwksp_alloc_buffers;
+    }
+    ZSTD_cwksp_assert_internal_consistency(ws);
+}
+
+/**
+ * The provided workspace takes ownership of the buffer [start, start+size).
+ * Any existing values in the workspace are ignored (the previously managed
+ * buffer, if present, must be separately freed).
+ */
+MEM_STATIC void ZSTD_cwksp_init(ZSTD_cwksp* ws, void* start, size_t size) {
+    DEBUGLOG(4, "cwksp: init'ing workspace with %zd bytes", size);
+    assert(((size_t)start & (sizeof(void*)-1)) == 0); /* ensure correct alignment */
+    ws->workspace = start;
+    ws->workspaceEnd = (BYTE*)start + size;
+    ws->objectEnd = ws->workspace;
+    ws->tableValidEnd = ws->objectEnd;
+    ws->phase = ZSTD_cwksp_alloc_objects;
+    ZSTD_cwksp_clear(ws);
+    ws->workspaceOversizedDuration = 0;
+    ZSTD_cwksp_assert_internal_consistency(ws);
+}
+
+MEM_STATIC size_t ZSTD_cwksp_create(ZSTD_cwksp* ws, size_t size, ZSTD_customMem customMem) {
+    void* workspace = ZSTD_malloc(size, customMem);
+    DEBUGLOG(4, "cwksp: creating new workspace with %zd bytes", size);
+    RETURN_ERROR_IF(workspace == NULL, memory_allocation);
+    ZSTD_cwksp_init(ws, workspace, size);
+    return 0;
+}
+
+MEM_STATIC void ZSTD_cwksp_free(ZSTD_cwksp* ws, ZSTD_customMem customMem) {
+    void *ptr = ws->workspace;
+    DEBUGLOG(4, "cwksp: freeing workspace");
+    memset(ws, 0, sizeof(ZSTD_cwksp));
+    ZSTD_free(ptr, customMem);
+}
+
+/**
+ * Moves the management of a workspace from one cwksp to another. The src cwksp
+ * is left in an invalid state (src must be re-init()'ed before it's used again).
+ */
+MEM_STATIC void ZSTD_cwksp_move(ZSTD_cwksp* dst, ZSTD_cwksp* src) {
+    *dst = *src;
+    memset(src, 0, sizeof(ZSTD_cwksp));
+}
+
+MEM_STATIC size_t ZSTD_cwksp_sizeof(const ZSTD_cwksp* ws) {
+    return (size_t)((BYTE*)ws->workspaceEnd - (BYTE*)ws->workspace);
+}
+
+MEM_STATIC int ZSTD_cwksp_reserve_failed(const ZSTD_cwksp* ws) {
+    return ws->allocFailed;
+}
+
+/*-*************************************
+*  Functions Checking Free Space
+***************************************/
+
+MEM_STATIC size_t ZSTD_cwksp_available_space(ZSTD_cwksp* ws) {
+    return (size_t)((BYTE*)ws->allocStart - (BYTE*)ws->tableEnd);
+}
+
+MEM_STATIC int ZSTD_cwksp_check_available(ZSTD_cwksp* ws, size_t additionalNeededSpace) {
+    return ZSTD_cwksp_available_space(ws) >= additionalNeededSpace;
+}
+
+MEM_STATIC int ZSTD_cwksp_check_too_large(ZSTD_cwksp* ws, size_t additionalNeededSpace) {
+    return ZSTD_cwksp_check_available(
+        ws, additionalNeededSpace * ZSTD_WORKSPACETOOLARGE_FACTOR);
+}
+
+MEM_STATIC int ZSTD_cwksp_check_wasteful(ZSTD_cwksp* ws, size_t additionalNeededSpace) {
+    return ZSTD_cwksp_check_too_large(ws, additionalNeededSpace)
+        && ws->workspaceOversizedDuration > ZSTD_WORKSPACETOOLARGE_MAXDURATION;
+}
+
+MEM_STATIC void ZSTD_cwksp_bump_oversized_duration(
+        ZSTD_cwksp* ws, size_t additionalNeededSpace) {
+    if (ZSTD_cwksp_check_too_large(ws, additionalNeededSpace)) {
+        ws->workspaceOversizedDuration++;
+    } else {
+        ws->workspaceOversizedDuration = 0;
+    }
+}
+
+#if defined (__cplusplus)
+}
+#endif
+
+#endif /* ZSTD_CWKSP_H */
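
The allocator above is used in a fixed phase order (objects first, then buffers, then aligned allocations and tables growing toward each other from the two ends), and ZSTD_cwksp_check_wasteful() feeds the existing oversized-workspace heuristic via ZSTD_WORKSPACETOOLARGE_FACTOR and ZSTD_WORKSPACETOOLARGE_MAXDURATION. A hedged, self-contained usage sketch built only from the functions defined in this header (the sizes are illustrative, not what the library computes):

    /* Sketch only; zstd_cwksp.h is an internal header, assumed includable here. */
    static int cwksp_demo(size_t tableBytes)
    {
        ZSTD_cwksp ws;
        size_t const total = ZSTD_cwksp_alloc_size(HUF_WORKSPACE_SIZE)
                           + ZSTD_cwksp_alloc_size(1 << 17)                 /* an example scratch buffer */
                           + ZSTD_cwksp_align(tableBytes, sizeof(U32));     /* tables are not redzoned   */
        if (ZSTD_isError(ZSTD_cwksp_create(&ws, total, ZSTD_defaultCMem))) return -1;

        {   /* Phase order is enforced: objects, then buffers, then aligned/table space. */
            U32*  const entropy   = (U32*)ZSTD_cwksp_reserve_object(&ws, HUF_WORKSPACE_SIZE);
            BYTE* const scratch   = ZSTD_cwksp_reserve_buffer(&ws, 1 << 17);
            U32*  const hashTable = (U32*)ZSTD_cwksp_reserve_table(&ws, ZSTD_cwksp_align(tableBytes, sizeof(U32)));
            (void)entropy; (void)scratch; (void)hashTable;
        }
        if (ZSTD_cwksp_reserve_failed(&ws)) { ZSTD_cwksp_free(&ws, ZSTD_defaultCMem); return -1; }

        ZSTD_cwksp_clear(&ws);                   /* drop buffers/aligned/tables; objects stay valid */
        ZSTD_cwksp_free(&ws, ZSTD_defaultCMem);  /* releases the single underlying allocation       */
        return 0;
    }
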
--- a/contrib/python-zstandard/zstd/compress/zstd_double_fast.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/compress/zstd_double_fast.c	Sat Dec 28 09:55:45 2019 -0800
@@ -148,7 +148,7 @@
             const BYTE* repMatchEnd = repIndex < prefixLowestIndex ? dictEnd : iend;
             mLength = ZSTD_count_2segments(ip+1+4, repMatch+4, iend, repMatchEnd, prefixLowest) + 4;
             ip++;
-            ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, 0, mLength-MINMATCH);
+            ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, 0, mLength-MINMATCH);
             goto _match_stored;
         }
 
@@ -157,7 +157,7 @@
           && ((offset_1 > 0) & (MEM_read32(ip+1-offset_1) == MEM_read32(ip+1)))) {
             mLength = ZSTD_count(ip+1+4, ip+1+4-offset_1, iend) + 4;
             ip++;
-            ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, 0, mLength-MINMATCH);
+            ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, 0, mLength-MINMATCH);
             goto _match_stored;
         }
 
@@ -247,7 +247,7 @@
         offset_2 = offset_1;
         offset_1 = offset;
 
-        ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, offset + ZSTD_REP_MOVE, mLength-MINMATCH);
+        ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, offset + ZSTD_REP_MOVE, mLength-MINMATCH);
 
 _match_stored:
         /* match found */
@@ -278,7 +278,7 @@
                         const BYTE* const repEnd2 = repIndex2 < prefixLowestIndex ? dictEnd : iend;
                         size_t const repLength2 = ZSTD_count_2segments(ip+4, repMatch2+4, iend, repEnd2, prefixLowest) + 4;
                         U32 tmpOffset = offset_2; offset_2 = offset_1; offset_1 = tmpOffset;   /* swap offset_2 <=> offset_1 */
-                        ZSTD_storeSeq(seqStore, 0, anchor, 0, repLength2-MINMATCH);
+                        ZSTD_storeSeq(seqStore, 0, anchor, iend, 0, repLength2-MINMATCH);
                         hashSmall[ZSTD_hashPtr(ip, hBitsS, mls)] = current2;
                         hashLong[ZSTD_hashPtr(ip, hBitsL, 8)] = current2;
                         ip += repLength2;
@@ -297,7 +297,7 @@
                     U32 const tmpOff = offset_2; offset_2 = offset_1; offset_1 = tmpOff;  /* swap offset_2 <=> offset_1 */
                     hashSmall[ZSTD_hashPtr(ip, hBitsS, mls)] = (U32)(ip-base);
                     hashLong[ZSTD_hashPtr(ip, hBitsL, 8)] = (U32)(ip-base);
-                    ZSTD_storeSeq(seqStore, 0, anchor, 0, rLength-MINMATCH);
+                    ZSTD_storeSeq(seqStore, 0, anchor, iend, 0, rLength-MINMATCH);
                     ip += rLength;
                     anchor = ip;
                     continue;   /* faster when present ... (?) */
@@ -411,7 +411,7 @@
             const BYTE* repMatchEnd = repIndex < prefixStartIndex ? dictEnd : iend;
             mLength = ZSTD_count_2segments(ip+1+4, repMatch+4, iend, repMatchEnd, prefixStart) + 4;
             ip++;
-            ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, 0, mLength-MINMATCH);
+            ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, 0, mLength-MINMATCH);
         } else {
             if ((matchLongIndex > dictStartIndex) && (MEM_read64(matchLong) == MEM_read64(ip))) {
                 const BYTE* const matchEnd = matchLongIndex < prefixStartIndex ? dictEnd : iend;
@@ -422,7 +422,7 @@
                 while (((ip>anchor) & (matchLong>lowMatchPtr)) && (ip[-1] == matchLong[-1])) { ip--; matchLong--; mLength++; }   /* catch up */
                 offset_2 = offset_1;
                 offset_1 = offset;
-                ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, offset + ZSTD_REP_MOVE, mLength-MINMATCH);
+                ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, offset + ZSTD_REP_MOVE, mLength-MINMATCH);
 
             } else if ((matchIndex > dictStartIndex) && (MEM_read32(match) == MEM_read32(ip))) {
                 size_t const h3 = ZSTD_hashPtr(ip+1, hBitsL, 8);
@@ -447,7 +447,7 @@
                 }
                 offset_2 = offset_1;
                 offset_1 = offset;
-                ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, offset + ZSTD_REP_MOVE, mLength-MINMATCH);
+                ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, offset + ZSTD_REP_MOVE, mLength-MINMATCH);
 
             } else {
                 ip += ((ip-anchor) >> kSearchStrength) + 1;
@@ -479,7 +479,7 @@
                     const BYTE* const repEnd2 = repIndex2 < prefixStartIndex ? dictEnd : iend;
                     size_t const repLength2 = ZSTD_count_2segments(ip+4, repMatch2+4, iend, repEnd2, prefixStart) + 4;
                     U32 const tmpOffset = offset_2; offset_2 = offset_1; offset_1 = tmpOffset;   /* swap offset_2 <=> offset_1 */
-                    ZSTD_storeSeq(seqStore, 0, anchor, 0, repLength2-MINMATCH);
+                    ZSTD_storeSeq(seqStore, 0, anchor, iend, 0, repLength2-MINMATCH);
                     hashSmall[ZSTD_hashPtr(ip, hBitsS, mls)] = current2;
                     hashLong[ZSTD_hashPtr(ip, hBitsL, 8)] = current2;
                     ip += repLength2;
--- a/contrib/python-zstandard/zstd/compress/zstd_fast.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/compress/zstd_fast.c	Sat Dec 28 09:55:45 2019 -0800
@@ -8,7 +8,7 @@
  * You may select, at your option, one of the above-listed licenses.
  */
 
-#include "zstd_compress_internal.h"
+#include "zstd_compress_internal.h"  /* ZSTD_hashPtr, ZSTD_count, ZSTD_storeSeq */
 #include "zstd_fast.h"
 
 
@@ -43,8 +43,8 @@
 }
 
 
-FORCE_INLINE_TEMPLATE
-size_t ZSTD_compressBlock_fast_generic(
+FORCE_INLINE_TEMPLATE size_t
+ZSTD_compressBlock_fast_generic(
         ZSTD_matchState_t* ms, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
         void const* src, size_t srcSize,
         U32 const mls)
@@ -74,8 +74,7 @@
     DEBUGLOG(5, "ZSTD_compressBlock_fast_generic");
     ip0 += (ip0 == prefixStart);
     ip1 = ip0 + 1;
-    {
-        U32 const maxRep = (U32)(ip0 - prefixStart);
+    {   U32 const maxRep = (U32)(ip0 - prefixStart);
         if (offset_2 > maxRep) offsetSaved = offset_2, offset_2 = 0;
         if (offset_1 > maxRep) offsetSaved = offset_1, offset_1 = 0;
     }
@@ -118,8 +117,7 @@
             match0 = match1;
             goto _offset;
         }
-        {
-            size_t const step = ((ip0-anchor) >> (kSearchStrength - 1)) + stepSize;
+        {   size_t const step = ((size_t)(ip0-anchor) >> (kSearchStrength - 1)) + stepSize;
             assert(step >= 2);
             ip0 += step;
             ip1 += step;
@@ -138,7 +136,7 @@
 _match: /* Requires: ip0, match0, offcode */
         /* Count the forward length */
         mLength += ZSTD_count(ip0+mLength+4, match0+mLength+4, iend) + 4;
-        ZSTD_storeSeq(seqStore, ip0-anchor, anchor, offcode, mLength-MINMATCH);
+        ZSTD_storeSeq(seqStore, (size_t)(ip0-anchor), anchor, iend, offcode, mLength-MINMATCH);
         /* match found */
         ip0 += mLength;
         anchor = ip0;
@@ -150,16 +148,15 @@
             hashTable[ZSTD_hashPtr(base+current0+2, hlog, mls)] = current0+2;  /* here because current+2 could be > iend-8 */
             hashTable[ZSTD_hashPtr(ip0-2, hlog, mls)] = (U32)(ip0-2-base);
 
-            while ( (ip0 <= ilimit)
-                 && ( (offset_2>0)
-                    & (MEM_read32(ip0) == MEM_read32(ip0 - offset_2)) )) {
+            while ( ((ip0 <= ilimit) & (offset_2>0))  /* offset_2==0 means offset_2 is invalidated */
+                 && (MEM_read32(ip0) == MEM_read32(ip0 - offset_2)) ) {
                 /* store sequence */
                 size_t const rLength = ZSTD_count(ip0+4, ip0+4-offset_2, iend) + 4;
-                U32 const tmpOff = offset_2; offset_2 = offset_1; offset_1 = tmpOff;  /* swap offset_2 <=> offset_1 */
+                { U32 const tmpOff = offset_2; offset_2 = offset_1; offset_1 = tmpOff; } /* swap offset_2 <=> offset_1 */
                 hashTable[ZSTD_hashPtr(ip0, hlog, mls)] = (U32)(ip0-base);
                 ip0 += rLength;
                 ip1 = ip0 + 1;
-                ZSTD_storeSeq(seqStore, 0, anchor, 0, rLength-MINMATCH);
+                ZSTD_storeSeq(seqStore, 0 /*litLen*/, anchor, iend, 0 /*offCode*/, rLength-MINMATCH);
                 anchor = ip0;
                 continue;   /* faster when present (confirmed on gcc-8) ... (?) */
             }
@@ -179,8 +176,7 @@
         ZSTD_matchState_t* ms, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
         void const* src, size_t srcSize)
 {
-    ZSTD_compressionParameters const* cParams = &ms->cParams;
-    U32 const mls = cParams->minMatch;
+    U32 const mls = ms->cParams.minMatch;
     assert(ms->dictMatchState == NULL);
     switch(mls)
     {
@@ -265,7 +261,7 @@
             const BYTE* const repMatchEnd = repIndex < prefixStartIndex ? dictEnd : iend;
             mLength = ZSTD_count_2segments(ip+1+4, repMatch+4, iend, repMatchEnd, prefixStart) + 4;
             ip++;
-            ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, 0, mLength-MINMATCH);
+            ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, 0, mLength-MINMATCH);
         } else if ( (matchIndex <= prefixStartIndex) ) {
             size_t const dictHash = ZSTD_hashPtr(ip, dictHLog, mls);
             U32 const dictMatchIndex = dictHashTable[dictHash];
@@ -285,7 +281,7 @@
                 } /* catch up */
                 offset_2 = offset_1;
                 offset_1 = offset;
-                ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, offset + ZSTD_REP_MOVE, mLength-MINMATCH);
+                ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, offset + ZSTD_REP_MOVE, mLength-MINMATCH);
             }
         } else if (MEM_read32(match) != MEM_read32(ip)) {
             /* it's not a match, and we're not going to check the dictionary */
@@ -300,7 +296,7 @@
                  && (ip[-1] == match[-1])) { ip--; match--; mLength++; } /* catch up */
             offset_2 = offset_1;
             offset_1 = offset;
-            ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, offset + ZSTD_REP_MOVE, mLength-MINMATCH);
+            ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, offset + ZSTD_REP_MOVE, mLength-MINMATCH);
         }
 
         /* match found */
@@ -325,7 +321,7 @@
                     const BYTE* const repEnd2 = repIndex2 < prefixStartIndex ? dictEnd : iend;
                     size_t const repLength2 = ZSTD_count_2segments(ip+4, repMatch2+4, iend, repEnd2, prefixStart) + 4;
                     U32 tmpOffset = offset_2; offset_2 = offset_1; offset_1 = tmpOffset;   /* swap offset_2 <=> offset_1 */
-                    ZSTD_storeSeq(seqStore, 0, anchor, 0, repLength2-MINMATCH);
+                    ZSTD_storeSeq(seqStore, 0, anchor, iend, 0, repLength2-MINMATCH);
                     hashTable[ZSTD_hashPtr(ip, hlog, mls)] = current2;
                     ip += repLength2;
                     anchor = ip;
@@ -348,8 +344,7 @@
         ZSTD_matchState_t* ms, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
         void const* src, size_t srcSize)
 {
-    ZSTD_compressionParameters const* cParams = &ms->cParams;
-    U32 const mls = cParams->minMatch;
+    U32 const mls = ms->cParams.minMatch;
     assert(ms->dictMatchState != NULL);
     switch(mls)
     {
@@ -408,16 +403,17 @@
         const U32    repIndex = current + 1 - offset_1;
         const BYTE* const repBase = repIndex < prefixStartIndex ? dictBase : base;
         const BYTE* const repMatch = repBase + repIndex;
-        size_t mLength;
         hashTable[h] = current;   /* update hash table */
         assert(offset_1 <= current +1);   /* check repIndex */
 
         if ( (((U32)((prefixStartIndex-1) - repIndex) >= 3) /* intentional underflow */ & (repIndex > dictStartIndex))
            && (MEM_read32(repMatch) == MEM_read32(ip+1)) ) {
             const BYTE* const repMatchEnd = repIndex < prefixStartIndex ? dictEnd : iend;
-            mLength = ZSTD_count_2segments(ip+1 +4, repMatch +4, iend, repMatchEnd, prefixStart) + 4;
+            size_t const rLength = ZSTD_count_2segments(ip+1 +4, repMatch +4, iend, repMatchEnd, prefixStart) + 4;
             ip++;
-            ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, 0, mLength-MINMATCH);
+            ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, 0, rLength-MINMATCH);
+            ip += rLength;
+            anchor = ip;
         } else {
             if ( (matchIndex < dictStartIndex) ||
                  (MEM_read32(match) != MEM_read32(ip)) ) {
@@ -427,19 +423,15 @@
             }
             {   const BYTE* const matchEnd = matchIndex < prefixStartIndex ? dictEnd : iend;
                 const BYTE* const lowMatchPtr = matchIndex < prefixStartIndex ? dictStart : prefixStart;
-                U32 offset;
-                mLength = ZSTD_count_2segments(ip+4, match+4, iend, matchEnd, prefixStart) + 4;
+                U32 const offset = current - matchIndex;
+                size_t mLength = ZSTD_count_2segments(ip+4, match+4, iend, matchEnd, prefixStart) + 4;
                 while (((ip>anchor) & (match>lowMatchPtr)) && (ip[-1] == match[-1])) { ip--; match--; mLength++; }   /* catch up */
-                offset = current - matchIndex;
-                offset_2 = offset_1;
-                offset_1 = offset;
-                ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, offset + ZSTD_REP_MOVE, mLength-MINMATCH);
+                offset_2 = offset_1; offset_1 = offset;  /* update offset history */
+                ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, offset + ZSTD_REP_MOVE, mLength-MINMATCH);
+                ip += mLength;
+                anchor = ip;
         }   }
 
-        /* found a match : store it */
-        ip += mLength;
-        anchor = ip;
-
         if (ip <= ilimit) {
             /* Fill Table */
             hashTable[ZSTD_hashPtr(base+current+2, hlog, mls)] = current+2;
@@ -448,13 +440,13 @@
             while (ip <= ilimit) {
                 U32 const current2 = (U32)(ip-base);
                 U32 const repIndex2 = current2 - offset_2;
-                const BYTE* repMatch2 = repIndex2 < prefixStartIndex ? dictBase + repIndex2 : base + repIndex2;
+                const BYTE* const repMatch2 = repIndex2 < prefixStartIndex ? dictBase + repIndex2 : base + repIndex2;
                 if ( (((U32)((prefixStartIndex-1) - repIndex2) >= 3) & (repIndex2 > dictStartIndex))  /* intentional overflow */
                    && (MEM_read32(repMatch2) == MEM_read32(ip)) ) {
                     const BYTE* const repEnd2 = repIndex2 < prefixStartIndex ? dictEnd : iend;
                     size_t const repLength2 = ZSTD_count_2segments(ip+4, repMatch2+4, iend, repEnd2, prefixStart) + 4;
-                    U32 const tmpOffset = offset_2; offset_2 = offset_1; offset_1 = tmpOffset;   /* swap offset_2 <=> offset_1 */
-                    ZSTD_storeSeq(seqStore, 0, anchor, 0, repLength2-MINMATCH);
+                    { U32 const tmpOffset = offset_2; offset_2 = offset_1; offset_1 = tmpOffset; }  /* swap offset_2 <=> offset_1 */
+                    ZSTD_storeSeq(seqStore, 0 /*litlen*/, anchor, iend, 0 /*offcode*/, repLength2-MINMATCH);
                     hashTable[ZSTD_hashPtr(ip, hlog, mls)] = current2;
                     ip += repLength2;
                     anchor = ip;
@@ -476,8 +468,7 @@
         ZSTD_matchState_t* ms, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
         void const* src, size_t srcSize)
 {
-    ZSTD_compressionParameters const* cParams = &ms->cParams;
-    U32 const mls = cParams->minMatch;
+    U32 const mls = ms->cParams.minMatch;
     switch(mls)
     {
     default: /* includes case 3 */
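[Editor's note] The recurring change in zstd_fast.c is that ZSTD_storeSeq() now also receives the end of the literal buffer (iend). A minimal standalone sketch of why that extra bound helps, using hypothetical names (storeLiterals_sketch, OVERLENGTH_SKETCH) rather than the actual internal implementation:

    #include <stddef.h>
    #include <string.h>

    /* Sketch only: with the literal-buffer end available, the common case can
     * over-copy in fixed-size chunks (a "wildcopy"); only literals that sit
     * close to the end fall back to an exact-length copy. Assumes dst has the
     * same slack as the source margin. */
    #define OVERLENGTH_SKETCH 32  /* assumed safety margin, not the real constant */

    static void storeLiterals_sketch(unsigned char* dst,
                                     const unsigned char* literals,
                                     size_t litLength,
                                     const unsigned char* litEnd)
    {
        if (literals + litLength + OVERLENGTH_SKETCH <= litEnd) {
            size_t copied = 0;
            while (copied < litLength) {          /* chunked over-copy is safe here */
                memcpy(dst + copied, literals + copied, 16);
                copied += 16;
            }
        } else {
            memcpy(dst, literals, litLength);     /* near the end: exact copy */
        }
    }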
--- a/contrib/python-zstandard/zstd/compress/zstd_lazy.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/compress/zstd_lazy.c	Sat Dec 28 09:55:45 2019 -0800
@@ -810,7 +810,7 @@
         /* store sequence */
 _storeSequence:
         {   size_t const litLength = start - anchor;
-            ZSTD_storeSeq(seqStore, litLength, anchor, (U32)offset, matchLength-MINMATCH);
+            ZSTD_storeSeq(seqStore, litLength, anchor, iend, (U32)offset, matchLength-MINMATCH);
             anchor = ip = start + matchLength;
         }
 
@@ -828,7 +828,7 @@
                     const BYTE* const repEnd2 = repIndex < prefixLowestIndex ? dictEnd : iend;
                     matchLength = ZSTD_count_2segments(ip+4, repMatch+4, iend, repEnd2, prefixLowest) + 4;
                     offset = offset_2; offset_2 = offset_1; offset_1 = (U32)offset;   /* swap offset_2 <=> offset_1 */
-                    ZSTD_storeSeq(seqStore, 0, anchor, 0, matchLength-MINMATCH);
+                    ZSTD_storeSeq(seqStore, 0, anchor, iend, 0, matchLength-MINMATCH);
                     ip += matchLength;
                     anchor = ip;
                     continue;
@@ -843,7 +843,7 @@
                 /* store sequence */
                 matchLength = ZSTD_count(ip+4, ip+4-offset_2, iend) + 4;
                 offset = offset_2; offset_2 = offset_1; offset_1 = (U32)offset; /* swap repcodes */
-                ZSTD_storeSeq(seqStore, 0, anchor, 0, matchLength-MINMATCH);
+                ZSTD_storeSeq(seqStore, 0, anchor, iend, 0, matchLength-MINMATCH);
                 ip += matchLength;
                 anchor = ip;
                 continue;   /* faster when present ... (?) */
@@ -1051,7 +1051,7 @@
         /* store sequence */
 _storeSequence:
         {   size_t const litLength = start - anchor;
-            ZSTD_storeSeq(seqStore, litLength, anchor, (U32)offset, matchLength-MINMATCH);
+            ZSTD_storeSeq(seqStore, litLength, anchor, iend, (U32)offset, matchLength-MINMATCH);
             anchor = ip = start + matchLength;
         }
 
@@ -1066,7 +1066,7 @@
                 const BYTE* const repEnd = repIndex < dictLimit ? dictEnd : iend;
                 matchLength = ZSTD_count_2segments(ip+4, repMatch+4, iend, repEnd, prefixStart) + 4;
                 offset = offset_2; offset_2 = offset_1; offset_1 = (U32)offset;   /* swap offset history */
-                ZSTD_storeSeq(seqStore, 0, anchor, 0, matchLength-MINMATCH);
+                ZSTD_storeSeq(seqStore, 0, anchor, iend, 0, matchLength-MINMATCH);
                 ip += matchLength;
                 anchor = ip;
                 continue;   /* faster when present ... (?) */
--- a/contrib/python-zstandard/zstd/compress/zstd_ldm.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/compress/zstd_ldm.c	Sat Dec 28 09:55:45 2019 -0800
@@ -49,9 +49,9 @@
 {
     size_t const ldmHSize = ((size_t)1) << params.hashLog;
     size_t const ldmBucketSizeLog = MIN(params.bucketSizeLog, params.hashLog);
-    size_t const ldmBucketSize =
-        ((size_t)1) << (params.hashLog - ldmBucketSizeLog);
-    size_t const totalSize = ldmBucketSize + ldmHSize * sizeof(ldmEntry_t);
+    size_t const ldmBucketSize = ((size_t)1) << (params.hashLog - ldmBucketSizeLog);
+    size_t const totalSize = ZSTD_cwksp_alloc_size(ldmBucketSize)
+                           + ZSTD_cwksp_alloc_size(ldmHSize * sizeof(ldmEntry_t));
     return params.enableLdm ? totalSize : 0;
 }
 
@@ -583,7 +583,7 @@
                 rep[i] = rep[i-1];
             rep[0] = sequence.offset;
             /* Store the sequence */
-            ZSTD_storeSeq(seqStore, newLitLength, ip - newLitLength,
+            ZSTD_storeSeq(seqStore, newLitLength, ip - newLitLength, iend,
                           sequence.offset + ZSTD_REP_MOVE,
                           sequence.matchLength - MINMATCH);
             ip += sequence.matchLength;
--- a/contrib/python-zstandard/zstd/compress/zstd_opt.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/compress/zstd_opt.c	Sat Dec 28 09:55:45 2019 -0800
@@ -1098,7 +1098,7 @@
 
                     assert(anchor + llen <= iend);
                     ZSTD_updateStats(optStatePtr, llen, anchor, offCode, mlen);
-                    ZSTD_storeSeq(seqStore, llen, anchor, offCode, mlen-MINMATCH);
+                    ZSTD_storeSeq(seqStore, llen, anchor, iend, offCode, mlen-MINMATCH);
                     anchor += advance;
                     ip = anchor;
             }   }
--- a/contrib/python-zstandard/zstd/compress/zstdmt_compress.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/compress/zstdmt_compress.c	Sat Dec 28 09:55:45 2019 -0800
@@ -668,7 +668,7 @@
 
     /* init */
     if (job->cdict) {
-        size_t const initError = ZSTD_compressBegin_advanced_internal(cctx, NULL, 0, ZSTD_dct_auto, ZSTD_dtlm_fast, job->cdict, jobParams, job->fullFrameSize);
+        size_t const initError = ZSTD_compressBegin_advanced_internal(cctx, NULL, 0, ZSTD_dct_auto, ZSTD_dtlm_fast, job->cdict, &jobParams, job->fullFrameSize);
         assert(job->firstJob);  /* only allowed for first job */
         if (ZSTD_isError(initError)) JOB_ERROR(initError);
     } else {  /* srcStart points at reloaded section */
@@ -680,7 +680,7 @@
                                         job->prefix.start, job->prefix.size, ZSTD_dct_rawContent, /* load dictionary in "content-only" mode (no header analysis) */
                                         ZSTD_dtlm_fast,
                                         NULL, /*cdict*/
-                                        jobParams, pledgedSrcSize);
+                                        &jobParams, pledgedSrcSize);
             if (ZSTD_isError(initError)) JOB_ERROR(initError);
     }   }
 
@@ -927,12 +927,18 @@
     unsigned jobID;
     DEBUGLOG(3, "ZSTDMT_releaseAllJobResources");
     for (jobID=0; jobID <= mtctx->jobIDMask; jobID++) {
+        /* Copy the mutex/cond out */
+        ZSTD_pthread_mutex_t const mutex = mtctx->jobs[jobID].job_mutex;
+        ZSTD_pthread_cond_t const cond = mtctx->jobs[jobID].job_cond;
+
         DEBUGLOG(4, "job%02u: release dst address %08X", jobID, (U32)(size_t)mtctx->jobs[jobID].dstBuff.start);
         ZSTDMT_releaseBuffer(mtctx->bufPool, mtctx->jobs[jobID].dstBuff);
-        mtctx->jobs[jobID].dstBuff = g_nullBuffer;
-        mtctx->jobs[jobID].cSize = 0;
+
+        /* Clear the job description, but keep the mutex/cond */
+        memset(&mtctx->jobs[jobID], 0, sizeof(mtctx->jobs[jobID]));
+        mtctx->jobs[jobID].job_mutex = mutex;
+        mtctx->jobs[jobID].job_cond = cond;
     }
-    memset(mtctx->jobs, 0, (mtctx->jobIDMask+1)*sizeof(ZSTDMT_jobDescription));
     mtctx->inBuff.buffer = g_nullBuffer;
     mtctx->inBuff.filled = 0;
     mtctx->allJobsCompleted = 1;
@@ -1028,9 +1034,9 @@
 
 /* Sets parameters relevant to the compression job,
  * initializing others to default values. */
-static ZSTD_CCtx_params ZSTDMT_initJobCCtxParams(ZSTD_CCtx_params const params)
+static ZSTD_CCtx_params ZSTDMT_initJobCCtxParams(const ZSTD_CCtx_params* params)
 {
-    ZSTD_CCtx_params jobParams = params;
+    ZSTD_CCtx_params jobParams = *params;
     /* Clear parameters related to multithreading */
     jobParams.forceWindow = 0;
     jobParams.nbWorkers = 0;
@@ -1151,16 +1157,16 @@
 /* =====   Multi-threaded compression   ===== */
 /* ------------------------------------------ */
 
-static unsigned ZSTDMT_computeTargetJobLog(ZSTD_CCtx_params const params)
+static unsigned ZSTDMT_computeTargetJobLog(const ZSTD_CCtx_params* params)
 {
     unsigned jobLog;
-    if (params.ldmParams.enableLdm) {
+    if (params->ldmParams.enableLdm) {
         /* In Long Range Mode, the windowLog is typically oversized.
          * In which case, it's preferable to determine the jobSize
          * based on chainLog instead. */
-        jobLog = MAX(21, params.cParams.chainLog + 4);
+        jobLog = MAX(21, params->cParams.chainLog + 4);
     } else {
-        jobLog = MAX(20, params.cParams.windowLog + 2);
+        jobLog = MAX(20, params->cParams.windowLog + 2);
     }
     return MIN(jobLog, (unsigned)ZSTDMT_JOBLOG_MAX);
 }
@@ -1193,27 +1199,27 @@
     return ovlog;
 }
 
-static size_t ZSTDMT_computeOverlapSize(ZSTD_CCtx_params const params)
+static size_t ZSTDMT_computeOverlapSize(const ZSTD_CCtx_params* params)
 {
-    int const overlapRLog = 9 - ZSTDMT_overlapLog(params.overlapLog, params.cParams.strategy);
-    int ovLog = (overlapRLog >= 8) ? 0 : (params.cParams.windowLog - overlapRLog);
+    int const overlapRLog = 9 - ZSTDMT_overlapLog(params->overlapLog, params->cParams.strategy);
+    int ovLog = (overlapRLog >= 8) ? 0 : (params->cParams.windowLog - overlapRLog);
     assert(0 <= overlapRLog && overlapRLog <= 8);
-    if (params.ldmParams.enableLdm) {
+    if (params->ldmParams.enableLdm) {
         /* In Long Range Mode, the windowLog is typically oversized.
          * In which case, it's preferable to determine the jobSize
          * based on chainLog instead.
          * Then, ovLog becomes a fraction of the jobSize, rather than windowSize */
-        ovLog = MIN(params.cParams.windowLog, ZSTDMT_computeTargetJobLog(params) - 2)
+        ovLog = MIN(params->cParams.windowLog, ZSTDMT_computeTargetJobLog(params) - 2)
                 - overlapRLog;
     }
     assert(0 <= ovLog && ovLog <= ZSTD_WINDOWLOG_MAX);
-    DEBUGLOG(4, "overlapLog : %i", params.overlapLog);
+    DEBUGLOG(4, "overlapLog : %i", params->overlapLog);
     DEBUGLOG(4, "overlap size : %i", 1 << ovLog);
     return (ovLog==0) ? 0 : (size_t)1 << ovLog;
 }
 
 static unsigned
-ZSTDMT_computeNbJobs(ZSTD_CCtx_params params, size_t srcSize, unsigned nbWorkers)
+ZSTDMT_computeNbJobs(const ZSTD_CCtx_params* params, size_t srcSize, unsigned nbWorkers)
 {
     assert(nbWorkers>0);
     {   size_t const jobSizeTarget = (size_t)1 << ZSTDMT_computeTargetJobLog(params);
@@ -1236,9 +1242,9 @@
           const ZSTD_CDict* cdict,
                 ZSTD_CCtx_params params)
 {
-    ZSTD_CCtx_params const jobParams = ZSTDMT_initJobCCtxParams(params);
-    size_t const overlapSize = ZSTDMT_computeOverlapSize(params);
-    unsigned const nbJobs = ZSTDMT_computeNbJobs(params, srcSize, params.nbWorkers);
+    ZSTD_CCtx_params const jobParams = ZSTDMT_initJobCCtxParams(&params);
+    size_t const overlapSize = ZSTDMT_computeOverlapSize(&params);
+    unsigned const nbJobs = ZSTDMT_computeNbJobs(&params, srcSize, params.nbWorkers);
     size_t const proposedJobSize = (srcSize + (nbJobs-1)) / nbJobs;
     size_t const avgJobSize = (((proposedJobSize-1) & 0x1FFFF) < 0x7FFF) ? proposedJobSize + 0xFFFF : proposedJobSize;   /* avoid too small last block */
     const char* const srcStart = (const char*)src;
@@ -1256,7 +1262,7 @@
         ZSTD_CCtx* const cctx = mtctx->cctxPool->cctx[0];
         DEBUGLOG(4, "ZSTDMT_compress_advanced_internal: fallback to single-thread mode");
         if (cdict) return ZSTD_compress_usingCDict_advanced(cctx, dst, dstCapacity, src, srcSize, cdict, jobParams.fParams);
-        return ZSTD_compress_advanced_internal(cctx, dst, dstCapacity, src, srcSize, NULL, 0, jobParams);
+        return ZSTD_compress_advanced_internal(cctx, dst, dstCapacity, src, srcSize, NULL, 0, &jobParams);
     }
 
     assert(avgJobSize >= 256 KB);  /* condition for ZSTD_compressBound(A) + ZSTD_compressBound(B) <= ZSTD_compressBound(A+B), required to compress directly into Dst (no additional buffer) */
@@ -1404,12 +1410,12 @@
 
     mtctx->singleBlockingThread = (pledgedSrcSize <= ZSTDMT_JOBSIZE_MIN);  /* do not trigger multi-threading when srcSize is too small */
     if (mtctx->singleBlockingThread) {
-        ZSTD_CCtx_params const singleThreadParams = ZSTDMT_initJobCCtxParams(params);
+        ZSTD_CCtx_params const singleThreadParams = ZSTDMT_initJobCCtxParams(&params);
         DEBUGLOG(5, "ZSTDMT_initCStream_internal: switch to single blocking thread mode");
         assert(singleThreadParams.nbWorkers == 0);
         return ZSTD_initCStream_internal(mtctx->cctxPool->cctx[0],
                                          dict, dictSize, cdict,
-                                         singleThreadParams, pledgedSrcSize);
+                                         &singleThreadParams, pledgedSrcSize);
     }
 
     DEBUGLOG(4, "ZSTDMT_initCStream_internal: %u workers", params.nbWorkers);
@@ -1435,11 +1441,11 @@
         mtctx->cdict = cdict;
     }
 
-    mtctx->targetPrefixSize = ZSTDMT_computeOverlapSize(params);
+    mtctx->targetPrefixSize = ZSTDMT_computeOverlapSize(&params);
     DEBUGLOG(4, "overlapLog=%i => %u KB", params.overlapLog, (U32)(mtctx->targetPrefixSize>>10));
     mtctx->targetSectionSize = params.jobSize;
     if (mtctx->targetSectionSize == 0) {
-        mtctx->targetSectionSize = 1ULL << ZSTDMT_computeTargetJobLog(params);
+        mtctx->targetSectionSize = 1ULL << ZSTDMT_computeTargetJobLog(&params);
     }
     assert(mtctx->targetSectionSize <= (size_t)ZSTDMT_JOBSIZE_MAX);
 
--- a/contrib/python-zstandard/zstd/decompress/huf_decompress.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/decompress/huf_decompress.c	Sat Dec 28 09:55:45 2019 -0800
@@ -61,7 +61,9 @@
 *  Error Management
 ****************************************************************/
 #define HUF_isError ERR_isError
+#ifndef CHECK_F
 #define CHECK_F(f) { size_t const err_ = (f); if (HUF_isError(err_)) return err_; }
+#endif
 
 
 /* **************************************************************
--- a/contrib/python-zstandard/zstd/decompress/zstd_decompress.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/decompress/zstd_decompress.c	Sat Dec 28 09:55:45 2019 -0800
@@ -88,10 +88,7 @@
 
 static size_t ZSTD_startingInputLength(ZSTD_format_e format)
 {
-    size_t const startingInputLength = (format==ZSTD_f_zstd1_magicless) ?
-                    ZSTD_FRAMEHEADERSIZE_PREFIX - ZSTD_FRAMEIDSIZE :
-                    ZSTD_FRAMEHEADERSIZE_PREFIX;
-    ZSTD_STATIC_ASSERT(ZSTD_FRAMEHEADERSIZE_PREFIX >= ZSTD_FRAMEIDSIZE);
+    size_t const startingInputLength = ZSTD_FRAMEHEADERSIZE_PREFIX(format);
     /* only supports formats ZSTD_f_zstd1 and ZSTD_f_zstd1_magicless */
     assert( (format == ZSTD_f_zstd1) || (format == ZSTD_f_zstd1_magicless) );
     return startingInputLength;
@@ -376,7 +373,7 @@
 {
     unsigned long long totalDstSize = 0;
 
-    while (srcSize >= ZSTD_FRAMEHEADERSIZE_PREFIX) {
+    while (srcSize >= ZSTD_startingInputLength(ZSTD_f_zstd1)) {
         U32 const magicNumber = MEM_readLE32(src);
 
         if ((magicNumber & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) {
@@ -629,11 +626,12 @@
 
     /* check */
     RETURN_ERROR_IF(
-        remainingSrcSize < ZSTD_FRAMEHEADERSIZE_MIN+ZSTD_blockHeaderSize,
+        remainingSrcSize < ZSTD_FRAMEHEADERSIZE_MIN(dctx->format)+ZSTD_blockHeaderSize,
         srcSize_wrong);
 
     /* Frame Header */
-    {   size_t const frameHeaderSize = ZSTD_frameHeaderSize(ip, ZSTD_FRAMEHEADERSIZE_PREFIX);
+    {   size_t const frameHeaderSize = ZSTD_frameHeaderSize_internal(
+                ip, ZSTD_FRAMEHEADERSIZE_PREFIX(dctx->format), dctx->format);
         if (ZSTD_isError(frameHeaderSize)) return frameHeaderSize;
         RETURN_ERROR_IF(remainingSrcSize < frameHeaderSize+ZSTD_blockHeaderSize,
                         srcSize_wrong);
@@ -714,7 +712,7 @@
         dictSize = ZSTD_DDict_dictSize(ddict);
     }
 
-    while (srcSize >= ZSTD_FRAMEHEADERSIZE_PREFIX) {
+    while (srcSize >= ZSTD_startingInputLength(dctx->format)) {
 
 #if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1)
         if (ZSTD_isLegacy(src, srcSize)) {
@@ -1098,7 +1096,7 @@
         size_t const dictContentSize = (size_t)(dictEnd - (dictPtr+12));
         for (i=0; i<3; i++) {
             U32 const rep = MEM_readLE32(dictPtr); dictPtr += 4;
-            RETURN_ERROR_IF(rep==0 || rep >= dictContentSize,
+            RETURN_ERROR_IF(rep==0 || rep > dictContentSize,
                             dictionary_corrupted);
             entropy->rep[i] = rep;
     }   }
@@ -1267,7 +1265,7 @@
 {
     RETURN_ERROR_IF(dctx->streamStage != zdss_init, stage_wrong);
     ZSTD_clearDict(dctx);
-    if (dict && dictSize >= 8) {
+    if (dict && dictSize != 0) {
         dctx->ddictLocal = ZSTD_createDDict_advanced(dict, dictSize, dictLoadMethod, dictContentType, dctx->customMem);
         RETURN_ERROR_IF(dctx->ddictLocal == NULL, memory_allocation);
         dctx->ddict = dctx->ddictLocal;
@@ -1300,14 +1298,14 @@
 
 
 /* ZSTD_initDStream_usingDict() :
- * return : expected size, aka ZSTD_FRAMEHEADERSIZE_PREFIX.
+ * return : expected size, aka ZSTD_startingInputLength().
  * this function cannot fail */
 size_t ZSTD_initDStream_usingDict(ZSTD_DStream* zds, const void* dict, size_t dictSize)
 {
     DEBUGLOG(4, "ZSTD_initDStream_usingDict");
     FORWARD_IF_ERROR( ZSTD_DCtx_reset(zds, ZSTD_reset_session_only) );
     FORWARD_IF_ERROR( ZSTD_DCtx_loadDictionary(zds, dict, dictSize) );
-    return ZSTD_FRAMEHEADERSIZE_PREFIX;
+    return ZSTD_startingInputLength(zds->format);
 }
 
 /* note : this variant can't fail */
@@ -1324,16 +1322,16 @@
 {
     FORWARD_IF_ERROR( ZSTD_DCtx_reset(dctx, ZSTD_reset_session_only) );
     FORWARD_IF_ERROR( ZSTD_DCtx_refDDict(dctx, ddict) );
-    return ZSTD_FRAMEHEADERSIZE_PREFIX;
+    return ZSTD_startingInputLength(dctx->format);
 }
 
 /* ZSTD_resetDStream() :
- * return : expected size, aka ZSTD_FRAMEHEADERSIZE_PREFIX.
+ * return : expected size, aka ZSTD_startingInputLength().
  * this function cannot fail */
 size_t ZSTD_resetDStream(ZSTD_DStream* dctx)
 {
     FORWARD_IF_ERROR(ZSTD_DCtx_reset(dctx, ZSTD_reset_session_only));
-    return ZSTD_FRAMEHEADERSIZE_PREFIX;
+    return ZSTD_startingInputLength(dctx->format);
 }
 
 
@@ -1564,7 +1562,7 @@
                             zds->lhSize += remainingInput;
                         }
                         input->pos = input->size;
-                        return (MAX(ZSTD_FRAMEHEADERSIZE_MIN, hSize) - zds->lhSize) + ZSTD_blockHeaderSize;   /* remaining header bytes + next block header */
+                        return (MAX((size_t)ZSTD_FRAMEHEADERSIZE_MIN(zds->format), hSize) - zds->lhSize) + ZSTD_blockHeaderSize;   /* remaining header bytes + next block header */
                     }
                     assert(ip != NULL);
                     memcpy(zds->headerBuffer + zds->lhSize, ip, toLoad); zds->lhSize = hSize; ip += toLoad;
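[Editor's note] The decompressor hunks above stop relying on a single ZSTD_FRAMEHEADERSIZE_PREFIX constant and instead derive the minimum input length from the frame format, since the magicless variant omits the 4-byte magic number. A hedged sketch mirroring the macro values that appear in the zstd.h diff further down:

    #include <stddef.h>

    typedef enum { fmt_zstd1 = 0, fmt_zstd1_magicless = 1 } format_sketch;  /* stand-ins */

    /* 4-byte magic + 1-byte frame header descriptor, or the descriptor alone. */
    static size_t startingInputLength_sketch(format_sketch fmt)
    {
        return (fmt == fmt_zstd1) ? 5 : 1;
    }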
--- a/contrib/python-zstandard/zstd/decompress/zstd_decompress_block.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/decompress/zstd_decompress_block.c	Sat Dec 28 09:55:45 2019 -0800
@@ -573,38 +573,118 @@
     size_t pos;
 } seqState_t;
 
+/*! ZSTD_overlapCopy8() :
+ *  Copies 8 bytes from ip to op and updates op and ip where ip <= op.
+ *  If the offset is < 8 then the offset is spread to at least 8 bytes.
+ *
+ *  Precondition: *ip <= *op
+ *  Postcondition: *op - *ip >= 8
+ */
+static void ZSTD_overlapCopy8(BYTE** op, BYTE const** ip, size_t offset) {
+    assert(*ip <= *op);
+    if (offset < 8) {
+        /* close range match, overlap */
+        static const U32 dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 };   /* added */
+        static const int dec64table[] = { 8, 8, 8, 7, 8, 9,10,11 };   /* subtracted */
+        int const sub2 = dec64table[offset];
+        (*op)[0] = (*ip)[0];
+        (*op)[1] = (*ip)[1];
+        (*op)[2] = (*ip)[2];
+        (*op)[3] = (*ip)[3];
+        *ip += dec32table[offset];
+        ZSTD_copy4(*op+4, *ip);
+        *ip -= sub2;
+    } else {
+        ZSTD_copy8(*op, *ip);
+    }
+    *ip += 8;
+    *op += 8;
+    assert(*op - *ip >= 8);
+}
 
-/* ZSTD_execSequenceLast7():
- * exceptional case : decompress a match starting within last 7 bytes of output buffer.
- * requires more careful checks, to ensure there is no overflow.
- * performance does not matter though.
- * note : this case is supposed to be never generated "naturally" by reference encoder,
- *        since in most cases it needs at least 8 bytes to look for a match.
- *        but it's allowed by the specification. */
+/*! ZSTD_safecopy() :
+ *  Specialized version of memcpy() that is allowed to READ up to WILDCOPY_OVERLENGTH past the input buffer
+ *  and write up to 16 bytes past oend_w (op >= oend_w is allowed).
+ *  This function is only called in the uncommon case where the sequence is near the end of the block. It
+ *  should be fast for a single long sequence, but can be slow for several short sequences.
+ *
+ *  @param ovtype controls the overlap detection
+ *         - ZSTD_no_overlap: The source and destination are guaranteed to be at least WILDCOPY_VECLEN bytes apart.
+ *         - ZSTD_overlap_src_before_dst: The src and dst may overlap and may be any distance apart.
+ *           The src buffer must be before the dst buffer.
+ */
+static void ZSTD_safecopy(BYTE* op, BYTE* const oend_w, BYTE const* ip, ptrdiff_t length, ZSTD_overlap_e ovtype) {
+    ptrdiff_t const diff = op - ip;
+    BYTE* const oend = op + length;
+
+    assert((ovtype == ZSTD_no_overlap && (diff <= -8 || diff >= 8 || op >= oend_w)) ||
+           (ovtype == ZSTD_overlap_src_before_dst && diff >= 0));
+
+    if (length < 8) {
+        /* Handle short lengths. */
+        while (op < oend) *op++ = *ip++;
+        return;
+    }
+    if (ovtype == ZSTD_overlap_src_before_dst) {
+        /* Copy 8 bytes and ensure the offset >= 8 when there can be overlap. */
+        assert(length >= 8);
+        ZSTD_overlapCopy8(&op, &ip, diff);
+        assert(op - ip >= 8);
+        assert(op <= oend);
+    }
+
+    if (oend <= oend_w) {
+        /* No risk of overwrite. */
+        ZSTD_wildcopy(op, ip, length, ovtype);
+        return;
+    }
+    if (op <= oend_w) {
+        /* Wildcopy until we get close to the end. */
+        assert(oend > oend_w);
+        ZSTD_wildcopy(op, ip, oend_w - op, ovtype);
+        ip += oend_w - op;
+        op = oend_w;
+    }
+    /* Handle the leftovers. */
+    while (op < oend) *op++ = *ip++;
+}
+
+/* ZSTD_execSequenceEnd():
+ * This version handles cases that are near the end of the output buffer. It requires
+ * more careful checks to make sure there is no overflow. By separating out these hard
+ * and unlikely cases, we can speed up the common cases.
+ *
+ * NOTE: This function needs to be fast for a single long sequence, but doesn't need
+ * to be optimized for many small sequences, since those fall into ZSTD_execSequence().
+ */
 FORCE_NOINLINE
-size_t ZSTD_execSequenceLast7(BYTE* op,
-                              BYTE* const oend, seq_t sequence,
-                              const BYTE** litPtr, const BYTE* const litLimit,
-                              const BYTE* const base, const BYTE* const vBase, const BYTE* const dictEnd)
+size_t ZSTD_execSequenceEnd(BYTE* op,
+                            BYTE* const oend, seq_t sequence,
+                            const BYTE** litPtr, const BYTE* const litLimit,
+                            const BYTE* const prefixStart, const BYTE* const virtualStart, const BYTE* const dictEnd)
 {
     BYTE* const oLitEnd = op + sequence.litLength;
     size_t const sequenceLength = sequence.litLength + sequence.matchLength;
     BYTE* const oMatchEnd = op + sequenceLength;   /* risk : address space overflow (32-bits) */
     const BYTE* const iLitEnd = *litPtr + sequence.litLength;
     const BYTE* match = oLitEnd - sequence.offset;
+    BYTE* const oend_w = oend - WILDCOPY_OVERLENGTH;
 
-    /* check */
-    RETURN_ERROR_IF(oMatchEnd>oend, dstSize_tooSmall, "last match must fit within dstBuffer");
+    /* bounds checks */
+    assert(oLitEnd < oMatchEnd);
+    RETURN_ERROR_IF(oMatchEnd > oend, dstSize_tooSmall, "last match must fit within dstBuffer");
     RETURN_ERROR_IF(iLitEnd > litLimit, corruption_detected, "try to read beyond literal buffer");
 
     /* copy literals */
-    while (op < oLitEnd) *op++ = *(*litPtr)++;
+    ZSTD_safecopy(op, oend_w, *litPtr, sequence.litLength, ZSTD_no_overlap);
+    op = oLitEnd;
+    *litPtr = iLitEnd;
 
     /* copy Match */
-    if (sequence.offset > (size_t)(oLitEnd - base)) {
+    if (sequence.offset > (size_t)(oLitEnd - prefixStart)) {
         /* offset beyond prefix */
-        RETURN_ERROR_IF(sequence.offset > (size_t)(oLitEnd - vBase),corruption_detected);
-        match = dictEnd - (base-match);
+        RETURN_ERROR_IF(sequence.offset > (size_t)(oLitEnd - virtualStart), corruption_detected);
+        match = dictEnd - (prefixStart-match);
         if (match + sequence.matchLength <= dictEnd) {
             memmove(oLitEnd, match, sequence.matchLength);
             return sequenceLength;
@@ -614,13 +694,12 @@
             memmove(oLitEnd, match, length1);
             op = oLitEnd + length1;
             sequence.matchLength -= length1;
-            match = base;
+            match = prefixStart;
     }   }
-    while (op < oMatchEnd) *op++ = *match++;
+    ZSTD_safecopy(op, oend_w, match, sequence.matchLength, ZSTD_overlap_src_before_dst);
     return sequenceLength;
 }
 
-
 HINT_INLINE
 size_t ZSTD_execSequence(BYTE* op,
                          BYTE* const oend, seq_t sequence,
@@ -634,20 +713,29 @@
     const BYTE* const iLitEnd = *litPtr + sequence.litLength;
     const BYTE* match = oLitEnd - sequence.offset;
 
-    /* check */
-    RETURN_ERROR_IF(oMatchEnd>oend, dstSize_tooSmall, "last match must start at a minimum distance of WILDCOPY_OVERLENGTH from oend");
-    RETURN_ERROR_IF(iLitEnd > litLimit, corruption_detected, "over-read beyond lit buffer");
-    if (oLitEnd>oend_w) return ZSTD_execSequenceLast7(op, oend, sequence, litPtr, litLimit, prefixStart, virtualStart, dictEnd);
+    /* Errors and uncommon cases handled here. */
+    assert(oLitEnd < oMatchEnd);
+    if (iLitEnd > litLimit || oMatchEnd > oend_w)
+        return ZSTD_execSequenceEnd(op, oend, sequence, litPtr, litLimit, prefixStart, virtualStart, dictEnd);
+
+    /* Assumptions (everything else goes into ZSTD_execSequenceEnd()) */
+    assert(iLitEnd <= litLimit /* Literal length is in bounds */);
+    assert(oLitEnd <= oend_w /* Can wildcopy literals */);
+    assert(oMatchEnd <= oend_w /* Can wildcopy matches */);
 
-    /* copy Literals */
-    if (sequence.litLength > 8)
-        ZSTD_wildcopy_16min(op, (*litPtr), sequence.litLength, ZSTD_no_overlap);   /* note : since oLitEnd <= oend-WILDCOPY_OVERLENGTH, no risk of overwrite beyond oend */
-    else
-        ZSTD_copy8(op, *litPtr);
+    /* Copy Literals:
+     * Split out litLength <= 16 since it is nearly always true. +1.6% on gcc-9.
+     * We likely don't need the full 32-byte wildcopy.
+     */
+    assert(WILDCOPY_OVERLENGTH >= 16);
+    ZSTD_copy16(op, (*litPtr));
+    if (sequence.litLength > 16) {
+        ZSTD_wildcopy(op+16, (*litPtr)+16, sequence.litLength-16, ZSTD_no_overlap);
+    }
     op = oLitEnd;
     *litPtr = iLitEnd;   /* update for next sequence */
 
-    /* copy Match */
+    /* Copy Match */
     if (sequence.offset > (size_t)(oLitEnd - prefixStart)) {
         /* offset beyond prefix -> go into extDict */
         RETURN_ERROR_IF(sequence.offset > (size_t)(oLitEnd - virtualStart), corruption_detected);
@@ -662,123 +750,33 @@
             op = oLitEnd + length1;
             sequence.matchLength -= length1;
             match = prefixStart;
-            if (op > oend_w || sequence.matchLength < MINMATCH) {
-              U32 i;
-              for (i = 0; i < sequence.matchLength; ++i) op[i] = match[i];
-              return sequenceLength;
-            }
     }   }
-    /* Requirement: op <= oend_w && sequence.matchLength >= MINMATCH */
-
-    /* match within prefix */
-    if (sequence.offset < 8) {
-        /* close range match, overlap */
-        static const U32 dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 };   /* added */
-        static const int dec64table[] = { 8, 8, 8, 7, 8, 9,10,11 };   /* subtracted */
-        int const sub2 = dec64table[sequence.offset];
-        op[0] = match[0];
-        op[1] = match[1];
-        op[2] = match[2];
-        op[3] = match[3];
-        match += dec32table[sequence.offset];
-        ZSTD_copy4(op+4, match);
-        match -= sub2;
-    } else {
-        ZSTD_copy8(op, match);
-    }
-    op += 8; match += 8;
-
-    if (oMatchEnd > oend-(16-MINMATCH)) {
-        if (op < oend_w) {
-            ZSTD_wildcopy(op, match, oend_w - op, ZSTD_overlap_src_before_dst);
-            match += oend_w - op;
-            op = oend_w;
-        }
-        while (op < oMatchEnd) *op++ = *match++;
-    } else {
-        ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8, ZSTD_overlap_src_before_dst);   /* works even if matchLength < 8 */
-    }
-    return sequenceLength;
-}
-
-
-HINT_INLINE
-size_t ZSTD_execSequenceLong(BYTE* op,
-                             BYTE* const oend, seq_t sequence,
-                             const BYTE** litPtr, const BYTE* const litLimit,
-                             const BYTE* const prefixStart, const BYTE* const dictStart, const BYTE* const dictEnd)
-{
-    BYTE* const oLitEnd = op + sequence.litLength;
-    size_t const sequenceLength = sequence.litLength + sequence.matchLength;
-    BYTE* const oMatchEnd = op + sequenceLength;   /* risk : address space overflow (32-bits) */
-    BYTE* const oend_w = oend - WILDCOPY_OVERLENGTH;
-    const BYTE* const iLitEnd = *litPtr + sequence.litLength;
-    const BYTE* match = sequence.match;
-
-    /* check */
-    RETURN_ERROR_IF(oMatchEnd > oend, dstSize_tooSmall, "last match must start at a minimum distance of WILDCOPY_OVERLENGTH from oend");
-    RETURN_ERROR_IF(iLitEnd > litLimit, corruption_detected, "over-read beyond lit buffer");
-    if (oLitEnd > oend_w) return ZSTD_execSequenceLast7(op, oend, sequence, litPtr, litLimit, prefixStart, dictStart, dictEnd);
+    /* Match within prefix of 1 or more bytes */
+    assert(op <= oMatchEnd);
+    assert(oMatchEnd <= oend_w);
+    assert(match >= prefixStart);
+    assert(sequence.matchLength >= 1);
 
-    /* copy Literals */
-    if (sequence.litLength > 8)
-        ZSTD_wildcopy_16min(op, *litPtr, sequence.litLength, ZSTD_no_overlap);   /* note : since oLitEnd <= oend-WILDCOPY_OVERLENGTH, no risk of overwrite beyond oend */
-    else
-        ZSTD_copy8(op, *litPtr);  /* note : op <= oLitEnd <= oend_w == oend - 8 */
-
-    op = oLitEnd;
-    *litPtr = iLitEnd;   /* update for next sequence */
+    /* Nearly all offsets are >= WILDCOPY_VECLEN bytes, which means we can use wildcopy
+     * without overlap checking.
+     */
+    if (sequence.offset >= WILDCOPY_VECLEN) {
+        /* We bet on a full wildcopy for matches, since we expect matches to be
+         * longer than literals (in general). In silesia, ~10% of matches are longer
+         * than 16 bytes.
+         */
+        ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength, ZSTD_no_overlap);
+        return sequenceLength;
+    }
+    assert(sequence.offset < WILDCOPY_VECLEN);
 
-    /* copy Match */
-    if (sequence.offset > (size_t)(oLitEnd - prefixStart)) {
-        /* offset beyond prefix */
-        RETURN_ERROR_IF(sequence.offset > (size_t)(oLitEnd - dictStart), corruption_detected);
-        if (match + sequence.matchLength <= dictEnd) {
-            memmove(oLitEnd, match, sequence.matchLength);
-            return sequenceLength;
-        }
-        /* span extDict & currentPrefixSegment */
-        {   size_t const length1 = dictEnd - match;
-            memmove(oLitEnd, match, length1);
-            op = oLitEnd + length1;
-            sequence.matchLength -= length1;
-            match = prefixStart;
-            if (op > oend_w || sequence.matchLength < MINMATCH) {
-              U32 i;
-              for (i = 0; i < sequence.matchLength; ++i) op[i] = match[i];
-              return sequenceLength;
-            }
-    }   }
-    assert(op <= oend_w);
-    assert(sequence.matchLength >= MINMATCH);
+    /* Copy 8 bytes and spread the offset to be >= 8. */
+    ZSTD_overlapCopy8(&op, &match, sequence.offset);
 
-    /* match within prefix */
-    if (sequence.offset < 8) {
-        /* close range match, overlap */
-        static const U32 dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 };   /* added */
-        static const int dec64table[] = { 8, 8, 8, 7, 8, 9,10,11 };   /* subtracted */
-        int const sub2 = dec64table[sequence.offset];
-        op[0] = match[0];
-        op[1] = match[1];
-        op[2] = match[2];
-        op[3] = match[3];
-        match += dec32table[sequence.offset];
-        ZSTD_copy4(op+4, match);
-        match -= sub2;
-    } else {
-        ZSTD_copy8(op, match);
-    }
-    op += 8; match += 8;
-
-    if (oMatchEnd > oend-(16-MINMATCH)) {
-        if (op < oend_w) {
-            ZSTD_wildcopy(op, match, oend_w - op, ZSTD_overlap_src_before_dst);
-            match += oend_w - op;
-            op = oend_w;
-        }
-        while (op < oMatchEnd) *op++ = *match++;
-    } else {
-        ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8, ZSTD_overlap_src_before_dst);   /* works even if matchLength < 8 */
+    /* If the match length is > 8 bytes, then continue with the wildcopy. */
+    if (sequence.matchLength > 8) {
+        assert(op < oMatchEnd);
+        ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8, ZSTD_overlap_src_before_dst);
     }
     return sequenceLength;
 }
@@ -1098,7 +1096,7 @@
         /* decode and decompress */
         for ( ; (BIT_reloadDStream(&(seqState.DStream)) <= BIT_DStream_completed) && (seqNb<nbSeq) ; seqNb++) {
             seq_t const sequence = ZSTD_decodeSequenceLong(&seqState, isLongOffset);
-            size_t const oneSeqSize = ZSTD_execSequenceLong(op, oend, sequences[(seqNb-ADVANCED_SEQS) & STORED_SEQS_MASK], &litPtr, litEnd, prefixStart, dictStart, dictEnd);
+            size_t const oneSeqSize = ZSTD_execSequence(op, oend, sequences[(seqNb-ADVANCED_SEQS) & STORED_SEQS_MASK], &litPtr, litEnd, prefixStart, dictStart, dictEnd);
             if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
             PREFETCH_L1(sequence.match); PREFETCH_L1(sequence.match + sequence.matchLength - 1); /* note : it's safe to invoke PREFETCH() on any memory address, including invalid ones */
             sequences[seqNb & STORED_SEQS_MASK] = sequence;
@@ -1109,7 +1107,7 @@
         /* finish queue */
         seqNb -= seqAdvance;
         for ( ; seqNb<nbSeq ; seqNb++) {
-            size_t const oneSeqSize = ZSTD_execSequenceLong(op, oend, sequences[seqNb&STORED_SEQS_MASK], &litPtr, litEnd, prefixStart, dictStart, dictEnd);
+            size_t const oneSeqSize = ZSTD_execSequence(op, oend, sequences[seqNb&STORED_SEQS_MASK], &litPtr, litEnd, prefixStart, dictStart, dictEnd);
             if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
             op += oneSeqSize;
         }
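[Editor's note] ZSTD_overlapCopy8() above spreads small offsets so later copies can proceed 8 bytes at a time even when source and destination overlap. As a standalone worked illustration (not the library code), expanding an overlapping match with offset 3, which repeats a 3-byte pattern:

    #include <stdio.h>
    #include <string.h>

    /* With offset 3, each output byte repeats the byte written 3 positions
     * earlier, so "abc" expands to "abcabcabc...". A plain memcpy() would be
     * wrong here because source and destination overlap; the decoder's
     * overlap-aware copy handles exactly this case. */
    int main(void)
    {
        unsigned char buf[16] = "abc";
        size_t const offset = 3, matchLength = 9;
        unsigned char* op = buf + 3;                 /* end of literals */
        const unsigned char* match = op - offset;    /* points back into output */
        for (size_t i = 0; i < matchLength; i++)
            op[i] = match[i];                        /* byte-by-byte, overlap-safe */
        printf("%.*s\n", (int)(3 + matchLength), buf);  /* prints abcabcabcabc */
        return 0;
    }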
--- a/contrib/python-zstandard/zstd/deprecated/zbuff.h	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/deprecated/zbuff.h	Sat Dec 28 09:55:45 2019 -0800
@@ -36,16 +36,17 @@
 *****************************************************************/
 /* Deprecation warnings */
 /* Should these warnings be a problem,
-   it is generally possible to disable them,
-   typically with -Wno-deprecated-declarations for gcc
-   or _CRT_SECURE_NO_WARNINGS in Visual.
-   Otherwise, it's also possible to define ZBUFF_DISABLE_DEPRECATE_WARNINGS */
+ * it is generally possible to disable them,
+ * typically with -Wno-deprecated-declarations for gcc
+ * or _CRT_SECURE_NO_WARNINGS in Visual.
+ * Otherwise, it's also possible to define ZBUFF_DISABLE_DEPRECATE_WARNINGS
+ */
 #ifdef ZBUFF_DISABLE_DEPRECATE_WARNINGS
 #  define ZBUFF_DEPRECATED(message) ZSTDLIB_API  /* disable deprecation warnings */
 #else
 #  if defined (__cplusplus) && (__cplusplus >= 201402) /* C++14 or greater */
 #    define ZBUFF_DEPRECATED(message) [[deprecated(message)]] ZSTDLIB_API
-#  elif (defined(__GNUC__) && (__GNUC__ >= 5)) || defined(__clang__)
+#  elif (defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5))) || defined(__clang__)
 #    define ZBUFF_DEPRECATED(message) ZSTDLIB_API __attribute__((deprecated(message)))
 #  elif defined(__GNUC__) && (__GNUC__ >= 3)
 #    define ZBUFF_DEPRECATED(message) ZSTDLIB_API __attribute__((deprecated))
--- a/contrib/python-zstandard/zstd/dictBuilder/cover.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/dictBuilder/cover.c	Sat Dec 28 09:55:45 2019 -0800
@@ -638,8 +638,8 @@
                     "compared to the source size %u! "
                     "size(source)/size(dictionary) = %f, but it should be >= "
                     "10! This may lead to a subpar dictionary! We recommend "
-                    "training on sources at least 10x, and up to 100x the "
-                    "size of the dictionary!\n", (U32)maxDictSize,
+                    "training on sources at least 10x, and preferably 100x "
+                    "the size of the dictionary! \n", (U32)maxDictSize,
                     (U32)nbDmers, ratio);
 }
 
--- a/contrib/python-zstandard/zstd/dictBuilder/zdict.c	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/dictBuilder/zdict.c	Sat Dec 28 09:55:45 2019 -0800
@@ -571,7 +571,7 @@
     unsigned const prime1 = 2654435761U;
     unsigned const prime2 = 2246822519U;
     unsigned acc = prime1;
-    size_t p=0;;
+    size_t p=0;
     for (p=0; p<length; p++) {
         acc *= prime2;
         ((unsigned char*)buffer)[p] = (unsigned char)(acc >> 21);
--- a/contrib/python-zstandard/zstd/zstd.h	Fri Dec 27 18:54:57 2019 -0500
+++ b/contrib/python-zstandard/zstd/zstd.h	Sat Dec 28 09:55:45 2019 -0800
@@ -15,6 +15,7 @@
 #define ZSTD_H_235446
 
 /* ======   Dependency   ======*/
+#include <limits.h>   /* INT_MAX */
 #include <stddef.h>   /* size_t */
 
 
@@ -71,7 +72,7 @@
 /*------   Version   ------*/
 #define ZSTD_VERSION_MAJOR    1
 #define ZSTD_VERSION_MINOR    4
-#define ZSTD_VERSION_RELEASE  3
+#define ZSTD_VERSION_RELEASE  4
 
 #define ZSTD_VERSION_NUMBER  (ZSTD_VERSION_MAJOR *100*100 + ZSTD_VERSION_MINOR *100 + ZSTD_VERSION_RELEASE)
 ZSTDLIB_API unsigned ZSTD_versionNumber(void);   /**< to check runtime library version */
@@ -196,9 +197,13 @@
 ZSTDLIB_API size_t     ZSTD_freeCCtx(ZSTD_CCtx* cctx);
 
 /*! ZSTD_compressCCtx() :
- *  Same as ZSTD_compress(), using an explicit ZSTD_CCtx
- *  The function will compress at requested compression level,
- *  ignoring any other parameter */
+ *  Same as ZSTD_compress(), using an explicit ZSTD_CCtx.
+ *  Important : in order to behave similarly to `ZSTD_compress()`,
+ *  this function compresses at requested compression level,
+ *  __ignoring any other parameter__ .
+ *  If any advanced parameter was set using the advanced API,
+ *  they will all be reset. Only `compressionLevel` remains.
+ */
 ZSTDLIB_API size_t ZSTD_compressCCtx(ZSTD_CCtx* cctx,
                                      void* dst, size_t dstCapacity,
                                const void* src, size_t srcSize,
@@ -233,7 +238,7 @@
  *   using ZSTD_CCtx_set*() functions.
  *   Pushed parameters are sticky : they are valid for next compressed frame, and any subsequent frame.
  *   "sticky" parameters are applicable to `ZSTD_compress2()` and `ZSTD_compressStream*()` !
- *   They do not apply to "simple" one-shot variants such as ZSTD_compressCCtx()
+ *   __They do not apply to "simple" one-shot variants such as ZSTD_compressCCtx()__ .
  *
  *   It's possible to reset all parameters to "default" using ZSTD_CCtx_reset().
  *
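[Editor's note] The documentation hunks here stress that parameters set through ZSTD_CCtx_set*() are sticky for ZSTD_compress2()/ZSTD_compressStream*() but ignored by the one-shot ZSTD_compressCCtx(). A small hedged usage sketch of the advanced path (error handling trimmed; buffer sizes assumed adequate):

    #include <zstd.h>

    size_t compress_with_params_sketch(void* dst, size_t dstCapacity,
                                       const void* src, size_t srcSize)
    {
        ZSTD_CCtx* const cctx = ZSTD_createCCtx();
        /* Sticky parameters: they apply to this and any later ZSTD_compress2()
         * call on the same context, until ZSTD_CCtx_reset() clears them. */
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_checksumFlag, 1);
        size_t const written = ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
        /* ZSTD_compressCCtx(cctx, ..., level) would ignore and reset the
         * parameters above, compressing at `level` only. */
        ZSTD_freeCCtx(cctx);
        return written;
    }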
@@ -261,18 +266,26 @@
 
     /* compression parameters
      * Note: When compressing with a ZSTD_CDict these parameters are superseded
-     * by the parameters used to construct the ZSTD_CDict. See ZSTD_CCtx_refCDict()
-     * for more info (superseded-by-cdict). */
-    ZSTD_c_compressionLevel=100, /* Update all compression parameters according to pre-defined cLevel table
+     * by the parameters used to construct the ZSTD_CDict.
+     * See ZSTD_CCtx_refCDict() for more info (superseded-by-cdict). */
+    ZSTD_c_compressionLevel=100, /* Set compression parameters according to pre-defined cLevel table.
+                              * Note that exact compression parameters are dynamically determined,
+                              * depending on both compression level and srcSize (when known).
                               * Default level is ZSTD_CLEVEL_DEFAULT==3.
                               * Special: value 0 means default, which is controlled by ZSTD_CLEVEL_DEFAULT.
                               * Note 1 : it's possible to pass a negative compression level.
-                              * Note 2 : setting a level sets all default values of other compression parameters */
+                              * Note 2 : setting a level resets all other compression parameters to default */
+    /* Advanced compression parameters :
+     * It's possible to pin down compression parameters to some specific values.
+     * In which case, these values are no longer dynamically selected by the compressor */
     ZSTD_c_windowLog=101,    /* Maximum allowed back-reference distance, expressed as power of 2.
+                              * This will set a memory budget for streaming decompression,
+                              * with larger values requiring more memory
+                              * and typically compressing more.
                               * Must be clamped between ZSTD_WINDOWLOG_MIN and ZSTD_WINDOWLOG_MAX.
                               * Special: value 0 means "use default windowLog".
                               * Note: Using a windowLog greater than ZSTD_WINDOWLOG_LIMIT_DEFAULT
-                              *       requires explicitly allowing such window size at decompression stage if using streaming. */
+                              *       requires explicitly allowing such size at streaming decompression stage. */
     ZSTD_c_hashLog=102,      /* Size of the initial probe table, as a power of 2.
                               * Resulting memory usage is (1 << (hashLog+2)).
                               * Must be clamped between ZSTD_HASHLOG_MIN and ZSTD_HASHLOG_MAX.
@@ -283,13 +296,13 @@
                               * Resulting memory usage is (1 << (chainLog+2)).
                               * Must be clamped between ZSTD_CHAINLOG_MIN and ZSTD_CHAINLOG_MAX.
                               * Larger tables result in better and slower compression.
-                              * This parameter is useless when using "fast" strategy.
+                              * This parameter is useless for "fast" strategy.
                               * It's still useful when using "dfast" strategy,
                               * in which case it defines a secondary probe table.
                               * Special: value 0 means "use default chainLog". */
     ZSTD_c_searchLog=104,    /* Number of search attempts, as a power of 2.
                               * More attempts result in better and slower compression.
-                              * This parameter is useless when using "fast" and "dFast" strategies.
+                              * This parameter is useless for "fast" and "dFast" strategies.
                               * Special: value 0 means "use default searchLog". */
     ZSTD_c_minMatch=105,     /* Minimum size of searched matches.
                               * Note that Zstandard can still find matches of smaller size,
@@ -344,7 +357,7 @@
     ZSTD_c_contentSizeFlag=200, /* Content size will be written into frame header _whenever known_ (default:1)
                               * Content size must be known at the beginning of compression.
                               * This is automatically the case when using ZSTD_compress2(),
-                              * For streaming variants, content size must be provided with ZSTD_CCtx_setPledgedSrcSize() */
+                              * For streaming scenarios, content size must be provided with ZSTD_CCtx_setPledgedSrcSize() */
     ZSTD_c_checksumFlag=201, /* A 32-bits checksum of content is written at end of frame (default:0) */
     ZSTD_c_dictIDFlag=202,   /* When applicable, dictionary's ID is written into frame header (default:1) */
 
@@ -363,7 +376,7 @@
                               * Each compression job is completed in parallel, so this value can indirectly impact the nb of active threads.
                               * 0 means default, which is dynamically determined based on compression parameters.
                               * Job size must be a minimum of overlap size, or 1 MB, whichever is largest.
-                              * The minimum size is automatically and transparently enforced */
+                              * The minimum size is automatically and transparently enforced. */
     ZSTD_c_overlapLog=402,   /* Control the overlap size, as a fraction of window size.
                               * The overlap size is an amount of data reloaded from previous job at the beginning of a new job.
                               * It helps preserve compression ratio, while each job is compressed in parallel.
@@ -386,6 +399,7 @@
      * ZSTD_c_forceAttachDict
      * ZSTD_c_literalCompressionMode
      * ZSTD_c_targetCBlockSize
+     * ZSTD_c_srcSizeHint
      * Because they are not stable, it's necessary to define ZSTD_STATIC_LINKING_ONLY to access them.
      * note : never ever use experimentalParam? names directly;
      *        also, the enums values themselves are unstable and can still change.
@@ -396,6 +410,7 @@
      ZSTD_c_experimentalParam4=1001,
      ZSTD_c_experimentalParam5=1002,
      ZSTD_c_experimentalParam6=1003,
+     ZSTD_c_experimentalParam7=1004
 } ZSTD_cParameter;
 
 typedef struct {
@@ -793,12 +808,17 @@
 typedef struct ZSTD_CDict_s ZSTD_CDict;
 
 /*! ZSTD_createCDict() :
- *  When compressing multiple messages / blocks using the same dictionary, it's recommended to load it only once.
- *  ZSTD_createCDict() will create a digested dictionary, ready to start future compression operations without startup cost.
+ *  When compressing multiple messages or blocks using the same dictionary,
+ *  it's recommended to digest the dictionary only once, since it's a costly operation.
+ *  ZSTD_createCDict() will create a state from digesting a dictionary.
+ *  The resulting state can be used for future compression operations with very limited startup cost.
  *  ZSTD_CDict can be created once and shared by multiple threads concurrently, since its usage is read-only.
- * `dictBuffer` can be released after ZSTD_CDict creation, because its content is copied within CDict.
- *  Consider experimental function `ZSTD_createCDict_byReference()` if you prefer to not duplicate `dictBuffer` content.
- *  Note : A ZSTD_CDict can be created from an empty dictBuffer, but it is inefficient when used to compress small data. */
+ * @dictBuffer can be released after ZSTD_CDict creation, because its content is copied within CDict.
+ *  Note 1 : Consider experimental function `ZSTD_createCDict_byReference()` if you prefer to not duplicate @dictBuffer content.
+ *  Note 2 : A ZSTD_CDict can be created from an empty @dictBuffer,
+ *      in which case the only thing that it transports is the @compressionLevel.
+ *      This can be useful in a pipeline featuring ZSTD_compress_usingCDict() exclusively,
+ *      expecting a ZSTD_CDict parameter with any data, including those without a known dictionary. */
 ZSTDLIB_API ZSTD_CDict* ZSTD_createCDict(const void* dictBuffer, size_t dictSize,
                                          int compressionLevel);
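[Editor's note] The ZSTD_createCDict() notes above recommend digesting a dictionary once and sharing the read-only CDict across compressions. A hedged sketch of that round trip (error checks omitted for brevity; names match the public API, sizes are assumed to fit):

    #include <zstd.h>

    size_t compress_with_cdict_sketch(void* dst, size_t dstCapacity,
                                      const void* src, size_t srcSize,
                                      const void* dictBuffer, size_t dictSize)
    {
        /* Digest the dictionary once; the CDict is read-only and shareable. */
        ZSTD_CDict* const cdict = ZSTD_createCDict(dictBuffer, dictSize, 3 /* level */);
        ZSTD_CCtx*  const cctx  = ZSTD_createCCtx();
        size_t const written = ZSTD_compress_usingCDict(cctx, dst, dstCapacity,
                                                        src, srcSize, cdict);
        ZSTD_freeCCtx(cctx);
        ZSTD_freeCDict(cdict);
        return written;
    }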
 
@@ -925,7 +945,7 @@
  *  Note 3 : Referencing a prefix involves building tables, which are dependent on compression parameters.
  *           It's a CPU consuming operation, with non-negligible impact on latency.
  *           If there is a need to use the same prefix multiple times, consider loadDictionary instead.
- *  Note 4 : By default, the prefix is interpreted as raw content (ZSTD_dm_rawContent).
+ *  Note 4 : By default, the prefix is interpreted as raw content (ZSTD_dct_rawContent).
  *           Use experimental ZSTD_CCtx_refPrefix_advanced() to alter dictionary interpretation. */
 ZSTDLIB_API size_t ZSTD_CCtx_refPrefix(ZSTD_CCtx* cctx,
                                  const void* prefix, size_t prefixSize);
@@ -969,7 +989,7 @@
  *  Note 2 : Prefix buffer is referenced. It **must** outlive decompression.
  *           Prefix buffer must remain unmodified up to the end of frame,
  *           reached when ZSTD_decompressStream() returns 0.
- *  Note 3 : By default, the prefix is treated as raw content (ZSTD_dm_rawContent).
+ *  Note 3 : By default, the prefix is treated as raw content (ZSTD_dct_rawContent).
  *           Use ZSTD_CCtx_refPrefix_advanced() to alter dictMode (Experimental section)
  *  Note 4 : Referencing a raw content prefix has almost no cpu nor memory cost.
  *           A full dictionary is more costly, as it requires building tables.
@@ -1014,8 +1034,8 @@
  * Some of them might be removed in the future (especially when redundant with existing stable functions)
  * ***************************************************************************************/
 
-#define ZSTD_FRAMEHEADERSIZE_PREFIX 5   /* minimum input size required to query frame header size */
-#define ZSTD_FRAMEHEADERSIZE_MIN    6
+#define ZSTD_FRAMEHEADERSIZE_PREFIX(format) ((format) == ZSTD_f_zstd1 ? 5 : 1)   /* minimum input size required to query frame header size */
+#define ZSTD_FRAMEHEADERSIZE_MIN(format)    ((format) == ZSTD_f_zstd1 ? 6 : 2)
 #define ZSTD_FRAMEHEADERSIZE_MAX   18   /* can be useful for static allocation */
 #define ZSTD_SKIPPABLEHEADERSIZE    8
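
A hypothetical sketch of how the now format-parameterised minimum might be used when probing a partial buffer; ZSTD_getFrameHeader() and ZSTD_frameHeader live in the experimental section, hence ZSTD_STATIC_LINKING_ONLY.

    #define ZSTD_STATIC_LINKING_ONLY   /* frame-header helpers are experimental */
    #include <zstd.h>

    /* Returns 0 when fh is filled, a size hint when more input is needed,
     * or an error code. */
    static size_t probe_frame_header(const void* buf, size_t bufSize, ZSTD_frameHeader* fh)
    {
        size_t const minInput = (size_t)ZSTD_FRAMEHEADERSIZE_PREFIX(ZSTD_f_zstd1);
        if (bufSize < minInput) return minInput;   /* caller should supply at least this much */
        return ZSTD_getFrameHeader(fh, buf, bufSize);
    }
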
 
@@ -1063,6 +1083,8 @@
 /* Advanced parameter bounds */
 #define ZSTD_TARGETCBLOCKSIZE_MIN   64
 #define ZSTD_TARGETCBLOCKSIZE_MAX   ZSTD_BLOCKSIZE_MAX
+#define ZSTD_SRCSIZEHINT_MIN        0
+#define ZSTD_SRCSIZEHINT_MAX        INT_MAX
 
 /* internal */
 #define ZSTD_HASHLOG3_MAX           17
@@ -1073,6 +1095,24 @@
 typedef struct ZSTD_CCtx_params_s ZSTD_CCtx_params;
 
 typedef struct {
+    unsigned int matchPos; /* Match pos in dst */
+    /* If seqDef.offset > 3, then this is seqDef.offset - 3
+     * If seqDef.offset < 3, then this is the corresponding repeat offset
+     * But if seqDef.offset < 3 and litLength == 0, this is the
+     *   repeat offset before the corresponding repeat offset
+     * And if seqDef.offset == 3 and litLength == 0, this is the
+     *   most recent repeat offset - 1
+     */
+    unsigned int offset;
+    unsigned int litLength; /* Literal length */
+    unsigned int matchLength; /* Match length */
+    /* 0 when seq not rep and seqDef.offset otherwise
+     * when litLength == 0 this will be <= 4, otherwise <= 3 like normal
+     */
+    unsigned int rep;
+} ZSTD_Sequence;
+
+typedef struct {
     unsigned windowLog;       /**< largest match distance : larger == more compression, more memory needed during decompression */
     unsigned chainLog;        /**< fully searched segment : larger == more compression, slower, more memory (useless for fast) */
     unsigned hashLog;         /**< dispatch table : larger == faster, more memory */
@@ -1101,21 +1141,12 @@
 
 typedef enum {
     ZSTD_dlm_byCopy = 0,  /**< Copy dictionary content internally */
-    ZSTD_dlm_byRef = 1,   /**< Reference dictionary content -- the dictionary buffer must outlive its users. */
+    ZSTD_dlm_byRef = 1    /**< Reference dictionary content -- the dictionary buffer must outlive its users. */
 } ZSTD_dictLoadMethod_e;
 
 typedef enum {
-    /* Opened question : should we have a format ZSTD_f_auto ?
-     * Today, it would mean exactly the same as ZSTD_f_zstd1.
-     * But, in the future, should several formats become supported,
-     * on the compression side, it would mean "default format".
-     * On the decompression side, it would mean "automatic format detection",
-     * so that ZSTD_f_zstd1 would mean "accept *only* zstd frames".
-     * Since meaning is a little different, another option could be to define different enums for compression and decompression.
-     * This question could be kept for later, when there are actually multiple formats to support,
-     * but there is also the question of pinning enum values, and pinning value `0` is especially important */
     ZSTD_f_zstd1 = 0,           /* zstd frame format, specified in zstd_compression_format.md (default) */
-    ZSTD_f_zstd1_magicless = 1, /* Variant of zstd frame format, without initial 4-bytes magic number.
+    ZSTD_f_zstd1_magicless = 1  /* Variant of zstd frame format, without initial 4-bytes magic number.
                                  * Useful to save 4 bytes per generated frame.
                                 * Decoder cannot automatically recognise this format; it must be explicitly instructed to expect it. */
 } ZSTD_format_e;
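
For illustration, configuring both endpoints for the magicless variant might look like the sketch below. ZSTD_c_format / ZSTD_d_format are experimental parameters (ZSTD_STATIC_LINKING_ONLY required), and the format choice has to be agreed out of band since the decoder cannot detect it.

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>

    /* Both endpoints must be configured identically: a magicless frame
     * cannot be auto-detected by the decoder. */
    static void configure_magicless(ZSTD_CCtx* cctx, ZSTD_DCtx* dctx)
    {
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_format, ZSTD_f_zstd1_magicless);
        ZSTD_DCtx_setParameter(dctx, ZSTD_d_format, ZSTD_f_zstd1_magicless);
    }
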
@@ -1126,7 +1157,7 @@
      * to evolve and should be considered only in the context of extremely
      * advanced performance tuning.
      *
-     * Zstd currently supports the use of a CDict in two ways:
+     * Zstd currently supports the use of a CDict in three ways:
      *
      * - The contents of the CDict can be copied into the working context. This
      *   means that the compression can search both the dictionary and input
@@ -1142,6 +1173,12 @@
      *   working context's tables can be reused). For small inputs, this can be
      *   faster than copying the CDict's tables.
      *
+     * - The CDict's tables are not used at all, and instead we use the working
+     *   context alone to reload the dictionary and use params based on the source
+     *   size. See ZSTD_compress_insertDictionary() and ZSTD_compress_usingDict().
+     *   This method is effective when the dictionary sizes are very small relative
+     *   to the input size, and the input size is fairly large to begin with.
+     *
      * Zstd has a simple internal heuristic that selects which strategy to use
      * at the beginning of a compression. However, if experimentation shows that
      * Zstd is making poor choices, it is possible to override that choice with
@@ -1150,6 +1187,7 @@
     ZSTD_dictDefaultAttach = 0, /* Use the default heuristic. */
     ZSTD_dictForceAttach   = 1, /* Never copy the dictionary. */
     ZSTD_dictForceCopy     = 2, /* Always copy the dictionary. */
+    ZSTD_dictForceLoad     = 3  /* Always reload the dictionary */
 } ZSTD_dictAttachPref_e;
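
A sketch of overriding the heuristic through the experimental ZSTD_c_forceAttachDict parameter (defined elsewhere in this header); the helper name and the choice of ZSTD_dictForceAttach are illustrative.

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>

    /* Force the CDict to be attached rather than copied, e.g. when compressing
     * many tiny inputs with the same dictionary. */
    static size_t prefer_attach(ZSTD_CCtx* cctx, const ZSTD_CDict* cdict)
    {
        size_t const err = ZSTD_CCtx_setParameter(cctx, ZSTD_c_forceAttachDict, ZSTD_dictForceAttach);
        if (ZSTD_isError(err)) return err;
        return ZSTD_CCtx_refCDict(cctx, cdict);
    }
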
 
 typedef enum {
@@ -1158,7 +1196,7 @@
                                *   levels will be compressed. */
   ZSTD_lcm_huffman = 1,       /**< Always attempt Huffman compression. Uncompressed literals will still be
                                *   emitted if Huffman compression is not profitable. */
-  ZSTD_lcm_uncompressed = 2,  /**< Always emit uncompressed literals. */
+  ZSTD_lcm_uncompressed = 2   /**< Always emit uncompressed literals. */
 } ZSTD_literalCompressionMode_e;
 
 
@@ -1210,20 +1248,38 @@
  *           or an error code (if srcSize is too small) */
 ZSTDLIB_API size_t ZSTD_frameHeaderSize(const void* src, size_t srcSize);
 
+/*! ZSTD_getSequences() :
+ * Extract sequences from the sequence store.
+ * zc can be used to insert custom compression params.
+ * This function invokes ZSTD_compress2().
+ * @return : number of sequences extracted
+ */
+ZSTDLIB_API size_t ZSTD_getSequences(ZSTD_CCtx* zc, ZSTD_Sequence* outSeqs,
+    size_t outSeqsSize, const void* src, size_t srcSize);
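
A sketch of driving the new ZSTD_getSequences(). The header gives no explicit bound on the number of sequences, so allocating srcSize entries here is a conservative assumption, not a documented guarantee.

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Print the sequences zstd would emit for `src`. */
    static void dump_sequences(ZSTD_CCtx* zc, const void* src, size_t srcSize)
    {
        ZSTD_Sequence* const seqs = malloc(srcSize * sizeof(ZSTD_Sequence));
        if (seqs == NULL) return;
        size_t const nbSeqs = ZSTD_getSequences(zc, seqs, srcSize, src, srcSize);
        if (!ZSTD_isError(nbSeqs)) {
            for (size_t i = 0; i < nbSeqs; i++)
                printf("off=%u lit=%u match=%u\n",
                       seqs[i].offset, seqs[i].litLength, seqs[i].matchLength);
        }
        free(seqs);
    }
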
+
 
 /***************************************
 *  Memory management
 ***************************************/
 
 /*! ZSTD_estimate*() :
- *  These functions make it possible to estimate memory usage
- *  of a future {D,C}Ctx, before its creation.
- *  ZSTD_estimateCCtxSize() will provide a budget large enough for any compression level up to selected one.
- *  It will also consider src size to be arbitrarily "large", which is worst case.
- *  If srcSize is known to always be small, ZSTD_estimateCCtxSize_usingCParams() can provide a tighter estimation.
- *  ZSTD_estimateCCtxSize_usingCParams() can be used in tandem with ZSTD_getCParams() to create cParams from compressionLevel.
- *  ZSTD_estimateCCtxSize_usingCCtxParams() can be used in tandem with ZSTD_CCtxParams_setParameter(). Only single-threaded compression is supported. This function will return an error code if ZSTD_c_nbWorkers is >= 1.
- *  Note : CCtx size estimation is only correct for single-threaded compression. */
+ *  These functions make it possible to estimate memory usage of a future
+ *  {D,C}Ctx, before its creation.
+ *
+ *  ZSTD_estimateCCtxSize() will provide a budget large enough for any
+ *  compression level up to selected one. Unlike ZSTD_estimateCStreamSize*(),
+ *  this estimate does not include space for a window buffer, so this estimate
+ *  is guaranteed to be enough for single-shot compressions, but not streaming
+ *  compressions. It will however assume the input may be arbitrarily large,
+ *  which is the worst case. If srcSize is known to always be small,
+ *  ZSTD_estimateCCtxSize_usingCParams() can provide a tighter estimation.
+ *  ZSTD_estimateCCtxSize_usingCParams() can be used in tandem with
+ *  ZSTD_getCParams() to create cParams from compressionLevel.
+ *  ZSTD_estimateCCtxSize_usingCCtxParams() can be used in tandem with
+ *  ZSTD_CCtxParams_setParameter().
+ *
+ *  Note: only single-threaded compression is supported. This function will
+ *  return an error code if ZSTD_c_nbWorkers is >= 1. */
 ZSTDLIB_API size_t ZSTD_estimateCCtxSize(int compressionLevel);
 ZSTDLIB_API size_t ZSTD_estimateCCtxSize_usingCParams(ZSTD_compressionParameters cParams);
 ZSTDLIB_API size_t ZSTD_estimateCCtxSize_usingCCtxParams(const ZSTD_CCtx_params* params);
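
These estimates pair naturally with the static-allocation entry points. A sketch using ZSTD_initStaticCCtx() (also experimental), valid only for single-shot, single-threaded compression as noted above; the helper name is illustrative.

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>
    #include <stdlib.h>

    /* Allocate a workspace sized by the estimate and build a static CCtx in it.
     * The caller keeps ownership of *workspace and frees it when finished. */
    static ZSTD_CCtx* create_estimated_cctx(int level, void** workspace)
    {
        size_t const need = ZSTD_estimateCCtxSize(level);
        *workspace = malloc(need);
        if (*workspace == NULL) return NULL;
        return ZSTD_initStaticCCtx(*workspace, need);
    }
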
@@ -1334,7 +1390,8 @@
  *  Create a digested dictionary for compression
  *  Dictionary content is just referenced, not duplicated.
  *  As a consequence, `dictBuffer` **must** outlive CDict,
- *  and its content must remain unmodified throughout the lifetime of CDict. */
+ *  and its content must remain unmodified throughout the lifetime of CDict.
+ *  note: equivalent to ZSTD_createCDict_advanced(), with dictLoadMethod==ZSTD_dlm_byRef */
 ZSTDLIB_API ZSTD_CDict* ZSTD_createCDict_byReference(const void* dictBuffer, size_t dictSize, int compressionLevel);
 
 /*! ZSTD_getCParams() :
@@ -1361,7 +1418,9 @@
 ZSTDLIB_API ZSTD_compressionParameters ZSTD_adjustCParams(ZSTD_compressionParameters cPar, unsigned long long srcSize, size_t dictSize);
 
 /*! ZSTD_compress_advanced() :
- *  Same as ZSTD_compress_usingDict(), with fine-tune control over compression parameters (by structure) */
+ *  Note : this function is now DEPRECATED.
+ *         It can be replaced by ZSTD_compress2(), in combination with ZSTD_CCtx_setParameter() and other parameter setters.
+ *  This prototype will be marked as deprecated and generate compilation warning on reaching v1.5.x */
 ZSTDLIB_API size_t ZSTD_compress_advanced(ZSTD_CCtx* cctx,
                                           void* dst, size_t dstCapacity,
                                     const void* src, size_t srcSize,
@@ -1369,7 +1428,9 @@
                                           ZSTD_parameters params);
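
A sketch of the migration the note above suggests: express the old (params, dict) arguments through parameter setters and ZSTD_compress2(). The specific parameters and level shown are illustrative.

    #include <zstd.h>

    /* Same shape as a ZSTD_compress_advanced() call, expressed with the newer
     * API: set parameters, load the dictionary, then compress. */
    static size_t compress_new_style(ZSTD_CCtx* cctx,
                                     void* dst, size_t dstCapacity,
                                     const void* src, size_t srcSize,
                                     const void* dict, size_t dictSize)
    {
        ZSTD_CCtx_reset(cctx, ZSTD_reset_session_and_parameters);
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_checksumFlag, 1);
        ZSTD_CCtx_loadDictionary(cctx, dict, dictSize);
        return ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
    }
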
 
 /*! ZSTD_compress_usingCDict_advanced() :
- *  Same as ZSTD_compress_usingCDict(), with fine-tune control over frame parameters */
+ *  Note : this function is now REDUNDANT.
+ *         It can be replaced by ZSTD_compress2(), in combination with ZSTD_CCtx_loadDictionary() and other parameter setters.
+ *  This prototype will be marked as deprecated and generate compilation warning in some future version */
 ZSTDLIB_API size_t ZSTD_compress_usingCDict_advanced(ZSTD_CCtx* cctx,
                                               void* dst, size_t dstCapacity,
                                         const void* src, size_t srcSize,
@@ -1441,6 +1502,12 @@
  * There is no guarantee on compressed block size (default:0) */
 #define ZSTD_c_targetCBlockSize ZSTD_c_experimentalParam6
 
+/* User's best guess of source size.
+ * Hint is not valid when srcSizeHint == 0.
+ * There is no guarantee that the hint is close to the actual source size,
+ * but compression ratio may regress significantly if the guess considerably underestimates it. */
+#define ZSTD_c_srcSizeHint ZSTD_c_experimentalParam7
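
For illustration, passing a rough size guess; the 1 MiB value is arbitrary, and the parameter is experimental, hence ZSTD_STATIC_LINKING_ONLY.

    #define ZSTD_STATIC_LINKING_ONLY
    #include <zstd.h>

    /* Tell zstd the input will be roughly 1 MiB so parameters can be chosen
     * accordingly, without pledging an exact size. */
    static size_t hint_source_size(ZSTD_CCtx* cctx)
    {
        return ZSTD_CCtx_setParameter(cctx, ZSTD_c_srcSizeHint, 1 << 20);
    }
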
+
 /*! ZSTD_CCtx_getParameter() :
  *  Get the requested compression parameter value, selected by enum ZSTD_cParameter,
  *  and store it into int* value.
@@ -1613,8 +1680,13 @@
  * pledgedSrcSize must be correct. If it is not known at init time, use
  * ZSTD_CONTENTSIZE_UNKNOWN. Note that, for compatibility with older programs,
  * "0" also disables frame content size field. It may be enabled in the future.
+ * Note : this prototype will be marked as deprecated and generate compilation warnings on reaching v1.5.x
  */
-ZSTDLIB_API size_t ZSTD_initCStream_srcSize(ZSTD_CStream* zcs, int compressionLevel, unsigned long long pledgedSrcSize);
+ZSTDLIB_API size_t
+ZSTD_initCStream_srcSize(ZSTD_CStream* zcs,
+                         int compressionLevel,
+                         unsigned long long pledgedSrcSize);
+
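
A sketch of the multi-call replacement for the deprecated initializer above, following the reset / setParameter / setPledgedSrcSize pattern spelled out in these notes (ZSTD_CStream is an alias of ZSTD_CCtx, so the CCtx setters apply directly).

    #include <zstd.h>

    /* Modern equivalent of ZSTD_initCStream_srcSize(zcs, level, pledgedSrcSize). */
    static size_t init_cstream_src_size(ZSTD_CStream* zcs, int level,
                                        unsigned long long pledgedSrcSize)
    {
        size_t err;
        err = ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only);
        if (ZSTD_isError(err)) return err;
        err = ZSTD_CCtx_setParameter(zcs, ZSTD_c_compressionLevel, level);
        if (ZSTD_isError(err)) return err;
        return ZSTD_CCtx_setPledgedSrcSize(zcs, pledgedSrcSize);
    }
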
 /**! ZSTD_initCStream_usingDict() :
  * This function is deprecated, and is equivalent to:
  *     ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only);
@@ -1623,42 +1695,66 @@
  *
 * Creates an internal CDict (incompatible with static CCtx), except if
  * dict == NULL or dictSize < 8, in which case no dict is used.
- * Note: dict is loaded with ZSTD_dm_auto (treated as a full zstd dictionary if
+ * Note: dict is loaded with ZSTD_dct_auto (treated as a full zstd dictionary if
  * it begins with ZSTD_MAGIC_DICTIONARY, else as raw content) and ZSTD_dlm_byCopy.
+ * Note : this prototype will be marked as deprecated and generate compilation warnings on reaching v1.5.x
  */
-ZSTDLIB_API size_t ZSTD_initCStream_usingDict(ZSTD_CStream* zcs, const void* dict, size_t dictSize, int compressionLevel);
+ZSTDLIB_API size_t
+ZSTD_initCStream_usingDict(ZSTD_CStream* zcs,
+                     const void* dict, size_t dictSize,
+                           int compressionLevel);
+
 /**! ZSTD_initCStream_advanced() :
  * This function is deprecated, and is approximately equivalent to:
  *     ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only);
- *     ZSTD_CCtx_setZstdParams(zcs, params); // Set the zstd params and leave the rest as-is
+ *     // Pseudocode: Set each zstd parameter and leave the rest as-is.
+ *     for ((param, value) : params) {
+ *         ZSTD_CCtx_setParameter(zcs, param, value);
+ *     }
  *     ZSTD_CCtx_setPledgedSrcSize(zcs, pledgedSrcSize);
  *     ZSTD_CCtx_loadDictionary(zcs, dict, dictSize);
  *
- * pledgedSrcSize must be correct. If srcSize is not known at init time, use
- * value ZSTD_CONTENTSIZE_UNKNOWN. dict is loaded with ZSTD_dm_auto and ZSTD_dlm_byCopy.
+ * dict is loaded with ZSTD_dct_auto and ZSTD_dlm_byCopy.
+ * pledgedSrcSize must be correct.
+ * If srcSize is not known at init time, use value ZSTD_CONTENTSIZE_UNKNOWN.
+ * Note : this prototype will be marked as deprecated and generate compilation warnings on reaching v1.5.x
  */
-ZSTDLIB_API size_t ZSTD_initCStream_advanced(ZSTD_CStream* zcs, const void* dict, size_t dictSize,
-                                             ZSTD_parameters params, unsigned long long pledgedSrcSize);
+ZSTDLIB_API size_t
+ZSTD_initCStream_advanced(ZSTD_CStream* zcs,
+                    const void* dict, size_t dictSize,
+                          ZSTD_parameters params,
+                          unsigned long long pledgedSrcSize);
+
 /**! ZSTD_initCStream_usingCDict() :
  * This function is deprecated, and equivalent to:
  *     ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only);
  *     ZSTD_CCtx_refCDict(zcs, cdict);
  *
  * note : cdict will just be referenced, and must outlive compression session
+ * Note : this prototype will be marked as deprecated and generate compilation warnings on reaching v1.5.x
  */
 ZSTDLIB_API size_t ZSTD_initCStream_usingCDict(ZSTD_CStream* zcs, const ZSTD_CDict* cdict);
+
 /**! ZSTD_initCStream_usingCDict_advanced() :
- * This function is deprecated, and is approximately equivalent to:
+ *   This function is DEPRECATED, and is approximately equivalent to:
  *     ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only);
- *     ZSTD_CCtx_setZstdFrameParams(zcs, fParams); // Set the zstd frame params and leave the rest as-is
+ *     // Pseudocode: Set each zstd frame parameter and leave the rest as-is.
+ *     for ((fParam, value) : fParams) {
+ *         ZSTD_CCtx_setParameter(zcs, fParam, value);
+ *     }
  *     ZSTD_CCtx_setPledgedSrcSize(zcs, pledgedSrcSize);
  *     ZSTD_CCtx_refCDict(zcs, cdict);
  *
  * same as ZSTD_initCStream_usingCDict(), with control over frame parameters.
  * pledgedSrcSize must be correct. If srcSize is not known at init time, use
  * value ZSTD_CONTENTSIZE_UNKNOWN.
+ * Note : this prototype will be marked as deprecated and generate compilation warnings on reaching v1.5.x
  */
-ZSTDLIB_API size_t ZSTD_initCStream_usingCDict_advanced(ZSTD_CStream* zcs, const ZSTD_CDict* cdict, ZSTD_frameParameters fParams, unsigned long long pledgedSrcSize);
+ZSTDLIB_API size_t
+ZSTD_initCStream_usingCDict_advanced(ZSTD_CStream* zcs,
+                               const ZSTD_CDict* cdict,
+                                     ZSTD_frameParameters fParams,
+                                     unsigned long long pledgedSrcSize);
 
 /*! ZSTD_resetCStream() :
  * This function is deprecated, and is equivalent to:
@@ -1673,6 +1769,7 @@
  *  For the time being, pledgedSrcSize==0 is interpreted as "srcSize unknown" for compatibility with older programs,
  *  but it will change to mean "empty" in future version, so use macro ZSTD_CONTENTSIZE_UNKNOWN instead.
  * @return : 0, or an error code (which can be tested using ZSTD_isError())
+ *  Note : this prototype will be marked as deprecated and generate compilation warnings on reaching v1.5.x
  */
 ZSTDLIB_API size_t ZSTD_resetCStream(ZSTD_CStream* zcs, unsigned long long pledgedSrcSize);
 
@@ -1718,8 +1815,10 @@
  *     ZSTD_DCtx_loadDictionary(zds, dict, dictSize);
  *
  * note: no dictionary will be used if dict == NULL or dictSize < 8
+ * Note : this prototype will be marked as deprecated and generate compilation warnings on reaching v1.5.x
  */
 ZSTDLIB_API size_t ZSTD_initDStream_usingDict(ZSTD_DStream* zds, const void* dict, size_t dictSize);
+
 /**
  * This function is deprecated, and is equivalent to:
  *
@@ -1727,14 +1826,17 @@
  *     ZSTD_DCtx_refDDict(zds, ddict);
  *
  * note : ddict is referenced, it must outlive decompression session
+ * Note : this prototype will be marked as deprecated and generate compilation warnings on reaching v1.5.x
  */
 ZSTDLIB_API size_t ZSTD_initDStream_usingDDict(ZSTD_DStream* zds, const ZSTD_DDict* ddict);
+
 /**
  * This function is deprecated, and is equivalent to:
  *
  *     ZSTD_DCtx_reset(zds, ZSTD_reset_session_only);
  *
  * re-use decompression parameters from previous init; saves dictionary loading
+ * Note : this prototype will be marked as deprecated and generate compilation warnings on reaching v1.5.x
  */
 ZSTDLIB_API size_t ZSTD_resetDStream(ZSTD_DStream* zds);
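
Taken together, the modern replacement for these deprecated DStream initializers is a couple of DCtx calls on the same object (ZSTD_DStream is an alias of ZSTD_DCtx). A sketch mirroring the equivalences stated above:

    #include <zstd.h>

    /* Replacement for ZSTD_initDStream_usingDict() / ZSTD_resetDStream():
     * reset the session, then (re)load the dictionary. Passing dict == NULL
     * with dictSize == 0 means "no dictionary". */
    static size_t init_dstream_new_style(ZSTD_DStream* zds,
                                         const void* dict, size_t dictSize)
    {
        size_t const err = ZSTD_DCtx_reset(zds, ZSTD_reset_session_only);
        if (ZSTD_isError(err)) return err;
        return ZSTD_DCtx_loadDictionary(zds, dict, dictSize);
    }
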
 
@@ -1908,7 +2010,7 @@
 
 /*!
     Block functions produce and decode raw zstd blocks, without frame metadata.
-    Frame metadata cost is typically ~18 bytes, which can be non-negligible for very small blocks (< 100 bytes).
+    Frame metadata cost is typically ~12 bytes, which can be non-negligible for very small blocks (< 100 bytes).
     But users will have to take charge of the metadata needed to regenerate the data, such as compressed and content sizes.
 
     A few rules to respect :