view contrib/python-zstandard/make_cffi.py @ 48068:bf8837e3d7ce
dirstate: Remove the flat Rust DirstateMap implementation
Before this changeset we had two Rust implementations of `DirstateMap`.
This removes the "flat" DirstateMap so that the "tree" DirstateMap is always
used when Rust is enabled. This simplifies the code a lot, and will enable
(in the next changeset) further removal of a trait abstraction.
This is a performance regression when:
* Rust is enabled, and
* The repository uses the legacy dirstate-v1 file format, and
* For `hg status`, unknown files are not listed (such as with `-mard`)
The regression is about 100 milliseconds for `hg status -mard` on a
semi-large repository (mozilla-central), from ~320ms to ~420ms.
We deem this to be small enough to be worth it.
The new dirstate-v2 is still experimental at this point, but we aim to
stabilize it (though not yet enable it by default for new repositories)
in Mercurial 6.0. Eventually, upgrading repositories to dirstate-v2 will
eliminate this regression (and enable other performance improvements).
# Background
The flat DirstateMap was introduced with the first Rust implementation of the
status algorithm. It works similarly to the previous Python + C one, with a
single `HashMap` that associates file paths to a `DirstateEntry` (where Python
has a dict).
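As a rough sketch (Python for illustration; the real implementation is in Rust, and the entry fields shown here are made up, not the actual `DirstateEntry` layout), the flat layout is a single map from full path to entry:

```python
# Minimal sketch of the "flat" layout: one hash map keyed by the full path.
# Field names are illustrative only, not the real DirstateEntry fields.
from collections import namedtuple

DirstateEntry = namedtuple("DirstateEntry", ["state", "mode", "size", "mtime"])

flat_map = {
    b"README": DirstateEntry(b"n", 0o644, 123, 1632000000),
    b"src/lib.rs": DirstateEntry(b"n", 0o644, 456, 1632000000),
    b"src/dirstate/entry.rs": DirstateEntry(b"n", 0o644, 789, 1632000000),
}

# One hash lookup per file, regardless of directory depth.
assert flat_map[b"src/dirstate/entry.rs"].size == 789
```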
We later added the tree DirstateMap where the root of the tree contains nodes
for files and directories that are directly at the root of the repository,
and nodes for directories can contain child nodes representing the files and
directories that *they* contain directly. The shape of this tree mirrors that of
the working directory in the filesystem. This enables the status algorithm to
traverse this tree in tandem with traversing the filesystem tree, which in
turn enables a more efficient algorithm.
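A minimal sketch of the tree layout, again in illustrative Python rather than the real Rust types:

```python
# Sketch of the "tree" layout: each directory node owns a map of its child
# nodes, mirroring the shape of the working directory. Names are illustrative.
class Node:
    def __init__(self, entry=None):
        self.entry = entry  # an entry for files, None for directories
        self.children = {}  # basename -> Node

root = Node()
src = root.children.setdefault(b"src", Node())
src.children[b"lib.rs"] = Node(entry="some entry")

# Reaching a file takes one child-map lookup per path component, which
# matches a directory-by-directory walk of the filesystem tree.
node = root
for component in (b"src", b"lib.rs"):
    node = node.children[component]
assert node.entry == "some entry"
```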
Furthermore, the new dirstate-v2 file format is also based on a tree of the
same shape. The tree DirstateMap can access a dirstate-v2 file without parsing
it: binary data in a single large (possibly memory-mapped) bytes buffer is
traversed on demand. This allows `DirstateMap` creation to take `O(1)` time.
(Mutation works by creating new in-memory nodes with copy-on-write semantics,
and serialization is append-mostly.)
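The idea of `O(1)` creation with on-demand decoding can be sketched as follows. The real dirstate-v2 node layout is richer and different; this hypothetical two-field record only illustrates the principle of reading nodes lazily out of one bytes buffer:

```python
# Hypothetical fixed-size node record: (child_count, children_offset).
# This is NOT the real dirstate-v2 format, just a sketch of lazy decoding.
import struct

NODE = struct.Struct(">II")

# "Loading" the dirstate is just keeping a reference to the buffer: O(1).
# In reality the buffer would be a memory-mapped file.
buf = NODE.pack(2, 8) + NODE.pack(0, 0) + NODE.pack(0, 0)

def read_node(buffer, offset):
    # Decode one node on demand; the rest of the buffer stays untouched.
    return NODE.unpack_from(buffer, offset)

child_count, children_offset = read_node(buf, 0)
assert child_count == 2
# Child nodes are decoded only when a traversal actually visits them.
assert read_node(buf, children_offset) == (0, 0)
```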
The tradeoff is that for "legacy" repositories that use the dirstate-v1 file
format, parsing that file into a tree DirstateMap takes more time. Profiling
shows that this time is dominated by `HashMap`. For a dirstate containing `F`
files with an average `D` directory depth, the flat DirstateMap does parsing
in `O(F)` number of HashMap operations but the tree DirstateMap in `O(F × D)`
operations, since each node has its own HashMap containing its child nodes.
This slowdown costs ~140ms on an old snapshot of mozilla-central, and ~80ms
on an old snapshot of the Netbeans repository.
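Under the simple cost model described above (one hash-map operation per lookup or insert; real costs differ in detail), the difference can be counted directly:

```python
# Rough operation counts for parsing F paths of directory depth D.
def flat_ops(paths):
    # one insert into the single map per file: O(F)
    return len(paths)

def tree_ops(paths):
    # one child-map operation per path component: O(F x D)
    return sum(len(p.split(b"/")) for p in paths)

paths = [b"a/b/c/file%d" % i for i in range(1000)]
assert flat_ops(paths) == 1000
assert tree_ops(paths) == 4000  # 4 components per path
```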
The status algorithm is faster, but with `-mard` (when not listing unknown
files) it is typically not faster *enough* to compensate for the slower parsing.
Both Rust implementations are always faster than the Python + C implementation.
# Benchmark results
All benchmarks are run on changeset 98c0408324e6, with repositories that use
the dirstate-v1 file format, on a server with 4 CPU cores and 4 CPU threads
(no HyperThreading).
`hg status` benchmarks show wall clock times of the entire command as the
average and standard deviation of several runs, collected by
https://github.com/sharkdp/hyperfine and reformatted.
Parsing benchmarks are wall clock time of the Rust function that converts a
bytes buffer of the dirstate file into the `DirstateMap` data structure as
used by the status algorithm. A single run each, collected by running
`hg status` with this environment variable set:
RUST_LOG=hg::dirstate::dirstate_map=trace,hg::dirstate_tree::dirstate_map=trace
Benchmark 1: Rust flat DirstateMap → Rust tree DirstateMap
hg status
mozilla-clean 562.3 ms ± 2.0 ms → 462.5 ms ± 0.6 ms 1.22 ± 0.00 times faster
mozilla-dirty 859.6 ms ± 2.2 ms → 719.5 ms ± 3.2 ms 1.19 ± 0.01 times faster
mozilla-ignored 558.2 ms ± 3.0 ms → 457.9 ms ± 2.9 ms 1.22 ± 0.01 times faster
mozilla-unknowns 859.4 ms ± 5.7 ms → 716.0 ms ± 4.7 ms 1.20 ± 0.01 times faster
netbeans-clean 336.5 ms ± 0.9 ms → 339.5 ms ± 0.4 ms 0.99 ± 0.00 times faster
netbeans-dirty 491.4 ms ± 1.6 ms → 475.1 ms ± 1.2 ms 1.03 ± 0.00 times faster
netbeans-ignored 343.7 ms ± 1.0 ms → 347.8 ms ± 0.4 ms 0.99 ± 0.00 times faster
netbeans-unknowns 484.3 ms ± 1.0 ms → 466.0 ms ± 1.2 ms 1.04 ± 0.00 times faster
hg status -mard
mozilla-clean 317.3 ms ± 0.6 ms → 422.5 ms ± 1.2 ms 0.75 ± 0.00 times faster
mozilla-dirty 315.4 ms ± 0.6 ms → 417.7 ms ± 1.1 ms 0.76 ± 0.00 times faster
mozilla-ignored 314.6 ms ± 0.6 ms → 417.4 ms ± 1.0 ms 0.75 ± 0.00 times faster
mozilla-unknowns 312.9 ms ± 0.9 ms → 417.3 ms ± 1.6 ms 0.75 ± 0.00 times faster
netbeans-clean 212.0 ms ± 0.6 ms → 283.6 ms ± 0.8 ms 0.75 ± 0.00 times faster
netbeans-dirty 211.4 ms ± 1.0 ms → 283.4 ms ± 1.6 ms 0.75 ± 0.01 times faster
netbeans-ignored 211.4 ms ± 0.9 ms → 283.9 ms ± 0.8 ms 0.74 ± 0.01 times faster
netbeans-unknowns 211.1 ms ± 0.6 ms → 283.4 ms ± 1.0 ms 0.74 ± 0.00 times faster
Parsing
mozilla-clean 38.4ms → 177.6ms
mozilla-dirty 38.8ms → 177.0ms
mozilla-ignored 38.8ms → 178.0ms
mozilla-unknowns 38.7ms → 176.9ms
netbeans-clean 16.5ms → 97.3ms
netbeans-dirty 16.5ms → 98.4ms
netbeans-ignored 16.9ms → 97.4ms
netbeans-unknowns 16.9ms → 96.3ms
Benchmark 2: Python + C dirstatemap → Rust tree DirstateMap
hg status
mozilla-clean 1261.0 ms ± 3.6 ms → 461.1 ms ± 0.5 ms 2.73 ± 0.00 times faster
mozilla-dirty 2293.4 ms ± 9.1 ms → 719.6 ms ± 3.6 ms 3.19 ± 0.01 times faster
mozilla-ignored 1240.4 ms ± 2.3 ms → 457.7 ms ± 1.9 ms 2.71 ± 0.00 times faster
mozilla-unknowns 2283.3 ms ± 9.0 ms → 719.7 ms ± 3.8 ms 3.17 ± 0.01 times faster
netbeans-clean 879.7 ms ± 3.5 ms → 339.9 ms ± 0.5 ms 2.59 ± 0.00 times faster
netbeans-dirty 1257.3 ms ± 4.7 ms → 474.6 ms ± 1.6 ms 2.65 ± 0.01 times faster
netbeans-ignored 943.9 ms ± 1.9 ms → 347.3 ms ± 1.1 ms 2.72 ± 0.00 times faster
netbeans-unknowns 1188.1 ms ± 5.0 ms → 465.2 ms ± 2.3 ms 2.55 ± 0.01 times faster
hg status -mard
mozilla-clean 903.2 ms ± 3.6 ms → 423.4 ms ± 2.2 ms 2.13 ± 0.01 times faster
mozilla-dirty 884.6 ms ± 4.5 ms → 417.3 ms ± 1.4 ms 2.12 ± 0.01 times faster
mozilla-ignored 881.9 ms ± 1.3 ms → 417.3 ms ± 0.8 ms 2.11 ± 0.00 times faster
mozilla-unknowns 878.5 ms ± 1.9 ms → 416.4 ms ± 0.9 ms 2.11 ± 0.00 times faster
netbeans-clean 434.9 ms ± 1.8 ms → 284.0 ms ± 0.8 ms 1.53 ± 0.01 times faster
netbeans-dirty 434.1 ms ± 0.8 ms → 283.1 ms ± 0.8 ms 1.53 ± 0.00 times faster
netbeans-ignored 431.7 ms ± 1.1 ms → 283.6 ms ± 1.8 ms 1.52 ± 0.01 times faster
netbeans-unknowns 433.0 ms ± 1.3 ms → 283.5 ms ± 0.7 ms 1.53 ± 0.00 times faster
Differential Revision: https://phab.mercurial-scm.org/D11516
author:   Simon Sapin <simon.sapin@octobus.net>
date:     Mon, 27 Sep 2021 12:09:15 +0200
parents:  89a2afe31e82
children: 6000f5b25c9b
# Copyright (c) 2016-present, Gregory Szorc
# All rights reserved.
#
# This software may be modified and distributed under the terms
# of the BSD license. See the LICENSE file for details.

from __future__ import absolute_import

import cffi
import distutils.ccompiler
import os
import re
import subprocess
import tempfile

HERE = os.path.abspath(os.path.dirname(__file__))

SOURCES = [
    "zstd/%s" % p
    for p in (
        "common/debug.c",
        "common/entropy_common.c",
        "common/error_private.c",
        "common/fse_decompress.c",
        "common/pool.c",
        "common/threading.c",
        "common/xxhash.c",
        "common/zstd_common.c",
        "compress/fse_compress.c",
        "compress/hist.c",
        "compress/huf_compress.c",
        "compress/zstd_compress.c",
        "compress/zstd_compress_literals.c",
        "compress/zstd_compress_sequences.c",
        "compress/zstd_double_fast.c",
        "compress/zstd_fast.c",
        "compress/zstd_lazy.c",
        "compress/zstd_ldm.c",
        "compress/zstd_opt.c",
        "compress/zstdmt_compress.c",
        "decompress/huf_decompress.c",
        "decompress/zstd_ddict.c",
        "decompress/zstd_decompress.c",
        "decompress/zstd_decompress_block.c",
        "dictBuilder/cover.c",
        "dictBuilder/fastcover.c",
        "dictBuilder/divsufsort.c",
        "dictBuilder/zdict.c",
    )
]

# Headers whose preprocessed output will be fed into cdef().
HEADERS = [
    os.path.join(HERE, "zstd", *p)
    for p in (
        ("zstd.h",),
        ("dictBuilder", "zdict.h"),
    )
]

INCLUDE_DIRS = [
    os.path.join(HERE, d)
    for d in (
        "zstd",
        "zstd/common",
        "zstd/compress",
        "zstd/decompress",
        "zstd/dictBuilder",
    )
]

# cffi can't parse some of the primitives in zstd.h. So we invoke the
# preprocessor and feed its output into cffi.
compiler = distutils.ccompiler.new_compiler()

# Needed for MSVC.
if hasattr(compiler, "initialize"):
    compiler.initialize()

# Distutils doesn't set compiler.preprocessor, so invoke the preprocessor
# manually.
if compiler.compiler_type == "unix":
    args = list(compiler.executables["compiler"])
    args.extend(
        [
            "-E",
            "-DZSTD_STATIC_LINKING_ONLY",
            "-DZDICT_STATIC_LINKING_ONLY",
        ]
    )
elif compiler.compiler_type == "msvc":
    args = [compiler.cc]
    args.extend(
        [
            "/EP",
            "/DZSTD_STATIC_LINKING_ONLY",
            "/DZDICT_STATIC_LINKING_ONLY",
        ]
    )
else:
    raise Exception("unsupported compiler type: %s" % compiler.compiler_type)


def preprocess(path):
    with open(path, "rb") as fh:
        lines = []
        it = iter(fh)

        for l in it:
            # zstd.h includes <stddef.h>, which is also included by cffi's
            # boilerplate. This can lead to duplicate declarations. So we strip
            # this include from the preprocessor invocation.
            #
            # The same thing happens for including zstd.h, so give it the same
            # treatment.
            #
            # We define ZSTD_STATIC_LINKING_ONLY, which is redundant with the
            # inline #define in zstdmt_compress.h and results in a compiler
            # warning. So drop the inline #define.
            if l.startswith(
                (
                    b"#include <stddef.h>",
                    b'#include "zstd.h"',
                    b"#define ZSTD_STATIC_LINKING_ONLY",
                )
            ):
                continue

            # The preprocessor environment on Windows doesn't define include
            # paths, so the #include of limits.h fails. We work around this
            # by removing that import and defining INT_MAX ourselves. This is
            # a bit hacky. But it gets the job done.
            # TODO make limits.h work on Windows so we ensure INT_MAX is
            # correct.
            if l.startswith(b"#include <limits.h>"):
                l = b"#define INT_MAX 2147483647\n"

            # ZSTDLIB_API may not be defined if we dropped zstd.h. It isn't
            # important so just filter it out.
            if l.startswith(b"ZSTDLIB_API"):
                l = l[len(b"ZSTDLIB_API ") :]

            lines.append(l)

    fd, input_file = tempfile.mkstemp(suffix=".h")
    os.write(fd, b"".join(lines))
    os.close(fd)

    try:
        env = dict(os.environ)
        if getattr(compiler, "_paths", None):
            env["PATH"] = compiler._paths
        process = subprocess.Popen(
            args + [input_file], stdout=subprocess.PIPE, env=env
        )
        output = process.communicate()[0]
        ret = process.poll()
        if ret:
            raise Exception("preprocessor exited with error")

        return output
    finally:
        os.unlink(input_file)


def normalize_output(output):
    lines = []
    for line in output.splitlines():
        # CFFI's parser doesn't like __attribute__ on UNIX compilers.
        if line.startswith(b'__attribute__ ((visibility ("default"))) '):
            line = line[len(b'__attribute__ ((visibility ("default"))) ') :]

        if line.startswith(b"__attribute__((deprecated("):
            continue
        elif b"__declspec(deprecated(" in line:
            continue

        lines.append(line)

    return b"\n".join(lines)


ffi = cffi.FFI()
# zstd.h uses a possibly undefined MIN(). Define it until
# https://github.com/facebook/zstd/issues/976 is fixed.
# *_DISABLE_DEPRECATE_WARNINGS prevents the compiler from emitting a warning
# when cffi uses the function. Since we statically link against zstd, even
# if we use the deprecated functions it shouldn't be a huge problem.
ffi.set_source(
    "_zstd_cffi",
    """
#define MIN(a,b) ((a)<(b) ? (a) : (b))
#define ZSTD_STATIC_LINKING_ONLY
#include <zstd.h>
#define ZDICT_STATIC_LINKING_ONLY
#define ZDICT_DISABLE_DEPRECATE_WARNINGS
#include <zdict.h>
""",
    sources=SOURCES,
    include_dirs=INCLUDE_DIRS,
    extra_compile_args=["-DZSTD_MULTITHREAD"],
)

DEFINE = re.compile(b"^\\#define ([a-zA-Z0-9_]+) ")

sources = []

# Feed normalized preprocessor output for headers into the cdef parser.
for header in HEADERS:
    preprocessed = preprocess(header)
    sources.append(normalize_output(preprocessed))

    # #define's are effectively erased as part of going through preprocessor.
    # So perform a manual pass to re-add those to the cdef source.
    with open(header, "rb") as fh:
        for line in fh:
            line = line.strip()

            m = DEFINE.match(line)
            if not m:
                continue

            if m.group(1) == b"ZSTD_STATIC_LINKING_ONLY":
                continue

            # The parser doesn't like some constants with complex values.
            if m.group(1) in (b"ZSTD_LIB_VERSION", b"ZSTD_VERSION_STRING"):
                continue

            # The ... is magic syntax by the cdef parser to resolve the
            # value at compile time.
            sources.append(m.group(0) + b" ...")

cdeflines = b"\n".join(sources).splitlines()
cdeflines = [l for l in cdeflines if l.strip()]
ffi.cdef(b"\n".join(cdeflines).decode("latin1"))

if __name__ == "__main__":
    ffi.compile()