# HG changeset patch # User Raphaël Gomès # Date 1687426597 -7200 # Node ID 9a4db474ef1ac9996d2b25ad3997022ad31be47b # Parent 41b9eb302d95c6069280bbc40f54e07f006d239f# Parent 0ab3956540a6947e4dc7e98d30f3eb678204be68 branching: merge default into stable for 6.5rc0 diff -r 41b9eb302d95 -r 9a4db474ef1a .gitattributes --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/.gitattributes Thu Jun 22 11:36:37 2023 +0200 @@ -0,0 +1,2 @@ +# So GitLab doesn't think we're using tons of Perl +*.t -linguist-detectable diff -r 41b9eb302d95 -r 9a4db474ef1a .hgignore --- a/.hgignore Thu Jun 22 11:18:47 2023 +0200 +++ b/.hgignore Thu Jun 22 11:36:37 2023 +0200 @@ -19,6 +19,7 @@ *.zip \#*\# .\#* +result/ tests/artifacts/cache/big-file-churn.hg tests/.coverage* tests/.testtimes* diff -r 41b9eb302d95 -r 9a4db474ef1a contrib/heptapod-ci.yml --- a/contrib/heptapod-ci.yml Thu Jun 22 11:18:47 2023 +0200 +++ b/contrib/heptapod-ci.yml Thu Jun 22 11:36:37 2023 +0200 @@ -26,6 +26,7 @@ - clang-format --version script: - echo "python used, $PYTHON" + - $PYTHON --version - echo "$RUNTEST_ARGS" - HGTESTS_ALLOW_NETIO="$TEST_HGTESTS_ALLOW_NETIO" HGMODULEPOLICY="$TEST_HGMODULEPOLICY" "$PYTHON" tests/run-tests.py --color=always $RUNTEST_ARGS diff -r 41b9eb302d95 -r 9a4db474ef1a contrib/import-checker.py --- a/contrib/import-checker.py Thu Jun 22 11:18:47 2023 +0200 +++ b/contrib/import-checker.py Thu Jun 22 11:36:37 2023 +0200 @@ -44,6 +44,7 @@ # third-party imports should be directly imported 'mercurial.thirdparty', 'mercurial.thirdparty.attr', + 'mercurial.thirdparty.jaraco.collections', 'mercurial.thirdparty.zope', 'mercurial.thirdparty.zope.interface', 'typing', diff -r 41b9eb302d95 -r 9a4db474ef1a contrib/nix/flake.lock --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/contrib/nix/flake.lock Thu Jun 22 11:36:37 2023 +0200 @@ -0,0 +1,94 @@ +{ + "nodes": { + "flake-utils": { + "inputs": { + "systems": "systems" + }, + "locked": { + "lastModified": 1681202837, + "narHash": 
"sha256-H+Rh19JDwRtpVPAWp64F+rlEtxUWBAQW28eAi3SRSzg=", + "owner": "numtide", + "repo": "flake-utils", + "rev": "cfacdce06f30d2b68473a46042957675eebb3401", + "type": "github" + }, + "original": { + "owner": "numtide", + "repo": "flake-utils", + "type": "github" + } + }, + "flaky-utils": { + "locked": { + "lastModified": 1668472805, + "narHash": "sha256-hjRe8QFh2JMo9u6AaxQNGWfDWZxk3psULmPglqsjsLk=", + "ref": "refs/heads/master", + "rev": "c3f9daf4ec56276e040bc33e29c7eeaf1b99d91c", + "revCount": 33, + "type": "git", + "url": "https://cgit.pacien.net/libs/flaky-utils" + }, + "original": { + "type": "git", + "url": "https://cgit.pacien.net/libs/flaky-utils" + } + }, + "nixpkgs": { + "locked": { + "lastModified": 1681482634, + "narHash": "sha256-cT/nr3L8khEYZSGp8qqwxFH+/q4/547MfyOdSj6MhBk=", + "owner": "NixOS", + "repo": "nixpkgs", + "rev": "fda0d99c2cbbb5c89d8855d258cb0821bd9113ad", + "type": "github" + }, + "original": { + "owner": "NixOS", + "ref": "nixos-22.11", + "repo": "nixpkgs", + "type": "github" + } + }, + "nixpkgs-black": { + "locked": { + "lastModified": 1605911135, + "narHash": "sha256-PoVe4Nu7UzYtOboytSzRY9sks6euoEzeCckBN+AIoTU=", + "owner": "NixOS", + "repo": "nixpkgs", + "rev": "c7cb72b0cae397d311236d6773338efb4bd4f2d1", + "type": "github" + }, + "original": { + "owner": "NixOS", + "ref": "c7cb72b0", + "repo": "nixpkgs", + "type": "github" + } + }, + "root": { + "inputs": { + "flake-utils": "flake-utils", + "flaky-utils": "flaky-utils", + "nixpkgs": "nixpkgs", + "nixpkgs-black": "nixpkgs-black" + } + }, + "systems": { + "locked": { + "lastModified": 1681028828, + "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=", + "owner": "nix-systems", + "repo": "default", + "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e", + "type": "github" + }, + "original": { + "owner": "nix-systems", + "repo": "default", + "type": "github" + } + } + }, + "root": "root", + "version": 7 +} diff -r 41b9eb302d95 -r 9a4db474ef1a contrib/nix/flake.nix --- /dev/null Thu 
Jan 01 00:00:00 1970 +0000 +++ b/contrib/nix/flake.nix Thu Jun 22 11:36:37 2023 +0200 @@ -0,0 +1,177 @@ +# flake.nix - Nix-defined package and devel env for the Mercurial project. +# +# Copyright 2021-2023 Pacien TRAN-GIRARD +# +# This software may be used and distributed according to the terms of the +# GNU General Public License version 2 or any later version. + +# Usage summary, from the root of this repository: +# +# Enter a shell with development tools: +# nix develop 'hg+file:.?dir=contrib/nix' +# +# Running mercurial: +# nix run 'hg+file:.?dir=contrib/nix' -- version +# +# Running the test suite in a sandbox: +# nix build 'hg+file:.?dir=contrib/nix#mercurial-tests' -L + +{ + inputs = { + nixpkgs.url = "github:NixOS/nixpkgs/nixos-22.11"; + nixpkgs-black.url = "github:NixOS/nixpkgs/c7cb72b0"; # black 20.8b1 + # rust-overlay.url = "github:oxalica/rust-overlay"; + flake-utils.url = "github:numtide/flake-utils"; + flaky-utils.url = "git+https://cgit.pacien.net/libs/flaky-utils"; + }; + + outputs = { + self + , nixpkgs + , nixpkgs-black + # , rust-overlay + , flake-utils + , flaky-utils + }: + flake-utils.lib.eachDefaultSystem (system: + let + # overlays = [ (import rust-overlay) ]; + pkgs = import nixpkgs { inherit system; }; + + # We're in the contrib/nix sub-directory. + src = ../..; + + # For snapshots, to satisfy extension minimum version requirements. + dummyVersion = "99.99"; + + pin = { + # The test suite has issues with the latest/current versions of Python. + # Use an older recommended version instead, matching the CI. + python = pkgs.python39; + + # The project uses a pinned version (rust/clippy.toml) for compiling, + # but uses formatter features from nightly. + # TODO: make cargo use the formatter from nightly automatically + # (not supported by rustup/cargo yet? workaround?) 
+ # rustPlatform = pkgs.rust-bin.stable."1.61.0".default; + # rustPlatformFormatter = pkgs.rust-bin.nightly."2023-04-20".default; + + # The CI uses an old version of the Black code formatter, + # itself depending on old Python libraries. + # The formatting rules have changed in more recent versions. + inherit (import nixpkgs-black { inherit system; }) black; + }; + + in rec { + apps.mercurial = apps.mercurial-rust; + apps.default = apps.mercurial; + apps.mercurial-c = flake-utils.lib.mkApp { + drv = packages.mercurial-c; + }; + apps.mercurial-rust = flake-utils.lib.mkApp { + drv = packages.mercurial-rust; + }; + + packages.mercurial = packages.mercurial-rust; + packages.default = packages.mercurial; + + packages.mercurial-c = pin.python.pkgs.buildPythonApplication { + format = "other"; + pname = "mercurial"; + version = "SNAPSHOT"; + passthru.exePath = "/bin/hg"; + inherit src; + + postPatch = '' + echo 'version = b"${toString dummyVersion}"' \ + > mercurial/__version__.py + + patchShebangs . + + for f in **/*.{py,c,t}; do + # not only used in shebangs + substituteAllInPlace "$f" '/bin/sh' '${pkgs.stdenv.shell}' + done + ''; + + buildInputs = with pin.python.pkgs; [ + docutils + ]; + + nativeBuildInputs = with pkgs; [ + gettext + installShellFiles + ]; + + makeFlags = [ + "PREFIX=$(out)" + ]; + + buildPhase = '' + make local + ''; + + # Test suite is huge ; run on-demand in a separate package instead. 
+ doCheck = false; + }; + + packages.mercurial-rust = packages.mercurial-c.overrideAttrs (super: { + cargoRoot = "rust"; + cargoDeps = pkgs.rustPlatform.importCargoLock { + lockFile = "${src}/rust/Cargo.lock"; + }; + + nativeBuildInputs = (super.nativeBuildInputs or []) ++ ( + with pkgs.rustPlatform; [ + cargoSetupHook + rust.cargo + rust.rustc + ] + ); + + makeFlags = (super.makeFlags or []) ++ [ + "PURE=--rust" + ]; + }); + + packages.mercurial-tests = pkgs.stdenv.mkDerivation { + pname = "mercurial-tests"; + version = "SNAPSHOT"; + inherit src; + + buildInputs = with pkgs; [ + pin.python + pin.black + unzip + which + sqlite + ]; + + postPatch = (packages.mercurial.postPatch or "") + '' + # * paths emitted by our wrapped hg look like ..hg-wrapped-wrapped + # * 'hg' is a wrapper; don't run using python directly + for f in **/*.t; do + substituteInPlace 2>/dev/null "$f" \ + --replace '*/hg:' '*/*hg*:' \ + --replace '"$PYTHON" "$BINDIR"/hg' '"$BINDIR"/hg' + done + ''; + + buildPhase = '' + export HGTEST_REAL_HG="${packages.mercurial}/bin/hg" + export HGMODULEPOLICY="rust+c" + export HGTESTFLAGS="--blacklist blacklists/nix" + make check 2>&1 | tee "$out" + ''; + }; + + devShell = flaky-utils.lib.mkDevShell { + inherit pkgs; + + tools = [ + pin.python + pin.black + ]; + }; + }); +} diff -r 41b9eb302d95 -r 9a4db474ef1a contrib/perf.py --- a/contrib/perf.py Thu Jun 22 11:18:47 2023 +0200 +++ b/contrib/perf.py Thu Jun 22 11:36:37 2023 +0200 @@ -532,10 +532,16 @@ ) +@contextlib.contextmanager +def noop_context(): + yield + + def _timer( fm, func, setup=None, + context=noop_context, title=None, displayall=False, limits=DEFAULTLIMITS, @@ -551,14 +557,16 @@ for i in range(prerun): if setup is not None: setup() - func() + with context(): + func() keepgoing = True while keepgoing: if setup is not None: setup() - with profiler: - with timeone() as item: - r = func() + with context(): + with profiler: + with timeone() as item: + r = func() profiler = NOOPCTX count += 1 
results.append(item[0]) @@ -1900,6 +1908,201 @@ fm.end() +def _find_stream_generator(version): + """find the proper generator function for this stream version""" + import mercurial.streamclone + + available = {} + + # try to fetch a v1 generator + generatev1 = getattr(mercurial.streamclone, "generatev1", None) + if generatev1 is not None: + + def generate(repo): + entries, bytes, data = generatev1(repo) + return data + + available[b'v1'] = generate + # try to fetch a v2 generator + generatev2 = getattr(mercurial.streamclone, "generatev2", None) + if generatev2 is not None: + + def generate(repo): + entries, bytes, data = generatev2(repo, None, None, True) + return data + + available[b'v2'] = generate + # try to fetch a v3 generator + generatev3 = getattr(mercurial.streamclone, "generatev3", None) + if generatev3 is not None: + + def generate(repo): + entries, bytes, data = generatev3(repo, None, None, True) + return data + + available[b'v3-exp'] = generate + + # resolve the request + if version == b"latest": + # latest is the highest non-experimental version + latest_key = max(v for v in available if b'-exp' not in v) + return available[latest_key] + elif version in available: + return available[version] + else: + msg = b"unknown or unavailable version: %s" + msg %= version + hint = b"available versions: %s" + hint %= b', '.join(sorted(available)) + raise error.Abort(msg, hint=hint) + + +@command( + b'perf::stream-locked-section', + [ + ( + b'', + b'stream-version', + b'latest', + b'stream version to use ("v1", "v2", "v3" or "latest" (the default))', + ), + ] + + formatteropts, +) +def perf_stream_clone_scan(ui, repo, stream_version, **opts): + """benchmark the initial, repo-locked, section of a stream-clone""" + + opts = _byteskwargs(opts) + timer, fm = gettimer(ui, opts) + + # deletion of the generator may trigger some cleanup that we do not want to + # measure + result_holder = [None] + + def setupone(): + result_holder[0] = None + + generate
= _find_stream_generator(stream_version) + + def runone(): + # the lock is held for the duration of the initialisation + result_holder[0] = generate(repo) + + timer(runone, setup=setupone, title=b"load") + fm.end() + + +@command( + b'perf::stream-generate', + [ + ( + b'', + b'stream-version', + b'latest', + b'stream version to use ("v1", "v2" or "latest" (the default))', + ), + ] + + formatteropts, +) +def perf_stream_clone_generate(ui, repo, stream_version, **opts): + """benchmark the full generation of a stream clone""" + + opts = _byteskwargs(opts) + timer, fm = gettimer(ui, opts) + + # deletion of the generator may trigger some cleanup that we do not want to + # measure + + generate = _find_stream_generator(stream_version) + + def runone(): + # the lock is held for the duration of the initialisation + for chunk in generate(repo): + pass + + timer(runone, title=b"generate") + fm.end() + + +@command( + b'perf::stream-consume', + formatteropts, +) +def perf_stream_clone_consume(ui, repo, filename, **opts): + """benchmark the full application of a stream clone + + This includes the creation of the repository + """ + # try except to appease check code + msg = b"mercurial too old, missing necessary module: %s" + try: + from mercurial import bundle2 + except ImportError as exc: + msg %= _bytestr(exc) + raise error.Abort(msg) + try: + from mercurial import exchange + except ImportError as exc: + msg %= _bytestr(exc) + raise error.Abort(msg) + try: + from mercurial import hg + except ImportError as exc: + msg %= _bytestr(exc) + raise error.Abort(msg) + try: + from mercurial import localrepo + except ImportError as exc: + msg %= _bytestr(exc) + raise error.Abort(msg) + + opts = _byteskwargs(opts) + timer, fm = gettimer(ui, opts) + + # deletion of the generator may trigger some cleanup that we do not want to + # measure + if not (os.path.isfile(filename) and os.access(filename, os.R_OK)): + raise error.Abort(b"not a readable file: %s" % filename) + + run_variables = [None, None]
+ + @contextlib.contextmanager + def context(): + with open(filename, mode='rb') as bundle: + with tempfile.TemporaryDirectory() as tmp_dir: + tmp_dir = fsencode(tmp_dir) + run_variables[0] = bundle + run_variables[1] = tmp_dir + yield + run_variables[0] = None + run_variables[1] = None + + def runone(): + bundle = run_variables[0] + tmp_dir = run_variables[1] + # only pass ui when no srcrepo + localrepo.createrepository( + repo.ui, tmp_dir, requirements=repo.requirements + ) + target = hg.repository(repo.ui, tmp_dir) + gen = exchange.readbundle(target.ui, bundle, bundle.name) + # stream v1 + if util.safehasattr(gen, 'apply'): + gen.apply(target) + else: + with target.transaction(b"perf::stream-consume") as tr: + bundle2.applybundle( + target, + gen, + tr, + source=b'unbundle', + url=filename, + ) + + timer(runone, context=context, title=b"consume") + fm.end() + + @command(b'perf::parents|perfparents', formatteropts) def perfparents(ui, repo, **opts): """benchmark the time necessary to fetch one changeset's parents. 
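The `context` hook threaded through `_timer` above lets a benchmark open and tear down per-run resources (an open bundle file, a temporary directory) around each measured call without those costs polluting the timing: the context is entered before the clock starts. A minimal standalone sketch of the same pattern; `timed_run` and the helper names below are illustrative, not part of perf.py:

```python
import contextlib
import os
import tempfile
import time


@contextlib.contextmanager
def noop_context():
    # default: no per-run resources, mirroring perf.py's noop_context
    yield


def timed_run(func, setup=None, context=noop_context, runs=3):
    """Call func `runs` times and return the best wall-clock time.

    setup() runs before, and context() wraps, each call; neither is
    included in the measured interval, which is the point of the split.
    """
    results = []
    for _ in range(runs):
        if setup is not None:
            setup()
        with context():  # resources are alive for the whole call...
            start = time.perf_counter()  # ...but the clock starts inside
            func()
            results.append(time.perf_counter() - start)
    return min(results)


# Example: time reading a file while keeping its creation and cleanup
# (handled by the context manager) out of the measurement.
state = {}


@contextlib.contextmanager
def open_scratch_file():
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "w+b") as fh:
            fh.write(b"x" * 4096)
            fh.seek(0)
            state["fh"] = fh
            yield
    finally:
        state.pop("fh", None)
        os.unlink(path)


best = timed_run(lambda: state["fh"].read(), context=open_scratch_file)
print("best of 3 runs: %.6fs" % best)
```

This is the same shape `perf::stream-consume` uses: a mutable holder (`run_variables` there, `state` here) passes the context-managed resources to the timed function.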
diff -r 41b9eb302d95 -r 9a4db474ef1a contrib/python-zstandard/setup_zstd.py --- a/contrib/python-zstandard/setup_zstd.py Thu Jun 22 11:18:47 2023 +0200 +++ b/contrib/python-zstandard/setup_zstd.py Thu Jun 22 11:36:37 2023 +0200 @@ -145,8 +145,16 @@ include_dirs = set([os.path.join(actual_root, d) for d in ext_includes]) if not system_zstd: - include_dirs.update( - [os.path.join(actual_root, d) for d in zstd_includes] + from distutils import sysconfig + from shlex import quote + + includes = [] + for incdir in [os.path.join(actual_root, d) for d in zstd_includes]: + includes.append('-I' + quote(incdir)) + include_dirs.add(incdir) + config_vars = sysconfig.get_config_vars() + config_vars['CFLAGS'] = ' '.join( + includes + [config_vars.get('CFLAGS', '')] ) if support_legacy: include_dirs.update( diff -r 41b9eb302d95 -r 9a4db474ef1a hg --- a/hg Thu Jun 22 11:18:47 2023 +0200 +++ b/hg Thu Jun 22 11:36:37 2023 +0200 @@ -38,21 +38,21 @@ ) ) -from hgdemandimport import tracing +try: + from hgdemandimport import tracing +except ImportError: + sys.stderr.write( + "abort: couldn't find mercurial libraries in [%s]\n" + % ' '.join(sys.path) + ) + sys.stderr.write("(check your install and PYTHONPATH)\n") + sys.exit(-1) with tracing.log('hg script'): # enable importing on demand to reduce startup time - try: - import hgdemandimport + import hgdemandimport - hgdemandimport.enable() - except ImportError: - sys.stderr.write( - "abort: couldn't find mercurial libraries in [%s]\n" - % ' '.join(sys.path) - ) - sys.stderr.write("(check your install and PYTHONPATH)\n") - sys.exit(-1) + hgdemandimport.enable() from mercurial import dispatch diff -r 41b9eb302d95 -r 9a4db474ef1a hgext/clonebundles.py --- a/hgext/clonebundles.py Thu Jun 22 11:18:47 2023 +0200 +++ b/hgext/clonebundles.py Thu Jun 22 11:36:37 2023 +0200 @@ -202,15 +202,136 @@ occurs. 
So server operators should prepare for some people to follow these instructions when a failure occurs, thus driving more load to the original Mercurial server when the bundle hosting service fails. + + +inline clonebundles +------------------- + +It is possible to transmit clonebundles inline in case repositories are +accessed over SSH. This avoids having to set up an external HTTPS server +and results in the same access control as already present for the SSH setup. + +Inline clonebundles should be placed into the `.hg/bundle-cache` directory. +A clonebundle at `.hg/bundle-cache/mybundle.bundle` is referred to +in the `clonebundles.manifest` file as `peer-bundle-cache://mybundle.bundle`. + + +auto-generation of clone bundles +-------------------------------- + +It is possible to set Mercurial to automatically re-generate clone bundles when +enough new content is available. + +Mercurial will take care of the process asynchronously. The defined list of +bundle-types will be generated, uploaded, and advertised. Older bundles will get +decommissioned as newer ones replace them. + +Bundles Generation: +................... + +The extension can generate multiple variants of the clone bundle. Each +different variant will be defined by the "bundle-spec" it uses:: + + [clone-bundles] + auto-generate.formats= zstd-v2, gzip-v2 + +See `hg help bundlespec` for details about available options. + +By default, new bundles are generated when 5% of the repository contents or at +least 1000 revisions are not contained in the cached bundles. These thresholds +can be controlled by the `clone-bundles.trigger.below-bundled-ratio` option +(default 0.95) and the `clone-bundles.trigger.revs` option (default 1000):: + + [clone-bundles] + trigger.below-bundled-ratio=0.95 + trigger.revs=1000 + +This logic can be manually triggered using the `admin::clone-bundles-refresh` +command, or automatically on each repository change if +`clone-bundles.auto-generate.on-change` is set to `yes`::
+ + [clone-bundles] + auto-generate.on-change=yes + auto-generate.formats= zstd-v2, gzip-v2 + +Automatic Inline serving +........................ + +The simplest way to serve the generated bundles is through the Mercurial +protocol. However it is not the most efficient, as requests will still be served +by the main server. It is useful in cases where authentication is complex or +when an efficient mirror system is already in use anyway. See the `inline +clonebundles` section above for details about inline clonebundles. + +To automatically serve generated bundles through inline clonebundles, simply set +the following option:: + + auto-generate.serve-inline=yes + +Enabling this option disables the managed upload and serving explained below. + +Bundles Upload and Serving: +........................... + +This is the most efficient way to serve automatically generated clone bundles, +but requires some setup. + +The generated bundles need to be made available to users through a "public" URL. +This should be done through the `clone-bundles.upload-command` configuration. The +value of this command should be a shell command. It will have access to the +bundle file path through the `$HGCB_BUNDLE_PATH` variable, and to the expected +basename in the "public" URL through the `$HGCB_BUNDLE_BASENAME` variable:: + + [clone-bundles] + upload-command=sftp put $HGCB_BUNDLE_PATH \ + sftp://bundles.host/clone-bundles/$HGCB_BUNDLE_BASENAME + +If the file was already uploaded, the command must still succeed. + +After upload, the file should be available at a URL defined by +`clone-bundles.url-template`:: + + [clone-bundles] + url-template=https://bundles.host/cache/clone-bundles/{basename} + +Old bundles cleanup: +.................... + +When new bundles are generated, the older ones are no longer necessary and can +be removed from storage. This is done through the `clone-bundles.delete-command` +configuration. The command is given the URL of the artifact to delete through +the `$HGCB_BUNDLE_URL` environment variable::
+ + [clone-bundles] + delete-command=sftp rm sftp://bundles.host/clone-bundles/$HGCB_BUNDLE_BASENAME + +If the file was already deleted, the command must still succeed. """ +import os +import weakref + +from mercurial.i18n import _ + from mercurial import ( bundlecaches, + commands, + error, extensions, + localrepo, + lock, + node, + registrar, + util, wireprotov1server, ) + +from mercurial.utils import ( + procutil, ) + testedwith = b'ships-with-hg-core' @@ -222,9 +343,740 @@ # missing file. if repo.vfs.exists(bundlecaches.CB_MANIFEST_FILE): caps.append(b'clonebundles') + caps.append(b'clonebundles_manifest') return caps def extsetup(ui): extensions.wrapfunction(wireprotov1server, b'_capabilities', capabilities) + + +# logic for bundle auto-generation + + +configtable = {} +configitem = registrar.configitem(configtable) + +cmdtable = {} +command = registrar.command(cmdtable) + +configitem(b'clone-bundles', b'auto-generate.on-change', default=False) +configitem(b'clone-bundles', b'auto-generate.formats', default=list) +configitem(b'clone-bundles', b'auto-generate.serve-inline', default=False) +configitem(b'clone-bundles', b'trigger.below-bundled-ratio', default=0.95) +configitem(b'clone-bundles', b'trigger.revs', default=1000) + +configitem(b'clone-bundles', b'upload-command', default=None) + +configitem(b'clone-bundles', b'delete-command', default=None) + +configitem(b'clone-bundles', b'url-template', default=None) + +configitem(b'devel', b'debug.clonebundles', default=False) + + +# category for the post-close transaction hooks +CAT_POSTCLOSE = b"clonebundles-autobundles" + +# template for bundle file names +BUNDLE_MASK = ( + b"full-%(bundle_type)s-%(revs)d_revs-%(tip_short)s_tip-%(op_id)s.hg" +) + + +# file in .hg/ used to track clonebundles being auto-generated +AUTO_GEN_FILE = b'clonebundles.auto-gen' + + +class BundleBase(object): + """represents the core properties that matter for us in a bundle + + :bundle_type: the bundlespec (see hg help bundlespec) + 
:revs: the number of revisions in the repo at bundle creation time + :tip_rev: the rev-num of the tip revision + :tip_node: the node id of the tip-most revision in the bundle + + :ready: True if the bundle is ready to be served + """ + + ready = False + + def __init__(self, bundle_type, revs, tip_rev, tip_node): + self.bundle_type = bundle_type + self.revs = revs + self.tip_rev = tip_rev + self.tip_node = tip_node + + def valid_for(self, repo): + """is this bundle applicable to the current repository + + This is useful for detecting bundles made irrelevant by stripping. + """ + tip_node = node.bin(self.tip_node) + return repo.changelog.index.get_rev(tip_node) == self.tip_rev + + def __eq__(self, other): + left = (self.ready, self.bundle_type, self.tip_rev, self.tip_node) + right = (other.ready, other.bundle_type, other.tip_rev, other.tip_node) + return left == right + + def __ne__(self, other): + return not self == other + + def __cmp__(self, other): + if self == other: + return 0 + return -1 + + +class RequestedBundle(BundleBase): + """A bundle that should be generated.
+ + Additional attributes compared to BundleBase + :heads: list of head revisions (as rev-num) + :op_id: a "unique" identifier for the operation triggering the change + """ + + def __init__(self, bundle_type, revs, tip_rev, tip_node, head_revs, op_id): + self.head_revs = head_revs + self.op_id = op_id + super(RequestedBundle, self).__init__( + bundle_type, + revs, + tip_rev, + tip_node, + ) + + @property + def suggested_filename(self): + """A filename that can be used for the generated bundle""" + data = { + b'bundle_type': self.bundle_type, + b'revs': self.revs, + b'heads': self.head_revs, + b'tip_rev': self.tip_rev, + b'tip_node': self.tip_node, + b'tip_short': self.tip_node[:12], + b'op_id': self.op_id, + } + return BUNDLE_MASK % data + + def generate_bundle(self, repo, file_path): + """generate the bundle at `filepath`""" + commands.bundle( + repo.ui, + repo, + file_path, + base=[b"null"], + rev=self.head_revs, + type=self.bundle_type, + quiet=True, + ) + + def generating(self, file_path, hostname=None, pid=None): + """return a GeneratingBundle object from this object""" + if pid is None: + pid = os.getpid() + if hostname is None: + hostname = lock._getlockprefix() + return GeneratingBundle( + self.bundle_type, + self.revs, + self.tip_rev, + self.tip_node, + hostname, + pid, + file_path, + ) + + +class GeneratingBundle(BundleBase): + """A bundle being generated + + extra attributes compared to BundleBase: + + :hostname: the hostname of the machine generating the bundle + :pid: the pid of the process generating the bundle + :filepath: the target filename of the bundle + + These attributes exist to help detect stalled generation processes. 
+ """ + + ready = False + + def __init__( + self, bundle_type, revs, tip_rev, tip_node, hostname, pid, filepath + ): + self.hostname = hostname + self.pid = pid + self.filepath = filepath + super(GeneratingBundle, self).__init__( + bundle_type, revs, tip_rev, tip_node + ) + + @classmethod + def from_line(cls, line): + """create an object by deserializing a line from AUTO_GEN_FILE""" + assert line.startswith(b'PENDING-v1 ') + ( + __, + bundle_type, + revs, + tip_rev, + tip_node, + hostname, + pid, + filepath, + ) = line.split() + hostname = util.urlreq.unquote(hostname) + filepath = util.urlreq.unquote(filepath) + revs = int(revs) + tip_rev = int(tip_rev) + pid = int(pid) + return cls( + bundle_type, revs, tip_rev, tip_node, hostname, pid, filepath + ) + + def to_line(self): + """serialize the object to include as a line in AUTO_GEN_FILE""" + templ = b"PENDING-v1 %s %d %d %s %s %d %s" + data = ( + self.bundle_type, + self.revs, + self.tip_rev, + self.tip_node, + util.urlreq.quote(self.hostname), + self.pid, + util.urlreq.quote(self.filepath), + ) + return templ % data + + def __eq__(self, other): + if not super(GeneratingBundle, self).__eq__(other): + return False + left = (self.hostname, self.pid, self.filepath) + right = (other.hostname, other.pid, other.filepath) + return left == right + + def uploaded(self, url, basename): + """return a GeneratedBundle from this object""" + return GeneratedBundle( + self.bundle_type, + self.revs, + self.tip_rev, + self.tip_node, + url, + basename, + ) + + +class GeneratedBundle(BundleBase): + """A bundle that is done being generated and can be served + + extra attributes compared to BundleBase: + + :file_url: the url where the bundle is available. 
+ :basename: the "basename" used to upload (useful for deletion) + + These attributes exist to generate a bundle manifest + (.hg/pullbundles.manifest) + """ + + ready = True + + def __init__( + self, bundle_type, revs, tip_rev, tip_node, file_url, basename + ): + self.file_url = file_url + self.basename = basename + super(GeneratedBundle, self).__init__( + bundle_type, revs, tip_rev, tip_node + ) + + @classmethod + def from_line(cls, line): + """create an object by deserializing a line from AUTO_GEN_FILE""" + assert line.startswith(b'DONE-v1 ') + ( + __, + bundle_type, + revs, + tip_rev, + tip_node, + file_url, + basename, + ) = line.split() + revs = int(revs) + tip_rev = int(tip_rev) + file_url = util.urlreq.unquote(file_url) + return cls(bundle_type, revs, tip_rev, tip_node, file_url, basename) + + def to_line(self): + """serialize the object to include as a line in AUTO_GEN_FILE""" + templ = b"DONE-v1 %s %d %d %s %s %s" + data = ( + self.bundle_type, + self.revs, + self.tip_rev, + self.tip_node, + util.urlreq.quote(self.file_url), + self.basename, + ) + return templ % data + + def manifest_line(self): + """serialize the object to include as a line in pullbundles.manifest""" + templ = b"%s BUNDLESPEC=%s" + if self.file_url.startswith(b'http'): + templ += b" REQUIRESNI=true" + return templ % (self.file_url, self.bundle_type) + + def __eq__(self, other): + if not super(GeneratedBundle, self).__eq__(other): + return False + return self.file_url == other.file_url + + +def parse_auto_gen(content): + """parse the AUTO_GEN_FILE to return a list of Bundle object""" + bundles = [] + for line in content.splitlines(): + if line.startswith(b'PENDING-v1 '): + bundles.append(GeneratingBundle.from_line(line)) + elif line.startswith(b'DONE-v1 '): + bundles.append(GeneratedBundle.from_line(line)) + return bundles + + +def dumps_auto_gen(bundles): + """serialize a list of Bundle as a AUTO_GEN_FILE content""" + lines = [] + for b in bundles: + lines.append(b"%s\n" % b.to_line()) + 
lines.sort() + return b"".join(lines) + + +def read_auto_gen(repo): + """read the AUTO_GEN_FILE for a list of Bundle objects""" + data = repo.vfs.tryread(AUTO_GEN_FILE) + if not data: + return [] + return parse_auto_gen(data) + + +def write_auto_gen(repo, bundles): + """write a list of Bundle objects into the repo's AUTO_GEN_FILE""" + assert repo._cb_lock_ref is not None + data = dumps_auto_gen(bundles) + with repo.vfs(AUTO_GEN_FILE, mode=b'wb', atomictemp=True) as f: + f.write(data) + + +def generate_manifest(bundles): + """generate the clone bundle manifest content for a list of Bundle objects""" + bundles = list(bundles) + bundles.sort(key=lambda b: b.bundle_type) + lines = [] + for b in bundles: + lines.append(b"%s\n" % b.manifest_line()) + return b"".join(lines) + + +def update_ondisk_manifest(repo): + """update the clonebundle manifest with the latest URLs""" + with repo.clonebundles_lock(): + bundles = read_auto_gen(repo) + + per_types = {} + for b in bundles: + if not (b.ready and b.valid_for(repo)): + continue + current = per_types.get(b.bundle_type) + if current is not None and current.revs >= b.revs: + continue + per_types[b.bundle_type] = b + manifest = generate_manifest(per_types.values()) + with repo.vfs( + bundlecaches.CB_MANIFEST_FILE, mode=b"wb", atomictemp=True + ) as f: + f.write(manifest) + + +def update_bundle_list(repo, new_bundles=(), del_bundles=()): + """modify the repo's AUTO_GEN_FILE + + This method also regenerates the clone bundle manifest when needed""" + with repo.clonebundles_lock(): + bundles = read_auto_gen(repo) + if del_bundles: + bundles = [b for b in bundles if b not in del_bundles] + new_bundles = [b for b in new_bundles if b not in bundles] + bundles.extend(new_bundles) + write_auto_gen(repo, bundles) + all_changed = [] + all_changed.extend(new_bundles) + all_changed.extend(del_bundles) + if any(b.ready for b in all_changed): + update_ondisk_manifest(repo) + + +def cleanup_tmp_bundle(repo, target): + """remove a GeneratingBundle file and
entry""" + assert not target.ready + with repo.clonebundles_lock(): + repo.vfs.tryunlink(target.filepath) + update_bundle_list(repo, del_bundles=[target]) + + +def finalize_one_bundle(repo, target): + """upload a generated bundle and advertise it in the clonebundles.manifest""" + with repo.clonebundles_lock(): + bundles = read_auto_gen(repo) + if target in bundles and target.valid_for(repo): + result = upload_bundle(repo, target) + update_bundle_list(repo, new_bundles=[result]) + cleanup_tmp_bundle(repo, target) + + +def find_outdated_bundles(repo, bundles): + """finds outdated bundles""" + olds = [] + per_types = {} + for b in bundles: + if not b.valid_for(repo): + olds.append(b) + continue + l = per_types.setdefault(b.bundle_type, []) + l.append(b) + for key in sorted(per_types): + all = per_types[key] + if len(all) > 1: + all.sort(key=lambda b: b.revs, reverse=True) + olds.extend(all[1:]) + return olds + + +def collect_garbage(repo): + """finds outdated bundles and get them deleted""" + with repo.clonebundles_lock(): + bundles = read_auto_gen(repo) + olds = find_outdated_bundles(repo, bundles) + for o in olds: + delete_bundle(repo, o) + update_bundle_list(repo, del_bundles=olds) + + +def upload_bundle(repo, bundle): + """upload the result of a GeneratingBundle and return a GeneratedBundle + + The upload is done using the `clone-bundles.upload-command` + """ + inline = repo.ui.config(b'clone-bundles', b'auto-generate.serve-inline') + basename = repo.vfs.basename(bundle.filepath) + if inline: + dest_dir = repo.vfs.join(bundlecaches.BUNDLE_CACHE_DIR) + repo.vfs.makedirs(dest_dir) + dest = repo.vfs.join(dest_dir, basename) + util.copyfiles(bundle.filepath, dest, hardlink=True) + url = bundlecaches.CLONEBUNDLESCHEME + basename + return bundle.uploaded(url, basename) + else: + cmd = repo.ui.config(b'clone-bundles', b'upload-command') + url = repo.ui.config(b'clone-bundles', b'url-template') + filepath = procutil.shellquote(bundle.filepath) + variables = { + 
b'HGCB_BUNDLE_PATH': filepath, + b'HGCB_BUNDLE_BASENAME': basename, + } + env = procutil.shellenviron(environ=variables) + ret = repo.ui.system(cmd, environ=env) + if ret: + raise error.Abort(b"command returned status %d: %s" % (ret, cmd)) + url = ( + url.decode('utf8') + .format(basename=basename.decode('utf8')) + .encode('utf8') + ) + return bundle.uploaded(url, basename) + + +def delete_bundle(repo, bundle): + """delete a bundle from storage""" + assert bundle.ready + + inline = bundle.file_url.startswith(bundlecaches.CLONEBUNDLESCHEME) + + if inline: + msg = b'clone-bundles: deleting inline bundle %s\n' + else: + msg = b'clone-bundles: deleting bundle %s\n' + msg %= bundle.basename + if repo.ui.configbool(b'devel', b'debug.clonebundles'): + repo.ui.write(msg) + else: + repo.ui.debug(msg) + + if inline: + inline_path = repo.vfs.join( + bundlecaches.BUNDLE_CACHE_DIR, + bundle.basename, + ) + util.tryunlink(inline_path) + else: + cmd = repo.ui.config(b'clone-bundles', b'delete-command') + variables = { + b'HGCB_BUNDLE_URL': bundle.file_url, + b'HGCB_BASENAME': bundle.basename, + } + env = procutil.shellenviron(environ=variables) + ret = repo.ui.system(cmd, environ=env) + if ret: + raise error.Abort(b"command returned status %d: %s" % (ret, cmd)) + + +def auto_bundle_needed_actions(repo, bundles, op_id): + """find the list of bundles that need action + + returns a list of RequestedBundle objects that need to be generated and + uploaded.""" + create_bundles = [] + delete_bundles = [] + repo = repo.filtered(b"immutable") + targets = repo.ui.configlist(b'clone-bundles', b'auto-generate.formats') + ratio = float( + repo.ui.config(b'clone-bundles', b'trigger.below-bundled-ratio') + ) + abs_revs = repo.ui.configint(b'clone-bundles', b'trigger.revs') + revs = len(repo.changelog) + generic_data = { + 'revs': revs, + 'head_revs': repo.changelog.headrevs(), + 'tip_rev': repo.changelog.tiprev(), + 'tip_node': node.hex(repo.changelog.tip()), + 'op_id': op_id, + } + for t in 
targets: + t = bundlecaches.parsebundlespec(repo, t, strict=False).as_spec() + if new_bundle_needed(repo, bundles, ratio, abs_revs, t, revs): + data = generic_data.copy() + data['bundle_type'] = t + b = RequestedBundle(**data) + create_bundles.append(b) + delete_bundles.extend(find_outdated_bundles(repo, bundles)) + return create_bundles, delete_bundles + + +def new_bundle_needed(repo, bundles, ratio, abs_revs, bundle_type, revs): + """consider the current cached content and trigger new bundles if needed""" + threshold = max((revs * ratio), (revs - abs_revs)) + for b in bundles: + if not b.valid_for(repo) or b.bundle_type != bundle_type: + continue + if b.revs > threshold: + return False + return True + + +def start_one_bundle(repo, bundle): + """start the generation of a single bundle file + + the `bundle` argument should be a RequestedBundle object. + + This data is passed to the `debugmakeclonebundles` "as is". + """ + data = util.pickle.dumps(bundle) + cmd = [procutil.hgexecutable(), b'--cwd', repo.path, INTERNAL_CMD] + env = procutil.shellenviron() + msg = b'clone-bundles: starting bundle generation: %s\n' + stdout = None + stderr = None + waits = [] + record_wait = None + if repo.ui.configbool(b'devel', b'debug.clonebundles'): + stdout = procutil.stdout + stderr = procutil.stderr + repo.ui.write(msg % bundle.bundle_type) + record_wait = waits.append + else: + repo.ui.debug(msg % bundle.bundle_type) + bg = procutil.runbgcommand + bg( + cmd, + env, + stdin_bytes=data, + stdout=stdout, + stderr=stderr, + record_wait=record_wait, + ) + for f in waits: + f() + + +INTERNAL_CMD = b'debug::internal-make-clone-bundles' + + +@command(INTERNAL_CMD, [], b'') +def debugmakeclonebundles(ui, repo): + """Internal command to auto-generate debug bundles""" + requested_bundle = util.pickle.load(procutil.stdin) + procutil.stdin.close() + + collect_garbage(repo) + + fname = requested_bundle.suggested_filename + fpath = repo.vfs.makedirs(b'tmp-bundles') + fpath = 
repo.vfs.join(b'tmp-bundles', fname) + bundle = requested_bundle.generating(fpath) + update_bundle_list(repo, new_bundles=[bundle]) + + requested_bundle.generate_bundle(repo, fpath) + + repo.invalidate() + finalize_one_bundle(repo, bundle) + + +def make_auto_bundler(source_repo): + reporef = weakref.ref(source_repo) + + def autobundle(tr): + repo = reporef() + assert repo is not None + bundles = read_auto_gen(repo) + new, __ = auto_bundle_needed_actions(repo, bundles, b"%d_txn" % id(tr)) + for data in new: + start_one_bundle(repo, data) + return None + + return autobundle + + +def reposetup(ui, repo): + """install the two pieces needed for automatic clonebundle generation + + - add a "post-close" hook that fires bundling when needed + - introduce a clone-bundle lock to let multiple processes meddle with the + state files. + """ + if not repo.local(): + return + + class autobundlesrepo(repo.__class__): + def transaction(self, *args, **kwargs): + tr = super(autobundlesrepo, self).transaction(*args, **kwargs) + enabled = repo.ui.configbool( + b'clone-bundles', + b'auto-generate.on-change', + ) + targets = repo.ui.configlist( + b'clone-bundles', b'auto-generate.formats' + ) + if enabled and targets: + tr.addpostclose(CAT_POSTCLOSE, make_auto_bundler(self)) + return tr + + @localrepo.unfilteredmethod + def clonebundles_lock(self, wait=True): + '''Lock the repository file related to clone bundles''' + if not util.safehasattr(self, '_cb_lock_ref'): + self._cb_lock_ref = None + l = self._currentlock(self._cb_lock_ref) + if l is not None: + l.lock() + return l + + l = self._lock( + vfs=self.vfs, + lockname=b"clonebundleslock", + wait=wait, + releasefn=None, + acquirefn=None, + desc=_(b'repository %s') % self.origroot, + ) + self._cb_lock_ref = weakref.ref(l) + return l + + repo._wlockfreeprefix.add(AUTO_GEN_FILE) + repo._wlockfreeprefix.add(bundlecaches.CB_MANIFEST_FILE) + repo.__class__ = autobundlesrepo + + +@command( + b'admin::clone-bundles-refresh', + [ + ( + b'', + 
b'background', + False, + _(b'start bundle generation in the background'), + ), + ], + b'', +) +def cmd_admin_clone_bundles_refresh( + ui, + repo: localrepo.localrepository, + background=False, +): + """generate clone bundles according to the configuration + + This runs the logic for automatic generation, removing outdated bundles and + generating new ones if necessary. See :hg:`help -e clone-bundles` for + details about how to configure this feature. + """ + debug = repo.ui.configbool(b'devel', b'debug.clonebundles') + bundles = read_auto_gen(repo) + op_id = b"%d_acbr" % os.getpid() + create, delete = auto_bundle_needed_actions(repo, bundles, op_id) + + # if some bundles are scheduled for creation in the background, they will + # deal with garbage collection too, so no need to synchronously do it. + # + # However, if no bundles are scheduled for creation, we need to explicitly do + # it here. + if not (background and create): + # we clean up outdated bundles before generating new ones to keep the + # last two versions of the bundle around for a while and avoid having to + # deal with clients that just got served a manifest.
+ for o in delete: + delete_bundle(repo, o) + update_bundle_list(repo, del_bundles=delete) + + if create: + fpath = repo.vfs.makedirs(b'tmp-bundles') + + if background: + for requested_bundle in create: + start_one_bundle(repo, requested_bundle) + else: + for requested_bundle in create: + if debug: + msg = b'clone-bundles: starting bundle generation: %s\n' + repo.ui.write(msg % requested_bundle.bundle_type) + fname = requested_bundle.suggested_filename + fpath = repo.vfs.join(b'tmp-bundles', fname) + generating_bundle = requested_bundle.generating(fpath) + update_bundle_list(repo, new_bundles=[generating_bundle]) + requested_bundle.generate_bundle(repo, fpath) + result = upload_bundle(repo, generating_bundle) + update_bundle_list(repo, new_bundles=[result]) + update_ondisk_manifest(repo) + cleanup_tmp_bundle(repo, generating_bundle) + + +@command(b'admin::clone-bundles-clear', [], b'') +def cmd_admin_clone_bundles_clear(ui, repo: localrepo.localrepository): + """remove existing clone bundle caches + + See `hg help admin::clone-bundles-refresh` for details on how to regenerate + them. + + This command will only affect bundles currently available; it will not + affect bundles being asynchronously generated.
+ """ + bundles = read_auto_gen(repo) + delete = [b for b in bundles if b.ready] + for o in delete: + delete_bundle(repo, o) + update_bundle_list(repo, del_bundles=delete) diff -r 41b9eb302d95 -r 9a4db474ef1a hgext/fastexport.py --- a/hgext/fastexport.py Thu Jun 22 11:18:47 2023 +0200 +++ b/hgext/fastexport.py Thu Jun 22 11:36:37 2023 +0200 @@ -69,10 +69,10 @@ return b"refs/heads/" + branch -def write_data(buf, data, skip_newline): +def write_data(buf, data, add_newline=False): buf.append(b"data %d\n" % len(data)) buf.append(data) - if not skip_newline or data[-1:] != b"\n": + if add_newline or data[-1:] != b"\n": buf.append(b"\n") @@ -103,7 +103,7 @@ marks[filerev] = mark data = filectx.data() buf = [b"blob\n", b"mark :%d\n" % mark] - write_data(buf, data, False) + write_data(buf, data, True) ui.write(*buf, keepprogressbar=True) del buf @@ -122,7 +122,7 @@ convert_to_git_date(ctx.date()), ), ] - write_data(buf, ctx.description(), True) + write_data(buf, ctx.description()) if parents: buf.append(b"from :%d\n" % marks[parents[0].hex()]) if len(parents) == 2: diff -r 41b9eb302d95 -r 9a4db474ef1a hgext/infinitepush/__init__.py --- a/hgext/infinitepush/__init__.py Thu Jun 22 11:18:47 2023 +0200 +++ b/hgext/infinitepush/__init__.py Thu Jun 22 11:36:37 2023 +0200 @@ -154,6 +154,18 @@ configitem( b'infinitepush', + b'deprecation-message', + default=True, +) + +configitem( + b'infinitepush', + b'deprecation-abort', + default=True, +) + +configitem( + b'infinitepush', b'server', default=False, ) @@ -317,7 +329,20 @@ return ui.configbool(b'infinitepush', b'server') +WARNING_MSG = b"""IMPORTANT: if you use this extension, please contact +mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be +unused and, unless we learn of users of this functionality, we will drop +this extension in Mercurial 6.6.
+""" + + def reposetup(ui, repo): + if ui.configbool(b'infinitepush', b'deprecation-message'): + ui.write_err(WARNING_MSG) + if ui.configbool(b'infinitepush', b'deprecation-abort'): + msg = b"USING EXTENSION INFINITE PUSH DESPITE PENDING DROP" + hint = b"contact mercurial-devel@mercurial-scm.org" + raise error.Abort(msg, hint=hint) if _isserver(ui) and repo.local(): repo.bundlestore = bundlestore(repo) @@ -330,6 +355,11 @@ clientextsetup(ui) +def uipopulate(ui): + if not ui.hasconfig(b"experimental", b"changegroup3"): + ui.setconfig(b"experimental", b"changegroup3", False, b"infinitepush") + + def commonsetup(ui): wireprotov1server.commands[b'listkeyspatterns'] = ( wireprotolistkeyspatterns, diff -r 41b9eb302d95 -r 9a4db474ef1a hgext/largefiles/lfutil.py --- a/hgext/largefiles/lfutil.py Thu Jun 22 11:18:47 2023 +0200 +++ b/hgext/largefiles/lfutil.py Thu Jun 22 11:36:37 2023 +0200 @@ -551,10 +551,10 @@ def islfilesrepo(repo): '''Return true if the repo is a largefile repo.''' - if b'largefiles' in repo.requirements and any( - shortnameslash in f[1] for f in repo.store.datafiles() - ): - return True + if b'largefiles' in repo.requirements: + for entry in repo.store.data_entries(): + if entry.is_revlog and shortnameslash in entry.target_id: + return True return any(openlfdirstate(repo.ui, repo, False)) diff -r 41b9eb302d95 -r 9a4db474ef1a hgext/largefiles/reposetup.py --- a/hgext/largefiles/reposetup.py Thu Jun 22 11:18:47 2023 +0200 +++ b/hgext/largefiles/reposetup.py Thu Jun 22 11:36:37 2023 +0200 @@ -457,11 +457,16 @@ def checkrequireslfiles(ui, repo, **kwargs): with repo.lock(): - if b'largefiles' not in repo.requirements and any( - lfutil.shortname + b'/' in f[1] for f in repo.store.datafiles() - ): - repo.requirements.add(b'largefiles') - scmutil.writereporequirements(repo) + if b'largefiles' in repo.requirements: + return + marker = lfutil.shortnameslash + for entry in repo.store.data_entries(): + # XXX note that this match is not rooted and can wrongly match + 
# directory ending with ".hglf" + if entry.is_revlog and marker in entry.target_id: + repo.requirements.add(b'largefiles') + scmutil.writereporequirements(repo) + break ui.setconfig( b'hooks', b'changegroup.lfiles', checkrequireslfiles, b'largefiles' diff -r 41b9eb302d95 -r 9a4db474ef1a hgext/narrow/narrowcommands.py --- a/hgext/narrow/narrowcommands.py Thu Jun 22 11:18:47 2023 +0200 +++ b/hgext/narrow/narrowcommands.py Thu Jun 22 11:36:37 2023 +0200 @@ -288,13 +288,15 @@ repair.strip(ui, unfi, tostrip, topic=b'narrow', backup=backup) todelete = [] - for t, f, size in repo.store.datafiles(): - if f.startswith(b'data/'): - file = f[5:-2] - if not newmatch(file): - todelete.append(f) - elif f.startswith(b'meta/'): - dir = f[5:-13] + for entry in repo.store.data_entries(): + if not entry.is_revlog: + continue + if entry.is_filelog: + if not newmatch(entry.target_id): + for file_ in entry.files(): + todelete.append(file_.unencoded_path) + elif entry.is_manifestlog: + dir = entry.target_id dirs = sorted(pathutil.dirs({dir})) + [dir] include = True for d in dirs: @@ -305,7 +307,8 @@ if visit == b'all': break if not include: - todelete.append(f) + for file_ in entry.files(): + todelete.append(file_.unencoded_path) repo.destroying() @@ -644,7 +647,7 @@ if ( ui.promptchoice( _( - b'remove these unused includes (yn)?' + b'remove these unused includes (Yn)?' 
b'$$ &Yes $$ &No' ) ) diff -r 41b9eb302d95 -r 9a4db474ef1a hgext/narrow/narrowrepo.py --- a/hgext/narrow/narrowrepo.py Thu Jun 22 11:18:47 2023 +0200 +++ b/hgext/narrow/narrowrepo.py Thu Jun 22 11:36:37 2023 +0200 @@ -19,8 +19,8 @@ dirstate = super(narrowrepository, self)._makedirstate() return narrowdirstate.wrapdirstate(self, dirstate) - def peer(self, path=None): - peer = super(narrowrepository, self).peer(path=path) + def peer(self, *args, **kwds): + peer = super(narrowrepository, self).peer(*args, **kwds) peer._caps.add(wireprototypes.NARROWCAP) peer._caps.add(wireprototypes.ELLIPSESCAP) return peer diff -r 41b9eb302d95 -r 9a4db474ef1a hgext/rebase.py --- a/hgext/rebase.py Thu Jun 22 11:18:47 2023 +0200 +++ b/hgext/rebase.py Thu Jun 22 11:36:37 2023 +0200 @@ -52,6 +52,7 @@ util, ) + # The following constants are used throughout the rebase module. The ordering of # their values must be maintained. @@ -84,15 +85,6 @@ return 1 -def _savegraft(ctx, extra): - s = ctx.extra().get(b'source', None) - if s is not None: - extra[b'source'] = s - s = ctx.extra().get(b'intermediate-source', None) - if s is not None: - extra[b'intermediate-source'] = s - - def _savebranch(ctx, extra): extra[b'branch'] = ctx.branch() @@ -193,7 +185,7 @@ self.date = opts.get('date', None) e = opts.get('extrafn') # internal, used by e.g. hgsubversion - self.extrafns = [_savegraft] + self.extrafns = [rewriteutil.preserve_extras_on_rebase] if e: self.extrafns = [e] diff -r 41b9eb302d95 -r 9a4db474ef1a hgext/remotefilelog/__init__.py --- a/hgext/remotefilelog/__init__.py Thu Jun 22 11:18:47 2023 +0200 +++ b/hgext/remotefilelog/__init__.py Thu Jun 22 11:36:37 2023 +0200 @@ -408,9 +408,7 @@ # bundle2 flavor of streamclones, so force us to use # v1 instead. 
if b'v2' in pullop.remotebundle2caps.get(b'stream', []): - pullop.remotebundle2caps[b'stream'] = [ - c for c in pullop.remotebundle2caps[b'stream'] if c != b'v2' - ] + pullop.remotebundle2caps[b'stream'] = [] if bundle2: return False, None supported, requirements = orig(pullop, bundle2=bundle2) diff -r 41b9eb302d95 -r 9a4db474ef1a hgext/remotefilelog/contentstore.py --- a/hgext/remotefilelog/contentstore.py Thu Jun 22 11:18:47 2023 +0200 +++ b/hgext/remotefilelog/contentstore.py Thu Jun 22 11:36:37 2023 +0200 @@ -375,7 +375,7 @@ ledger.markdataentry(self, treename, node) ledger.markhistoryentry(self, treename, node) - for t, path, size in self._store.datafiles(): + for t, path, size in self._store.data_entries(): if path[:5] != b'meta/' or path[-2:] != b'.i': continue diff -r 41b9eb302d95 -r 9a4db474ef1a hgext/remotefilelog/remotefilelogserver.py --- a/hgext/remotefilelog/remotefilelogserver.py Thu Jun 22 11:18:47 2023 +0200 +++ b/hgext/remotefilelog/remotefilelogserver.py Thu Jun 22 11:36:37 2023 +0200 @@ -145,7 +145,9 @@ ) # don't clone filelogs to shallow clients - def _walkstreamfiles(orig, repo, matcher=None): + def _walkstreamfiles( + orig, repo, matcher=None, phase=False, obsolescence=False + ): if state.shallowremote: # if we are shallow ourselves, stream our local commits if shallowutil.isenabled(repo): @@ -162,27 +164,32 @@ ): n = util.pconvert(fp[striplen:]) d = store.decodedir(n) - t = store.FILETYPE_OTHER - yield (t, d, st.st_size) + yield store.SimpleStoreEntry( + entry_path=d, + is_volatile=False, + file_size=st.st_size, + ) + if kind == stat.S_IFDIR: visit.append(fp) if scmutil.istreemanifest(repo): - for (t, u, s) in repo.store.datafiles(): - if u.startswith(b'meta/') and ( - u.endswith(b'.i') or u.endswith(b'.d') - ): - yield (t, u, s) + for entry in repo.store.data_entries(): + if not entry.is_revlog: + continue + if entry.is_manifestlog: + yield entry # Return .d and .i files that do not match the shallow pattern match = state.match if match and 
not match.always(): - for (t, u, s) in repo.store.datafiles(): - f = u[5:-2] # trim data/... and .i/.d - if not state.match(f): - yield (t, u, s) + for entry in repo.store.data_entries(): + if not entry.is_revlog: + continue + if not state.match(entry.target_id): + yield entry - for x in repo.store.topfiles(): + for x in repo.store.top_entries(): if state.noflatmf and x[1][:11] == b'00manifest.': continue yield x @@ -195,7 +202,9 @@ _(b"Cannot clone from a shallow repo to a full repo.") ) else: - for x in orig(repo, matcher): + for x in orig( + repo, matcher, phase=phase, obsolescence=obsolescence + ): yield x extensions.wrapfunction(streamclone, b'_walkstreamfiles', _walkstreamfiles) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/__main__.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/mercurial/__main__.py Thu Jun 22 11:36:37 2023 +0200 @@ -0,0 +1,12 @@ +def run(): + from . import demandimport + + with demandimport.tracing.log('hg script'): + demandimport.enable() + from . import dispatch + + dispatch.run() + + +if __name__ == '__main__': + run() diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/bundle2.py --- a/mercurial/bundle2.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/bundle2.py Thu Jun 22 11:36:37 2023 +0200 @@ -1234,7 +1234,7 @@ # we only support fixed size data now. # This will be improved in the future. if util.safehasattr(self.data, 'next') or util.safehasattr( - self.data, b'__next__' + self.data, '__next__' ): buff = util.chunkbuffer(self.data) chunk = buff.read(preferedchunksize) @@ -1381,7 +1381,7 @@ def __init__(self, ui, header, fp): super(unbundlepart, self).__init__(fp) self._seekable = util.safehasattr(fp, 'seek') and util.safehasattr( - fp, b'tell' + fp, 'tell' ) self.ui = ui # unbundle state attr @@ -1671,6 +1671,10 @@ # Else always advertise support on client, because payload support # should always be advertised. 
+ if repo.ui.configbool(b'experimental', b'stream-v3'): + if b'stream' in caps: + caps[b'stream'] += (b'v3-exp',) + # b'rev-branch-cache is no longer advertised, but still supported + # for legacy clients. @@ -1703,6 +1707,7 @@ vfs=None, compression=None, compopts=None, + allow_internal=False, ): if bundletype.startswith(b'HG10'): cg = changegroup.makechangegroup(repo, outgoing, b'01', source) @@ -1718,9 +1723,21 @@ elif not bundletype.startswith(b'HG20'): raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype) + # enforce that no internal phases are to be bundled + bundled_internal = repo.revs(b"%ln and _internal()", outgoing.ancestorsof) + if bundled_internal and not allow_internal: + count = len(repo.revs(b'%ln and _internal()', outgoing.missing)) + msg = "backup bundle would contain %d internal changesets" + msg %= count + raise error.ProgrammingError(msg) + caps = {} + if opts.get(b'obsolescence', False): + caps[b'obsmarkers'] = (b'V1',) + if opts.get(b'streamv2'): + caps[b'stream'] = [b'v2'] + elif opts.get(b'streamv3-exp'): + caps[b'stream'] = [b'v3-exp'] bundle = bundle20(ui, caps) bundle.setcompression(compression, compopts) _addpartsfromopts(ui, repo, bundle, source, outgoing, opts) @@ -1750,18 +1767,25 @@ part.addparam( b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False ) - if opts.get(b'phases') and repo.revs( - b'%ln and secret()', outgoing.ancestorsof - ): - part.addparam( - b'targetphase', b'%d' % phases.secret, mandatory=False - ) + if opts.get(b'phases'): + target_phase = phases.draft + for head in outgoing.ancestorsof: + target_phase = max(target_phase, repo[head].phase()) + if target_phase > phases.draft: + part.addparam( + b'targetphase', + b'%d' % target_phase, + mandatory=False, + ) if repository.REPO_FEATURE_SIDE_DATA in repo.features: part.addparam(b'exp-sidedata', b'1') if opts.get(b'streamv2', False): addpartbundlestream2(bundler, repo, stream=True) + if opts.get(b'streamv3-exp', False): + addpartbundlestream2(bundler,
repo, stream=True) + if opts.get(b'tagsfnodescache', True): addparttagsfnodescache(repo, bundler, outgoing) @@ -1868,17 +1892,25 @@ return if not streamclone.allowservergeneration(repo): - raise error.Abort( - _( - b'stream data requested but server does not allow ' - b'this feature' - ), - hint=_( - b'well-behaved clients should not be ' - b'requesting stream data from servers not ' - b'advertising it; the client may be buggy' - ), + msg = _(b'stream data requested but server does not allow this feature') + hint = _(b'the client seems buggy') + raise error.Abort(msg, hint=hint) + if not (b'stream' in bundler.capabilities): + msg = _( + b'stream data requested but supported streaming clone versions were not specified' ) + hint = _(b'the client seems buggy') + raise error.Abort(msg, hint=hint) + client_supported = set(bundler.capabilities[b'stream']) + server_supported = set(getrepocaps(repo, role=b'client').get(b'stream', [])) + common_supported = client_supported & server_supported + if not common_supported: + msg = _(b'no common supported version with the client: %s; %s') + str_server = b','.join(sorted(server_supported)) + str_client = b','.join(sorted(client_supported)) + msg %= (str_server, str_client) + raise error.Abort(msg) + version = max(common_supported) # Stream clones don't compress well. And compression undermines a # goal of stream clones, which is to be fast. 
Communicate the desire @@ -1909,15 +1941,24 @@ elif repo.obsstore._version in remoteversions: includeobsmarkers = True - filecount, bytecount, it = streamclone.generatev2( - repo, includepats, excludepats, includeobsmarkers - ) - requirements = streamclone.streamed_requirements(repo) - requirements = _formatrequirementsspec(requirements) - part = bundler.newpart(b'stream2', data=it) - part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True) - part.addparam(b'filecount', b'%d' % filecount, mandatory=True) - part.addparam(b'requirements', requirements, mandatory=True) + if version == b"v2": + filecount, bytecount, it = streamclone.generatev2( + repo, includepats, excludepats, includeobsmarkers + ) + requirements = streamclone.streamed_requirements(repo) + requirements = _formatrequirementsspec(requirements) + part = bundler.newpart(b'stream2', data=it) + part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True) + part.addparam(b'filecount', b'%d' % filecount, mandatory=True) + part.addparam(b'requirements', requirements, mandatory=True) + elif version == b"v3-exp": + it = streamclone.generatev3( + repo, includepats, excludepats, includeobsmarkers + ) + requirements = streamclone.streamed_requirements(repo) + requirements = _formatrequirementsspec(requirements) + part = bundler.newpart(b'stream3-exp', data=it) + part.addparam(b'requirements', requirements, mandatory=True) def buildobsmarkerspart(bundler, markers, mandatory=True): @@ -2573,6 +2614,20 @@ streamclone.applybundlev2(repo, part, filecount, bytecount, requirements) +@parthandler(b'stream3-exp', (b'requirements',)) +def handlestreamv3bundle(op, part): + requirements = urlreq.unquote(part.params[b'requirements']) + requirements = requirements.split(b',') if requirements else [] + + repo = op.repo + if len(repo): + msg = _(b'cannot apply stream clone to non empty repository') + raise error.Abort(msg) + + repo.ui.debug(b'applying stream bundle\n') + streamclone.applybundlev3(repo, part, requirements) 
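The version negotiation added to `addpartbundlestream2` above intersects the client's advertised `stream` capability with the server's, aborts when the intersection is empty, and otherwise takes the highest common version. A minimal sketch of that selection in isolation — `pick_stream_version` is a hypothetical helper name; the real code inlines this logic:

```python
def pick_stream_version(client_supported, server_supported):
    # Intersect the two capability sets, as addpartbundlestream2 does
    # before deciding which stream part to emit.
    common = set(client_supported) & set(server_supported)
    if not common:
        raise ValueError("no common supported version with the client")
    # max() on bytes is lexicographic, so b"v3-exp" wins over b"v2".
    return max(common)
```

With a v2-only client against a server that also offers `v3-exp`, this yields `b"v2"`; only when both sides advertise `v3-exp` does the experimental format get used.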
+ + def widen_bundle( bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses ): diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/bundlecaches.py --- a/mercurial/bundlecaches.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/bundlecaches.py Thu Jun 22 11:36:37 2023 +0200 @@ -23,11 +23,40 @@ urlreq = util.urlreq +BUNDLE_CACHE_DIR = b'bundle-cache' CB_MANIFEST_FILE = b'clonebundles.manifest' +CLONEBUNDLESCHEME = b"peer-bundle-cache://" + + +def get_manifest(repo): + """get the bundle manifest to be served to a client from a server""" + raw_text = repo.vfs.tryread(CB_MANIFEST_FILE) + entries = [e.split(b' ', 1) for e in raw_text.splitlines()] + + new_lines = [] + for e in entries: + url = alter_bundle_url(repo, e[0]) + if len(e) == 1: + line = url + b'\n' + else: + line = b"%s %s\n" % (url, e[1]) + new_lines.append(line) + return b''.join(new_lines) + + +def alter_bundle_url(repo, url): + """a function that exist to help extension and hosting to alter the url + + This will typically be used to inject authentication information in the url + of cached bundles.""" + return url + + SUPPORTED_CLONEBUNDLE_SCHEMES = [ b"http://", b"https://", b"largefile://", + CLONEBUNDLESCHEME, ] @@ -60,11 +89,18 @@ if overwrite or key not in self._explicit_params: self._explicit_params[key] = value + def as_spec(self): + parts = [b"%s-%s" % (self.compression, self.version)] + for param in sorted(self._explicit_params.items()): + parts.append(b'%s=%s' % param) + return b';'.join(parts) + # Maps bundle version human names to changegroup versions. 
_bundlespeccgversions = { b'v1': b'01', b'v2': b'02', + b'v3': b'03', b'packed1': b's1', b'bundle2': b'02', # legacy } @@ -87,6 +123,14 @@ b'tagsfnodescache': True, b'revbranchcache': True, }, + b'v3': { + b'changegroup': True, + b'cg.version': b'03', + b'obsolescence': False, + b'phases': True, + b'tagsfnodescache': True, + b'revbranchcache': True, + }, b'streamv2': { b'changegroup': False, b'cg.version': b'02', @@ -96,6 +140,15 @@ b'tagsfnodescache': False, b'revbranchcache': False, }, + b'streamv3-exp': { + b'changegroup': False, + b'cg.version': b'03', + b'obsolescence': False, + b'phases': False, + b"streamv3-exp": True, + b'tagsfnodescache': False, + b'revbranchcache': False, + }, b'packed1': { b'cg.version': b's1', }, @@ -265,19 +318,19 @@ ) # Compute contentopts based on the version - if b"stream" in params and params[b"stream"] == b"v2": - # That case is fishy as this mostly derails the version selection + if b"stream" in params: + # This case is fishy as this mostly derails the version selection # mechanism. `stream` bundles are quite specific and used differently # as "normal" bundles. # - # So we are pinning this to "v2", as this will likely be - # compatible forever. (see the next conditional). - # # (we should probably define a cleaner way to do this and raise a - # warning when the old way is encounter) - version = b"streamv2" + # warning when the old way is encountered) + if params[b"stream"] == b"v2": + version = b"streamv2" + if params[b"stream"] == b"v3-exp": + version = b"streamv3-exp" contentopts = _bundlespeccontentopts.get(version, {}).copy() - if version == b"streamv2": + if version == b"streamv2" or version == b"streamv3-exp": # streamv2 have been reported as "v2" for a while. 
version = b"v2" @@ -335,7 +388,10 @@ if ( bundlespec.wirecompression == b'UN' and bundlespec.wireversion == b'02' - and bundlespec.contentopts.get(b'streamv2') + and ( + bundlespec.contentopts.get(b'streamv2') + or bundlespec.contentopts.get(b'streamv3-exp') + ) ): return True diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/bundlerepo.py --- a/mercurial/bundlerepo.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/bundlerepo.py Thu Jun 22 11:36:37 2023 +0200 @@ -484,8 +484,8 @@ def cancopy(self): return False - def peer(self, path=None): - return bundlepeer(self, path=path) + def peer(self, path=None, remotehidden=False): + return bundlepeer(self, path=path, remotehidden=remotehidden) def getcwd(self): return encoding.getcwd() # always outside the repo diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/chgserver.py --- a/mercurial/chgserver.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/chgserver.py Thu Jun 22 11:36:37 2023 +0200 @@ -236,7 +236,7 @@ # will behave differently (i.e. write to stdout). 
if ( out is not self.fout - or not util.safehasattr(self.fout, b'fileno') + or not util.safehasattr(self.fout, 'fileno') or self.fout.fileno() != procutil.stdout.fileno() or self._finoutredirected ): @@ -262,7 +262,7 @@ newui = srcui.__class__.load() for a in [b'fin', b'fout', b'ferr', b'environ']: setattr(newui, a, getattr(srcui, a)) - if util.safehasattr(srcui, b'_csystem'): + if util.safehasattr(srcui, '_csystem'): newui._csystem = srcui._csystem # command line args @@ -603,7 +603,7 @@ } ) - if util.safehasattr(procutil, b'setprocname'): + if util.safehasattr(procutil, 'setprocname'): def setprocname(self): """Change process title""" diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/cmdutil.py --- a/mercurial/cmdutil.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/cmdutil.py Thu Jun 22 11:36:37 2023 +0200 @@ -1450,7 +1450,7 @@ if returnrevlog: if isinstance(r, revlog.revlog): pass - elif util.safehasattr(r, b'_revlog'): + elif util.safehasattr(r, '_revlog'): r = r._revlog # pytype: disable=attribute-error elif r is not None: raise error.InputError( @@ -2754,7 +2754,6 @@ def cat(ui, repo, ctx, matcher, basefm, fntemplate, prefix, **opts): err = 1 - opts = pycompat.byteskwargs(opts) def write(path): filename = None @@ -2768,7 +2767,7 @@ except OSError: pass with formatter.maybereopen(basefm, filename) as fm: - _updatecatformatter(fm, ctx, matcher, path, opts.get(b'decode')) + _updatecatformatter(fm, ctx, matcher, path, opts.get('decode')) # Automation often uses hg cat on single files, so special case it # for performance to avoid the cost of parsing the manifest. 
@@ -2803,7 +2802,7 @@ basefm, fntemplate, subprefix, - **pycompat.strkwargs(opts), + **opts, ): err = 0 except error.RepoLookupError: diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/commands.py --- a/mercurial/commands.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/commands.py Thu Jun 22 11:36:37 2023 +0200 @@ -69,6 +69,7 @@ ) from .utils import ( dateutil, + procutil, stringutil, urlutil, ) @@ -1665,6 +1666,14 @@ scmutil.nochangesfound(ui, repo, not base and excluded) return 1 + # internal changeset are internal implementation details that should not + # leave the repository. Bundling with `hg bundle` create such risk. + bundled_internal = repo.revs(b"%ln and _internal()", missing) + if bundled_internal: + msg = _(b"cannot bundle internal changesets") + hint = _(b"%d internal changesets selected") % len(bundled_internal) + raise error.Abort(msg, hint=hint) + if heads: outgoing = discovery.outgoing( repo, missingroots=missing, ancestorsof=heads @@ -1714,8 +1723,9 @@ bundlespec.set_param( b'obsolescence-mandatory', obs_mand_cfg, overwrite=False ) - phases_cfg = cfg(b'experimental', b'bundle-phases') - bundlespec.set_param(b'phases', phases_cfg, overwrite=False) + if not bundlespec.params.get(b'phases', False): + phases_cfg = cfg(b'experimental', b'bundle-phases') + bundlespec.set_param(b'phases', phases_cfg, overwrite=False) bundle2.writenewbundle( ui, @@ -3529,22 +3539,20 @@ """ cmdutil.check_incompatible_arguments(opts, 'all_files', ['all', 'diff']) - opts = pycompat.byteskwargs(opts) - diff = opts.get(b'all') or opts.get(b'diff') - follow = opts.get(b'follow') - if opts.get(b'all_files') is None and not diff: - opts[b'all_files'] = True + + diff = opts.get('all') or opts.get('diff') + follow = opts.get('follow') + if opts.get('all_files') is None and not diff: + opts['all_files'] = True plaingrep = ( - opts.get(b'all_files') - and not opts.get(b'rev') - and not opts.get(b'follow') + opts.get('all_files') and not opts.get('rev') and not opts.get('follow') ) - 
all_files = opts.get(b'all_files') + all_files = opts.get('all_files') if plaingrep: - opts[b'rev'] = [b'wdir()'] + opts['rev'] = [b'wdir()'] reflags = re.M - if opts.get(b'ignore_case'): + if opts.get('ignore_case'): reflags |= re.I try: regexp = util.re.compile(pattern, reflags) @@ -3555,7 +3563,7 @@ ) return 1 sep, eol = b':', b'\n' - if opts.get(b'print0'): + if opts.get('print0'): sep = eol = b'\0' searcher = grepmod.grepsearcher( @@ -3603,7 +3611,7 @@ b'linenumber', b'%d', l.linenum, - opts.get(b'line_number'), + opts.get('line_number'), b'', ), ] @@ -3625,14 +3633,14 @@ b'user', b'%s', formatuser(ctx.user()), - opts.get(b'user'), + opts.get('user'), b'', ), ( b'date', b'%s', fm.formatdate(ctx.date(), datefmt), - opts.get(b'date'), + opts.get('date'), b'', ), ] @@ -3643,15 +3651,15 @@ field = fieldnamemap.get(name, name) label = extra_label + (b'grep.%s' % name) fm.condwrite(cond, field, fmt, data, label=label) - if not opts.get(b'files_with_matches'): + if not opts.get('files_with_matches'): fm.plain(sep, label=b'grep.sep') - if not opts.get(b'text') and binary(): + if not opts.get('text') and binary(): fm.plain(_(b" Binary file matches")) else: displaymatches(fm.nested(b'texts', tmpl=b'{text}'), l) fm.plain(eol) found = True - if opts.get(b'files_with_matches'): + if opts.get('files_with_matches'): break return found @@ -3677,9 +3685,9 @@ wopts = logcmdutil.walkopts( pats=pats, opts=opts, - revspec=opts[b'rev'], - include_pats=opts[b'include'], - exclude_pats=opts[b'exclude'], + revspec=opts['rev'], + include_pats=opts['include'], + exclude_pats=opts['exclude'], follow=follow, force_changelog_traversal=all_files, filter_revisions_by_pats=not all_files, @@ -3687,7 +3695,7 @@ revs, makefilematcher = logcmdutil.makewalker(repo, wopts) ui.pager(b'grep') - fm = ui.formatter(b'grep', opts) + fm = ui.formatter(b'grep', pycompat.byteskwargs(opts)) for fn, ctx, pstates, states in searcher.searchfiles(revs, makefilematcher): r = display(fm, fn, ctx, pstates, states) 
found = found or r @@ -5395,6 +5403,12 @@ _(b'a specific branch you would like to pull'), _(b'BRANCH'), ), + ( + b'', + b'remote-hidden', + False, + _(b"include changesets hidden on the remote (EXPERIMENTAL)"), + ), ] + remoteopts, _(b'[-u] [-f] [-r REV]... [-e CMD] [--remotecmd CMD] [SOURCE]...'), @@ -5432,6 +5446,14 @@ Specifying bookmark as ``.`` is equivalent to specifying the active bookmark's name. + .. container:: verbose + + One can use the `--remote-hidden` flag to pull changesets + hidden on the remote. This flag is "best effort", and will only + work if the server supports the feature and is configured to + allow the user to access hidden changesets. This option is + experimental and backwards compatibility is not guaranteed. + Returns 0 on success, 1 if an update had unresolved files. """ @@ -5446,12 +5468,16 @@ for path in urlutil.get_pull_paths(repo, ui, sources): ui.status(_(b'pulling from %s\n') % urlutil.hidepassword(path.loc)) ui.flush() - other = hg.peer(repo, opts, path) + other = hg.peer(repo, opts, path, remotehidden=opts[b'remote_hidden']) update_conflict = None try: branches = (path.branch, opts.get(b'branch', [])) revs, checkout = hg.addbranchrevs( - repo, other, branches, opts.get(b'rev') + repo, + other, + branches, + opts.get(b'rev'), + remotehidden=opts[b'remote_hidden'], ) pullopargs = {} @@ -6644,7 +6670,25 @@ raise error.RepoError( _(b"there is no Mercurial repository here (.hg not found)") ) - s = wireprotoserver.sshserver(ui, repo) + accesshidden = False + if repo.filtername is None: + allow = ui.configlist( + b'experimental', b'server.allow-hidden-access' + ) + user = procutil.getuser() + if allow and scmutil.ismember(ui, user, allow): + accesshidden = True + else: + msg = ( + _( + b'ignoring request to access hidden changeset by ' + b'unauthorized user: %s\n' + ) + % user + ) + ui.warn(msg) + + s = wireprotoserver.sshserver(ui, repo, accesshidden=accesshidden) s.serve_forever() return diff -r 41b9eb302d95 -r 9a4db474ef1a
mercurial/commandserver.py --- a/mercurial/commandserver.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/commandserver.py Thu Jun 22 11:36:37 2023 +0200 @@ -332,7 +332,7 @@ # any kind of interaction must use server channels, but chg may # replace channels by fully functional tty files. so nontty is # enforced only if cin is a channel. - if not util.safehasattr(self.cin, b'fileno'): + if not util.safehasattr(self.cin, 'fileno'): ui.setconfig(b'ui', b'nontty', b'true', b'commandserver') req = dispatch.request( @@ -384,7 +384,7 @@ if self.cmsg: hellomsg += b'message-encoding: %s\n' % self.cmsg.encoding hellomsg += b'pid: %d' % procutil.getpid() - if util.safehasattr(os, b'getpgid'): + if util.safehasattr(os, 'getpgid'): hellomsg += b'\n' hellomsg += b'pgid: %d' % os.getpgid(0) @@ -559,7 +559,7 @@ self.ui = ui self.repo = repo self.address = opts[b'address'] - if not util.safehasattr(socket, b'AF_UNIX'): + if not util.safehasattr(socket, 'AF_UNIX'): raise error.Abort(_(b'unsupported platform')) if not self.address: raise error.Abort(_(b'no socket path specified with --address')) @@ -588,7 +588,7 @@ o = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM) self._mainipc, self._workeripc = o self._servicehandler.bindsocket(self._sock, self.address) - if util.safehasattr(procutil, b'unblocksignal'): + if util.safehasattr(procutil, 'unblocksignal'): procutil.unblocksignal(signal.SIGCHLD) o = signal.signal(signal.SIGCHLD, self._sigchldhandler) self._oldsigchldhandler = o diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/configitems.py --- a/mercurial/configitems.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/configitems.py Thu Jun 22 11:36:37 2023 +0200 @@ -975,7 +975,7 @@ coreconfigitem( b'experimental', b'changegroup3', - default=False, + default=True, ) coreconfigitem( b'experimental', @@ -1248,6 +1248,11 @@ ) coreconfigitem( b'experimental', + b'server.allow-hidden-access', + default=list, +) +coreconfigitem( + b'experimental', 
b'server.filesdata.recommended-batch-size', default=50000, ) @@ -1293,6 +1298,11 @@ ) coreconfigitem( b'experimental', + b'stream-v3', + default=False, +) +coreconfigitem( + b'experimental', b'treemanifest', default=False, ) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/crecord.py --- a/mercurial/crecord.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/crecord.py Thu Jun 22 11:36:37 2023 +0200 @@ -573,7 +573,7 @@ ui.write(_(b'starting interactive selection\n')) chunkselector = curseschunkselector(headerlist, ui, operation) origsigtstp = sentinel = object() - if util.safehasattr(signal, b'SIGTSTP'): + if util.safehasattr(signal, 'SIGTSTP'): origsigtstp = signal.getsignal(signal.SIGTSTP) try: with util.with_lc_ctype(): @@ -1944,7 +1944,7 @@ """ origsigwinch = sentinel = object() - if util.safehasattr(signal, b'SIGWINCH'): + if util.safehasattr(signal, 'SIGWINCH'): origsigwinch = signal.signal(signal.SIGWINCH, self.sigwinchhandler) try: return self._main(stdscr) @@ -1990,7 +1990,7 @@ ) # newwin([height, width,] begin_y, begin_x) self.statuswin = curses.newwin(self.numstatuslines, 0, 0, 0) - self.statuswin.keypad(1) # interpret arrow-key, etc. esc sequences + self.statuswin.keypad(True) # interpret arrow-key, etc. 
esc sequences # figure out how much space to allocate for the chunk-pad which is # used for displaying the patch diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/debugcommands.py --- a/mercurial/debugcommands.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/debugcommands.py Thu Jun 22 11:36:37 2023 +0200 @@ -50,6 +50,7 @@ error, exchange, extensions, + filelog, filemerge, filesetlang, formatter, @@ -58,6 +59,7 @@ localrepo, lock as lockmod, logcmdutil, + manifest, mergestate as mergestatemod, metadata, obsolete, @@ -93,6 +95,7 @@ wireprotoserver, ) from .interfaces import repository +from .stabletailgraph import stabletailsort from .utils import ( cborutil, compression, @@ -614,6 +617,10 @@ Stream bundles are special bundles that are essentially archives of revlog files. They are commonly used for cloning very quickly. + + This command creates a "version 1" stream clone, which is deprecated in + favor of newer versions of the stream protocol. Bundles using such newer + versions can be generated using the `hg bundle` command. """ # TODO we may want to turn this into an abort when this functionality # is moved into `hg bundle`. 
@@ -708,10 +715,12 @@ opts = pycompat.byteskwargs(opts) if opts.get(b'changelog') or opts.get(b'manifest') or opts.get(b'dir'): if rev is not None: - raise error.CommandError(b'debugdata', _(b'invalid arguments')) + raise error.InputError( + _(b'cannot specify a revision with other arguments') + ) file_, rev = None, file_ elif rev is None: - raise error.CommandError(b'debugdata', _(b'invalid arguments')) + raise error.InputError(_(b'please specify a revision')) r = cmdutil.openstorage(repo, b'debugdata', file_, opts) try: ui.write(r.rawdata(r.lookup(rev))) @@ -1273,7 +1282,7 @@ if opts.get(b'old'): def doit(pushedrevs, remoteheads, remote=remote): - if not util.safehasattr(remote, b'branches'): + if not util.safehasattr(remote, 'branches'): # enable in-client legacy support remote = localrepo.locallegacypeer(remote.local()) if remote_revs: @@ -1713,7 +1722,7 @@ if fm.isplain(): def formatvalue(value): - if util.safehasattr(value, b'startswith'): + if util.safehasattr(value, 'startswith'): return value if value: return b'yes' @@ -1840,7 +1849,7 @@ bundle2.writebundle(ui, bundle, bundlepath, bundletype) -@command(b'debugignore', [], b'[FILE]') +@command(b'debugignore', [], b'[FILE]...') def debugignore(ui, repo, *files, **opts): """display the combined ignore pattern and information about ignored files @@ -1902,7 +1911,7 @@ fm = ui.formatter(b'debugindex', opts) - revlog = getattr(store, b'_revlog', store) + revlog = getattr(store, '_revlog', store) return revlog_debug.debug_index( ui, @@ -1938,7 +1947,7 @@ """show stats related to the changelog index""" repo.changelog.shortest(repo.nullid, 1) index = repo.changelog.index - if not util.safehasattr(index, b'stats'): + if not util.safehasattr(index, 'stats'): raise error.Abort(_(b'debugindexstats only works with native code')) for k, v in sorted(index.stats().items()): ui.write(b'%s: %d\n' % (k, v)) @@ -2599,56 +2608,64 @@ @command( b'debugnodemap', - [ - ( - b'', - b'dump-new', - False, - _(b'write a (new) persistent 
binary nodemap on stdout'), - ), - (b'', b'dump-disk', False, _(b'dump on-disk data on stdout')), - ( - b'', - b'check', - False, - _(b'check that the data on disk data are correct.'), - ), - ( - b'', - b'metadata', - False, - _(b'display the on disk meta data for the nodemap'), - ), - ], + ( + cmdutil.debugrevlogopts + + [ + ( + b'', + b'dump-new', + False, + _(b'write a (new) persistent binary nodemap on stdout'), + ), + (b'', b'dump-disk', False, _(b'dump on-disk data on stdout')), + ( + b'', + b'check', + False, + _(b'check that the data on disk data are correct.'), + ), + ( + b'', + b'metadata', + False, + _(b'display the on disk meta data for the nodemap'), + ), + ] + ), + _(b'-c|-m|FILE'), ) -def debugnodemap(ui, repo, **opts): +def debugnodemap(ui, repo, file_=None, **opts): """write and inspect on disk nodemap""" + if opts.get('changelog') or opts.get('manifest') or opts.get('dir'): + if file_ is not None: + raise error.InputError( + _(b'cannot specify a file with other arguments') + ) + elif file_ is None: + opts['changelog'] = True + r = cmdutil.openstorage( + repo.unfiltered(), b'debugnodemap', file_, pycompat.byteskwargs(opts) + ) + if isinstance(r, (manifest.manifestrevlog, filelog.filelog)): + r = r._revlog if opts['dump_new']: - unfi = repo.unfiltered() - cl = unfi.changelog - if util.safehasattr(cl.index, "nodemap_data_all"): - data = cl.index.nodemap_data_all() + if util.safehasattr(r.index, "nodemap_data_all"): + data = r.index.nodemap_data_all() else: - data = nodemap.persistent_data(cl.index) + data = nodemap.persistent_data(r.index) ui.write(data) elif opts['dump_disk']: - unfi = repo.unfiltered() - cl = unfi.changelog - nm_data = nodemap.persisted_data(cl) + nm_data = nodemap.persisted_data(r) if nm_data is not None: docket, data = nm_data ui.write(data[:]) elif opts['check']: - unfi = repo.unfiltered() - cl = unfi.changelog - nm_data = nodemap.persisted_data(cl) + nm_data = nodemap.persisted_data(r) if nm_data is not None: docket, data = 
nm_data - return nodemap.check_data(ui, cl.index, data) + return nodemap.check_data(ui, r.index, data) elif opts['metadata']: - unfi = repo.unfiltered() - cl = unfi.changelog - nm_data = nodemap.persisted_data(cl) + nm_data = nodemap.persisted_data(r) if nm_data is not None: docket, data = nm_data ui.write((b"uid: %s\n") % docket.uid) @@ -3552,10 +3569,12 @@ opts = pycompat.byteskwargs(opts) if opts.get(b'changelog') or opts.get(b'manifest') or opts.get(b'dir'): if rev is not None: - raise error.CommandError(b'debugdata', _(b'invalid arguments')) + raise error.InputError( + _(b'cannot specify a revision with other arguments') + ) file_, rev = None, file_ elif rev is None: - raise error.CommandError(b'debugdata', _(b'invalid arguments')) + raise error.InputError(_(b'please specify a revision')) r = cmdutil.openstorage(repo, b'debugdata', file_, opts) r = getattr(r, '_revlog', r) try: @@ -3644,6 +3663,60 @@ @command( + b'debug::stable-tail-sort', + [ + ( + b'T', + b'template', + b'{rev}\n', + _(b'display with template'), + _(b'TEMPLATE'), + ), + ], + b'REV', +) +def debug_stable_tail_sort(ui, repo, revspec, template, **opts): + """display the stable-tail sort of the ancestors of a given node""" + rev = logcmdutil.revsingle(repo, revspec).rev() + cl = repo.changelog + + displayer = logcmdutil.maketemplater(ui, repo, template) + sorted_revs = stabletailsort._stable_tail_sort_naive(cl, rev) + for ancestor_rev in sorted_revs: + displayer.show(repo[ancestor_rev]) + + +@command( + b'debug::stable-tail-sort-leaps', + [ + ( + b'T', + b'template', + b'{rev}', + _(b'display with template'), + _(b'TEMPLATE'), + ), + (b's', b'specific', False, _(b'restrict to specific leaps')), + ], + b'REV', +) +def debug_stable_tail_sort_leaps(ui, repo, rspec, template, specific, **opts): + """display the leaps in the stable-tail sort of a node, one per line""" + rev = logcmdutil.revsingle(repo, rspec).rev() + + if specific: + get_leaps = stabletailsort._find_specific_leaps_naive + else: + 
get_leaps = stabletailsort._find_all_leaps_naive + + displayer = logcmdutil.maketemplater(ui, repo, template) + for source, target in get_leaps(repo.changelog, rev): + displayer.show(repo[source]) + displayer.show(repo[target]) + ui.write(b'\n') + + +@command( b"debugbackupbundle", [ ( @@ -4512,7 +4585,7 @@ peer = None else: ui.write(_(b'creating ssh peer from handshake results\n')) - peer = sshpeer.makepeer( + peer = sshpeer._make_peer( ui, url, proc, @@ -4568,7 +4641,7 @@ ) else: peer_path = urlutil.try_path(ui, path) - peer = httppeer.makepeer(ui, peer_path, opener=opener) + peer = httppeer._make_peer(ui, peer_path, opener=opener) # We /could/ populate stdin/stdout with sock.makefile()... else: diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/dirstate.py --- a/mercurial/dirstate.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/dirstate.py Thu Jun 22 11:36:37 2023 +0200 @@ -1760,12 +1760,6 @@ return list(files) return [f for f in dmap if match(f)] - def _actualfilename(self, tr): - if tr: - return self._pendingfilename - else: - return self._filename - def all_file_names(self): """list all filename currently used by this dirstate diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/dirstatemap.py --- a/mercurial/dirstatemap.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/dirstatemap.py Thu Jun 22 11:36:37 2023 +0200 @@ -380,7 +380,7 @@ return # TODO: adjust this estimate for dirstate-v2 - if util.safehasattr(parsers, b'dict_new_presized'): + if util.safehasattr(parsers, 'dict_new_presized'): # Make an estimate of the number of files in the dirstate based on # its size. This trades wasting some memory for avoiding costly # resizes. 
Each entry have a prefix of 17 bytes followed by one or diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/discovery.py --- a/mercurial/discovery.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/discovery.py Thu Jun 22 11:36:37 2023 +0200 @@ -104,14 +104,14 @@ if ancestorsof is None: ancestorsof = cl.heads() if missingroots: - discbases = [] - for n in missingroots: - discbases.extend([p for p in cl.parents(n) if p != repo.nullid]) # TODO remove call to nodesbetween. # TODO populate attributes on outgoing instance instead of setting # discbases. csets, roots, heads = cl.nodesbetween(missingroots, ancestorsof) included = set(csets) + discbases = [] + for n in csets: + discbases.extend([p for p in cl.parents(n) if p != repo.nullid]) ancestorsof = heads commonheads = [n for n in discbases if n not in included] elif not commonheads: diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/dispatch.py --- a/mercurial/dispatch.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/dispatch.py Thu Jun 22 11:36:37 2023 +0200 @@ -107,7 +107,7 @@ def _flushstdio(ui, err): status = None # In all cases we try to flush stdio streams. - if util.safehasattr(ui, b'fout'): + if util.safehasattr(ui, 'fout'): assert ui is not None # help pytype assert ui.fout is not None # help pytype try: @@ -116,7 +116,7 @@ err = e status = -1 - if util.safehasattr(ui, b'ferr'): + if util.safehasattr(ui, 'ferr'): assert ui is not None # help pytype assert ui.ferr is not None # help pytype try: @@ -331,7 +331,7 @@ ui = req.ui try: - for name in b'SIGBREAK', b'SIGHUP', b'SIGTERM': + for name in 'SIGBREAK', 'SIGHUP', 'SIGTERM': num = getattr(signal, name, None) if num: signal.signal(num, catchterm) @@ -367,12 +367,18 @@ # shenanigans wherein a user does something like pass # --debugger or --config=ui.debugger=1 as a repo # name. This used to actually run the debugger. 
+ nbargs = 4 + hashiddenaccess = b'--hidden' in cmdargs + if hashiddenaccess: + nbargs += 1 if ( - len(req.args) != 4 + len(req.args) != nbargs or req.args[0] != b'-R' or req.args[1].startswith(b'--') or req.args[2] != b'serve' or req.args[3] != b'--stdio' + or hashiddenaccess + and req.args[4] != b'--hidden' ): raise error.Abort( _(b'potentially unsafe serve --stdio invocation: %s') @@ -514,7 +520,7 @@ def aliasargs(fn, givenargs): args = [] # only care about alias 'args', ignore 'args' set by extensions.wrapfunction - if not util.safehasattr(fn, b'_origfunc'): + if not util.safehasattr(fn, '_origfunc'): args = getattr(fn, 'args', args) if args: cmd = b' '.join(map(procutil.shellquote, args)) @@ -702,7 +708,7 @@ } if name not in adefaults: raise AttributeError(name) - if self.badalias or util.safehasattr(self, b'shell'): + if self.badalias or util.safehasattr(self, 'shell'): return adefaults[name] return getattr(self.fn, name) @@ -728,7 +734,7 @@ self.name, self.definition, ) - if util.safehasattr(self, b'shell'): + if util.safehasattr(self, 'shell'): return self.fn(ui, *args, **opts) else: try: @@ -1018,7 +1024,7 @@ cmd = aliases[0] fn = entry[0] - if cmd and util.safehasattr(fn, b'shell'): + if cmd and util.safehasattr(fn, 'shell'): # shell alias shouldn't receive early options which are consumed by hg _earlyopts, args = _earlysplitopts(args) d = lambda: fn(ui, *args[1:]) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/encoding.py --- a/mercurial/encoding.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/encoding.py Thu Jun 22 11:36:37 2023 +0200 @@ -657,7 +657,7 @@ pass s = pycompat.bytestr(s) - r = b"" + r = bytearray() pos = 0 l = len(s) while pos < l: @@ -673,7 +673,7 @@ c = unichr(0xDC00 + ord(s[pos])).encode('utf-8', _utf8strict) pos += 1 r += c - return r + return bytes(r) def fromutf8b(s): @@ -712,7 +712,7 @@ # helper again to walk the string without "decoding" it. 
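The `dispatch.py` hunk above extends the whitelist check for the locked-down `hg -R <repo> serve --stdio` invocation so that a trailing `--hidden` is also accepted. Simplified to plain strings (the real check operates on the `bytes` values in `req.args`), the shape of the validation is:

```python
def is_safe_stdio_serve(args):
    # Accept exactly: -R <repo> serve --stdio [--hidden]
    nbargs = 4
    hashiddenaccess = '--hidden' in args
    if hashiddenaccess:
        nbargs += 1
    if (
        len(args) != nbargs
        or args[0] != '-R'
        or args[1].startswith('--')  # repo name must not look like a flag
        or args[2] != 'serve'
        or args[3] != '--stdio'
        or (hashiddenaccess and args[4] != '--hidden')
    ):
        return False
    return True

assert is_safe_stdio_serve(['-R', 'repo', 'serve', '--stdio'])
assert is_safe_stdio_serve(['-R', 'repo', 'serve', '--stdio', '--hidden'])
# extra or flag-like arguments are rejected as potentially unsafe
assert not is_safe_stdio_serve(['-R', '--config=ui.debugger=1', 'serve', '--stdio'])
assert not is_safe_stdio_serve(['-R', 'repo', 'serve', '--stdio', '--debugger'])
```

The function name is illustrative only; in Mercurial the check is inline and aborts with "potentially unsafe serve --stdio invocation" on failure.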
s = pycompat.bytestr(s) - r = b"" + r = bytearray() pos = 0 l = len(s) while pos < l: @@ -722,4 +722,4 @@ if b"\xed\xb0\x80" <= c <= b"\xed\xb3\xbf": c = pycompat.bytechr(ord(c.decode("utf-8", _utf8strict)) & 0xFF) r += c - return r + return bytes(r) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/exchange.py --- a/mercurial/exchange.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/exchange.py Thu Jun 22 11:36:37 2023 +0200 @@ -146,6 +146,12 @@ splitted = requirements.split() params = bundle2._formatrequirementsparams(splitted) return b'none-v2;stream=v2;%s' % params + elif part.type == b'stream3-exp' and version is None: + # A stream3 part requires to be part of a v2 bundle + requirements = urlreq.unquote(part.params[b'requirements']) + splitted = requirements.split() + params = bundle2._formatrequirementsparams(splitted) + return b'none-v2;stream=v3-exp;%s' % params elif part.type == b'obsmarkers': params[b'obsolescence'] = b'yes' if not part.mandatory: @@ -1637,7 +1643,7 @@ # We allow the narrow patterns to be passed in explicitly to provide more # flexibility for API consumers. - if includepats or excludepats: + if includepats is not None or excludepats is not None: includepats = includepats or set() excludepats = excludepats or set() else: @@ -2421,7 +2427,7 @@ return info, bundler.getchunks() -@getbundle2partsgenerator(b'stream2') +@getbundle2partsgenerator(b'stream') def _getbundlestream2(bundler, repo, *args, **kwargs): return bundle2.addpartbundlestream2(bundler, repo, **kwargs) @@ -2828,7 +2834,7 @@ url = entries[0][b'URL'] repo.ui.status(_(b'applying clone bundle from %s\n') % url) - if trypullbundlefromurl(repo.ui, repo, url): + if trypullbundlefromurl(repo.ui, repo, url, remote): repo.ui.status(_(b'finished applying clone bundle\n')) # Bundle failed. 
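The `encoding.py` hunks above replace `r = b""` accumulation with a `bytearray`. Appending to an immutable `bytes` object copies the whole buffer on every `+=`, making an n-byte result cost O(n²); a `bytearray` grows in place, and a single `bytes(r)` at the end restores the immutable type the callers expect. A small illustration of the pattern (the function name is made up for the example):

```python
def accumulate_bytes(chunks):
    # Same accumulation shape as the patched toutf8b/fromutf8b:
    # grow a mutable buffer, convert to bytes once at the end.
    r = bytearray()
    for c in chunks:
        r += c
    return bytes(r)

assert accumulate_bytes([b'\xed\xb0\x80', b'ab']) == b'\xed\xb0\x80ab'
assert isinstance(accumulate_bytes([]), bytes)
```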
# @@ -2849,11 +2855,22 @@ ) -def trypullbundlefromurl(ui, repo, url): +def inline_clone_bundle_open(ui, url, peer): + if not peer: + raise error.Abort(_(b'no remote repository supplied for %s' % url)) + clonebundleid = url[len(bundlecaches.CLONEBUNDLESCHEME) :] + peerclonebundle = peer.get_cached_bundle_inline(clonebundleid) + return util.chunkbuffer(peerclonebundle) + + +def trypullbundlefromurl(ui, repo, url, peer): """Attempt to apply a bundle from a URL.""" with repo.lock(), repo.transaction(b'bundleurl') as tr: try: - fh = urlmod.open(ui, url) + if url.startswith(bundlecaches.CLONEBUNDLESCHEME): + fh = inline_clone_bundle_open(ui, url, peer) + else: + fh = urlmod.open(ui, url) cg = readbundle(ui, fh, b'stream') if isinstance(cg, streamclone.streamcloneapplier): diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/filelog.py --- a/mercurial/filelog.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/filelog.py Thu Jun 22 11:36:37 2023 +0200 @@ -42,6 +42,15 @@ opts = opener.options self._fix_issue6528 = opts.get(b'issue6528.fix-incoming', True) + def get_revlog(self): + """return an actual revlog instance if any + + This exists because a lot of code leverages the fact that the underlying + storage is a revlog for optimization, so giving a simple way to access + the revlog instance helps such code.
+ """ + return self._revlog + def __len__(self): return len(self._revlog) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/help.py --- a/mercurial/help.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/help.py Thu Jun 22 11:36:37 2023 +0200 @@ -810,7 +810,7 @@ doc = gettext(pycompat.getdoc(entry[0])) if not doc: doc = _(b"(no help text available)") - if util.safehasattr(entry[0], b'definition'): # aliased command + if util.safehasattr(entry[0], 'definition'): # aliased command source = entry[0].source if entry[0].definition.startswith(b'!'): # shell alias doc = _(b'shell alias for: %s\n\n%s\n\ndefined by: %s\n') % ( diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/helptext/config.txt --- a/mercurial/helptext/config.txt Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/helptext/config.txt Thu Jun 22 11:36:37 2023 +0200 @@ -1318,6 +1318,12 @@ changeset to tag is in ``$HG_NODE``. The name of tag is in ``$HG_TAG``. The tag is local if ``$HG_LOCAL=1``, or in the repository if ``$HG_LOCAL=0``. +``pretransmit-inline-clone-bundle`` + Run before transferring an inline clonebundle to the peer. + If the exit status is 0, the inline clonebundle will be allowed to be + transferred. A non-zero status will cause the transfer to fail. + The path of the inline clonebundle is in ``$HG_CLONEBUNDLEPATH``. + ``pretxnopen`` Run before any new repository transaction is open. The reason for the transaction will be in ``$HG_TXNNAME``, and a unique identifier for the @@ -1622,7 +1628,7 @@ in ``http_proxy.no``. (default: False) ``http`` ----------- +-------- Used to configure access to Mercurial repositories via HTTP. diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/helptext/patterns.txt --- a/mercurial/helptext/patterns.txt Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/helptext/patterns.txt Thu Jun 22 11:36:37 2023 +0200 @@ -18,7 +18,8 @@ current repository root, and when the path points to a directory, it is matched recursively. 
To match all files in a directory non-recursively (not including any files in subdirectories), ``rootfilesin:`` can be used, specifying an -absolute path (relative to the repository root). +absolute path (relative to the repository root). To match a single file exactly, +relative to the repository root, you can use ``filepath:``. To use an extended glob, start a name with ``glob:``. Globs are rooted at the current directory; a glob such as ``*.c`` will only match files @@ -50,11 +51,15 @@ Plain examples:: - path:foo/bar a name bar in a directory named foo in the root - of the repository - path:path:name a file or directory named "path:name" - rootfilesin:foo/bar the files in a directory called foo/bar, but not any files - in its subdirectories and not a file bar in directory foo + path:foo/bar a name bar in a directory named foo in the root + of the repository + path:some/path a file or directory named "some/path" + filepath:some/path/to/a/file exactly a single file named + "some/path/to/a/file", relative to the root + of the repository + rootfilesin:foo/bar the files in a directory called foo/bar, but + not any files in its subdirectories and not + a file bar in directory foo Glob examples:: diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/helptext/rust.txt --- a/mercurial/helptext/rust.txt Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/helptext/rust.txt Thu Jun 22 11:36:37 2023 +0200 @@ -76,8 +76,9 @@ MSRV ==== -The minimum supported Rust version is currently 1.61.0. The project's policy is -to follow the version from Debian testing, to make the distributions' job easier. +The minimum supported Rust version is defined in `rust/clippy.toml`. +The project's policy is to keep it at or below the version from Debian testing, +to make the distributions' job easier. 
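The `patterns.txt` hunk above documents the new `filepath:` pattern next to `rootfilesin:`. As an illustration only (these helpers are not Mercurial's matcher implementation), the two semantics can be sketched like this:

```python
import posixpath

def match_filepath(pattern, path):
    # filepath: matches exactly one file, relative to the repo root.
    return path == pattern

def match_rootfilesin(directory, path):
    # rootfilesin: matches files directly inside the directory,
    # but nothing in its subdirectories.
    return posixpath.dirname(path) == directory

assert match_filepath('some/path/to/a/file', 'some/path/to/a/file')
assert not match_filepath('some/path', 'some/path/extra')
assert match_rootfilesin('foo/bar', 'foo/bar/baz.c')
assert not match_rootfilesin('foo/bar', 'foo/bar/sub/baz.c')
```

`filepath:` is useful when a path like `foo/bar` is ambiguous: `path:` would also match everything under a directory of that name, while `filepath:` names a single file.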
rhg === diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/hg.py --- a/mercurial/hg.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/hg.py Thu Jun 22 11:36:37 2023 +0200 @@ -65,10 +65,10 @@ sharedbookmarks = b'bookmarks' -def addbranchrevs(lrepo, other, branches, revs): +def addbranchrevs(lrepo, other, branches, revs, remotehidden=False): if util.safehasattr(other, 'peer'): # a courtesy to callers using a localrepo for other - peer = other.peer() + peer = other.peer(remotehidden=remotehidden) else: peer = other hashbranch, branches = branches @@ -242,7 +242,15 @@ return repo.filtered(b'visible') -def peer(uiorrepo, opts, path, create=False, intents=None, createopts=None): +def peer( + uiorrepo, + opts, + path, + create=False, + intents=None, + createopts=None, + remotehidden=False, +): '''return a repository peer for the specified path''' ui = getattr(uiorrepo, 'ui', uiorrepo) rui = remoteui(uiorrepo, opts) @@ -260,6 +268,7 @@ create, intents=intents, createopts=createopts, + remotehidden=remotehidden, ) _setup_repo_or_peer(rui, peer) else: @@ -274,7 +283,7 @@ intents=intents, createopts=createopts, ) - peer = repo.peer(path=peer_path) + peer = repo.peer(path=peer_path, remotehidden=remotehidden) return peer @@ -308,7 +317,7 @@ if repo.sharedpath == repo.path: return None - if util.safehasattr(repo, b'srcrepo') and repo.srcrepo: + if util.safehasattr(repo, 'srcrepo') and repo.srcrepo: return repo.srcrepo # the sharedpath always ends in the .hg; we want the path to the repo @@ -1558,7 +1567,7 @@ def remoteui(src, opts): """build a remote ui from ui or repo and opts""" - if util.safehasattr(src, b'baseui'): # looks like a repository + if util.safehasattr(src, 'baseui'): # looks like a repository dst = src.baseui.copy() # drop repo-specific config src = src.ui # copy target options from repo else: # assume it's a global ui object diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/hgweb/common.py --- a/mercurial/hgweb/common.py Thu Jun 22 11:18:47 2023 +0200 +++ 
b/mercurial/hgweb/common.py Thu Jun 22 11:36:37 2023 +0200 @@ -13,6 +13,7 @@ import os import stat +from ..i18n import _ from ..pycompat import ( getattr, open, @@ -20,6 +21,7 @@ from .. import ( encoding, pycompat, + scmutil, templater, util, ) @@ -38,15 +40,33 @@ HTTP_UNSUPPORTED_MEDIA_TYPE = 415 HTTP_SERVER_ERROR = 500 +ismember = scmutil.ismember -def ismember(ui, username, userlist): - """Check if username is a member of userlist. - If userlist has a single '*' member, all users are considered members. - Can be overridden by extensions to provide more complex authorization - schemes. - """ - return userlist == [b'*'] or username in userlist +def hashiddenaccess(repo, req): + if bool(req.qsparams.get(b'access-hidden')): + # Disable this by default for now. Main risk is to get critical + # information exposed through this. This is especially risky if + # someone decided to make a changeset secret for good reason, but + # its predecessors are still draft. + # + # The feature is currently experimental, so we can still decide to + # change the default. + ui = repo.ui + allow = ui.configlist(b'experimental', b'server.allow-hidden-access') + user = req.remoteuser + if allow and ismember(ui, user, allow): + return True + else: + msg = ( + _( + b'ignoring request to access hidden changeset by ' + b'unauthorized user: %r\n' + ) + % user + ) + ui.warn(msg) + return False def checkauthz(hgweb, req, op): diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/hgweb/hgweb_mod.py --- a/mercurial/hgweb/hgweb_mod.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/hgweb/hgweb_mod.py Thu Jun 22 11:36:37 2023 +0200 @@ -39,6 +39,7 @@ ) from .
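The `hgweb/common.py` hunk above moves `ismember` to `scmutil` so the same authorization check can guard hidden-changeset access over both HTTP and SSH. Per the removed docstring, a userlist whose single entry is `'*'` admits everyone; otherwise exact membership is required. A sketch of that semantics (plain strings instead of Mercurial's `ui`/bytes plumbing):

```python
def ismember(username, userlist):
    # A lone '*' entry means every user is a member;
    # otherwise the username must appear explicitly.
    return userlist == ['*'] or username in userlist

assert ismember('alice', ['*'])
assert ismember('alice', ['alice', 'bob'])
assert not ismember('mallory', ['alice', 'bob'])
# '*' only acts as a wildcard when it is the sole entry:
assert not ismember('mallory', ['*', 'alice'])
```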
import ( + common, request as requestmod, webcommands, webutil, ) @@ -124,6 +125,16 @@ self.req = req self.res = res + # Only works if the filter actually supports being upgraded to show + # visible changesets + current_filter = repo.filtername + if ( + common.hashiddenaccess(repo, req) + and current_filter is not None + and current_filter + b'.hidden' in repoview.filtertable + ): + self.repo = self.repo.filtered(repo.filtername + b'.hidden') + self.maxchanges = self.configint(b'web', b'maxchanges') self.stripecount = self.configint(b'web', b'stripes') self.maxshortchanges = self.configint(b'web', b'maxshortchanges') @@ -467,7 +478,7 @@ except (error.LookupError, error.RepoLookupError) as err: msg = pycompat.bytestr(err) - if util.safehasattr(err, b'name') and not isinstance( + if util.safehasattr(err, 'name') and not isinstance( err, error.ManifestLookupError ): msg = b'revision not found: %s' % err.name diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/hgweb/server.py --- a/mercurial/hgweb/server.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/hgweb/server.py Thu Jun 22 11:36:37 2023 +0200 @@ -100,7 +100,7 @@ def log_request(self, code='-', size='-'): xheaders = [] - if util.safehasattr(self, b'headers'): + if util.safehasattr(self, 'headers'): xheaders = [ h for h in self.headers.items() if h[0].startswith('x-') ] @@ -214,7 +214,7 @@ env['wsgi.multithread'] = isinstance( self.server, socketserver.ThreadingMixIn ) - if util.safehasattr(socketserver, b'ForkingMixIn'): + if util.safehasattr(socketserver, 'ForkingMixIn'): env['wsgi.multiprocess'] = isinstance( self.server, socketserver.ForkingMixIn ) @@ -344,7 +344,7 @@ threading.active_count() # silence pyflakes and bypass demandimport _mixin = socketserver.ThreadingMixIn except ImportError: - if util.safehasattr(os, b"fork"): + if util.safehasattr(os, "fork"): _mixin = socketserver.ForkingMixIn else: diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/hgweb/webutil.py --- a/mercurial/hgweb/webutil.py Thu Jun 22 11:18:47
2023 +0200 +++ b/mercurial/hgweb/webutil.py Thu Jun 22 11:36:37 2023 +0200 @@ -211,7 +211,7 @@ b'description': s.description(), b'branch': s.branch(), } - if util.safehasattr(s, b'path'): + if util.safehasattr(s, 'path'): d[b'file'] = s.path() yield d diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/httppeer.py --- a/mercurial/httppeer.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/httppeer.py Thu Jun 22 11:36:37 2023 +0200 @@ -65,7 +65,7 @@ class _multifile: def __init__(self, *fileobjs): for f in fileobjs: - if not util.safehasattr(f, b'length'): + if not util.safehasattr(f, 'length'): raise ValueError( b'_multifile only supports file objects that ' b'have a length but this one does not:', @@ -108,7 +108,14 @@ def makev1commandrequest( - ui, requestbuilder, caps, capablefn, repobaseurl, cmd, args + ui, + requestbuilder, + caps, + capablefn, + repobaseurl, + cmd, + args, + remotehidden=False, ): """Make an HTTP request to run a command for a version 1 client. @@ -127,6 +134,8 @@ ui.debug(b"sending %s command\n" % cmd) q = [(b'cmd', cmd)] + if remotehidden: + q.append(('access-hidden', '1')) headersize = 0 # Important: don't use self.capable() here or else you end up # with infinite recursion when trying to look up capabilities @@ -171,7 +180,7 @@ qs = b'?%s' % urlreq.urlencode(q) cu = b"%s%s" % (repobaseurl, qs) size = 0 - if util.safehasattr(data, b'length'): + if util.safehasattr(data, 'length'): size = data.length elif data is not None: size = len(data) @@ -381,13 +390,16 @@ class httppeer(wireprotov1peer.wirepeer): - def __init__(self, ui, path, url, opener, requestbuilder, caps): - super().__init__(ui, path=path) + def __init__( + self, ui, path, url, opener, requestbuilder, caps, remotehidden=False + ): + super().__init__(ui, path=path, remotehidden=remotehidden) self._url = url self._caps = caps self.limitedarguments = caps is not None and b'httppostargs' not in caps self._urlopener = opener self._requestbuilder = requestbuilder + self._remotehidden = 
remotehidden def __del__(self): for h in self._urlopener.handlers: @@ -429,6 +441,13 @@ def capabilities(self): return self._caps + def _finish_inline_clone_bundle(self, stream): + # HTTP streams must hit the end to process the last empty + # chunk of Chunked-Encoding so the connection can be reused. + chunk = stream.read(1) + if chunk: + self._abort(error.ResponseError(_(b"unexpected response:"), chunk)) + # End of ipeercommands interface. def _callstream(self, cmd, _compressible=False, **args): @@ -442,6 +461,7 @@ self._url, cmd, args, + self._remotehidden, ) resp = sendrequest(self.ui, self._urlopener, req) @@ -592,7 +612,9 @@ return respurl, info -def makepeer(ui, path, opener=None, requestbuilder=urlreq.request): +def _make_peer( + ui, path, opener=None, requestbuilder=urlreq.request, remotehidden=False +): """Construct an appropriate HTTP peer instance. ``opener`` is an ``url.opener`` that should be used to establish @@ -615,11 +637,19 @@ respurl, info = performhandshake(ui, url, opener, requestbuilder) return httppeer( - ui, path, respurl, opener, requestbuilder, info[b'v1capabilities'] + ui, + path, + respurl, + opener, + requestbuilder, + info[b'v1capabilities'], + remotehidden=remotehidden, ) -def make_peer(ui, path, create, intents=None, createopts=None): +def make_peer( + ui, path, create, intents=None, createopts=None, remotehidden=False +): if create: raise error.Abort(_(b'cannot create new http repository')) try: @@ -628,7 +658,7 @@ _(b'Python support for SSL and HTTPS is not installed') ) - inst = makepeer(ui, path) + inst = _make_peer(ui, path, remotehidden=remotehidden) return inst except error.RepoError as httpexception: diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/interfaces/repository.py --- a/mercurial/interfaces/repository.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/interfaces/repository.py Thu Jun 22 11:36:37 2023 +0200 @@ -176,6 +176,12 @@ Returns a set of string capabilities. 
""" + def get_cached_bundle_inline(path): + """Retrieve a clonebundle across the wire. + + Returns a chunkbuffer + """ + def clonebundles(): """Obtains the clone bundles manifest for the repo. @@ -388,7 +394,7 @@ limitedarguments = False - def __init__(self, ui, path=None): + def __init__(self, ui, path=None, remotehidden=False): self.ui = ui self.path = path @@ -1404,6 +1410,14 @@ This one behaves the same way, except for manifest data. """ + def get_revlog(): + """return an actual revlog instance if any + + This exist because a lot of code leverage the fact the underlying + storage is a revlog for optimization, so giving simple way to access + the revlog instance helps such code. + """ + class imanifestlog(interfaceutil.Interface): """Interface representing a collection of manifest snapshots. diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/localrepo.py --- a/mercurial/localrepo.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/localrepo.py Thu Jun 22 11:36:37 2023 +0200 @@ -307,13 +307,17 @@ class localpeer(repository.peer): '''peer for a local repo; reflects only the most recent API''' - def __init__(self, repo, caps=None, path=None): - super(localpeer, self).__init__(repo.ui, path=path) + def __init__(self, repo, caps=None, path=None, remotehidden=False): + super(localpeer, self).__init__( + repo.ui, path=path, remotehidden=remotehidden + ) if caps is None: caps = moderncaps.copy() - self._repo = repo.filtered(b'served') - + if remotehidden: + self._repo = repo.filtered(b'served.hidden') + else: + self._repo = repo.filtered(b'served') if repo._wanted_sidedata: formatted = bundle2.format_remote_wanted_sidedata(repo) caps.add(b'exp-wanted-sidedata=' + formatted) @@ -344,8 +348,12 @@ def capabilities(self): return self._caps + def get_cached_bundle_inline(self, path): + # not needed with local peer + raise NotImplementedError + def clonebundles(self): - return self._repo.tryread(bundlecaches.CB_MANIFEST_FILE) + return bundlecaches.get_manifest(self._repo) def 
debugwireargs(self, one, two, three=None, four=None, five=None): """Used to test argument passing over the wire""" @@ -411,7 +419,7 @@ try: bundle = exchange.readbundle(self.ui, bundle, None) ret = exchange.unbundle(self._repo, bundle, heads, b'push', url) - if util.safehasattr(ret, b'getchunks'): + if util.safehasattr(ret, 'getchunks'): # This is a bundle20 object, turn it into an unbundler. # This little dance should be dropped eventually when the # API is finally improved. @@ -455,8 +463,10 @@ """peer extension which implements legacy methods too; used for tests with restricted capabilities""" - def __init__(self, repo, path=None): - super(locallegacypeer, self).__init__(repo, caps=legacycaps, path=path) + def __init__(self, repo, path=None, remotehidden=False): + super(locallegacypeer, self).__init__( + repo, caps=legacycaps, path=path, remotehidden=remotehidden + ) # Begin of baselegacywirecommands interface. @@ -1450,7 +1460,7 @@ if self.ui.configbool(b'devel', b'all-warnings') or self.ui.configbool( b'devel', b'check-locks' ): - if util.safehasattr(self.svfs, b'vfs'): # this is filtervfs + if util.safehasattr(self.svfs, 'vfs'): # this is filtervfs self.svfs.vfs.audit = self._getsvfsward(self.svfs.vfs.audit) else: # standard vfs self.svfs.audit = self._getsvfsward(self.svfs.audit) @@ -1512,8 +1522,8 @@ repo = rref() if ( repo is None - or not util.safehasattr(repo, b'_wlockref') - or not util.safehasattr(repo, b'_lockref') + or not util.safehasattr(repo, '_wlockref') + or not util.safehasattr(repo, '_lockref') ): return if mode in (None, b'r', b'rb'): @@ -1561,7 +1571,7 @@ def checksvfs(path, mode=None): ret = origfunc(path, mode=mode) repo = rref() - if repo is None or not util.safehasattr(repo, b'_lockref'): + if repo is None or not util.safehasattr(repo, '_lockref'): return if mode in (None, b'r', b'rb'): return @@ -1657,8 +1667,10 @@ parts.pop() return False - def peer(self, path=None): - return localpeer(self, path=path) # not cached to avoid reference 
cycle + def peer(self, path=None, remotehidden=False): + return localpeer( + self, path=path, remotehidden=remotehidden + ) # not cached to avoid reference cycle def unfiltered(self): """Return unfiltered version of the repository @@ -2924,6 +2936,14 @@ if repository.CACHE_MANIFESTLOG_CACHE in caches: self.manifestlog.update_caches(transaction=tr) + for entry in self.store.walk(): + if not entry.is_revlog: + continue + if not entry.is_manifestlog: + continue + manifestrevlog = entry.get_revlog_instance(self).get_revlog() + if manifestrevlog is not None: + manifestrevlog.update_caches(transaction=tr) if repository.CACHE_REV_BRANCH in caches: rbc = unfi.revbranchcache() diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/lock.py --- a/mercurial/lock.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/lock.py Thu Jun 22 11:36:37 2023 +0200 @@ -76,11 +76,11 @@ # save handlers first so they can be restored even if a setup is # interrupted between signal.signal() and orighandlers[] =. for name in [ - b'CTRL_C_EVENT', - b'SIGINT', - b'SIGBREAK', - b'SIGHUP', - b'SIGTERM', + 'CTRL_C_EVENT', + 'SIGINT', + 'SIGBREAK', + 'SIGHUP', + 'SIGTERM', ]: num = getattr(signal, name, None) if num and num not in orighandlers: diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/mail.py --- a/mercurial/mail.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/mail.py Thu Jun 22 11:36:37 2023 +0200 @@ -54,9 +54,9 @@ self._ui = ui self._host = host - def starttls(self, keyfile=None, certfile=None): + def starttls(self, keyfile=None, certfile=None, context=None): if not self.has_extn("starttls"): - msg = b"STARTTLS extension not supported by server" + msg = "STARTTLS extension not supported by server" raise smtplib.SMTPException(msg) (resp, reply) = self.docmd("STARTTLS") if resp == 220: diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/manifest.py --- a/mercurial/manifest.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/manifest.py Thu Jun 22 11:36:37 2023 +0200 @@ -1617,9 +1617,18 @@ self.index = 
self._revlog.index self._generaldelta = self._revlog._generaldelta + def get_revlog(self): + """return an actual revlog instance if any + + This exists because a lot of code leverages the fact that the underlying + storage is a revlog for optimization, so giving a simple way to access + the revlog instance helps such code. + """ + return self._revlog + def _setupmanifestcachehooks(self, repo): """Persist the manifestfulltextcache on lock release""" - if not util.safehasattr(repo, b'_wlockref'): + if not util.safehasattr(repo, '_wlockref'): return self._fulltextcache._opener = repo.wcachevfs diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/match.py --- a/mercurial/match.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/match.py Thu Jun 22 11:36:37 2023 +0200 @@ -30,6 +30,7 @@ b're', b'glob', b'path', + b'filepath', b'relglob', b'relpath', b'relre', @@ -181,6 +182,8 @@ 're:' - a regular expression 'path:' - a path relative to repository root, which is matched recursively + 'filepath:' - an exact path to a single file, relative to the + repository root 'rootfilesin:' - a path relative to repository root, which is matched non-recursively (will not match subdirectories) 'relglob:' - an unrooted glob (*.c matches C files in all dirs) @@ -334,10 +337,18 @@ """Convert 'kind:pat' from the patterns list to tuples with kind and normalized and rooted patterns and with listfiles expanded.""" kindpats = [] + kinds_to_normalize = ( + b'relglob', + b'path', + b'filepath', + b'rootfilesin', + b'rootglob', + ) + for kind, pat in [_patsplit(p, default) for p in patterns]: if kind in cwdrelativepatternkinds: pat = pathutil.canonpath(root, cwd, pat, auditor=auditor) - elif kind in (b'relglob', b'path', b'rootfilesin', b'rootglob'): + elif kind in kinds_to_normalize: pat = util.normpath(pat) elif kind in (b'listfile', b'listfile0'): try: @@ -1340,6 +1351,10 @@ return b'' if kind == b're': return pat + if kind == b'filepath': + raise error.ProgrammingError( + "'filepath:' patterns should not be 
converted to a regex" + ) if kind in (b'path', b'relpath'): if pat == b'.': return b'' @@ -1444,7 +1459,14 @@ """ try: allgroups = [] - regexps = [_regex(k, p, globsuffix) for (k, p, s) in kindpats] + regexps = [] + exact = set() + for (kind, pattern, _source) in kindpats: + if kind == b'filepath': + exact.add(pattern) + continue + regexps.append(_regex(kind, pattern, globsuffix)) + fullregexp = _joinregexes(regexps) startidx = 0 @@ -1469,9 +1491,20 @@ allgroups.append(_joinregexes(group)) allmatchers = [_rematcher(g) for g in allgroups] func = lambda s: any(m(s) for m in allmatchers) - return fullregexp, func + + actualfunc = func + if exact: + # An empty regex will always match, so only call the regex if + # there were any actual patterns to match. + if not regexps: + actualfunc = lambda s: s in exact + else: + actualfunc = lambda s: s in exact or func(s) + return fullregexp, actualfunc except re.error: for k, p, s in kindpats: + if k == b'filepath': + continue try: _rematcher(_regex(k, p, globsuffix)) except re.error: @@ -1502,7 +1535,7 @@ break root.append(p) r.append(b'/'.join(root)) - elif kind in (b'relpath', b'path'): + elif kind in (b'relpath', b'path', b'filepath'): if pat == b'.': pat = b'' r.append(pat) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/mdiff.py --- a/mercurial/mdiff.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/mdiff.py Thu Jun 22 11:36:37 2023 +0200 @@ -213,7 +213,7 @@ if ( opts is None or not opts.xdiff - or not util.safehasattr(bdiff, b'xdiffblocks') + or not util.safehasattr(bdiff, 'xdiffblocks') ): return bdiff.blocks else: diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/patch.py --- a/mercurial/patch.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/patch.py Thu Jun 22 11:36:37 2023 +0200 @@ -168,7 +168,7 @@ mimeheaders = [b'content-type'] - if not util.safehasattr(stream, b'next'): + if not util.safehasattr(stream, 'next'): # http responses, for example, have readline but not next stream = fiter(stream) @@ -1703,7 +1703,7 @@ 
newhunks = [] for c in hunks: - if util.safehasattr(c, b'reversehunk'): + if util.safehasattr(c, 'reversehunk'): c = c.reversehunk() newhunks.append(c) return newhunks diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/phases.py --- a/mercurial/phases.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/phases.py Thu Jun 22 11:36:37 2023 +0200 @@ -154,6 +154,7 @@ internal = 96 # non-continuous for compatibility allphases = (public, draft, secret, archived, internal) trackedphases = (draft, secret, archived, internal) +not_public_phases = trackedphases # record phase names cmdphasenames = [b'public', b'draft', b'secret'] # known to `hg phase` command phasenames = dict(enumerate(cmdphasenames)) @@ -171,6 +172,10 @@ remotehiddenphases = (secret, archived, internal) localhiddenphases = (internal, archived) +all_internal_phases = tuple(p for p in allphases if p & internal) +# We do not want any internal content to exit the repository, ever. +no_bundle_phases = all_internal_phases + def supportinternal(repo): # type: (localrepo.localrepository) -> bool @@ -458,11 +463,11 @@ def replace(self, phcache): """replace all values in 'self' with content of phcache""" for a in ( - b'phaseroots', - b'dirty', - b'opener', - b'_loadedrevslen', - b'_phasesets', + 'phaseroots', + 'dirty', + 'opener', + '_loadedrevslen', + '_phasesets', ): setattr(self, a, getattr(phcache, a)) @@ -826,10 +831,8 @@ cl = repo.changelog headsbyphase = {i: [] for i in allphases} - # No need to keep track of secret phase; any heads in the subset that - # are not mentioned are implicitly secret. 
- for phase in allphases[:secret]: - revset = b"heads(%%ln & %s())" % phasenames[phase] + for phase in allphases: + revset = b"heads(%%ln & _phase(%d))" % phase headsbyphase[phase] = [cl.node(r) for r in repo.revs(revset, subset)] return headsbyphase diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/repair.py --- a/mercurial/repair.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/repair.py Thu Jun 22 11:36:37 2023 +0200 @@ -34,7 +34,14 @@ def backupbundle( - repo, bases, heads, node, suffix, compress=True, obsolescence=True + repo, + bases, + heads, + node, + suffix, + compress=True, + obsolescence=True, + tmp_backup=False, ): """create a bundle with the specified revisions as a backup""" @@ -81,6 +88,7 @@ contentopts, vfs, compression=comp, + allow_internal=tmp_backup, ) @@ -197,6 +205,7 @@ b'temp', compress=False, obsolescence=False, + tmp_backup=True, ) with ui.uninterruptible(): @@ -335,8 +344,26 @@ def _createstripbackup(repo, stripbases, node, topic): # backup the changeset we are about to strip vfs = repo.vfs - cl = repo.changelog - backupfile = backupbundle(repo, stripbases, cl.heads(), node, topic) + unfi = repo.unfiltered() + to_node = unfi.changelog.node + # internal changesets are internal implementation details that should not + # leave the repository and should not be exposed to users. In addition, + # features using them are required to be resistant to strip. See test case + # for more + # details. 
+ all_backup = unfi.revs( + b"(%ln)::(%ld) and not _internal()", + stripbases, + unfi.changelog.headrevs(), + ) + if not all_backup: + return None + + def to_nodes(revs): + return [to_node(r) for r in revs] + + bases = to_nodes(unfi.revs("roots(%ld)", all_backup)) + heads = to_nodes(unfi.revs("heads(%ld)", all_backup)) + backupfile = backupbundle(repo, bases, heads, node, topic) repo.ui.status(_(b"saved backup bundle to %s\n") % vfs.join(backupfile)) repo.ui.log( b"backupbundle", b"saved backup bundle to %s\n", vfs.join(backupfile) @@ -417,12 +444,9 @@ if scmutil.istreemanifest(repo): # This logic is safe if treemanifest isn't enabled, but also # pointless, so we skip it if treemanifest isn't enabled. - for t, unencoded, size in repo.store.datafiles(): - if unencoded.startswith(b'meta/') and unencoded.endswith( - b'00manifest.i' - ): - dir = unencoded[5:-12] - yield repo.manifestlog.getstorage(dir) + for entry in repo.store.data_entries(): + if entry.is_revlog and entry.is_manifestlog: + yield repo.manifestlog.getstorage(entry.target_id) def rebuildfncache(ui, repo, only_data=False): diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/revlog.py --- a/mercurial/revlog.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/revlog.py Thu Jun 22 11:36:37 2023 +0200 @@ -290,6 +290,16 @@ _flagserrorclass = error.RevlogError + @staticmethod + def is_inline_index(header_bytes): + header = INDEX_HEADER.unpack(header_bytes)[0] + + _format_flags = header & ~0xFFFF + _format_version = header & 0xFFFF + + features = FEATURES_BY_VERSION[_format_version] + return features[b'inline'](_format_flags) + def __init__( self, opener, @@ -506,6 +516,97 @@ except FileNotFoundError: return b'' + def get_streams(self, max_linkrev, force_inline=False): + n = len(self) + index = self.index + while n > 0: + linkrev = index[n - 1][4] + if linkrev < max_linkrev: + break + # note: this loop will rarely go through multiple iterations, since + # it only traverses commits created during the current streaming 
+ pull operation. + # + # If this becomes a problem, using a binary search should cap the + # runtime of this. + n = n - 1 + if n == 0: + # no data to send + return [] + index_size = n * index.entry_size + data_size = self.end(n - 1) + + # XXX we might have been split (or stripped) since the object + # initialization. We need to close this race too, by having a way to + # pre-open the files we feed to the revlog and never closing them before + # we are done streaming. + + if self._inline: + + def get_stream(): + with self._indexfp() as fp: + yield None + size = index_size + data_size + if size <= 65536: + yield fp.read(size) + else: + yield from util.filechunkiter(fp, limit=size) + + inline_stream = get_stream() + next(inline_stream) + return [ + (self._indexfile, inline_stream, index_size + data_size), + ] + elif force_inline: + + def get_stream(): + with self._datafp() as fp_d: + yield None + + for rev in range(n): + idx = self.index.entry_binary(rev) + if rev == 0 and self._docket is None: + # re-inject the inline flag + header = self._format_flags + header |= self._format_version + header |= FLAG_INLINE_DATA + header = self.index.pack_header(header) + idx = header + idx + yield idx + yield self._getsegmentforrevs(rev, rev, df=fp_d)[1] + + inline_stream = get_stream() + next(inline_stream) + return [ + (self._indexfile, inline_stream, index_size + data_size), + ] + else: + + def get_index_stream(): + with self._indexfp() as fp: + yield None + if index_size <= 65536: + yield fp.read(index_size) + else: + yield from util.filechunkiter(fp, limit=index_size) + + def get_data_stream(): + with self._datafp() as fp: + yield None + if data_size <= 65536: + yield fp.read(data_size) + else: + yield from util.filechunkiter(fp, limit=data_size) + + index_stream = get_index_stream() + next(index_stream) + data_stream = get_data_stream() + next(data_stream) + return [ + (self._datafile, data_stream, data_size), + (self._indexfile, index_stream, index_size), + ] + def 
_loadindex(self, docket=None): new_header, mmapindexthreshold, force_nodemap = self._init_opts() @@ -663,6 +764,10 @@ # revlog header -> revlog compressor self._decompressors = {} + def get_revlog(self): + """simple function to mirror API of other not-really-revlog API""" + return self + @util.propertycache def revlog_kind(self): return self.target[0] @@ -1782,7 +1887,7 @@ """tells whether rev is a snapshot""" if not self._sparserevlog: return self.deltaparent(rev) == nullrev - elif util.safehasattr(self.index, b'issnapshot'): + elif util.safehasattr(self.index, 'issnapshot'): # directly assign the method to cache the testing and access self.issnapshot = self.index.issnapshot return self.issnapshot(rev) @@ -2076,10 +2181,6 @@ opener = self.opener weak_self = weakref.ref(self) - fncache = getattr(opener, 'fncache', None) - if fncache is not None: - fncache.addignore(new_index_file_path) - # the "split" index replace the real index when the transaction is finalized def finalize_callback(tr): opener.rename( diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/revlogutils/debug.py --- a/mercurial/revlogutils/debug.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/revlogutils/debug.py Thu Jun 22 11:36:37 2023 +0200 @@ -663,27 +663,6 @@ deltacomputer.finddeltainfo(revinfo, fh, target_rev=rev) -def _get_revlogs(repo, changelog: bool, manifest: bool, filelogs: bool): - """yield revlogs from this repository""" - if changelog: - yield repo.changelog - - if manifest: - # XXX: Handle tree manifest - root_mf = repo.manifestlog.getstorage(b'') - assert not root_mf._treeondisk - yield root_mf._revlog - - if filelogs: - files = set() - for rev in repo: - ctx = repo[rev] - files |= set(ctx.files()) - - for f in sorted(files): - yield repo.file(f)._revlog - - def debug_revlog_stats( repo, fm, changelog: bool, manifest: bool, filelogs: bool ): @@ -693,7 +672,17 @@ """ fm.plain(b'rev-count data-size inl type target \n') - for rlog in _get_revlogs(repo, changelog, manifest, filelogs): + 
revlog_entries = [e for e in repo.store.walk() if e.is_revlog] + revlog_entries.sort(key=lambda e: (e.revlog_type, e.target_id)) + + for entry in revlog_entries: + if not changelog and entry.is_changelog: + continue + elif not manifest and entry.is_manifestlog: + continue + elif not filelogs and entry.is_filelog: + continue + rlog = entry.get_revlog_instance(repo).get_revlog() fm.startitem() nb_rev = len(rlog) inline = rlog._inline diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/revlogutils/deltas.py --- a/mercurial/revlogutils/deltas.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/revlogutils/deltas.py Thu Jun 22 11:36:37 2023 +0200 @@ -585,12 +585,14 @@ if deltainfo is None: return False - if ( - revinfo.cachedelta is not None - and deltainfo.base == revinfo.cachedelta[0] - and revinfo.cachedelta[2] == DELTA_BASE_REUSE_FORCE - ): - return True + # the DELTA_BASE_REUSE_FORCE case should have been taken care of sooner so + # we should never end up asking such a question. Adding the assert as a + # safe-guard to detect anything that would be fishy in this regard. + assert ( + revinfo.cachedelta is None + or revinfo.cachedelta[2] != DELTA_BASE_REUSE_FORCE + or not revlog._generaldelta + ) # - 'deltainfo.distance' is the distance from the base revision -- # bounding it limits the amount of I/O we need to do. @@ -693,14 +695,14 @@ yield None return - if ( - cachedelta is not None - and nullrev == cachedelta[0] - and cachedelta[2] == DELTA_BASE_REUSE_FORCE - ): - # instruction are to forcibly do a full snapshot - yield None - return + # the DELTA_BASE_REUSE_FORCE case should have been taken care of sooner so + # we should never end up asking such a question. Adding the assert as a + # safe-guard to detect anything that would be fishy in this regard. 
+ assert ( + cachedelta is None + or cachedelta[2] != DELTA_BASE_REUSE_FORCE + or not revlog._generaldelta + ) deltalength = revlog.length deltaparent = revlog.deltaparent @@ -736,15 +738,6 @@ if rev in tested: continue - if ( - cachedelta is not None - and rev == cachedelta[0] - and cachedelta[2] == DELTA_BASE_REUSE_FORCE - ): - # instructions are to forcibly consider/use this delta base - group.append(rev) - continue - # a higher authority deemed the base unworthy (e.g. censored) if excluded_bases is not None and rev in excluded_bases: tested.add(rev) @@ -1067,7 +1060,7 @@ end_rev < self._start_rev or end_rev > self._end_rev ), (self._start_rev, self._end_rev, start_rev, end_rev) cache = self.snapshots - if util.safehasattr(revlog.index, b'findsnapshots'): + if util.safehasattr(revlog.index, 'findsnapshots'): revlog.index.findsnapshots(cache, start_rev, end_rev) else: deltaparent = revlog.deltaparent diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/revlogutils/flagutil.py --- a/mercurial/revlogutils/flagutil.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/revlogutils/flagutil.py Thu Jun 22 11:36:37 2023 +0200 @@ -176,8 +176,12 @@ vhash = True if flag not in revlog._flagprocessors: + hint = None + if flag == REVIDX_EXTSTORED: + hint = _(b"the lfs extension must be enabled") + message = _(b"missing processor for flag '%#x'") % flag - raise revlog._flagserrorclass(message) + raise revlog._flagserrorclass(message, hint=hint) processor = revlog._flagprocessors[flag] if processor is not None: diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/revlogutils/nodemap.py --- a/mercurial/revlogutils/nodemap.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/revlogutils/nodemap.py Thu Jun 22 11:36:37 2023 +0200 @@ -611,10 +611,10 @@ def check_data(ui, index, data): """verify that the provided nodemap data are valid for the given index""" ret = 0 - ui.status((b"revision in index: %d\n") % len(index)) + ui.status((b"revisions in index: %d\n") % len(index)) root, __ = parse_data(data) 
all_revs = set(_all_revisions(root)) - ui.status((b"revision in nodemap: %d\n") % len(all_revs)) + ui.status((b"revisions in nodemap: %d\n") % len(all_revs)) for r in range(len(index)): if r not in all_revs: msg = b" revision missing from nodemap: %d\n" % r @@ -637,7 +637,7 @@ if all_revs: for r in sorted(all_revs): - msg = b" extra revision in nodemap: %d\n" % r + msg = b" extra revisions in nodemap: %d\n" % r ui.write_err(msg) ret = 1 return ret diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/revlogutils/rewrite.py --- a/mercurial/revlogutils/rewrite.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/revlogutils/rewrite.py Thu Jun 22 11:36:37 2023 +0200 @@ -808,8 +808,6 @@ def repair_issue6528( ui, repo, dry_run=False, to_report=None, from_report=None, paranoid=False ): - from .. import store # avoid cycle - @contextlib.contextmanager def context(): if dry_run or to_report: # No need for locking @@ -825,9 +823,9 @@ with context(): files = list( - (file_type, path) - for (file_type, path, _s) in repo.store.datafiles() - if path.endswith(b'.i') and file_type & store.FILEFLAGS_FILELOG + entry + for entry in repo.store.data_entries() + if entry.is_revlog and entry.is_filelog ) progress = ui.makeprogress( @@ -837,15 +835,10 @@ ) found_nothing = True - for file_type, path in files: - if ( - not path.endswith(b'.i') - or not file_type & store.FILEFLAGS_FILELOG - ): - continue + for entry in files: progress.increment() - filename = _get_filename_from_filelog_index(path) - fl = _filelog_from_filename(repo, filename) + filename = entry.target_id + fl = _filelog_from_filename(repo, entry.target_id) # Set of filerevs (or hex filenodes if `to_report`) that need fixing to_fix = set() @@ -861,8 +854,8 @@ node = binascii.hexlify(fl.node(filerev)) raise error.Abort(msg % (filename, node)) if affected: - msg = b"found affected revision %d for filelog '%s'\n" - ui.warn(msg % (filerev, path)) + msg = b"found affected revision %d for file '%s'\n" + ui.warn(msg % (filerev, filename)) 
found_nothing = False if not dry_run: if to_report: diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/revset.py --- a/mercurial/revset.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/revset.py Thu Jun 22 11:36:37 2023 +0200 @@ -1967,6 +1967,12 @@ return repo._phasecache.getrevset(repo, targets, subset) +@predicate(b'_internal()', safe=True) +def _internal(repo, subset, x): + getargs(x, 0, 0, _(b"_internal takes no arguments")) + return _phase(repo, subset, *phases.all_internal_phases) + + @predicate(b'_phase(idx)', safe=True) def phase(repo, subset, x): l = getargs(x, 1, 1, b"_phase requires one argument") @@ -2061,7 +2067,7 @@ @predicate(b'_notpublic', safe=True) def _notpublic(repo, subset, x): getargs(x, 0, 0, b"_notpublic takes no arguments") - return _phase(repo, subset, phases.draft, phases.secret) + return _phase(repo, subset, *phases.not_public_phases) # for internal use diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/rewriteutil.py --- a/mercurial/rewriteutil.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/rewriteutil.py Thu Jun 22 11:36:37 2023 +0200 @@ -27,6 +27,21 @@ NODE_RE = re.compile(br'\b[0-9a-f]{6,64}\b') +# set of extra entries that should survive a rebase-like operation, extensible by extensions +retained_extras_on_rebase = { + b'source', + b'intermediate-source', +} + + +def preserve_extras_on_rebase(old_ctx, new_extra): + """preserve the relevant `extra` entries from old_ctx on a rebase-like operation""" + new_extra.update( + (key, value) + for key, value in old_ctx.extra().items() + if key in retained_extras_on_rebase + ) + def _formatrevs(repo, revs, maxrevs=4): """returns a string summarizing revisions in a decent size diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/scmutil.py --- a/mercurial/scmutil.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/scmutil.py Thu Jun 22 11:36:37 2023 +0200 @@ -234,7 +234,7 @@ ui.error(_(b"abort: error: %s\n") % stringutil.forcebytestr(reason)) except (IOError, OSError) as inst: if ( - util.safehasattr(inst, 
b"args") + util.safehasattr(inst, "args") and inst.args and inst.args[0] == errno.EPIPE ): @@ -1066,7 +1066,7 @@ return # translate mapping's other forms - if not util.safehasattr(replacements, b'items'): + if not util.safehasattr(replacements, 'items'): replacements = {(n,): () for n in replacements} else: # upgrading non tuple "source" to tuple ones for BC @@ -2313,3 +2313,13 @@ mark, mark, ) + + +def ismember(ui, username, userlist): + """Check if username is a member of userlist. + + If userlist has a single '*' member, all users are considered members. + Can be overridden by extensions to provide more complex authorization + schemes. + """ + return userlist == [b'*'] or username in userlist diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/shelve.py --- a/mercurial/shelve.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/shelve.py Thu Jun 22 11:36:37 2023 +0200 @@ -516,7 +516,7 @@ def getcommitfunc(extra, interactive, editor=False): def commitfunc(ui, repo, message, match, opts): - hasmq = util.safehasattr(repo, b'mq') + hasmq = util.safehasattr(repo, 'mq') if hasmq: saved, repo.mq.checkapplied = repo.mq.checkapplied, False diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/smartset.py --- a/mercurial/smartset.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/smartset.py Thu Jun 22 11:36:37 2023 +0200 @@ -137,7 +137,7 @@ This is part of the mandatory API for smartset.""" # builtin cannot be cached. but do not needs to - if cache and util.safehasattr(condition, b'__code__'): + if cache and util.safehasattr(condition, '__code__'): condition = util.cachefunc(condition) return filteredset(self, condition, condrepr) @@ -668,9 +668,9 @@ # try to use our own fast iterator if it exists self._trysetasclist() if self._ascending: - attr = b'fastasc' + attr = 'fastasc' else: - attr = b'fastdesc' + attr = 'fastdesc' it = getattr(self, attr) if it is not None: return it() @@ -1127,7 +1127,7 @@ This boldly assumes the other contains valid revs only. 
""" # other not a smartset, make is so - if not util.safehasattr(other, b'isascending'): + if not util.safehasattr(other, 'isascending'): # filter out hidden revision # (this boldly assumes all smartset are pure) # diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/sshpeer.py --- a/mercurial/sshpeer.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/sshpeer.py Thu Jun 22 11:36:37 2023 +0200 @@ -177,7 +177,9 @@ ui.develwarn(b'missing close on SSH connection created at:\n%s' % warn) -def _makeconnection(ui, sshcmd, args, remotecmd, path, sshenv=None): +def _makeconnection( + ui, sshcmd, args, remotecmd, path, sshenv=None, remotehidden=False +): """Create an SSH connection to a server. Returns a tuple of (process, stdin, stdout, stderr) for the @@ -187,8 +189,12 @@ sshcmd, args, procutil.shellquote( - b'%s -R %s serve --stdio' - % (_serverquote(remotecmd), _serverquote(path)) + b'%s -R %s serve --stdio%s' + % ( + _serverquote(remotecmd), + _serverquote(path), + b' --hidden' if remotehidden else b'', + ) ), ) @@ -372,7 +378,16 @@ class sshv1peer(wireprotov1peer.wirepeer): def __init__( - self, ui, path, proc, stdin, stdout, stderr, caps, autoreadstderr=True + self, + ui, + path, + proc, + stdin, + stdout, + stderr, + caps, + autoreadstderr=True, + remotehidden=False, ): """Create a peer from an existing SSH connection. @@ -383,7 +398,7 @@ ``autoreadstderr`` denotes whether to automatically read from stderr and to forward its output. """ - super().__init__(ui, path=path) + super().__init__(ui, path=path, remotehidden=remotehidden) # self._subprocess is unused. Keeping a handle on the process # holds a reference and prevents it from being garbage collected. self._subprocess = proc @@ -400,6 +415,7 @@ self._caps = caps self._autoreadstderr = autoreadstderr self._initstack = b''.join(util.getstackframes(1)) + self._remotehidden = remotehidden # Commands that have a "framed" response where the first line of the # response contains the length of that response. 
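The `sshpeer._makeconnection` hunk above threads a new `remotehidden` flag down to the command line it builds for the server, appending `--hidden` to `serve --stdio`. A minimal sketch of that command construction (simplified: the real code also shell-quotes through `procutil.shellquote` and `_serverquote`, which are omitted here):

```python
# Sketch of the remote command built by _makeconnection (quoting omitted).
def build_serve_command(remotecmd, path, remotehidden=False):
    return b'%s -R %s serve --stdio%s' % (
        remotecmd,
        path,
        b' --hidden' if remotehidden else b'',
    )
```

With `remotehidden` left at its default the command is unchanged from before the patch, which is what keeps the flag backward-compatible.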
@@ -568,7 +584,16 @@ self._readerr() -def makepeer(ui, path, proc, stdin, stdout, stderr, autoreadstderr=True): +def _make_peer( + ui, + path, + proc, + stdin, + stdout, + stderr, + autoreadstderr=True, + remotehidden=False, +): """Make a peer instance from existing pipes. ``path`` and ``proc`` are stored on the eventual peer instance and may @@ -598,6 +623,7 @@ stderr, caps, autoreadstderr=autoreadstderr, + remotehidden=remotehidden, ) else: _cleanuppipes(ui, stdout, stdin, stderr, warn=None) @@ -606,7 +632,9 @@ ) -def make_peer(ui, path, create, intents=None, createopts=None): +def make_peer( + ui, path, create, intents=None, createopts=None, remotehidden=False +): """Create an SSH peer. The returned object conforms to the ``wireprotov1peer.wirepeer`` interface. @@ -655,10 +683,18 @@ raise error.RepoError(_(b'could not create remote repo')) proc, stdin, stdout, stderr = _makeconnection( - ui, sshcmd, args, remotecmd, remotepath, sshenv + ui, + sshcmd, + args, + remotecmd, + remotepath, + sshenv, + remotehidden=remotehidden, ) - peer = makepeer(ui, path, proc, stdin, stdout, stderr) + peer = _make_peer( + ui, path, proc, stdin, stdout, stderr, remotehidden=remotehidden + ) # Finally, if supported by the server, notify it about our own # capabilities. diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/sslutil.py --- a/mercurial/sslutil.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/sslutil.py Thu Jun 22 11:36:37 2023 +0200 @@ -419,7 +419,7 @@ pass # Try to print more helpful error messages for known failures. - if util.safehasattr(e, b'reason'): + if util.safehasattr(e, 'reason'): # This error occurs when the client and server don't share a # common/supported SSL/TLS protocol. We've disabled SSLv2 and SSLv3 # outright. Hopefully the reason for this error is that we require @@ -628,7 +628,7 @@ # Otherwise, use the list of more secure ciphers if found in the ssl module. 
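A change that recurs throughout these hunks (scmutil, shelve, smartset, sslutil, streamclone) is swapping bytes attribute names for native strings in `util.safehasattr`. The motivation is that on Python 3 attribute lookup requires `str` names; a rough stand-in for the helper (an assumption: Mercurial's real version uses a sentinel rather than `None`) makes the point:

```python
# Roughly what util.safehasattr does: hasattr() via getattr().  On
# Python 3 the attribute name must be a native str, hence the b'' -> ''
# changes in this patch.
def safehasattr(thing, attr):
    return getattr(thing, attr, None) is not None

class Obj:
    args = ('boom',)

# getattr(Obj(), b'args') would raise TypeError on Python 3:
# "attribute name must be string, not 'bytes'"
```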
if exactprotocol: sslcontext.set_ciphers('DEFAULT:@SECLEVEL=0') - elif util.safehasattr(ssl, b'_RESTRICTED_SERVER_CIPHERS'): + elif util.safehasattr(ssl, '_RESTRICTED_SERVER_CIPHERS'): sslcontext.options |= getattr(ssl, 'OP_CIPHER_SERVER_PREFERENCE', 0) # pytype: disable=module-attr sslcontext.set_ciphers(ssl._RESTRICTED_SERVER_CIPHERS) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/stabletailgraph/__init__.py diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/stabletailgraph/stabletailsort.py --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/mercurial/stabletailgraph/stabletailsort.py Thu Jun 22 11:36:37 2023 +0200 @@ -0,0 +1,172 @@ +# stabletailsort.py - stable ordering of revisions +# +# Copyright 2021-2023 Pacien TRAN-GIRARD +# +# This software may be used and distributed according to the terms of the +# GNU General Public License version 2 or any later version. + +""" +Stable-tail sort computation. + +The "stable-tail sort", or STS, is a reverse topological ordering of the +ancestors of a node, which tends to share large suffixes with the stable-tail +sort of ancestors and other nodes, giving it its name. + +Its properties should make it suitable for making chunks of ancestors with high +reuse and incrementality for example. + +This module and implementation are experimental. Most functions are not yet +optimised to operate on large production graphs. +""" + +import itertools +from ..node import nullrev +from .. import ancestor + + +def _sorted_parents(cl, p1, p2): + """ + Chooses and returns the pair (px, pt) from (p1, p2). + + Where + "px" denotes the parent starting the "exclusive" part, and + "pt" denotes the parent starting the "Tail" part. + + "px" is chosen as the parent with the lowest rank with the goal of + minimising the size of the exclusive part and maximise the size of the + tail part, hopefully reducing the overall complexity of the stable-tail + sort. + + In case of equal ranks, the stable node ID is used as a tie-breaker. 
+ """ + r1, r2 = cl.fast_rank(p1), cl.fast_rank(p2) + if r1 < r2: + return (p1, p2) + elif r1 > r2: + return (p2, p1) + elif cl.node(p1) < cl.node(p2): + return (p1, p2) + else: + return (p2, p1) + + +def _nonoedipal_parent_revs(cl, rev): + """ + Returns the non-œdipal parent pair of the given revision. + + An œdipal merge is a merge with parents p1, p2 with either + p1 in ancestors(p2) or p2 in ancestors(p1). + In the first case, p1 is the œdipal parent. + In the second case, p2 is the œdipal parent. + + Œdipal edges start empty exclusive parts. They do not bring new ancestors. + As such, they can be skipped when computing any topological sort or any + iteration over the ancestors of a node. + + The œdipal edges are eliminated here using the rank information. + """ + p1, p2 = cl.parentrevs(rev) + if p1 == nullrev or cl.fast_rank(p2) == cl.fast_rank(rev) - 1: + return p2, nullrev + elif p2 == nullrev or cl.fast_rank(p1) == cl.fast_rank(rev) - 1: + return p1, nullrev + else: + return p1, p2 + + +def _parents(cl, rev): + p1, p2 = _nonoedipal_parent_revs(cl, rev) + if p2 == nullrev: + return p1, p2 + + return _sorted_parents(cl, p1, p2) + + +def _stable_tail_sort_naive(cl, head_rev): + """ + Naive topological iterator of the ancestors given by the stable-tail sort. + + The stable-tail sort of a node "h" is defined as the sequence: + sts(h) := [h] + excl(h) + sts(pt(h)) + where excl(h) := u for u in sts(px(h)) if u not in ancestors(pt(h)) + + This implementation uses a call-stack whose size is + O(number of open merges). + + As such, this implementation exists mainly as a defining reference. 
+ """ + cursor_rev = head_rev + while cursor_rev != nullrev: + yield cursor_rev + + px, pt = _parents(cl, cursor_rev) + if pt == nullrev: + cursor_rev = px + else: + tail_ancestors = ancestor.lazyancestors( + cl.parentrevs, (pt,), inclusive=True + ) + exclusive_ancestors = ( + a + for a in _stable_tail_sort_naive(cl, px) + if a not in tail_ancestors + ) + + # Notice that excl(cur) is disjoint from ancestors(pt), + # so there is no double-counting: + # rank(cur) = len([cur]) + len(excl(cur)) + rank(pt) + excl_part_size = cl.fast_rank(cursor_rev) - cl.fast_rank(pt) - 1 + yield from itertools.islice(exclusive_ancestors, excl_part_size) + cursor_rev = pt + + +def _find_all_leaps_naive(cl, head_rev): + """ + Yields the leaps in the stable-tail sort of the given revision. + + A leap is a pair of revisions (source, target) consecutive in the + stable-tail sort of a head, for which target != px(source). + + Leaps are yielded in the same order as encountered in the stable-tail sort, + from head to root. + """ + sts = _stable_tail_sort_naive(cl, head_rev) + prev = next(sts) + for current in sts: + if current != _parents(cl, prev)[0]: + yield (prev, current) + + prev = current + + +def _find_specific_leaps_naive(cl, head_rev): + """ + Returns the specific leaps in the stable-tail sort of the given revision. + + Specific leaps are leaps appear in the stable-tail sort of a given + revision, but not in the stable-tail sort of any of its ancestors. + + The final leaps (leading to the pt of the considered merge) are omitted. + + Only merge nodes can have associated specific leaps. + + This implementations uses the whole leap sets of the given revision and + of its parents. 
+ """ + px, pt = _parents(cl, head_rev) + if px == nullrev or pt == nullrev: + return # linear nodes cannot have specific leaps + + parents_leaps = set(_find_all_leaps_naive(cl, px)) + + sts = _stable_tail_sort_naive(cl, head_rev) + prev = next(sts) + for current in sts: + if current == pt: + break + if current != _parents(cl, prev)[0]: + leap = (prev, current) + if leap not in parents_leaps: + yield leap + + prev = current diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/statichttprepo.py --- a/mercurial/statichttprepo.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/statichttprepo.py Thu Jun 22 11:36:37 2023 +0200 @@ -119,7 +119,7 @@ def http_error_416(self, req, fp, code, msg, hdrs): # HTTP's Range Not Satisfiable error - raise _RangeError(b'Requested Range Not Satisfiable') + raise _RangeError('Requested Range Not Satisfiable') def build_opener(ui, authinfo): @@ -134,13 +134,13 @@ def __call__(self, path, mode=b'r', *args, **kw): if mode not in (b'r', b'rb'): - raise IOError(b'Permission denied') + raise IOError('Permission denied') f = b"/".join((self.base, urlreq.quote(path))) return httprangereader(f, urlopener) - def join(self, path): + def join(self, path, *insidef): if path: - return pathutil.join(self.base, path) + return pathutil.join(self.base, path, *insidef) else: return self.base @@ -237,8 +237,8 @@ def local(self): return False - def peer(self, path=None): - return statichttppeer(self, path=path) + def peer(self, path=None, remotehidden=False): + return statichttppeer(self, path=path, remotehidden=remotehidden) def wlock(self, wait=True): raise error.LockUnavailable( @@ -260,8 +260,12 @@ pass # statichttprepository are read only -def make_peer(ui, path, create, intents=None, createopts=None): +def make_peer( + ui, path, create, intents=None, createopts=None, remotehidden=False +): if create: raise error.Abort(_(b'cannot create new static-http repository')) url = path.loc[7:] - return statichttprepository(ui, url).peer(path=path) + return 
statichttprepository(ui, url).peer( + path=path, remotehidden=remotehidden + ) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/store.py --- a/mercurial/store.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/store.py Thu Jun 22 11:36:37 2023 +0200 @@ -1,25 +1,35 @@ -# store.py - repository store handling for Mercurial +# store.py - repository store handling for Mercurial) # # Copyright 2008 Olivia Mackall # # This software may be used and distributed according to the terms of the # GNU General Public License version 2 or any later version. - +import collections import functools import os import re import stat +from typing import Generator, List from .i18n import _ from .pycompat import getattr +from .thirdparty import attr from .node import hex +from .revlogutils.constants import ( + INDEX_HEADER, + KIND_CHANGELOG, + KIND_FILELOG, + KIND_MANIFESTLOG, +) from . import ( changelog, error, + filelog, manifest, policy, pycompat, + revlog as revlogmod, util, vfs as vfsmod, ) @@ -31,7 +41,7 @@ fncache_chunksize = 10 ** 6 -def _matchtrackedpath(path, matcher): +def _match_tracked_entry(entry, matcher): """parses a fncache entry and returns whether the entry is tracking a path matched by matcher or not. 
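In the stable-tail sort module above, `_sorted_parents` picks `px` (the parent starting the exclusive part) as the lower-ranked parent, with the node ID as tie-breaker. A toy version using plain dicts in place of changelog rank/node queries (a sketch of the selection rule, not the module's implementation):

```python
# Toy _sorted_parents: px is the lower-ranked parent so the exclusive
# part stays small and the tail part stays large; ties break on node id.
# `rank` and `node` are plain dicts standing in for cl.fast_rank/cl.node.
def sorted_parents(rank, node, p1, p2):
    r1, r2 = rank[p1], rank[p2]
    if r1 != r2:
        return (p1, p2) if r1 < r2 else (p2, p1)
    return (p1, p2) if node[p1] < node[p2] else (p2, p1)
```

Because the rule depends only on rank and stable node IDs, the same pair is chosen no matter which head's sort is being computed, which is what lets different stable-tail sorts share suffixes.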
@@ -39,13 +49,11 @@ if matcher is None: return True - path = decodedir(path) - if path.startswith(b'data/'): - return matcher(path[len(b'data/') : -len(b'.i')]) - elif path.startswith(b'meta/'): - return matcher.visitdir(path[len(b'meta/') : -len(b'/00manifest.i')]) - - raise error.ProgrammingError(b"cannot decode path %s" % path) + if entry.is_filelog: + return matcher(entry.target_id) + elif entry.is_manifestlog: + return matcher.visitdir(entry.target_id.rstrip(b'/')) + raise error.ProgrammingError(b"cannot process entry %r" % entry) # This avoids a collision between a file named foo and a dir named @@ -384,15 +392,21 @@ b'requires', ] -REVLOG_FILES_MAIN_EXT = (b'.i', b'i.tmpcensored') -REVLOG_FILES_OTHER_EXT = ( +REVLOG_FILES_EXT = ( + b'.i', b'.idx', b'.d', b'.dat', b'.n', b'.nd', b'.sda', - b'd.tmpcensored', +) +# file extension that also use a `-SOMELONGIDHASH.ext` form +REVLOG_FILES_LONG_EXT = ( + b'.nd', + b'.idx', + b'.dat', + b'.sda', ) # files that are "volatile" and might change between listing and streaming # @@ -408,48 +422,325 @@ def is_revlog(f, kind, st): if kind != stat.S_IFREG: - return None - return revlog_type(f) + return False + if f.endswith(REVLOG_FILES_EXT): + return True + return False + + +def is_revlog_file(f): + if f.endswith(REVLOG_FILES_EXT): + return True + return False + + +@attr.s(slots=True) +class StoreFile: + """a file matching a store entry""" + + unencoded_path = attr.ib() + _file_size = attr.ib(default=None) + is_volatile = attr.ib(default=False) + + def file_size(self, vfs): + if self._file_size is None: + if vfs is None: + msg = b"calling vfs-less file_size without prior call: %s" + msg %= self.unencoded_path + raise error.ProgrammingError(msg) + try: + self._file_size = vfs.stat(self.unencoded_path).st_size + except FileNotFoundError: + self._file_size = 0 + return self._file_size + + def get_stream(self, vfs, copies): + """return data "stream" information for this file + + (unencoded_file_path, content_iterator, 
content_size) + """ + size = self.file_size(None) + + def get_stream(): + actual_path = copies[vfs.join(self.unencoded_path)] + with open(actual_path, 'rb') as fp: + yield None # ready to stream + if size <= 65536: + yield fp.read(size) + else: + yield from util.filechunkiter(fp, limit=size) + + s = get_stream() + next(s) + return (self.unencoded_path, s, size) -def revlog_type(f): - # XXX we need to filter `undo.` created by the transaction here, however - # being naive about it also filter revlog for `undo.*` files, leading to - # issue6542. So we no longer use EXCLUDED. - if f.endswith(REVLOG_FILES_MAIN_EXT): - return FILEFLAGS_REVLOG_MAIN - elif f.endswith(REVLOG_FILES_OTHER_EXT): - t = FILETYPE_FILELOG_OTHER - if f.endswith(REVLOG_FILES_VOLATILE_EXT): - t |= FILEFLAGS_VOLATILE - return t - return None +@attr.s(slots=True, init=False) +class BaseStoreEntry: + """An entry in the store + + This is returned by `store.walk` and represent some data in the store.""" + + def files(self) -> List[StoreFile]: + raise NotImplementedError + + def get_streams( + self, + repo=None, + vfs=None, + copies=None, + max_changeset=None, + preserve_file_count=False, + ): + """return a list of data stream associated to files for this entry + + return [(unencoded_file_path, content_iterator, content_size), …] + """ + assert vfs is not None + return [f.get_stream(vfs, copies) for f in self.files()] + + +@attr.s(slots=True, init=False) +class SimpleStoreEntry(BaseStoreEntry): + """A generic entry in the store""" + + is_revlog = False + + _entry_path = attr.ib() + _is_volatile = attr.ib(default=False) + _file_size = attr.ib(default=None) + _files = attr.ib(default=None) + + def __init__( + self, + entry_path, + is_volatile=False, + file_size=None, + ): + super().__init__() + self._entry_path = entry_path + self._is_volatile = is_volatile + self._file_size = file_size + self._files = None + + def files(self) -> List[StoreFile]: + if self._files is None: + self._files = [ + StoreFile( + 
unencoded_path=self._entry_path, + file_size=self._file_size, + is_volatile=self._is_volatile, + ) + ] + return self._files -# the file is part of changelog data -FILEFLAGS_CHANGELOG = 1 << 13 -# the file is part of manifest data -FILEFLAGS_MANIFESTLOG = 1 << 12 -# the file is part of filelog data -FILEFLAGS_FILELOG = 1 << 11 -# file that are not directly part of a revlog -FILEFLAGS_OTHER = 1 << 10 +@attr.s(slots=True, init=False) +class RevlogStoreEntry(BaseStoreEntry): + """A revlog entry in the store""" + + is_revlog = True + + revlog_type = attr.ib(default=None) + target_id = attr.ib(default=None) + _path_prefix = attr.ib(default=None) + _details = attr.ib(default=None) + _files = attr.ib(default=None) + + def __init__( + self, + revlog_type, + path_prefix, + target_id, + details, + ): + super().__init__() + self.revlog_type = revlog_type + self.target_id = target_id + self._path_prefix = path_prefix + assert b'.i' in details, (path_prefix, details) + self._details = details + self._files = None + + @property + def is_changelog(self): + return self.revlog_type == KIND_CHANGELOG + + @property + def is_manifestlog(self): + return self.revlog_type == KIND_MANIFESTLOG + + @property + def is_filelog(self): + return self.revlog_type == KIND_FILELOG + + def main_file_path(self): + """unencoded path of the main revlog file""" + return self._path_prefix + b'.i' + + def files(self) -> List[StoreFile]: + if self._files is None: + self._files = [] + for ext in sorted(self._details, key=_ext_key): + path = self._path_prefix + ext + file_size = self._details[ext] + # files that are "volatile" and might change between + # listing and streaming + # + # note: the ".nd" file are nodemap data and won't "change" + # but they might be deleted. 
+ volatile = ext.endswith(REVLOG_FILES_VOLATILE_EXT) + f = StoreFile(path, file_size, volatile) + self._files.append(f) + return self._files + + def get_streams( + self, + repo=None, + vfs=None, + copies=None, + max_changeset=None, + preserve_file_count=False, + ): + if ( + repo is None + or max_changeset is None + # This use revlog-v2, ignore for now + or any(k.endswith(b'.idx') for k in self._details.keys()) + # This is not inline, no race expected + or b'.d' in self._details + ): + return super().get_streams( + repo=repo, + vfs=vfs, + copies=copies, + max_changeset=max_changeset, + preserve_file_count=preserve_file_count, + ) + elif not preserve_file_count: + stream = [ + f.get_stream(vfs, copies) + for f in self.files() + if not f.unencoded_path.endswith((b'.i', b'.d')) + ] + rl = self.get_revlog_instance(repo).get_revlog() + rl_stream = rl.get_streams(max_changeset) + stream.extend(rl_stream) + return stream + + name_to_size = {} + for f in self.files(): + name_to_size[f.unencoded_path] = f.file_size(None) + + stream = [ + f.get_stream(vfs, copies) + for f in self.files() + if not f.unencoded_path.endswith(b'.i') + ] -# the main entry point for a revlog -FILEFLAGS_REVLOG_MAIN = 1 << 1 -# a secondary file for a revlog -FILEFLAGS_REVLOG_OTHER = 1 << 0 + index_path = self._path_prefix + b'.i' + + index_file = None + try: + index_file = vfs(index_path) + header = index_file.read(INDEX_HEADER.size) + if revlogmod.revlog.is_inline_index(header): + size = name_to_size[index_path] + + # no split underneath, just return the stream + def get_stream(): + fp = index_file + try: + fp.seek(0) + yield None + if size <= 65536: + yield fp.read(size) + else: + yield from util.filechunkiter(fp, limit=size) + finally: + fp.close() -# files that are "volatile" and might change between listing and streaming -FILEFLAGS_VOLATILE = 1 << 20 + s = get_stream() + next(s) + index_file = None + stream.append((index_path, s, size)) + else: + rl = self.get_revlog_instance(repo).get_revlog() 
+ rl_stream = rl.get_streams(max_changeset, force_inline=True) + for name, s, size in rl_stream: + if name_to_size.get(name, 0) != size: + msg = _(b"expected %d bytes but %d provided for %s") + msg %= name_to_size.get(name, 0), size, name + raise error.Abort(msg) + stream.extend(rl_stream) + finally: + if index_file is not None: + index_file.close() + + files = self.files() + assert len(stream) == len(files), ( + stream, + files, + self._path_prefix, + self.target_id, + ) + return stream + + def get_revlog_instance(self, repo): + """Obtain a revlog instance from this store entry -FILETYPE_CHANGELOG_MAIN = FILEFLAGS_CHANGELOG | FILEFLAGS_REVLOG_MAIN -FILETYPE_CHANGELOG_OTHER = FILEFLAGS_CHANGELOG | FILEFLAGS_REVLOG_OTHER -FILETYPE_MANIFESTLOG_MAIN = FILEFLAGS_MANIFESTLOG | FILEFLAGS_REVLOG_MAIN -FILETYPE_MANIFESTLOG_OTHER = FILEFLAGS_MANIFESTLOG | FILEFLAGS_REVLOG_OTHER -FILETYPE_FILELOG_MAIN = FILEFLAGS_FILELOG | FILEFLAGS_REVLOG_MAIN -FILETYPE_FILELOG_OTHER = FILEFLAGS_FILELOG | FILEFLAGS_REVLOG_OTHER -FILETYPE_OTHER = FILEFLAGS_OTHER + An instance of the appropriate class is returned. + """ + if self.is_changelog: + return changelog.changelog(repo.svfs) + elif self.is_manifestlog: + mandir = self.target_id + return manifest.manifestrevlog( + repo.nodeconstants, repo.svfs, tree=mandir + ) + else: + return filelog.filelog(repo.svfs, self.target_id) + + +def _gather_revlog(files_data): + """group files per revlog prefix + + The returns a two level nested dict. The top level key is the revlog prefix + without extension, the second level is all the file "suffix" that were + seen for this revlog and arbitrary file data as value. 
+ """ + revlogs = collections.defaultdict(dict) + for u, value in files_data: + name, ext = _split_revlog_ext(u) + revlogs[name][ext] = value + return sorted(revlogs.items()) + + +def _split_revlog_ext(filename): + """split the revlog file prefix from the variable extension""" + if filename.endswith(REVLOG_FILES_LONG_EXT): + char = b'-' + else: + char = b'.' + idx = filename.rfind(char) + return filename[:idx], filename[idx:] + + +def _ext_key(ext): + """a key to order revlog suffix + + important to issue .i after other entry.""" + # the only important part of this order is to keep the `.i` last. + if ext.endswith(b'.n'): + return (0, ext) + elif ext.endswith(b'.nd'): + return (10, ext) + elif ext.endswith(b'.d'): + return (20, ext) + elif ext.endswith(b'.i'): + return (50, ext) + else: + return (40, ext) class basicstore: @@ -467,7 +758,7 @@ def join(self, f): return self.path + b'/' + encodedir(f) - def _walk(self, relpath, recurse): + def _walk(self, relpath, recurse, undecodable=None): '''yields (revlog_type, unencoded, size)''' path = self.path if relpath: @@ -481,12 +772,12 @@ p = visit.pop() for f, kind, st in readdir(p, stat=True): fp = p + b'/' + f - rl_type = is_revlog(f, kind, st) - if rl_type is not None: + if is_revlog(f, kind, st): n = util.pconvert(fp[striplen:]) - l.append((rl_type, decodedir(n), st.st_size)) + l.append((decodedir(n), st.st_size)) elif kind == stat.S_IFDIR and recurse: visit.append(fp) + l.sort() return l @@ -501,40 +792,97 @@ rootstore = manifest.manifestrevlog(repo.nodeconstants, self.vfs) return manifest.manifestlog(self.vfs, repo, rootstore, storenarrowmatch) - def datafiles(self, matcher=None, undecodable=None): + def data_entries( + self, matcher=None, undecodable=None + ) -> Generator[BaseStoreEntry, None, None]: """Like walk, but excluding the changelog and root manifest. When [undecodable] is None, revlogs names that can't be decoded cause an exception. 
When it is provided, it should be a list and the filenames that can't be decoded are added to it instead. This is very rarely needed.""" - files = self._walk(b'data', True) + self._walk(b'meta', True) - for (t, u, s) in files: - yield (FILEFLAGS_FILELOG | t, u, s) + dirs = [ + (b'data', KIND_FILELOG, False), + (b'meta', KIND_MANIFESTLOG, True), + ] + for base_dir, rl_type, strip_filename in dirs: + files = self._walk(base_dir, True, undecodable=undecodable) + for revlog, details in _gather_revlog(files): + revlog_target_id = revlog.split(b'/', 1)[1] + if strip_filename and b'/' in revlog: + revlog_target_id = revlog_target_id.rsplit(b'/', 1)[0] + revlog_target_id += b'/' + yield RevlogStoreEntry( + path_prefix=revlog, + revlog_type=rl_type, + target_id=revlog_target_id, + details=details, + ) - def topfiles(self): - # yield manifest before changelog + def top_entries( + self, phase=False, obsolescence=False + ) -> Generator[BaseStoreEntry, None, None]: + if phase and self.vfs.exists(b'phaseroots'): + yield SimpleStoreEntry( + entry_path=b'phaseroots', + is_volatile=True, + ) + + if obsolescence and self.vfs.exists(b'obsstore'): + # XXX if we had the file size it could be non-volatile + yield SimpleStoreEntry( + entry_path=b'obsstore', + is_volatile=True, + ) + files = reversed(self._walk(b'', False)) - for (t, u, s) in files: + + changelogs = collections.defaultdict(dict) + manifestlogs = collections.defaultdict(dict) + + for u, s in files: if u.startswith(b'00changelog'): - yield (FILEFLAGS_CHANGELOG | t, u, s) + name, ext = _split_revlog_ext(u) + changelogs[name][ext] = s elif u.startswith(b'00manifest'): - yield (FILEFLAGS_MANIFESTLOG | t, u, s) + name, ext = _split_revlog_ext(u) + manifestlogs[name][ext] = s else: - yield (FILETYPE_OTHER | t, u, s) + yield SimpleStoreEntry( + entry_path=u, + is_volatile=False, + file_size=s, + ) + # yield manifest before changelog + top_rl = [ + (manifestlogs, KIND_MANIFESTLOG), + (changelogs, KIND_CHANGELOG), + ] + assert 
len(manifestlogs) <= 1 + assert len(changelogs) <= 1 + for data, revlog_type in top_rl: + for revlog, details in sorted(data.items()): + yield RevlogStoreEntry( + path_prefix=revlog, + revlog_type=revlog_type, + target_id=b'', + details=details, + ) - def walk(self, matcher=None): - """return file related to data storage (ie: revlogs) + def walk( + self, matcher=None, phase=False, obsolescence=False + ) -> Generator[BaseStoreEntry, None, None]: + """return files related to data storage (ie: revlogs) - yields (file_type, unencoded, size) + yields instance from BaseStoreEntry subclasses if a matcher is passed, storage files of only those tracked paths are passed with matches the matcher """ # yield data files first - for x in self.datafiles(matcher): + for x in self.data_entries(matcher): yield x - for x in self.topfiles(): + for x in self.top_entries(phase=phase, obsolescence=obsolescence): yield x def copylist(self): @@ -571,13 +919,10 @@ self.vfs = vfsmod.filtervfs(vfs, encodefilename) self.opener = self.vfs - # note: topfiles would also need a decode phase. It is just that in - # practice we do not have any file outside of `data/` that needs encoding. - # However that might change so we should probably add a test and encoding - # decoding for it too. 
see issue6548 - - def datafiles(self, matcher=None, undecodable=None): - for t, f1, size in super(encodedstore, self).datafiles(): + def _walk(self, relpath, recurse, undecodable=None): + old = super()._walk(relpath, recurse) + new = [] + for f1, value in old: try: f2 = decodefilename(f1) except KeyError: @@ -587,9 +932,18 @@ else: undecodable.append(f1) continue - if not _matchtrackedpath(f2, matcher): - continue - yield t, f2, size + new.append((f2, value)) + return new + + def data_entries( + self, matcher=None, undecodable=None + ) -> Generator[BaseStoreEntry, None, None]: + entries = super(encodedstore, self).data_entries( + undecodable=undecodable + ) + for entry in entries: + if _match_tracked_entry(entry, matcher): + yield entry def join(self, f): return self.path + b'/' + encodefilename(f) @@ -732,8 +1086,10 @@ def __call__(self, path, mode=b'r', *args, **kw): encoded = self.encode(path) - if mode not in (b'r', b'rb') and ( - path.startswith(b'data/') or path.startswith(b'meta/') + if ( + mode not in (b'r', b'rb') + and (path.startswith(b'data/') or path.startswith(b'meta/')) + and is_revlog_file(path) ): # do not trigger a fncache load when adding a file that already is # known to exist. @@ -783,18 +1139,34 @@ def getsize(self, path): return self.rawvfs.stat(path).st_size - def datafiles(self, matcher=None, undecodable=None): - for f in sorted(self.fncache): - if not _matchtrackedpath(f, matcher): - continue - ef = self.encode(f) - try: - t = revlog_type(f) - assert t is not None, f - t |= FILEFLAGS_FILELOG - yield t, f, self.getsize(ef) - except FileNotFoundError: - pass + def data_entries( + self, matcher=None, undecodable=None + ) -> Generator[BaseStoreEntry, None, None]: + # Note: all files in fncache should be revlog related, However the + # fncache might contains such file added by previous version of + # Mercurial. 
+ files = ((f, None) for f in self.fncache if is_revlog_file(f)) + by_revlog = _gather_revlog(files) + for revlog, details in by_revlog: + if revlog.startswith(b'data/'): + rl_type = KIND_FILELOG + revlog_target_id = revlog.split(b'/', 1)[1] + elif revlog.startswith(b'meta/'): + rl_type = KIND_MANIFESTLOG + # drop the initial directory and the `00manifest` file part + tmp = revlog.split(b'/', 1)[1] + revlog_target_id = tmp.rsplit(b'/', 1)[0] + b'/' + else: + # unreachable + assert False, revlog + entry = RevlogStoreEntry( + path_prefix=revlog, + revlog_type=rl_type, + target_id=revlog_target_id, + details=details, + ) + if _match_tracked_entry(entry, matcher): + yield entry def copylist(self): d = ( diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/streamclone.py --- a/mercurial/streamclone.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/streamclone.py Thu Jun 22 11:36:37 2023 +0200 @@ -11,10 +11,10 @@ import struct from .i18n import _ -from .pycompat import open from .interfaces import repository from . import ( bookmarks, + bundle2 as bundle2mod, cacheutil, error, narrowspec, @@ -69,10 +69,30 @@ repo = pullop.repo remote = pullop.remote + # should we consider streaming clone at all ? + streamrequested = pullop.streamclonerequested + # If we don't have a preference, let the server decide for us. This + # likely only comes into play in LANs. + if streamrequested is None: + # The server can advertise whether to prefer streaming clone. + streamrequested = remote.capable(b'stream-preferred') + if not streamrequested: + return False, None + + # Streaming clone only works on an empty destination repository + if len(repo): + return False, None + + # Streaming clone only works if all data is being requested. 
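The streamclone hunk above reorders the eligibility checks in `canperformstreamclone` so the client preference is resolved first and the cheap local conditions follow. The resulting decision order can be sketched as a plain predicate (hypothetical simplified signature; the real function also negotiates bundle2 stream versions):

```python
# Sketch of the reordered stream-clone eligibility checks.
def may_stream(requested, server_prefers, local_revs, heads):
    if requested is None:       # no explicit preference: ask the server
        requested = server_prefers
    if not requested:
        return False
    if local_revs:              # destination repository must be empty
        return False
    if heads:                   # partial pulls cannot use stream clone
        return False
    return True
```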
+ if pullop.heads: + return False, None + bundle2supported = False if pullop.canusebundle2: - if b'v2' in pullop.remotebundle2caps.get(b'stream', []): - bundle2supported = True + local_caps = bundle2mod.getrepocaps(repo, role=b'client') + local_supported = set(local_caps.get(b'stream', [])) + remote_supported = set(pullop.remotebundle2caps.get(b'stream', [])) + bundle2supported = bool(local_supported & remote_supported) # else # Server doesn't support bundle2 stream clone or doesn't support # the versions we support. Fall back and possibly allow legacy. @@ -84,25 +104,6 @@ elif bundle2 and not bundle2supported: return False, None - # Streaming clone only works on empty repositories. - if len(repo): - return False, None - - # Streaming clone only works if all data is being requested. - if pullop.heads: - return False, None - - streamrequested = pullop.streamclonerequested - - # If we don't have a preference, let the server decide for us. This - # likely only comes into play in LANs. - if streamrequested is None: - # The server can advertise whether to prefer streaming clone. - streamrequested = remote.capable(b'stream-preferred') - - if not streamrequested: - return False, None - # In order for stream clone to work, the client has to support all the # requirements advertised by the server. # @@ -241,8 +242,8 @@ # This is it's own function so extensions can override it. -def _walkstreamfiles(repo, matcher=None): - return repo.store.walk(matcher) +def _walkstreamfiles(repo, matcher=None, phase=False, obsolescence=False): + return repo.store.walk(matcher, phase=phase, obsolescence=obsolescence) def generatev1(repo): @@ -269,10 +270,12 @@ # Get consistent snapshot of repo, lock during scan. 
with repo.lock(): repo.ui.debug(b'scanning\n') - for file_type, name, size in _walkstreamfiles(repo): - if size: - entries.append((name, size)) - total_bytes += size + for entry in _walkstreamfiles(repo): + for f in entry.files(): + file_size = f.file_size(repo.store.vfs) + if file_size: + entries.append((f.unencoded_path, file_size)) + total_bytes += file_size _test_sync_point_walk_1(repo) _test_sync_point_walk_2(repo) @@ -425,7 +428,16 @@ with repo.svfs.backgroundclosing(repo.ui, expectedcount=filecount): for i in range(filecount): # XXX doesn't support '\n' or '\r' in filenames - l = fp.readline() + if util.safehasattr(fp, 'readline'): + l = fp.readline() + else: + # inline clonebundles use a chunkbuffer, so no readline + # --> this should be small anyway, the first line + # only contains the size of the bundle + l_buf = [] + while not (l_buf and l_buf[-1] == b'\n'): + l_buf.append(fp.read(1)) + l = b''.join(l_buf) try: name, size = l.split(b'\0', 1) size = int(size) @@ -552,28 +564,55 @@ return (src, name, ftype, copy(vfsmap[src].join(name))) -@contextlib.contextmanager -def maketempcopies(): - """return a function to temporary copy file""" +class TempCopyManager: + """Manage temporary backup of volatile file during stream clone + + This should be used as a Python context, the copies will be discarded when + exiting the context. + + A copy can be done by calling the object on the real path (encoded full + path) - files = [] - dst_dir = pycompat.mkdtemp(prefix=b'hg-clone-') - try: + The backup path can be retrieved using the __getitem__ protocol, obj[path]. + On file without backup, it will return the unmodified path. 
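The `readline` fallback above reads one byte at a time until a newline, because a chunkbuffer only offers `read()`. A self-contained sketch of that loop (with an extra EOF guard that the original can omit, since the size header line is always present):

```python
import io


def readline_from_raw_stream(fp):
    """Byte-at-a-time readline over an object that only provides read(),
    mirroring the inline-clonebundle fallback above."""
    buf = []
    while not (buf and buf[-1] == b'\n'):
        ch = fp.read(1)
        if not ch:
            break  # EOF before newline; the original assumes this can't happen
        buf.append(ch)
    return b''.join(buf)
```

Byte-at-a-time reads are slow in general, but as the comment in the hunk notes, only the short first line (name and size) is read this way.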
(equivalent to + `dict.get(x, x)`) + """ + + def __init__(self): + self._copies = None + self._dst_dir = None - def copy(src): - fd, dst = pycompat.mkstemp( - prefix=os.path.basename(src), dir=dst_dir - ) - os.close(fd) - files.append(dst) - util.copyfiles(src, dst, hardlink=True) - return dst + def __enter__(self): + if self._copies is not None: + msg = "Copies context already open" + raise error.ProgrammingError(msg) + self._copies = {} + self._dst_dir = pycompat.mkdtemp(prefix=b'hg-clone-') + return self - yield copy - finally: - for tmp in files: + def __call__(self, src): + """create a backup of the file at src""" + prefix = os.path.basename(src) + fd, dst = pycompat.mkstemp(prefix=prefix, dir=self._dst_dir) + os.close(fd) + self._copies[src] = dst + util.copyfiles(src, dst, hardlink=True) + return dst + + def __getitem__(self, src): + """return the path to a valid version of `src` + + If the file has no backup, the path of the file is returned + unmodified.""" + return self._copies.get(src, src) + + def __exit__(self, *args, **kwars): + """discard all backups""" + for tmp in self._copies.values(): util.tryunlink(tmp) - util.tryrmdir(dst_dir) + util.tryrmdir(self._dst_dir) + self._copies = None + self._dst_dir = None def _makemap(repo): @@ -589,7 +628,7 @@ return vfsmap -def _emit2(repo, entries, totalfilesize): +def _emit2(repo, entries): """actually emit the stream bundle""" vfsmap = _makemap(repo) # we keep repo.vfs out of the on purpose, ther are too many danger there @@ -602,51 +641,111 @@ b'repo.vfs must not be added to vfsmap for security reasons' ) + # translate the vfs one + entries = [(vfs_key, vfsmap[vfs_key], e) for (vfs_key, e) in entries] + + max_linkrev = len(repo) + file_count = totalfilesize = 0 + # record the expected size of every file + for k, vfs, e in entries: + for f in e.files(): + file_count += 1 + totalfilesize += f.file_size(vfs) + progress = repo.ui.makeprogress( _(b'bundle'), total=totalfilesize, unit=_(b'bytes') ) 
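The `TempCopyManager` introduced above can be illustrated with a self-contained analogue: a context manager that backs files up into a temp directory, with `obj[path]` falling back to the original path. This sketch uses plain `shutil` copies where the real class attempts hardlinks first via `util.copyfiles`.

```python
import os
import shutil
import tempfile


class TempBackups:
    """Back up files into a temp directory for the duration of a context;
    obj[path] returns the backup if one exists, else the path unchanged
    (the dict.get(x, x) behaviour described above)."""

    def __enter__(self):
        self._copies = {}
        self._dir = tempfile.mkdtemp(prefix='backup-')
        return self

    def __call__(self, src):
        # mkstemp avoids collisions when two files share a basename
        fd, dst = tempfile.mkstemp(prefix=os.path.basename(src), dir=self._dir)
        os.close(fd)
        shutil.copyfile(src, dst)  # the real code hardlinks when possible
        self._copies[src] = dst
        return dst

    def __getitem__(self, src):
        return self._copies.get(src, src)

    def __exit__(self, *exc):
        # discard all backups on context exit
        shutil.rmtree(self._dir, ignore_errors=True)
        self._copies = None
```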
progress.update(0) - with maketempcopies() as copy, progress: - # copy is delayed until we are in the try - entries = [_filterfull(e, copy, vfsmap) for e in entries] - yield None # this release the lock on the repository + with TempCopyManager() as copy, progress: + # create a copy of volatile files + for k, vfs, e in entries: + for f in e.files(): + if f.is_volatile: + copy(vfs.join(f.unencoded_path)) + # the first yield release the lock on the repository + yield file_count, totalfilesize totalbytecount = 0 - for src, name, ftype, data in entries: - vfs = vfsmap[src] - yield src - yield util.uvarintencode(len(name)) - if ftype == _fileappend: - fp = vfs(name) - size = data - elif ftype == _filefull: - fp = open(data, b'rb') - size = util.fstat(fp).st_size - bytecount = 0 - try: + for src, vfs, e in entries: + entry_streams = e.get_streams( + repo=repo, + vfs=vfs, + copies=copy, + max_changeset=max_linkrev, + preserve_file_count=True, + ) + for name, stream, size in entry_streams: + yield src + yield util.uvarintencode(len(name)) yield util.uvarintencode(size) yield name - if size <= 65536: - chunks = (fp.read(size),) - else: - chunks = util.filechunkiter(fp, limit=size) - for chunk in chunks: + bytecount = 0 + for chunk in stream: bytecount += len(chunk) totalbytecount += len(chunk) progress.update(totalbytecount) yield chunk if bytecount != size: - # Would most likely be caused by a race due to `hg strip` or - # a revlog split - raise error.Abort( - _( - b'clone could only read %d bytes from %s, but ' - b'expected %d bytes' - ) - % (bytecount, name, size) + # Would most likely be caused by a race due to `hg + # strip` or a revlog split + msg = _( + b'clone could only read %d bytes from %s, but ' + b'expected %d bytes' ) - finally: - fp.close() + raise error.Abort(msg % (bytecount, name, size)) + + +def _emit3(repo, entries): + """actually emit the stream bundle (v3)""" + vfsmap = _makemap(repo) + # we keep repo.vfs out of the map on purpose, ther are too many 
dangers + # there (eg: .hg/hgrc), + # + # this assert is duplicated (from _makemap) as authors might think this is + # fine, while this is really not fine. + if repo.vfs in vfsmap.values(): + raise error.ProgrammingError( + b'repo.vfs must not be added to vfsmap for security reasons' + ) + + # translate the vfs once + entries = [(vfs_key, vfsmap[vfs_key], e) for (vfs_key, e) in entries] + total_entry_count = len(entries) + + max_linkrev = len(repo) + progress = repo.ui.makeprogress( + _(b'bundle'), + total=total_entry_count, + unit=_(b'entry'), + ) + progress.update(0) + with TempCopyManager() as copy, progress: + # create a copy of volatile files + for k, vfs, e in entries: + for f in e.files(): + f.file_size(vfs) # record the expected size under lock + if f.is_volatile: + copy(vfs.join(f.unencoded_path)) + # the first yield release the lock on the repository + yield None + + yield util.uvarintencode(total_entry_count) + + for src, vfs, e in entries: + entry_streams = e.get_streams( + repo=repo, + vfs=vfs, + copies=copy, + max_changeset=max_linkrev, + ) + yield util.uvarintencode(len(entry_streams)) + for name, stream, size in entry_streams: + yield src + yield util.uvarintencode(len(name)) + yield util.uvarintencode(size) + yield name + yield from stream + progress.increment() def _test_sync_point_walk_1(repo): @@ -657,45 +756,37 @@ """a function for synchronisation during tests""" -def _v2_walk(repo, includes, excludes, includeobsmarkers): +def _entries_walk(repo, includes, excludes, includeobsmarkers): """emit a seris of files information useful to clone a repo - return (entries, totalfilesize) - - entries is a list of tuple (vfs-key, file-path, file-type, size) + return (vfs-key, entry) iterator - - `vfs-key`: is a key to the right vfs to write the file (see _makemap) - - `name`: file path of the file to copy (to be feed to the vfss) - - `file-type`: do this file need to be copied with the source lock ? 
- - `size`: the size of the file (or None) + Where `entry` is StoreEntry. (used even for cache entries) """ assert repo._currentlock(repo._lockref) is not None - entries = [] - totalfilesize = 0 matcher = None if includes or excludes: matcher = narrowspec.match(repo.root, includes, excludes) - for rl_type, name, size in _walkstreamfiles(repo, matcher): - if size: - ft = _fileappend - if rl_type & store.FILEFLAGS_VOLATILE: - ft = _filefull - entries.append((_srcstore, name, ft, size)) - totalfilesize += size - for name in _walkstreamfullstorefiles(repo): - if repo.svfs.exists(name): - totalfilesize += repo.svfs.lstat(name).st_size - entries.append((_srcstore, name, _filefull, None)) - if includeobsmarkers and repo.svfs.exists(b'obsstore'): - totalfilesize += repo.svfs.lstat(b'obsstore').st_size - entries.append((_srcstore, b'obsstore', _filefull, None)) + phase = not repo.publishing() + entries = _walkstreamfiles( + repo, + matcher, + phase=phase, + obsolescence=includeobsmarkers, + ) + for entry in entries: + yield (_srcstore, entry) + for name in cacheutil.cachetocopy(repo): if repo.cachevfs.exists(name): - totalfilesize += repo.cachevfs.lstat(name).st_size - entries.append((_srccache, name, _filefull, None)) - return entries, totalfilesize + # not really a StoreEntry, but close enough + entry = store.SimpleStoreEntry( + entry_path=name, + is_volatile=True, + ) + yield (_srccache, entry) def generatev2(repo, includes, excludes, includeobsmarkers): @@ -715,20 +806,64 @@ repo.ui.debug(b'scanning\n') - entries, totalfilesize = _v2_walk( + entries = _entries_walk( repo, includes=includes, excludes=excludes, includeobsmarkers=includeobsmarkers, ) - chunks = _emit2(repo, entries, totalfilesize) + chunks = _emit2(repo, entries) + first = next(chunks) + file_count, total_file_size = first + _test_sync_point_walk_1(repo) + _test_sync_point_walk_2(repo) + + return file_count, total_file_size, chunks + + +def generatev3(repo, includes, excludes, includeobsmarkers): + """Emit 
content for version 3 of a streaming clone. + + the data stream consists the following: + 1) A varint E containing the number of entries (can be 0), then E entries follow + 2) For each entry: + 2.1) The number of files in this entry (can be 0, but typically 1 or 2) + 2.2) For each file: + 2.2.1) A char representing the file destination (eg: store or cache) + 2.2.2) A varint N containing the length of the filename + 2.2.3) A varint M containing the length of file data + 2.2.4) N bytes containing the filename (the internal, store-agnostic form) + 2.2.5) M bytes containing the file data + + Returns the data iterator. + + XXX This format is experimental and subject to change. Here is a + XXX non-exhaustive list of things this format could do or change: + + - making it easier to write files in parallel + - holding the lock for a shorter time + - improving progress information + - ways to adjust the number of expected entries/files ? + """ + + with repo.lock(): + + repo.ui.debug(b'scanning\n') + + entries = _entries_walk( + repo, + includes=includes, + excludes=excludes, + includeobsmarkers=includeobsmarkers, + ) + chunks = _emit3(repo, list(entries)) first = next(chunks) assert first is None _test_sync_point_walk_1(repo) _test_sync_point_walk_2(repo) - return len(entries), totalfilesize, chunks + return chunks @contextlib.contextmanager @@ -812,6 +947,80 @@ progress.complete() +def consumev3(repo, fp): + """Apply the contents from a version 3 streaming clone. + + Data is read from an object that only needs to provide a ``read(size)`` + method. 
+ """ + with repo.lock(): + start = util.timer() + + entrycount = util.uvarintdecodestream(fp) + repo.ui.status(_(b'%d entries to transfer\n') % (entrycount)) + + progress = repo.ui.makeprogress( + _(b'clone'), + total=entrycount, + unit=_(b'entries'), + ) + progress.update(0) + bytes_transferred = 0 + + vfsmap = _makemap(repo) + # we keep repo.vfs out of the on purpose, there are too many dangers + # there (eg: .hg/hgrc), + # + # this assert is duplicated (from _makemap) as authors might think this + # is fine, while this is really not fine. + if repo.vfs in vfsmap.values(): + raise error.ProgrammingError( + b'repo.vfs must not be added to vfsmap for security reasons' + ) + + with repo.transaction(b'clone'): + ctxs = (vfs.backgroundclosing(repo.ui) for vfs in vfsmap.values()) + with nested(*ctxs): + + for i in range(entrycount): + filecount = util.uvarintdecodestream(fp) + if filecount == 0: + if repo.ui.debugflag: + repo.ui.debug(b'entry with no files [%d]\n' % (i)) + for i in range(filecount): + src = util.readexactly(fp, 1) + vfs = vfsmap[src] + namelen = util.uvarintdecodestream(fp) + datalen = util.uvarintdecodestream(fp) + + name = util.readexactly(fp, namelen) + + if repo.ui.debugflag: + msg = b'adding [%s] %s (%s)\n' + msg %= (src, name, util.bytecount(datalen)) + repo.ui.debug(msg) + bytes_transferred += datalen + + with vfs(name, b'w') as ofp: + for chunk in util.filechunkiter(fp, limit=datalen): + ofp.write(chunk) + progress.increment(step=1) + + # force @filecache properties to be reloaded from + # streamclone-ed file at next access + repo.invalidate(clearfilecache=True) + + elapsed = util.timer() - start + if elapsed <= 0: + elapsed = 0.001 + msg = _(b'transferred %s in %.1f seconds (%s/sec)\n') + byte_count = util.bytecount(bytes_transferred) + bytes_sec = util.bytecount(bytes_transferred / elapsed) + msg %= (byte_count, elapsed, bytes_sec) + repo.ui.status(msg) + progress.complete() + + def applybundlev2(repo, fp, filecount, filesize, requirements): 
from . import localrepo @@ -835,6 +1044,28 @@ nodemap.post_stream_cleanup(repo) +def applybundlev3(repo, fp, requirements): + from . import localrepo + + missingreqs = [r for r in requirements if r not in repo.supported] + if missingreqs: + msg = _(b'unable to apply stream clone: unsupported format: %s') + msg %= b', '.join(sorted(missingreqs)) + raise error.Abort(msg) + + consumev3(repo, fp) + + repo.requirements = new_stream_clone_requirements( + repo.requirements, + requirements, + ) + repo.svfs.options = localrepo.resolvestorevfsoptions( + repo.ui, repo.requirements, repo.features + ) + scmutil.writereporequirements(repo) + nodemap.post_stream_cleanup(repo) + + def _copy_files(src_vfs_map, dst_vfs_map, entries, progress): hardlink = [True] @@ -842,7 +1073,7 @@ hardlink[0] = False progress.topic = _(b'copying') - for k, path, size in entries: + for k, path in entries: src_vfs = src_vfs_map[k] dst_vfs = dst_vfs_map[k] src_path = src_vfs.join(path) @@ -893,17 +1124,19 @@ if os.path.exists(srcbookmarks): bm_count = 1 - entries, totalfilesize = _v2_walk( + entries = _entries_walk( src_repo, includes=None, excludes=None, includeobsmarkers=True, ) + entries = list(entries) src_vfs_map = _makemap(src_repo) dest_vfs_map = _makemap(dest_repo) + total_files = sum(len(e[1].files()) for e in entries) + bm_count progress = src_repo.ui.makeprogress( topic=_(b'linking'), - total=len(entries) + bm_count, + total=total_files, unit=_(b'files'), ) # copy files @@ -913,7 +1146,11 @@ # this would also requires checks that nobody is appending any data # to the files while we do the clone, so this is not done yet. We # could do this blindly when copying files. 
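The v3 frame layout that `_emit3` writes and `consumev3` reads (varint entry count, then per entry a varint file count, and per file a source byte, varint name length, varint data length, name, data) can be sketched end-to-end. The varint codec below is a hedged re-implementation of what `util.uvarintencode`/`util.uvarintdecodestream` are assumed to do (LEB128-style little-endian base-128); it is not the real utility code.

```python
import io


def uvarint_encode(n):
    """LEB128-style unsigned varint (sketch of util.uvarintencode)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set
        else:
            out.append(byte)
            return bytes(out)


def uvarint_decode(fp):
    """Read one varint from a stream (sketch of util.uvarintdecodestream)."""
    result = shift = 0
    while True:
        byte = fp.read(1)[0]
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return result
        shift += 7


def emit_v3(entries):
    """entries: list of entries, each a list of (src_byte, name, data)."""
    yield uvarint_encode(len(entries))
    for files in entries:
        yield uvarint_encode(len(files))
        for src, name, data in files:
            yield src                        # destination char (store/cache)
            yield uvarint_encode(len(name))  # N
            yield uvarint_encode(len(data))  # M
            yield name
            yield data


def consume_v3(fp):
    entries = []
    for _ in range(uvarint_decode(fp)):
        files = []
        for _ in range(uvarint_decode(fp)):
            src = fp.read(1)
            namelen = uvarint_decode(fp)
            datalen = uvarint_decode(fp)
            name = fp.read(namelen)
            files.append((src, name, fp.read(datalen)))
        entries.append(files)
    return entries
```

Note how, unlike v2, the v3 format needs no up-front total byte size: progress is tracked per entry, which is what lets the server hold the lock for a shorter time.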
- files = ((k, path, size) for k, path, ftype, size in entries) + files = [ + (vfs_key, f.unencoded_path) + for vfs_key, e in entries + for f in e.files() + ] hardlink = _copy_files(src_vfs_map, dest_vfs_map, files, progress) # copy bookmarks over @@ -926,7 +1163,7 @@ msg = b'linked %d files\n' else: msg = b'copied %d files\n' - src_repo.ui.debug(msg % (len(entries) + bm_count)) + src_repo.ui.debug(msg % total_files) with dest_repo.transaction(b"localclone") as tr: dest_repo.store.write(tr) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/strip.py --- a/mercurial/strip.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/strip.py Thu Jun 22 11:36:37 2023 +0200 @@ -36,7 +36,7 @@ currentbranch = repo[None].branch() if ( - util.safehasattr(repo, b'mq') + util.safehasattr(repo, 'mq') and p2 != repo.nullid and p2 in [x.node for x in repo.mq.applied] ): diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/subrepoutil.py --- a/mercurial/subrepoutil.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/subrepoutil.py Thu Jun 22 11:36:37 2023 +0200 @@ -384,7 +384,7 @@ Either absolute or relative the outermost repo""" parent = repo chunks = [] - while util.safehasattr(parent, b'_subparent'): + while util.safehasattr(parent, '_subparent'): source = urlutil.url(parent._subsource) chunks.append(bytes(source)) if source.isabs(): @@ -400,7 +400,7 @@ # type: (localrepo.localrepository) -> bytes """return path to this (sub)repo as seen from outermost repo""" parent = repo - while util.safehasattr(parent, b'_subparent'): + while util.safehasattr(parent, '_subparent'): parent = parent._subparent return repo.root[len(pathutil.normasprefix(parent.root)) :] @@ -415,7 +415,7 @@ # type: (localrepo.localrepository, bool, bool) -> Optional[bytes] """return pull/push path of repo - either based on parent repo .hgsub info or on the top repo config. 
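The many `safehasattr(x, b'…')` → `safehasattr(x, '…')` hunks above exist because Python 3 attribute lookup requires `str` names; the builtins reject `bytes` outright (the old bytes spellings relied on Mercurial's `pycompat` wrappers). A quick demonstration:

```python
class Repo:
    mq = object()  # stand-in attribute; 'mq' is just the name used above


repo = Repo()
assert hasattr(repo, 'mq')  # native str name: works

# bytes are not attribute names on Python 3: even with a default,
# getattr raises TypeError before attempting the lookup
try:
    getattr(repo, b'mq', None)
    bytes_rejected = False
except TypeError:
    bytes_rejected = True
assert bytes_rejected
```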
Abort or return None if no source found.""" - if util.safehasattr(repo, b'_subparent'): + if util.safehasattr(repo, '_subparent'): source = urlutil.url(repo._subsource) if source.isabs(): return bytes(source) @@ -428,7 +428,7 @@ return bytes(parent) else: # recursion reached top repo path = None - if util.safehasattr(repo, b'_subtoppath'): + if util.safehasattr(repo, '_subtoppath'): path = repo._subtoppath elif push and repo.ui.config(b'paths', b'default-push'): path = repo.ui.config(b'paths', b'default-push') diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/templatefilters.py --- a/mercurial/templatefilters.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/templatefilters.py Thu Jun 22 11:36:37 2023 +0200 @@ -339,14 +339,14 @@ raise error.ProgrammingError( b'Mercurial only does output with bytes: %r' % obj ) - elif util.safehasattr(obj, b'keys'): + elif util.safehasattr(obj, 'keys'): out = [ b'"%s": %s' % (encoding.jsonescape(k, paranoid=paranoid), json(v, paranoid)) for k, v in sorted(obj.items()) ] return b'{' + b', '.join(out) + b'}' - elif util.safehasattr(obj, b'__iter__'): + elif util.safehasattr(obj, '__iter__'): out = [json(i, paranoid) for i in obj] return b'[' + b', '.join(out) + b']' raise error.ProgrammingError(b'cannot encode %r' % obj) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/templates/json/map --- a/mercurial/templates/json/map Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/templates/json/map Thu Jun 22 11:36:37 2023 +0200 @@ -65,6 +65,7 @@ "tags": [{join(changesettag, ", ")}], "user": {author|utf8|json}, "parents": [{join(parent%changesetparent, ", ")}], + "children": [{join(child%changesetparent, ", ")}], "files": [{join(files, ", ")}], "diff": [{join(diff, ", ")}], "phase": {phase|json} diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/templateutil.py --- a/mercurial/templateutil.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/templateutil.py Thu Jun 22 11:36:37 2023 +0200 @@ -281,7 +281,7 @@ def getmember(self, context, mapping, key): # TODO: 
maybe split hybrid list/dict types? - if not util.safehasattr(self._values, b'get'): + if not util.safehasattr(self._values, 'get'): raise error.ParseError(_(b'not a dictionary')) key = unwrapastype(context, mapping, key, self._keytype) return self._wrapvalue(key, self._values.get(key)) @@ -301,13 +301,13 @@ def _wrapvalue(self, key, val): if val is None: return - if util.safehasattr(val, b'_makemap'): + if util.safehasattr(val, '_makemap'): # a nested hybrid list/dict, which has its own way of map operation return val return hybriditem(None, key, val, self._makemap) def filter(self, context, mapping, select): - if util.safehasattr(self._values, b'get'): + if util.safehasattr(self._values, 'get'): values = { k: v for k, v in self._values.items() @@ -341,7 +341,7 @@ def tovalue(self, context, mapping): # TODO: make it non-recursive for trivial lists/dicts xs = self._values - if util.safehasattr(xs, b'get'): + if util.safehasattr(xs, 'get'): return {k: unwrapvalue(context, mapping, v) for k, v in xs.items()} return [unwrapvalue(context, mapping, x) for x in xs] @@ -858,7 +858,7 @@ ) elif thing is None: pass - elif not util.safehasattr(thing, b'__iter__'): + elif not util.safehasattr(thing, '__iter__'): yield pycompat.bytestr(thing) else: for i in thing: @@ -868,7 +868,7 @@ yield i elif i is None: pass - elif not util.safehasattr(i, b'__iter__'): + elif not util.safehasattr(i, '__iter__'): yield pycompat.bytestr(i) else: for j in flatten(context, mapping, i): diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/thirdparty/sha1dc/lib/sha1.c --- a/mercurial/thirdparty/sha1dc/lib/sha1.c Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/thirdparty/sha1dc/lib/sha1.c Thu Jun 22 11:36:37 2023 +0200 @@ -102,6 +102,10 @@ */ #define SHA1DC_BIGENDIAN +#elif (defined(__APPLE__) && defined(__BIG_ENDIAN__) && !defined(SHA1DC_BIGENDIAN)) +/* older gcc compilers which are the default on Apple PPC do not define __BYTE_ORDER__ */ +#define SHA1DC_BIGENDIAN + /* Not under GCC-alike or glibc or 
*BSD or newlib or <processor whitelist> or <processor blacklist> */ #elif defined(SHA1DC_ON_INTEL_LIKE_PROCESSOR) /* diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/transaction.py --- a/mercurial/transaction.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/transaction.py Thu Jun 22 11:36:37 2023 +0200 @@ -316,7 +316,7 @@ self._abortcallback = {} def __repr__(self): - name = '/'.join(self._names) + name = b'/'.join(self._names) return '<transaction name=%s, count=%d, usages=%d>' % ( name, self._count, @@ -413,7 +413,7 @@ vfs = self._vfsmap[location] dirname, filename = vfs.split(file) - backupfilename = b"%s.backup.%s" % (self._journal, filename) + backupfilename = b"%s.backup.%s.bck" % (self._journal, filename) backupfile = vfs.reljoin(dirname, backupfilename) if vfs.exists(file): filepath = vfs.join(file) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/ui.py --- a/mercurial/ui.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/ui.py Thu Jun 22 11:36:37 2023 +0200 @@ -1107,10 +1107,16 @@ def fout(self): return self._fout + @util.propertycache + def _fout_is_a_tty(self): + self._isatty(self._fout) + @fout.setter def fout(self, f): self._fout = f self._fmsgout, self._fmsgerr = _selectmsgdests(self) + if '_fout_is_a_tty' in vars(self): + del self._fout_is_a_tty @property def ferr(self): @@ -1234,7 +1240,7 @@ return # inlined _writenobuf() for speed - if not opts.get('keepprogressbar', False): + if not opts.get('keepprogressbar', self._fout_is_a_tty): self._progclear() msg = b''.join(args) @@ -1273,7 +1279,7 @@ def _writenobuf(self, dest, *args: bytes, **opts: _MsgOpts) -> None: # update write() as well if you touch this code - if not opts.get('keepprogressbar', False): + if not opts.get('keepprogressbar', self._fout_is_a_tty): self._progclear() msg = b''.join(args) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/unionrepo.py --- a/mercurial/unionrepo.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/unionrepo.py Thu Jun 22 11:36:37 2023 +0200 @@ -270,8 +270,8 @@ def cancopy(self): return False - def peer(self, path=None): - return unionpeer(self,
path=None) + def peer(self, path=None, remotehidden=False): + return unionpeer(self, path=None, remotehidden=remotehidden) def getcwd(self): return encoding.getcwd() # always outside the repo diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/upgrade_utils/actions.py --- a/mercurial/upgrade_utils/actions.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/upgrade_utils/actions.py Thu Jun 22 11:36:37 2023 +0200 @@ -950,9 +950,6 @@ requirements in the returned set. """ return { - # The upgrade code does not yet support these experimental features. - # This is an artificial limitation. - requirements.TREEMANIFEST_REQUIREMENT, # This was a precursor to generaldelta and was never enabled by default. # It should (hopefully) not exist in the wild. b'parentdelta', @@ -1052,6 +1049,7 @@ requirements.SHARESAFE_REQUIREMENT, requirements.SPARSEREVLOG_REQUIREMENT, requirements.STORE_REQUIREMENT, + requirements.TREEMANIFEST_REQUIREMENT, requirements.NARROW_REQUIREMENT, } for name in compression.compengines: diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/upgrade_utils/engine.py --- a/mercurial/upgrade_utils/engine.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/upgrade_utils/engine.py Thu Jun 22 11:36:37 2023 +0200 @@ -11,10 +11,7 @@ from ..i18n import _ from ..pycompat import getattr from .. import ( - changelog, error, - filelog, - manifest, metadata, pycompat, requirements, @@ -47,32 +44,7 @@ return sidedatamod.get_sidedata_helpers(srcrepo, dstrepo._wanted_sidedata) -def _revlogfrompath(repo, rl_type, path): - """Obtain a revlog from a repo path. - - An instance of the appropriate class is returned. 
- """ - if rl_type & store.FILEFLAGS_CHANGELOG: - return changelog.changelog(repo.svfs) - elif rl_type & store.FILEFLAGS_MANIFESTLOG: - mandir = b'' - if b'/' in path: - mandir = path.rsplit(b'/', 1)[0] - return manifest.manifestrevlog( - repo.nodeconstants, repo.svfs, tree=mandir - ) - else: - # drop the extension and the `data/` prefix - path_part = path.rsplit(b'.', 1)[0].split(b'/', 1) - if len(path_part) < 2: - msg = _(b'cannot recognize revlog from filename: %s') - msg %= path - raise error.Abort(msg) - path = path_part[1] - return filelog.filelog(repo.svfs, path) - - -def _copyrevlog(tr, destrepo, oldrl, rl_type, unencodedname): +def _copyrevlog(tr, destrepo, oldrl, entry): """copy all relevant files for `oldrl` into `destrepo` store Files are copied "as is" without any transformation. The copy is performed @@ -80,7 +52,7 @@ content is compatible with format of the destination repository. """ oldrl = getattr(oldrl, '_revlog', oldrl) - newrl = _revlogfrompath(destrepo, rl_type, unencodedname) + newrl = entry.get_revlog_instance(destrepo) newrl = getattr(newrl, '_revlog', newrl) oldvfs = oldrl.opener @@ -98,7 +70,8 @@ if copydata: util.copyfile(olddata, newdata) - if rl_type & store.FILEFLAGS_FILELOG: + if entry.is_filelog: + unencodedname = entry.main_file_path() destrepo.svfs.fncache.add(unencodedname) if copydata: destrepo.svfs.fncache.add(unencodedname[:-2] + b'.d') @@ -113,18 +86,18 @@ ) -def matchrevlog(revlogfilter, rl_type): +def matchrevlog(revlogfilter, entry): """check if a revlog is selected for cloning. In other words, are there any updates which need to be done on revlog or it can be blindly copied. 
The store entry is checked against the passed filter""" - if rl_type & store.FILEFLAGS_CHANGELOG: + if entry.is_changelog: return UPGRADE_CHANGELOG in revlogfilter - elif rl_type & store.FILEFLAGS_MANIFESTLOG: + elif entry.is_manifestlog: return UPGRADE_MANIFEST in revlogfilter - assert rl_type & store.FILEFLAGS_FILELOG + assert entry.is_filelog return UPGRADE_FILELOGS in revlogfilter @@ -133,19 +106,20 @@ dstrepo, tr, old_revlog, - rl_type, - unencoded, + entry, upgrade_op, sidedata_helpers, oncopiedrevision, ): """returns the new revlog object created""" newrl = None - if matchrevlog(upgrade_op.revlogs_to_process, rl_type): + revlog_path = entry.main_file_path() + if matchrevlog(upgrade_op.revlogs_to_process, entry): ui.note( - _(b'cloning %d revisions from %s\n') % (len(old_revlog), unencoded) + _(b'cloning %d revisions from %s\n') + % (len(old_revlog), revlog_path) ) - newrl = _revlogfrompath(dstrepo, rl_type, unencoded) + newrl = entry.get_revlog_instance(dstrepo) old_revlog.clone( tr, newrl, @@ -156,10 +130,10 @@ ) else: msg = _(b'blindly copying %s containing %i revisions\n') - ui.note(msg % (unencoded, len(old_revlog))) - _copyrevlog(tr, dstrepo, old_revlog, rl_type, unencoded) + ui.note(msg % (revlog_path, len(old_revlog))) + _copyrevlog(tr, dstrepo, old_revlog, entry) - newrl = _revlogfrompath(dstrepo, rl_type, unencoded) + newrl = entry.get_revlog_instance(dstrepo) return newrl @@ -200,22 +174,11 @@ # Perform a pass to collect metadata. This validates we can open all # source files and allows a unified progress bar to be displayed. - for rl_type, unencoded, size in alldatafiles: - if not rl_type & store.FILEFLAGS_REVLOG_MAIN: + for entry in alldatafiles: + if not entry.is_revlog: continue - # the store.walk function will wrongly pickup transaction backup and - # get confused. As a quick fix for 5.9 release, we ignore those. 
- # (this is not a module constants because it seems better to keep the - # hack together) - skip_undo = ( - b'undo.backup.00changelog.i', - b'undo.backup.00manifest.i', - ) - if unencoded in skip_undo: - continue - - rl = _revlogfrompath(srcrepo, rl_type, unencoded) + rl = entry.get_revlog_instance(srcrepo) info = rl.storageinfo( exclusivefiles=True, @@ -232,19 +195,19 @@ srcrawsize += rawsize # This is for the separate progress bars. - if rl_type & store.FILEFLAGS_CHANGELOG: - changelogs[unencoded] = rl_type + if entry.is_changelog: + changelogs[entry.target_id] = entry crevcount += len(rl) csrcsize += datasize crawsize += rawsize - elif rl_type & store.FILEFLAGS_MANIFESTLOG: - manifests[unencoded] = rl_type + elif entry.is_manifestlog: + manifests[entry.target_id] = entry mcount += 1 mrevcount += len(rl) msrcsize += datasize mrawsize += rawsize - elif rl_type & store.FILEFLAGS_FILELOG: - filelogs[unencoded] = rl_type + elif entry.is_filelog: + filelogs[entry.target_id] = entry fcount += 1 frevcount += len(rl) fsrcsize += datasize @@ -289,16 +252,15 @@ ) ) progress = srcrepo.ui.makeprogress(_(b'file revisions'), total=frevcount) - for unencoded, rl_type in sorted(filelogs.items()): - oldrl = _revlogfrompath(srcrepo, rl_type, unencoded) + for target_id, entry in sorted(filelogs.items()): + oldrl = entry.get_revlog_instance(srcrepo) newrl = _perform_clone( ui, dstrepo, tr, oldrl, - rl_type, - unencoded, + entry, upgrade_op, sidedata_helpers, oncopiedrevision, @@ -331,15 +293,14 @@ progress = srcrepo.ui.makeprogress( _(b'manifest revisions'), total=mrevcount ) - for unencoded, rl_type in sorted(manifests.items()): - oldrl = _revlogfrompath(srcrepo, rl_type, unencoded) + for target_id, entry in sorted(manifests.items()): + oldrl = entry.get_revlog_instance(srcrepo) newrl = _perform_clone( ui, dstrepo, tr, oldrl, - rl_type, - unencoded, + entry, upgrade_op, sidedata_helpers, oncopiedrevision, @@ -371,15 +332,14 @@ progress = srcrepo.ui.makeprogress( _(b'changelog 
revisions'), total=crevcount ) - for unencoded, rl_type in sorted(changelogs.items()): - oldrl = _revlogfrompath(srcrepo, rl_type, unencoded) + for target_id, entry in sorted(changelogs.items()): + oldrl = entry.get_revlog_instance(srcrepo) newrl = _perform_clone( ui, dstrepo, tr, oldrl, - rl_type, - unencoded, + entry, upgrade_op, sidedata_helpers, oncopiedrevision, @@ -410,7 +370,7 @@ are cloned""" for path, kind, st in sorted(srcrepo.store.vfs.readdir(b'', stat=True)): # don't copy revlogs as they are already cloned - if store.revlog_type(path) is not None: + if store.is_revlog_file(path): continue # Skip transaction related files. if path.startswith(b'undo'): diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/url.py --- a/mercurial/url.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/url.py Thu Jun 22 11:36:37 2023 +0200 @@ -190,7 +190,7 @@ return _sendfile -has_https = util.safehasattr(urlreq, b'httpshandler') +has_https = util.safehasattr(urlreq, 'httpshandler') class httpconnection(keepalive.HTTPConnection): diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/utils/urlutil.py --- a/mercurial/utils/urlutil.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/utils/urlutil.py Thu Jun 22 11:36:37 2023 +0200 @@ -233,7 +233,7 @@ self.path = path # leave the query string escaped - for a in (b'user', b'passwd', b'host', b'port', b'path', b'fragment'): + for a in ('user', 'passwd', 'host', 'port', 'path', 'fragment'): v = getattr(self, a) if v is not None: setattr(self, a, urlreq.unquote(v)) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/verify.py --- a/mercurial/verify.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/verify.py Thu Jun 22 11:36:37 2023 +0200 @@ -407,10 +407,13 @@ subdirs = set() revlogv1 = self.revlogv1 undecodable = [] - for t, f, size in repo.store.datafiles(undecodable=undecodable): - if (size > 0 or not revlogv1) and f.startswith(b'meta/'): - storefiles.add(_normpath(f)) - subdirs.add(os.path.dirname(f)) + for entry in 
repo.store.data_entries(undecodable=undecodable): + for file_ in entry.files(): + f = file_.unencoded_path + size = file_.file_size(repo.store.vfs) + if (size > 0 or not revlogv1) and f.startswith(b'meta/'): + storefiles.add(_normpath(f)) + subdirs.add(os.path.dirname(f)) for f in undecodable: self._err(None, _(b"cannot decode filename '%s'") % f) subdirprogress = ui.makeprogress( @@ -472,9 +475,12 @@ storefiles = set() undecodable = [] - for t, f, size in repo.store.datafiles(undecodable=undecodable): - if (size > 0 or not revlogv1) and f.startswith(b'data/'): - storefiles.add(_normpath(f)) + for entry in repo.store.data_entries(undecodable=undecodable): + for file_ in entry.files(): + size = file_.file_size(repo.store.vfs) + f = file_.unencoded_path + if (size > 0 or not revlogv1) and f.startswith(b'data/'): + storefiles.add(_normpath(f)) for f in undecodable: self._err(None, _(b"cannot decode filename '%s'") % f) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/wireprotoserver.py --- a/mercurial/wireprotoserver.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/wireprotoserver.py Thu Jun 22 11:36:37 2023 +0200 @@ -317,7 +317,8 @@ proto.checkperm(wireprotov1server.commands[cmd].permission) - rsp = wireprotov1server.dispatch(repo, proto, cmd) + accesshidden = hgwebcommon.hashiddenaccess(repo, req) + rsp = wireprotov1server.dispatch(repo, proto, cmd, accesshidden) if isinstance(rsp, bytes): setresponse(HTTP_OK, HGTYPE, bodybytes=rsp) @@ -445,7 +446,7 @@ pass -def _runsshserver(ui, repo, fin, fout, ev): +def _runsshserver(ui, repo, fin, fout, ev, accesshidden=False): # This function operates like a state machine of sorts. 
The following # states are defined: # @@ -486,7 +487,9 @@ _sshv1respondbytes(fout, b'') continue - rsp = wireprotov1server.dispatch(repo, proto, request) + rsp = wireprotov1server.dispatch( + repo, proto, request, accesshidden=accesshidden + ) repo.ui.fout.flush() repo.ui.ferr.flush() @@ -521,10 +524,11 @@ class sshserver: - def __init__(self, ui, repo, logfh=None): + def __init__(self, ui, repo, logfh=None, accesshidden=False): self._ui = ui self._repo = repo self._fin, self._fout = ui.protectfinout() + self._accesshidden = accesshidden # Log write I/O to stdout and stderr if configured. if logfh: @@ -541,4 +545,6 @@ def serveuntil(self, ev): """Serve until a threading.Event is set.""" - _runsshserver(self._ui, self._repo, self._fin, self._fout, ev) + _runsshserver( + self._ui, self._repo, self._fin, self._fout, ev, self._accesshidden + ) diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/wireprotov1peer.py --- a/mercurial/wireprotov1peer.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/wireprotov1peer.py Thu Jun 22 11:36:37 2023 +0200 @@ -338,8 +338,24 @@ # Begin of ipeercommands interface. 
def clonebundles(self): - self.requirecap(b'clonebundles', _(b'clone bundles')) - return self._call(b'clonebundles') + if self.capable(b'clonebundles_manifest'): + return self._call(b'clonebundles_manifest') + else: + self.requirecap(b'clonebundles', _(b'clone bundles')) + return self._call(b'clonebundles') + + def _finish_inline_clone_bundle(self, stream): + pass # allow override for httppeer + + def get_cached_bundle_inline(self, path): + stream = self._callstream(b"get_cached_bundle_inline", path=path) + length = util.uvarintdecodestream(stream) + + # SSH streams will block if reading more than length + for chunk in util.filechunkiter(stream, limit=length): + yield chunk + + self._finish_inline_clone_bundle(stream) @batchable def lookup(self, key): @@ -483,7 +499,7 @@ else: heads = wireprototypes.encodelist(heads) - if util.safehasattr(bundle, b'deltaheader'): + if util.safehasattr(bundle, 'deltaheader'): # this a bundle10, do the old style call sequence ret, output = self._callpush(b"unbundle", bundle, heads=heads) if ret == b"": diff -r 41b9eb302d95 -r 9a4db474ef1a mercurial/wireprotov1server.py --- a/mercurial/wireprotov1server.py Thu Jun 22 11:18:47 2023 +0200 +++ b/mercurial/wireprotov1server.py Thu Jun 22 11:36:37 2023 +0200 @@ -21,8 +21,10 @@ encoding, error, exchange, + hook, pushkey as pushkeymod, pycompat, + repoview, requirements as requirementsmod, streamclone, util, @@ -60,7 +62,7 @@ # wire protocol command can either return a string or one of these classes. -def getdispatchrepo(repo, proto, command): +def getdispatchrepo(repo, proto, command, accesshidden=False): """Obtain the repo used for processing wire protocol commands. The intent of this function is to serve as a monkeypatch point for @@ -68,11 +70,21 @@ specialized circumstances. """ viewconfig = repo.ui.config(b'server', b'view') + + # Only works if the filter actually supports being upgraded to show hidden + # changesets. 
+ if ( + accesshidden + and viewconfig is not None + and viewconfig + b'.hidden' in repoview.filtertable + ): + viewconfig += b'.hidden' + return repo.filtered(viewconfig) -def dispatch(repo, proto, command): - repo = getdispatchrepo(repo, proto, command) +def dispatch(repo, proto, command, accesshidden=False): + repo = getdispatchrepo(repo, proto, command, accesshidden=accesshidden) func, spec = commands[command] args = proto.getargs(spec) @@ -253,8 +265,59 @@ return wireprototypes.bytesresponse(b''.join(r)) +@wireprotocommand(b'get_cached_bundle_inline', b'path', permission=b'pull') +def get_cached_bundle_inline(repo, proto, path): + """ + Server command to send a clonebundle to the client + """ + if hook.hashook(repo.ui, b'pretransmit-inline-clone-bundle'): + hook.hook( + repo.ui, + repo, + b'pretransmit-inline-clone-bundle', + throw=True, + clonebundlepath=path, + ) + + bundle_dir = repo.vfs.join(bundlecaches.BUNDLE_CACHE_DIR) + clonebundlepath = repo.vfs.join(bundle_dir, path) + if not repo.vfs.exists(clonebundlepath): + raise error.Abort(b'clonebundle %s does not exist' % path) + + clonebundles_dir = os.path.realpath(bundle_dir) + if not os.path.realpath(clonebundlepath).startswith(clonebundles_dir): + raise error.Abort(b'clonebundle %s is using an illegal path' % path) + + def generator(vfs, bundle_path): + with vfs(bundle_path) as f: + length = os.fstat(f.fileno())[6] + yield util.uvarintencode(length) + for chunk in util.filechunkiter(f): + yield chunk + + stream = generator(repo.vfs, clonebundlepath) + return wireprototypes.streamres(gen=stream, prefer_uncompressed=True) + + @wireprotocommand(b'clonebundles', b'', permission=b'pull') def clonebundles(repo, proto): + """A legacy version of clonebundles_manifest + + This version filtered out new url scheme (like peer-bundle-cache://) to + avoid confusion in older clients. 
+ """ + manifest_contents = bundlecaches.get_manifest(repo) + # Filter out peer-bundle-cache:// entries + modified_manifest = [] + for line in manifest_contents.splitlines(): + if line.startswith(bundlecaches.CLONEBUNDLESCHEME): + continue + modified_manifest.append(line) + return wireprototypes.bytesresponse(b'\n'.join(modified_manifest)) + + +@wireprotocommand(b'clonebundles_manifest', b'*', permission=b'pull') +def clonebundles_2(repo, proto, args): """Server command for returning info for available bundles to seed clones. Clients will parse this response and determine what bundle to fetch. @@ -262,10 +325,13 @@ Extensions may wrap this command to filter or dynamically emit data depending on the request. e.g. you could advertise URLs for the closest data center given the client's IP address. + + The only filter on the server side is filtering out inline clonebundles + in case a client does not support them. + Otherwise, older clients would retrieve and error out on those. """ - return wireprototypes.bytesresponse( - repo.vfs.tryread(bundlecaches.CB_MANIFEST_FILE) - ) + manifest_contents = bundlecaches.get_manifest(repo) + return wireprototypes.bytesresponse(manifest_contents) wireprotocaps = [ @@ -655,7 +721,7 @@ r = exchange.unbundle( repo, gen, their_heads, b'serve', proto.client() ) - if util.safehasattr(r, b'addpart'): + if util.safehasattr(r, 'addpart'): # The return looks streamable, we are in the bundle2 case # and should return a stream. 
return wireprototypes.streamreslegacy(gen=r.getchunks()) diff -r 41b9eb302d95 -r 9a4db474ef1a rust/Cargo.lock --- a/rust/Cargo.lock Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/Cargo.lock Thu Jun 22 11:36:37 2023 +0200 @@ -3,12 +3,6 @@ version = 3 [[package]] -name = "Inflector" -version = "0.11.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fe438c63458706e03479442743baae6c88256498e6431708f6dfc520a26515d3" - -[[package]] name = "adler" version = "1.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" @@ -35,12 +29,6 @@ ] [[package]] -name = "aliasable" -version = "0.1.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "250f629c0161ad8107cf89319e990051fae62832fd343083bea452d93e2205fd" - -[[package]] name = "android_system_properties" version = "0.1.5" source = "registry+https://github.com/rust-lang/crates.io-index" @@ -539,7 +527,6 @@ "logging_timer", "memmap2", "once_cell", - "ouroboros", "pretty_assertions", "rand 0.8.5", "rand_distr", @@ -547,6 +534,7 @@ "rayon", "regex", "same-file", + "self_cell", "sha-1 0.10.0", "tempfile", "thread_local", @@ -809,29 +797,6 @@ checksum = "7b5bf27447411e9ee3ff51186bf7a08e16c341efdde93f4d823e8844429bed7e" [[package]] -name = "ouroboros" -version = "0.15.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dfbb50b356159620db6ac971c6d5c9ab788c9cc38a6f49619fca2a27acb062ca" -dependencies = [ - "aliasable", - "ouroboros_macro", -] - -[[package]] -name = "ouroboros_macro" -version = "0.15.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4a0d9d1a6191c4f391f87219d1ea42b23f09ee84d64763cd05ee6ea88d9f384d" -dependencies = [ - "Inflector", - "proc-macro-error", - "proc-macro2", - "quote", - "syn", -] - -[[package]] name = "output_vt100" version = "0.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" @@ -1095,8 +1060,8 @@ "logging_timer", "rayon", "regex", - "users", "which", + 
"whoami", ] [[package]] @@ -1130,6 +1095,12 @@ checksum = "9c8132065adcfd6e02db789d9285a0deb2f3fcb04002865ab67d5fb103533898" [[package]] +name = "self_cell" +version = "1.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4a3926e239738d36060909ffe6f511502f92149a45a1fade7fe031cb2d33e88b" + +[[package]] name = "semver" version = "1.0.14" source = "registry+https://github.com/rust-lang/crates.io-index" @@ -1271,16 +1242,6 @@ checksum = "c0edd1e5b14653f783770bce4a4dabb4a5108a5370a5f5d8cfe8710c361f6c8b" [[package]] -name = "users" -version = "0.11.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "24cc0f6d6f267b73e5a2cadf007ba8f9bc39c6a6f9666f8cf25ea809a153b032" -dependencies = [ - "libc", - "log", -] - -[[package]] name = "vcpkg" version = "0.2.15" source = "registry+https://github.com/rust-lang/crates.io-index" @@ -1376,6 +1337,16 @@ checksum = "1c38c045535d93ec4f0b4defec448e4291638ee608530863b1e2ba115d4fff7f" [[package]] +name = "web-sys" +version = "0.3.60" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bcda906d8be16e728fd5adc5b729afad4e444e106ab28cd1c7256e54fa61510f" +dependencies = [ + "js-sys", + "wasm-bindgen", +] + +[[package]] name = "which" version = "4.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" @@ -1387,6 +1358,16 @@ ] [[package]] +name = "whoami" +version = "1.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2c70234412ca409cc04e864e89523cb0fc37f5e1344ebed5a3ebf4192b6b9f68" +dependencies = [ + "wasm-bindgen", + "web-sys", +] + +[[package]] name = "winapi" version = "0.3.9" source = "registry+https://github.com/rust-lang/crates.io-index" diff -r 41b9eb302d95 -r 9a4db474ef1a rust/README.rst --- a/rust/README.rst Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/README.rst Thu Jun 22 11:36:37 2023 +0200 @@ -7,17 +7,19 @@ improves performance in some areas. 
There are currently four independent Rust projects: + - chg. An implementation of chg, in Rust instead of C. - hgcli. A project that provides a (mostly) self-contained "hg" binary, for ease of deployment and a bit of speed, using PyOxidizer. See - hgcli/README.md. + ``hgcli/README.md``. - hg-core (and hg-cpython): implementation of some functionality of mercurial in Rust, e.g. ancestry computations in revision graphs, status or pull discovery. The top-level ``Cargo.toml`` file defines a workspace containing these crates. - rhg: a pure Rust implementation of Mercurial, with a fallback mechanism for - unsupported invocations. It reuses the logic `hg-core` but completely forgoes - interaction with Python. See `rust/rhg/README.md` for more details. + unsupported invocations. It reuses the logic of ``hg-core`` but + completely forgoes interaction with Python. See + ``rust/rhg/README.md`` for more details. Using Rust code =============== @@ -41,10 +43,10 @@ ================ In the future, compile-time opt-ins may be added -to the `features` section in ``hg-cpython/Cargo.toml``. +to the ``features`` section in ``hg-cpython/Cargo.toml``. -To use features from the Makefile, use the `HG_RUST_FEATURES` environment -variable: for instance `HG_RUST_FEATURES="some-feature other-feature"` +To use features from the Makefile, use the ``HG_RUST_FEATURES`` environment +variable: for instance ``HG_RUST_FEATURES="some-feature other-feature"``. Profiling ========= @@ -57,7 +59,7 @@ Creating a ``.cargo/config`` file with the following content enables debug information in optimized builds. This makes profiles more informative with source file name and line number for Rust stack frames and
+(in some cases) stack frames for Rust functions that have been inlined:: [profile.release] debug = true @@ -69,7 +71,7 @@ as opposed to tools for native code like ``perf``, which attribute time to the python interpreter instead of python functions). -Example usage: +Example usage:: $ make PURE=--rust local # Don't forget to recompile after a code change $ py-spy record --native --output /tmp/profile.svg -- ./hg ... @@ -77,9 +79,25 @@ Developing Rust =============== -The current version of Rust in use is ``1.61.0``, because it's what Debian -testing has. You can use ``rustup override set 1.61.0`` at the root of the repo -to make it easier on you. +Minimum Supported Rust Version +------------------------------ + +The minimum supported Rust version (MSRV) is specified in the `Clippy`_ +configuration file at ``rust/clippy.toml``. It is set to be ``1.61.0`` as of +this writing, but keep in mind that the authoritative value is the one +from the configuration file. + +We bump it from time to time, with the general rule being that our +MSRV should not be greater than the version of the Rust toolchain +shipping with Debian testing, so that the Rust-enhanced Mercurial can +eventually be packaged in Debian. + +To ensure that you are not depending on features introduced in later +versions, you can issue ``rustup override set x.y.z`` at the root of +the repository. + +Build and development +--------------------- Go to the ``hg-cpython`` folder:: @@ -117,8 +135,28 @@ using the nightly version because it has been stable enough and provides comment folding. -To format the entire Rust workspace:: +Our CI enforces that the code does not need reformatting. Before +submitting your changes, please format the entire Rust workspace by running:: + $ cargo +nightly fmt This requires you to have the nightly toolchain installed. + +Linting: code sanity +-------------------- + +We're using `Clippy`_, the standard code diagnosis tool of the Rust +community.
+ +Our CI enforces that the code is free of Clippy warnings, so you might +want to run it on your side before submitting your changes. Simply do:: + + % cargo clippy + +from the top of the Rust workspace. Clippy is part of the default +``rustup`` install, so it should work right away. If it does +not, you can install it with ``rustup component add``. + + +.. _Clippy: https://doc.rust-lang.org/stable/clippy/ diff -r 41b9eb302d95 -r 9a4db474ef1a rust/clippy.toml --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/rust/clippy.toml Thu Jun 22 11:36:37 2023 +0200 @@ -0,0 +1,1 @@ +msrv = "1.61.0" diff -r 41b9eb302d95 -r 9a4db474ef1a rust/hg-core/Cargo.toml --- a/rust/hg-core/Cargo.toml Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-core/Cargo.toml Thu Jun 22 11:36:37 2023 +0200 @@ -20,12 +20,12 @@ lazy_static = "1.4.0" libc = "0.2.137" logging_timer = "1.1.0" -ouroboros = "0.15.5" rand = "0.8.5" rand_pcg = "0.3.1" rand_distr = "0.4.3" rayon = "1.7.0" regex = "1.7.0" +self_cell = "1.0" sha-1 = "0.10.0" twox-hash = "1.6.3" same-file = "1.0.6" diff -r 41b9eb302d95 -r 9a4db474ef1a rust/hg-core/src/checkexec.rs --- a/rust/hg-core/src/checkexec.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-core/src/checkexec.rs Thu Jun 22 11:36:37 2023 +0200 @@ -112,8 +112,10 @@ Ok(false) } -/// This function is a rust rewrite of [checkexec] function from [posix.py] -/// Returns true if the filesystem supports execute permissions. +/// This function is a Rust rewrite of the `checkexec` function from +/// `posix.py`.
pub fn check_exec(path: impl AsRef) -> bool { check_exec_impl(path).unwrap_or(false) } diff -r 41b9eb302d95 -r 9a4db474ef1a rust/hg-core/src/dirstate/parsers.rs --- a/rust/hg-core/src/dirstate/parsers.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-core/src/dirstate/parsers.rs Thu Jun 22 11:36:37 2023 +0200 @@ -61,12 +61,21 @@ Option<&'a HgPath>, ) -> Result<(), HgError>, ) -> Result<&'a DirstateParents, HgError> { - let (parents, rest) = DirstateParents::from_bytes(contents) - .map_err(|_| HgError::corrupted("Too little data for dirstate."))?; + let mut entry_idx = 0; + let original_len = contents.len(); + let (parents, rest) = + DirstateParents::from_bytes(contents).map_err(|_| { + HgError::corrupted(format!( + "Too little data for dirstate: {} bytes.", + original_len + )) + })?; contents = rest; while !contents.is_empty() { let (raw_entry, rest) = RawEntry::from_bytes(contents) - .map_err(|_| HgError::corrupted("Overflow in dirstate."))?; + .map_err(|_| HgError::corrupted(format!( + "dirstate corrupted: ran out of bytes at entry header {}, offset {}.", + entry_idx, original_len-contents.len())))?; let entry = DirstateEntry::from_v1_data( EntryState::try_from(raw_entry.state)?, @@ -74,9 +83,14 @@ raw_entry.size.get(), raw_entry.mtime.get(), ); + let filename_len = raw_entry.length.get() as usize; let (paths, rest) = - u8::slice_from_bytes(rest, raw_entry.length.get() as usize) - .map_err(|_| HgError::corrupted("Overflow in dirstate."))?; + u8::slice_from_bytes(rest, filename_len) + .map_err(|_| + HgError::corrupted(format!( + "dirstate corrupted: ran out of bytes at entry {}, offset {} (expected {} bytes).", + entry_idx, original_len-contents.len(), filename_len)) + )?; // `paths` is either a single path, or two paths separated by a NULL // byte @@ -87,6 +101,7 @@ let copy_source = iter.next().map(HgPath::new); each_entry(path, &entry, copy_source)?; + entry_idx += 1; contents = rest; } Ok(parents) diff -r 41b9eb302d95 -r 9a4db474ef1a 
rust/hg-core/src/dirstate_tree/owning.rs --- a/rust/hg-core/src/dirstate_tree/owning.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-core/src/dirstate_tree/owning.rs Thu Jun 22 11:36:37 2023 +0200 @@ -1,19 +1,18 @@ use crate::{DirstateError, DirstateParents}; use super::dirstate_map::DirstateMap; +use self_cell::self_cell; use std::ops::Deref; -use ouroboros::self_referencing; - -/// Keep a `DirstateMap<'on_disk>` next to the `on_disk` buffer that it -/// borrows. -#[self_referencing] -pub struct OwningDirstateMap { - on_disk: Box + Send>, - #[borrows(on_disk)] - #[covariant] - map: DirstateMap<'this>, -} +self_cell!( + /// Keep a `DirstateMap<'owner>` next to the `owner` buffer that it + /// borrows. + pub struct OwningDirstateMap { + owner: Box + Send>, + #[covariant] + dependent: DirstateMap, + } +); impl OwningDirstateMap { pub fn new_empty(on_disk: OnDisk) -> Self @@ -22,11 +21,7 @@ { let on_disk = Box::new(on_disk); - OwningDirstateMapBuilder { - on_disk, - map_builder: |bytes| DirstateMap::empty(bytes), - } - .build() + OwningDirstateMap::new(on_disk, |bytes| DirstateMap::empty(bytes)) } pub fn new_v1( @@ -40,16 +35,12 @@ let mut parents = DirstateParents::NULL; Ok(( - OwningDirstateMapTryBuilder { - on_disk, - map_builder: |bytes| { - DirstateMap::new_v1(bytes, identity).map(|(dmap, p)| { - parents = p.unwrap_or(DirstateParents::NULL); - dmap - }) - }, - } - .try_build()?, + OwningDirstateMap::try_new(on_disk, |bytes| { + DirstateMap::new_v1(bytes, identity).map(|(dmap, p)| { + parents = p.unwrap_or(DirstateParents::NULL); + dmap + }) + })?, parents, )) } @@ -66,28 +57,24 @@ { let on_disk = Box::new(on_disk); - OwningDirstateMapTryBuilder { - on_disk, - map_builder: |bytes| { - DirstateMap::new_v2(bytes, data_size, metadata, uuid, identity) - }, - } - .try_build() + OwningDirstateMap::try_new(on_disk, |bytes| { + DirstateMap::new_v2(bytes, data_size, metadata, uuid, identity) + }) } pub fn with_dmap_mut( &mut self, f: impl FnOnce(&mut DirstateMap) -> R, ) 
-> R { - self.with_map_mut(f) + self.with_dependent_mut(|_owner, dmap| f(dmap)) } pub fn get_map(&self) -> &DirstateMap { - self.borrow_map() + self.borrow_dependent() } pub fn on_disk(&self) -> &[u8] { - self.borrow_on_disk() + self.borrow_owner() } pub fn old_uuid(&self) -> Option<&[u8]> { diff -r 41b9eb302d95 -r 9a4db474ef1a rust/hg-core/src/filepatterns.rs --- a/rust/hg-core/src/filepatterns.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-core/src/filepatterns.rs Thu Jun 22 11:36:37 2023 +0200 @@ -50,6 +50,8 @@ Glob, /// a path relative to repository root, which is matched recursively Path, + /// a single exact path relative to repository root + FilePath, /// A path relative to cwd RelPath, /// an unrooted glob (*.rs matches Rust files in all dirs) @@ -157,6 +159,7 @@ match kind { b"re:" => Ok(PatternSyntax::Regexp), b"path:" => Ok(PatternSyntax::Path), + b"filepath:" => Ok(PatternSyntax::FilePath), b"relpath:" => Ok(PatternSyntax::RelPath), b"rootfilesin:" => Ok(PatternSyntax::RootFiles), b"relglob:" => Ok(PatternSyntax::RelGlob), @@ -252,7 +255,8 @@ } PatternSyntax::Include | PatternSyntax::SubInclude - | PatternSyntax::ExpandedSubInclude(_) => unreachable!(), + | PatternSyntax::ExpandedSubInclude(_) + | PatternSyntax::FilePath => unreachable!(), } } @@ -319,9 +323,9 @@ } _ => pattern.to_owned(), }; - if *syntax == PatternSyntax::RootGlob - && !pattern.iter().any(|b| GLOB_SPECIAL_CHARACTERS.contains(b)) - { + let is_simple_rootglob = *syntax == PatternSyntax::RootGlob + && !pattern.iter().any(|b| GLOB_SPECIAL_CHARACTERS.contains(b)); + if is_simple_rootglob || syntax == &PatternSyntax::FilePath { Ok(None) } else { let mut entry = entry.clone(); diff -r 41b9eb302d95 -r 9a4db474ef1a rust/hg-core/src/lib.rs --- a/rust/hg-core/src/lib.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-core/src/lib.rs Thu Jun 22 11:36:37 2023 +0200 @@ -63,7 +63,6 @@ #[derive(Debug, PartialEq)] pub enum DirstateMapError { PathNotFound(HgPathBuf), - EmptyPath, InvalidPath(HgPathError), 
} @@ -73,9 +72,6 @@ DirstateMapError::PathNotFound(_) => { f.write_str("expected a value, found none") } - DirstateMapError::EmptyPath => { - f.write_str("Overflow in dirstate.") - } DirstateMapError::InvalidPath(path_error) => path_error.fmt(f), } } diff -r 41b9eb302d95 -r 9a4db474ef1a rust/hg-core/src/matchers.rs --- a/rust/hg-core/src/matchers.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-core/src/matchers.rs Thu Jun 22 11:36:37 2023 +0200 @@ -708,7 +708,9 @@ } roots.push(root); } - PatternSyntax::Path | PatternSyntax::RelPath => { + PatternSyntax::Path + | PatternSyntax::RelPath + | PatternSyntax::FilePath => { let pat = HgPath::new(if pattern == b"." { &[] as &[u8] } else { @@ -1223,6 +1225,40 @@ VisitChildrenSet::This ); + // VisitchildrensetFilePath + let matcher = IncludeMatcher::new(vec![IgnorePattern::new( + PatternSyntax::FilePath, + b"dir/z", + Path::new(""), + )]) + .unwrap(); + + let mut set = HashSet::new(); + set.insert(HgPathBuf::from_bytes(b"dir")); + assert_eq!( + matcher.visit_children_set(HgPath::new(b"")), + VisitChildrenSet::Set(set) + ); + assert_eq!( + matcher.visit_children_set(HgPath::new(b"folder")), + VisitChildrenSet::Empty + ); + let mut set = HashSet::new(); + set.insert(HgPathBuf::from_bytes(b"z")); + assert_eq!( + matcher.visit_children_set(HgPath::new(b"dir")), + VisitChildrenSet::Set(set) + ); + // OPT: these should probably be set(). 
+ assert_eq!( + matcher.visit_children_set(HgPath::new(b"dir/subdir")), + VisitChildrenSet::Empty + ); + assert_eq!( + matcher.visit_children_set(HgPath::new(b"dir/subdir/x")), + VisitChildrenSet::Empty + ); + // Test multiple patterns let matcher = IncludeMatcher::new(vec![ IgnorePattern::new(PatternSyntax::RelPath, b"foo", Path::new("")), diff -r 41b9eb302d95 -r 9a4db474ef1a rust/hg-core/src/revlog/changelog.rs --- a/rust/hg-core/src/revlog/changelog.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-core/src/revlog/changelog.rs Thu Jun 22 11:36:37 2023 +0200 @@ -1,6 +1,6 @@ use crate::errors::HgError; -use crate::revlog::Revision; use crate::revlog::{Node, NodePrefix}; +use crate::revlog::{Revision, NULL_REVISION}; use crate::revlog::{Revlog, RevlogEntry, RevlogError}; use crate::utils::hg_path::HgPath; use crate::vfs::Vfs; @@ -9,7 +9,7 @@ use std::borrow::Cow; use std::fmt::{Debug, Formatter}; -/// A specialized `Revlog` to work with `changelog` data format. +/// A specialized `Revlog` to work with changelog data format. pub struct Changelog { /// The generic `revlog` format. pub(crate) revlog: Revlog, @@ -23,7 +23,7 @@ Ok(Self { revlog }) } - /// Return the `ChangelogEntry` for the given node ID. + /// Return the `ChangelogRevisionData` for the given node ID. pub fn data_for_node( &self, node: NodePrefix, @@ -32,30 +32,29 @@ self.data_for_rev(rev) } - /// Return the `RevlogEntry` of the given revision number. + /// Return the [`ChangelogEntry`] for the given revision number. pub fn entry_for_rev( &self, rev: Revision, - ) -> Result { - self.revlog.get_entry(rev) + ) -> Result { + let revlog_entry = self.revlog.get_entry(rev)?; + Ok(ChangelogEntry { revlog_entry }) } - /// Return the `ChangelogEntry` of the given revision number. + /// Return the [`ChangelogRevisionData`] for the given revision number. + /// + /// This is a useful shortcut in case the caller does not need the + /// generic revlog information (parents, hashes etc). 
Otherwise + /// consider taking a [`ChangelogEntry`] with + /// [entry_for_rev](`Self::entry_for_rev`) and doing everything from there. pub fn data_for_rev( &self, rev: Revision, ) -> Result { - let bytes = self.revlog.get_rev_data(rev)?; - if bytes.is_empty() { - Ok(ChangelogRevisionData::null()) - } else { - Ok(ChangelogRevisionData::new(bytes).map_err(|err| { - RevlogError::Other(HgError::CorruptedRepository(format!( - "Invalid changelog data for revision {}: {:?}", - rev, err - ))) - })?) + if rev == NULL_REVISION { + return Ok(ChangelogRevisionData::null()); } + self.entry_for_rev(rev)?.data() } pub fn node_from_rev(&self, rev: Revision) -> Option<&Node> { @@ -70,6 +69,59 @@ } } +/// A specialized `RevlogEntry` for `changelog` data format +/// +/// This is a `RevlogEntry` with the added semantics that the associated +/// data should meet the requirements for `changelog`, materialized by +/// the fact that `data()` constructs a `ChangelogRevisionData`. +/// In case that promise would be broken, the `data` method returns an error. +#[derive(Clone)] +pub struct ChangelogEntry<'changelog> { + /// Same data, as a generic `RevlogEntry`. + pub(crate) revlog_entry: RevlogEntry<'changelog>, +} + +impl<'changelog> ChangelogEntry<'changelog> { + pub fn data<'a>( + &'a self, + ) -> Result, RevlogError> { + let bytes = self.revlog_entry.data()?; + if bytes.is_empty() { + Ok(ChangelogRevisionData::null()) + } else { + Ok(ChangelogRevisionData::new(bytes).map_err(|err| { + RevlogError::Other(HgError::CorruptedRepository(format!( + "Invalid changelog data for revision {}: {:?}", + self.revlog_entry.revision(), + err + ))) + })?) + } + } + + /// Obtain a reference to the underlying `RevlogEntry`. + /// + /// This allows the caller to access the information that is common + /// to all revlog entries: revision number, node id, parent revisions etc. 
+ pub fn as_revlog_entry(&self) -> &RevlogEntry { + &self.revlog_entry + } + + pub fn p1_entry(&self) -> Result, RevlogError> { + Ok(self + .revlog_entry + .p1_entry()? + .map(|revlog_entry| Self { revlog_entry })) + } + + pub fn p2_entry(&self) -> Result, RevlogError> { + Ok(self + .revlog_entry + .p2_entry()? + .map(|revlog_entry| Self { revlog_entry })) + } +} + /// `Changelog` entry which knows how to interpret the `changelog` data bytes. #[derive(PartialEq)] pub struct ChangelogRevisionData<'changelog> { @@ -215,6 +267,8 @@ #[cfg(test)] mod tests { use super::*; + use crate::vfs::Vfs; + use crate::NULL_REVISION; use pretty_assertions::assert_eq; #[test] @@ -268,4 +322,20 @@ ); assert_eq!(data.description(), b"some\ncommit\nmessage"); } + + #[test] + fn test_data_from_rev_null() -> Result<(), RevlogError> { + // an empty revlog will be enough for this case + let temp = tempfile::tempdir().unwrap(); + let vfs = Vfs { base: temp.path() }; + std::fs::write(temp.path().join("foo.i"), b"").unwrap(); + let revlog = Revlog::open(&vfs, "foo.i", None, false).unwrap(); + + let changelog = Changelog { revlog }; + assert_eq!( + changelog.data_for_rev(NULL_REVISION)?, + ChangelogRevisionData::null() + ); + Ok(()) + } } diff -r 41b9eb302d95 -r 9a4db474ef1a rust/hg-core/src/revlog/mod.rs --- a/rust/hg-core/src/revlog/mod.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-core/src/revlog/mod.rs Thu Jun 22 11:36:37 2023 +0200 @@ -23,6 +23,7 @@ use flate2::read::ZlibDecoder; use sha1::{Digest, Sha1}; +use std::cell::RefCell; use zstd; use self::node::{NODE_BYTES_LENGTH, NULL_NODE}; @@ -400,10 +401,10 @@ /// The revlog entry's bytes and the necessary informations to extract /// the entry's data. 
#[derive(Clone)] -pub struct RevlogEntry<'a> { - revlog: &'a Revlog, +pub struct RevlogEntry<'revlog> { + revlog: &'revlog Revlog, rev: Revision, - bytes: &'a [u8], + bytes: &'revlog [u8], compressed_len: u32, uncompressed_len: i32, base_rev_or_base_of_delta_chain: Option, @@ -413,7 +414,22 @@ hash: Node, } -impl<'a> RevlogEntry<'a> { +thread_local! { + // seems fine to [unwrap] here: this can only fail due to memory allocation + // failing, and it's normal for that to cause panic. + static ZSTD_DECODER : RefCell> = + RefCell::new(zstd::bulk::Decompressor::new().ok().unwrap()); +} + +fn zstd_decompress_to_buffer( + bytes: &[u8], + buf: &mut Vec, +) -> Result { + ZSTD_DECODER + .with(|decoder| decoder.borrow_mut().decompress_to_buffer(bytes, buf)) +} + +impl<'revlog> RevlogEntry<'revlog> { pub fn revision(&self) -> Revision { self.rev } @@ -430,7 +446,9 @@ self.p1 != NULL_REVISION } - pub fn p1_entry(&self) -> Result, RevlogError> { + pub fn p1_entry( + &self, + ) -> Result>, RevlogError> { if self.p1 == NULL_REVISION { Ok(None) } else { @@ -438,7 +456,9 @@ } } - pub fn p2_entry(&self) -> Result, RevlogError> { + pub fn p2_entry( + &self, + ) -> Result>, RevlogError> { if self.p2 == NULL_REVISION { Ok(None) } else { @@ -473,7 +493,7 @@ } /// The data for this entry, after resolving deltas if any. - pub fn rawdata(&self) -> Result, HgError> { + pub fn rawdata(&self) -> Result, HgError> { let mut entry = self.clone(); let mut delta_chain = vec![]; @@ -503,8 +523,8 @@ fn check_data( &self, - data: Cow<'a, [u8]>, - ) -> Result, HgError> { + data: Cow<'revlog, [u8]>, + ) -> Result, HgError> { if self.revlog.check_hash( self.p1, self.p2, @@ -525,7 +545,7 @@ } } - pub fn data(&self) -> Result, HgError> { + pub fn data(&self) -> Result, HgError> { let data = self.rawdata()?; if self.is_censored() { return Err(HgError::CensoredNodeError); @@ -535,7 +555,7 @@ /// Extract the data contained in the entry. /// This may be a delta. (See `is_delta`.) 
- fn data_chunk(&self) -> Result, HgError> { + fn data_chunk(&self) -> Result, HgError> { if self.bytes.is_empty() { return Ok(Cow::Borrowed(&[])); } @@ -576,15 +596,28 @@ } fn uncompressed_zstd_data(&self) -> Result, HgError> { + let cap = self.uncompressed_len.max(0) as usize; if self.is_delta() { - let mut buf = Vec::with_capacity(self.compressed_len as usize); - zstd::stream::copy_decode(self.bytes, &mut buf) - .map_err(|e| corrupted(e.to_string()))?; + // [cap] is usually an over-estimate of the space needed because + // it's the length of delta-decoded data, but we're interested + // in the size of the delta. + // This means we have to [shrink_to_fit] to avoid holding on + // to a large chunk of memory, but it also means we must have a + // fallback branch, for the case when the delta is longer than + // the original data (surprisingly, this does happen in practice) + let mut buf = Vec::with_capacity(cap); + match zstd_decompress_to_buffer(self.bytes, &mut buf) { + Ok(_) => buf.shrink_to_fit(), + Err(_) => { + buf.clear(); + zstd::stream::copy_decode(self.bytes, &mut buf) + .map_err(|e| corrupted(e.to_string()))?; + } + }; Ok(buf) } else { - let cap = self.uncompressed_len.max(0) as usize; - let mut buf = vec![0; cap]; - let len = zstd::bulk::decompress_to_buffer(self.bytes, &mut buf) + let mut buf = Vec::with_capacity(cap); + let len = zstd_decompress_to_buffer(self.bytes, &mut buf) .map_err(|e| corrupted(e.to_string()))?; if len != self.uncompressed_len as usize { Err(corrupted("uncompressed length does not match")) diff -r 41b9eb302d95 -r 9a4db474ef1a rust/hg-core/src/revlog/nodemap.rs --- a/rust/hg-core/src/revlog/nodemap.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-core/src/revlog/nodemap.rs Thu Jun 22 11:36:37 2023 +0200 @@ -25,6 +25,9 @@ #[derive(Debug, PartialEq)] pub enum NodeMapError { + /// A `NodePrefix` matches several [`Revision`]s. + /// + /// This can be returned by methods meant for (at most) one match. 
MultipleResults, /// A `Revision` stored in the nodemap could not be found in the index RevisionNotInIndex(Revision), @@ -35,8 +38,8 @@ /// ## `RevlogIndex` and `NodeMap` /// /// One way to think about their relationship is that -/// the `NodeMap` is a prefix-oriented reverse index of the `Node` information -/// carried by a [`RevlogIndex`]. +/// the `NodeMap` is a prefix-oriented reverse index of the [`Node`] +/// information carried by a [`RevlogIndex`]. /// /// Many of the methods in this trait take a `RevlogIndex` argument /// which is used for validation of their results. This index must naturally /// @@ -45,14 +48,10 @@ /// Notably, the `NodeMap` must not store /// information about more `Revision` values than there are in the index. /// In these methods, if an encountered `Revision` is not in the index, a -/// [`RevisionNotInIndex`] error is returned. +/// [RevisionNotInIndex](NodeMapError) error is returned. /// /// In insert operations, the rule is thus that the `NodeMap` must always -/// be updated after the `RevlogIndex` -/// be updated first, and the `NodeMap` second. -/// -/// [`RevisionNotInIndex`]: enum.NodeMapError.html#variant.RevisionNotInIndex -/// [`RevlogIndex`]: ../trait.RevlogIndex.html +/// be updated after the `RevlogIndex` it is about. pub trait NodeMap { /// Find the unique `Revision` having the given `Node` /// @@ -69,8 +68,8 @@ /// /// If no Revision matches the given prefix, `Ok(None)` is returned. /// - /// If several Revisions match the given prefix, a [`MultipleResults`] - /// error is returned. + /// If several Revisions match the given prefix, a + /// [MultipleResults](NodeMapError) error is returned. fn find_bin( &self, idx: &impl RevlogIndex, @@ -84,17 +83,18 @@ /// returns the number of hexadecimal digits that would have sufficed /// to find the revision uniquely. /// - /// Returns `None` if no `Revision` could be found for the prefix. + /// Returns `None` if no [`Revision`] could be found for the prefix.
/// - /// If several Revisions match the given prefix, a [`MultipleResults`] - /// error is returned. + /// If several Revisions match the given prefix, a + /// [MultipleResults](NodeMapError) error is returned. fn unique_prefix_len_bin( &self, idx: &impl RevlogIndex, node_prefix: NodePrefix, ) -> Result<Option<usize>, NodeMapError>; - /// Same as `unique_prefix_len_bin`, with a full `Node` as input + /// Same as [unique_prefix_len_bin](Self::unique_prefix_len_bin), with + /// a full [`Node`] as input fn unique_prefix_len_node( &self, idx: &impl RevlogIndex, @@ -113,7 +113,7 @@ ) -> Result<(), NodeMapError>; } -/// Low level NodeTree [`Blocks`] elements +/// Low level NodeTree [`Block`] elements /// /// These are exactly as for instance on persistent storage. type RawElement = unaligned::I32Be; @@ -156,7 +156,9 @@ } } -/// A logical block of the `NodeTree`, packed with a fixed size. +const ELEMENTS_PER_BLOCK: usize = 16; // number of different values in a nybble + +/// A logical block of the [`NodeTree`], packed with a fixed size. /// /// These are always used in container types implementing `Index`, /// such as `&Block` @@ -167,21 +169,18 @@ /// /// - absent (value -1) /// - another `Block` in the same indexable container (value ≥ 0) -/// - a `Revision` leaf (value ≤ -2) +/// - a [`Revision`] leaf (value ≤ -2) /// /// Endianness has to be fixed for consistency on shared storage across /// different architectures. /// /// A key difference with the C `nodetree` is that we need to be /// able to represent the [`Block`] at index 0, hence -1 is the empty marker -/// rather than 0 and the `Revision` range upper limit of -2 instead of -1. +/// rather than 0 and the [`Revision`] range upper limit of -2 instead of -1. /// /// Another related difference is that `NULL_REVISION` (-1) is not /// represented at all, because we want an immutable empty nodetree /// to be valid.
- -const ELEMENTS_PER_BLOCK: usize = 16; // number of different values in a nybble - #[derive(Copy, Clone, BytesCast, PartialEq)] #[repr(transparent)] pub struct Block([RawElement; ELEMENTS_PER_BLOCK]); @@ -218,7 +217,7 @@ /// Because of the append only nature of our node trees, we need to /// keep the original untouched and store new blocks separately. /// -/// The mutable root `Block` is kept apart so that we don't have to rebump +/// The mutable root [`Block`] is kept apart so that we don't have to rebump /// it on each insertion. pub struct NodeTree { readonly: Box<dyn Deref<Target = [Block]> + Send>, @@ -242,7 +241,7 @@ } } -/// Return `None` unless the `Node` for `rev` has given prefix in `index`. +/// Return `None` unless the [`Node`] for `rev` has given prefix in `idx`. fn has_prefix_or_none( idx: &impl RevlogIndex, prefix: NodePrefix, @@ -260,7 +259,7 @@ } /// validate that the candidate's node starts indeed with given prefix, -/// and treat ambiguities related to `NULL_REVISION`. +/// and treat ambiguities related to [`NULL_REVISION`]. /// /// From the data in the NodeTree, one can only conclude that some /// revision is the only one for a *subprefix* of the one being looked up. @@ -304,12 +303,10 @@ /// Create from an opaque bunch of bytes /// - /// The created `NodeTreeBytes` from `buffer`, + /// The created [`NodeTreeBytes`] from `bytes`, /// of which exactly `amount` bytes are used. /// /// - `buffer` could be derived from `PyBuffer` and `Mmap` objects. - /// - `offset` allows for the final file format to include fixed data - /// (generation number, behavioural flags) /// - `amount` is expressed in bytes, and is not automatically derived from /// `bytes`, so that a caller that manages them atomically can perform /// temporary disk serializations and still rollback easily if needed.
@@ -323,7 +320,7 @@ NodeTree::new(Box::new(NodeTreeBytes::new(bytes, amount))) } - /// Retrieve added `Block` and the original immutable data + /// Retrieve added [`Block`]s and the original immutable data pub fn into_readonly_and_added( self, ) -> (Box<dyn Deref<Target = [Block]> + Send>, Vec<Block>) { @@ -335,7 +332,7 @@ (readonly, vec) } - /// Retrieve added `Blocks` as bytes, ready to be written to persistent + /// Retrieve added [`Block`]s as bytes, ready to be written to persistent /// storage pub fn into_readonly_and_added_bytes( self, ) -> @@ -381,16 +378,17 @@ /// /// The first returned value is the result of analysing `NodeTree` data /// *alone*: whereas `None` guarantees that the given prefix is absent - /// from the `NodeTree` data (but still could match `NULL_NODE`), with - /// `Some(rev)`, it is to be understood that `rev` is the unique `Revision` - /// that could match the prefix. Actually, all that can be inferred from + /// from the [`NodeTree`] data (but still could match [`NULL_NODE`]), with + /// `Some(rev)`, it is to be understood that `rev` is the unique + /// [`Revision`] that could match the prefix. Actually, all that can + /// be inferred from /// the `NodeTree` data is that `rev` is the revision with the longest /// common node prefix with the given prefix. /// /// The second returned value is the size of the smallest subprefix /// of `prefix` that would give the same result, i.e. not the - /// `MultipleResults` error variant (again, using only the data of the - /// `NodeTree`). + /// [MultipleResults](NodeMapError) error variant (again, using only the + /// data of the [`NodeTree`]). fn lookup( &self, prefix: NodePrefix, diff -r 41b9eb302d95 -r 9a4db474ef1a rust/hg-core/src/utils.rs --- a/rust/hg-core/src/utils.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-core/src/utils.rs Thu Jun 22 11:36:37 2023 +0200 @@ -301,7 +301,7 @@ /// calling `merge(key, left_value, right_value)` to resolve keys that exist in /// both.
/// -/// CC https://github.com/bodil/im-rs/issues/166 +/// CC <https://github.com/bodil/im-rs/issues/166> pub(crate) fn ordmap_union_with_merge<K, V>( left: OrdMap<K, V>, right: OrdMap<K, V>, diff -r 41b9eb302d95 -r 9a4db474ef1a rust/hg-core/src/utils/hg_path.rs --- a/rust/hg-core/src/utils/hg_path.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-core/src/utils/hg_path.rs Thu Jun 22 11:36:37 2023 +0200 @@ -479,10 +479,11 @@ } } -/// TODO: Once https://www.mercurial-scm.org/wiki/WindowsUTF8Plan is +/// Create a new [`OsString`] from types referenceable as [`HgPath`]. +/// +/// TODO: Once <https://www.mercurial-scm.org/wiki/WindowsUTF8Plan> is /// implemented, these conversion utils will have to work differently depending /// on the repository encoding: either `UTF-8` or `MBCS`. - pub fn hg_path_to_os_string<P: AsRef<HgPath>>( hg_path: P, ) -> Result<OsString, HgPathError> { @@ -498,12 +499,14 @@ Ok(os_str.to_os_string()) } +/// Create a new [`PathBuf`] from types referenceable as [`HgPath`]. pub fn hg_path_to_path_buf<P: AsRef<HgPath>>( hg_path: P, ) -> Result<PathBuf, HgPathError> { Ok(Path::new(&hg_path_to_os_string(hg_path)?).to_path_buf()) } +/// Create a new [`HgPathBuf`] from types referenceable as [`OsStr`]. pub fn os_string_to_hg_path_buf<S: AsRef<OsStr>>( os_string: S, ) -> Result<HgPathBuf, HgPathError> { @@ -520,6 +523,7 @@ Ok(buf) } +/// Create a new [`HgPathBuf`] from types referenceable as [`Path`].
pub fn path_to_hg_path_buf<P: AsRef<Path>>( path: P, ) -> Result<HgPathBuf, HgPathError> { diff -r 41b9eb302d95 -r 9a4db474ef1a rust/hg-cpython/src/dirstate/dirs_multiset.rs --- a/rust/hg-cpython/src/dirstate/dirs_multiset.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/hg-cpython/src/dirstate/dirs_multiset.rs Thu Jun 22 11:36:37 2023 +0200 @@ -17,7 +17,7 @@ use hg::{ utils::hg_path::{HgPath, HgPathBuf}, - DirsMultiset, DirsMultisetIter, DirstateMapError, + DirsMultiset, DirsMultisetIter, }; py_class!(pub class Dirs |py| { @@ -54,19 +54,11 @@ def addpath(&self, path: PyObject) -> PyResult<PyObject> { self.inner(py).borrow_mut().add_path( HgPath::new(path.extract::<PyBytes>(py)?.data(py)), - ).and(Ok(py.None())).or_else(|e| { - match e { - DirstateMapError::EmptyPath => { - Ok(py.None()) - }, - e => { - Err(PyErr::new::<exc::ValueError, _>( + ).and(Ok(py.None())).map_err(|e| PyErr::new::<exc::ValueError, _>( py, e.to_string(), - )) - } - } - }) + ) + ) } def delpath(&self, path: PyObject) -> PyResult<PyObject> { @@ -74,19 +66,12 @@ HgPath::new(path.extract::<PyBytes>(py)?.data(py)), ) .and(Ok(py.None())) - .or_else(|e| { - match e { - DirstateMapError::EmptyPath => { - Ok(py.None()) - }, - e => { - Err(PyErr::new::<exc::ValueError, _>( + .map_err(|e| + PyErr::new::<exc::ValueError, _>( py, e.to_string(), - )) - } - } - }) + ) + ) } def __iter__(&self) -> PyResult<DirsMultisetKeysIterator> { let leaked_ref = self.inner(py).leak_immutable(); diff -r 41b9eb302d95 -r 9a4db474ef1a rust/rhg/Cargo.toml --- a/rust/rhg/Cargo.toml Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/rhg/Cargo.toml Thu Jun 22 11:36:37 2023 +0200 @@ -20,6 +20,6 @@ regex = "1.7.0" env_logger = "0.9.3" format-bytes = "0.3.0" -users = "0.11.0" +whoami = "1.4" which = "4.3.0" rayon = "1.7.0" diff -r 41b9eb302d95 -r 9a4db474ef1a rust/rhg/src/blackbox.rs --- a/rust/rhg/src/blackbox.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/rhg/src/blackbox.rs Thu Jun 22 11:36:37 2023 +0200 @@ -120,8 +120,7 @@ impl ConfiguredBlackbox<'_> { fn log(&self, date_time: &DateTime<chrono::Local>, message: &[u8]) { let date = format_bytes::Utf8(date_time.format(self.date_format)); - let user =
users::get_current_username().map(get_bytes_from_os_str); - let user = user.as_deref().unwrap_or(b"???"); + let user = get_bytes_from_os_str(whoami::username_os()); let rev = format_bytes::Utf8(match self.repo.dirstate_parents() { Ok(parents) if parents.p2 == hg::revlog::node::NULL_NODE => { format!("{:x}", parents.p1) diff -r 41b9eb302d95 -r 9a4db474ef1a rust/rhg/src/commands/files.rs --- a/rust/rhg/src/commands/files.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/rhg/src/commands/files.rs Thu Jun 22 11:36:37 2023 +0200 @@ -1,5 +1,7 @@ use crate::error::CommandError; -use crate::ui::{print_narrow_sparse_warnings, Ui}; +use crate::ui::{ + print_narrow_sparse_warnings, relative_paths, RelativePaths, Ui, +}; use crate::utils::path_utils::RelativizePaths; use clap::Arg; use hg::narrow; @@ -28,12 +30,10 @@ } pub fn run(invocation: &crate::CliInvocation) -> Result<(), CommandError> { - let relative = invocation.config.get(b"ui", b"relative-paths"); - if relative.is_some() { - return Err(CommandError::unsupported( - "non-default ui.relative-paths", - )); - } + let relative_paths = match relative_paths(invocation.config)? { + RelativePaths::Legacy => true, + RelativePaths::Bool(v) => v, + }; let rev = invocation.subcommand_args.get_one::<String>("rev"); @@ -57,7 +57,7 @@ if let Some(rev) = rev { let files = list_rev_tracked_files(repo, rev, narrow_matcher) .map_err(|e| (e, rev.as_ref()))?; - display_files(invocation.ui, repo, files.iter()) + display_files(invocation.ui, repo, relative_paths, files.iter()) } else { // The dirstate always reflects the sparse narrowspec.
let dirstate = repo.dirstate_map()?; @@ -77,6 +77,7 @@ display_files( invocation.ui, repo, + relative_paths, files.into_iter().map::<Result<_, CommandError>, _>(Ok), ) } @@ -85,6 +86,7 @@ fn display_files<'a, E>( ui: &Ui, repo: &Repo, + relative_paths: bool, files: impl IntoIterator<Item = Result<&'a HgPath, E>>, ) -> Result<(), CommandError> where @@ -96,7 +98,11 @@ let relativize = RelativizePaths::new(repo)?; for result in files { let path = result?; - stdout.write_all(&relativize.relativize(path))?; + if relative_paths { + stdout.write_all(&relativize.relativize(path))?; + } else { + stdout.write_all(path.as_bytes())?; + } stdout.write_all(b"\n")?; any = true; } diff -r 41b9eb302d95 -r 9a4db474ef1a rust/rhg/src/commands/status.rs --- a/rust/rhg/src/commands/status.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/rhg/src/commands/status.rs Thu Jun 22 11:36:37 2023 +0200 @@ -111,6 +111,13 @@ .long("copies"), ) .arg( + Arg::new("print0") + .help("end filenames with NUL, for use with xargs") + .short('0') + .action(clap::ArgAction::SetTrue) + .long("print0"), + ) + .arg( Arg::new("no-status") .help("hide status prefix") .short('n') @@ -213,10 +220,11 @@ let config = invocation.config; let args = invocation.subcommand_args; - // TODO add `!args.get_flag("print0") &&` when we support `print0` + let print0 = args.get_flag("print0"); let verbose = args.get_flag("verbose") || config.get_bool(b"ui", b"verbose")? || config.get_bool(b"commands", b"status.verbose")?; + let verbose = verbose && !print0; let all = args.get_flag("all"); let display_states = if all { @@ -363,6 +371,7 @@ } else { None }, + print0, }; if display_states.modified { output.display(b"M ", "status.modified", ds_status.modified)?; @@ -527,6 +536,7 @@ ui: &'a Ui, no_status: bool, relativize: Option<RelativizePaths>, + print0: bool, } impl DisplayStatusPaths<'_> { @@ -560,12 +570,15 @@ if !self.no_status { self.ui.write_stdout_labelled(status_prefix, label)?
} - self.ui - .write_stdout_labelled(&format_bytes!(b"{}\n", path), label)?; + let linebreak = if self.print0 { b"\x00" } else { b"\n" }; + self.ui.write_stdout_labelled( + &format_bytes!(b"{}{}", path, linebreak), + label, + )?; if let Some(source) = copy_source.filter(|_| !self.no_status) { let label = "status.copied"; self.ui.write_stdout_labelled( - &format_bytes!(b" {}\n", source), + &format_bytes!(b" {}{}", source, linebreak), label, )? } diff -r 41b9eb302d95 -r 9a4db474ef1a rust/rhg/src/ui.rs --- a/rust/rhg/src/ui.rs Thu Jun 22 11:18:47 2023 +0200 +++ b/rust/rhg/src/ui.rs Thu Jun 22 11:36:37 2023 +0200 @@ -221,6 +221,18 @@ } } +pub enum RelativePaths { + Legacy, + Bool(bool), +} + +pub fn relative_paths(config: &Config) -> Result<RelativePaths, HgError> { + Ok(match config.get(b"ui", b"relative-paths") { + None | Some(b"legacy") => RelativePaths::Legacy, + _ => RelativePaths::Bool(config.get_bool(b"ui", b"relative-paths")?), + }) +} + fn isatty(config: &Config) -> Result<bool, HgError> { Ok(if config.get_bool(b"ui", b"nontty")? { false diff -r 41b9eb302d95 -r 9a4db474ef1a setup.py --- a/setup.py Thu Jun 22 11:18:47 2023 +0200 +++ b/setup.py Thu Jun 22 11:36:37 2023 +0200 @@ -1299,6 +1299,7 @@ 'mercurial.hgweb', 'mercurial.interfaces', 'mercurial.pure', + 'mercurial.stabletailgraph', 'mercurial.templates', 'mercurial.thirdparty', 'mercurial.thirdparty.attr', diff -r 41b9eb302d95 -r 9a4db474ef1a tests/blacklists/nix --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tests/blacklists/nix Thu Jun 22 11:36:37 2023 +0200 @@ -0,0 +1,8 @@ +# Tests to be disabled when building and testing in the Nix sandbox.
+ +# tests enforcing "/usr/bin/env" shebangs, which are patched for nix +test-run-tests.t +test-check-shbang.t + +# doesn't like the extra setlocale warnings emitted by the nix bash wrappers +test-locale.t diff -r 41b9eb302d95 -r 9a4db474ef1a tests/common-pattern.py --- a/tests/common-pattern.py Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/common-pattern.py Thu Jun 22 11:36:37 2023 +0200 @@ -10,7 +10,7 @@ ( br'bundlecaps=HG20%2Cbundle2%3DHG20%250A' br'bookmarks%250A' - br'changegroup%253D01%252C02%250A' + br'changegroup%253D01%252C02%252C03%250A' br'checkheads%253Drelated%250A' br'digests%253Dmd5%252Csha1%252Csha512%250A' br'error%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250A' @@ -26,7 +26,7 @@ ( br'bundlecaps=HG20%2Cbundle2%3DHG20%250A' br'bookmarks%250A' - br'changegroup%253D01%252C02%250A' + br'changegroup%253D01%252C02%252C03%250A' br'checkheads%3Drelated%0A' br'digests%253Dmd5%252Csha1%252Csha512%250A' br'error%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250A' @@ -42,7 +42,7 @@ ( br'bundle2=HG20%0A' br'bookmarks%0A' - br'changegroup%3D01%2C02%0A' + br'changegroup%3D01%2C02%2C03%0A' br'checkheads%3Drelated%0A' br'digests%3Dmd5%2Csha1%2Csha512%0A' br'error%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0A' @@ -59,7 +59,7 @@ ( br'bundle2=HG20%0A' br'bookmarks%0A' - br'changegroup%3D01%2C02%0A' + br'changegroup%3D01%2C02%2C03%0A' br'checkheads%3Drelated%0A' br'digests%3Dmd5%2Csha1%2Csha512%0A' br'error%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0A' @@ -74,7 +74,7 @@ ( br'bundle2=HG20%0A' br'bookmarks%0A' - br'changegroup%3D01%2C02%0A' + br'changegroup%3D01%2C02%2C03%0A' br'digests%3Dmd5%2Csha1%2Csha512%0A' br'error%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0A' br'hgtagsfnodes%0A' @@ -122,6 +122,11 @@ % (m.group(1), m.group(2)) ), ), + # `discovery debug output + ( + br'\b(\d+) total queries in \d.\d\d\d\ds\b', + lambda m: (br'%s total queries in *.????s (glob)' % m.group(1)), + ), ] # Various platform error 
strings, keyed on a common replacement string diff -r 41b9eb302d95 -r 9a4db474ef1a tests/filterpyflakes.py --- a/tests/filterpyflakes.py Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/filterpyflakes.py Thu Jun 22 11:36:37 2023 +0200 @@ -24,10 +24,9 @@ break # pattern matches if keep: fn = line.split(':', 1)[0] - f = open(fn) - data = f.read() - f.close() - if 'no-' 'check-code' in data: + with open(fn, 'rb') as f: + data = f.read() + if b'no-' b'check-code' in data: continue lines.append(line) diff -r 41b9eb302d95 -r 9a4db474ef1a tests/library-infinitepush.sh --- a/tests/library-infinitepush.sh Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/library-infinitepush.sh Thu Jun 22 11:36:37 2023 +0200 @@ -16,6 +16,9 @@ infinitepush= [infinitepush] branchpattern=re:scratch/.* +deprecation-abort=no +deprecation-message=yes + EOF } diff -r 41b9eb302d95 -r 9a4db474ef1a tests/notcapable --- a/tests/notcapable Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/notcapable Thu Jun 22 11:36:37 2023 +0200 @@ -15,10 +15,10 @@ if name in b'$CAP'.split(b' '): return False return orig(self, name, *args, **kwargs) -def wrappeer(orig, self, path=None): +def wrappeer(orig, self, *args, **kwargs): # Since we're disabling some newer features, we need to make sure local # repos add in the legacy features again. - return localrepo.locallegacypeer(self, path=path) + return localrepo.locallegacypeer(self, *args, **kwargs) EOF echo '[extensions]' >> $HGRCPATH diff -r 41b9eb302d95 -r 9a4db474ef1a tests/simplestorerepo.py --- a/tests/simplestorerepo.py Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/simplestorerepo.py Thu Jun 22 11:36:37 2023 +0200 @@ -664,8 +664,8 @@ class simplestore(store.encodedstore): - def datafiles(self, undecodable=None): - for x in super(simplestore, self).datafiles(): + def data_entries(self, undecodable=None): + for x in super(simplestore, self).data_entries(): yield x # Supplement with non-revlog files. 
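The `filterpyflakes.py` hunk above switches the scanned file to binary mode and keeps the opt-out marker written as two adjacent literals, so the filter's own source never contains the exact string it searches for. A minimal sketch of that pattern (the helper name `opts_out` is hypothetical, not part of the patch):

```python
# Hypothetical helper illustrating the trick used in filterpyflakes.py:
# writing the marker as two adjacent byte-string literals means this
# file itself never contains the literal marker it scans for.
MARKER = b'no-' b'check-code'  # adjacent literals concatenate at compile time


def opts_out(data: bytes) -> bool:
    """Return True if the scanned file contains the opt-out marker."""
    return MARKER in data
```

The same trick is why the original code compared against `'no-' 'check-code'`; the patch only changes the comparison to bytes to match the binary read.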
diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-acl.t --- a/tests/test-acl.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-acl.t Thu Jun 22 11:36:37 2023 +0200 @@ -109,7 +109,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -175,7 +175,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -245,7 +245,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -325,7 +325,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -396,7 +396,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 
911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -471,7 +471,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -543,7 +543,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -620,7 +620,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -694,7 +694,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes 
payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -767,7 +767,7 @@ list of changesets: ef1ea85a6374b77d6da9dcda9541f498f2d17df7 bundle2-output-bundle: "HG20", 7 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:bookmarks" 37 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload @@ -856,7 +856,7 @@ list of changesets: ef1ea85a6374b77d6da9dcda9541f498f2d17df7 bundle2-output-bundle: "HG20", 7 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:bookmarks" 37 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload @@ -947,7 +947,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -1033,7 +1033,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -1117,7 +1117,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: 
"replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -1195,7 +1195,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -1284,7 +1284,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -1374,7 +1374,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -1461,7 +1461,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" 
(params: 1 mandatory) streamed payload @@ -1544,7 +1544,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -1631,7 +1631,7 @@ f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd 911600dab2ae7a9baff75958b84fe606851ce955 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -1754,7 +1754,7 @@ 911600dab2ae7a9baff75958b84fe606851ce955 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 48 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -1841,7 +1841,7 @@ 911600dab2ae7a9baff75958b84fe606851ce955 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 48 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -1919,7 +1919,7 @@ 911600dab2ae7a9baff75958b84fe606851ce955 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 
bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 48 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -1993,7 +1993,7 @@ 911600dab2ae7a9baff75958b84fe606851ce955 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 48 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -2061,7 +2061,7 @@ 911600dab2ae7a9baff75958b84fe606851ce955 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 48 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -2153,7 +2153,7 @@ 911600dab2ae7a9baff75958b84fe606851ce955 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 48 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -2244,7 +2244,7 @@ 911600dab2ae7a9baff75958b84fe606851ce955 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 48 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 
mandatory) streamed payload @@ -2317,7 +2317,7 @@ 911600dab2ae7a9baff75958b84fe606851ce955 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 48 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload @@ -2402,7 +2402,7 @@ 911600dab2ae7a9baff75958b84fe606851ce955 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 48 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-bad-extension.t --- a/tests/test-bad-extension.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-bad-extension.t Thu Jun 22 11:36:37 2023 +0200 @@ -63,14 +63,11 @@ Exception: bit bucket overflow *** failed to import extension "badext2": No module named 'badext2' Traceback (most recent call last): - ImportError: No module named 'hgext.badext2' (no-py36 !) - ModuleNotFoundError: No module named 'hgext.badext2' (py36 !) + ModuleNotFoundError: No module named 'hgext.badext2' Traceback (most recent call last): - ImportError: No module named 'hgext3rd.badext2' (no-py36 !) - ModuleNotFoundError: No module named 'hgext3rd.badext2' (py36 !) + ModuleNotFoundError: No module named 'hgext3rd.badext2' Traceback (most recent call last): - ImportError: No module named 'badext2' (no-py36 !) - ModuleNotFoundError: No module named 'badext2' (py36 !) 
+ ModuleNotFoundError: No module named 'badext2' names of extensions failed to load can be accessed via extensions.notloaded() @@ -111,25 +108,19 @@ YYYY/MM/DD HH:MM:SS (PID)> - loading extension: badext2 YYYY/MM/DD HH:MM:SS (PID)> - could not import hgext.badext2 (No module named *badext2*): trying hgext3rd.badext2 (glob) Traceback (most recent call last): - ImportError: No module named 'hgext.badext2' (no-py36 !) - ModuleNotFoundError: No module named 'hgext.badext2' (py36 !) + ModuleNotFoundError: No module named 'hgext.badext2' YYYY/MM/DD HH:MM:SS (PID)> - could not import hgext3rd.badext2 (No module named *badext2*): trying badext2 (glob) Traceback (most recent call last): - ImportError: No module named 'hgext.badext2' (no-py36 !) - ModuleNotFoundError: No module named 'hgext.badext2' (py36 !) + ModuleNotFoundError: No module named 'hgext.badext2' Traceback (most recent call last): - ImportError: No module named 'hgext3rd.badext2' (no-py36 !) - ModuleNotFoundError: No module named 'hgext3rd.badext2' (py36 !) + ModuleNotFoundError: No module named 'hgext3rd.badext2' *** failed to import extension "badext2": No module named 'badext2' Traceback (most recent call last): - ImportError: No module named 'hgext.badext2' (no-py36 !) - ModuleNotFoundError: No module named 'hgext.badext2' (py36 !) + ModuleNotFoundError: No module named 'hgext.badext2' Traceback (most recent call last): - ImportError: No module named 'hgext3rd.badext2' (no-py36 !) - ModuleNotFoundError: No module named 'hgext3rd.badext2' (py36 !) + ModuleNotFoundError: No module named 'hgext3rd.badext2' Traceback (most recent call last): - ModuleNotFoundError: No module named 'badext2' (py36 !) - ImportError: No module named 'badext2' (no-py36 !) 
+ ModuleNotFoundError: No module named 'badext2' YYYY/MM/DD HH:MM:SS (PID)> > loaded 2 extensions, total time * (glob) YYYY/MM/DD HH:MM:SS (PID)> - loading configtable attributes YYYY/MM/DD HH:MM:SS (PID)> - executing uisetup hooks diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-bookmarks-pushpull.t --- a/tests/test-bookmarks-pushpull.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-bookmarks-pushpull.t Thu Jun 22 11:36:37 2023 +0200 @@ -129,10 +129,10 @@ bundle2-output: bundle parameter: bundle2-output: start of parts bundle2-output: bundle part: "replycaps" - bundle2-output-part: "replycaps" 224 bytes payload + bundle2-output-part: "replycaps" 227 bytes payload bundle2-output: part 0: "REPLYCAPS" bundle2-output: header chunk size: 16 - bundle2-output: payload chunk size: 224 + bundle2-output: payload chunk size: 227 bundle2-output: closing payload chunk bundle2-output: bundle part: "check:bookmarks" bundle2-output-part: "check:bookmarks" 23 bytes payload @@ -162,9 +162,9 @@ bundle2-input: part parameters: 0 bundle2-input: found a handler for part replycaps bundle2-input-part: "replycaps" supported - bundle2-input: payload chunk size: 224 + bundle2-input: payload chunk size: 227 bundle2-input: payload chunk size: 0 - bundle2-input-part: total payload size 224 + bundle2-input-part: total payload size 227 bundle2-input: part header size: 22 bundle2-input: part type: "CHECK:BOOKMARKS" bundle2-input: part id: "1" @@ -241,10 +241,10 @@ bundle2-output: bundle parameter: bundle2-output: start of parts bundle2-output: bundle part: "replycaps" - bundle2-output-part: "replycaps" 224 bytes payload + bundle2-output-part: "replycaps" 227 bytes payload bundle2-output: part 0: "REPLYCAPS" bundle2-output: header chunk size: 16 - bundle2-output: payload chunk size: 224 + bundle2-output: payload chunk size: 227 bundle2-output: closing payload chunk bundle2-output: bundle part: "check:bookmarks" bundle2-output-part: "check:bookmarks" 23 bytes payload @@ -275,9 +275,9 @@ 
bundle2-input: part parameters: 0 bundle2-input: found a handler for part replycaps bundle2-input-part: "replycaps" supported - bundle2-input: payload chunk size: 224 + bundle2-input: payload chunk size: 227 bundle2-input: payload chunk size: 0 - bundle2-input-part: total payload size 224 + bundle2-input-part: total payload size 227 bundle2-input: part header size: 22 bundle2-input: part type: "CHECK:BOOKMARKS" bundle2-input: part id: "1" diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-bundle-phase-internal.t --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tests/test-bundle-phase-internal.t Thu Jun 22 11:36:37 2023 +0200 @@ -0,0 +1,286 @@ +===================================================== +test behavior of the `internal` phase around bundling +===================================================== + +Long story short, internal changeset are internal implementation details and +they should never leave the repository. Hence, they should never be in a +bundle. + +Setup +===== + + $ cat << EOF >> $HGRCPATH + > [ui] + > logtemplate="{node|short} [{phase}] {desc|firstline}" + > EOF + + + $ hg init reference-repo --config format.use-internal-phase=yes + $ cd reference-repo + $ echo a > a + $ hg add a + $ hg commit -m "a" + $ echo b > b + $ hg add b + $ hg commit -m "b" + $ echo b > c + $ hg add c + $ hg commit -m "c" + $ hg log -G + @ 07f0cc02c068 [draft] c + | + o d2ae7f538514 [draft] b + | + o cb9a9f314b8b [draft] a + + $ hg up ".^" + 0 files updated, 0 files merged, 1 files removed, 0 files unresolved + +do a shelve + + $ touch a_file.txt + $ hg shelve -A + adding a_file.txt + shelved as default + 0 files updated, 0 files merged, 1 files removed, 0 files unresolved + $ hg log -G --hidden + o 2ec3cf310d86 [internal] changes to: b + | + | o 07f0cc02c068 [draft] c + |/ + @ d2ae7f538514 [draft] b + | + o cb9a9f314b8b [draft] a + + $ shelved_node=`hg log --rev tip --hidden -T '{node|short}'` + +add more changeset above it + + $ hg up 'desc(a)' + 0 files updated, 0 
files merged, 1 files removed, 0 files unresolved + $ echo d > d + $ hg add d + $ hg commit -m "d" + created new head + $ echo d > e + $ hg add e + $ hg commit -m "e" + $ hg up null + 0 files updated, 0 files merged, 3 files removed, 0 files unresolved + $ hg log -G + o 636bc07920e3 [draft] e + | + o 980f7dc84c29 [draft] d + | + | o 07f0cc02c068 [draft] c + | | + | o d2ae7f538514 [draft] b + |/ + o cb9a9f314b8b [draft] a + + $ hg log -G --hidden + o 636bc07920e3 [draft] e + | + o 980f7dc84c29 [draft] d + | + | o 2ec3cf310d86 [internal] changes to: b + | | + | | o 07f0cc02c068 [draft] c + | |/ + | o d2ae7f538514 [draft] b + |/ + o cb9a9f314b8b [draft] a + + $ cd .. + +backup bundle from strip +======================== + +strip an ancestors of the internal changeset +-------------------------------------------- + + $ cp -ar reference-repo strip-ancestor + $ cd strip-ancestor + +The internal change is stripped, yet it should be skipped from the backup bundle. + + $ hg log -G + o 636bc07920e3 [draft] e + | + o 980f7dc84c29 [draft] d + | + | o 07f0cc02c068 [draft] c + | | + | o d2ae7f538514 [draft] b + |/ + o cb9a9f314b8b [draft] a + + $ hg debugstrip 'desc(b)' + saved backup bundle to $TESTTMP/strip-ancestor/.hg/strip-backup/d2ae7f538514-59bd8bc3-backup.hg + +The change should be either gone or hidden + + $ hg log -G + o 636bc07920e3 [draft] e + | + o 980f7dc84c29 [draft] d + | + o cb9a9f314b8b [draft] a + + +The backup should not include it (as people tend to manipulate these directly) + + $ ls -1 .hg/strip-backup/ + d2ae7f538514-59bd8bc3-backup.hg + $ hg debugbundle .hg/strip-backup/*.hg + Stream params: {Compression: BZ} + changegroup -- {nbchanges: 2, version: 03} (mandatory: True) + d2ae7f538514cd87c17547b0de4cea71fe1af9fb + 07f0cc02c06869c81ebf33867edef30554020c0d + cache:rev-branch-cache -- {} (mandatory: False) + phase-heads -- {} (mandatory: True) + 07f0cc02c06869c81ebf33867edef30554020c0d draft + +Shelve should still work + + $ hg unshelve + unshelving change 
'default' + rebasing shelved changes + $ hg status + A a_file.txt + + $ cd .. + +strip an unrelated changeset with a lower revnum +------------------------------------------------ + + $ cp -ar reference-repo strip-unrelated + $ cd strip-unrelated + +The internal change is not directly stripped, but it is affected by the strip +and it is in the "temporary backup" zone. The zone that needs to be put in a +temporary bundle while we affect data under it. + + $ hg debugstrip 'desc(c)' + saved backup bundle to $TESTTMP/strip-unrelated/.hg/strip-backup/07f0cc02c068-8fd0515f-backup.hg + +The change should be either gone or hidden + + $ hg log -G + o 636bc07920e3 [draft] e + | + o 980f7dc84c29 [draft] d + | + | o d2ae7f538514 [draft] b + |/ + o cb9a9f314b8b [draft] a + +The backup should not include it (as people tend to manipulate these directly) + + $ ls -1 .hg/strip-backup/ + 07f0cc02c068-8fd0515f-backup.hg + $ hg debugbundle .hg/strip-backup/*.hg + Stream params: {Compression: BZ} + changegroup -- {nbchanges: 1, version: 03} (mandatory: True) + 07f0cc02c06869c81ebf33867edef30554020c0d + cache:rev-branch-cache -- {} (mandatory: False) + phase-heads -- {} (mandatory: True) + 07f0cc02c06869c81ebf33867edef30554020c0d draft + +Shelve should still work + + $ hg unshelve + unshelving change 'default' + rebasing shelved changes + $ hg status + A a_file.txt + + $ cd .. + +explicitly strip the internal changeset +--------------------------------------- + + $ cp -ar reference-repo strip-explicit + $ cd strip-explicit + +The internal change is directly selected for stripping. 
+ + $ hg debugstrip --hidden $shelved_node + +The change should be gone + + $ hg log -G --hidden + o 636bc07920e3 [draft] e + | + o 980f7dc84c29 [draft] d + | + | o 07f0cc02c068 [draft] c + | | + | o d2ae7f538514 [draft] b + |/ + o cb9a9f314b8b [draft] a + + +We don't need to backup anything + + $ ls -1 .hg/strip-backup/ + +Shelve should still work + + $ hg unshelve + unshelving change 'default' + rebasing shelved changes + $ hg status + A a_file.txt + + $ cd .. + +Explicitly bundling the internal change +======================================= + + $ cd reference-repo + +try to bundle it alone explicitly +--------------------------------- + +We should not allow it + + $ hg bundle --type v3 --exact --rev $shelved_node --hidden ../internal-01.hg + abort: cannot bundle internal changesets + (1 internal changesets selected) + [255] + $ hg debugbundle ../internal-01.hg + abort: $ENOENT$: '../internal-01.hg' + [255] + +try to bundle it with other, somewhat explicitly +------------------------------------------------ + +We should not allow it + + $ hg bundle --type v3 --exact --rev 'desc(b)':: --hidden ../internal-02.hg + abort: cannot bundle internal changesets + (1 internal changesets selected) + [255] + $ hg debugbundle ../internal-02.hg + abort: $ENOENT$: '../internal-02.hg' + [255] + +bundle visible ancestors +------------------------ + +This should succeed as the standard filtering is skipping the internal change naturally + + $ hg bundle --type v3 --exact --rev 'desc(b)':: ../internal-03.hg + 2 changesets found + $ hg debugbundle ../internal-03.hg + Stream params: {Compression: BZ} + changegroup -- {nbchanges: 2, version: 03} (mandatory: True) + d2ae7f538514cd87c17547b0de4cea71fe1af9fb + 07f0cc02c06869c81ebf33867edef30554020c0d + cache:rev-branch-cache -- {} (mandatory: False) + phase-heads -- {} (mandatory: True) + 07f0cc02c06869c81ebf33867edef30554020c0d draft + + $ cd .. 
+ diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-bundle-phases.t --- a/tests/test-bundle-phases.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-bundle-phases.t Thu Jun 22 11:36:37 2023 +0200 @@ -44,6 +44,7 @@ cache:rev-branch-cache -- {} (mandatory: False) phase-heads -- {} (mandatory: True) 26805aba1e600a82e93661149f2313866a221a7b draft + 9bc730a19041f9ec7cb33c626e811aa233efb18c secret $ hg strip --no-backup C Phases show on incoming, and are also restored when pulling. Secret commits @@ -374,6 +375,7 @@ phase-heads -- {} (mandatory: True) dc0947a82db884575bb76ea10ac97b08536bfa03 public 03ca77807e919db8807c3749086dc36fb478cac0 draft + 4e4f9194f9f181c57f62e823e8bdfa46ab9e4ff4 secret $ hg strip --no-backup A $ hg unbundle -q bundle $ rm bundle @@ -398,6 +400,7 @@ 4e4f9194f9f181c57f62e823e8bdfa46ab9e4ff4 cache:rev-branch-cache -- {} (mandatory: False) phase-heads -- {} (mandatory: True) + 4e4f9194f9f181c57f62e823e8bdfa46ab9e4ff4 secret $ rm bundle $ hg bundle --base A -r D bundle @@ -411,6 +414,7 @@ cache:rev-branch-cache -- {} (mandatory: False) phase-heads -- {} (mandatory: True) dc0947a82db884575bb76ea10ac97b08536bfa03 public + 4e4f9194f9f181c57f62e823e8bdfa46ab9e4ff4 secret $ rm bundle $ hg bundle --base 'B + C' -r 'D + E' bundle @@ -423,4 +427,5 @@ cache:rev-branch-cache -- {} (mandatory: False) phase-heads -- {} (mandatory: True) 03ca77807e919db8807c3749086dc36fb478cac0 draft + 4e4f9194f9f181c57f62e823e8bdfa46ab9e4ff4 secret $ rm bundle diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-bundle-type.t --- a/tests/test-bundle-type.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-bundle-type.t Thu Jun 22 11:36:37 2023 +0200 @@ -4,127 +4,409 @@ $ hg init t2 $ cd t1 $ echo blablablablabla > file.txt - $ hg ci -Ama + $ hg ci -A -m commit_root adding file.txt - $ hg log | grep summary - summary: a - $ hg bundle ../b1 ../t2 + $ echo kapoue > file.txt + $ hg ci -m commit_1 + $ echo scrabageul > file.txt + $ hg ci -m commit_2 + $ hg up 'desc("commit_root")' + 1 files 
updated, 0 files merged, 0 files removed, 0 files unresolved + $ echo flagabalagla > file.txt + $ hg ci -m commit_3 + created new head + $ echo aliofia > file.txt + $ hg ci -m commit_4 + $ echo alklqo > file.txt + $ hg ci -m commit_5 + $ echo peakfeo > file.txt + $ hg ci -m commit_6 --secret + $ hg phase --public --rev 'desc(commit_3)' + $ hg log -GT '[{phase}] {desc|firstline}\n' + @ [secret] commit_6 + | + o [draft] commit_5 + | + o [draft] commit_4 + | + o [public] commit_3 + | + | o [draft] commit_2 + | | + | o [draft] commit_1 + |/ + o [public] commit_root + + +XXX the bundle generation is defined by a discovery round here. So the secret +changeset should be excluded. + + $ hg bundle ../b1.hg ../t2 searching for changes - 1 changesets found + 7 changesets found (known-bad-output !) + 6 changesets found (missing-correct-output !) + $ cd .. - $ cd ../t2 - $ hg unbundle ../b1 + $ hg -R t2 unbundle ./b1.hg adding changesets adding manifests adding file changes - added 1 changesets with 1 changes to 1 files - new changesets c35a0f9217e6 (1 drafts) - (run 'hg update' to get a working copy) - $ hg up + added 7 changesets with 7 changes to 1 files (+1 heads) (known-bad-output !) + added 6 changesets with 6 changes to 1 files (+1 heads) (missing-correct-output !) + new changesets ac39af4a9f7d:b9f5f740a8cd (7 drafts) + (run 'hg heads' to see heads, 'hg merge' to merge) + $ hg -R t2 up 1 files updated, 0 files merged, 0 files removed, 0 files unresolved - $ hg log | grep summary - summary: a - $ cd .. + updated to "b9f5f740a8cd: commit_6" + 1 other heads for branch "default" + $ hg -R t2 log -GT '[{phase}] {desc|firstline}\n' + @ [draft] commit_6 (known-bad-output !) + | (known-bad-output !) 
+ o [draft] commit_5 + | + o [draft] commit_4 + | + o [draft] commit_3 + | + | o [draft] commit_2 + | | + | o [draft] commit_1 + |/ + o [draft] commit_root + Unknown compression type is rejected $ hg init t3 - $ cd t3 - $ hg -q unbundle ../b1 - $ hg bundle -a -t unknown out.hg + $ hg -R t3 -q unbundle ./b1.hg + $ hg -R t3 bundle -a -t unknown out.hg abort: unknown is not a recognized bundle specification (see 'hg help bundlespec' for supported values for --type) [10] - $ hg bundle -a -t unknown-v2 out.hg + $ hg -R t3 bundle -a -t unknown-v2 out.hg abort: unknown compression is not supported (see 'hg help bundlespec' for supported values for --type) [10] - $ cd .. +test bundle types +================= -test bundle types +since we use --all, it is okay to include the secret changeset here. It is +unfortunate that the phase information for the secret one is lost. $ testbundle() { > echo % test bundle type $1 - > hg init t$1 - > cd t1 - > hg bundle -t $1 ../b$1 ../t$1 - > f -q -B6 -D ../b$1; echo - > cd ../t$1 - > hg debugbundle ../b$1 - > hg debugbundle --spec ../b$1 + > echo '===================================' + > hg -R t1 bundle --all --type $1 ./b-$1.hg + > f -q -B6 -D ./b-$1.hg; echo + > hg debugbundle ./b-$1.hg + > hg debugbundle --spec ./b-$1.hg > echo - > cd .. 
+ > hg init repo-from-type-$1 + > hg unbundle -R repo-from-type-$1 ./b-$1.hg + > hg -R repo-from-type-$1 log -GT '[{phase}] {desc|firstline}\n' + > echo > } - $ for t in "None" "bzip2" "gzip" "none-v2" "v2" "v1" "gzip-v1"; do + $ for t in "None" "bzip2" "gzip" "none-v2" "v2" "v1" "gzip-v1" "v3"; do > testbundle $t > done % test bundle type None - searching for changes - 1 changesets found + =================================== + 7 changesets found HG20\x00\x00 (esc) Stream params: {} - changegroup -- {nbchanges: 1, version: 02} (mandatory: True) - c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf + changegroup -- {nbchanges: 7, version: 02} (mandatory: True) + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 cache:rev-branch-cache -- {} (mandatory: False) none-v2 + adding changesets + adding manifests + adding file changes + added 7 changesets with 7 changes to 1 files (+1 heads) + new changesets ac39af4a9f7d:b9f5f740a8cd (7 drafts) + (run 'hg heads' to see heads, 'hg merge' to merge) + o [draft] commit_6 + | + o [draft] commit_5 + | + o [draft] commit_4 + | + o [draft] commit_3 + | + | o [draft] commit_2 + | | + | o [draft] commit_1 + |/ + o [draft] commit_root + + % test bundle type bzip2 - searching for changes - 1 changesets found + =================================== + 7 changesets found HG20\x00\x00 (esc) Stream params: {Compression: BZ} - changegroup -- {nbchanges: 1, version: 02} (mandatory: True) - c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf + changegroup -- {nbchanges: 7, version: 02} (mandatory: True) + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 
2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 cache:rev-branch-cache -- {} (mandatory: False) bzip2-v2 + adding changesets + adding manifests + adding file changes + added 7 changesets with 7 changes to 1 files (+1 heads) + new changesets ac39af4a9f7d:b9f5f740a8cd (7 drafts) + (run 'hg heads' to see heads, 'hg merge' to merge) + o [draft] commit_6 + | + o [draft] commit_5 + | + o [draft] commit_4 + | + o [draft] commit_3 + | + | o [draft] commit_2 + | | + | o [draft] commit_1 + |/ + o [draft] commit_root + + % test bundle type gzip - searching for changes - 1 changesets found + =================================== + 7 changesets found HG20\x00\x00 (esc) Stream params: {Compression: GZ} - changegroup -- {nbchanges: 1, version: 02} (mandatory: True) - c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf + changegroup -- {nbchanges: 7, version: 02} (mandatory: True) + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 cache:rev-branch-cache -- {} (mandatory: False) gzip-v2 + adding changesets + adding manifests + adding file changes + added 7 changesets with 7 changes to 1 files (+1 heads) + new changesets ac39af4a9f7d:b9f5f740a8cd (7 drafts) + (run 'hg heads' to see heads, 'hg merge' to merge) + o [draft] commit_6 + | + o [draft] commit_5 + | + o [draft] commit_4 + | + o [draft] commit_3 + | + | o [draft] commit_2 + | | + | o [draft] commit_1 + |/ + o [draft] commit_root + + % test bundle type none-v2 - searching for changes - 1 changesets found + =================================== + 7 changesets found HG20\x00\x00 (esc) Stream params: {} - changegroup -- {nbchanges: 1, version: 02} (mandatory: True) - c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf + changegroup -- {nbchanges: 7, version: 02} (mandatory: 
True) + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 cache:rev-branch-cache -- {} (mandatory: False) none-v2 + adding changesets + adding manifests + adding file changes + added 7 changesets with 7 changes to 1 files (+1 heads) + new changesets ac39af4a9f7d:b9f5f740a8cd (7 drafts) + (run 'hg heads' to see heads, 'hg merge' to merge) + o [draft] commit_6 + | + o [draft] commit_5 + | + o [draft] commit_4 + | + o [draft] commit_3 + | + | o [draft] commit_2 + | | + | o [draft] commit_1 + |/ + o [draft] commit_root + + % test bundle type v2 - searching for changes - 1 changesets found + =================================== + 7 changesets found HG20\x00\x00 (esc) Stream params: {Compression: BZ} - changegroup -- {nbchanges: 1, version: 02} (mandatory: True) - c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf + changegroup -- {nbchanges: 7, version: 02} (mandatory: True) + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 cache:rev-branch-cache -- {} (mandatory: False) bzip2-v2 + adding changesets + adding manifests + adding file changes + added 7 changesets with 7 changes to 1 files (+1 heads) + new changesets ac39af4a9f7d:b9f5f740a8cd (7 drafts) + (run 'hg heads' to see heads, 'hg merge' to merge) + o [draft] commit_6 + | + o [draft] commit_5 + | + o [draft] commit_4 + | + o [draft] commit_3 + | + | o [draft] commit_2 + | | + | o [draft] commit_1 + |/ + o [draft] commit_root + + % test bundle type v1 - searching for changes - 1 changesets found + =================================== + 7 
changesets found HG10BZ - c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 bzip2-v1 + adding changesets + adding manifests + adding file changes + added 7 changesets with 7 changes to 1 files (+1 heads) + new changesets ac39af4a9f7d:b9f5f740a8cd (7 drafts) + (run 'hg heads' to see heads, 'hg merge' to merge) + o [draft] commit_6 + | + o [draft] commit_5 + | + o [draft] commit_4 + | + o [draft] commit_3 + | + | o [draft] commit_2 + | | + | o [draft] commit_1 + |/ + o [draft] commit_root + + % test bundle type gzip-v1 - searching for changes - 1 changesets found + =================================== + 7 changesets found HG10GZ - c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 gzip-v1 + adding changesets + adding manifests + adding file changes + added 7 changesets with 7 changes to 1 files (+1 heads) + new changesets ac39af4a9f7d:b9f5f740a8cd (7 drafts) + (run 'hg heads' to see heads, 'hg merge' to merge) + o [draft] commit_6 + | + o [draft] commit_5 + | + o [draft] commit_4 + | + o [draft] commit_3 + | + | o [draft] commit_2 + | | + | o [draft] commit_1 + |/ + o [draft] commit_root + + + % test bundle type v3 + =================================== + 7 changesets found + HG20\x00\x00 (esc) + Stream params: {Compression: BZ} + changegroup -- {nbchanges: 7, targetphase: 2, version: 03} (mandatory: True) + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + 
a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 + cache:rev-branch-cache -- {} (mandatory: False) + phase-heads -- {} (mandatory: True) + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d public + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 draft + 2ea90778052ba7558fab36e3fd5d149512ff986b draft + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 secret + bzip2-v2;cg.version=03 + + adding changesets + adding manifests + adding file changes + added 7 changesets with 7 changes to 1 files (+1 heads) + new changesets ac39af4a9f7d:b9f5f740a8cd (4 drafts, 1 secrets) + (run 'hg heads' to see heads, 'hg merge' to merge) + o [secret] commit_6 + | + o [draft] commit_5 + | + o [draft] commit_4 + | + o [public] commit_3 + | + | o [draft] commit_2 + | | + | o [draft] commit_1 + |/ + o [public] commit_root + + Compression level can be adjusted for bundle2 bundles @@ -167,36 +449,90 @@ > testbundle $t > done % test bundle type zstd - searching for changes - 1 changesets found + =================================== + 7 changesets found HG20\x00\x00 (esc) Stream params: {Compression: ZS} - changegroup -- {nbchanges: 1, version: 02} (mandatory: True) - c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf + changegroup -- {nbchanges: 7, version: 02} (mandatory: True) + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 cache:rev-branch-cache -- {} (mandatory: False) zstd-v2 + adding changesets + adding manifests + adding file changes + added 7 changesets with 7 changes to 1 files (+1 heads) + new changesets ac39af4a9f7d:b9f5f740a8cd (7 drafts) + (run 'hg heads' to see heads, 'hg merge' to merge) + o [draft] 
commit_6 + | + o [draft] commit_5 + | + o [draft] commit_4 + | + o [draft] commit_3 + | + | o [draft] commit_2 + | | + | o [draft] commit_1 + |/ + o [draft] commit_root + + % test bundle type zstd-v2 - searching for changes - 1 changesets found + =================================== + 7 changesets found HG20\x00\x00 (esc) Stream params: {Compression: ZS} - changegroup -- {nbchanges: 1, version: 02} (mandatory: True) - c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf + changegroup -- {nbchanges: 7, version: 02} (mandatory: True) + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 cache:rev-branch-cache -- {} (mandatory: False) zstd-v2 + adding changesets + adding manifests + adding file changes + added 7 changesets with 7 changes to 1 files (+1 heads) + new changesets ac39af4a9f7d:b9f5f740a8cd (7 drafts) + (run 'hg heads' to see heads, 'hg merge' to merge) + o [draft] commit_6 + | + o [draft] commit_5 + | + o [draft] commit_4 + | + o [draft] commit_3 + | + | o [draft] commit_2 + | | + | o [draft] commit_1 + |/ + o [draft] commit_root + + Explicit request for zstd on non-generaldelta repos $ hg --config format.usegeneraldelta=false init nogd $ hg -q -R nogd pull t1 $ hg -R nogd bundle -a -t zstd nogd-zstd - 1 changesets found + 6 changesets found zstd-v1 always fails - $ hg -R tzstd bundle -a -t zstd-v1 zstd-v1 + $ hg -R t1 bundle -a -t zstd-v1 zstd-v1 abort: compression engine zstd is not supported on v1 bundles (see 'hg help bundlespec' for supported values for --type) [10] @@ -243,26 +579,44 @@ Test controlling the changegroup version $ hg -R t1 bundle --config experimental.changegroup3=yes -a -t v2 ./v2-cg-default.hg - 1 changesets found + 7 changesets found $ hg debugbundle ./v2-cg-default.hg --part-type changegroup Stream 
params: {Compression: BZ} - changegroup -- {nbchanges: 1, version: 02} (mandatory: True) - c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf + changegroup -- {nbchanges: 7, version: 02} (mandatory: True) + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 $ hg debugbundle ./v2-cg-default.hg --spec bzip2-v2 $ hg -R t1 bundle --config experimental.changegroup3=yes -a -t 'v2;cg.version=02' ./v2-cg-02.hg - 1 changesets found + 7 changesets found $ hg debugbundle ./v2-cg-02.hg --part-type changegroup Stream params: {Compression: BZ} - changegroup -- {nbchanges: 1, version: 02} (mandatory: True) - c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf + changegroup -- {nbchanges: 7, version: 02} (mandatory: True) + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 $ hg debugbundle ./v2-cg-02.hg --spec bzip2-v2 $ hg -R t1 bundle --config experimental.changegroup3=yes -a -t 'v2;cg.version=03' ./v2-cg-03.hg - 1 changesets found + 7 changesets found $ hg debugbundle ./v2-cg-03.hg --part-type changegroup Stream params: {Compression: BZ} - changegroup -- {nbchanges: 1, version: 03} (mandatory: True) - c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf + changegroup -- {nbchanges: 7, version: 03} (mandatory: True) + ac39af4a9f7d2aaa7d244720e57838be9bf63b03 + 901e97fadc587978ec52f2fa76af4aefc2d191e8 + a8c3a1ed30eb71f03f476c5fa7ead831ef991a55 + 66e2c4b43e0cf8f0bdff0733a0b97ce57874e35d + 624e609639853fe22c88d42a8fd1f53a0e9b7ebe + 2ea90778052ba7558fab36e3fd5d149512ff986b + b9f5f740a8cd76700020e3903ee55ecff78bd3e5 $ hg 
debugbundle ./v2-cg-03.hg --spec bzip2-v2;cg.version=03 diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-check-code.t --- a/tests/test-check-code.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-check-code.t Thu Jun 22 11:36:37 2023 +0200 @@ -57,6 +57,7 @@ .arcconfig .clang-format .editorconfig + .gitattributes .hgignore .hgsigs .hgtags diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-clone-stream-revlog-split.t --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tests/test-clone-stream-revlog-split.t Thu Jun 22 11:36:37 2023 +0200 @@ -0,0 +1,179 @@ +Test stream cloning while a revlog split happens +------------------------------------------------ + +#testcases stream-bundle2-v2 stream-bundle2-v3 + +#if stream-bundle2-v3 + $ cat << EOF >> $HGRCPATH + > [experimental] + > stream-v3 = yes + > EOF +#endif + +setup a repository for tests +---------------------------- + + $ cat >> $HGRCPATH << EOF + > [format] + > # skip compression to make it easy to trigger a split + > revlog-compression=none + > [phases] + > publish=no + > EOF + + $ hg init server + $ cd server + $ file="some-file" + $ printf '%20d' '1' > $file + $ hg commit -Aqma + $ printf '%1024d' '1' > $file + $ hg commit -Aqmb + $ printf '%20d' '1' > $file + $ hg commit -Aqmc + +check the revlog is inline + + $ f -s .hg/store/data/some-file* + .hg/store/data/some-file.i: size=1259 + $ hg debug-revlog-index some-file + rev linkrev nodeid p1-nodeid p2-nodeid + 0 0 ed70cecbc103 000000000000 000000000000 + 1 1 7241018db64c ed70cecbc103 000000000000 + 2 2 fa1120531cc1 7241018db64c 000000000000 + $ cd .. 
+ +setup synchronisation file + + $ HG_TEST_STREAM_WALKED_FILE_1="$TESTTMP/sync_file_walked_1" + $ export HG_TEST_STREAM_WALKED_FILE_1 + $ HG_TEST_STREAM_WALKED_FILE_2="$TESTTMP/sync_file_walked_2" + $ export HG_TEST_STREAM_WALKED_FILE_2 + $ HG_TEST_STREAM_WALKED_FILE_3="$TESTTMP/sync_file_walked_3" + $ export HG_TEST_STREAM_WALKED_FILE_3 + + +Test stream-clone raced by a revlog-split +========================================= + +Test stream-clone where the file is split right after the lock section is done + +Start the server + + $ hg serve -R server \ + > -p $HGPORT1 -d --error errors.log --pid-file=hg.pid \ + > --config extensions.stream_steps="$RUNTESTDIR/testlib/ext-stream-clone-steps.py" + $ cat hg.pid >> $DAEMON_PIDS + +Start a client doing a streaming clone + + $ ( \ + > hg clone --debug --stream -U http://localhost:$HGPORT1 \ + > clone-while-split > client.log 2>&1; \ + > touch "$HG_TEST_STREAM_WALKED_FILE_3" \ + > ) & + +Wait for the server to be done collecting data + + $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_1 + +trigger a split + + $ dd if=/dev/zero of=server/$file bs=1k count=128 > /dev/null 2>&1 + $ hg -R server ci -m "triggering a split" --config ui.timeout.warn=-1 + +unlock the stream generation + + $ touch $HG_TEST_STREAM_WALKED_FILE_2 + +wait for the client to be done cloning. + + $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_3 + +Check everything is fine + + $ cat client.log + using http://localhost:$HGPORT1/ + sending capabilities command + query 1; heads + sending batch command + streaming all changes + sending getbundle command + bundle2-input-bundle: with-transaction + bundle2-input-part: "stream2" (params: 3 mandatory) supported (stream-bundle2-v2 !) + bundle2-input-part: "stream3-exp" (params: 1 mandatory) supported (stream-bundle2-v3 !) + applying stream bundle + 7 files to transfer, 2.11 KB of data (stream-bundle2-v2 !) + adding [s] data/some-file.i (1.23 KB) (stream-bundle2-v2 !) 
+ 7 entries to transfer (stream-bundle2-v3 !) + adding [s] data/some-file.d (1.04 KB) (stream-bundle2-v3 !) + adding [s] data/some-file.i (192 bytes) (stream-bundle2-v3 !) + adding [s] phaseroots (43 bytes) + adding [s] 00manifest.i (348 bytes) + adding [s] 00changelog.i (381 bytes) + adding [c] branch2-served (94 bytes) + adding [c] rbc-names-v1 (7 bytes) + adding [c] rbc-revs-v1 (24 bytes) + updating the branch cache + transferred 2.11 KB in * seconds (* */sec) (glob) + bundle2-input-part: total payload size 2268 (stream-bundle2-v2 !) + bundle2-input-part: total payload size 2296 (stream-bundle2-v3 !) + bundle2-input-part: "listkeys" (params: 1 mandatory) supported + bundle2-input-bundle: 2 parts total + checking for updated bookmarks + updating the branch cache + (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob) + $ tail -2 errors.log + $ hg -R clone-while-split verify + checking changesets + checking manifests + crosschecking files in changesets and manifests + checking files + checking dirstate + checked 3 changesets with 3 changes to 1 files + $ hg -R clone-while-split tip + changeset: 2:dbd9854c38a6 + tag: tip + user: test + date: Thu Jan 01 00:00:00 1970 +0000 + summary: c + + $ hg -R clone-while-split debug-revlog-index some-file + rev linkrev nodeid p1-nodeid p2-nodeid + 0 0 ed70cecbc103 000000000000 000000000000 + 1 1 7241018db64c ed70cecbc103 000000000000 + 2 2 fa1120531cc1 7241018db64c 000000000000 + $ hg -R server phase --rev 'all()' + 0: draft + 1: draft + 2: draft + 3: draft + $ hg -R clone-while-split phase --rev 'all()' + 0: draft + 1: draft + 2: draft + +subsequent pull work + + $ hg -R clone-while-split pull + pulling from http://localhost:$HGPORT1/ + searching for changes + adding changesets + adding manifests + adding file changes + added 1 changesets with 1 changes to 1 files + new changesets df05c6cb1406 (1 drafts) + (run 'hg update' to get a working copy) + + $ hg -R clone-while-split debug-revlog-index some-file + 
rev linkrev nodeid p1-nodeid p2-nodeid + 0 0 ed70cecbc103 000000000000 000000000000 + 1 1 7241018db64c ed70cecbc103 000000000000 + 2 2 fa1120531cc1 7241018db64c 000000000000 + 3 3 a631378adaa3 fa1120531cc1 000000000000 + $ hg -R clone-while-split verify + checking changesets + checking manifests + crosschecking files in changesets and manifests + checking files + checking dirstate + checked 4 changesets with 4 changes to 1 files diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-clone-stream.t --- a/tests/test-clone-stream.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-clone-stream.t Thu Jun 22 11:36:37 2023 +0200 @@ -1,6 +1,6 @@ #require serve no-reposimplestore no-chg -#testcases stream-legacy stream-bundle2 +#testcases stream-legacy stream-bundle2-v2 stream-bundle2-v3 #if stream-legacy $ cat << EOF >> $HGRCPATH @@ -8,6 +8,12 @@ > bundle2.stream = no > EOF #endif +#if stream-bundle2-v3 + $ cat << EOF >> $HGRCPATH + > [experimental] + > stream-v3 = yes + > EOF +#endif Initialize repository @@ -128,6 +134,75 @@ changegroup 01 02 + 03 + checkheads + related + digests + md5 + sha1 + sha512 + error + abort + unsupportedcontent + pushraced + pushkey + hgtagsfnodes + listkeys + phases + heads + pushkey + remote-changegroup + http + https + + $ hg clone --stream -U http://localhost:$HGPORT server-disabled + warning: stream clone requested but server has them disabled + requesting all changes + adding changesets + adding manifests + adding file changes + added 3 changesets with 1088 changes to 1088 files + new changesets 96ee1d7354c4:5223b5e3265f + + $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader 
"x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1" + 200 Script output follows + content-type: application/mercurial-0.2 + + + $ f --size body --hexdump --bytes 100 + body: size=140 + 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......| + 0010: 73 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |s.ERROR:ABORT...| + 0020: 00 01 01 07 3c 04 16 6d 65 73 73 61 67 65 73 74 |....<..messagest| + 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques| + 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d| + 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th| + 0060: 69 73 20 66 |is f| + +#endif +#if stream-bundle2-v2 + $ hg debugcapabilities http://localhost:$HGPORT + Main capabilities: + batch + branchmap + $USUAL_BUNDLE2_CAPS_SERVER$ + changegroupsubset + compression=$BUNDLE2_COMPRESSIONS$ + getbundle + httpheader=1024 + httpmediatype=0.1rx,0.1tx,0.2tx + known + lookup + pushkey + unbundle=HG10GZ,HG10BZ,HG10UN + unbundlehash + Bundle2 capabilities: + HG20 + bookmarks + changegroup + 01 + 02 + 03 checkheads related digests @@ -157,23 +232,23 @@ added 3 changesets with 1088 changes to 1088 files new changesets 96ee1d7354c4:5223b5e3265f - $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader 
"x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1" + $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1" 200 Script output follows content-type: application/mercurial-0.2 $ f --size body --hexdump --bytes 100 - body: size=232 + body: size=140 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......| - 0010: cf 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |..ERROR:ABORT...| - 0020: 00 01 01 07 3c 04 72 6d 65 73 73 61 67 65 73 74 |....<.rmessagest| + 0010: 73 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |s.ERROR:ABORT...| + 0020: 00 01 01 07 3c 04 16 6d 65 73 73 61 67 65 73 74 |....<..messagest| 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques| 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d| 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th| 0060: 69 73 20 66 |is f| #endif -#if stream-bundle2 +#if stream-bundle2-v3 $ hg debugcapabilities http://localhost:$HGPORT Main capabilities: batch @@ -195,6 +270,7 @@ changegroup 01 02 + 03 checkheads related digests @@ -224,16 +300,16 @@ added 3 changesets with 1088 changes to 1088 files new changesets 
96ee1d7354c4:5223b5e3265f - $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1" + $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1" 200 Script output follows content-type: application/mercurial-0.2 $ f --size body --hexdump --bytes 100 - body: size=232 + body: size=140 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......| - 0010: cf 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |..ERROR:ABORT...| - 0020: 00 01 01 07 3c 04 72 6d 65 73 73 61 67 65 73 74 |....<.rmessagest| + 0010: 73 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |s.ERROR:ABORT...| + 0020: 00 01 01 07 3c 04 16 6d 65 73 73 61 67 65 73 74 |....<..messagest| 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques| 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d| 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th| @@ -260,7 +336,7 @@ no changes found $ cat server/errors.txt #endif -#if stream-bundle2 +#if stream-bundle2-v2 $ hg clone --stream -U http://localhost:$HGPORT clone1 streaming 
all changes 1093 files to transfer, 102 KB of data (no-zstd !) @@ -281,10 +357,30 @@ tags2-served $ cat server/errors.txt #endif +#if stream-bundle2-v3 + $ hg clone --stream -U http://localhost:$HGPORT clone1 + streaming all changes + 1093 entries to transfer + transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !) + transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !) + + $ ls -1 clone1/.hg/cache + branch2-base + branch2-immutable + branch2-served + branch2-served.hidden + branch2-visible + branch2-visible-hidden + rbc-names-v1 + rbc-revs-v1 + tags2 + tags2-served + $ cat server/errors.txt +#endif getbundle requests with stream=1 are uncompressed - $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto '0.1 0.2 comp=zlib,none' --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1" + $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto '0.1 0.2 comp=zlib,none' --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1" 200 Script output follows content-type: application/mercurial-0.2 @@ -384,7 +480,7 @@ searching for changes no changes found #endif -#if stream-bundle2 +#if stream-bundle2-v2 $ hg clone --uncompressed -U 
http://localhost:$HGPORT clone1-uncompressed streaming all changes 1093 files to transfer, 102 KB of data (no-zstd !) @@ -392,6 +488,13 @@ 1093 files to transfer, 98.9 KB of data (zstd !) transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !) #endif +#if stream-bundle2-v3 + $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed + streaming all changes + 1093 entries to transfer + transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !) + transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !) +#endif Clone with background file closing enabled @@ -423,7 +526,7 @@ updating the branch cache (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob) #endif -#if stream-bundle2 +#if stream-bundle2-v2 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding using http://localhost:$HGPORT/ sending capabilities command @@ -450,6 +553,32 @@ updating the branch cache (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob) #endif +#if stream-bundle2-v3 + $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding + using http://localhost:$HGPORT/ + sending capabilities command + query 1; heads + sending batch command + streaming all changes + sending getbundle command + bundle2-input-bundle: with-transaction + bundle2-input-part: "stream3-exp" (params: 1 mandatory) supported + applying stream bundle + 1093 entries to transfer + starting 4 threads for background file closing + starting 4 threads for background file closing + updating the branch cache + transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !) + bundle2-input-part: total payload size 120079 (no-zstd !) + transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !) 
+ bundle2-input-part: total payload size 117240 (zstd no-bigendian !) + bundle2-input-part: total payload size 116138 (zstd bigendian !) + bundle2-input-part: "listkeys" (params: 1 mandatory) supported + bundle2-input-bundle: 2 parts total + checking for updated bookmarks + updating the branch cache + (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob) +#endif Cannot stream clone when there are secret changesets @@ -482,7 +611,7 @@ searching for changes no changes found #endif -#if stream-bundle2 +#if stream-bundle2-v2 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed streaming all changes 1093 files to transfer, 102 KB of data (no-zstd !) @@ -490,6 +619,13 @@ 1093 files to transfer, 98.9 KB of data (zstd !) transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !) #endif +#if stream-bundle2-v3 + $ hg clone --stream -U http://localhost:$HGPORT secret-allowed + streaming all changes + 1093 entries to transfer + transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !) + transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !) 
+#endif $ killdaemons.py @@ -543,40 +679,6 @@ $ mkdir changing $ cd changing -extension for delaying the server process so we reliably can modify the repo -while cloning - - $ cat > stream_steps.py < import os - > import sys - > from mercurial import ( - > encoding, - > extensions, - > streamclone, - > testing, - > ) - > WALKED_FILE_1 = encoding.environ[b'HG_TEST_STREAM_WALKED_FILE_1'] - > WALKED_FILE_2 = encoding.environ[b'HG_TEST_STREAM_WALKED_FILE_2'] - > - > def _test_sync_point_walk_1(orig, repo): - > testing.write_file(WALKED_FILE_1) - > - > def _test_sync_point_walk_2(orig, repo): - > assert repo._currentlock(repo._lockref) is None - > testing.wait_file(WALKED_FILE_2) - > - > extensions.wrapfunction( - > streamclone, - > '_test_sync_point_walk_1', - > _test_sync_point_walk_1 - > ) - > extensions.wrapfunction( - > streamclone, - > '_test_sync_point_walk_2', - > _test_sync_point_walk_2 - > ) - > EOF - prepare repo with small and big file to cover both code paths in emitrevlogdata $ hg init repo @@ -636,7 +738,7 @@ updating to branch default 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved #endif -#if stream-bundle2 +#if stream-bundle2-v2 $ hg clone --stream http://localhost:$HGPORT with-bookmarks streaming all changes 1096 files to transfer, 102 KB of data (no-zstd !) @@ -646,6 +748,15 @@ updating to branch default 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved #endif +#if stream-bundle2-v3 + $ hg clone --stream http://localhost:$HGPORT with-bookmarks + streaming all changes + 1096 entries to transfer + transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !) + transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !) 
+ updating to branch default + 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved +#endif $ hg verify -R with-bookmarks -q $ hg -R with-bookmarks bookmarks some-bookmark 2:5223b5e3265f @@ -672,7 +783,7 @@ updating to branch default 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved #endif -#if stream-bundle2 +#if stream-bundle2-v2 $ hg clone --stream http://localhost:$HGPORT phase-publish streaming all changes 1096 files to transfer, 102 KB of data (no-zstd !) @@ -682,6 +793,15 @@ updating to branch default 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved #endif +#if stream-bundle2-v3 + $ hg clone --stream http://localhost:$HGPORT phase-publish + streaming all changes + 1096 entries to transfer + transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !) + transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !) + updating to branch default + 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved +#endif $ hg verify -R phase-publish -q $ hg -R phase-publish phase -r 'all()' 0: public @@ -718,7 +838,7 @@ 1: public 2: public #endif -#if stream-bundle2 +#if stream-bundle2-v2 $ hg clone --stream http://localhost:$HGPORT phase-no-publish streaming all changes 1097 files to transfer, 102 KB of data (no-zstd !) @@ -732,6 +852,19 @@ 1: draft 2: draft #endif +#if stream-bundle2-v3 + $ hg clone --stream http://localhost:$HGPORT phase-no-publish + streaming all changes + 1097 entries to transfer + transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !) + transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !) + updating to branch default + 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved + $ hg -R phase-no-publish phase -r 'all()' + 0: draft + 1: draft + 2: draft +#endif $ hg verify -R phase-no-publish -q $ killdaemons.py @@ -742,7 +875,7 @@ no obsolescence markers exchange in stream v1. 
#endif -#if stream-bundle2 +#if stream-bundle2-v2 Stream repository with obsolescence ----------------------------------- @@ -792,6 +925,55 @@ $ killdaemons.py #endif +#if stream-bundle2-v3 + +Stream repository with obsolescence +----------------------------------- + +Clone non-publishing with obsolescence + + $ cat >> $HGRCPATH << EOF + > [experimental] + > evolution=all + > EOF + + $ cd server + $ echo foo > foo + $ hg -q commit -m 'about to be pruned' + $ hg debugobsolete `hg log -r . -T '{node}'` -d '0 0' -u test --record-parents + 1 new obsolescence markers + obsoleted 1 changesets + $ hg up null -q + $ hg log -T '{rev}: {phase}\n' + 2: draft + 1: draft + 0: draft + $ hg serve -p $HGPORT -d --pid-file=hg.pid + $ cat hg.pid > $DAEMON_PIDS + $ cd .. + + $ hg clone -U --stream http://localhost:$HGPORT with-obsolescence + streaming all changes + 1098 entries to transfer + transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !) + transferred 99.5 KB in * seconds (* */sec) (glob) (zstd !) 
+ $ hg -R with-obsolescence log -T '{rev}: {phase}\n' + 2: draft + 1: draft + 0: draft + $ hg debugobsolete -R with-obsolescence + 8c206a663911c1f97f2f9d7382e417ae55872cfa 0 {5223b5e3265f0df40bb743da62249413d74ac70f} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'} + $ hg verify -R with-obsolescence -q + + $ hg clone -U --stream --config experimental.evolution=0 http://localhost:$HGPORT with-obsolescence-no-evolution + streaming all changes + remote: abort: server has obsolescence markers, but client cannot receive them via stream clone + abort: pull failed on remote + [100] + + $ killdaemons.py + +#endif Cloning a repo with no requirements doesn't give some obscure error diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-clonebundles-autogen.t --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tests/test-clonebundles-autogen.t Thu Jun 22 11:36:37 2023 +0200 @@ -0,0 +1,490 @@ + +#require no-reposimplestore no-chg + +initial setup + + $ hg init server + $ cat >> server/.hg/hgrc << EOF + > [extensions] + > clonebundles = + > + > [clone-bundles] + > auto-generate.on-change = yes + > auto-generate.formats = v2 + > upload-command = cp "\$HGCB_BUNDLE_PATH" "$TESTTMP"/final-upload/ + > delete-command = rm -f "$TESTTMP/final-upload/\$HGCB_BASENAME" + > url-template = file://$TESTTMP/final-upload/{basename} + > + > [devel] + > debug.clonebundles=yes + > EOF + + $ mkdir final-upload + $ hg clone server client + updating to branch default + 0 files updated, 0 files merged, 0 files removed, 0 files unresolved + $ cd client + +Test bundles are generated on push +================================== + + $ touch foo + $ hg -q commit -A -m 'add foo' + $ touch bar + $ hg -q commit -A -m 'add bar' + $ hg push + pushing to $TESTTMP/server + searching for changes + adding changesets + adding manifests + adding file changes + 2 changesets found + added 2 changesets with 2 changes to 2 files + clone-bundles: starting bundle generation: bzip2-v2 + $ cat ../server/.hg/clonebundles.manifest + 
file:/*/$TESTTMP/final-upload/full-bzip2-v2-2_revs-aaff8d2ffbbf_tip-*_txn.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../final-upload + full-bzip2-v2-2_revs-aaff8d2ffbbf_tip-*_txn.hg (glob) + $ ls -1 ../server/.hg/tmp-bundles + +Newer bundles are generated with more pushes +-------------------------------------------- + + $ touch baz + $ hg -q commit -A -m 'add baz' + $ touch buz + $ hg -q commit -A -m 'add buz' + $ hg push + pushing to $TESTTMP/server + searching for changes + adding changesets + adding manifests + adding file changes + 4 changesets found + added 2 changesets with 2 changes to 2 files + clone-bundles: starting bundle generation: bzip2-v2 + + $ cat ../server/.hg/clonebundles.manifest + file:/*/$TESTTMP/final-upload/full-bzip2-v2-4_revs-6427147b985a_tip-*_txn.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../final-upload + full-bzip2-v2-2_revs-aaff8d2ffbbf_tip-*_txn.hg (glob) + full-bzip2-v2-4_revs-6427147b985a_tip-*_txn.hg (glob) + $ ls -1 ../server/.hg/tmp-bundles + +Older bundles are cleaned up with more pushes +--------------------------------------------- + + $ touch faz + $ hg -q commit -A -m 'add faz' + $ touch fuz + $ hg -q commit -A -m 'add fuz' + $ hg push + pushing to $TESTTMP/server + searching for changes + adding changesets + adding manifests + adding file changes + clone-bundles: deleting bundle full-bzip2-v2-2_revs-aaff8d2ffbbf_tip-*_txn.hg (glob) + 6 changesets found + added 2 changesets with 2 changes to 2 files + clone-bundles: starting bundle generation: bzip2-v2 + + $ cat ../server/.hg/clonebundles.manifest + file:/*/$TESTTMP/final-upload/full-bzip2-v2-6_revs-b1010e95ea00_tip-*_txn.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../final-upload + full-bzip2-v2-4_revs-6427147b985a_tip-*_txn.hg (glob) + full-bzip2-v2-6_revs-b1010e95ea00_tip-*_txn.hg (glob) + $ ls -1 ../server/.hg/tmp-bundles + +Test conditions to get them generated +===================================== + +Check ratio + + $ cat >> ../server/.hg/hgrc << EOF + > [clone-bundles] + > 
trigger.below-bundled-ratio = 0.5 + > EOF + $ touch far + $ hg -q commit -A -m 'add far' + $ hg push + pushing to $TESTTMP/server + searching for changes + adding changesets + adding manifests + adding file changes + added 1 changesets with 1 changes to 1 files + $ cat ../server/.hg/clonebundles.manifest + file:/*/$TESTTMP/final-upload/full-bzip2-v2-6_revs-b1010e95ea00_tip-*_txn.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../final-upload + full-bzip2-v2-4_revs-6427147b985a_tip-*_txn.hg (glob) + full-bzip2-v2-6_revs-b1010e95ea00_tip-*_txn.hg (glob) + $ ls -1 ../server/.hg/tmp-bundles + +Check absolute number of revisions + + $ cat >> ../server/.hg/hgrc << EOF + > [clone-bundles] + > trigger.revs = 2 + > EOF + $ touch bur + $ hg -q commit -A -m 'add bur' + $ hg push + pushing to $TESTTMP/server + searching for changes + adding changesets + adding manifests + adding file changes + clone-bundles: deleting bundle full-bzip2-v2-4_revs-6427147b985a_tip-*_txn.hg (glob) + 8 changesets found + added 1 changesets with 1 changes to 1 files + clone-bundles: starting bundle generation: bzip2-v2 + $ cat ../server/.hg/clonebundles.manifest + file:/*/$TESTTMP/final-upload/full-bzip2-v2-8_revs-8353e8af1306_tip-*_txn.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../final-upload + full-bzip2-v2-6_revs-b1010e95ea00_tip-*_txn.hg (glob) + full-bzip2-v2-8_revs-8353e8af1306_tip-*_txn.hg (glob) + $ ls -1 ../server/.hg/tmp-bundles + +(that one would not generate new bundles) + + $ touch tur + $ hg -q commit -A -m 'add tur' + $ hg push + pushing to $TESTTMP/server + searching for changes + adding changesets + adding manifests + adding file changes + added 1 changesets with 1 changes to 1 files + $ cat ../server/.hg/clonebundles.manifest + file:/*/$TESTTMP/final-upload/full-bzip2-v2-8_revs-8353e8af1306_tip-*_txn.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../final-upload + full-bzip2-v2-6_revs-b1010e95ea00_tip-*_txn.hg (glob) + full-bzip2-v2-8_revs-8353e8af1306_tip-*_txn.hg (glob) + $ ls -1 
../server/.hg/tmp-bundles + +Test generation through the dedicated command +============================================= + + $ cat >> ../server/.hg/hgrc << EOF + > [clone-bundles] + > auto-generate.on-change = no + > EOF + +Check the command can generate content when needed +-------------------------------------------------- + +Do a push that fulfills the trigger condition. +Yet it should not automatically generate a bundle, since +"auto-generate.on-change" is now set to "no". + + $ touch quoi + $ hg -q commit -A -m 'add quoi' + + $ pre_push_manifest=`cat ../server/.hg/clonebundles.manifest|f --sha256 | sed 's/.*=//' | cat` + $ pre_push_upload=`ls -1 ../final-upload|f --sha256 | sed 's/.*=//' | cat` + $ ls -1 ../server/.hg/tmp-bundles + + $ hg push + pushing to $TESTTMP/server + searching for changes + adding changesets + adding manifests + adding file changes + added 1 changesets with 1 changes to 1 files + + $ post_push_manifest=`cat ../server/.hg/clonebundles.manifest|f --sha256 | sed 's/.*=//' | cat` + $ post_push_upload=`ls -1 ../final-upload|f --sha256 | sed 's/.*=//' | cat` + $ ls -1 ../server/.hg/tmp-bundles + $ test "$pre_push_manifest" = "$post_push_manifest" + $ test "$pre_push_upload" = "$post_push_upload" + +Running the command should detect the stale bundles and run the full +automatic generation logic.
+ + $ hg -R ../server/ admin::clone-bundles-refresh + clone-bundles: deleting bundle full-bzip2-v2-6_revs-b1010e95ea00_tip-*_txn.hg (glob) + clone-bundles: starting bundle generation: bzip2-v2 + 10 changesets found + $ cat ../server/.hg/clonebundles.manifest + file:/*/$TESTTMP/final-upload/full-bzip2-v2-10_revs-3b6f57f17d70_tip-*_acbr.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../final-upload + full-bzip2-v2-10_revs-3b6f57f17d70_tip-*_acbr.hg (glob) + full-bzip2-v2-8_revs-8353e8af1306_tip-*_txn.hg (glob) + $ ls -1 ../server/.hg/tmp-bundles + +Check the command cleans up older bundles when possible +------------------------------------------------------- + + $ hg -R ../server/ admin::clone-bundles-refresh + clone-bundles: deleting bundle full-bzip2-v2-8_revs-8353e8af1306_tip-*_txn.hg (glob) + $ cat ../server/.hg/clonebundles.manifest + file:/*/$TESTTMP/final-upload/full-bzip2-v2-10_revs-3b6f57f17d70_tip-*_acbr.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../final-upload + full-bzip2-v2-10_revs-3b6f57f17d70_tip-*_acbr.hg (glob) + $ ls -1 ../server/.hg/tmp-bundles + +Nothing is generated when the bundles are sufficiently up to date +----------------------------------------------------------------- + + $ touch feur + $ hg -q commit -A -m 'add feur' + + $ pre_push_manifest=`cat ../server/.hg/clonebundles.manifest|f --sha256 | sed 's/.*=//' | cat` + $ pre_push_upload=`ls -1 ../final-upload|f --sha256 | sed 's/.*=//' | cat` + $ ls -1 ../server/.hg/tmp-bundles + + $ hg push + pushing to $TESTTMP/server + searching for changes + adding changesets + adding manifests + adding file changes + added 1 changesets with 1 changes to 1 files + + $ post_push_manifest=`cat ../server/.hg/clonebundles.manifest|f --sha256 | sed 's/.*=//' | cat` + $ post_push_upload=`ls -1 ../final-upload|f --sha256 | sed 's/.*=//' | cat` + $ ls -1 ../server/.hg/tmp-bundles + $ test "$pre_push_manifest" = "$post_push_manifest" + $ test "$pre_push_upload" = "$post_push_upload" + + $ hg -R ../server/ 
admin::clone-bundles-refresh + + $ post_refresh_manifest=`cat ../server/.hg/clonebundles.manifest|f --sha256 | sed 's/.*=//' | cat` + $ post_refresh_upload=`ls -1 ../final-upload|f --sha256 | sed 's/.*=//' | cat` + $ ls -1 ../server/.hg/tmp-bundles + $ test "$pre_push_manifest" = "$post_refresh_manifest" + $ test "$pre_push_upload" = "$post_refresh_upload" + +Test modification of configuration +================================== + +Testing that later runs adapt to configuration changes even if the repository is +unchanged. + +adding more formats +------------------- + +bundle for added formats should be generated + +change configuration + + $ cat >> ../server/.hg/hgrc << EOF + > [clone-bundles] + > auto-generate.formats = v1, v2 + > EOF + +refresh the bundles + + $ hg -R ../server/ admin::clone-bundles-refresh + clone-bundles: starting bundle generation: bzip2-v1 + 11 changesets found + +the bundle for the "new" format should have been added + + $ cat ../server/.hg/clonebundles.manifest + file:/*/$TESTTMP/final-upload/full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg BUNDLESPEC=bzip2-v1 (glob) + file:/*/$TESTTMP/final-upload/full-bzip2-v2-10_revs-3b6f57f17d70_tip-*_acbr.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../final-upload + full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + full-bzip2-v2-10_revs-3b6f57f17d70_tip-*_acbr.hg (glob) + $ ls -1 ../server/.hg/tmp-bundles + +Changing the ratio +------------------ + +Changing the ratio to something that would have triggered a bundle during the last push. 
+ + $ cat >> ../server/.hg/hgrc << EOF + > [clone-bundles] + > trigger.below-bundled-ratio = 0.95 + > EOF + +refresh the bundles + + $ hg -R ../server/ admin::clone-bundles-refresh + clone-bundles: starting bundle generation: bzip2-v2 + 11 changesets found + + +the "outdated" bundle should be refreshed + + $ cat ../server/.hg/clonebundles.manifest + file:/*/$TESTTMP/final-upload/full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg BUNDLESPEC=bzip2-v1 (glob) + file:/*/$TESTTMP/final-upload/full-bzip2-v2-11_revs-4226b1cd5fda_tip-*_acbr.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../final-upload + full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + full-bzip2-v2-10_revs-3b6f57f17d70_tip-*_acbr.hg (glob) + full-bzip2-v2-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + $ ls -1 ../server/.hg/tmp-bundles + +Test more command options +========================= + +bundle clearing +--------------- + + $ hg -R ../server/ admin::clone-bundles-clear + clone-bundles: deleting bundle full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + clone-bundles: deleting bundle full-bzip2-v2-10_revs-3b6f57f17d70_tip-*_acbr.hg (glob) + clone-bundles: deleting bundle full-bzip2-v2-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + +Nothing should remain + + $ cat ../server/.hg/clonebundles.manifest + $ ls -1 ../final-upload + $ ls -1 ../server/.hg/tmp-bundles + +background generation +--------------------- + +generate bundle using background subprocess +(since we are in devel mode, the command will still wait for the background +process to end) + + $ hg -R ../server/ admin::clone-bundles-refresh --background + 11 changesets found + 11 changesets found + clone-bundles: starting bundle generation: bzip2-v1 + clone-bundles: starting bundle generation: bzip2-v2 + +bundles should have been generated + + $ cat ../server/.hg/clonebundles.manifest + file:/*/$TESTTMP/final-upload/full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg BUNDLESPEC=bzip2-v1 (glob) + 
file:/*/$TESTTMP/final-upload/full-bzip2-v2-11_revs-4226b1cd5fda_tip-*_acbr.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../final-upload + full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + full-bzip2-v2-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + $ ls -1 ../server/.hg/tmp-bundles + +Test HTTP URL +========================= + + $ hg -R ../server/ admin::clone-bundles-clear + clone-bundles: deleting bundle full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + clone-bundles: deleting bundle full-bzip2-v2-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + + $ cat >> ../server/.hg/hgrc << EOF + > [clone-bundles] + > url-template = https://example.com/final-upload/{basename} + > EOF + $ hg -R ../server/ admin::clone-bundles-refresh + clone-bundles: starting bundle generation: bzip2-v1 + 11 changesets found + clone-bundles: starting bundle generation: bzip2-v2 + 11 changesets found + + +bundles should have been generated with the REQUIRESNI option + + $ cat ../server/.hg/clonebundles.manifest + https://example.com/final-upload/full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg BUNDLESPEC=bzip2-v1 REQUIRESNI=true (glob) + https://example.com/final-upload/full-bzip2-v2-11_revs-4226b1cd5fda_tip-*_acbr.hg BUNDLESPEC=bzip2-v2 REQUIRESNI=true (glob) + +Test serving them through inline-clone bundle +============================================= + + $ cat >> ../server/.hg/hgrc << EOF + > [clone-bundles] + > auto-generate.serve-inline=yes + > EOF + $ hg -R ../server/ admin::clone-bundles-clear + clone-bundles: deleting bundle full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + clone-bundles: deleting bundle full-bzip2-v2-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + +initial generation +------------------ + + + $ hg -R ../server/ admin::clone-bundles-refresh + clone-bundles: starting bundle generation: bzip2-v1 + 11 changesets found + clone-bundles: starting bundle generation: bzip2-v2 + 11 changesets found + $ cat ../server/.hg/clonebundles.manifest + 
peer-bundle-cache://full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg BUNDLESPEC=bzip2-v1 (glob) + peer-bundle-cache://full-bzip2-v2-11_revs-4226b1cd5fda_tip-*_acbr.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../server/.hg/bundle-cache + full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + full-bzip2-v2-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + $ ls -1 ../final-upload + +Regeneration eventually cleans up the old ones +---------------------------------------------- + +create more content + $ touch voit + $ hg -q commit -A -m 'add voit' + $ touch ar + $ hg -q commit -A -m 'add ar' + $ hg push + pushing to $TESTTMP/server + searching for changes + adding changesets + adding manifests + adding file changes + added 2 changesets with 2 changes to 2 files + +check first regeneration + + $ hg -R ../server/ admin::clone-bundles-refresh + clone-bundles: starting bundle generation: bzip2-v1 + 13 changesets found + clone-bundles: starting bundle generation: bzip2-v2 + 13 changesets found + $ cat ../server/.hg/clonebundles.manifest + peer-bundle-cache://full-bzip2-v1-13_revs-8a81f9be54ea_tip-*_acbr.hg BUNDLESPEC=bzip2-v1 (glob) + peer-bundle-cache://full-bzip2-v2-13_revs-8a81f9be54ea_tip-*_acbr.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../server/.hg/bundle-cache + full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + full-bzip2-v1-13_revs-8a81f9be54ea_tip-*_acbr.hg (glob) + full-bzip2-v2-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + full-bzip2-v2-13_revs-8a81f9be54ea_tip-*_acbr.hg (glob) + $ ls -1 ../final-upload + +check second regeneration (should clean up the generation before the last) + + $ touch "investi" + $ hg -q commit -A -m 'add investi' + $ touch "lesgisla" + $ hg -q commit -A -m 'add lesgisla' + $ hg push + pushing to $TESTTMP/server + searching for changes + adding changesets + adding manifests + adding file changes + added 2 changesets with 2 changes to 2 files + + $ hg -R ../server/ admin::clone-bundles-refresh + clone-bundles: deleting inline bundle
full-bzip2-v1-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + clone-bundles: deleting inline bundle full-bzip2-v2-11_revs-4226b1cd5fda_tip-*_acbr.hg (glob) + clone-bundles: starting bundle generation: bzip2-v1 + 15 changesets found + clone-bundles: starting bundle generation: bzip2-v2 + 15 changesets found + $ cat ../server/.hg/clonebundles.manifest + peer-bundle-cache://full-bzip2-v1-15_revs-17615b3984c2_tip-*_acbr.hg BUNDLESPEC=bzip2-v1 (glob) + peer-bundle-cache://full-bzip2-v2-15_revs-17615b3984c2_tip-*_acbr.hg BUNDLESPEC=bzip2-v2 (glob) + $ ls -1 ../server/.hg/bundle-cache + full-bzip2-v1-13_revs-8a81f9be54ea_tip-*_acbr.hg (glob) + full-bzip2-v1-15_revs-17615b3984c2_tip-*_acbr.hg (glob) + full-bzip2-v2-13_revs-8a81f9be54ea_tip-*_acbr.hg (glob) + full-bzip2-v2-15_revs-17615b3984c2_tip-*_acbr.hg (glob) + $ ls -1 ../final-upload + +Check the url is correct +------------------------ + + $ hg clone -U ssh://user@dummy/server ssh-inline-clone + applying clone bundle from peer-bundle-cache://full-bzip2-v1-15_revs-17615b3984c2_tip-*_acbr.hg (glob) + adding changesets + adding manifests + adding file changes + added 15 changesets with 15 changes to 15 files + finished applying clone bundle + searching for changes + no changes found + 15 local changesets published diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-clonebundles.t --- a/tests/test-clonebundles.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-clonebundles.t Thu Jun 22 11:36:37 2023 +0200 @@ -233,6 +233,93 @@ no changes found 2 local changesets published +Inline bundle +============= + +Checking bundle retrieved over the wireprotocol + +Feature works over SSH with inline bundle +----------------------------------------- + + $ mkdir server/.hg/bundle-cache/ + $ cp full.hg server/.hg/bundle-cache/ + $ echo "peer-bundle-cache://full.hg" > server/.hg/clonebundles.manifest + $ hg clone -U ssh://user@dummy/server ssh-inline-clone + applying clone bundle from peer-bundle-cache://full.hg + adding changesets + adding 
manifests + adding file changes + added 2 changesets with 2 changes to 2 files + finished applying clone bundle + searching for changes + no changes found + 2 local changesets published + +HTTP support +------------ + + $ hg clone -U http://localhost:$HGPORT http-inline-clone + applying clone bundle from peer-bundle-cache://full.hg + adding changesets + adding manifests + adding file changes + added 2 changesets with 2 changes to 2 files + finished applying clone bundle + searching for changes + no changes found + 2 local changesets published + + +Check local behavior +-------------------- + +We don't use the clone bundle, but we do not crash either. + + $ hg clone -U ./server local-inline-clone-default + $ hg clone -U ./server local-inline-clone-pull --pull + requesting all changes + adding changesets + adding manifests + adding file changes + added 2 changesets with 2 changes to 2 files + new changesets 53245c60e682:aaff8d2ffbbf + +Pre-transmit Hook +----------------- + +Hooks work with an inline bundle + + $ cp server/.hg/hgrc server/.hg/hgrc-beforeinlinehooks + $ echo "[hooks]" >> server/.hg/hgrc + $ echo "pretransmit-inline-clone-bundle=echo foo" >> server/.hg/hgrc + $ hg clone -U ssh://user@dummy/server ssh-inline-clone-hook + applying clone bundle from peer-bundle-cache://full.hg + remote: foo + adding changesets + adding manifests + adding file changes + added 2 changesets with 2 changes to 2 files + finished applying clone bundle + searching for changes + no changes found + 2 local changesets published + +Hooks can make an inline bundle fail + + $ cp server/.hg/hgrc-beforeinlinehooks server/.hg/hgrc + $ echo "[hooks]" >> server/.hg/hgrc + $ echo "pretransmit-inline-clone-bundle=echo bar && false" >> server/.hg/hgrc + $ hg clone -U ssh://user@dummy/server ssh-inline-clone-hook-fail + applying clone bundle from peer-bundle-cache://full.hg + remote: bar + remote: abort: pretransmit-inline-clone-bundle hook exited with status 1 + abort: stream ended
unexpectedly (got 0 bytes, expected 1) + [255] + $ cp server/.hg/hgrc-beforeinlinehooks server/.hg/hgrc + +Other tests +=========== + Entry with unknown BUNDLESPEC is filtered and not used $ cat > server/.hg/clonebundles.manifest << EOF @@ -584,7 +671,7 @@ $ hg clone -U --debug --config ui.available-memory=16MB http://localhost:$HGPORT gzip-too-large using http://localhost:$HGPORT/ sending capabilities command - sending clonebundles command + sending clonebundles_manifest command filtering http://localhost:$HGPORT1/gz-a.hg as it needs more than 2/3 of system memory no compatible clone bundles available on server; falling back to regular clone (you may want to report this to the server operator) @@ -601,7 +688,7 @@ adding file changes adding bar revisions adding foo revisions - bundle2-input-part: total payload size 920 + bundle2-input-part: total payload size 936 bundle2-input-part: "listkeys" (params: 1 mandatory) supported bundle2-input-part: "phase-heads" supported bundle2-input-part: total payload size 24 @@ -617,7 +704,7 @@ $ hg clone -U --debug --config ui.available-memory=32MB http://localhost:$HGPORT gzip-too-large2 using http://localhost:$HGPORT/ sending capabilities command - sending clonebundles command + sending clonebundles_manifest command applying clone bundle from http://localhost:$HGPORT1/gz-a.hg bundle2-input-bundle: 1 params with-transaction bundle2-input-part: "changegroup" (params: 1 mandatory 1 advisory) supported diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-commit-amend.t --- a/tests/test-commit-amend.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-commit-amend.t Thu Jun 22 11:36:37 2023 +0200 @@ -121,15 +121,15 @@ committing changelog 1 changesets found uncompressed size of bundle content: - 254 (changelog) - 163 (manifests) - 133 a + 256 (changelog) + 165 (manifests) + 135 a saved backup bundle to $TESTTMP/repo/.hg/strip-backup/47343646fa3d-c2758885-amend.hg 1 changesets found uncompressed size of bundle content: - 250 (changelog) - 
163 (manifests) - 133 a + 252 (changelog) + 165 (manifests) + 135 a adding branch adding changesets adding manifests @@ -265,15 +265,15 @@ committing changelog 1 changesets found uncompressed size of bundle content: - 249 (changelog) - 163 (manifests) - 135 a + 251 (changelog) + 165 (manifests) + 137 a saved backup bundle to $TESTTMP/repo/.hg/strip-backup/a9a13940fc03-7c2e8674-amend.hg 1 changesets found uncompressed size of bundle content: - 257 (changelog) - 163 (manifests) - 135 a + 259 (changelog) + 165 (manifests) + 137 a adding branch adding changesets adding manifests @@ -301,15 +301,15 @@ committing changelog 1 changesets found uncompressed size of bundle content: - 257 (changelog) - 163 (manifests) - 135 a + 259 (changelog) + 165 (manifests) + 137 a saved backup bundle to $TESTTMP/repo/.hg/strip-backup/64a124ba1b44-10374b8f-amend.hg 1 changesets found uncompressed size of bundle content: - 257 (changelog) - 163 (manifests) - 137 a + 259 (changelog) + 165 (manifests) + 139 a adding branch adding changesets adding manifests diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-completion.t --- a/tests/test-completion.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-completion.t Thu Jun 22 11:36:37 2023 +0200 @@ -78,6 +78,8 @@ debug-repair-issue6528 debug-revlog-index debug-revlog-stats + debug::stable-tail-sort + debug::stable-tail-sort-leaps debugancestor debugantivirusrunning debugapplystreamclonebundle @@ -273,6 +275,8 @@ debug-repair-issue6528: to-report, from-report, paranoid, dry-run debug-revlog-index: changelog, manifest, dir, template debug-revlog-stats: changelog, manifest, filelogs, template + debug::stable-tail-sort: template + debug::stable-tail-sort-leaps: template, specific debugancestor: debugantivirusrunning: debugapplystreamclonebundle: @@ -309,7 +313,7 @@ debugmanifestfulltextcache: clear, add debugmergestate: style, template debugnamecomplete: - debugnodemap: dump-new, dump-disk, check, metadata + debugnodemap: changelog, manifest, dir, 
dump-new, dump-disk, check, metadata debugobsolete: flags, record-parents, rev, exclusive, index, delete, date, user, template debugp1copies: rev debugp2copies: rev @@ -364,7 +368,7 @@ parents: rev, style, template paths: template phase: public, draft, secret, force, rev - pull: update, force, confirm, rev, bookmark, branch, ssh, remotecmd, insecure + pull: update, force, confirm, rev, bookmark, branch, remote-hidden, ssh, remotecmd, insecure purge: abort-on-err, all, ignored, dirs, files, print, print0, confirm, include, exclude push: force, rev, bookmark, all-bookmarks, branch, new-branch, pushvars, publish, ssh, remotecmd, insecure recover: verify diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-contrib-perf.t --- a/tests/test-contrib-perf.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-contrib-perf.t Thu Jun 22 11:36:37 2023 +0200 @@ -188,6 +188,12 @@ perf::startup (no help text available) perf::status benchmark the performance of a single status call + perf::stream-consume + benchmark the full application of a stream clone + perf::stream-generate + benchmark the full generation of a stream clone + perf::stream-locked-section + benchmark the initial, repo-locked, section of a stream-clone perf::tags (no help text available) perf::templating test the rendering time of a given template diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-debug-revlog-stats.t --- a/tests/test-debug-revlog-stats.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-debug-revlog-stats.t Thu Jun 22 11:36:37 2023 +0200 @@ -18,8 +18,6 @@ $ hg debug-revlog-stats rev-count data-size inl type target - 0 0 yes changelog - 0 0 yes manifest $ mkdir folder $ touch a b folder/c folder/d diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-debugcommands.t --- a/tests/test-debugcommands.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-debugcommands.t Thu Jun 22 11:36:37 2023 +0200 @@ -636,6 +636,7 @@ changegroup 01 02 + 03 checkheads related digests @@ -673,7 +674,7 @@ devel-peer-request: pairs: 81 bytes 
sending hello command sending between command - remote: 468 + remote: 473 remote: capabilities: batch branchmap $USUAL_BUNDLE2_CAPS$ changegroupsubset getbundle known lookup protocaps pushkey streamreqs=generaldelta,revlog-compression-zstd,revlogv1,sparserevlog unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash remote: 1 devel-peer-request: protocaps @@ -693,7 +694,7 @@ devel-peer-request: pairs: 81 bytes sending hello command sending between command - remote: 468 + remote: 473 remote: capabilities: batch branchmap $USUAL_BUNDLE2_CAPS$ changegroupsubset getbundle known lookup protocaps pushkey streamreqs=generaldelta,revlog-compression-zstd,revlogv1,sparserevlog unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash remote: 1 devel-peer-request: protocaps @@ -713,7 +714,7 @@ devel-peer-request: pairs: 81 bytes sending hello command sending between command - remote: 444 + remote: 449 remote: capabilities: batch branchmap $USUAL_BUNDLE2_CAPS$ changegroupsubset getbundle known lookup protocaps pushkey streamreqs=generaldelta,revlogv1,sparserevlog unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash remote: 1 devel-peer-request: protocaps diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-dirstate-version-fallback.t --- a/tests/test-dirstate-version-fallback.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-dirstate-version-fallback.t Thu Jun 22 11:36:37 2023 +0200 @@ -47,5 +47,5 @@ $ hg st abort: working directory state appears damaged! (no-rhg !) (falling back to dirstate-v1 from v2 also failed) (no-rhg !) - abort: Too little data for dirstate. (rhg !) + abort: Too little data for dirstate: 16 bytes. (rhg !) 
[255] diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-fncache.t --- a/tests/test-fncache.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-fncache.t Thu Jun 22 11:36:37 2023 +0200 @@ -104,7 +104,7 @@ .hg/phaseroots .hg/requires .hg/undo - .hg/undo.backup.branch + .hg/undo.backup.branch.bck .hg/undo.backupfiles .hg/undo.desc .hg/wcache @@ -144,7 +144,7 @@ .hg/store/requires .hg/store/undo .hg/store/undo.backupfiles - .hg/undo.backup.branch + .hg/undo.backup.branch.bck .hg/undo.desc .hg/wcache .hg/wcache/checkisexec (execbit !) @@ -514,6 +514,7 @@ $ hg clone -q . tobundle fncache load triggered! fncache load triggered! + fncache load triggered! $ echo 'new line' > tobundle/bar $ hg -R tobundle ci -qm bar diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-generaldelta.t --- a/tests/test-generaldelta.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-generaldelta.t Thu Jun 22 11:36:37 2023 +0200 @@ -163,7 +163,7 @@ saved backup bundle to $TESTTMP/aggressive/.hg/strip-backup/1c5d4dc9a8b8-6c68e60c-backup.hg $ hg debugbundle .hg/strip-backup/* Stream params: {Compression: BZ} - changegroup -- {nbchanges: 1, version: 02} (mandatory: True) + changegroup -- {nbchanges: 1, version: 03} (mandatory: True) 1c5d4dc9a8b8d6e1750966d343e94db665e7a1e9 cache:rev-branch-cache -- {} (mandatory: False) phase-heads -- {} (mandatory: True) diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-hardlinks.t --- a/tests/test-hardlinks.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-hardlinks.t Thu Jun 22 11:36:37 2023 +0200 @@ -52,7 +52,7 @@ 1 r1/.hg/store/phaseroots 1 r1/.hg/store/requires 1 r1/.hg/store/undo - 1 r1/.hg/store/undo.backup.fncache (repofncache !) + 1 r1/.hg/store/undo.backup.fncache.bck (repofncache !) 1 r1/.hg/store/undo.backupfiles @@ -93,7 +93,7 @@ 1 r1/.hg/store/phaseroots 1 r1/.hg/store/requires 1 r1/.hg/store/undo - 1 r1/.hg/store/undo.backup.fncache (repofncache !) + 1 r1/.hg/store/undo.backup.fncache.bck (repofncache !) 
1 r1/.hg/store/undo.backupfiles $ nlinksdir r2/.hg/store @@ -252,8 +252,8 @@ 2 r4/.hg/store/requires 2 r4/.hg/store/undo 2 r4/.hg/store/undo.backupfiles - [24] r4/.hg/undo.backup.branch (re) - 2 r4/\.hg/undo\.backup\.dirstate (re) + [24] r4/.hg/undo.backup.branch.bck (re) + 2 r4/\.hg/undo\.backup\.dirstate.bck (re) 2 r4/.hg/undo.desc 2 r4/.hg/wcache/checkisexec (execbit !) 2 r4/.hg/wcache/checklink-target (symlink !) @@ -266,9 +266,9 @@ Update back to revision 12 in r4 should break hardlink of file f1 and f3: #if hardlink-whitelisted - $ nlinksdir r4/.hg/undo.backup.dirstate r4/.hg/dirstate + $ nlinksdir r4/.hg/undo.backup.dirstate.bck r4/.hg/dirstate 2 r4/.hg/dirstate - 2 r4/.hg/undo.backup.dirstate + 2 r4/.hg/undo.backup.dirstate.bck #endif @@ -305,8 +305,8 @@ 2 r4/.hg/store/requires 2 r4/.hg/store/undo 2 r4/.hg/store/undo.backupfiles - [23] r4/.hg/undo.backup.branch (re) - 2 r4/\.hg/undo\.backup\.dirstate (re) + [23] r4/.hg/undo.backup.branch.bck (re) + 2 r4/\.hg/undo\.backup\.dirstate.bck (re) 2 r4/.hg/undo.desc 2 r4/.hg/wcache/checkisexec (execbit !) 2 r4/.hg/wcache/checklink-target (symlink !) @@ -319,9 +319,9 @@ 2 r4/f3 (no-execbit !) 
#if hardlink-whitelisted - $ nlinksdir r4/.hg/undo.backup.dirstate r4/.hg/dirstate + $ nlinksdir r4/.hg/undo.backup.dirstate.bck r4/.hg/dirstate 1 r4/.hg/dirstate - 2 r4/.hg/undo.backup.dirstate + 2 r4/.hg/undo.backup.dirstate.bck #endif Test hardlinking outside hg: diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-help.t --- a/tests/test-help.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-help.t Thu Jun 22 11:36:37 2023 +0200 @@ -987,6 +987,11 @@ dump index data for a revlog debug-revlog-stats display statistics about revlogs in the store + debug::stable-tail-sort + display the stable-tail sort of the ancestors of a given node + debug::stable-tail-sort-leaps + display the leaps in the stable-tail sort of a node, one per + line debugancestor find the ancestor revision of two revisions in a given index debugantivirusrunning @@ -1780,7 +1785,10 @@ Extension Commands: - qclone clone main and patch repository at same time + admin::clone-bundles-clear remove existing clone bundle caches + admin::clone-bundles-refresh generate clone bundles according to the + configuration + qclone clone main and patch repository at same time Test unfound topic diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-hgweb-json.t --- a/tests/test-hgweb-json.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-hgweb-json.t Thu Jun 22 11:36:37 2023 +0200 @@ -777,6 +777,7 @@ { "bookmarks": [], "branch": "default", + "children": [], "date": [ 0.0, 0 @@ -809,6 +810,9 @@ { "bookmarks": [], "branch": "default", + "children": [ + "93a8ce14f89156426b7fa981af8042da53f03aa0" + ], "date": [ 0.0, 0 @@ -897,6 +901,9 @@ "bookmark1" ], "branch": "default", + "children": [ + "78896eb0e102174ce9278438a95e12543e4367a7" + ], "date": [ 0.0, 0 @@ -957,6 +964,9 @@ { "bookmarks": [], "branch": "test-branch", + "children": [ + "ed66c30e87eb65337c05a4229efaa5f1d5285a90" + ], "date": [ 0.0, 0 diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-hook.t --- a/tests/test-hook.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-hook.t 
Thu Jun 22 11:36:37 2023 +0200 @@ -465,7 +465,7 @@ phaseroots requires undo - undo.backup.fncache (repofncache !) + undo.backup.fncache.bck (repofncache !) undo.backupfiles @@ -985,13 +985,11 @@ Traceback (most recent call last): SyntaxError: * (glob) Traceback (most recent call last): - ImportError: No module named 'hgext_syntaxerror' (no-py36 !) - ModuleNotFoundError: No module named 'hgext_syntaxerror' (py36 !) + ModuleNotFoundError: No module named 'hgext_syntaxerror' Traceback (most recent call last): SyntaxError: * (glob) Traceback (most recent call last): - ImportError: No module named 'hgext_syntaxerror' (no-py36 !) - ModuleNotFoundError: No module named 'hgext_syntaxerror' (py36 !) + ModuleNotFoundError: No module named 'hgext_syntaxerror' Traceback (most recent call last): raise error.HookLoadError( (py38 !) mercurial.error.HookLoadError: preoutgoing.syntaxerror hook is invalid: import of "syntaxerror" failed @@ -1147,21 +1145,16 @@ $ hg --traceback commit -ma 2>&1 | egrep '^exception|ImportError|ModuleNotFoundError|Traceback|HookLoadError|abort' exception from first failed import attempt: Traceback (most recent call last): - ImportError: No module named 'somebogusmodule' (no-py36 !) - ModuleNotFoundError: No module named 'somebogusmodule' (py36 !) + ModuleNotFoundError: No module named 'somebogusmodule' exception from second failed import attempt: Traceback (most recent call last): - ImportError: No module named 'somebogusmodule' (no-py36 !) - ModuleNotFoundError: No module named 'somebogusmodule' (py36 !) + ModuleNotFoundError: No module named 'somebogusmodule' Traceback (most recent call last): - ImportError: No module named 'hgext_importfail' (no-py36 !) - ModuleNotFoundError: No module named 'hgext_importfail' (py36 !) + ModuleNotFoundError: No module named 'hgext_importfail' Traceback (most recent call last): - ImportError: No module named 'somebogusmodule' (no-py36 !) - ModuleNotFoundError: No module named 'somebogusmodule' (py36 !) 
+ ModuleNotFoundError: No module named 'somebogusmodule' Traceback (most recent call last): - ImportError: No module named 'hgext_importfail' (no-py36 !) - ModuleNotFoundError: No module named 'hgext_importfail' (py36 !) + ModuleNotFoundError: No module named 'hgext_importfail' Traceback (most recent call last): raise error.HookLoadError( (py38 !) mercurial.error.HookLoadError: precommit.importfail hook is invalid: import of "importfail" failed diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-http-bad-server.t --- a/tests/test-http-bad-server.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-http-bad-server.t Thu Jun 22 11:36:37 2023 +0200 @@ -130,10 +130,8 @@ readline(*) -> (*) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (py36 !) - sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (py36 !) - write(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (no-py36 !) - write(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (no-py36 !) 
+ sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) + sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) readline(~) -> (26) GET /?cmd=batch HTTP/1.1\r\n (glob) readline(*) -> (1?) Accept-Encoding* (glob) read limit reached; closing socket @@ -153,7 +151,7 @@ $ hg serve \ > --config badserver.close-after-recv-patterns="GET /\?cmd=batch,user-agent: mercurial/proto-1.0,GET /\?cmd=getbundle" \ - > --config badserver.close-after-recv-bytes=110,26,274 \ + > --config badserver.close-after-recv-bytes=110,26,281 \ > -p $HGPORT -d --pid-file=hg.pid -E error.log $ cat hg.pid > $DAEMON_PIDS $ hg clone http://localhost:$HGPORT/ clone @@ -172,10 +170,8 @@ readline(*) -> (*) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (py36 !) - sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (py36 !) - write(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (no-py36 !) 
- write(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (no-py36 !) + sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) + sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) readline(~) -> (26) GET /?cmd=batch HTTP/1.1\r\n (glob) readline(*) -> (27) Accept-Encoding: identity\r\n (glob) readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob) @@ -191,16 +187,14 @@ readline(*) -> (*) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py36 !) - sendall(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (py36 !) - write(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (no-py36 !) - write(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (no-py36 !) 
+ sendall(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n + sendall(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; readline(24 from ~) -> (*) GET /?cmd=getbundle HTTP* (glob) read limit reached; closing socket readline(~) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n - readline(274 from *) -> (27) Accept-Encoding: identity\r\n (glob) - readline(247 from *) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob) - readline(218 from *) -> (218) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtag (glob) + readline(281 from *) -> (27) Accept-Encoding: identity\r\n (glob) + readline(254 from *) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob) + readline(225 from *) -> (225) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtag (glob) read limit reached; closing socket $ rm -f error.log @@ -228,10 +222,8 @@ readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (py36 !) - sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx httppostargs known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (py36 !) 
- write(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (no-py36 !) - write(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx httppostargs known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (no-py36 !) + sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) + sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx httppostargs known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) readline(~) -> (27) POST /?cmd=batch HTTP/1.1\r\n (glob) readline(*) -> (27) Accept-Encoding: identity\r\n (glob) readline(*) -> (41) content-type: application/mercurial-0.1\r\n (glob) @@ -256,7 +248,6 @@ Traceback (most recent call last): Exception: connection closed after receiving N bytes - write(126) -> HTTP/1.1 500 Internal Server Error\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nTransfer-Encoding: chunked\r\n\r\n (no-py36 !) $ rm -f error.log @@ -282,14 +273,12 @@ readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(1 from 160) -> (0) H (py36 !) - write(1 from 160) -> (0) H (no-py36 !) 
+ sendall(1 from 160) -> (0) H write limit reached; closing socket $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=capabilities': (glob) Traceback (most recent call last): Exception: connection closed after sending N bytes - write(286) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\nHTTP/1.1 500 Internal Server Error\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nTransfer-Encoding: chunked\r\n\r\n (glob) (no-py36 !) $ rm -f error.log @@ -315,10 +304,8 @@ readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (py36 !) - sendall(20 from *) -> (0) batch branchmap bund (glob) (py36 !) - write(160) -> (20) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (no-py36 !) - write(20 from *) -> (0) batch branchmap bund (glob) (no-py36 !) + sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) + sendall(20 from *) -> (0) batch branchmap bund (glob) write limit reached; closing socket $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=capabilities': (glob) Traceback (most recent call last): @@ -354,10 +341,8 @@ readline(*) -> (2?) 
host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (py36 !) - sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (py36 !) - write(160) -> (568) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (no-py36 !) - write(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (no-py36 !) + sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) + sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) readline(~) -> (26) GET /?cmd=batch HTTP/1.1\r\n readline(*) -> (27) Accept-Encoding: identity\r\n (glob) readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob) @@ -367,14 +352,12 @@ readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(118 from 159) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: applicat (py36 !) 
- write(118 from 159) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: applicat (no-py36 !) + sendall(118 from 159) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: applicat write limit reached; closing socket $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=batch': (glob) Traceback (most recent call last): Exception: connection closed after sending N bytes - write(285) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\nHTTP/1.1 500 Internal Server Error\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nTransfer-Encoding: chunked\r\n\r\n (no-py36 !) $ rm -f error.log @@ -400,10 +383,8 @@ readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (py36 !) - sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (py36 !) - write(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (no-py36 !) - write(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (no-py36 !) 
+ sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) + sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) readline(~) -> (26) GET /?cmd=batch HTTP/1.1\r\n readline(*) -> (27) Accept-Encoding: identity\r\n (glob) readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob) @@ -413,10 +394,8 @@ readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py36 !) - sendall(24 from 42) -> (0) 96ee1d7354c4ad7372047672 (py36 !) - write(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (no-py36 !) - write(24 from 42) -> (0) 96ee1d7354c4ad7372047672 (no-py36 !) + sendall(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n + sendall(24 from 42) -> (0) 96ee1d7354c4ad7372047672 write limit reached; closing socket $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=batch': (glob) Traceback (most recent call last): @@ -453,10 +432,8 @@ readline(*) -> (2?) 
host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (py36 !) - sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (py36 !) - write(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (no-py36 !) - write(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (no-py36 !) + sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) + sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) readline(~) -> (26) GET /?cmd=batch HTTP/1.1\r\n readline(*) -> (27) Accept-Encoding: identity\r\n (glob) readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob) @@ -466,27 +443,23 @@ readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py36 !) 
- sendall(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (py36 !) - write(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (no-py36 !) - write(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (no-py36 !) + sendall(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n + sendall(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; readline(~) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n readline(*) -> (27) Accept-Encoding: identity\r\n (glob) readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob) - readline(*) -> (440) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n (glob) + readline(*) -> (447) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n (glob) readline(*) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob) readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob) readline(*) -> (2?) 
host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(129 from 167) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercuri (py36 !) - write(129 from 167) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercuri (no-py36 !) + sendall(129 from 167) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercuri write limit reached; closing socket $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) Traceback (most recent call last): Exception: connection closed after sending N bytes - write(293) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\nHTTP/1.1 500 Internal Server Error\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nTransfer-Encoding: chunked\r\n\r\n (no-py36 !) 
$ rm -f error.log @@ -505,7 +478,6 @@ $ killdaemons.py $DAEMON_PIDS -#if py36 $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -6 sendall(162 from 167) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunke write limit reached; closing socket @@ -513,19 +485,6 @@ Traceback (most recent call last): Exception: connection closed after sending N bytes - -#else - $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -7 - write(41) -> Content-Type: application/mercurial-0.2\r\n - write(25 from 28) -> (0) Transfer-Encoding: chunke - write limit reached; closing socket - $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) - Traceback (most recent call last): - Exception: connection closed after sending N bytes - write(293) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\nHTTP/1.1 500 Internal Server Error\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nTransfer-Encoding: chunked\r\n\r\n - -#endif - $ rm -f error.log Server sends empty HTTP body for getbundle @@ -551,10 +510,8 @@ readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (py36 !) - sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (py36 !) 
- write(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (no-py36 !) - write(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (no-py36 !) + sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) + sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) readline(~) -> (26) GET /?cmd=batch HTTP/1.1\r\n readline(*) -> (27) Accept-Encoding: identity\r\n (glob) readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob) @@ -564,27 +521,23 @@ readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py36 !) - sendall(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (py36 !) - write(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (no-py36 !) - write(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (no-py36 !) 
+ sendall(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n + sendall(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; readline(~) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n readline(*) -> (27) Accept-Encoding: identity\r\n (glob) readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob) - readline(*) -> (440) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n (glob) + readline(*) -> (447) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n (glob) readline(*) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob) readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob) readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(167 from 167) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py36 !) 
- write(167 from 167) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (no-py36 !) + sendall(167 from 167) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n write limit reached; closing socket $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) Traceback (most recent call last): Exception: connection closed after sending N bytes - write(293) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\nHTTP/1.1 500 Internal Server Error\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nTransfer-Encoding: chunked\r\n\r\n (no-py36 !) $ rm -f error.log @@ -611,10 +564,8 @@ readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (py36 !) - sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (py36 !) - write(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) (no-py36 !) - write(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) (no-py36 !) 
+ sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: *\r\n\r\n (glob) + sendall(*) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=* unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (glob) readline(~) -> (26) GET /?cmd=batch HTTP/1.1\r\n readline(*) -> (27) Accept-Encoding: identity\r\n (glob) readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob) @@ -624,23 +575,21 @@ readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py36 !) - sendall(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (py36 !) - write(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (no-py36 !) 
+ sendall(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n + sendall(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; readline(~) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n readline(*) -> (27) Accept-Encoding: identity\r\n (glob) readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob) - readline(*) -> (440) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n (glob) + readline(*) -> (447) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n (glob) readline(*) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob) readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob) readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob) readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob) readline(*) -> (2) \r\n (glob) - sendall(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py36 !) - sendall(6) -> 1\\r\\n\x04\\r\\n (esc) (py36 !) - sendall(9) -> 4\r\nnone\r\n (py36 !) 
- sendall(9 from 9) -> (0) 4\r\nHG20\r\n (py36 !) - write(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (no-py36 !) + sendall(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n + sendall(6) -> 1\\r\\n\x04\\r\\n (esc) + sendall(9) -> 4\r\nnone\r\n + sendall(9 from 9) -> (0) 4\r\nHG20\r\n write limit reached; closing socket $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) Traceback (most recent call last): @@ -665,7 +614,6 @@ $ killdaemons.py $DAEMON_PIDS -#if py36 $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -9 sendall(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n sendall(6) -> 1\\r\\n\x04\\r\\n (esc) @@ -676,21 +624,6 @@ Traceback (most recent call last): Exception: connection closed after sending N bytes - -#else - $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -11 - readline(~) -> (2) \r\n - write(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n - write(6) -> 1\\r\\n\x04\\r\\n (esc) - write(9) -> 4\r\nnone\r\n - write(6 from 9) -> (0) 4\r\nHG2 - write limit reached; closing socket - $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) - Traceback (most recent call last): - Exception: connection closed after sending N bytes - -#endif - $ rm -f error.log Server sends incomplete bundle2 stream params length @@ -709,7 +642,6 @@ $ killdaemons.py $DAEMON_PIDS -#if py36 $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -10 sendall(167) -> HTTP/1.1 200 Script output 
follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n sendall(6) -> 1\\r\\n\x04\\r\\n (esc) @@ -721,23 +653,6 @@ Traceback (most recent call last): Exception: connection closed after sending N bytes - -#else - $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -12 - readline(~) -> (2) \r\n - write(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n - write(41) -> Content-Type: application/mercurial-0.2\r\n - write(6) -> 1\\r\\n\x04\\r\\n (esc) - write(9) -> 4\r\nnone\r\n - write(9) -> 4\r\nHG20\r\n - write(6 from 9) -> (0) 4\\r\\n\x00\x00\x00 (esc) - write limit reached; closing socket - $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) - Traceback (most recent call last): - Exception: connection closed after sending N bytes - -#endif - $ rm -f error.log Servers stops after bundle2 stream params header @@ -756,7 +671,6 @@ $ killdaemons.py $DAEMON_PIDS -#if py36 $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -10 sendall(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n sendall(6) -> 1\\r\\n\x04\\r\\n (esc) @@ -768,23 +682,6 @@ Traceback (most recent call last): Exception: connection closed after sending N bytes - -#else - $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -12 - readline(~) -> (2) \r\n - write(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n - write(41) -> Content-Type: application/mercurial-0.2\r\n - write(6) -> 1\\r\\n\x04\\r\\n (esc) - write(9) -> 4\r\nnone\r\n - write(9) -> 4\r\nHG20\r\n - write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00\x00\\r\\n 
(esc) - write limit reached; closing socket - $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) - Traceback (most recent call last): - Exception: connection closed after sending N bytes - -#endif - $ rm -f error.log Server stops sending after bundle2 part header length @@ -803,7 +700,6 @@ $ killdaemons.py $DAEMON_PIDS -#if py36 $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -11 sendall(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n sendall(6) -> 1\\r\\n\x04\\r\\n (esc) @@ -816,32 +712,13 @@ Traceback (most recent call last): Exception: connection closed after sending N bytes - -#else - - $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -13 - readline(~) -> (2) \r\n - write(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n - write(41) -> Content-Type: application/mercurial-0.2\r\n - write(6) -> 1\\r\\n\x04\\r\\n (esc) - write(9) -> 4\r\nnone\r\n - write(9) -> 4\r\nHG20\r\n - write(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00)\\r\\n (esc) - write limit reached; closing socket - $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) - Traceback (most recent call last): - Exception: connection closed after sending N bytes - -#endif - $ rm -f error.log Server stops sending after bundle2 part header ---------------------------------------------- $ hg serve \ - > --config badserver.close-after-send-patterns="version02nbchanges1\\r\\n" \ + > --config badserver.close-after-send-patterns="version03nbchanges1\\r\\n" \ > -p $HGPORT -d --pid-file=hg.pid -E error.log $ cat hg.pid > $DAEMON_PIDS @@ -856,7 +733,6 @@ $ killdaemons.py $DAEMON_PIDS -#if py36 $ "$PYTHON" $TESTDIR/filtertraceback.py 
< error.log | tail -12 sendall(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n sendall(6) -> 1\\r\\n\x04\\r\\n (esc) @@ -864,38 +740,19 @@ sendall(9) -> 4\r\nHG20\r\n sendall(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) sendall(9) -> 4\\r\\n\x00\x00\x00)\\r\\n (esc) - sendall(47 from 47) -> (0) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc) + sendall(47 from 47) -> (0) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version03nbchanges1\\r\\n (esc) write limit reached; closing socket $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) Traceback (most recent call last): Exception: connection closed after sending N bytes - -#else - $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -14 - readline(~) -> (2) \r\n - write(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n - write(41) -> Content-Type: application/mercurial-0.2\r\n - write(6) -> 1\\r\\n\x04\\r\\n (esc) - write(9) -> 4\r\nnone\r\n - write(9) -> 4\r\nHG20\r\n - write(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00)\\r\\n (esc) - write(47 from 47) -> (0) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc) - write limit reached; closing socket - $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) - Traceback (most recent call last): - Exception: connection closed after sending N bytes - -#endif - $ rm -f error.log Server stops after bundle2 part payload chunk size -------------------------------------------------- $ hg serve \ - > --config badserver.close-after-send-patterns='1d2\r\n.......' \ + > --config badserver.close-after-send-patterns='1dc\r\n.......' 
\ > -p $HGPORT -d --pid-file=hg.pid -E error.log $ cat hg.pid > $DAEMON_PIDS @@ -910,7 +767,6 @@ $ killdaemons.py $DAEMON_PIDS -#if py36 $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -14 sendall(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n sendall(6) -> 1\\r\\n\x04\\r\\n (esc) @@ -918,41 +774,21 @@ sendall(9) -> 4\r\nHG20\r\n sendall(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) sendall(9) -> 4\\r\\n\x00\x00\x00)\\r\\n (esc) - sendall(47) -> 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc) - sendall(9) -> 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc) - sendall(12 from 473) -> (0) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1d (esc) + sendall(47) -> 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version03nbchanges1\\r\\n (esc) + sendall(9) -> 4\\r\\n\x00\x00\x01\xdc\\r\\n (esc) + sendall(12 from 483) -> (0) 1dc\\r\\n\x00\x00\x00\xb4\x96\xee\x1d (esc) write limit reached; closing socket $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) Traceback (most recent call last): Exception: connection closed after sending N bytes - -#else - $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -15 - write(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n - write(28) -> Transfer-Encoding: chunked\r\n - write(6) -> 1\\r\\n\x04\\r\\n (esc) - write(9) -> 4\r\nnone\r\n - write(9) -> 4\r\nHG20\r\n - write(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00)\\r\\n (esc) - write(47) -> 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc) - write(12 from 473) -> (0) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1d (esc) - write limit reached; closing 
socket - $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) - Traceback (most recent call last): - Exception: connection closed after sending N bytes - -#endif - $ rm -f error.log Server stops sending in middle of bundle2 payload chunk ------------------------------------------------------- $ hg serve \ - > --config badserver.close-after-send-patterns=':jL\0\0\x00\0\0\0\0\0\r\n' \ + > --config badserver.close-after-send-patterns=':jL\0\0\x00\0\0\0\0\0\0\0\r\n' \ > -p $HGPORT -d --pid-file=hg.pid -E error.log $ cat hg.pid > $DAEMON_PIDS @@ -967,7 +803,6 @@ $ killdaemons.py $DAEMON_PIDS -#if py36 $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -14 sendall(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n sendall(6) -> 1\\r\\n\x04\\r\\n (esc) @@ -975,35 +810,14 @@ sendall(9) -> 4\r\nHG20\r\n sendall(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) sendall(9) -> 4\\r\\n\x00\x00\x00)\\r\\n (esc) - sendall(47) -> 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc) - sendall(9) -> 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc) - sendall(473 from 473) -> (0) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version03nbchanges1\\r\\n (esc) + sendall(9) -> 4\\r\\n\x00\x00\x01\xdc\\r\\n (esc) + sendall(483 from 483) -> (0) 
1dc\\r\\n\x00\x00\x00\xb4\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa3j=\xf4\xde8\x8f (2) \r\n - write(167) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n - write(41) -> Content-Type: application/mercurial-0.2\r\n - write(6) -> 1\\r\\n\x04\\r\\n (esc) - write(9) -> 4\r\nnone\r\n - write(9) -> 4\r\nHG20\r\n - write(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00)\\r\\n (esc) - write(47) -> 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc) - write(473 from 473) -> (0) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f 1\\r\\n\x04\\r\\n (esc) sendall(9) -> 4\r\nnone\r\n sendall(9) -> 4\r\nHG20\r\n sendall(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) sendall(9) -> 4\\r\\n\x00\x00\x00)\\r\\n (esc) - sendall(47) -> 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc) - sendall(9) -> 
4\\r\\n\x00\x00\x01\xd2\\r\\n (esc) - sendall(473) -> 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version03nbchanges1\\r\\n (esc) + sendall(9) -> 4\\r\\n\x00\x00\x01\xdc\\r\\n (esc) + sendall(483) -> 1dc\\r\\n\x00\x00\x00\xb4\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa3j=\xf4\xde8\x8f 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) sendall(9) -> 4\\r\\n\x00\x00\x00 \\r\\n (esc) sendall(13 from 38) -> (0) 20\\r\\n\x08LISTKEYS (esc) @@ -1045,28 +858,6 @@ Traceback (most recent call last): Exception: connection closed after sending N bytes - -#else - $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -17 - write(2) -> \r\n - write(6) -> 1\\r\\n\x04\\r\\n (esc) - write(9) -> 4\r\nnone\r\n - write(9) -> 4\r\nHG20\r\n - write(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00)\\r\\n (esc) - write(47) -> 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc) - write(473) -> 
1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00 \\r\\n (esc) - write(13 from 38) -> (0) 20\\r\\n\x08LISTKEYS (esc) - write limit reached; closing socket - $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) - Traceback (most recent call last): - Exception: connection closed after sending N bytes - -#endif - $ rm -f error.log Server stops sending after 0 part bundle part header (indicating end of bundle2 payload) @@ -1091,13 +882,12 @@ $ killdaemons.py $DAEMON_PIDS -#if py36 $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -20 sendall(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) sendall(9) -> 4\\r\\n\x00\x00\x00)\\r\\n (esc) - sendall(47) -> 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc) - sendall(9) -> 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc) - sendall(473) -> 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 
\x01version03nbchanges1\\r\\n (esc) + sendall(9) -> 4\\r\\n\x00\x00\x01\xdc\\r\\n (esc) + sendall(483) -> 1dc\\r\\n\x00\x00\x00\xb4\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa3j=\xf4\xde8\x8f 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) sendall(9) -> 4\\r\\n\x00\x00\x00 \\r\\n (esc) sendall(38) -> 20\\r\\n\x08LISTKEYS\x00\x00\x00\x01\x01\x00 \x06namespacephases\\r\\n (esc) @@ -1113,32 +903,6 @@ Traceback (most recent call last): Exception: connection closed after sending N bytes - -#else - $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -21 - write(9) -> 4\r\nHG20\r\n - write(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00)\\r\\n (esc) - write(47) -> 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc) - write(473) -> 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00 \\r\\n (esc) - write(38) -> 20\\r\\n\x08LISTKEYS\x00\x00\x00\x01\x01\x00 \x06namespacephases\\r\\n (esc) - 
write(9) -> 4\\r\\n\x00\x00\x00:\\r\\n (esc) - write(64) -> 3a\r\n96ee1d7354c4ad7372047672c36a1f561e3a6a4c 1\npublishing True\r\n - write(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00#\\r\\n (esc) - write(41) -> 23\\r\\n\x08LISTKEYS\x00\x00\x00\x02\x01\x00 namespacebookmarks\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write limit reached; closing socket - $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) - Traceback (most recent call last): - Exception: connection closed after sending N bytes - -#endif - $ rm -f error.log $ rm -rf clone @@ -1162,13 +926,12 @@ $ killdaemons.py $DAEMON_PIDS -#if py36 $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -21 sendall(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) sendall(9) -> 4\\r\\n\x00\x00\x00)\\r\\n (esc) - sendall(47) -> 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc) - sendall(9) -> 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc) - sendall(473) -> 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version03nbchanges1\\r\\n (esc) + sendall(9) -> 4\\r\\n\x00\x00\x01\xdc\\r\\n (esc) + sendall(483) -> 
1dc\\r\\n\x00\x00\x00\xb4\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa3j=\xf4\xde8\x8f 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) sendall(9) -> 4\\r\\n\x00\x00\x00 \\r\\n (esc) sendall(38) -> 20\\r\\n\x08LISTKEYS\x00\x00\x00\x01\x01\x00 \x06namespacephases\\r\\n (esc) @@ -1185,32 +948,5 @@ Traceback (most recent call last): Exception: connection closed after sending N bytes - -#else - $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -22 - write(9) -> 4\r\nHG20\r\n - write(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00)\\r\\n (esc) - write(47) -> 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc) - write(473) -> 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00 \\r\\n (esc) - write(38) -> 20\\r\\n\x08LISTKEYS\x00\x00\x00\x01\x01\x00 \x06namespacephases\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00:\\r\\n (esc) - write(64) -> 3a\r\n96ee1d7354c4ad7372047672c36a1f561e3a6a4c 
1\npublishing True\r\n - write(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00#\\r\\n (esc) - write(41) -> 23\\r\\n\x08LISTKEYS\x00\x00\x00\x02\x01\x00 namespacebookmarks\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(9) -> 4\\r\\n\x00\x00\x00\x00\\r\\n (esc) - write(3 from 5) -> (0) 0\r\n - write limit reached; closing socket - $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob) - Traceback (most recent call last): - Exception: connection closed after sending N bytes - -#endif - $ rm -f error.log $ rm -rf clone diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-http.t --- a/tests/test-http.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-http.t Thu Jun 22 11:36:37 2023 +0200 @@ -341,20 +341,20 @@ list of changesets: 7f4e523d01f2cc3765ac8934da3d14db775ff872 bundle2-output-bundle: "HG20", 5 parts total - bundle2-output-part: "replycaps" 207 bytes payload + bundle2-output-part: "replycaps" 210 bytes payload bundle2-output-part: "check:phases" 24 bytes payload bundle2-output-part: "check:updated-heads" streamed payload bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload bundle2-output-part: "phase-heads" 24 bytes payload sending unbundle command - sending 1023 bytes + sending 1036 bytes devel-peer-request: POST http://localhost:$HGPORT2/?cmd=unbundle - devel-peer-request: Content-length 1023 + devel-peer-request: Content-length 1036 devel-peer-request: Content-type application/mercurial-0.1 devel-peer-request: Vary X-HgArg-1,X-HgProto-1 devel-peer-request: X-hgproto-1 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull devel-peer-request: 16 bytes of commands arguments in headers - devel-peer-request: 1023 bytes of data + devel-peer-request: 1036 bytes of data devel-peer-request: finished in *.???? 
seconds (200) (glob) bundle2-input-bundle: no-transaction bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-infinitepush-bundlestore.t --- a/tests/test-infinitepush-bundlestore.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-infinitepush-bundlestore.t Thu Jun 22 11:36:37 2023 +0200 @@ -17,6 +17,10 @@ > hg ci -m "$1" > } $ hg init repo + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ cd repo Check that we can send a scratch on the server and it does not show there in @@ -24,22 +28,82 @@ $ setupserver $ cd .. $ hg clone ssh://user@dummy/repo client -q + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ cd client $ mkcommit initialcommit + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. 
This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ hg push -r . + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pushing to ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. searching for changes remote: adding changesets remote: adding manifests remote: adding file changes remote: added 1 changesets with 1 changes to 1 files $ mkcommit scratchcommit + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ hg push -r . -B scratch/mybranch + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. 
pushing to ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. searching for changes + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. remote: pushing 1 commit: remote: 20759b6926ce scratchcommit $ hg log -G + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. @ changeset: 1:20759b6926ce | bookmark: scratch/mybranch | tag: tip @@ -53,6 +117,10 @@ summary: initialcommit $ hg log -G -R ../repo + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. o changeset: 0:67145f466344 tag: tip user: test @@ -76,10 +144,46 @@ $ cd .. $ hg clone ssh://user@dummy/repo client2 -q + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. 
+ IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ cd client2 $ hg pull -B scratch/mybranch --traceback + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pulling from ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. searching for changes + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. 
This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. adding changesets adding manifests adding file changes @@ -87,6 +191,10 @@ new changesets 20759b6926ce (1 drafts) (run 'hg update' to get a working copy) $ hg log -G + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. o changeset: 1:20759b6926ce | bookmark: scratch/mybranch | tag: tip @@ -105,17 +213,45 @@ $ cd client $ hg up 0 + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. 0 files updated, 0 files merged, 1 files removed, 0 files unresolved $ mkcommit newcommit + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. created new head $ hg push -r . + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pushing to ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. 
This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. searching for changes remote: adding changesets remote: adding manifests remote: adding file changes remote: added 1 changesets with 1 changes to 1 files $ hg log -G -T '{desc} {phase} {bookmarks}' + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. @ newcommit public | | o scratchcommit draft scratch/mybranch @@ -126,14 +262,46 @@ Push to scratch branch $ cd ../client2 $ hg up -q scratch/mybranch + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ mkcommit 'new scratch commit' + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ hg push -r . -B scratch/mybranch + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. 
This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pushing to ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. searching for changes + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. remote: pushing 2 commits: remote: 20759b6926ce scratchcommit remote: 1de1d7d92f89 new scratch commit $ hg log -G -T '{desc} {phase} {bookmarks}' + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. @ new scratch commit draft scratch/mybranch | o scratchcommit draft @@ -149,12 +317,32 @@ Push scratch bookmark with no new revs $ hg push -r . -B scratch/anotherbranch + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pushing to ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. 
This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. searching for changes + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. remote: pushing 2 commits: remote: 20759b6926ce scratchcommit remote: 1de1d7d92f89 new scratch commit $ hg log -G -T '{desc} {phase} {bookmarks}' + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. @ new scratch commit draft scratch/anotherbranch scratch/mybranch | o scratchcommit draft @@ -168,10 +356,38 @@ Pull scratch and non-scratch bookmark at the same time $ hg -R ../repo book newbook + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ cd ../client $ hg pull -B newbook -B scratch/mybranch --traceback + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. 
This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pulling from ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. searching for changes + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. adding changesets adding manifests adding file changes @@ -180,6 +396,10 @@ new changesets 1de1d7d92f89 (1 drafts) (run 'hg update' to get a working copy) $ hg log -G -T '{desc} {phase} {bookmarks}' + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. o new scratch commit draft scratch/mybranch | | @ newcommit public @@ -192,8 +412,24 @@ Push scratch revision without bookmark with --bundle-store $ hg up -q tip + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. 
This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ mkcommit scratchcommitnobook + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ hg log -G -T '{desc} {phase} {bookmarks}' + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. @ scratchcommitnobook draft | o new scratch commit draft scratch/mybranch @@ -205,13 +441,33 @@ o initialcommit public $ hg push -r . --bundle-store + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pushing to ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. 
searching for changes + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. remote: pushing 3 commits: remote: 20759b6926ce scratchcommit remote: 1de1d7d92f89 new scratch commit remote: 2b5d271c7e0d scratchcommitnobook $ hg -R ../repo log -G -T '{desc} {phase}' + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. o newcommit public | o initialcommit public @@ -224,15 +480,43 @@ Test with pushrebase $ mkcommit scratchcommitwithpushrebase + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ hg push -r . -B scratch/mybranch + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pushing to ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. 
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
  remote: pushing 4 commits:
  remote:     20759b6926ce  scratchcommit
  remote:     1de1d7d92f89  new scratch commit
  remote:     2b5d271c7e0d  scratchcommitnobook
  remote:     d8c4f54ab678  scratchcommitwithpushrebase
  $ hg -R ../repo log -G -T '{desc} {phase}'
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  o  newcommit public
  |
  o  initialcommit public
@@ -245,9 +529,33 @@
Change the order of pushrebase and infinitepush
  $ mkcommit scratchcommitwithpushrebase2
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  $ hg push -r . -B scratch/mybranch
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  pushing to ssh://user@dummy/repo
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
  remote: pushing 5 commits:
  remote:     20759b6926ce  scratchcommit
  remote:     1de1d7d92f89  new scratch commit
@@ -255,6 +563,10 @@
  remote:     d8c4f54ab678  scratchcommitwithpushrebase
  remote:     6c10d49fe927  scratchcommitwithpushrebase2
  $ hg -R ../repo log -G -T '{desc} {phase}'
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  o  newcommit public
  |
  o  initialcommit public
@@ -269,6 +581,10 @@
Non-fastforward scratch bookmark push
  $ hg log -GT "{rev}:{node} {desc}\n"
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  @  6:6c10d49fe92751666c40263f96721b918170d3da scratchcommitwithpushrebase2
  |
  o  5:d8c4f54ab678fd67cb90bb3f272a2dc6513a59a7 scratchcommitwithpushrebase
@@ -284,12 +600,28 @@
  o  0:67145f4663446a9580364f70034fea6e21293b6f initialcommit
  $ hg up 6c10d49fe927
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo 1 > amend
  $ hg add amend
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  $ hg ci --amend -m 'scratch amended commit'
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  saved backup bundle to $TESTTMP/client/.hg/strip-backup/6c10d49fe927-c99ffec5-amend.hg
  $ hg log -G -T '{desc} {phase} {bookmarks}'
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  @  scratch amended commit draft scratch/mybranch
  |
  o  scratchcommitwithpushrebase draft
@@ -309,8 +641,24 @@
     scratch/anotherbranch 1de1d7d92f8965260391d0513fe8a8d5973d3042
     scratch/mybranch 6c10d49fe92751666c40263f96721b918170d3da
  $ hg push -r . -B scratch/mybranch
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  pushing to ssh://user@dummy/repo
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
  remote: pushing 5 commits:
  remote:     20759b6926ce  scratchcommit
  remote:     1de1d7d92f89  new scratch commit
@@ -321,6 +669,10 @@
     scratch/anotherbranch 1de1d7d92f8965260391d0513fe8a8d5973d3042
     scratch/mybranch 8872775dd97a750e1533dc1fbbca665644b32547
  $ hg log -G -T '{desc} {phase} {bookmarks}'
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  @  scratch amended commit draft scratch/mybranch
  |
  o  scratchcommitwithpushrebase draft
@@ -343,22 +695,54 @@
Checkout last non-scrath commit
  $ hg up 91894e11e8255
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  1 files updated, 0 files merged, 6 files removed, 0 files unresolved
  $ mkcommit peercommit
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
Use --force because this push creates new head
  $ hg push peer -r . -f
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  pushing to ssh://user@dummy/client2
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 2 changesets with 2 changes to 2 files (+1 heads)
  $ hg -R ../repo log -G -T '{desc} {phase} {bookmarks}'
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  o  newcommit public
  |
  o  initialcommit public
  $ hg -R ../client2 log -G -T '{desc} {phase} {bookmarks}'
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  o  peercommit public
  |
  o  newcommit public
diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-infinitepush-ci.t
--- a/tests/test-infinitepush-ci.t	Thu Jun 22 11:18:47 2023 +0200
+++ b/tests/test-infinitepush-ci.t	Thu Jun 22 11:36:37 2023 +0200
@@ -19,14 +19,40 @@
  $ echo "pushtobundlestore = True" >> .hg/hgrc
  $ echo "[extensions]" >> .hg/hgrc
  $ echo "infinitepush=" >> .hg/hgrc
+  $ echo "[infinitepush]" >> .hg/hgrc
+  $ echo "deprecation-abort=no" >> .hg/hgrc
  $ echo initialcommit > initialcommit
  $ hg ci -Aqm "initialcommit"
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact (chg !)
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be (chg !)
+  unused and barring learning of users of this functionality, we drop this (chg !)
+  extension in Mercurial 6.6. (chg !)
  $ hg phase --public .
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  $ cd ..
  $ hg clone repo client -q
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  $ hg clone repo client2 -q
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  $ hg clone ssh://user@dummy/repo client3 -q
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
  $ cd client
Pushing a new commit from the client to the server
@@ -42,8 +68,16 @@
  $ hg push
  pushing to $TESTTMP/repo
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
  storing changesets on the bundlestore
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  pushing 1 commit:
      6cb0989601f1  added a
@@ -74,6 +108,10 @@
------------------------------------------------------------------
  $ hg glog
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  @  0:67145f466344 initialcommit
     public
@@ -81,6 +119,10 @@
--------------------------------------------
  $ hg unbundle .hg/scratchbranches/filebundlestore/3b/41/3b414252ff8acab801318445d88ff48faf4a28c3
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  adding changesets
  adding manifests
  adding file changes
@@ -89,6 +131,10 @@
  (run 'hg update' to get a working copy)
  $ hg glog
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  o  1:6cb0989601f1 added a
  |   public
  @  0:67145f466344 initialcommit
@@ -114,8 +160,16 @@
  $ hg push
  pushing to $TESTTMP/repo
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
  storing changesets on the bundlestore
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  pushing 2 commits:
      eaba929e866c  added b
      bf8a6e3011b3  added c
@@ -124,6 +178,10 @@
------------------------------------------------------
  $ hg glog -R ../repo
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  o  1:6cb0989601f1 added a
  |   public
  @  0:67145f466344 initialcommit
@@ -146,8 +204,16 @@
XXX: we should have pushed only the parts which are not in bundlestore
  $ hg push
  pushing to $TESTTMP/repo
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
  storing changesets on the bundlestore
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  pushing 4 commits:
      eaba929e866c  added b
      bf8a6e3011b3  added c
@@ -166,6 +232,10 @@
-----------------------------------------------------------------------
  $ hg incoming
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  comparing with $TESTTMP/repo
  searching for changes
  no changes found
@@ -173,6 +243,10 @@
  $ hg pull
  pulling from $TESTTMP/repo
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
  no changes found
@@ -184,6 +258,10 @@
  $ cd ../client2
  $ hg pull -r 6cb0989601f1
  pulling from $TESTTMP/repo
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
  adding changesets
  adding manifests
@@ -203,6 +281,10 @@
  $ hg pull -r b4e4bce660512ad3e71189e14588a70ac8e31fef
  pulling from $TESTTMP/repo
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  abort: unknown revision 'b4e4bce660512ad3e71189e14588a70ac8e31fef'
  [10]
  $ hg glog
@@ -221,6 +303,10 @@
  $ cd ../client3
  $ hg pull -r 6cb0989601f1
  pulling from ssh://user@dummy/repo
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
  searching for changes
  adding changesets
  adding manifests
@@ -240,13 +326,25 @@
XXX: we should support this
  $ hg pull -r b4e4bce660512
  pulling from ssh://user@dummy/repo
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
  abort: unknown revision 'b4e4bce660512'
  [255]
XXX: we should show better message when the pull is happening from bundlestore
  $ hg pull -r b4e4bce660512ad3e71189e14588a70ac8e31fef
  pulling from ssh://user@dummy/repo
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
  searching for changes
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
  adding changesets
  adding manifests
  adding file changes
@@ -288,8 +386,16 @@
  $ hg push
  pushing to $TESTTMP/repo
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
  storing changesets on the bundlestore
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  pushing 5 commits:
      eaba929e866c  added b
      bf8a6e3011b3  added c
@@ -315,6 +421,10 @@
  eaba929e866c59bc9a6aada5a9dd2f6990db83c0 280a46a259a268f0e740c81c5a7751bdbfaec85f
  $ hg unbundle .hg/scratchbranches/filebundlestore/28/0a/280a46a259a268f0e740c81c5a7751bdbfaec85f
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  adding changesets
  adding manifests
  adding file changes
@@ -323,6 +433,10 @@
  (run 'hg update' to get a working copy)
  $ hg glog
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  o  6:9b42578d4447 added f
  |   draft
  o  5:b4e4bce66051 added e
@@ -374,8 +488,16 @@
  $ hg push -f
  pushing to $TESTTMP/repo
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
  storing changesets on the bundlestore
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  pushing 1 commit:
      99949238d9ac  added f
@@ -399,6 +521,14 @@
  eaba929e866c59bc9a6aada5a9dd2f6990db83c0 280a46a259a268f0e740c81c5a7751bdbfaec85f
  $ hg glog
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact (chg !)
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be (chg !)
+  unused and barring learning of users of this functionality, we drop this (chg !)
+  extension in Mercurial 6.6. (chg !)
  o  6:9b42578d4447 added f
  |   draft
  o  5:b4e4bce66051 added e
@@ -415,6 +545,10 @@
     public
  $ hg unbundle .hg/scratchbranches/filebundlestore/09/0a/090a24fe63f31d3b4bee714447f835c8c362ff57
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  adding changesets
  adding manifests
  adding file changes
@@ -425,6 +559,10 @@
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg glog
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  o  7:99949238d9ac added f
  |   draft
  | o  5:b4e4bce66051 added e
diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-infinitepush.t
--- a/tests/test-infinitepush.t	Thu Jun 22 11:18:47 2023 +0200
+++ b/tests/test-infinitepush.t	Thu Jun 22 11:36:37 2023 +0200
@@ -13,35 +13,107 @@
  $ cp $HGRCPATH $TESTTMP/defaulthgrc
  $ setupcommon
  $ hg init repo
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  $ cd repo
  $ setupserver
  $ echo initialcommit > initialcommit
  $ hg ci -Aqm "initialcommit"
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  $ hg phase --public .
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  $ cd ..
  $ hg clone ssh://user@dummy/repo client -q
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
Create two heads. Push first head alone, then two heads together. Make sure that
multihead push works.
  $ cd client
  $ echo multihead1 > multihead1
  $ hg add multihead1
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  $ hg ci -m "multihead1"
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  $ hg up null
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  0 files updated, 0 files merged, 2 files removed, 0 files unresolved
  $ echo multihead2 > multihead2
  $ hg ci -Am "multihead2"
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  adding multihead2
  created new head
  $ hg push -r . --bundle-store
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  pushing to ssh://user@dummy/repo
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
  remote: pushing 1 commit:
  remote:     ee4802bf6864  multihead2
  $ hg push -r '1:2' --bundle-store
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  pushing to ssh://user@dummy/repo
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
  remote: pushing 2 commits:
  remote:     bc22f9a30a82  multihead1
  remote:     ee4802bf6864  multihead2
@@ -51,35 +123,123 @@
Create two new scratch bookmarks
  $ hg up 0
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  1 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ echo scratchfirstpart > scratchfirstpart
  $ hg ci -Am "scratchfirstpart"
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  adding scratchfirstpart
  created new head
  $ hg push -r . -B scratch/firstpart
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  pushing to ssh://user@dummy/repo
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
  remote: pushing 1 commit:
  remote:     176993b87e39  scratchfirstpart
  $ hg up 0
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ echo scratchsecondpart > scratchsecondpart
  $ hg ci -Am "scratchsecondpart"
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  adding scratchsecondpart
  created new head
  $ hg push -r . -B scratch/secondpart
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  pushing to ssh://user@dummy/repo
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
  searching for changes
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
  remote: pushing 1 commit:
  remote:     8db3891c220e  scratchsecondpart
Pull two bookmarks from the second client
  $ cd ..
  $ hg clone ssh://user@dummy/repo client2 -q
+  remote: IMPORTANT: if you use this extension, please contact
+  remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  remote: unused and barring learning of users of this functionality, we drop this
+  remote: extension in Mercurial 6.6.
+  IMPORTANT: if you use this extension, please contact
+  mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be
+  unused and barring learning of users of this functionality, we drop this
+  extension in Mercurial 6.6.
+ IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ cd client2 $ hg pull -B scratch/firstpart -B scratch/secondpart + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pulling from ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. searching for changes + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. 
+ remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. adding changesets adding manifests adding file changes @@ -90,28 +250,88 @@ new changesets * (glob) (run 'hg heads' to see heads, 'hg merge' to merge) $ hg log -r scratch/secondpart -T '{node}' + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. 8db3891c220e216f6da214e8254bd4371f55efca (no-eol) $ hg log -r scratch/firstpart -T '{node}' + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. 176993b87e39bd88d66a2cccadabe33f0b346339 (no-eol) Make two commits to the scratch branch $ echo testpullbycommithash1 > testpullbycommithash1 $ hg ci -Am "testpullbycommithash1" + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. adding testpullbycommithash1 created new head $ hg log -r '.' -T '{node}\n' > ../testpullbycommithash1 + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. 
$ echo testpullbycommithash2 > testpullbycommithash2 $ hg ci -Aqm "testpullbycommithash2" + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ hg push -r . -B scratch/mybranch -q + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. Create third client and pull by commit hash. Make sure testpullbycommithash2 has not fetched $ cd .. $ hg clone ssh://user@dummy/repo client3 -q + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. 
This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ cd client3 $ hg pull -r `cat ../testpullbycommithash1` + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pulling from ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. searching for changes + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. adding changesets adding manifests adding file changes @@ -119,6 +339,10 @@ new changesets 33910bfe6ffe (1 drafts) (run 'hg update' to get a working copy) $ hg log -G -T '{desc} {phase} {bookmarks}' + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. o testpullbycommithash1 draft | @ initialcommit public @@ -128,10 +352,30 @@ $ cd ../repo $ echo publiccommit > publiccommit $ hg ci -Aqm "publiccommit" + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. 
This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ hg phase --public . + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ cd ../client3 $ hg pull + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pulling from ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. searching for changes adding changesets adding manifests @@ -140,6 +384,10 @@ new changesets a79b6597f322 (run 'hg heads' to see heads, 'hg merge' to merge) $ hg log -G -T '{desc} {phase} {bookmarks} {node|short}' + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. o publiccommit public a79b6597f322 | | o testpullbycommithash1 draft 33910bfe6ffe @@ -147,18 +395,66 @@ @ initialcommit public 67145f466344 $ hg up a79b6597f322 + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. 
This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. 1 files updated, 0 files merged, 0 files removed, 0 files unresolved $ echo scratchontopofpublic > scratchontopofpublic $ hg ci -Aqm "scratchontopofpublic" + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. $ hg push -r . -B scratch/scratchontopofpublic + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pushing to ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. searching for changes + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. remote: pushing 1 commit: remote: c70aee6da07d scratchontopofpublic $ cd ../client2 $ hg pull -B scratch/scratchontopofpublic + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. 
This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. pulling from ssh://user@dummy/repo + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. searching for changes + remote: IMPORTANT: if you use this extension, please contact + remote: mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + remote: unused and barring learning of users of this functionality, we drop this + remote: extension in Mercurial 6.6. adding changesets adding manifests adding file changes @@ -169,4 +465,8 @@ new changesets a79b6597f322:c70aee6da07d (1 drafts) (run 'hg heads .' to see heads, 'hg merge' to merge) $ hg log -r scratch/scratchontopofpublic -T '{phase}' + IMPORTANT: if you use this extension, please contact + mercurial-devel@mercurial-scm.org IMMEDIATELY. This extension is believed to be + unused and barring learning of users of this functionality, we drop this + extension in Mercurial 6.6. 
draft (no-eol) diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-inherit-mode.t --- a/tests/test-inherit-mode.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-inherit-mode.t Thu Jun 22 11:36:37 2023 +0200 @@ -95,7 +95,7 @@ 00600 ./.hg/store/requires 00660 ./.hg/store/undo 00660 ./.hg/store/undo.backupfiles - 00660 ./.hg/undo.backup.branch + 00660 ./.hg/undo.backup.branch.bck 00660 ./.hg/undo.desc 00770 ./.hg/wcache/ 00711 ./.hg/wcache/checkisexec @@ -153,7 +153,7 @@ 00660 ../push/.hg/store/requires 00660 ../push/.hg/store/undo 00660 ../push/.hg/store/undo.backupfiles - 00660 ../push/.hg/undo.backup.branch + 00660 ../push/.hg/undo.backup.branch.bck 00660 ../push/.hg/undo.desc 00770 ../push/.hg/wcache/ diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-issue6528.t --- a/tests/test-issue6528.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-issue6528.t Thu Jun 22 11:36:37 2023 +0200 @@ -201,9 +201,9 @@ Dry-run the fix $ hg debug-repair-issue6528 --dry-run - found affected revision 1 for filelog 'data/D.txt.i' - found affected revision 1 for filelog 'data/b.txt.i' - found affected revision 3 for filelog 'data/b.txt.i' + found affected revision 1 for file 'D.txt' + found affected revision 1 for file 'b.txt' + found affected revision 3 for file 'b.txt' $ hg st M D.txt M b.txt @@ -220,9 +220,9 @@ Test the --paranoid option $ hg debug-repair-issue6528 --dry-run --paranoid - found affected revision 1 for filelog 'data/D.txt.i' - found affected revision 1 for filelog 'data/b.txt.i' - found affected revision 3 for filelog 'data/b.txt.i' + found affected revision 1 for file 'D.txt' + found affected revision 1 for file 'b.txt' + found affected revision 3 for file 'b.txt' $ hg st M D.txt M b.txt @@ -239,10 +239,10 @@ Run the fix $ hg debug-repair-issue6528 - found affected revision 1 for filelog 'data/D.txt.i' + found affected revision 1 for file 'D.txt' repaired revision 1 of 'filelog data/D.txt.i' - found affected revision 1 for filelog 'data/b.txt.i' - found affected revision 3 
for filelog 'data/b.txt.i' + found affected revision 1 for file 'b.txt' + found affected revision 3 for file 'b.txt' repaired revision 1 of 'filelog data/b.txt.i' repaired revision 3 of 'filelog data/b.txt.i' @@ -281,9 +281,9 @@ $ tar -xf - < "$TESTDIR"/bundles/issue6528.tar $ hg debug-repair-issue6528 --to-report $TESTTMP/report.txt - found affected revision 1 for filelog 'data/D.txt.i' - found affected revision 1 for filelog 'data/b.txt.i' - found affected revision 3 for filelog 'data/b.txt.i' + found affected revision 1 for file 'D.txt' + found affected revision 1 for file 'b.txt' + found affected revision 3 for file 'b.txt' $ cat $TESTTMP/report.txt 2a80419dfc31d7dfb308ac40f3f138282de7d73b D.txt a58b36ad6b6545195952793099613c2116f3563b,ea4f2f2463cca5b29ddf3461012b8ce5c6dac175 b.txt @@ -392,10 +392,10 @@ Run the fix on the non-inline revlog $ hg debug-repair-issue6528 - found affected revision 1 for filelog 'data/D.txt.i' + found affected revision 1 for file 'D.txt' repaired revision 1 of 'filelog data/D.txt.i' - found affected revision 1 for filelog 'data/b.txt.i' - found affected revision 3 for filelog 'data/b.txt.i' + found affected revision 1 for file 'b.txt' + found affected revision 3 for file 'b.txt' repaired revision 1 of 'filelog data/b.txt.i' repaired revision 3 of 'filelog data/b.txt.i' @@ -556,9 +556,9 @@ And that the repair command find issue to fix. $ hg debug-repair-issue6528 --dry-run - found affected revision 1 for filelog 'data/D.txt.i' - found affected revision 1 for filelog 'data/b.txt.i' - found affected revision 3 for filelog 'data/b.txt.i' + found affected revision 1 for file 'D.txt' + found affected revision 1 for file 'b.txt' + found affected revision 3 for file 'b.txt' $ cd .. @@ -604,8 +604,8 @@ And that the repair command find issue to fix. 
$ hg debug-repair-issue6528 --dry-run - found affected revision 1 for filelog 'data/D.txt.i' - found affected revision 1 for filelog 'data/b.txt.i' - found affected revision 3 for filelog 'data/b.txt.i' + found affected revision 1 for file 'D.txt' + found affected revision 1 for file 'b.txt' + found affected revision 3 for file 'b.txt' $ cd .. diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-largefiles-cache.t --- a/tests/test-largefiles-cache.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-largefiles-cache.t Thu Jun 22 11:36:37 2023 +0200 @@ -184,7 +184,7 @@ $ find share_dst/.hg/largefiles/* | sort share_dst/.hg/largefiles/dirstate - share_dst/.hg/largefiles/undo.backup.dirstate + share_dst/.hg/largefiles/undo.backup.dirstate.bck $ find src/.hg/largefiles/* | egrep "(dirstate|$hash)" | sort src/.hg/largefiles/dirstate diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-largefiles.t --- a/tests/test-largefiles.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-largefiles.t Thu Jun 22 11:36:37 2023 +0200 @@ -1114,16 +1114,16 @@ all local changesets known remotely 6 changesets found uncompressed size of bundle content: - 1389 (changelog) - 1698 (manifests) - 254 .hglf/large1 - 564 .hglf/large3 - 572 .hglf/sub/large4 - 182 .hglf/sub2/large6 - 182 .hglf/sub2/large7 - 212 normal1 - 457 normal3 - 465 sub/normal4 + 1401 (changelog) + 1710 (manifests) + 256 .hglf/large1 + 570 .hglf/large3 + 578 .hglf/sub/large4 + 184 .hglf/sub2/large6 + 184 .hglf/sub2/large7 + 214 normal1 + 463 normal3 + 471 sub/normal4 adding changesets adding manifests adding file changes diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-lfs-serve-access.t --- a/tests/test-lfs-serve-access.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-lfs-serve-access.t Thu Jun 22 11:36:37 2023 +0200 @@ -66,7 +66,7 @@ $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 400 - (glob) $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob) $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 
200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) - $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Acheckheads%253Drelated%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) + $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&$USUAL_BUNDLE_CAPS$&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 400 - (glob) $ rm -f $TESTTMP/access.log $TESTTMP/errors.log @@ -165,7 +165,7 @@ $LOCALIP - - [$LOGDATE$] "POST /missing/objects/batch HTTP/1.1" 404 - (glob) $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=capabilities HTTP/1.1" 200 - (glob) $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) - $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=getbundle HTTP/1.1" 200 - 
x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Acheckheads%253Drelated%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) + $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&$USUAL_BUNDLE_CAPS$&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) $LOCALIP - - [$LOGDATE$] "POST /subdir/mount/point/.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob) $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point/.hg/lfs/objects/f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e HTTP/1.1" 200 - (glob) @@ -311,7 +311,7 @@ $ cat $TESTTMP/access.log $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob) $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) - $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Acheckheads%253Drelated%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 
comp=$USUAL_COMPRESSIONS$ partial-pull (glob) + $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&$USUAL_BUNDLE_CAPS$&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob) $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob) $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D392c05922088bacf8e68a6939b480017afbf245d x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) @@ -330,7 +330,7 @@ $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c HTTP/1.1" 422 - (glob) $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob) $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D392c05922088bacf8e68a6939b480017afbf245d x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) - $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Acheckheads%253Drelated%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=1&common=525251863cad618e55d483555f3d00a2ca99597e&heads=506bf3d83f78c54b89e81c6411adee19fdf02156+525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) + $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - 
x-hgarg-1:bookmarks=1&$USUAL_BUNDLE_CAPS$&cg=1&common=525251863cad618e55d483555f3d00a2ca99597e&heads=506bf3d83f78c54b89e81c6411adee19fdf02156+525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob) $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d HTTP/1.1" 500 - (glob) $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob) @@ -487,7 +487,7 @@ $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 401 - (glob) $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob) $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) - $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Acheckheads%253Drelated%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=506bf3d83f78c54b89e81c6411adee19fdf02156+525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) + $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&$USUAL_BUNDLE_CAPS$&cg=1&common=0000000000000000000000000000000000000000&heads=506bf3d83f78c54b89e81c6411adee19fdf02156+525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 401 - (glob) 
$LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob) $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d HTTP/1.1" 200 - (glob) diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-lfs-serve.t --- a/tests/test-lfs-serve.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-lfs-serve.t Thu Jun 22 11:36:37 2023 +0200 @@ -308,9 +308,14 @@ $ hg -R $TESTTMP/client4_pull pull http://localhost:$HGPORT pulling from http://localhost:$HGPORT/ requesting all changes - remote: abort: no common changegroup version - abort: pull failed on remote - [100] + adding changesets + adding manifests + adding file changes + transaction abort! + rollback completed + abort: missing processor for flag '0x2000' + (the lfs extension must be enabled) + [50] $ hg debugrequires -R $TESTTMP/client4_pull/ | grep 'lfs' [1] $ hg debugrequires -R $SERVER_PATH --config extensions.lfs= | grep 'lfs' diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-match.py --- a/tests/test-match.py Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-match.py Thu Jun 22 11:36:37 2023 +0200 @@ -140,6 +140,28 @@ self.assertEqual(m.visitchildrenset(b'dir/subdir'), b'this') self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), b'this') + def testVisitdirFilepath(self): + m = matchmod.match( + util.localpath(b'/repo'), b'', patterns=[b'filepath:dir/z'] + ) + assert isinstance(m, matchmod.patternmatcher) + self.assertTrue(m.visitdir(b'')) + self.assertTrue(m.visitdir(b'dir')) + self.assertFalse(m.visitdir(b'folder')) + self.assertFalse(m.visitdir(b'dir/subdir')) + self.assertFalse(m.visitdir(b'dir/subdir/x')) + + def testVisitchildrensetFilepath(self): + m = matchmod.match( + util.localpath(b'/repo'), b'', patterns=[b'filepath:dir/z'] + ) + assert isinstance(m, matchmod.patternmatcher) + self.assertEqual(m.visitchildrenset(b''), b'this') + self.assertEqual(m.visitchildrenset(b'folder'), set()) + self.assertEqual(m.visitchildrenset(b'dir'), b'this') + 
self.assertEqual(m.visitchildrenset(b'dir/subdir'), set()) + self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), set()) + class IncludeMatcherTests(unittest.TestCase): def testVisitdirPrefix(self): @@ -212,6 +234,28 @@ self.assertEqual(m.visitchildrenset(b'dir/subdir'), b'this') self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), b'this') + def testVisitdirFilepath(self): + m = matchmod.match( + util.localpath(b'/repo'), b'', include=[b'filepath:dir/z'] + ) + assert isinstance(m, matchmod.includematcher) + self.assertTrue(m.visitdir(b'')) + self.assertTrue(m.visitdir(b'dir')) + self.assertFalse(m.visitdir(b'folder')) + self.assertFalse(m.visitdir(b'dir/subdir')) + self.assertFalse(m.visitdir(b'dir/subdir/x')) + + def testVisitchildrensetFilepath(self): + m = matchmod.match( + util.localpath(b'/repo'), b'', include=[b'filepath:dir/z'] + ) + assert isinstance(m, matchmod.includematcher) + self.assertEqual(m.visitchildrenset(b''), {b'dir'}) + self.assertEqual(m.visitchildrenset(b'folder'), set()) + self.assertEqual(m.visitchildrenset(b'dir'), {b'z'}) + self.assertEqual(m.visitchildrenset(b'dir/subdir'), set()) + self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), set()) + class ExactMatcherTests(unittest.TestCase): def testVisitdir(self): diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-narrow.t --- a/tests/test-narrow.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-narrow.t Thu Jun 22 11:36:37 2023 +0200 @@ -73,14 +73,8 @@ The "narrow" repo requirement is ignored by [debugupgraderepo] -#if tree - $ (cd should-work; hg debugupgraderepo) - abort: cannot upgrade repository; unsupported source requirement: treemanifest - [255] -#else $ (cd should-work; hg debugupgraderepo | grep 'no format upgrades found in existing repository') (no format upgrades found in existing repository) -#endif Test repo with local changes $ hg clone --narrow ssh://user@dummy/master narrow-local-changes --include d0 --include d3 --include d6 @@ -492,14 +486,14 @@ looking for unused 
includes to remove path:d0 path:d2 - remove these unused includes (yn)? n + remove these unused includes (Yn)? n $ hg tracked --auto-remove-includes comparing with ssh://user@dummy/master searching for changes looking for unused includes to remove path:d0 path:d2 - remove these unused includes (yn)? y + remove these unused includes (Yn)? y looking for local changes to affected paths moving unwanted changesets to backup saved backup bundle to $TESTTMP/narrow-auto-remove/.hg/strip-backup/*-narrow.hg (glob) @@ -527,7 +521,7 @@ looking for unused includes to remove path:d0 path:d2 - remove these unused includes (yn)? y + remove these unused includes (Yn)? y looking for local changes to affected paths deleting unwanted changesets deleting data/d0/f.i diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-obsolete-changeset-exchange.t --- a/tests/test-obsolete-changeset-exchange.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-obsolete-changeset-exchange.t Thu Jun 22 11:36:37 2023 +0200 @@ -164,7 +164,7 @@ adding manifests adding file changes adding foo revisions - bundle2-input-part: total payload size 476 + bundle2-input-part: total payload size 486 bundle2-input-part: "listkeys" (params: 1 mandatory) supported bundle2-input-part: "phase-heads" supported bundle2-input-part: total payload size 24 diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-obsolete-distributed.t --- a/tests/test-obsolete-distributed.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-obsolete-distributed.t Thu Jun 22 11:36:37 2023 +0200 @@ -163,7 +163,7 @@ adding manifests adding file changes adding c_B1 revisions - bundle2-input-part: total payload size 485 + bundle2-input-part: total payload size 495 bundle2-input-part: "listkeys" (params: 1 mandatory) supported bundle2-input-part: "obsmarkers" supported bundle2-input-part: total payload size 143 diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-obsolete.t --- a/tests/test-obsolete.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-obsolete.t Thu Jun 22 
11:36:37 2023 +0200 @@ -1600,7 +1600,7 @@ $ hg debugbundle .hg/strip-backup/e008cf283490-*-backup.hg Stream params: {Compression: BZ} - changegroup -- {nbchanges: 1, version: 02} (mandatory: True) + changegroup -- {nbchanges: 1, version: 03} (mandatory: True) e008cf2834908e5d6b0f792a9d4b0e2272260fb8 cache:rev-branch-cache -- {} (mandatory: False) phase-heads -- {} (mandatory: True) @@ -1643,7 +1643,7 @@ $ hg debugbundle .hg/strip-backup/e016b03fd86f-*-backup.hg Stream params: {Compression: BZ} - changegroup -- {nbchanges: 2, version: 02} (mandatory: True) + changegroup -- {nbchanges: 2, version: 03} (mandatory: True) e016b03fd86fcccc54817d120b90b751aaf367d6 b0551702f918510f01ae838ab03a463054c67b46 cache:rev-branch-cache -- {} (mandatory: False) diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-persistent-nodemap-stream-clone.t --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tests/test-persistent-nodemap-stream-clone.t Thu Jun 22 11:36:37 2023 +0200 @@ -0,0 +1,288 @@ +======================================================= +Test the persistent on-disk nodemap during stream-clone +======================================================= + +#testcases stream-v2 stream-v3 + +#if stream-v3 + $ cat << EOF >> $HGRCPATH + > [experimental] + > stream-v3=yes + > EOF +#endif + +Setup +===== + +#if no-rust + + $ cat << EOF >> $HGRCPATH + > [format] + > use-persistent-nodemap=yes + > [devel] + > persistent-nodemap=yes + > [storage] + > # to avoid spamming the test + > revlog.persistent-nodemap.slow-path=allow + > EOF + +#endif + +Recreate the same repo as in `test-persistent-nodemap.t` + + $ hg init test-repo --config storage.revlog.persistent-nodemap.slow-path=allow + $ hg -R test-repo debugbuilddag .+5000 --new-file + +stream clone +============ + +The persistent nodemap should exist after a streaming clone + +Simple case +----------- + +No race condition + + $ hg clone -U --stream ssh://user@dummy/test-repo stream-clone --debug | egrep '00(changelog|manifest)' + adding [s] 
00manifest.n (62 bytes) + adding [s] 00manifest-*.nd (118 KB) (glob) + adding [s] 00manifest.d (4?? KB) (glob) + adding [s] 00manifest.i (313 KB) + adding [s] 00changelog.n (62 bytes) + adding [s] 00changelog-*.nd (118 KB) (glob) + adding [s] 00changelog.d (3?? KB) (glob) + adding [s] 00changelog.i (313 KB) + $ ls -1 stream-clone/.hg/store/ | egrep '00(changelog|manifest)(\.n|-.*\.nd)' + 00changelog-*.nd (glob) + 00changelog.n + 00manifest-*.nd (glob) + 00manifest.n + $ hg -R stream-clone debugnodemap --metadata + uid: * (glob) + tip-rev: 5000 + tip-node: 6b02b8c7b96654c25e86ba69eda198d7e6ad8b3c + data-length: 121088 + data-unused: 0 + data-unused: 0.000% + $ hg verify -R stream-clone + checking changesets + checking manifests + crosschecking files in changesets and manifests + checking files + checking dirstate + checked 5001 changesets with 5001 changes to 5001 files + +new data appended +----------------- + +Other commit happening on the server during the stream clone + +set up the step-by-step stream cloning + + $ HG_TEST_STREAM_WALKED_FILE_1="$TESTTMP/sync_file_walked_1" + $ export HG_TEST_STREAM_WALKED_FILE_1 + $ HG_TEST_STREAM_WALKED_FILE_2="$TESTTMP/sync_file_walked_2" + $ export HG_TEST_STREAM_WALKED_FILE_2 + $ HG_TEST_STREAM_WALKED_FILE_3="$TESTTMP/sync_file_walked_3" + $ export HG_TEST_STREAM_WALKED_FILE_3 + $ cat << EOF >> test-repo/.hg/hgrc + > [extensions] + > steps=$RUNTESTDIR/testlib/ext-stream-clone-steps.py + > EOF + +Check and record file state beforehand + + $ f --size test-repo/.hg/store/00changelog* + test-repo/.hg/store/00changelog-*.nd: size=121088 (glob) + test-repo/.hg/store/00changelog.d: size=3?????
(glob) + test-repo/.hg/store/00changelog.i: size=320064 + test-repo/.hg/store/00changelog.n: size=62 + $ hg -R test-repo debugnodemap --metadata | tee server-metadata.txt + uid: * (glob) + tip-rev: 5000 + tip-node: 6b02b8c7b96654c25e86ba69eda198d7e6ad8b3c + data-length: 121088 + data-unused: 0 + data-unused: 0.000% + +Prepare a commit + + $ echo foo >> test-repo/foo + $ hg -R test-repo/ add test-repo/foo + +Do a mix of clone and commit at the same time so that the files listed on disk differ at actual transfer time. + + $ (hg clone -U --stream ssh://user@dummy/test-repo stream-clone-race-1 --debug 2>> clone-output | egrep '00(changelog|manifest)' >> clone-output; touch $HG_TEST_STREAM_WALKED_FILE_3) & + $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_1 + $ hg -R test-repo/ commit -m foo + created new head + $ touch $HG_TEST_STREAM_WALKED_FILE_2 + $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_3 + $ cat clone-output + adding [s] 00manifest.n (62 bytes) + adding [s] 00manifest-*.nd (118 KB) (glob) + adding [s] 00manifest.d (4?? KB) (glob) + adding [s] 00manifest.i (313 KB) + adding [s] 00changelog.n (62 bytes) + adding [s] 00changelog-*.nd (118 KB) (glob) + adding [s] 00changelog.d (36? KB) (glob) + adding [s] 00changelog.i (313 KB) + +Check the result state + + $ f --size stream-clone-race-1/.hg/store/00changelog* + stream-clone-race-1/.hg/store/00changelog-*.nd: size=121088 (glob) + stream-clone-race-1/.hg/store/00changelog.d: size=3?????
(glob) + stream-clone-race-1/.hg/store/00changelog.i: size=320064 + stream-clone-race-1/.hg/store/00changelog.n: size=62 + + $ hg -R stream-clone-race-1 debugnodemap --metadata | tee client-metadata.txt + uid: * (glob) + tip-rev: 5000 + tip-node: 6b02b8c7b96654c25e86ba69eda198d7e6ad8b3c + data-length: 121088 + data-unused: 0 + data-unused: 0.000% + $ hg verify -R stream-clone-race-1 + checking changesets + checking manifests + crosschecking files in changesets and manifests + checking files + checking dirstate + checked 5001 changesets with 5001 changes to 5001 files + +We get a usable nodemap, so no rewrite would be needed and the metadata should be identical +(i.e. the following diff should be empty) + +This isn't the case for the `no-rust` `no-pure` implementation, as it uses a very minimal nodemap implementation that unconditionally rewrites the nodemap "all the time". + +#if no-rust no-pure + $ diff -u server-metadata.txt client-metadata.txt + --- server-metadata.txt * (glob) + +++ client-metadata.txt * (glob) + @@ -1,4 +1,4 @@ + -uid: * (glob) + +uid: * (glob) + tip-rev: 5000 + tip-node: 6b02b8c7b96654c25e86ba69eda198d7e6ad8b3c + data-length: 121088 + [1] +#else + $ diff -u server-metadata.txt client-metadata.txt +#endif + + +Clean up after the test. + + $ rm -f "$HG_TEST_STREAM_WALKED_FILE_1" + $ rm -f "$HG_TEST_STREAM_WALKED_FILE_2" + $ rm -f "$HG_TEST_STREAM_WALKED_FILE_3" + +full regeneration +----------------- + +A full nodemap is generated + +(ideally this test would append enough data to make sure the nodemap data file +gets changed; however, to make things simpler we will force the regeneration for +this test.) + +Check the initial state + + $ f --size test-repo/.hg/store/00changelog* + test-repo/.hg/store/00changelog-*.nd: size=121??? (glob) + test-repo/.hg/store/00changelog.d: size=3?????
(glob) + test-repo/.hg/store/00changelog.i: size=320128 + test-repo/.hg/store/00changelog.n: size=62 + $ hg -R test-repo debugnodemap --metadata | tee server-metadata-2.txt + uid: * (glob) + tip-rev: 5001 + tip-node: e63c23eaa88ae77967edcf4ea194d31167c478b0 + data-length: 121408 (pure !) + data-unused: 256 (pure !) + data-unused: 0.211% (pure !) + data-length: 121408 (rust !) + data-unused: 256 (rust !) + data-unused: 0.211% (rust !) + data-length: 121152 (no-pure no-rust !) + data-unused: 0 (no-pure no-rust !) + data-unused: 0.000% (no-pure no-rust !) + +Perform the mix of clone and full refresh of the nodemap, so that the files +(and filenames) are different between listing time and actual transfer time. + + $ (hg clone -U --stream ssh://user@dummy/test-repo stream-clone-race-2 --debug 2>> clone-output-2 | egrep '00(changelog|manifest)' >> clone-output-2; touch $HG_TEST_STREAM_WALKED_FILE_3) & + $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_1 + $ rm test-repo/.hg/store/00changelog.n + $ rm test-repo/.hg/store/00changelog-*.nd + $ hg -R test-repo/ debugupdatecache + $ touch $HG_TEST_STREAM_WALKED_FILE_2 + $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_3 + +(note: the stream clone code wrongly picks the `undo.` files) + + $ cat clone-output-2 + adding [s] 00manifest.n (62 bytes) + adding [s] 00manifest-*.nd (118 KB) (glob) + adding [s] 00manifest.d (4?? KB) (glob) + adding [s] 00manifest.i (313 KB) + adding [s] 00changelog.n (62 bytes) + adding [s] 00changelog-*.nd (11? KB) (glob) + adding [s] 00changelog.d (3?? KB) (glob) + adding [s] 00changelog.i (313 KB) + +Check the result. + + $ f --size stream-clone-race-2/.hg/store/00changelog* + stream-clone-race-2/.hg/store/00changelog-*.nd: size=1????? (glob) + stream-clone-race-2/.hg/store/00changelog.d: size=3?????
(glob) + stream-clone-race-2/.hg/store/00changelog.i: size=320128 + stream-clone-race-2/.hg/store/00changelog.n: size=62 + + $ hg -R stream-clone-race-2 debugnodemap --metadata | tee client-metadata-2.txt + uid: * (glob) + tip-rev: 5001 + tip-node: e63c23eaa88ae77967edcf4ea194d31167c478b0 + data-length: 121408 (pure !) + data-unused: 256 (pure !) + data-unused: 0.211% (pure !) + data-length: 121408 (rust !) + data-unused: 256 (rust !) + data-unused: 0.211% (rust !) + data-length: 121152 (no-pure no-rust !) + data-unused: 0 (no-pure no-rust !) + data-unused: 0.000% (no-pure no-rust !) + $ hg verify -R stream-clone-race-2 + checking changesets + checking manifests + crosschecking files in changesets and manifests + checking files + checking dirstate + checked 5002 changesets with 5002 changes to 5002 files + +We get a usable nodemap, so no rewrite would be needed and the metadata should be identical +(i.e. the following diff should be empty) + +This isn't the case for the `no-rust` `no-pure` implementation, as it uses a very minimal nodemap implementation that unconditionally rewrites the nodemap "all the time". + +#if no-rust no-pure + $ diff -u server-metadata-2.txt client-metadata-2.txt + --- server-metadata-2.txt * (glob) + +++ client-metadata-2.txt * (glob) + @@ -1,4 +1,4 @@ + -uid: * (glob) + +uid: * (glob) + tip-rev: 5001 + tip-node: e63c23eaa88ae77967edcf4ea194d31167c478b0 + data-length: 121152 + [1] +#else + $ diff -u server-metadata-2.txt client-metadata-2.txt +#endif + +Clean up after the test + + $ rm -f $HG_TEST_STREAM_WALKED_FILE_1 + $ rm -f $HG_TEST_STREAM_WALKED_FILE_2 + $ rm -f $HG_TEST_STREAM_WALKED_FILE_3 + diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-persistent-nodemap.t --- a/tests/test-persistent-nodemap.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-persistent-nodemap.t Thu Jun 22 11:36:37 2023 +0200 @@ -90,6 +90,14 @@ $ f --size .hg/store/00changelog.n .hg/store/00changelog.n: size=62 + $ hg debugnodemap --metadata --manifest + uid: ????????
(glob) + tip-rev: 5000 + tip-node: 513d42790a19f0f60c6ebea54b9543bc9537b959 + data-length: 120960 + data-unused: 0 + data-unused: 0.000% + Simple lookup works $ ANYNODE=`hg log --template '{node|short}\n' --rev tip` @@ -154,8 +162,8 @@ #endif $ hg debugnodemap --check - revision in index: 5001 - revision in nodemap: 5001 + revisions in index: 5001 + revisions in nodemap: 5001 add a new commit @@ -241,8 +249,8 @@ #endif $ hg debugnodemap --check - revision in index: 5002 - revision in nodemap: 5002 + revisions in index: 5002 + revisions in nodemap: 5002 Test code path without mmap --------------------------- @@ -252,11 +260,11 @@ $ hg ci -m 'bar' --config storage.revlog.persistent-nodemap.mmap=no $ hg debugnodemap --check --config storage.revlog.persistent-nodemap.mmap=yes - revision in index: 5003 - revision in nodemap: 5003 + revisions in index: 5003 + revisions in nodemap: 5003 $ hg debugnodemap --check --config storage.revlog.persistent-nodemap.mmap=no - revision in index: 5003 - revision in nodemap: 5003 + revisions in index: 5003 + revisions in nodemap: 5003 #if pure @@ -1003,258 +1011,3 @@ date: Thu Jan 01 00:00:00 1970 +0000 summary: a2 - - -stream clone -============ - -The persistent nodemap should exist after a streaming clone - -Simple case ------------ - -No race condition - - $ hg clone -U --stream ssh://user@dummy/test-repo stream-clone --debug | egrep '00(changelog|manifest)' - adding [s] 00manifest.n (62 bytes) - adding [s] 00manifest-*.nd (118 KB) (glob) - adding [s] 00changelog.n (62 bytes) - adding [s] 00changelog-*.nd (118 KB) (glob) - adding [s] 00manifest.d (452 KB) (no-zstd !) - adding [s] 00manifest.d (491 KB) (zstd no-bigendian !) - adding [s] 00manifest.d (492 KB) (zstd bigendian !) - adding [s] 00changelog.d (360 KB) (no-zstd !) - adding [s] 00changelog.d (368 KB) (zstd !) 
- adding [s] 00manifest.i (313 KB) - adding [s] 00changelog.i (313 KB) - $ ls -1 stream-clone/.hg/store/ | egrep '00(changelog|manifest)(\.n|-.*\.nd)' - 00changelog-*.nd (glob) - 00changelog.n - 00manifest-*.nd (glob) - 00manifest.n - $ hg -R stream-clone debugnodemap --metadata - uid: * (glob) - tip-rev: 5005 - tip-node: 90d5d3ba2fc47db50f712570487cb261a68c8ffe - data-length: 121088 - data-unused: 0 - data-unused: 0.000% - -new data appened ------------------ - -Other commit happening on the server during the stream clone - -setup the step-by-step stream cloning - - $ HG_TEST_STREAM_WALKED_FILE_1="$TESTTMP/sync_file_walked_1" - $ export HG_TEST_STREAM_WALKED_FILE_1 - $ HG_TEST_STREAM_WALKED_FILE_2="$TESTTMP/sync_file_walked_2" - $ export HG_TEST_STREAM_WALKED_FILE_2 - $ HG_TEST_STREAM_WALKED_FILE_3="$TESTTMP/sync_file_walked_3" - $ export HG_TEST_STREAM_WALKED_FILE_3 - $ cat << EOF >> test-repo/.hg/hgrc - > [extensions] - > steps=$RUNTESTDIR/testlib/ext-stream-clone-steps.py - > EOF - -Check and record file state beforehand - - $ f --size test-repo/.hg/store/00changelog* - test-repo/.hg/store/00changelog-*.nd: size=121088 (glob) - test-repo/.hg/store/00changelog.d: size=376891 (zstd no-bigendian !) - test-repo/.hg/store/00changelog.d: size=376889 (zstd bigendian !) - test-repo/.hg/store/00changelog.d: size=368890 (no-zstd !) - test-repo/.hg/store/00changelog.i: size=320384 - test-repo/.hg/store/00changelog.n: size=62 - $ hg -R test-repo debugnodemap --metadata | tee server-metadata.txt - uid: * (glob) - tip-rev: 5005 - tip-node: 90d5d3ba2fc47db50f712570487cb261a68c8ffe - data-length: 121088 - data-unused: 0 - data-unused: 0.000% - -Prepare a commit - - $ echo foo >> test-repo/foo - $ hg -R test-repo/ add test-repo/foo - -Do a mix of clone and commit at the same time so that the file listed on disk differ at actual transfer time. 
- - $ (hg clone -U --stream ssh://user@dummy/test-repo stream-clone-race-1 --debug 2>> clone-output | egrep '00(changelog|manifest)' >> clone-output; touch $HG_TEST_STREAM_WALKED_FILE_3) & - $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_1 - $ hg -R test-repo/ commit -m foo - $ touch $HG_TEST_STREAM_WALKED_FILE_2 - $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_3 - $ cat clone-output - adding [s] 00manifest.n (62 bytes) - adding [s] 00manifest-*.nd (118 KB) (glob) - adding [s] 00changelog.n (62 bytes) - adding [s] 00changelog-*.nd (118 KB) (glob) - adding [s] 00manifest.d (452 KB) (no-zstd !) - adding [s] 00manifest.d (491 KB) (zstd no-bigendian !) - adding [s] 00manifest.d (492 KB) (zstd bigendian !) - adding [s] 00changelog.d (360 KB) (no-zstd !) - adding [s] 00changelog.d (368 KB) (zstd !) - adding [s] 00manifest.i (313 KB) - adding [s] 00changelog.i (313 KB) - -Check the result state - - $ f --size stream-clone-race-1/.hg/store/00changelog* - stream-clone-race-1/.hg/store/00changelog-*.nd: size=121088 (glob) - stream-clone-race-1/.hg/store/00changelog.d: size=368890 (no-zstd !) - stream-clone-race-1/.hg/store/00changelog.d: size=376891 (zstd no-bigendian !) - stream-clone-race-1/.hg/store/00changelog.d: size=376889 (zstd bigendian !) - stream-clone-race-1/.hg/store/00changelog.i: size=320384 - stream-clone-race-1/.hg/store/00changelog.n: size=62 - - $ hg -R stream-clone-race-1 debugnodemap --metadata | tee client-metadata.txt - uid: * (glob) - tip-rev: 5005 - tip-node: 90d5d3ba2fc47db50f712570487cb261a68c8ffe - data-length: 121088 - data-unused: 0 - data-unused: 0.000% - -We get a usable nodemap, so no rewrite would be needed and the metadata should be identical -(ie: the following diff should be empty) - -This isn't the case for the `no-rust` `no-pure` implementation as it use a very minimal nodemap implementation that unconditionnaly rewrite the nodemap "all the time". 
- -#if no-rust no-pure - $ diff -u server-metadata.txt client-metadata.txt - --- server-metadata.txt * (glob) - +++ client-metadata.txt * (glob) - @@ -1,4 +1,4 @@ - -uid: * (glob) - +uid: * (glob) - tip-rev: 5005 - tip-node: 90d5d3ba2fc47db50f712570487cb261a68c8ffe - data-length: 121088 - [1] -#else - $ diff -u server-metadata.txt client-metadata.txt -#endif - - -Clean up after the test. - - $ rm -f "$HG_TEST_STREAM_WALKED_FILE_1" - $ rm -f "$HG_TEST_STREAM_WALKED_FILE_2" - $ rm -f "$HG_TEST_STREAM_WALKED_FILE_3" - -full regeneration ------------------ - -A full nodemap is generated - -(ideally this test would append enough data to make sure the nodemap data file -get changed, however to make thing simpler we will force the regeneration for -this test. - -Check the initial state - - $ f --size test-repo/.hg/store/00changelog* - test-repo/.hg/store/00changelog-*.nd: size=121344 (glob) (rust !) - test-repo/.hg/store/00changelog-*.nd: size=121344 (glob) (pure !) - test-repo/.hg/store/00changelog-*.nd: size=121152 (glob) (no-rust no-pure !) - test-repo/.hg/store/00changelog.d: size=376950 (zstd no-bigendian !) - test-repo/.hg/store/00changelog.d: size=376948 (zstd bigendian !) - test-repo/.hg/store/00changelog.d: size=368949 (no-zstd !) - test-repo/.hg/store/00changelog.i: size=320448 - test-repo/.hg/store/00changelog.n: size=62 - $ hg -R test-repo debugnodemap --metadata | tee server-metadata-2.txt - uid: * (glob) - tip-rev: 5006 - tip-node: ed2ec1eef9aa2a0ec5057c51483bc148d03e810b - data-length: 121344 (rust !) - data-length: 121344 (pure !) - data-length: 121152 (no-rust no-pure !) - data-unused: 192 (rust !) - data-unused: 192 (pure !) - data-unused: 0 (no-rust no-pure !) - data-unused: 0.158% (rust !) - data-unused: 0.158% (pure !) - data-unused: 0.000% (no-rust no-pure !) - -Performe the mix of clone and full refresh of the nodemap, so that the files -(and filenames) are different between listing time and actual transfer time. 
- - $ (hg clone -U --stream ssh://user@dummy/test-repo stream-clone-race-2 --debug 2>> clone-output-2 | egrep '00(changelog|manifest)' >> clone-output-2; touch $HG_TEST_STREAM_WALKED_FILE_3) & - $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_1 - $ rm test-repo/.hg/store/00changelog.n - $ rm test-repo/.hg/store/00changelog-*.nd - $ hg -R test-repo/ debugupdatecache - $ touch $HG_TEST_STREAM_WALKED_FILE_2 - $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_3 - -(note: the stream clone code wronly pick the `undo.` files) - - $ cat clone-output-2 - adding [s] undo.backup.00manifest.n (62 bytes) (known-bad-output !) - adding [s] undo.backup.00changelog.n (62 bytes) (known-bad-output !) - adding [s] 00manifest.n (62 bytes) - adding [s] 00manifest-*.nd (118 KB) (glob) - adding [s] 00changelog.n (62 bytes) - adding [s] 00changelog-*.nd (118 KB) (glob) - adding [s] 00manifest.d (492 KB) (zstd !) - adding [s] 00manifest.d (452 KB) (no-zstd !) - adding [s] 00changelog.d (360 KB) (no-zstd !) - adding [s] 00changelog.d (368 KB) (zstd !) - adding [s] 00manifest.i (313 KB) - adding [s] 00changelog.i (313 KB) - -Check the result. - - $ f --size stream-clone-race-2/.hg/store/00changelog* - stream-clone-race-2/.hg/store/00changelog-*.nd: size=121344 (glob) (rust !) - stream-clone-race-2/.hg/store/00changelog-*.nd: size=121344 (glob) (pure !) - stream-clone-race-2/.hg/store/00changelog-*.nd: size=121152 (glob) (no-rust no-pure !) - stream-clone-race-2/.hg/store/00changelog.d: size=376950 (zstd no-bigendian !) - stream-clone-race-2/.hg/store/00changelog.d: size=376948 (zstd bigendian !) - stream-clone-race-2/.hg/store/00changelog.d: size=368949 (no-zstd !) 
- stream-clone-race-2/.hg/store/00changelog.i: size=320448 - stream-clone-race-2/.hg/store/00changelog.n: size=62 - - $ hg -R stream-clone-race-2 debugnodemap --metadata | tee client-metadata-2.txt - uid: * (glob) - tip-rev: 5006 - tip-node: ed2ec1eef9aa2a0ec5057c51483bc148d03e810b - data-length: 121344 (rust !) - data-unused: 192 (rust !) - data-unused: 0.158% (rust !) - data-length: 121152 (no-rust no-pure !) - data-unused: 0 (no-rust no-pure !) - data-unused: 0.000% (no-rust no-pure !) - data-length: 121344 (pure !) - data-unused: 192 (pure !) - data-unused: 0.158% (pure !) - -We get a usable nodemap, so no rewrite would be needed and the metadata should be identical -(ie: the following diff should be empty) - -This isn't the case for the `no-rust` `no-pure` implementation as it use a very minimal nodemap implementation that unconditionnaly rewrite the nodemap "all the time". - -#if no-rust no-pure - $ diff -u server-metadata-2.txt client-metadata-2.txt - --- server-metadata-2.txt * (glob) - +++ client-metadata-2.txt * (glob) - @@ -1,4 +1,4 @@ - -uid: * (glob) - +uid: * (glob) - tip-rev: 5006 - tip-node: ed2ec1eef9aa2a0ec5057c51483bc148d03e810b - data-length: 121152 - [1] -#else - $ diff -u server-metadata-2.txt client-metadata-2.txt -#endif - -Clean up after the test - - $ rm -f $HG_TEST_STREAM_WALKED_FILE_1 - $ rm -f $HG_TEST_STREAM_WALKED_FILE_2 - $ rm -f $HG_TEST_STREAM_WALKED_FILE_3 - diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-phase-archived.t --- a/tests/test-phase-archived.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-phase-archived.t Thu Jun 22 11:36:37 2023 +0200 @@ -141,3 +141,58 @@ date: Thu Jan 01 00:00:00 1970 +0000 summary: root + +Test that a strip will preserve an unrelated archived changeset +--------------------------------------------------------------- + +prepare a suitable tree + + $ echo foo > bar + $ hg add bar + $ hg commit -m 'some more commit' + $ hg log -G --hidden -T '{rev} {node|short} [{phase}] {desc|firstline}\n' + @ 3
f90bf4e57854 [draft] some more commit + | + o 2 d1e73e428f29 [draft] unbundletesting + | + | o 1 883aadbbf309 [draft] unbundletesting + |/ + o 0 c1863a3840c6 [draft] root + + $ hg strip --soft --rev '.' + 0 files updated, 0 files merged, 1 files removed, 0 files unresolved + saved backup bundle to $TESTTMP/repo/.hg/strip-backup/f90bf4e57854-56b37ff2-backup.hg + $ hg log -G --hidden -T '{rev} {node|short} [{phase}] {desc|firstline}\n' + o 3 f90bf4e57854 [archived] some more commit + | + @ 2 d1e73e428f29 [draft] unbundletesting + | + | o 1 883aadbbf309 [draft] unbundletesting + |/ + o 0 c1863a3840c6 [draft] root + + + +Strips the other (lower rev-num) head + + $ hg strip --rev 'min(head() and not .)' + saved backup bundle to $TESTTMP/repo/.hg/strip-backup/883aadbbf309-efc55adc-backup.hg + +The archived changeset should still be hidden + + $ hg log -G -T '{rev} {node|short} [{phase}] {desc|firstline}\n' + @ 1 d1e73e428f29 [draft] unbundletesting + | + o 0 c1863a3840c6 [draft] root + + +It may still be around: + + $ hg log --hidden -G -T '{rev} {node|short} [{phase}] {desc|firstline}\n' + o 2 f90bf4e57854 [archived] some more commit + | + @ 1 d1e73e428f29 [draft] unbundletesting + | + o 0 c1863a3840c6 [draft] root + + diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-phases-exchange.t --- a/tests/test-phases-exchange.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-phases-exchange.t Thu Jun 22 11:36:37 2023 +0200 @@ -852,9 +852,9 @@ searching for changes 1 changesets found uncompressed size of bundle content: - 178 (changelog) - 165 (manifests) - 131 a-H + 180 (changelog) + 167 (manifests) + 133 a-H adding changesets adding manifests adding file changes diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-push-warn.t --- a/tests/test-push-warn.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-push-warn.t Thu Jun 22 11:36:37 2023 +0200 @@ -151,9 +151,9 @@ searching for changes 2 changesets found uncompressed size of bundle content: - 352 (changelog) - 326 (manifests) - 25\d 
foo (re) + 356 (changelog) + 330 (manifests) + 261 foo adding changesets adding manifests adding file changes diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-rebase-abort.t --- a/tests/test-rebase-abort.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-rebase-abort.t Thu Jun 22 11:36:37 2023 +0200 @@ -392,7 +392,6 @@ .hg/dirstate .hg/merge/state .hg/rebasestate - .hg/undo.backup.dirstate .hg/updatestate $ hg rebase -s 3 -d tip diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-rebase-conflicts.t --- a/tests/test-rebase-conflicts.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-rebase-conflicts.t Thu Jun 22 11:36:37 2023 +0200 @@ -296,9 +296,8 @@ bundle2-output-part: "cache:rev-branch-cache" (advisory) streamed payload bundle2-output-part: "phase-heads" 24 bytes payload saved backup bundle to $TESTTMP/issue4041/.hg/strip-backup/e31216eec445-15f7a814-rebase.hg - 3 changesets found + 2 changesets found list of changesets: - 4c9fbe56a16f30c0d5dcc40ec1a97bbe3325209c 19c888675e133ab5dff84516926a65672eaf04d9 c1ffa3b5274e92a9388fe782854e295d2e8d0443 bundle2-output-bundle: "HG20", 3 parts total @@ -309,15 +308,14 @@ bundle2-input-bundle: with-transaction bundle2-input-part: "changegroup" (params: 1 mandatory 1 advisory) supported adding changesets - add changeset 4c9fbe56a16f add changeset 19c888675e13 add changeset c1ffa3b5274e adding manifests adding file changes adding f1.txt revisions - bundle2-input-part: total payload size 1739 + bundle2-input-part: total payload size 1255 bundle2-input-part: "cache:rev-branch-cache" (advisory) supported - bundle2-input-part: total payload size 74 + bundle2-input-part: total payload size 54 bundle2-input-part: "phase-heads" supported bundle2-input-part: total payload size 24 bundle2-input-bundle: 3 parts total diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-rebase-mq-skip.t --- a/tests/test-rebase-mq-skip.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-rebase-mq-skip.t Thu Jun 22 11:36:37 2023 +0200 @@ -75,17 +75,17 @@ 
$TESTTMP/a/.hg/patches/p0.patch 2 changesets found uncompressed size of bundle content: - 348 (changelog) - 324 (manifests) - 129 p0 - 129 p1 + 352 (changelog) + 328 (manifests) + 131 p0 + 131 p1 saved backup bundle to $TESTTMP/a/.hg/strip-backup/13a46ce44f60-5da6ecfb-rebase.hg 2 changesets found uncompressed size of bundle content: - 403 (changelog) - 324 (manifests) - 129 p0 - 129 p1 + 407 (changelog) + 328 (manifests) + 131 p0 + 131 p1 adding branch adding changesets adding manifests diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-rebase-newancestor.t --- a/tests/test-rebase-newancestor.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-rebase-newancestor.t Thu Jun 22 11:36:37 2023 +0200 @@ -263,15 +263,15 @@ rebase merging completed 1 changesets found uncompressed size of bundle content: - 199 (changelog) - 216 (manifests) - 182 other + 201 (changelog) + 218 (manifests) + 184 other saved backup bundle to $TESTTMP/parentorder/.hg/strip-backup/4c5f12f25ebe-f46990e5-rebase.hg 1 changesets found uncompressed size of bundle content: - 254 (changelog) - 167 (manifests) - 182 other + 256 (changelog) + 169 (manifests) + 184 other adding branch adding changesets adding manifests diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-remote-hidden.t --- a/tests/test-remote-hidden.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-remote-hidden.t Thu Jun 22 11:36:37 2023 +0200 @@ -6,6 +6,8 @@ $ . 
$TESTDIR/testlib/obsmarker-common.sh $ cat >> $HGRCPATH << EOF + > [ui] + > ssh = "$PYTHON" "$RUNTESTDIR/dummyssh" > [phases] > # public changeset are not obsolete > publish=false @@ -111,3 +113,294 @@ revision: 0 $ killdaemons.py + +Test --remote-hidden for local peer +----------------------------------- + + $ hg clone --pull repo-with-hidden client + requesting all changes + adding changesets + adding manifests + adding file changes + added 2 changesets with 2 changes to 1 files + 2 new obsolescence markers + new changesets 5f354f46e585:c33affeb3f6b (1 drafts) + updating to branch default + 1 files updated, 0 files merged, 0 files removed, 0 files unresolved + $ hg -R client log -G --hidden -v + @ 1:c33affeb3f6b c_Amend_New [draft] + | + o 0:5f354f46e585 c_Public [public] + + +pulling a hidden changeset should fail: + + $ hg -R client pull -r be215fbb8c50 + pulling from $TESTTMP/repo-with-hidden + abort: filtered revision 'be215fbb8c50' (not in 'served' subset) + [10] + +pulling a hidden changeset with --remote-hidden should succeed: + + $ hg -R client pull --remote-hidden --traceback -r be215fbb8c50 + pulling from $TESTTMP/repo-with-hidden + searching for changes + adding changesets + adding manifests + adding file changes + added 1 changesets with 1 changes to 1 files (+1 heads) + (1 other changesets obsolete on arrival) + (run 'hg heads' to see heads) + $ hg -R client log -G --hidden -v + x 2:be215fbb8c50 c_Amend_Old [draft] + | + | @ 1:c33affeb3f6b c_Amend_New [draft] + |/ + o 0:5f354f46e585 c_Public [public] + + +Pulling a secret changeset is still forbidden: + +secret visible: + + $ hg -R client pull --remote-hidden -r 8d28cbe335f3 + pulling from $TESTTMP/repo-with-hidden + abort: filtered revision '8d28cbe335f3' (not in 'served.hidden' subset) + [10] + +secret hidden: + + $ hg -R client pull --remote-hidden -r 1c6afd79eb66 + pulling from $TESTTMP/repo-with-hidden + abort: filtered revision '1c6afd79eb66' (not in 'served.hidden' subset) + [10] + +Test
accessing hidden changesets through hgweb +---------------------------------------------- + + $ hg -R repo-with-hidden serve -p $HGPORT -d --pid-file hg.pid --config "experimental.server.allow-hidden-access=*" -E error.log --accesslog access.log + $ cat hg.pid >> $DAEMON_PIDS + +Hidden changesets are hidden by default: + + $ get-with-headers.py localhost:$HGPORT 'log?style=raw' | grep revision: + revision: 2 + revision: 0 + +Hidden changesets are visible when requested: + + $ get-with-headers.py localhost:$HGPORT 'log?style=raw&access-hidden=1' | grep revision: + revision: 3 + revision: 2 + revision: 1 + revision: 0 + +Same check on a server that does not allow hidden access: +````````````````````````````````````````````````````````` + + $ hg -R repo-with-hidden serve -p $HGPORT1 -d --pid-file hg2.pid --config "experimental.server.allow-hidden-access=" -E error.log --accesslog access.log + $ cat hg2.pid >> $DAEMON_PIDS + +Hidden changesets are hidden by default: + + $ get-with-headers.py localhost:$HGPORT1 'log?style=raw' | grep revision: + revision: 2 + revision: 0 + +Hidden changesets are still hidden despite the hidden-access request: + + $ get-with-headers.py localhost:$HGPORT1 'log?style=raw&access-hidden=1' | grep revision: + revision: 2 + revision: 0 + +Test --remote-hidden for http peer +---------------------------------- + + $ hg clone --pull http://localhost:$HGPORT client-http + requesting all changes + adding changesets + adding manifests + adding file changes + added 2 changesets with 2 changes to 1 files + 2 new obsolescence markers + new changesets 5f354f46e585:c33affeb3f6b (1 drafts) + updating to branch default + 1 files updated, 0 files merged, 0 files removed, 0 files unresolved + $ hg -R client-http log -G --hidden -v + @ 1:c33affeb3f6b c_Amend_New [draft] + | + o 0:5f354f46e585 c_Public [public] + + +pulling a hidden changeset should fail: + + $ hg -R client-http pull -r be215fbb8c50 + pulling from http://localhost:$HGPORT/ + abort: filtered
revision 'be215fbb8c50' (not in 'served' subset) + [255] + +pulling a hidden changeset with --remote-hidden should succeed: + + $ hg -R client-http pull --remote-hidden -r be215fbb8c50 + pulling from http://localhost:$HGPORT/ + searching for changes + adding changesets + adding manifests + adding file changes + added 1 changesets with 1 changes to 1 files (+1 heads) + (1 other changesets obsolete on arrival) + (run 'hg heads' to see heads) + $ hg -R client-http log -G --hidden -v + x 2:be215fbb8c50 c_Amend_Old [draft] + | + | @ 1:c33affeb3f6b c_Amend_New [draft] + |/ + o 0:5f354f46e585 c_Public [public] + + +Pulling a secret changeset is still forbidden: + +secret visible: + + $ hg -R client-http pull --remote-hidden -r 8d28cbe335f3 + pulling from http://localhost:$HGPORT/ + abort: filtered revision '8d28cbe335f3' (not in 'served.hidden' subset) + [255] + +secret hidden: + + $ hg -R client-http pull --remote-hidden -r 1c6afd79eb66 + pulling from http://localhost:$HGPORT/ + abort: filtered revision '1c6afd79eb66' (not in 'served.hidden' subset) + [255] + +Same check on a server that does not allow hidden access: +````````````````````````````````````````````````````````` + + $ hg clone --pull http://localhost:$HGPORT1 client-http2 + requesting all changes + adding changesets + adding manifests + adding file changes + added 2 changesets with 2 changes to 1 files + 2 new obsolescence markers + new changesets 5f354f46e585:c33affeb3f6b (1 drafts) + updating to branch default + 1 files updated, 0 files merged, 0 files removed, 0 files unresolved + $ hg -R client-http2 log -G --hidden -v + @ 1:c33affeb3f6b c_Amend_New [draft] + | + o 0:5f354f46e585 c_Public [public] + + +pulling a hidden changeset should fail: + + $ hg -R client-http2 pull -r be215fbb8c50 + pulling from http://localhost:$HGPORT1/ + abort: filtered revision 'be215fbb8c50' (not in 'served' subset) + [255] + +pulling a hidden changeset with --remote-hidden should fail too: + + $ hg -R client-http2 pull
--remote-hidden -r be215fbb8c50 + pulling from http://localhost:$HGPORT1/ + abort: filtered revision 'be215fbb8c50' (not in 'served' subset) + [255] + +Test --remote-hidden for ssh peer +---------------------------------- + + $ hg clone --pull ssh://user@dummy/repo-with-hidden client-ssh + requesting all changes + adding changesets + adding manifests + adding file changes + added 2 changesets with 2 changes to 1 files + 2 new obsolescence markers + new changesets 5f354f46e585:c33affeb3f6b (1 drafts) + updating to branch default + 1 files updated, 0 files merged, 0 files removed, 0 files unresolved + $ hg -R client-ssh log -G --hidden -v + @ 1:c33affeb3f6b c_Amend_New [draft] + | + o 0:5f354f46e585 c_Public [public] + + +Check on a server that does not allow hidden access: +`````````````````````````````````````````````````` + +pulling a hidden changeset should fail: + + $ hg -R client-ssh pull -r be215fbb8c50 + pulling from ssh://user@dummy/repo-with-hidden + abort: filtered revision 'be215fbb8c50' (not in 'served' subset) + [255] + +pulling a hidden changeset with --remote-hidden should fail too: + + $ hg -R client-ssh pull --remote-hidden -r be215fbb8c50 + pulling from ssh://user@dummy/repo-with-hidden + remote: ignoring request to access hidden changeset by unauthorized user: * (glob) + abort: filtered revision 'be215fbb8c50' (not in 'served' subset) + [255] + $ hg -R client-ssh log -G --hidden -v + @ 1:c33affeb3f6b c_Amend_New [draft] + | + o 0:5f354f46e585 c_Public [public] + + +Check on a server that does allow hidden access: +`````````````````````````````````````````````` + + $ cat << EOF >> repo-with-hidden/.hg/hgrc + > [experimental] + > server.allow-hidden-access=* + > EOF + +pulling a hidden changeset should fail: + + $ hg -R client-ssh pull -r be215fbb8c50 + pulling from ssh://user@dummy/repo-with-hidden + abort: filtered revision 'be215fbb8c50' (not in 'served' subset) + [255] + +pulling a hidden changeset with --remote-hidden should succeed: + + $ hg -R
client-ssh pull --remote-hidden -r be215fbb8c50 + pulling from ssh://user@dummy/repo-with-hidden + searching for changes + adding changesets + adding manifests + adding file changes + added 1 changesets with 1 changes to 1 files (+1 heads) + (1 other changesets obsolete on arrival) + (run 'hg heads' to see heads) + $ hg -R client-ssh log -G --hidden -v + x 2:be215fbb8c50 c_Amend_Old [draft] + | + | @ 1:c33affeb3f6b c_Amend_New [draft] + |/ + o 0:5f354f46e585 c_Public [public] + + +Pulling a secret changeset is still forbidden: + +secret visible: + + $ hg -R client-ssh pull --remote-hidden -r 8d28cbe335f3 + pulling from ssh://user@dummy/repo-with-hidden + abort: filtered revision '8d28cbe335f3' (not in 'served.hidden' subset) + [255] + +secret hidden: + + $ hg -R client-ssh pull --remote-hidden -r 1c6afd79eb66 + pulling from ssh://user@dummy/repo-with-hidden + abort: filtered revision '1c6afd79eb66' (not in 'served.hidden' subset) + [255] + +============= +Final cleanup +============= + + $ killdaemons.py diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-rhg.t --- a/tests/test-rhg.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-rhg.t Thu Jun 22 11:36:37 2023 +0200 @@ -71,6 +71,21 @@ ../../../file2 ../../../file3 + $ $NO_FALLBACK rhg files --config ui.relative-paths=legacy + ../../../file1 + ../../../file2 + ../../../file3 + + $ $NO_FALLBACK rhg files --config ui.relative-paths=false + file1 + file2 + file3 + + $ $NO_FALLBACK rhg files --config ui.relative-paths=true + ../../../file1 + ../../../file2 + ../../../file3 + Listing tracked files through broken pipe $ $NO_FALLBACK rhg files | head -n 1 ../../../file1 diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-rollback.t --- a/tests/test-rollback.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-rollback.t Thu Jun 22 11:36:37 2023 +0200 @@ -72,7 +72,7 @@ $ hg update bar 1 files updated, 0 files merged, 1 files removed, 0 files unresolved (activating bookmark bar) - $ cat .hg/undo.backup.branch + $ cat 
.hg/undo.backup.branch.bck test $ hg log -G --template '{rev} [{branch}] ({bookmarks}) {desc|firstline}\n' o 2 [test] (foo) add b diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-ssh.t --- a/tests/test-ssh.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-ssh.t Thu Jun 22 11:36:37 2023 +0200 @@ -529,7 +529,7 @@ no changes found devel-peer-request: getbundle devel-peer-request: bookmarks: 1 bytes - devel-peer-request: bundlecaps: 270 bytes + devel-peer-request: bundlecaps: 275 bytes devel-peer-request: cg: 1 bytes devel-peer-request: common: 122 bytes devel-peer-request: heads: 122 bytes diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-stabletailgraph.t --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tests/test-stabletailgraph.t Thu Jun 22 11:36:37 2023 +0200 @@ -0,0 +1,1146 @@ +==================================== +Test for the stabletailgraph package +==================================== + +This test file contains a bunch of small test graphs with some minimal yet +non-trivial structure, on which the various stable-tail graph and stable-tail +sort functions are tested. + +Each case consists of the creation of the interesting graph structure, followed +by a check, for each noteworthy node, of: +- the stable-tail sort output (with the linear parts globbed), +- the leap set, +- the specific leap set. + +In the ASCII art of the diagrams, the side of the exclusive part which is +followed in priority is denoted with "<" or ">" if it is on the left or right +respectively. + +The intermediary linear parts in the example graph are there to force the +exclusive part choice (made on a min rank condition). + + +Setup +===== + +Enable the rank computation to test sorting based on the rank. 
+ + $ cat << EOF >> $HGRCPATH + > [format] + > exp-use-changelog-v2=enable-unstable-format-and-corrupt-my-data + > + > [alias] + > test-sts = debug::stable-tail-sort -T '{tags},' + > test-leaps = debug::stable-tail-sort-leaps -T '{tags}' + > test-log = log --graph -T '{tags} rank={_fast_rank}' --rev 'tagged()' + > EOF + + +Example 1: single merge node +============================ + +A base case with one branchpoint "b" and one merge node "e". + +The exclusive part, starting with the lowest-ranking parent "c" of "e", +appears first in stable-tail sort of "e" and "f". + +# f +# | +# e +# | +# --<-- +# | | +# c d +# | | +# --+-- <- at this point, the sort of "e" is done consuming its +# | exclusive part [c] and jumps back to its other parent "d" +# b +# | +# a + + $ hg init example-1 + $ cd example-1 + $ hg debugbuilddag '.:a*a:b*b:c-- | <- in the sort of "f", we need to skip "c" and leap to the +# | | | inherited part of "d" +# | +---- +# b | +# | c +# | | +# --+-- +# | +# a + + $ hg init example-4 + $ cd example-4 + $ hg debugbuilddag '.:a*a+1:b-- | +# | | | +# | g | +# | | | +# | +---- <- in the sort of "f", leaping from "g" to "b" +# b | +# | c +# | | +# --+-- +# | +# a + + $ hg init example-5 + $ cd example-5 + $ hg debugbuilddag '.:a*a+2:b-- --<-- +# | | | | +# g e h i +# | | | | +# | --+-- | <- at this point, for the sort of "l", the iteration on +# f | | the end of excl(j) is postponed to the iteration of +# | d | excl(k) +# | | | +# | c | +# | | | +# ---+--- | +# | | +# b | +# | | +# ----+----- +# | +# a + + $ hg init example-7 + $ cd example-7 + $ hg debugbuilddag \ + > '.:a*a:b*b:c*c:d*d:e*b:f------ +# | | +# n l +# | | +# | ----<---- +# | | | +# | i k +# m | | +# | ---<--- | +# | | | | +# | d h | +# | | | j +# | | g | +# | c | | +# | | +----- +# -----+ | +# | f +# b | +# | e <- Done with excl(o) by element count, without +# | | having emitted "b". Implicitly unstack open +# ---+--- merges to leap e->n. 
+# | # a + + $ hg init example-10 + $ cd example-10 + $ hg debugbuilddag ' + > .:a + > *a:b.:c.:d + > *a:e.:f.:g.:h + > *d/h:i + > *f:j+6:k + > *i/k:l + > *b:m+15:n + > *n/l:o. + > ' + $ hg test-log + o o rank=34 + |\ + | o n rank=18 + | : + | o m rank=3 + | | + o | l rank=17 + |\ \ + | o | k rank=10 + | : | + | o | j rank=4 + | | | + o | | i rank=9 + |\ \ \ + | o | | h rank=5 + | | | | + | o | | g rank=4 + | |/ / + | o | f rank=3 + | | | + | o | e rank=2 + | | | + o | | d rank=4 + | | | + o---+ c rank=3 + / / + | o b rank=2 + |/ + o a rank=1 + + +Check the stable-tail sort of "o": + + $ hg test-sts o + o,l,i,d,c,h,g,k,*,j,f,e,n,*,m,b,a, (no-eol) (glob) + +Stable-tail sort of "l" for reference: + + $ hg test-sts l + l,i,d,c,b,h,g,k,*,j,f,e,a, (no-eol) (glob) + +Check the corresponding leaps: + + $ hg test-leaps o + ch + gk + en + + $ hg test-leaps --specific o + ch + + $ hg test-leaps l + bh + gk + + $ hg test-leaps --specific l + + $ cd .. + + +Example 11: adjusting other leaps with the same destination +=========================================================== + +This is a variant of the previous test, checking the adjustment of leaps having +the same destination in particular. + +# r +# | +# ------>------ +# | | +# | o +# q | +# | ------>------ +# | | | +# | n l +# | | | +# | | ----<---- +# p | | | +# | | i k +# | m | | +# | | ---<--- | +# | | | | | +# | | d h | +# | | | | j +# -----]|[---+ | | <- in sts(r): leap d->h +# | | g | +# | c | | +# | | +----- +# -----+ | <- the leap c->h of sts(o) +# | f is shadowed in sts(r) +# b | +# | e +# | | +# ---+--- +# | +# a + + $ hg init example-11 + $ cd example-11 + $ hg debugbuilddag ' + > .:a + > *a:b.:c.:d + > *a:e.:f.:g.:h + > *d/h:i + > *f:j+6:k + > *i/k:l + > *b:m+15:n + > *n/l:o + > *c:p+31:q + > *o/q:r.
+ > ' + $ hg test-log + o r rank=67 + |\ + | o q rank=35 + | : + | o p rank=4 + | | + o | o rank=34 + |\ \ + | o | n rank=18 + | : | + | o | m rank=3 + | | | + o | | l rank=17 + |\ \ \ + | o | | k rank=10 + | : | | + | o | | j rank=4 + | | | | + o | | | i rank=9 + |\ \ \ \ + | o | | | h rank=5 + | | | | | + | o | | | g rank=4 + | |/ / / + | o | | f rank=3 + | | | | + | o | | e rank=2 + | | | | + o-----+ d rank=4 + / / / + | | o c rank=3 + | |/ + | o b rank=2 + |/ + o a rank=1 + + +Check the stable-tail sort of "r": + + $ hg test-sts r + r,o,l,i,d,h,g,k,*,j,f,e,n,*,m,q,*,p,c,b,a, (no-eol) (glob) + +Stable-tail sort of "o" for reference: + + $ hg test-sts o + o,l,i,d,c,h,g,k,*,j,f,e,n,*,m,b,a, (no-eol) (glob) + +Check the associated leaps: + + $ hg test-leaps r + dh + gk + en + mq + + $ hg test-leaps --specific r + dh + + $ hg test-leaps o + ch + gk + en + + $ hg test-leaps --specific o + ch + + $ cd .. + + +Example 12 +========== + +This is a variant of the previous test, checking the adjustments of leaps +in the open merge stack having a lower destination (which should appear only +later in the stable-tail sort of the head). + +# t +# | +# ------>------ +# | | +# | o +# s | +# | ------>------ +# | | | +# | n l +# r | | +# | | ----<---- +# | | | | +# --<-- | i k +# | | m | | +# p q | ---<--- | +# | | | | | | +# | ---]|[--]|[----+ | +# | | | | | +# | | d h | +# | | | | j +# -------]|[---+ | | <- d->k is sts(t) +# | | g | +# | c | | +# | | +----- +# -----+ | <- c->h in sts(o), not applying in sts(t) +# | f +# b | +# | e +# | | +# ---+--- +# | +# a + + $ hg init example-12 + $ cd example-12 + $ hg debugbuilddag ' + > .:a + > *a:b.:c.:d + > *a:e.:f.:g.:h + > *d/h:i + > *f:j+6:k + > *i/k:l + > *b:m+15:n + > *n/l:o + > *c:p + > *h:q + > *p/q:r+25:s + > *o/s:t. 
+ > ' + $ hg test-log + o t rank=63 + |\ + | o s rank=35 + | : + | o r rank=10 + | |\ + | | o q rank=6 + | | | + | o | p rank=4 + | | | + o | | o rank=34 + |\ \ \ + | o | | n rank=18 + | : | | + | o | | m rank=3 + | | | | + o | | | l rank=17 + |\ \ \ \ + | o | | | k rank=10 + | : | | | + | o | | | j rank=4 + | | | | | + o-------+ i rank=9 + | | | | | + | | | | o h rank=5 + | | | | | + | +-----o g rank=4 + | | | | + | o | | f rank=3 + | | | | + | o | | e rank=2 + | | | | + o-----+ d rank=4 + / / / + | | o c rank=3 + | |/ + | o b rank=2 + |/ + o a rank=1 + + +Check the stable-tail sort of "t": + + $ hg test-sts t + t,o,l,i,d,k,*,j,n,*,m,s,*,r,p,c,b,q,h,g,f,e,a, (no-eol) (glob) + +Stable-tail sort of "o" for reference: + + $ hg test-sts o + o,l,i,d,c,h,g,k,*,j,f,e,n,*,m,b,a, (no-eol) (glob) + +Check the associated leaps: + + $ hg test-leaps t + dk + jn + ms + bq + + $ hg test-leaps --specific t + dk + jn + + $ hg test-leaps o + ch + gk + en + + $ hg test-leaps --specific o + ch + + $ cd .. diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-status.t --- a/tests/test-status.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-status.t Thu Jun 22 11:36:37 2023 +0200 @@ -246,6 +246,11 @@ ! deleted ? unknown +hg status -0: + + $ hg status -0 --config rhg.on-unsupported=abort + A added\x00A copied\x00R removed\x00! deleted\x00? 
unknown\x00 (no-eol) (esc) + hg status -A: + + $ hg status -A diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-stream-bundle-v2.t --- a/tests/test-stream-bundle-v2.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-stream-bundle-v2.t Thu Jun 22 11:36:37 2023 +0200 @@ -1,6 +1,21 @@ #require no-reposimplestore -Test creating a consuming stream bundle v2 +#testcases stream-v2 stream-v3 + +#if stream-v2 + $ bundle_format="streamv2" + $ stream_version="v2" +#endif +#if stream-v3 + $ bundle_format="streamv3-exp" + $ stream_version="v3-exp" + $ cat << EOF >> $HGRCPATH + > [experimental] + > stream-v3=yes + > EOF +#endif + +Test creating and consuming stream bundles v2 and v3 $ getmainid() { > hg -R main log --template '{node}\n' --rev "$1" @@ -42,16 +57,22 @@ > A > EOF - $ hg bundle -a --type="none-v2;stream=v2" bundle.hg + $ hg bundle -a --type="none-v2;stream=$stream_version" bundle.hg $ hg debugbundle bundle.hg Stream params: {} - stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlogv1%2Csparserevlog} (mandatory: True) (no-zstd !) - stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (zstd no-rust !) - stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (rust !) + stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 no-zstd !) + stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 zstd no-rust !) + stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 rust !) + stream3-exp -- {requirements: generaldelta%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 no-zstd !)
+ stream3-exp -- {requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 zstd no-rust !) + stream3-exp -- {requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 rust !) $ hg debugbundle --spec bundle.hg - none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog (no-zstd !) - none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (zstd no-rust !) - none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (rust !) + none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog (stream-v2 no-zstd !) + none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v2 zstd no-rust !) + none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v2 rust !) + none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog (stream-v3 no-zstd !) + none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v3 zstd no-rust !) + none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v3 rust !) Test that we can apply the bundle as a stream clone bundle @@ -66,10 +87,12 @@ $ cat http.pid >> $DAEMON_PIDS $ cd .. 
- $ hg clone http://localhost:$HGPORT streamv2-clone-implicit --debug + +#if stream-v2 + $ hg clone http://localhost:$HGPORT stream-clone-implicit --debug using http://localhost:$HGPORT/ sending capabilities command - sending clonebundles command + sending clonebundles_manifest command applying clone bundle from http://localhost:$HGPORT1/bundle.hg bundle2-input-bundle: with-transaction bundle2-input-part: "stream2" (params: 3 mandatory) supported @@ -82,9 +105,9 @@ adding [s] data/C.i (66 bytes) adding [s] data/D.i (66 bytes) adding [s] data/E.i (66 bytes) + adding [s] phaseroots (43 bytes) adding [s] 00manifest.i (584 bytes) adding [s] 00changelog.i (595 bytes) - adding [s] phaseroots (43 bytes) adding [c] branch2-served (94 bytes) adding [c] rbc-names-v1 (7 bytes) adding [c] rbc-revs-v1 (40 bytes) @@ -123,10 +146,10 @@ updating the branch cache (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob) - $ hg clone --stream http://localhost:$HGPORT streamv2-clone-explicit --debug + $ hg clone --stream http://localhost:$HGPORT stream-clone-explicit --debug using http://localhost:$HGPORT/ sending capabilities command - sending clonebundles command + sending clonebundles_manifest command applying clone bundle from http://localhost:$HGPORT1/bundle.hg bundle2-input-bundle: with-transaction bundle2-input-part: "stream2" (params: 3 mandatory) supported @@ -139,9 +162,9 @@ adding [s] data/C.i (66 bytes) adding [s] data/D.i (66 bytes) adding [s] data/E.i (66 bytes) + adding [s] phaseroots (43 bytes) adding [s] 00manifest.i (584 bytes) adding [s] 00changelog.i (595 bytes) - adding [s] phaseroots (43 bytes) adding [c] branch2-served (94 bytes) adding [c] rbc-names-v1 (7 bytes) adding [c] rbc-revs-v1 (40 bytes) @@ -179,3 +202,122 @@ 5 files updated, 0 files merged, 0 files removed, 0 files unresolved updating the branch cache (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob) + +#endif + +#if stream-v3 + $ hg clone http://localhost:$HGPORT 
stream-clone-implicit --debug + using http://localhost:$HGPORT/ + sending capabilities command + sending clonebundles_manifest command + applying clone bundle from http://localhost:$HGPORT1/bundle.hg + bundle2-input-bundle: with-transaction + bundle2-input-part: "stream3-exp" (params: 1 mandatory) supported + applying stream bundle + 11 entries to transfer + starting 4 threads for background file closing (?) + starting 4 threads for background file closing (?) + adding [s] data/A.i (66 bytes) + adding [s] data/B.i (66 bytes) + adding [s] data/C.i (66 bytes) + adding [s] data/D.i (66 bytes) + adding [s] data/E.i (66 bytes) + adding [s] phaseroots (43 bytes) + adding [s] 00manifest.i (584 bytes) + adding [s] 00changelog.i (595 bytes) + adding [c] branch2-served (94 bytes) + adding [c] rbc-names-v1 (7 bytes) + adding [c] rbc-revs-v1 (40 bytes) + transferred 1.65 KB in * seconds (* */sec) (glob) + bundle2-input-part: total payload size 1852 + bundle2-input-bundle: 1 parts total + updating the branch cache + finished applying clone bundle + query 1; heads + sending batch command + searching for changes + all remote heads known locally + no changes found + sending getbundle command + bundle2-input-bundle: with-transaction + bundle2-input-part: "listkeys" (params: 1 mandatory) supported + bundle2-input-part: "phase-heads" supported + bundle2-input-part: total payload size 24 + bundle2-input-bundle: 2 parts total + checking for updated bookmarks + updating to branch default + resolving manifests + branchmerge: False, force: False, partial: False + ancestor: 000000000000, local: 000000000000+, remote: 9bc730a19041 + A: remote created -> g + getting A + B: remote created -> g + getting B + C: remote created -> g + getting C + D: remote created -> g + getting D + E: remote created -> g + getting E + 5 files updated, 0 files merged, 0 files removed, 0 files unresolved + updating the branch cache + (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob) + + $ 
hg clone --stream http://localhost:$HGPORT stream-clone-explicit --debug + using http://localhost:$HGPORT/ + sending capabilities command + sending clonebundles_manifest command + applying clone bundle from http://localhost:$HGPORT1/bundle.hg + bundle2-input-bundle: with-transaction + bundle2-input-part: "stream3-exp" (params: 1 mandatory) supported + applying stream bundle + 11 entries to transfer + starting 4 threads for background file closing (?) + starting 4 threads for background file closing (?) + adding [s] data/A.i (66 bytes) + adding [s] data/B.i (66 bytes) + adding [s] data/C.i (66 bytes) + adding [s] data/D.i (66 bytes) + adding [s] data/E.i (66 bytes) + adding [s] phaseroots (43 bytes) + adding [s] 00manifest.i (584 bytes) + adding [s] 00changelog.i (595 bytes) + adding [c] branch2-served (94 bytes) + adding [c] rbc-names-v1 (7 bytes) + adding [c] rbc-revs-v1 (40 bytes) + transferred 1.65 KB in * seconds (* */sec) (glob) + bundle2-input-part: total payload size 1852 + bundle2-input-bundle: 1 parts total + updating the branch cache + finished applying clone bundle + query 1; heads + sending batch command + searching for changes + all remote heads known locally + no changes found + sending getbundle command + bundle2-input-bundle: with-transaction + bundle2-input-part: "listkeys" (params: 1 mandatory) supported + bundle2-input-part: "phase-heads" supported + bundle2-input-part: total payload size 24 + bundle2-input-bundle: 2 parts total + checking for updated bookmarks + updating to branch default + resolving manifests + branchmerge: False, force: False, partial: False + ancestor: 000000000000, local: 000000000000+, remote: 9bc730a19041 + A: remote created -> g + getting A + B: remote created -> g + getting B + C: remote created -> g + getting C + D: remote created -> g + getting D + E: remote created -> g + getting E + 5 files updated, 0 files merged, 0 files removed, 0 files unresolved + updating the branch cache + (sent 4 HTTP requests and * bytes; 
received * bytes in responses) (glob) + +#endif diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-strip.t --- a/tests/test-strip.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-strip.t Thu Jun 22 11:36:37 2023 +0200 @@ -251,7 +251,7 @@ $ hg debugbundle .hg/strip-backup/* Stream params: {Compression: BZ} - changegroup -- {nbchanges: 1, version: 02} (mandatory: True) + changegroup -- {nbchanges: 1, version: 03} (mandatory: True) 264128213d290d868c54642d13aeaa3675551a78 cache:rev-branch-cache -- {} (mandatory: False) phase-heads -- {} (mandatory: True) diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-transaction-rollback-on-revlog-split.t --- a/tests/test-transaction-rollback-on-revlog-split.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-transaction-rollback-on-revlog-split.t Thu Jun 22 11:36:37 2023 +0200 @@ -120,6 +120,41 @@ $ cd .. +Test a successful pull +====================== + +Make sure everything goes through as expected if we don't do any crash + + $ hg clone --quiet --rev 1 troffset-computation troffset-success + $ cd troffset-success + +Reference size: + $ f -s file + file: size=1024 + $ f -s .hg/store/data/file* + .hg/store/data/file.i: size=1174 + + $ hg pull ../troffset-computation + pulling from ../troffset-computation + searching for changes + adding changesets + adding manifests + adding file changes + added 3 changesets with 18 changes to 6 files + new changesets c99a94cae9b1:64874a3b0160 + (run 'hg update' to get a working copy) + + +The inline revlog has been replaced + + $ f -s .hg/store/data/file* + .hg/store/data/file.d: size=267307 + .hg/store/data/file.i: size=320 + + + $ hg verify -q + $ cd ..
+ Test a hard crash after the file was split but before the transaction was committed =================================================================================== @@ -181,7 +216,7 @@ data/file.i 1174 data/file.d 0 $ cat .hg/store/journal.backupfiles | tr -s '\000' ' ' | tr -s '\00' ' '| grep 'data.*/file' - data/file.i data/journal.backup.file.i 0 + data/file.i data/journal.backup.file.i.bck 0 data-s/file 0 recover is rolling the split back, the fncache is still valid @@ -415,6 +450,9 @@ $ cat $TESTTMP/reader.stderr $ cat $TESTTMP/reader.stdout 1 (no-eol) + + $ hg verify -q + $ cd .. pending hooks @@ -453,5 +491,7 @@ size=1024 $ cat stderr + $ hg verify -q + $ cd .. diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-treemanifest.t --- a/tests/test-treemanifest.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-treemanifest.t Thu Jun 22 11:36:37 2023 +0200 @@ -211,12 +211,14 @@ (branch merge, don't forget to commit) $ hg ci -m 'merge of flat manifests to new flat manifest' - $ hg serve -p $HGPORT -d --pid-file=hg.pid --errorlog=errors.log - $ cat hg.pid >> $DAEMON_PIDS + $ cd .. + $ hg -R repo-flat serve -p $HGPORT -d \ + > --pid-file=port-0-hg.pid \ + > --errorlog=port-0-errors.log + $ cat port-0-hg.pid >> $DAEMON_PIDS Create clone with tree manifests enabled - $ cd .. $ hg clone --config experimental.treemanifest=1 \ > http://localhost:$HGPORT repo-mixed -r 1 adding changesets @@ -226,6 +228,7 @@ new changesets 5b02a3e8db7e:581ef6037d8b updating to branch default 11 files updated, 0 files merged, 0 files removed, 0 files unresolved + $ cat port-0-errors.log $ cd repo-mixed $ test -d .hg/store/meta [1] @@ -654,9 +657,12 @@ $ cp -R .hg/store-newcopy/. .hg/store Test cloning a treemanifest repo over http. - $ hg serve -p $HGPORT -d --pid-file=hg.pid --errorlog=errors.log - $ cat hg.pid >> $DAEMON_PIDS $ cd .. 
+ $ hg -R deeprepo serve -p $HGPORT -d \ + > --pid-file=port-0-hg.pid \ + > --errorlog=port-0-errors.log + $ cat port-0-hg.pid >> $DAEMON_PIDS + We can clone even with the knob turned off and we'll get a treemanifest repo. $ hg clone --config experimental.treemanifest=False \ > --config experimental.changegroup3=True \ @@ -670,7 +676,8 @@ updating to branch default 8 files updated, 0 files merged, 0 files removed, 0 files unresolved No server errors. - $ cat deeprepo/errors.log + $ cat port-0-errors.log + requires got updated to include treemanifest $ hg debugrequires -R deepclone | grep treemanifest treemanifest @@ -713,12 +720,13 @@ new changesets 775704be6f52:523e5c631710 updating to branch default 8 files updated, 0 files merged, 0 files removed, 0 files unresolved - $ cd deeprepo-basicstore - $ hg debugrequires | grep store + $ hg -R deeprepo-basicstore debugrequires | grep store [1] - $ hg serve -p $HGPORT1 -d --pid-file=hg.pid --errorlog=errors.log - $ cat hg.pid >> $DAEMON_PIDS - $ cd .. + $ hg -R deeprepo-basicstore serve -p $HGPORT1 -d \ + > --pid-file=port-1-hg.pid \ + > --errorlog=port-1-errors.log + $ cat port-1-hg.pid >> $DAEMON_PIDS + $ hg clone --config format.usefncache=False \ > --config experimental.changegroup3=True \ > http://localhost:$HGPORT deeprepo-encodedstore @@ -730,12 +738,12 @@ new changesets 775704be6f52:523e5c631710 updating to branch default 8 files updated, 0 files merged, 0 files removed, 0 files unresolved - $ cd deeprepo-encodedstore - $ hg debugrequires | grep fncache + $ hg -R deeprepo-encodedstore debugrequires | grep fncache [1] - $ hg serve -p $HGPORT2 -d --pid-file=hg.pid --errorlog=errors.log - $ cat hg.pid >> $DAEMON_PIDS - $ cd .. 
+ $ hg -R deeprepo-encodedstore serve -p $HGPORT2 -d \ + > --pid-file=port-2-hg.pid \ + > --errorlog=port-2-errors.log + $ cat port-2-hg.pid >> $DAEMON_PIDS Local clone with basicstore $ hg clone -U deeprepo-basicstore local-clone-basicstore @@ -756,6 +764,7 @@ 28 files to transfer, * of data (glob) transferred * in * seconds (*) (glob) $ hg -R stream-clone-basicstore verify -q + $ cat port-1-errors.log Stream clone with encodedstore $ hg clone --config experimental.changegroup3=True --stream -U \ @@ -764,6 +773,7 @@ 28 files to transfer, * of data (glob) transferred * in * seconds (*) (glob) $ hg -R stream-clone-encodedstore verify -q + $ cat port-2-errors.log Stream clone with fncachestore $ hg clone --config experimental.changegroup3=True --stream -U \ @@ -772,6 +782,7 @@ 22 files to transfer, * of data (glob) transferred * in * seconds (*) (glob) $ hg -R stream-clone-fncachestore verify -q + $ cat port-0-errors.log Packed bundle $ hg -R deeprepo debugcreatestreamclonebundle repo-packed.hg @@ -842,3 +853,52 @@ 1:678d3574b88c 1:678d3574b88c $ hg --config extensions.strip= strip -r . -q + +Testing repository upgrade +-------------------------- + + $ for x in 1 2 3 4 5 6 7 8 9; do + > echo $x > file-$x # make sure we have interesting compression + > echo $x > dir/foo-$x # make sure we have interesting compression + > hg add file-$x + > hg add dir/foo-$x + > done + $ hg ci -m 'have some content' + $ f -s .hg/store/00manifest.* + .hg/store/00manifest.i: size=798 (no-pure !) + .hg/store/00manifest.i: size=784 (pure !) + $ f -s .hg/store/meta/dir/00manifest* + .hg/store/meta/dir/00manifest.i: size=556 (no-pure !) + .hg/store/meta/dir/00manifest.i: size=544 (pure !) + $ hg debugupgraderepo --config format.revlog-compression=none --config experimental.treemanifest=yes --run --quiet --no-backup + upgrade will perform the following actions: + + requirements + preserved: * (glob) + removed: revlog-compression-zstd (no-pure !)
+ added: exp-compression-none + + processed revlogs: + - all-filelogs + - changelog + - manifest + + $ hg verify + checking changesets + checking manifests + checking directory manifests + crosschecking files in changesets and manifests + checking files + checking dirstate + checked 4 changesets with 22 changes to 20 files + $ f -s .hg/store/00manifest.* + .hg/store/00manifest.i: size=1002 + $ f -s .hg/store/meta/dir/00manifest* + .hg/store/meta/dir/00manifest.i: size=721 + $ hg files --rev tip | wc -l + \s*20 (re) + +testing cache update warming persistent nodemaps +------------------------------------------------ + + $ hg debugupdatecache diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-upgrade-repo.t --- a/tests/test-upgrade-repo.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-upgrade-repo.t Thu Jun 22 11:36:37 2023 +0200 @@ -192,11 +192,11 @@ summary: r7 -Do not yet support upgrading treemanifest repos +Do not yet support downgrading treemanifest repos $ hg --config experimental.treemanifest=true init treemanifest $ hg -R treemanifest debugupgraderepo - abort: cannot upgrade repository; unsupported source requirement: treemanifest + abort: cannot upgrade repository; requirement would be removed: treemanifest [255] Cannot add treemanifest requirement during upgrade @@ -868,7 +868,7 @@ phaseroots requires undo - undo.backup.fncache + undo.backup.fncache.bck undo.backupfiles unless --no-backup is passed diff -r 41b9eb302d95 -r 9a4db474ef1a tests/test-walk.t --- a/tests/test-walk.t Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/test-walk.t Thu Jun 22 11:36:37 2023 +0200 @@ -61,6 +61,37 @@ f mammals/Procyonidae/raccoon mammals/Procyonidae/raccoon f mammals/skunk mammals/skunk +Test 'filepath:' pattern + + $ hg debugwalk -v -I 'filepath:mammals/Procyonidae/cacomistle' + * matcher: + + f mammals/Procyonidae/cacomistle mammals/Procyonidae/cacomistle + + $ hg debugwalk -v -I 'filepath:mammals/Procyonidae' + * matcher: + + + $ hg debugwalk -v -X 'filepath:beans/borlotti' + 
* matcher: + , + m2=> + f beans/black beans/black + f beans/kidney beans/kidney + f beans/navy beans/navy + f beans/pinto beans/pinto + f beans/turtle beans/turtle + f fennel fennel + f fenugreek fenugreek + f fiddlehead fiddlehead + f mammals/Procyonidae/cacomistle mammals/Procyonidae/cacomistle + f mammals/Procyonidae/coatimundi mammals/Procyonidae/coatimundi + f mammals/Procyonidae/raccoon mammals/Procyonidae/raccoon + f mammals/skunk mammals/skunk + +Test relative paths + $ cd mammals $ hg debugwalk -v * matcher: diff -r 41b9eb302d95 -r 9a4db474ef1a tests/testlib/ext-stream-clone-steps.py --- a/tests/testlib/ext-stream-clone-steps.py Thu Jun 22 11:18:47 2023 +0200 +++ b/tests/testlib/ext-stream-clone-steps.py Thu Jun 22 11:36:37 2023 +0200 @@ -1,3 +1,18 @@ +# A utility extension that helps taking a break during a streamclone operation +# +# This extension is used through two environment variables: +# +# HG_TEST_STREAM_WALKED_FILE_1 +# +# path of a file created by the process generating the streaming clone when +# it is done gathering data and is ready to unlock the repository and move +# to the streaming of content. +# +# HG_TEST_STREAM_WALKED_FILE_2 +# +# path of a file to be manually created to let the process generating the +# streaming clone proceed to streaming file content. + from mercurial import ( encoding, extensions,