Tue, 04 Oct 2016 12:51:54 +0200 mergecopies: add logic to process incomplete data
Gábor Stefanik <gabor.stefanik@nng.com> [Tue, 04 Oct 2016 12:51:54 +0200] rev 30202
mergecopies: add logic to process incomplete data We first combine incomplete copies on the two sides of the topological CA into complete copies. Any leftover incomplete copies are then combined with the incomplete divergences to reconstruct divergences spanning over the topological CA. Finally we promote any divergences falsely flagged as incomplete to full divergences. Right now, there is nothing generating incomplete copy/divergence data, so this code does nothing. Changes to _checkcopies to populate these dicts are coming later in this series.
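To make the description concrete, here is a minimal sketch of the combination step in Python; the dict names and the exact shape of the incomplete-copy data are assumptions, since the real dicts are only populated by later patches in this series:

    def combinecopies(incomplete1, incomplete2, copy):
        # Hypothetical shape: incomplete1 maps destination -> intermediate
        # name at the topological CA, incomplete2 maps that intermediate
        # name -> original source. Where the two meet, a complete copy
        # spanning the CA is formed.
        for dst, mid in list(incomplete1.items()):
            if mid in incomplete2:
                copy[dst] = incomplete2.pop(mid)
                del incomplete1[dst]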
Wed, 12 Oct 2016 11:54:03 +0200 checkcopies: handle divergences contained entirely in tca::ctx
Gábor Stefanik <gabor.stefanik@nng.com> [Wed, 12 Oct 2016 11:54:03 +0200] rev 30201
checkcopies: handle divergences contained entirely in tca::ctx During a graftlike merge, _checkcopies runs from ctx to tca, possibly passing over the merge base. If there is a rename both before and after the base, then we're actually dealing with divergent renames. If there is no rename on the other side of tca, then the divergence is contained entirely in the range of one _checkcopies invocation, and should be detected "in the loop" without having to rely on the other _checkcopies pass.
Thu, 25 Aug 2016 22:02:26 +0200 update: enable copy tracing for backwards and non-linear updates
Gábor Stefanik <gabor.stefanik@nng.com> [Thu, 25 Aug 2016 22:02:26 +0200] rev 30200
update: enable copy tracing for backwards and non-linear updates As a followup to the issue4028 series, this fixes a variant of the issue that can occur when updating with uncommitted local changes. The duplicated .hgsub warning is coming from wc.dirty(). We would previously skip this call because it's only relevant when we're going to perform copy tracing, which we didn't do before. The change to the update summary line is because we now treat the rename as a proper rename (which counts as a change), rather than an add+delete pair (which counts as a change and a delete).
Mon, 26 Sep 2016 10:47:37 +0200 bashcompletion: allow skipping completion for 'hg status'
Mathias De Maré <mathias.de_mare@nokia.com> [Mon, 26 Sep 2016 10:47:37 +0200] rev 30199
bashcompletion: allow skipping completion for 'hg status' On systems with large repositories and slow disks, the calls to 'hg status' make autocomplete annoyingly slow. This fix makes it possible to avoid the slowdown.
Sun, 21 Aug 2016 01:12:00 +0200 tests: add more test coverage of phase changes when pushing
Mads Kiilerich <madski@unity3d.com> [Sun, 21 Aug 2016 01:12:00 +0200] rev 30198
tests: add more test coverage of phase changes when pushing Prepare for test coverage of phase updates with future push --readonly option, both with and without actually pushing changesets.
Thu, 13 Oct 2016 02:19:43 +0200 mergecopies: invoke _computenonoverlap for both base and tca during merges
Gábor Stefanik <gabor.stefanik@nng.com> [Thu, 13 Oct 2016 02:19:43 +0200] rev 30197
mergecopies: invoke _computenonoverlap for both base and tca during merges The algorithm of _checkcopies can only walk backwards in the DAG, never forward. Because of this, the two _checkcopies passes need to run from their respective endpoints to the TCA to cover the entire subgraph where the merge is being performed. However, detection of files new in both endpoints, as well as directory rename detection, needs to run with respect to the merge base, so we need lists of new files both from the TCA's and the merge base's viewpoint to correctly detect renames in a graft-like merge scenario. (Series reworked by Pierre-Yves David)
Tue, 18 Oct 2016 00:00:43 +0200 copies: make it possible to distinguish between _computenonoverlap invocations
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Tue, 18 Oct 2016 00:00:43 +0200] rev 30196
copies: make it possible to distinguish between _computenonoverlap invocations _computenonoverlap needs to be invoked twice during a graft, and debugging messages should be distinguishable between the two invocations.
Thu, 13 Oct 2016 02:03:54 +0200 copies: make _checkcopies handle simple renames in a rotated DAG
Gábor Stefanik <gabor.stefanik@nng.com> [Thu, 13 Oct 2016 02:03:54 +0200] rev 30195
copies: make _checkcopies handle simple renames in a rotated DAG This introduces a distinction between "merge base" and "topological common ancestor". During a regular merge, these two are identical. Graft, however, performs a merge in a rotated DAG, where the merge base will not be a common ancestor at all in the original DAG. To correctly find copies in case of a graft, we need to take both the merge base and the topological CA into account, and track any renames between them in reverse. Fortunately we can detect this in advance, see comment in the code about "backwards". This patch only supports finding non-divergent renames contained entirely between the merge base and the topological CA. Further patches are coming to support more complex cases. (Pierre-Yves David was involved in the cleanup of this patch.)
Thu, 13 Oct 2016 02:03:49 +0200 copies: compute a suitable TCA if base turns out to be unsuitable
Gábor Stefanik <gabor.stefanik@nng.com> [Thu, 13 Oct 2016 02:03:49 +0200] rev 30194
copies: compute a suitable TCA if base turns out to be unsuitable This will be used later in an update to _checkcopies. (Pierre-Yves David was involved in the cleanup of this patch.)
Thu, 13 Oct 2016 01:47:33 +0200 copies: detect graft-like merges
Gábor Stefanik <gabor.stefanik@nng.com> [Thu, 13 Oct 2016 01:47:33 +0200] rev 30193
copies: detect graft-like merges Right now, nothing changes as a result of this, but we want to handle grafts differently from ordinary merges later. (Series developed together with Pierre-Yves David)
Wed, 12 Oct 2016 12:41:28 +0200 tests: introduce tests for grafting through renames
Gábor Stefanik <gabor.stefanik@nng.com> [Wed, 12 Oct 2016 12:41:28 +0200] rev 30192
tests: introduce tests for grafting through renames These cover all currently known cases of renames being grafted, or changes being grafted through renames. Right now, most of these cases are broken. Later patches in this series will make them behave correctly. The testcases heavily rely on each other, which would make it very difficult to separate them and add them one-by-one for each case fixed by a patch. Separating them should perhaps be a 4.1 task, if it doesn't slow down the tests too much. (Developed together with Pierre-Yves David)
Mon, 17 Oct 2016 17:12:24 +0200 largefiles: fix 'deleted' files sometimes persistently appearing with R status stable
Mads Kiilerich <madski@unity3d.com> [Mon, 17 Oct 2016 17:12:24 +0200] rev 30191
largefiles: fix 'deleted' files sometimes persistently appearing with R status A code snippet that has been around since largefiles was introduced was wrong: standins no longer found in lfdirstate have *not* been removed - they have probably just been deleted ... or not created. This wrong reporting meant that 'up -C' didn't undo the change and didn't sync the two dirstates. Instead of reporting such files as removed, propagate the deletion to the standin file and report the file as deleted.
Sun, 16 Oct 2016 02:29:45 +0200 largefiles: safer handling of interruptions while updating modifications stable
Mads Kiilerich <madski@unity3d.com> [Sun, 16 Oct 2016 02:29:45 +0200] rev 30190
largefiles: safer handling of interruptions while updating modifications Largefiles are fragile with the design where dirstate and lfdirstate must be kept in sync. To be less fragile, mark all clean largefiles as unsure ("normallookup") before updating standins. After standins have been updated and we know exactly which largefile standins actually were changed, mark the unchanged largefiles back to clean ("normal"). This makes the failure mode safer: if interrupted, the next command will continue to perform extra hashing of all largefiles, so any largefile that is out of sync with its standin will be marked dirty, show up in status, and can be cleaned with update --clean.
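The ordering described above, sketched (helper names here are assumptions; the real logic lives in the largefiles extension):

    def updatelfiles_safely(lfdirstate, cleanfiles, updatestandins):
        for f in cleanfiles:
            lfdirstate.normallookup(f)  # mark clean largefiles as unsure first
        changed = updatestandins()      # an interruption here is now recoverable
        for f in cleanfiles:
            if f not in changed:
                lfdirstate.normal(f)    # only unchanged files return to clean
        lfdirstate.write()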
Sun, 16 Oct 2016 02:26:38 +0200 largefiles: test coverage of fatal interruption of update stable
Mads Kiilerich <madski@unity3d.com> [Sun, 16 Oct 2016 02:26:38 +0200] rev 30189
largefiles: test coverage of fatal interruption of update Test using existing changesets in a clean working directory, revealing problems with files that don't show up as modified, or that do show up as removed, when they just have not been written yet.
Wed, 12 Oct 2016 21:33:45 +0200 checkcopies: add a sanity check against false-positive copies
Gábor Stefanik <gabor.stefanik@nng.com> [Wed, 12 Oct 2016 21:33:45 +0200] rev 30188
checkcopies: add a sanity check against false-positive copies When grafting a copy backwards through a rename, a copy is wrongly detected, which causes the graft to be applied inappropriately, in a destructive way. Make sure that the old file name really exists in the common ancestor, and bail out if it doesn't. This fixes the aggravated case of bug 5343, although the basic issue (failure to duplicate the copy information) still occurs.
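The guard amounts to something like this minimal sketch (names are illustrative, not the actual patch):

    def plausiblecopy(source, ca):
        # A copy whose source never existed in the common ancestor 'ca'
        # cannot be a real copy across this merge; callers bail out
        # (skip the candidate) when this returns False.
        return source in ca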
Sun, 16 Oct 2016 10:38:52 -0700 exchange: refactor APIs to obtain bundle data (API)
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 16 Oct 2016 10:38:52 -0700] rev 30187
exchange: refactor APIs to obtain bundle data (API) Currently, exchange.getbundle() returns either a cg1unpacker or a util.chunkbuffer (in the case of bundle2). This is kinda OK, as both expose a .read() to consumers. However, localpeer.getbundle() has code inferring what the response type is based on arguments and converts the util.chunkbuffer returned in the bundle2 case to a bundle2.unbundle20 instance. This is a sign that the API for exchange.getbundle() is not ideal because it doesn't consistently return an "unbundler" instance. In addition, unbundlers mask the fact that there is an underlying generator of changegroup data. In both cg1 and bundle2, this generator is being fed into a util.chunkbuffer so it can be re-exposed as a file object. util.chunkbuffer is a nice abstraction. However, it should only be used "at the edges." This is because keeping data as a generator is more efficient than converting it to a chunkbuffer, especially if we convert that chunkbuffer back to a generator (as is the case in some code paths currently). This patch refactors exchange.getbundle() into exchange.getbundlechunks(). The new API returns an iterator of chunks instead of a file-like object. Callers of exchange.getbundle() have been updated to use the new API. There is a minor change of behavior in test-getbundle.t. This is because `hg debuggetbundle` isn't defining bundlecaps. As a result, a cg1 data stream and unpacker is being produced. This is getting fed into a new bundle20 instance via bundle2.writebundle(), which uses a backchannel mechanism between changegroup generation to add the "nbchanges" part parameter. I never liked this backchannel mechanism and I plan to remove it someday. `hg bundle` still produces the "nbchanges" part parameter, so there should be no user-visible change of behavior. I consider this "regression" a bug in `hg debuggetbundle`. And that bug is captured by an existing "TODO" in the code to use bundle2 capabilities.
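In caller terms, the refactoring looks roughly like this sketch (argument list trimmed; not the exact call sites):

    from mercurial import exchange, util

    def bundledata(repo, source, heads=None, common=None):
        # New API: an iterator of byte chunks, not a file-like object.
        chunks = exchange.getbundlechunks(repo, source, heads=heads,
                                          common=common)
        # Wrap only at the edge where a .read() interface is truly needed.
        return util.chunkbuffer(chunks)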
Thu, 13 Oct 2016 01:30:14 +0200 mergecopies: rename 'ca' to 'base'
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Thu, 13 Oct 2016 01:30:14 +0200] rev 30186
mergecopies: rename 'ca' to 'base' This variable was named after the common ancestor. It is actually the merge base, which might differ from the common ancestor in the graft case. We rename the variable before a larger refactoring to clarify the situation. A similar rename was also applied to 'checkcopies' in a prior changeset.
Thu, 13 Oct 2016 01:26:33 +0200 copies: move variable documentation from checkcopies to mergecopies
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Thu, 13 Oct 2016 01:26:33 +0200] rev 30185
copies: move variable documentation from checkcopies to mergecopies It appears that 'mergecopies' is the function consuming this data, so we move the documentation there.
Tue, 11 Oct 2016 02:21:42 +0200 checkcopies: pass data as a dictionary of dictionaries
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Tue, 11 Oct 2016 02:21:42 +0200] rev 30184
checkcopies: pass data as a dictionary of dictionaries More are coming.
Tue, 11 Oct 2016 02:15:23 +0200 checkcopies: move 'movewithdir' initialisation right before its usage
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Tue, 11 Oct 2016 02:15:23 +0200] rev 30183
checkcopies: move 'movewithdir' initialisation right before its usage The 'movewithdir' dictionary had a lot of related logic spread all around 'mergecopies'. However, it never actually contains anything until the very last loop in that function. We move the (simplified) variable definition there for clarity.
Fri, 14 Oct 2016 01:53:15 +0200 cmdutil: satisfy expectations in dirstateguard.__del__, even if __init__ fails
Mads Kiilerich <madski@unity3d.com> [Fri, 14 Oct 2016 01:53:15 +0200] rev 30182
cmdutil: satisfy expectations in dirstateguard.__del__, even if __init__ fails Python "delstructors" are terrible - this one because it assumed that __init__ had completed before it was called. That would not necessarily be the case if the repository was read-only or broken and saving the dirstate thus failed in unexpected ways. That could give confusing warnings about missing '_active' after failures. To fix that, make sure all member variables are "declared" before doing anything that possibly could fail. [Famous last words.]
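The defensive pattern in miniature (a generic sketch of the idea, not the actual dirstateguard code):

    import warnings

    class guard(object):
        def __init__(self, savestate):
            # "Declare" everything __del__ touches before any statement
            # that can raise; savestate() may fail on a read-only or
            # broken repository.
            self._active = False
            self._closed = False
            savestate()
            self._active = True

        def close(self):
            self._closed = True

        def __del__(self):
            if self._active and not self._closed:
                warnings.warn('guard deleted without being closed')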
Fri, 14 Oct 2016 01:53:15 +0200 util: increase filechunkiter size to 128k
Mads Kiilerich <madski@unity3d.com> [Fri, 14 Oct 2016 01:53:15 +0200] rev 30181
util: increase filechunkiter size to 128k util.filechunkiter has been using a chunk size of 64k for more than 10 years, also in years where Moore's law still was a law. It is probably ok to bump it now and perhaps get a slight win in some cases. Also, largefiles have been using 128k for a long time. Specifying that size multiple times (or forgetting to do it) seems a bit stupid. Decreasing it to 64k also seems unfortunate. Thus, we will set the default chunksize to 128k and use the default everywhere.
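For reference, the iterator itself is essentially this (simplified; the real one also honors an optional byte limit):

    def filechunkiter(f, size=131072):  # 131072 bytes = 128k, the new default
        # Yield fixed-size reads from file object f until EOF.
        while True:
            chunk = f.read(size)
            if not chunk:
                break
            yield chunk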
Wed, 12 Oct 2016 12:22:18 +0200 largefiles: always use filechunkiter when iterating files
Mads Kiilerich <madski@unity3d.com> [Wed, 12 Oct 2016 12:22:18 +0200] rev 30180
largefiles: always use filechunkiter when iterating files Before, we would sometimes use the default iterator over large files. That iterator is line based and would add extra buffering and use odd chunk sizes which could give some overhead. copyandhash can't just apply a filechunkiter as it sometimes is passed a genuine generator when downloading remotely.
Fri, 14 Oct 2016 23:33:00 +0900 revset: for x^2, do not take null as a valid p2 revision
Yuya Nishihara <yuya@tcha.org> [Fri, 14 Oct 2016 23:33:00 +0900] rev 30179
revset: for x^2, do not take null as a valid p2 revision Since we don't count null p2 revision as a parent, x^2 should never return null even if null is explicitly populated.
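Conceptually the fix is a guard like this when resolving the second parent (a sketch with assumed names, not the actual revset code):

    from mercurial import node

    def secondparent(cl, rev):
        # Only report p2 when it is a real parent; nullrev is not one.
        p2 = cl.parentrevs(rev)[1]
        return p2 if p2 != node.nullrev else None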
Mon, 10 Oct 2016 22:30:09 +0200 revset: make follow() reject more than one start revision
Yuya Nishihara <yuya@tcha.org> [Mon, 10 Oct 2016 22:30:09 +0200] rev 30178
revset: make follow() reject more than one start revision Taking only the last revision is inconsistent because ancestors(set) follows all revisions given, and theoretically follow(startrev=set) == ancestors(set). I'm planning to add support for multiple start revisions, but that won't fit into the 4.0 time frame. So reject multiple revisions now to avoid a future BC. len(revs) might be slow if revs were large, but we don't care since a valid revs should have only one element.
Sat, 15 Oct 2016 17:10:53 -0700 bundle2: only emit compressed chunks if they have data
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 15 Oct 2016 17:10:53 -0700] rev 30177
bundle2: only emit compressed chunks if they have data This is similar to 58467204cac0. Not all calls into the compressor return compressed data, as the compressor may buffer compressed output internally. It is cheaper to check for empty chunks than to send empty chunks through the generator. When generating a gzip-v2 bundle of the mozilla-unified repo, this change results in 50,093 empty chunks not being sent through the generator (out of 1,902,996 total input chunks).
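The idea generalizes to any streaming compressor; a minimal self-contained illustration with zlib:

    import zlib

    def compresschunks(chunks):
        # A streaming compressor buffers internally, so compress() often
        # returns b''; checking for empty output here is cheaper than
        # pushing empty chunks through the downstream generator.
        z = zlib.compressobj()
        for chunk in chunks:
            data = z.compress(chunk)
            if data:
                yield data
        tail = z.flush()
        if tail:
            yield tail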
Sat, 15 Oct 2016 15:01:14 -0700 color: add some documentation for custom terminfo codes
Danek Duvall <danek.duvall@oracle.com> [Sat, 15 Oct 2016 15:01:14 -0700] rev 30176
color: add some documentation for custom terminfo codes
Thu, 13 Oct 2016 13:10:01 -0700 color: debugcolor should emit the user-defined colors
Danek Duvall <danek.duvall@oracle.com> [Thu, 13 Oct 2016 13:10:01 -0700] rev 30175
color: debugcolor should emit the user-defined colors This also fixes a long-standing bug that reversed the sense of the color name and the label used to print it, which was never relevant before.
Thu, 13 Oct 2016 12:01:41 -0700 color: ignore effects missing from terminfo
Danek Duvall <danek.duvall@oracle.com> [Thu, 13 Oct 2016 12:01:41 -0700] rev 30174
color: ignore effects missing from terminfo If terminfo mode is in effect, and an effect is used which is missing from the terminfo database, simply silently ignore the request, leaving the output unaffected rather than causing a crash.
Thu, 13 Oct 2016 11:48:17 -0700 color: allow for user-configurable terminfo codes for effects
Danek Duvall <danek.duvall@oracle.com> [Thu, 13 Oct 2016 11:48:17 -0700] rev 30173
color: allow for user-configurable terminfo codes for effects If the entry in the terminfo database for your terminal is missing some attributes, it should be possible to create them on the fly without resorting to just making them a color. This change allows you to have [color] terminfo.<effect> = <code> where <effect> might be something like "dim" or "bold", and <code> is the escape sequence that would otherwise have come from a call to tigetstr(). If an escape character is needed, use "\E". Any such settings will override attributes that are present in the terminfo database.
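For example, a configuration along these lines; the specific escape codes shown are the standard ANSI sequences for these effects, used here purely as an illustration:

    [color]
    # supply attributes missing from the terminfo entry; "\E" stands for
    # the escape character, as described above
    terminfo.dim = \E[2m
    terminfo.bold = \E[1m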
Tue, 04 Oct 2016 04:06:48 -0700 update: warn if cwd was deleted
Stanislau Hlebik <stash@fb.com> [Tue, 04 Oct 2016 04:06:48 -0700] rev 30172
update: warn if cwd was deleted During an update, directories are deleted as soon as they have no entries. But if the current working directory is deleted, it causes problems in complex commands like 'hg split'. This commit adds a warning that will help users figure out the problem faster.
Thu, 13 Oct 2016 13:34:53 +0200 parsers: avoid PySliceObject cast on Python 3
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 13 Oct 2016 13:34:53 +0200] rev 30171
parsers: avoid PySliceObject cast on Python 3 PySlice_GetIndicesEx() accepts a PySliceObject* on Python 2 and a PyObject* on Python 3. Casting to PySliceObject* on Python 3 was yielding a compiler warning. So stop doing that. With this patch, I no longer see any compiler warnings when building the core extensions for Python 3!
Thu, 13 Oct 2016 13:27:14 +0200 bdiff: include util.h
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 13 Oct 2016 13:27:14 +0200] rev 30170
bdiff: include util.h Without this, IS_PY3K isn't defined and the preprocessor uses the incorrect module loading code, causing the module to fail to load at run-time. After this patch, all our C extensions (except for watchman's) appear to import correctly in Python 3!
Thu, 13 Oct 2016 13:22:40 +0200 parsers: alias more PyInt* symbols on Python 3
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 13 Oct 2016 13:22:40 +0200] rev 30169
parsers: alias more PyInt* symbols on Python 3 I feel dirty for having to do this. But this is currently our approach for dealing with PyInt -> PyLong in Python 3 for this file. This removes a ton of compiler warnings by fixing unresolved symbols.
Thu, 13 Oct 2016 13:17:23 +0200 manifest: use PyVarObject_HEAD_INIT
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 13 Oct 2016 13:17:23 +0200] rev 30168
manifest: use PyVarObject_HEAD_INIT More appeasing the Python 3 and compiler overlords. The code is equivalent.
Thu, 13 Oct 2016 13:14:14 +0200 dirs: use PyVarObject_HEAD_INIT
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 13 Oct 2016 13:14:14 +0200] rev 30167
dirs: use PyVarObject_HEAD_INIT This makes a compiler warning go away on Python 3.
Thu, 13 Oct 2016 09:27:37 +0100 py3: use namedtuple._replace to produce new tokens
Martijn Pieters <mjpieters@fb.com> [Thu, 13 Oct 2016 09:27:37 +0100] rev 30166
py3: use namedtuple._replace to produce new tokens
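For context, tokenize's TokenInfo is a namedtuple, so producing a modified copy is a single call. A stdlib illustration (not Mercurial's actual source transformer):

    import io
    import tokenize

    tokens = tokenize.tokenize(io.BytesIO(b"name = 1").readline)
    next(tokens)                           # skip the ENCODING token
    tok = next(tokens)                     # the NAME token for 'name'
    newtok = tok._replace(string='name_')  # new token, original untouched
    print(newtok.string)                   # -> name_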
Fri, 14 Oct 2016 17:55:02 +0100 py3: refactor token parsing to handle call args properly
Martijn Pieters <mjpieters@fb.com> [Fri, 14 Oct 2016 17:55:02 +0100] rev 30165
py3: refactor token parsing to handle call args properly The token parsing was getting unwieldy and was too naive about accessing arguments.
Thu, 13 Oct 2016 13:47:47 +0200 eol: make sure we always release the wlock when writing cache
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Thu, 13 Oct 2016 13:47:47 +0200] rev 30164
eol: make sure we always release the wlock when writing cache If any exception were to happen after we acquired the wlock, we could leave it unreleased. We move the wlock release into a 'finally:' clause, as it should be.
Thu, 13 Oct 2016 21:42:11 +0200 pathencode: use assert() for PyBytes_Check()
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 13 Oct 2016 21:42:11 +0200] rev 30163
pathencode: use assert() for PyBytes_Check() This should have been added in a8c948ee3668. I sent the patch to the list prematurely.
Wed, 12 Oct 2016 12:22:18 +0200 merge: clarify warning for (not) merging flags without ancestor
Mads Kiilerich <madski@unity3d.com> [Wed, 12 Oct 2016 12:22:18 +0200] rev 30162
merge: clarify warning for (not) merging flags without ancestor Give hints why it can't merge and what it will do instead.
Wed, 12 Oct 2016 12:22:18 +0200 merge: only show "cannot merge flags for %s" warning if flags are different
Mads Kiilerich <madski@unity3d.com> [Wed, 12 Oct 2016 12:22:18 +0200] rev 30161
merge: only show "cannot merge flags for %s" warning if flags are different
Wed, 12 Oct 2016 12:22:18 +0200 tests: add test coverage of merging x flag without ancestor
Mads Kiilerich <madski@unity3d.com> [Wed, 12 Oct 2016 12:22:18 +0200] rev 30160
tests: add test coverage of merging x flag without ancestor It is more noisy than necessary - we will fix that later.
Sat, 08 Oct 2016 17:07:43 +0200 dirs: document Py_SIZE weirdness
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 08 Oct 2016 17:07:43 +0200] rev 30159
dirs: document Py_SIZE weirdness Assigning to what looks like a function is clown shoes. Document that it is a macro referring to a struct member.
Wed, 12 Oct 2016 12:22:54 +0200 record: return code from underlying commit
Philippe Pepiot <philippe.pepiot@logilab.fr> [Wed, 12 Oct 2016 12:22:54 +0200] rev 30158
record: return code from underlying commit
Fri, 14 Oct 2016 09:52:38 +0200 commit: return 1 for interactive commit with no changes (issue5397)
Philippe Pepiot <philippe.pepiot@logilab.fr> [Fri, 14 Oct 2016 09:52:38 +0200] rev 30157
commit: return 1 for interactive commit with no changes (issue5397) For consistency with non-interactive commit.
Fri, 14 Oct 2016 03:03:39 +0200 demandimport: disable lazy import of __builtin__
Mads Kiilerich <madski@unity3d.com> [Fri, 14 Oct 2016 03:03:39 +0200] rev 30156
demandimport: disable lazy import of __builtin__ Demandimport uses the "try to import __builtin__, else use builtins" trick to handle Python 3. External libraries and extensions might do something similar. On Fedora 25 subversion-python-1.9.4-4.fc25.x86_64 will do just that (except the opposite) ... and it failed all subversion convert tests because demandimport was hiding that it didn't have builtins but should use __builtin__. The builtin module has already been imported when demandimport is loaded so there is no point in trying to import it on demand. Just always ignore both variants in demandimport.
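The trick in question is the usual two-liner, which demandimport must now leave alone in both directions:

    # Portable builtins import used by code that runs on Python 2 and 3;
    # demandimport now always ignores both module names.
    try:
        import __builtin__ as builtins  # Python 2
    except ImportError:
        import builtins                 # Python 3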
Thu, 13 Oct 2016 12:50:27 +0200 changelog: disable delta chains
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 13 Oct 2016 12:50:27 +0200] rev 30155
changelog: disable delta chains

This patch disables delta chains on changelogs. After this patch, new entries on changelogs - including existing changelogs - will be stored as the fulltext of that data (likely compressed). No delta computation will be performed. An overview of delta chains and data justifying this change follows.

Revlogs try to store entries as a delta against a previous entry (either a parent revision in the case of generaldelta or the previous physical revision when not using generaldelta). Most of the time this is the correct thing to do: it frequently results in less CPU usage and smaller storage. Delta chains are most effective when the base revision being deltaed against is similar to the current data. This tends to occur naturally for manifests and file data, since only small parts of each tend to change with each revision.

Changelogs, however, are a different story. Changelog entries represent changesets/commits. And unless commits in a repository are homogeneous (same author, changing same files, similar commit messages, etc), a delta from one entry to the next tends to be relatively large compared to the size of the entry. This means that delta chains tend to be short. How short? Here is the full vs delta revision breakdown on some real world repos:

    Repo               % Full   % Delta   Max Length
    hg                 45.8     54.2      6
    mozilla-central    42.4     57.6      8
    mozilla-unified    42.5     57.5      17
    pypy               46.1     53.9      6
    python-zstandard   46.1     53.9      3

(I threw in python-zstandard as an example of a repo that is homogeneous. It contains a small Python project with changes all from the same author.)

Contrast this with the manifest revlog for these repos, where 99+% of revisions are deltas and delta chains run into the thousands.

So delta chains aren't as useful on changelogs. But even a short delta chain may provide benefits. Let's measure that.

Delta chains may require less CPU to read revisions if the CPU time spent reading smaller deltas is less than the CPU time used to decompress larger individual entries. We can measure this via `hg perfrevlog -c -d 1` to iterate a revlog to resolve each revision's fulltext. Here are the results of that command on a repo using delta chains in its changelog and on a repo without delta chains:

    hg (forward)
    ! wall 0.407008 comb 0.410000 user 0.410000 sys 0.000000 (best of 25)
    ! wall 0.390061 comb 0.390000 user 0.390000 sys 0.000000 (best of 26)
    hg (reverse)
    ! wall 0.515221 comb 0.520000 user 0.520000 sys 0.000000 (best of 19)
    ! wall 0.400018 comb 0.400000 user 0.390000 sys 0.010000 (best of 25)
    mozilla-central (forward)
    ! wall 4.508296 comb 4.490000 user 4.490000 sys 0.000000 (best of 3)
    ! wall 4.370222 comb 4.370000 user 4.350000 sys 0.020000 (best of 3)
    mozilla-central (reverse)
    ! wall 5.758995 comb 5.760000 user 5.720000 sys 0.040000 (best of 3)
    ! wall 4.346503 comb 4.340000 user 4.320000 sys 0.020000 (best of 3)
    mozilla-unified (forward)
    ! wall 4.957088 comb 4.950000 user 4.940000 sys 0.010000 (best of 3)
    ! wall 4.660528 comb 4.650000 user 4.630000 sys 0.020000 (best of 3)
    mozilla-unified (reverse)
    ! wall 6.119827 comb 6.110000 user 6.090000 sys 0.020000 (best of 3)
    ! wall 4.675136 comb 4.670000 user 4.670000 sys 0.000000 (best of 3)
    pypy (forward)
    ! wall 1.231122 comb 1.240000 user 1.230000 sys 0.010000 (best of 8)
    ! wall 1.164896 comb 1.160000 user 1.160000 sys 0.000000 (best of 9)
    pypy (reverse)
    ! wall 1.467049 comb 1.460000 user 1.460000 sys 0.000000 (best of 7)
    ! wall 1.160200 comb 1.170000 user 1.160000 sys 0.010000 (best of 9)

The data clearly shows that it takes less wall and CPU time to resolve revisions when there are no delta chains in the changelogs, regardless of the direction of traversal. Furthermore, not using a delta chain means that fulltext resolution in reverse is as fast as iterating forward. So not using delta chains on the changelog is a clear CPU win for reading operations.

An example of a user-visible operation showing this speed-up is revset evaluation. Here are results for `hg perfrevset 'author(gps) or author(mpm)'`:

    hg
    ! wall 1.655506 comb 1.660000 user 1.650000 sys 0.010000 (best of 6)
    ! wall 1.612723 comb 1.610000 user 1.600000 sys 0.010000 (best of 7)
    mozilla-central
    ! wall 17.629826 comb 17.640000 user 17.600000 sys 0.040000 (best of 3)
    ! wall 17.311033 comb 17.300000 user 17.260000 sys 0.040000 (best of 3)

What about 00changelog.i size?

    Repo               Delta Chains   No Delta Chains
    hg                 7,033,250      6,976,771
    mozilla-central    82,978,748     81,574,623
    mozilla-unified    88,112,349     86,702,162
    pypy               20,740,699     20,659,741

The data shows that removing delta chains from the changelog makes the changelog smaller.

Delta chains are also used during changegroup generation. This operation essentially converts a series of revisions to one large delta chain. And changegroup generation is smart: if the delta in the revlog matches what the changegroup is emitting, it will reuse the delta instead of recalculating it. We can measure the impact removing changelog delta chains has on changegroup generation via `hg perfchangegroupchangelog`:

    hg
    ! wall 1.589245 comb 1.590000 user 1.590000 sys 0.000000 (best of 7)
    ! wall 1.788060 comb 1.790000 user 1.790000 sys 0.000000 (best of 6)
    mozilla-central
    ! wall 17.382585 comb 17.380000 user 17.340000 sys 0.040000 (best of 3)
    ! wall 20.161357 comb 20.160000 user 20.120000 sys 0.040000 (best of 3)
    mozilla-unified
    ! wall 18.722839 comb 18.720000 user 18.680000 sys 0.040000 (best of 3)
    ! wall 21.168075 comb 21.170000 user 21.130000 sys 0.040000 (best of 3)
    pypy
    ! wall 4.828317 comb 4.830000 user 4.820000 sys 0.010000 (best of 3)
    ! wall 5.415455 comb 5.420000 user 5.410000 sys 0.010000 (best of 3)

The data shows eliminating delta chains makes the changelog part of changegroup generation slower. This is expected, since we now have to compute deltas for revisions where we could recycle the delta before.

It is worth putting this regression into the context of overall changegroup times. Here is the rough total CPU time spent in changegroup generation for various repos while using delta chains on the changelog:

    Repo              CPU Time (s)   CPU Time w/ compression
    hg                4.50           7.05
    mozilla-central   111.1          222.0
    pypy              28.68          75.5

Before compression, removing delta chains from the changelog adds ~4.4% overhead to hg changegroup generation, 1.3% to mozilla-central, and 2.0% to pypy. When you factor in zlib compression, these percentages are roughly divided by 2.

While the increased CPU usage for changegroup generation is unfortunate, I think it is acceptable because the percentage is small, server operators (those likely impacted most by this) have other mechanisms to mitigate CPU consumption (namely reducing the zlib compression level and pre-generated clone bundles), and because there is room to optimize this in the future. For example, we could use the nullid as the base revision, effectively encoding the full revision for each entry in the changegroup. When doing this, `hg perfchangegroupchangelog` nearly halves:

    mozilla-unified
    ! wall 21.168075 comb 21.170000 user 21.130000 sys 0.040000 (best of 3)
    ! wall 11.196461 comb 11.200000 user 11.190000 sys 0.010000 (best of 3)

This looks very promising as a future optimization opportunity.

It's worth noting the changes in test-acl.t to the changegroup part size. This is because revision 6 in the changegroup had a delta chain of length 2 before, and after this patch its base revision is nullrev. When the base revision is nullrev, cg2packer.deltaparent() hardcodes the *previous* revision from the changegroup as the delta parent. This caused the delta in the changegroup to switch base revisions, the delta to change, and the size to change accordingly. While the size increased in this case, I think sizes will remain the same on average, as the delta base for changelog revisions doesn't matter too much (as this patch shows). So, I don't consider this a regression.
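To see why full snapshots pay off on reads, consider what resolving one revision costs; a conceptual sketch, not revlog's real API:

    def readrevision(store, rev):
        # Walk back to the chain base, then apply each delta in turn.
        chain = []
        base = rev
        while store.deltaparent(base) is not None:
            chain.append(base)
            base = store.deltaparent(base)
        text = store.snapshot(base)        # a fulltext is a single read
        for r in reversed(chain):
            text = store.applydelta(text, r)
        return text

With delta chains disabled, every changelog revision is its own chain base, so the loop never runs and reverse iteration is as cheap as forward.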