Wed, 13 Mar 2024 11:51:11 +0100 tags-cache: directly operate on rev-num warming hgtagsfnodescache
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 13 Mar 2024 11:51:11 +0100] rev 51598
tags-cache: directly operate on rev-num warming hgtagsfnodescache

Not having to go through node-ids speeds things up notably.

### data-env-vars.name = mozilla-try-2023-03-22-zstd-sparse-revlog
# benchmark.name = hg.debug.debug-update-cache
# bin-env-vars.hg.flavor = default
# bin-env-vars.hg.py-re2-module = default
# benchmark.variants.pre-state = warm

before-this-series:  19.947581
before-this-changes: 18.916804 (-5.17%, -1.03)
this-changesets:     17.493725 (-12.30%, -2.45)
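A minimal sketch of the idea behind this change, with hypothetical cache methods (`has_rev`, `set_rev`, `write`) and a hypothetical `compute_fnode_for_rev` helper standing in for the real hgtagsfnodescache API::

    def warm_tags_fnodes_cache(repo, cache):
        # stay in rev-num space: localrepo iteration yields revision numbers
        for rev in repo:
            if cache.has_rev(rev):        # hypothetical: entry already warm
                continue
            fnode = compute_fnode_for_rev(repo, rev)   # hypothetical helper
            cache.set_rev(rev, fnode)     # no rev -> node-id round trip needed
        cache.write()                     # persist the warmed entries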
Wed, 13 Mar 2024 11:38:28 +0100 tags-cache: skip the filternode step if we are not going to use it
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 13 Mar 2024 11:38:28 +0100] rev 51597
tags-cache: skip the filternode step if we are not going to use it

When warming the hgtagsfnodescache, we don't need the actual result, so we can simply skip the part that "filters" the fnodes we read from the cache. This provides a quite visible speedup to the top-level `hg debugupdatecache` function.

### data-env-vars.name = mozilla-try-2023-03-22-zstd-sparse-revlog
# benchmark.name = hg.debug.debug-update-cache
# bin-env-vars.hg.flavor = default
# bin-env-vars.hg.py-re2-module = default
# benchmark.variants.pre-state = warm

before: 19.947581
after:  18.916804 (-5.17%, -1.03)
Wed, 13 Mar 2024 11:34:21 +0100 tags-cache: add a dedicated warm cache function to hgtagsfnodescache
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 13 Mar 2024 11:34:21 +0100] rev 51596
tags-cache: add a dedicated warm cache function to hgtagsfnodescache

Having a dedicated API point will help to optimize that specific usage. Right now, doing a full warming takes a long time, even when the cache is already filled.
Tue, 09 Apr 2024 22:37:15 +0200 outgoing: add a simple fastpath when there is no common
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 09 Apr 2024 22:37:15 +0200] rev 51595
outgoing: add a simple fastpath when there is no common

This further speeds up cases like `hg bundle --all` for larger repositories.

### data-env-vars.name = mozilla-try-2023-03-22-zstd-sparse-revlog
# benchmark.name = hg.command.bundle
# bin-env-vars.hg.flavor = default
# bin-env-vars.hg.py-re2-module = default
# benchmark.variants.revs = all
# benchmark.variants.type = none-streamv2

before: 316.749699
after:  311.165461 (-1.76%, -5.58)

There is further work to be done in this area, like not doing any outgoing computation at all in the stream case. However, the recent changes already give us a large win for a small amount of local work.

### benchmark.name = hg.command.bundle
# bin-env-vars.hg.flavor = default
# bin-env-vars.hg.py-re2-module = default
# benchmark.variants.revs = all
# benchmark.variants.type = none-streamv2

## data-env-vars.name = mercurial-public-2024-03-22-zstd-sparse-revlog
pre-%ln-change: 1.263859
the-%ln-change: 0.700229 (-44.60%, -0.56)
prev-changeset: 0.496050 (-60.75%, -0.77)
this-changeset: 0.495243 (-60.81%, -0.77)

## data-env-vars.name = tryton-public-2024-03-22-zstd-sparse-revlog
pre-%ln-change: 2.975765
the-%ln-change: 1.870798 (-37.13%, -1.10)
prev-changeset: 1.461583 (-50.88%, -1.51)
this-changeset: 1.469185 (-50.63%, -1.51)

## data-env-vars.name = pypy-2024-03-22-zstd-sparse-revlog
pre-%ln-change: 4.540080
the-%ln-change: 3.401700 (-25.07%, -1.14)
prev-changeset: 2.915810 (-35.78%, -1.62)
this-changeset: 2.911643 (-35.87%, -1.63)

## data-env-vars.name = heptapod-public-2024-03-25-zstd-sparse-revlog
pre-%ln-change: 10.138396
the-%ln-change: 7.750458 (-23.55%, -2.39)
prev-changeset: 6.665565 (-34.25%, -3.47)
this-changeset: 6.672078 (-34.19%, -3.47)

## data-env-vars.name = mozilla-try-2023-03-22-zstd-sparse-revlog
pre-%ln-change: 399.484481
the-%ln-change: 346.508952 (-13.26%, -52.98)
prev-changeset: 316.749699 (-20.71%, -82.73)
this-changeset: 311.165461 (-22.11%, -88.32)
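A hedged sketch of what such a fast path can look like; `missing_revs` is an illustrative function, not the real `outgoing` code, though `changelog.ancestors()` and the `repo.revs()` format specifiers are standard Mercurial APIs::

    from mercurial.node import nullrev

    def missing_revs(repo, common_revs, head_revs):
        cl = repo.changelog
        if not common_revs or list(common_revs) == [nullrev]:
            # fast path: with no real common, everything reachable from the
            # requested heads is missing; no set arithmetic needed
            return cl.ancestors(head_revs, inclusive=True)
        # generic path: ancestors of heads minus ancestors of common
        return repo.revs(b'::%ld - ::%ld', head_revs, common_revs)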
Tue, 09 Apr 2024 22:36:35 +0200 outgoing: rework the handling of the `missingroots` case to be faster
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 09 Apr 2024 22:36:35 +0200] rev 51594
outgoing: rework the handling of the `missingroots` case to be faster

The previous implementation was slow, to the point that it was taking a significant share of an `hg bundle --type none-streamv2` call. We rework the code to compute the same value much faster, making the operation disappear from the `hg bundle --type none-streamv2` profile.

Someone might remark that producing a streamclone does not require an `outgoing` object; however, that is a matter for another day. There are other users of `missingroots` (non-stream `hg bundle` calls, for example), and they will also benefit from this rework.

We implement an old TODO in the process, directly computing the missing and common attributes, as we already have most of the elements at hand.

### benchmark.name = hg.command.bundle
# bin-env-vars.hg.flavor = default
# bin-env-vars.hg.py-re2-module = default
# benchmark.variants.revs = all
# benchmark.variants.type = none-streamv2

## data-env-vars.name = heptapod-public-2024-03-25-zstd-sparse-revlog
before: 7.750458
after:  6.665565 (-14.00%, -1.08)

## data-env-vars.name = mercurial-public-2024-03-22-zstd-sparse-revlog
before: 0.700229
after:  0.496050 (-29.16%, -0.20)

## data-env-vars.name = mozilla-try-2023-03-22-zstd-sparse-revlog
before: 346.508952
after:  316.749699 (-8.59%, -29.76)

## data-env-vars.name = pypy-2024-03-22-zstd-sparse-revlog
before: 3.401700
after:  2.915810 (-14.28%, -0.49)

## data-env-vars.name = tryton-public-2024-03-22-zstd-sparse-revlog
before: 1.870798
after:  1.461583 (-21.87%, -0.41)

note: this whole `missingroots` mode of outgoing has a limited number of callers and could likely be replaced by something simpler (like taking an explicit "missing_revs" set, for example). However, that is a wider change, and we focus on a small-impact, quick rework that does not change the API for now.
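A hedged sketch of the "compute missing and common directly" idea, using illustrative names rather than the real `outgoing` attributes::

    def outgoing_from_missingroots(repo, missingroots_revs):
        cl = repo.changelog
        missing = set(cl.descendants(missingroots_revs))   # excludes the roots
        missing.update(missingroots_revs)
        # everything that is not missing is, by construction, common
        common = [r for r in repo if r not in missing]
        missing_heads = repo.revs(b'heads(%ld)', sorted(missing))
        return sorted(common), sorted(missing), list(missing_heads)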
Sun, 14 Apr 2024 02:27:10 +0200 proxy-vfs: also proxy the `audit` attribute
Pierre-Yves David <pierre-yves.david@octobus.net> [Sun, 14 Apr 2024 02:27:10 +0200] rev 51593
proxy-vfs: also proxy the `audit` attribute

In the previous changeset, we had to do a little dance to access the useful `audit` attribute. We now provide a proper accessor for it.

We don't update the code in `perf.py` because it has to remain compatible with older versions of Mercurial. This will just be nicer in the future.
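A minimal illustration of proxying an attribute on a wrapping vfs class; the classes are toy stand-ins for the real vfs hierarchy, only the `audit` name mirrors the attribute discussed above::

    class basevfs:
        def __init__(self):
            self.audit = object()      # stand-in for the real path auditor

    class proxyvfs:
        def __init__(self, vfs):
            self.vfs = vfs

        @property
        def audit(self):
            # forward to the wrapped vfs instead of forcing callers to
            # reach through `.vfs` themselves
            return self.vfs.audit

    wrapped = proxyvfs(basevfs())
    assert wrapped.audit is wrapped.vfs.audit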
Sat, 13 Apr 2024 23:40:28 +0200 perf: clear vfs audit_cache before each run
Pierre-Yves David <pierre-yves.david@octobus.net> [Sat, 13 Apr 2024 23:40:28 +0200] rev 51592
perf: clear vfs audit_cache before each run

When generating a stream clone, we spend a large amount of time auditing paths. Before this change, the first run was warming the vfs cache for the other runs, leading to a large runtime difference and a "faulty" reported timing for the operation. We now clear this important cache between runs to get a more realistic timing.

Below are some examples of the median time change when clearing these caches. The maximum time for a run did not change significantly.

### data-env-vars.name = mozilla-central-2018-08-01-zstd-sparse-revlog
# benchmark.name = hg.perf.exchange.stream.generate
# bin-env-vars.hg.flavor = default
# bin-env-vars.hg.py-re2-module = default
# benchmark.variants.version = latest

no-clearing:    17.289905
cache-clearing: 21.587965 (+24.86%, +4.30)

## data-env-vars.name = mozilla-central-2024-03-22-zstd-sparse-revlog
no-clearing:    32.670748
cache-clearing: 40.467095 (+23.86%, +7.80)

## data-env-vars.name = mozilla-try-2019-02-18-zstd-sparse-revlog
no-clearing:    37.838858
cache-clearing: 46.072749 (+21.76%, +8.23)

## data-env-vars.name = mozilla-unified-2024-03-22-zstd-sparse-revlog
no-clearing:    32.969395
cache-clearing: 39.646209 (+20.25%, +6.68)

In addition, this significantly reduces the timing difference between the performance command from the perf extension and a real `hg bundle` call producing a stream bundle. Some significant differences remain, especially on the "mozilla-try" repositories, but they are now smaller. Note that some of that difference will actually not be attributable to the stream generation itself (like maybe phases or branchmap computation).

Below are some benchmarks done on a currently draft changeset fixing some unrelated slowness in `hg bundle` (34a78972af409d1ff37c29e60f6ca811ad1a457d).

### data-env-vars.name = mozilla-central-2018-08-01-zstd-sparse-revlog
# bin-env-vars.hg.flavor = default
# bin-env-vars.hg.py-re2-module = default

hg.perf.exchange.stream.generate: 21.587965
hg.command.bundle:                24.301799 (+12.57%, +2.71)

## data-env-vars.name = mozilla-central-2024-03-22-zstd-sparse-revlog
hg.perf.exchange.stream.generate: 40.467095
hg.command.bundle:                44.831317 (+10.78%, +4.36)

## data-env-vars.name = mozilla-unified-2024-03-22-zstd-sparse-revlog
hg.perf.exchange.stream.generate: 39.646209
hg.command.bundle:                45.395258 (+14.50%, +5.75)

## data-env-vars.name = mozilla-try-2019-02-18-zstd-sparse-revlog
hg.perf.exchange.stream.generate: 46.072749
hg.command.bundle:                55.882608 (+21.29%, +9.81)

## data-env-vars.name = mozilla-try-2023-03-22-zlib-general-delta
hg.perf.exchange.stream.generate: 334.716708
hg.command.bundle:                377.856767 (+12.89%, +43.14)

## data-env-vars.name = mozilla-try-2023-03-22-zstd-sparse-revlog
hg.perf.exchange.stream.generate: 302.972301
hg.command.bundle:                326.098755 (+7.63%, +23.13)
Sun, 14 Apr 2024 02:41:36 +0200 perf: start recording total time after warming
Pierre-Yves David <pierre-yves.david@octobus.net> [Sun, 14 Apr 2024 02:41:36 +0200] rev 51591
perf: start recording total time after warming The warming might be costly and this should not affect the "time profile" of the actual collection.
Sun, 14 Apr 2024 02:40:15 +0200 perf: run the gc before each run
Pierre-Yves David <pierre-yves.david@octobus.net> [Sun, 14 Apr 2024 02:40:15 +0200] rev 51590
perf: run the gc before each run

The Python garbage collector is a large source of performance trouble, so we run it right before the timed section to reduce the chance of the gc adding noise to the benchmark.
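An illustrative timing harness combining the last few perf changes (clear caches between runs, collect garbage before the timed section, start the total clock only after warming); `func` and `setup` are user-supplied callables, and nothing here is the actual perf.py code::

    import gc
    import time

    def bench(func, setup=None, runs=5):
        func()                                  # warming run, not counted
        total_start = time.perf_counter()       # total time starts after warming
        timings = []
        for _ in range(runs):
            if setup is not None:
                setup()                         # e.g. clear the vfs audit cache
            gc.collect()                        # drain pending garbage now,
                                                # not inside the timed section
            start = time.perf_counter()
            func()
            timings.append(time.perf_counter() - start)
        return min(timings), time.perf_counter() - total_start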
Sun, 14 Apr 2024 02:38:41 +0200 perf: allow profiling of more than one run
Pierre-Yves David <pierre-yves.david@octobus.net> [Sun, 14 Apr 2024 02:38:41 +0200] rev 51589
perf: allow profiling of more than one run

By default, we still profile the first run only. However, profiling more runs helps to understand side effects from one run to the next, so we add an option to do so.
Sun, 14 Apr 2024 02:36:55 +0200 profiler: flush after writing the profiler output
Pierre-Yves David <pierre-yves.david@octobus.net> [Sun, 14 Apr 2024 02:36:55 +0200] rev 51588
profiler: flush after writing the profiler output

Otherwise, the profiler output might only partially appear until the next flush of the buffer. Since profiling often happens for long operations, the next flush can be a long time away.
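A small, self-contained illustration of the fix: explicitly flush the stream the profiling report was written to, instead of waiting for a later buffer flush. This uses the standard cProfile/pstats modules, not Mercurial's profiler wrapper::

    import cProfile
    import pstats

    profiler = cProfile.Profile()
    profiler.enable()
    sum(i * i for i in range(100000))       # stand-in for the profiled operation
    profiler.disable()

    with open('profile-output.txt', 'w') as out:
        pstats.Stats(profiler, stream=out).sort_stats('cumulative').print_stats(5)
        out.flush()      # make the report visible right away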
Sun, 14 Apr 2024 02:33:36 +0200 stream-clone: disable gc for the entry listing section for the v2 format
Pierre-Yves David <pierre-yves.david@octobus.net> [Sun, 14 Apr 2024 02:33:36 +0200] rev 51587
stream-clone: disable gc for the entry listing section for the v2 format

This is similar to the change we did for the v3 format in 6e4c8366c5ce. The benchmarks below show that this gives us notable gains, especially on larger repositories.

### benchmark.name = hg.perf.stream-locked-section
# bin-env-vars.hg.flavor = default
# bin-env-vars.hg.py-re2-module = default
# benchmark.variants.version = v2

## data-env-vars.name = pypy-2018-08-01-zstd-sparse-revlog
5e931bf8707c: 0.503820
~~~~~
1106d1bf695e: 0.470078 (-6.70%, -0.03)

## data-env-vars.name = pypy-2024-03-22-zstd-sparse-revlog
5e931bf8707c: 0.535756
~~~~~
1106d1bf695e: 0.490249 (-8.49%, -0.05)

## data-env-vars.name = heptapod-public-2024-03-25-zstd-sparse-revlog
5e931bf8707c: 1.327041
~~~~~
1106d1bf695e: 1.174636 (-11.48%, -0.15)

## data-env-vars.name = netbeans-2018-08-01-zstd-sparse-revlog
5e931bf8707c: 2.439158
~~~~~
1106d1bf695e: 2.220515 (-8.96%, -0.22)

## data-env-vars.name = netbeans-2019-11-07-zstd-sparse-revlog
5e931bf8707c: 2.630794
~~~~~
1106d1bf695e: 2.261473 (-14.04%, -0.37)

## data-env-vars.name = mozilla-central-2018-08-01-zstd-sparse-revlog
5e931bf8707c: 5.769002
~~~~~
1106d1bf695e: 5.062000 (-12.26%, -0.71)

## data-env-vars.name = mozilla-try-2019-02-18-zstd-sparse-revlog
5e931bf8707c: 13.351750
~~~~~
1106d1bf695e: 12.346655 (-7.53%, -1.01)

## data-env-vars.name = mozilla-central-2024-03-22-zstd-sparse-revlog
5e931bf8707c: 10.772939
~~~~~
1106d1bf695e: 9.495407 (-11.86%, -1.28)

## data-env-vars.name = mozilla-unified-2024-03-22-zstd-sparse-revlog
5e931bf8707c: 10.864297
~~~~~
1106d1bf695e: 9.475597 (-12.78%, -1.39)

## data-env-vars.name = mozilla-try-2023-03-22-zstd-sparse-revlog
5e931bf8707c: 17.448335
~~~~~
1106d1bf695e: 16.027474 (-8.14%, -1.42)
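A minimal sketch of the technique: a context manager that turns the cyclic garbage collector off around an allocation-heavy section and restores the previous state afterwards; the list comprehension stands in for building the stream-clone entry list::

    import contextlib
    import gc

    @contextlib.contextmanager
    def nogc():
        was_enabled = gc.isenabled()
        gc.disable()
        try:
            yield
        finally:
            if was_enabled:
                gc.enable()

    with nogc():
        entries = [(b'path/%d' % i, i) for i in range(100000)]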
Tue, 09 Apr 2024 02:54:19 +0200 phases: rework the logic of _pushdiscoveryphase to bound complexity
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 09 Apr 2024 02:54:19 +0200] rev 51586
phases: rework the logic of _pushdiscoveryphase to bound complexity

This reworks the various graph traversals in _pushdiscoveryphase to keep the complexity in check. This is done through a couple of things:

- first, limiting the space we have to explore; for example, if we are not doing a publishing push, we don't need to consider remote draft roots that are also draft locally, as there is nothing to be moved there.
- avoiding unbounded descendant computation, and using the faster "revs between" computation instead (see the sketch after this entry).

This provides a massive boost to performance when exchanging with repositories holding a massive amount of drafts, like mozilla-try:

### data-env-vars.name = mozilla-try-2023-03-22-zstd-sparse-revlog
# benchmark.name = hg.command.push
# bin-env-vars.hg.flavor = default
# bin-env-vars.hg.py-re2-module = default
# benchmark.variants.explicit-rev = all-out-heads
# benchmark.variants.issue6528 = disabled
# benchmark.variants.protocol = ssh
# benchmark.variants.reuse-external-delta-parent = default

## benchmark.variants.revs = any-1-extra-rev
before: 20.346590 seconds
after:  11.232059 seconds (-38.15%, -7.48 seconds)

## benchmark.variants.revs = any-100-extra-rev
before: 24.752051 seconds
after:  15.367412 seconds (-37.91%, -9.38 seconds)

After these changes, the push operation is still too slow. Some of this can be attributed to general phases slowness (reading all the roots from disk, for example) and other known slowness (not using persistent-nodemap, branchmap, tags, etc.). We are also working on those, but with this series, phase discovery during push no longer shows up in profiles, and this is a nice bit of low-hanging fruit out of the way.

### (same case as above)
# benchmark.variants.revs = any-1-extra-rev
pre-%ln-change: 44.235070
this-changeset: 11.232059 seconds (-74.61%, -33.00 seconds)

# benchmark.variants.revs = any-100-extra-rev
pre-%ln-change: 49.234697
this-changeset: 15.367412 seconds (-68.79%, -33.87 seconds)

Note that with this change, the `hg push` performance is now much closer to the `hg pull` performance, even if it is still lagging behind a bit (and the overall performance is still too slow).

### data-env-vars.name = mozilla-try-2023-03-22-ds2-pnm
# benchmark.variants.explicit-rev = all-out-heads
# benchmark.variants.issue6528 = disabled
# benchmark.variants.protocol = ssh
# benchmark.variants.pulled-delta-reuse-policy = default
# bin-env-vars.hg.flavor = rust

## benchmark.variants.revs = any-1-extra-rev
hg.command.pull: 6.517450
hg.command.push: 11.219888

## benchmark.variants.revs = any-100-extra-rev
hg.command.pull: 10.160991
hg.command.push: 14.251107

### data-env-vars.name = mozilla-try-2023-03-22-zstd-sparse-revlog
# bin-env-vars.hg.py-re2-module = default
# benchmark.variants.explicit-rev = all-out-heads
# benchmark.variants.issue6528 = disabled
# benchmark.variants.protocol = ssh
# benchmark.variants.pulled-delta-reuse-policy = default

## bin-env-vars.hg.flavor = default
## benchmark.variants.revs = any-1-extra-rev
hg.command.pull: 8.577772
hg.command.push: 11.232059

## bin-env-vars.hg.flavor = default
## benchmark.variants.revs = any-100-extra-rev
hg.command.pull: 13.152976
hg.command.push: 15.367412

## bin-env-vars.hg.flavor = rust
## benchmark.variants.revs = any-1-extra-rev
hg.command.pull: 8.731982
hg.command.push: 11.178751

## bin-env-vars.hg.flavor = rust
## benchmark.variants.revs = any-100-extra-rev
hg.command.pull: 13.184236
hg.command.push: 15.620843
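A heavily hedged sketch of the two ideas above, with illustrative names rather than the real `_pushdiscoveryphase` code; only the `repo.revs()` format specifiers and revset operators are standard Mercurial APIs::

    def draft_roots_to_consider(repo, remote_draft_roots, pushed_heads, publishing):
        if not publishing:
            # (1) limit the space to explore: on a non-publishing push, remote
            # draft roots that are already draft locally have nothing to move
            local_draft = set(repo.revs(b'draft() and %ld', remote_draft_roots))
            remote_draft_roots = [r for r in remote_draft_roots
                                  if r not in local_draft]
        # (2) bounded "revs between roots and heads" instead of an unbounded
        # descendants() walk
        return repo.revs(b'%ld::%ld', remote_draft_roots, pushed_heads)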
Fri, 05 Apr 2024 22:47:44 +0200 phases: introduce a performant efficient way to access revision in a set
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 05 Apr 2024 22:47:44 +0200] rev 51585
phases: introduce a performant efficient way to access revision in a set This will be useful in the next changesets.
Fri, 05 Apr 2024 14:13:47 +0200 phases: use revision number in `_pushdiscoveryphase`
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 05 Apr 2024 14:13:47 +0200] rev 51584
phases: use revision number in `_pushdiscoveryphase`

We now reach our target checkpoint in terms of rev-num conversion. The `_pushdiscoveryphase` function is now performing graph computation based on revision numbers only, avoiding repeated conversion from node-id to rev-num. See the previous changeset updating `new_heads` for the rationale.

Again, the time saved is on the order of 100 milliseconds for the mozilla-try benchmark I have been using. However, now that the logic is done using revision numbers, we can look into having better logic in the next changesets, which will provide a much bigger speedup.
Fri, 05 Apr 2024 14:11:02 +0200 phases: move RemotePhasesSummary to revision number
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 05 Apr 2024 14:11:02 +0200] rev 51583
phases: move RemotePhasesSummary to revision number

This continues our quest to align more logic on revision numbers instead of node-ids. The motivation is similar to the change to `new_heads` and `analyze_remote_phases` a few changesets earlier.

Again, we take this as an opportunity to rename the class and its attributes to the new naming scheme. This will highlight the need for an update in any code using it and expecting node-ids.

Many of the rev-num → node-id conversions we had to introduce in the previous changesets can now be removed. More will be removed in the future as we continue to align the code toward rev-num usage.

The time saved is on the order of 100 milliseconds for the mozilla-try benchmark I have been using.
Fri, 05 Apr 2024 12:24:47 +0200 phases: stop using `repo.set` in `remotephasessummary`
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 05 Apr 2024 12:24:47 +0200] rev 51582
phases: stop using `repo.set` in `remotephasessummary`

`repo.set` creates changectx objects on the fly, an expensive operation. Using `repo.revs` and a direct rev-num → node-id translation is significantly faster. This is especially true as we prepare ourselves to no longer do the rev-num → node-id translation there at all.

The speedup is a bit lost in the overall noisiness of the slow phase discovery algorithm, but it saves a small amount of time in my benchmarks.
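A sketch of the replacement described above: `repo.set()` builds a full changectx per revision, while `repo.revs()` plus `changelog.node()` stays on plain integers and only converts to node-ids at the end (a conversion that later changesets remove as well)::

    def heads_nodes_slow(repo, spec):
        return [ctx.node() for ctx in repo.set(spec)]    # one changectx per rev

    def heads_nodes_fast(repo, spec):
        to_node = repo.changelog.node
        return [to_node(r) for r in repo.revs(spec)]     # plain rev numbers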
Fri, 05 Apr 2024 12:02:43 +0200 phases: use revision number in analyze_remote_phases
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 05 Apr 2024 12:02:43 +0200] rev 51581
phases: use revision number in analyze_remote_phases

Same logic as the previous change to `new_heads`; see the rationale there. This avoids a small number of node → rev conversions, speeding things up on the order of 100 milliseconds for the worst cases. However, the rest of the logic is noisy enough that it hardly matters for now.
Fri, 05 Apr 2024 11:33:47 +0200 phases: use revision number in new_heads
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 05 Apr 2024 11:33:47 +0200] rev 51580
phases: use revision number in new_heads

All graph operations will be done using revision numbers, so passing nodes only means they will eventually get converted to revision numbers internally. As part of an effort to align the code on using revision numbers, we make the `phases.newheads` function operate on revision numbers, taking them as input and returning them, instead of the node-ids it used to consume and produce.

This is part of a multi-changeset effort to translate more of the logic, but it is done step by step to facilitate the identification of issues that might arise in Mercurial core and extensions.

To make the change simpler to handle for third-party extensions, we also rename the function, using a more modern form. This will help detect the difference between the node-id version and the rev-num version.

I also take this as an opportunity to add some comments about possible performance improvements for the future. They don't matter too much now, but they are worth exploring in a while.
Mon, 08 Apr 2024 15:11:49 +0200 phases: convert remote phase root to node while reading them
Pierre-Yves David <pierre-yves.david@octobus.net> [Mon, 08 Apr 2024 15:11:49 +0200] rev 51579
phases: convert remote phase root to node while reading them This is currently a bit silly as we will convert them back to node right after, but that is an intermediate step before doing more disruptive changes.
Fri, 05 Apr 2024 11:17:25 +0200 phases: more compact error handling in analyzeremotephases
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 05 Apr 2024 11:17:25 +0200] rev 51578
phases: more compact error handling in analyzeremotephases

Using an intermediate variable results in more readable code, so let us use it.
Tue, 09 Apr 2024 02:54:12 +0200 push: rework the computation of fallbackheads to be correct
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 09 Apr 2024 02:54:12 +0200] rev 51577
push: rework the computation of fallbackheads to be correct

The previous computation tried to be smart but ended up being wrong. This was caught by phase movement tests while reworking the phase discovery logic to be faster.

The previous logic was failing to catch the case where the pushed set was not based on a common head (i.e. when the discovery seemed to have "over discovered" content, outside the pushed set).

In the following graph, `e` is a common head and we run `hg push -r f`. We need to detect `c` as a fallback head, and we previously failed to do so::

  e
  |
  d f
  |/
  c
  |
  b
  |
  a

The performance impact of the change seems minimal. On the most impacted repository at hand (mozilla-try), the slowdown is mostly lost in the overall noise of `hg push`, but seems to be in the hundreds of milliseconds order of magnitude. When using rust, we seem to be a bit faster, probably because we leverage more accelerated internals.

I added a couple of performance-related comments for further investigation later on.
Fri, 05 Apr 2024 11:05:54 +0200 revset: stop serializing node when using "%ln"
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 05 Apr 2024 11:05:54 +0200] rev 51576
revset: stop serializing node when using "%ln"

Turning hundreds of thousands of nodes into hex and back can be slow… so what if we stop doing it?

In many cases where we are using node-ids, we should be using revision numbers instead. However, this is not a good reason to have a stupidly slow implementation of "%ln".

This caught my attention again because the phase discovery during push makes extensive use of "%ln" on huge sets. In absolute terms, that phase discovery probably should use "%ld" and needs to improve its algorithmic complexity, but improving "%ln" seems simple and long overdue.

This greatly speeds up `hg push` on repositories with many drafts. Here are some relevant poulpe benchmarks:

### data-env-vars.name = mozilla-try-2023-03-22-zstd-sparse-revlog
# benchmark.name = hg.command.push
# bin-env-vars.hg.flavor = default
# bin-env-vars.hg.py-re2-module = default
# benchmark.variants.explicit-rev = all-out-heads
# benchmark.variants.issue6528 = disabled
# benchmark.variants.protocol = ssh
# benchmark.variants.reuse-external-delta-parent = default

## benchmark.variants.revs = any-1-extra-rev
before: 44.235070
after:  20.416329 (-53.85%, -23.82)

## benchmark.variants.revs = any-100-extra-rev
before: 49.234697
after:  26.519829 (-46.14%, -22.71)

### benchmark.name = hg.command.bundle
# bin-env-vars.hg.flavor = default
# bin-env-vars.hg.py-re2-module = default
# benchmark.variants.revs = all
# benchmark.variants.type = none-streamv2

## data-env-vars.name = heptapod-public-2024-03-25-zstd-sparse-revlog
before: 10.138396
after:  7.750458 (-23.55%, -2.39)

## data-env-vars.name = mercurial-public-2024-03-22-zstd-sparse-revlog
before: 1.263859
after:  0.700229 (-44.60%, -0.56)

## data-env-vars.name = mozilla-try-2023-03-22-zstd-sparse-revlog
before: 399.484481
after:  346.508952 (-13.26%, -52.98)

## data-env-vars.name = pypy-2024-03-22-zstd-sparse-revlog
before: 4.540080
after:  3.401700 (-25.07%, -1.14)

## data-env-vars.name = tryton-public-2024-03-22-zstd-sparse-revlog
before: 2.975765
after:  1.870798 (-37.13%, -1.10)
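An illustration of the cost being removed; the surrounding function is made up, but "%ln" (a list of binary node-ids) and "%ld" (a list of revision numbers) are real `repo.revs()` format specifiers, and before this change the "%ln" path hex-encoded every node only to parse it right back::

    def descendants_of(repo, revs):
        cl = repo.changelog
        nodes = [cl.node(r) for r in revs]
        via_nodes = repo.revs(b'%ln::', nodes)   # node-ids: conversion heavy
        via_revs = repo.revs(b'%ld::', revs)     # rev-nums: no conversion
        assert list(via_nodes) == list(via_revs)
        return via_revs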
Tue, 09 Apr 2024 14:41:48 +0200 bundlespec: drop unused _bundlespecvariants dictionary
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 09 Apr 2024 14:41:48 +0200] rev 51575
bundlespec: drop unused _bundlespecvariants dictionary Why do we have a `_bundlespecvariants`?
Tue, 09 Apr 2024 14:37:24 +0200 bundlespec: type the _bundlespeccontentopts dictionary
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 09 Apr 2024 14:37:24 +0200] rev 51574
bundlespec: type the _bundlespeccontentopts dictionary If only we had a tool to detect the kind of stupid error we just fixed… ho wait.
Tue, 09 Apr 2024 14:36:01 +0200 bundlespec: fix the "streamv2" and "streamv3-exp" variant
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 09 Apr 2024 14:36:01 +0200] rev 51573
bundlespec: fix the "streamv2" and "streamv3-exp" variant

In c4aab3661f25, we broke this feature by adding unicode strings instead of bytes to the dictionary. On the other hand, this feature was never tested, so we augment the tests to cover it.
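A tiny reproduction of the kind of breakage described: in Python 3 a str key and a bytes key are different dictionary keys, so registering "streamv2" instead of b"streamv2" makes later bytes lookups silently miss. The dictionary here is a toy, not the real _bundlespeccontentopts::

    contentopts = {b'streamv2': {b'stream': b'v2'}}
    contentopts['streamv3-exp'] = {'stream': 'v3-exp'}    # buggy: str keys

    assert b'streamv2' in contentopts
    assert b'streamv3-exp' not in contentopts             # lookup misses

    contentopts[b'streamv3-exp'] = {b'stream': b'v3-exp'} # fixed: bytes keys
    assert b'streamv3-exp' in contentopts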
Thu, 04 Apr 2024 14:15:32 +0100 wireprotoserver: ensure that output stream gets flushed on exception stable
Arseniy Alekseyev <aalekseyev@janestreet.com> [Thu, 04 Apr 2024 14:15:32 +0100] rev 51572
wireprotoserver: ensure that output stream gets flushed on exception

Previously the flush was happening because the Python finalizer was run on `BufferedWriter`. With the upgrade to Python 3.11 this started randomly failing. My guess is that the finalizer on the raw `FileIO` object may be running before the finalizer of `BufferedWriter` has a chance to run.

At any rate, since we're not relying on finalizers in the happy case, we should also not rely on them in case of exception.
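A minimal sketch of the idea, not the actual wireprotoserver code: flush the buffered output stream explicitly on the error path too, instead of relying on finalizer ordering; `handler` is a stand-in callable producing bytes::

    import io

    def respond(out: io.BufferedWriter, handler) -> None:
        try:
            out.write(handler())
        except Exception:
            out.write(b'error\n')
            raise
        finally:
            out.flush()      # never depend on GC/finalizers for this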
Mon, 15 Apr 2024 16:33:37 +0100 match: strengthen visit_children_set invariant, Recursive means "all files" stable
Arseniy Alekseyev <aalekseyev@janestreet.com> [Mon, 15 Apr 2024 16:33:37 +0100] rev 51571
match: strengthen visit_children_set invariant, Recursive means "all files"

My previous interpretation of "Recursive" was too relaxed: I thought it instructed the caller to do something like this:

> you can stop calling `visit_children_set` because you'll need to descend into
> every directory recursively, but you should still check every file to see if it
> matches or not

Whereas the real instruction seems to be:

> I guarantee that everything in this subtree matches; you can stop
> querying the matcher for all files and dirs altogether.

The evidence to support this:

- the test actually passes with the stronger invariant, revealing no exceptions to this rule
- the implementation of `visit_children_set` for `DifferenceMatcher` clearly relies on this requirement, so it must hold for that not to lead to bugs.
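A toy sketch of what the stronger invariant buys a caller, phrased with the Python matcher convention where visitchildrenset() may answer 'all'; `walker` and its `all_files()` method are hypothetical stand-ins::

    def files_under(walker, matcher, directory):
        if matcher.visitchildrenset(directory) == 'all':
            # stronger invariant: everything in this subtree matches,
            # so stop consulting the matcher entirely
            return list(walker.all_files(directory))
        return [f for f in walker.all_files(directory) if matcher(f)]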
Fri, 12 Apr 2024 16:09:45 +0100 match: fix the rust-side bug in visit_children_set for rootfilesin matchers stable
Arseniy Alekseyev <aalekseyev@janestreet.com> [Fri, 12 Apr 2024 16:09:45 +0100] rev 51570
match: fix the rust-side bug in visit_children_set for rootfilesin matchers

The fix is checked by the `test_pattern_matcher_visit_children_set` test, which is what caught the bug in the first place, but also by an end-to-end test that I made for this purpose.

Accept the new results of Cargo tests. Many of these were already annotated with "FIXME", which is a good sign.
Fri, 12 Apr 2024 15:39:21 +0100 match: fix the "visitdir" method on "rootfilesin" matchers stable
Arseniy Alekseyev <aalekseyev@janestreet.com> [Fri, 12 Apr 2024 15:39:21 +0100] rev 51569
match: fix the "visitdir" method on "rootfilesin" matchers This fixes just the Python side, the fix for the rust side will follow shortly.