Fri, 23 Feb 2024 14:07:33 +0100 perf: add a --as-push option to perf::unbundle
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 23 Feb 2024 14:07:33 +0100] rev 51436
perf: add a --as-push option to perf::unbundle This turned out to make quite a significant difference.
Fri, 23 Feb 2024 06:25:09 +0100 chainsaw-update: exit early if one of the intermediate commands fails
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 23 Feb 2024 06:25:09 +0100] rev 51435
chainsaw-update: exit early if one of the intermediate commands fails That will prevent the user from being presented with a state that pretends to be consistent with the request, but is not.
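A minimal sketch of the early-exit flow; `run_steps` and the step callables are illustrative names, not the extension's actual helpers:

    def run_steps(ui, steps):
        """Run (name, callable) pairs in order, stopping at the first failure."""
        for name, step in steps:
            ret = step()
            if ret:
                # abort immediately rather than leave a half-updated repository
                ui.status(b'step %s failed, aborting chainsaw-update\n' % name)
                return ret
        return 0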
Fri, 23 Feb 2024 03:32:35 +0100 chainsaw-update: lock the repository for the duration of the operation
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 23 Feb 2024 03:32:35 +0100] rev 51434
chainsaw-update: lock the repository for the duration of the operation This should prevent and catch some misuse where something else tries to touch the repository.
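A minimal sketch of the locking pattern, assuming a standard localrepository object (the step bodies are elided):

    def chainsaw_update(repo):
        # hold the lock for the whole pull/update/purge sequence, so that
        # nothing else can touch the repository between the steps
        with repo.lock():
            pass  # pull, update and purge would run here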
Fri, 23 Feb 2024 11:41:55 +0100 chainsaw-update: taking care of initial cloning
Georges Racinet <georges.racinet@octobus.net> [Fri, 23 Feb 2024 11:41:55 +0100] rev 51433
chainsaw-update: taking care of initial cloning Perhaps we should go just a bit lower level than this `instance()`, since the main added value in our use case is full path resolution, which we need to do anyway for the rmtree cleanup.
Fri, 23 Feb 2024 11:30:58 +0100 chainsaw-update: use a graph with branching in the test
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 23 Feb 2024 11:30:58 +0100] rev 51432
chainsaw-update: use a graph with branching in the test This will be relevant for the next improvement of `chainsaw-update`.
Wed, 17 Jan 2024 14:39:06 +0100 chainsaw-update: log actual locks breaking
Georges Racinet <georges.racinet@octobus.net> [Wed, 17 Jan 2024 14:39:06 +0100] rev 51431
chainsaw-update: log actual locks breaking Previously, the command would simply state that it was about to break locks, not whether there actually were any to break. This version is race-free. It would also be possible to display the content of the lock beforehand (not race-free, but informative in almost all cases).
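A sketch of the race-free logging this enables, relying on the vfs.tryunlink() return value introduced in the entry below (the helper and message are illustrative):

    def break_lock(ui, vfs, lockname):
        # only claim the lock was broken if a file was actually removed
        if vfs.tryunlink(lockname):
            ui.status(b'broke lock %s\n' % lockname)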
Wed, 17 Jan 2024 14:26:58 +0100 vfs: have tryunlink tell what it did
Georges Racinet <georges.racinet@octobus.net> [Wed, 17 Jan 2024 14:26:58 +0100] rev 51430
vfs: have tryunlink tell what it did It is useful in certain circumstances to know whether vfs.tryunlink() actually removed something or not, if only for logging purposes.
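A minimal sketch of the changed contract, written against plain os.unlink rather than the real vfs plumbing:

    import errno
    import os

    def tryunlink(path):
        """Remove path if it exists; report whether anything was removed."""
        try:
            os.unlink(path)
            return True
        except OSError as err:
            if err.errno != errno.ENOENT:
                raise
            return False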
Sat, 26 Nov 2022 12:23:56 +0100 chainsaw: new extension for dangerous operations
Georges Racinet <georges.racinet@octobus.net> [Sat, 26 Nov 2022 12:23:56 +0100] rev 51429
chainsaw: new extension for dangerous operations The first provided command is `chainsaw-update`, whose one and only job is to make sure that it pulls, updates and purges the target repository, no matter what may be in the way (locks, notably); see the docstring for the rationale.
Fri, 23 Feb 2024 03:45:07 +0100 rust: disable the RustIndex without persistent nodemap
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 23 Feb 2024 03:45:07 +0100] rev 51428
rust: disable the RustIndex without persistent nodemap See rationale inline.
Fri, 23 Feb 2024 03:44:56 +0100 rust: stop claiming the C index is compatible with the rust code
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 23 Feb 2024 03:44:56 +0100] rev 51427
rust: stop claiming the C index is compatible with the rust code This has not been the case since the introduction of the pure Rust index, and probably was not since the MixedIndex itself. So we fix the dedicated attribute value.
Thu, 22 Feb 2024 15:11:26 +0100 rust-index: remove one collect when converting back
Raphaël Gomès <rgomes@octobus.net> [Thu, 22 Feb 2024 15:11:26 +0100] rev 51426
rust-index: remove one collect when converting back Turns out this is slightly faster. Sending the results back to Python is still the most costly part of the whole method (around 75% of the time), but it is now about as fast as it can be.

hg perf::phases on mozilla-try-2023-03-22:
    before: 0.267114 seconds
    after:  0.247101 seconds
Thu, 22 Feb 2024 15:06:16 +0100 rust-index: improve phase computation speed
Raphaël Gomès <rgomes@octobus.net> [Thu, 22 Feb 2024 15:06:16 +0100] rev 51425
rust-index: improve phase computation speed While less memory efficient, using an array is *much* faster than using a HashMap, especially with the default hasher. It even makes the code simpler, so I'm not really sure what I was thinking in the first place; maybe it's more obvious now. This fixes a significant performance regression when using the Rust version of the code (however, the C code still outperforms Rust on this operation).

hg perf::phases on mozilla-try-2023-03-22:
    6.6.3:  0.451239 seconds
    before: 0.982495 seconds
    after:  0.265347 seconds
    C code: 0.183241 seconds
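The change itself lives in the Rust index, but the data-structure swap is easy to illustrate in Python: one byte per revision in a flat, rev-indexed array instead of a rev-to-phase hash map (constants and names are illustrative):

    from array import array

    PUBLIC, DRAFT, SECRET = 0, 1, 2

    def build_phase_array(num_revs, draft_revs, secret_revs):
        # one byte per revision; a phase lookup is a direct index, no hashing
        phases = array('B', [PUBLIC]) * num_revs
        for rev in draft_revs:
            phases[rev] = DRAFT
        for rev in secret_revs:
            phases[rev] = SECRET
        return phases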
Fri, 23 Feb 2024 06:37:25 +0100 phases: directly update the phase sets in advanceboundary
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 23 Feb 2024 06:37:25 +0100] rev 51424
phases: directly update the phase sets in advanceboundary This is similar to what we do in retractboundary. There is no need to invalidate the cache if we have everything at hand to update it.
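A hypothetical sketch of the direct update; the dict-of-sets layout mirrors the phase sets cache, but the helper itself is invented:

    def advance_phase_sets(phasesets, affected, targetphase):
        # revisions leaving a higher phase...
        for phase, revs in phasesets.items():
            if phase > targetphase:
                revs.difference_update(affected)
        # ...land in the target phase (public, phase 0, is usually implicit)
        if targetphase in phasesets:
            phasesets[targetphase].update(affected)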
Fri, 23 Feb 2024 05:25:35 +0100 phases: large rework of advance boundary
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 23 Feb 2024 05:25:35 +0100] rev 51423
phases: large rework of advance boundary In a similar spirit to the rework of retractboundary, the new algorithm does an amount of work on the order of magnitude of the number of changesets that change phase (except for finding new roots in impacted higher phases, if any exist). This results in a very significant speedup for repositories with many old drafts, like mozilla-try.

runtime of perf::unbundle for a bundle containing a single changeset (C code):
    before the 6.7 phase work: 14.497 seconds
    before this change:         6.311 seconds (-55%)
    with this change:           2.240 seconds (-85%)

Combined with the other patches that fix the phase computation in the Rust index, the Rust code with a persistent nodemap gets back to quite interesting performance, with 2.026 seconds for the same operation, about 10% faster than the C code.
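A rough sketch of why the work stays bounded: the ancestor walk prunes as soon as it reaches a revision already at (or below) the target phase, so only changesets that actually change phase are visited. `parents` and `phase_of` are assumed helpers, with `phase_of` returning public for the null revision:

    def changed_by_advance(parents, phase_of, heads, targetphase):
        changed = set()
        stack = list(heads)
        while stack:
            rev = stack.pop()
            if rev in changed or phase_of(rev) <= targetphase:
                continue  # already low enough: prune the walk here
            changed.add(rev)
            stack.extend(parents(rev))
        return changed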
Thu, 22 Feb 2024 19:21:14 +0100 phases: apply similar early filtering to advanceboundary
Pierre-Yves David <pierre-yves.david@octobus.net> [Thu, 22 Feb 2024 19:21:14 +0100] rev 51422
phases: apply similar early filtering to advanceboundary advanceboundary is called during the push's unbundle (but not the other unbundles), so it did not show up in the profiles I have looked at so far. We start with simple pre-filtering to avoid doing any work when we don't need to.
Wed, 21 Feb 2024 11:09:25 +0100 phases: filter revisions that are already in the right phase
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 21 Feb 2024 11:09:25 +0100] rev 51421
phases: filter revisions that are already in the right phase No need to compute new roots if everything is already in order.
Wed, 21 Feb 2024 13:05:29 +0100 phases: invalidate the phase sets less often on retract boundary
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 21 Feb 2024 13:05:29 +0100] rev 51420
phases: invalidate the phase sets less often on retract boundary We already have the information needed to update the phase sets, so we do so directly instead of invalidating the cache. This shows a sizeable speedup in our `perf::unbundle` benchmark on the many-drafts mozilla-try repository.

### data-env-vars.name = mozilla-try-2023-03-22-zstd-sparse-revlog
# benchmark.name = hg.perf.perf-unbundle
# bin-env-vars.hg.flavor = no-rust
# bin-env-vars.hg.py-re2-module = default
# benchmark.variants.issue6528 = disabled

# benchmark.variants.revs = last-10
    before: 2.055259 seconds
    after:  1.887064 seconds (-8.18%)
# benchmark.variants.revs = last-100
    before: 2.409239 seconds
    after:  2.222429 seconds (-7.75%)
# benchmark.variants.revs = last-1000
    before: 3.945648 seconds
    after:  3.762480 seconds (-4.64%)
Wed, 21 Feb 2024 13:05:23 +0100 phases: incrementally update the phase sets when reasonable
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 21 Feb 2024 13:05:23 +0100] rev 51419
phases: incrementally update the phase sets when reasonable When the amount of manual walking is small, we update the phase sets by hand instead of computing them from scratch. This should help small updates. The next changesets will make this used more often by reducing the amount of full invalidation we do on root upgrades. The criteria for using an incremental upgrade are arbitrary; however, they "should never hurt".
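A sketch of the kind of guard this describes; the threshold below is an invented illustration, not the actual criterion:

    def should_update_incrementally(num_changed, num_loaded_revs):
        # patch the existing sets only when the change is small relative
        # to the repository; a full recomputation is likely cheaper otherwise
        return num_changed * 10 < num_loaded_revs  # the 10x factor is made up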
Fri, 23 Feb 2024 00:01:33 +0100 phases: properly shallow copy the phase sets dictionary
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 23 Feb 2024 00:01:33 +0100] rev 51418
phases: properly shallow copy the phase sets dictionary We are about to update the sets more incrementally in some cases, so we need to make a proper shallow copy of the dictionary.
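A minimal sketch of the fix: copy the dictionary *and* each per-phase set, so that later in-place updates cannot leak into set objects shared with the original:

    def copy_phase_sets(phasesets):
        return {phase: set(revs) for phase, revs in phasesets.items()}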
Wed, 21 Feb 2024 14:42:13 +0100 phases: pass an unfiltered repository to _ensure_phase_sets
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 21 Feb 2024 14:42:13 +0100] rev 51417
phases: pass an unfiltered repository to _ensure_phase_sets It seems better for such a low-level function to be able to assume it operates on a real repository.
Wed, 21 Feb 2024 13:01:25 +0100 phases: drop set building in `hasnonpublicphases`
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 21 Feb 2024 13:01:25 +0100] rev 51416
phases: drop set building in `hasnonpublicphases` We don't actually use the sets, so there is no reason to ensure they are built. (We should also clean up the use of the repository argument, but that's a quest for later.)
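A hypothetical sketch of the shortcut, assuming the non-public phase roots are available without building the derived sets:

    def hasnonpublicphases(phaseroots):
        # phaseroots: mapping of each non-public phase to its set of roots;
        # any non-empty root set means some revision is not public
        return any(phaseroots.values())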
Wed, 21 Feb 2024 11:59:28 +0100 phases: gather the logic for phasesets update in a single method
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 21 Feb 2024 11:59:28 +0100] rev 51415
phases: gather the logic for phasesets update in a single method This logic is duplicated around for no good reason, so we gather it in a single place. The conditionals in the new function are a bit weird, as we are going to extend it soon.
Thu, 22 Feb 2024 10:58:54 +0100 phases: change the way we warm the phasecache in repocache
Pierre-Yves David <pierre-yves.david@octobus.net> [Thu, 22 Feb 2024 10:58:54 +0100] rev 51414
phases: change the way we warm the phasecache in repocache Same logic as for the previous changeset. We are about to rename and change the method used here.
Thu, 22 Feb 2024 10:56:05 +0100 phases: use a more generic way to trigger a phases computation for perf
Pierre-Yves David <pierre-yves.david@octobus.net> [Thu, 22 Feb 2024 10:56:05 +0100] rev 51413
phases: use a more generic way to trigger a phases computation for perf Querying the tipmost revision will warm the cache just as calling the dedicated method would. This avoids using a method that is mostly meant for internal use and will be renamed in a coming changeset.
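A sketch of the generic warm-up, assuming the long-standing `phasecache.phase(repo, rev)` accessor:

    def warm_phase_cache(repo):
        if len(repo):
            # asking for any revision's phase forces the cache to be computed
            repo._phasecache.phase(repo, len(repo) - 1)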
Wed, 21 Feb 2024 12:01:09 +0100 phases: fix an overzealous invalidation of the phase sets
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 21 Feb 2024 12:01:09 +0100] rev 51412
phases: fix an overzealous invalidation of the phase sets If `len(cl) == self._loadedrevslen`, the cache is up to date.
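A minimal sketch of the corrected condition (`invalidate` stands in for however the cache actually drops its sets):

    def maybe_invalidate(cache, cl):
        # the cache is only stale when the changelog actually grew
        if len(cl) != cache._loadedrevslen:
            cache.invalidate()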
Wed, 21 Feb 2024 11:04:56 +0100 phases: type annotation for `_phasesets`
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 21 Feb 2024 11:04:56 +0100] rev 51411
phases: type annotation for `_phasesets` Does not hurt.
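The annotation presumably looks something along these lines (a sketch, not the exact line from phases.py): phase numbers mapped to sets of revision numbers.

    from typing import Dict, Set

    _phasesets: Dict[int, Set[int]]  # phase number -> revisions in that phase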
Tue, 20 Feb 2024 23:46:21 +0100 phases: leverage the collected information to record phase update
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 20 Feb 2024 23:46:21 +0100] rev 51410
phases: leverage the collected information to record phase update Since the lower-level function already gathers this information, we can use it directly. This comes with a small change to the tests that actually fixes them: the previous version over-reported some phase changes that did not exist. In both cases, we force revision `1` to be secret while `0` remains draft; the previous code wrongly reported `0` as moving to secret while it properly remained draft in the repository.
Wed, 21 Feb 2024 10:41:09 +0100 phases: large rewrite on retract boundary
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 21 Feb 2024 10:41:09 +0100] rev 51409
phases: large rewrite on retract boundary The new code is still pure Python, so we still have room to go significantly faster. However, the complexity of the complex part is `O(|[min_new_draft, tip]|)` instead of `O(|[min_draft, tip]|)`, which should help tremendously on repositories with old drafts (like mercurial-devel or mozilla-try). This is especially useful as the most common "retract boundary" operation happens when we commit/rewrite new drafts or when we push new drafts to a non-publishing server. In these cases, the smallest new revision is very close to the tip and there is very little work to do. A few smaller optimisations could be made for these cases and will be introduced in later changesets. We still have to iterate over large sets of roots, but this is already a great improvement for a very small amount of work. We gather information on the affected changesets as we go, since we can put it to use in the next changesets. This extra data collection might slow down the `register_new` case a bit; however, for register_new it should not really matter: the set of new nodes is either small, so the impact is negligible, or the set of new nodes is large, and the work needed to add them will dominate the overhead of collecting information in `changed_revs`. As this new code computes the changes on the fly, it unlocks other interesting improvements to be done in later changesets.
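A rough sketch of the `O(|[min_new_draft, tip]|)` walk: only revisions from the smallest new root upward are visited, and the affected set is collected along the way for later changesets to reuse (`parents` is an assumed helper returning parent revision numbers):

    def retract_affected(parents, new_roots, tip):
        low = min(new_roots)
        affected = set()
        for rev in range(low, tip + 1):
            # a revision is affected if it is a new root or descends from one
            if rev in new_roots or any(p in affected for p in parents(rev)):
                affected.add(rev)
        return affected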
Thu, 22 Feb 2024 15:49:21 +0100 phases: fast path public phase advance when everything is public
Pierre-Yves David <pierre-yves.david@octobus.net> [Thu, 22 Feb 2024 15:49:21 +0100] rev 51408
phases: fast path public phase advance when everything is public Everything is already public, so we have nothing to do here.
Wed, 21 Feb 2024 15:24:22 +0100 phases: fast path retract of public phase
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 21 Feb 2024 15:24:22 +0100] rev 51407
phases: fast path retract of public phase There is no boundary to retract, so let's do nothing.
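This and the previous entry add the same kind of trivial guard; a combined sketch, with invented names and the usual phase constant:

    PUBLIC = 0

    def advance_is_noop(targetphase, has_non_public):
        # advancing toward public changes nothing if nothing is non-public
        return targetphase == PUBLIC and not has_non_public

    def retract_is_noop(targetphase):
        # there is no boundary to retract to public: always a no-op
        return targetphase == PUBLIC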