Thu, 22 Feb 2024 15:06:16 +0100 rust-index: improve phase computation speed
Raphaël Gomès <rgomes@octobus.net> [Thu, 22 Feb 2024 15:06:16 +0100] rev 51425
rust-index: improve phase computation speed

While less memory efficient, using an array is *much* faster than using a HashMap, especially with the default hasher. It even makes the code simpler, so I'm not really sure what I was thinking in the first place; maybe it's more obvious now.

This fixes a significant performance regression when using the Rust version of the code. (However, the C code still outperforms Rust on this operation.)

hg perf::phases on mozilla-try-2023-03-22:

- 6.6.3:  0.451239 seconds
- before: 0.982495 seconds
- after:  0.265347 seconds
- C code: 0.183241 seconds
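To illustrate the data-structure change described above, here is a minimal, self-contained Rust sketch; it is not the actual rust-index code, and the `Phase` enum, the function names, and the simplified linear propagation are all assumptions made only to contrast a HashMap keyed by revision with a flat array indexed by revision:

    use std::collections::HashMap;

    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
    enum Phase {
        Public,
        Draft,
    }

    // HashMap version: every insert/lookup pays the cost of the default
    // hasher (SipHash) plus pointer chasing inside the table.
    fn phases_with_map(num_revs: u32, first_draft: u32) -> HashMap<u32, Phase> {
        let mut phases = HashMap::new();
        for rev in first_draft..num_revs {
            phases.insert(rev, Phase::Draft); // simplified: real code follows the DAG
        }
        phases
    }

    // Flat array version: one allocation, O(1) indexed access, cache
    // friendly; revisions not touched simply stay Public.
    fn phases_with_vec(num_revs: u32, first_draft: u32) -> Vec<Phase> {
        let mut phases = vec![Phase::Public; num_revs as usize];
        for rev in first_draft..num_revs {
            phases[rev as usize] = Phase::Draft; // simplified: real code follows the DAG
        }
        phases
    }

    fn main() {
        let by_map = phases_with_map(10, 7);
        let by_vec = phases_with_vec(10, 7);
        assert_eq!(by_map.get(&8), Some(&Phase::Draft));
        assert_eq!(by_vec[8], Phase::Draft);
        println!("both representations agree on the toy example");
    }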
Fri, 23 Feb 2024 06:37:25 +0100 phases: directly update the phase sets in advanceboundary
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 23 Feb 2024 06:37:25 +0100] rev 51424
phases: directly update the phase sets in advanceboundary

This is similar to what we do in retractboundary. There is no need to invalidate the cache if we have everything at hand to update it.
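A minimal Rust sketch of the idea, not the actual Mercurial code (which lives in the Python phases module); `PhaseCache` and its methods are hypothetical names used only to contrast invalidating the cached sets with updating them in place:

    use std::collections::HashSet;

    // Hypothetical cache of the non-public (draft/secret) revision set.
    struct PhaseCache {
        draft_revs: Option<HashSet<u32>>, // None means "must be recomputed"
    }

    impl PhaseCache {
        // Old behaviour: drop the cached set, forcing a full
        // recomputation on the next access.
        fn advance_boundary_invalidate(&mut self, _now_public: &[u32]) {
            self.draft_revs = None;
        }

        // Behaviour sketched by this changeset: we already know which
        // revisions just became public, so remove exactly those and
        // keep the cache valid.
        fn advance_boundary_update(&mut self, now_public: &[u32]) {
            if let Some(drafts) = self.draft_revs.as_mut() {
                for rev in now_public {
                    drafts.remove(rev);
                }
            }
        }
    }

    fn main() {
        let mut cache = PhaseCache {
            draft_revs: Some([5, 6, 7].into_iter().collect()),
        };
        cache.advance_boundary_update(&[5, 6]);
        assert_eq!(cache.draft_revs.as_ref().unwrap().len(), 1);

        cache.advance_boundary_invalidate(&[7]);
        assert!(cache.draft_revs.is_none());
    }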
Fri, 23 Feb 2024 05:25:35 +0100 phases: large rework of advance boundary
Pierre-Yves David <pierre-yves.david@octobus.net> [Fri, 23 Feb 2024 05:25:35 +0100] rev 51423
phases: large rework of advance boundary

In a similar spirit to the rework of retractboundary, the new algorithm does an amount of work on the order of the number of changesets whose phase changes (except for finding new roots in impacted higher phases, if any exist). This results in a very significant speedup for repositories with many old drafts, like mozilla-try.

Runtime of perf::unbundle for a bundle containing a single changeset (C code):

- before the 6.7 phase work: 14.497 seconds
- before this change: 6.311 seconds (-55%)
- with this change: 2.240 seconds (-85%)

Combined with the other patches that fix the phase computation in the Rust index, the Rust code with a persistent nodemap gets back to quite interesting performance: 2.026 seconds for the same operation, about 10% faster than the C code.
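As a rough illustration of the complexity argument, here is a hedged Rust sketch, not Mercurial's implementation; the `parents` callback, the `public` set, and the omission of higher-phase root detection are simplifying assumptions. The walk stops at already-public ancestors, so the work stays proportional to the number of revisions that actually change phase:

    use std::collections::HashSet;

    // Advance the public/draft boundary by walking ancestors of the new
    // public heads, stopping at revisions that are already public.
    // Finding new roots of higher phases is deliberately left out.
    fn advance_boundary(
        parents: &dyn Fn(u32) -> Vec<u32>,
        public: &mut HashSet<u32>,
        new_public_heads: &[u32],
    ) -> usize {
        let mut changed = 0;
        let mut stack: Vec<u32> = new_public_heads.to_vec();
        while let Some(rev) = stack.pop() {
            // insert() returns false if the revision was already public:
            // nothing below it can change phase, so we stop there.
            if !public.insert(rev) {
                continue;
            }
            changed += 1;
            stack.extend(parents(rev));
        }
        changed
    }

    fn main() {
        // Tiny linear history 0 -> 1 -> 2 -> 3, with 0 and 1 already public.
        let parents = |rev: u32| if rev == 0 { vec![] } else { vec![rev - 1] };
        let mut public: HashSet<u32> = [0, 1].into_iter().collect();
        let changed = advance_boundary(&parents, &mut public, &[3]);
        assert_eq!(changed, 2); // only revisions 2 and 3 changed phase
    }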