Fri, 17 Aug 2018 16:00:32 -0700 remotephase: fast path newheads computation in simple case (issue5964) stable
Boris Feld <boris.feld@octobus.net> [Fri, 17 Aug 2018 16:00:32 -0700] rev 38772
remotephase: fast path newheads computation in simple case (issue5964)

Changeset 88efb7d6bcb6 fixed the logic of `phases.newheads` but greatly regressed its performance (by up to several orders of magnitude).

The first step in fixing the regression is to exit early when there is no work to do: if there are no heads to filter, or no roots to filter them with, we don't have to do any work. This fixes the regression when talking to an all-public repository. The performance is even better than before.

pypy, compared to an all-public repo
------------------------------------
8eeed92475d5: 0.005758 seconds
88efb7d6bcb6: 0.602517 seconds (x104)
this code:    0.001508 seconds (-74% from base)

mercurial, compared to an all-public repo
-----------------------------------------
8eeed92475d5: 0.000577 seconds
88efb7d6bcb6: 0.185316 seconds (x321)
this code:    0.000150 seconds (-74% from base)

The performance of newheads when actual computation is required is fixed in the next changeset.
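The early-exit idea can be sketched as follows; the function shape, names, and the `compute_filtered_heads` callback are illustrative only, not the actual mercurial/phases.py code:

    def newheads(heads, roots, compute_filtered_heads):
        """Return the heads that remain once descendants of `roots` are filtered.

        Fast path: with no heads to filter, or no roots to filter them with,
        the answer is trivially `heads` and the expensive computation
        (`compute_filtered_heads`) can be skipped entirely.
        """
        if not heads or not roots:
            return heads
        return compute_filtered_heads(heads, roots)

    # With nothing to filter, the expensive callback is never invoked:
    assert newheads([], ['r1'], None) == []
    assert newheads(['h1', 'h2'], [], None) == ['h1', 'h2']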
Fri, 17 Aug 2018 17:51:06 +0200 perf: add a perfphasesremote command stable
Boris Feld <boris.feld@octobus.net> [Fri, 17 Aug 2018 17:51:06 +0200] rev 38771
perf: add a perfphasesremote command

This command measures the time we spend analysing remote phases during push and pull, and displays some information relevant to this computation. The `test-contrib-perf.t` expected output has to be updated, but I do need these modules for this perf command.
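In spirit, such a perf command wraps the measured operation in a timing loop and reports the best run. A minimal, self-contained illustration of that pattern (not the contrib/perf.py implementation; the workload below is a made-up placeholder standing in for the remote phase analysis):

    import time

    def perftimer(func, runs=5):
        """Call `func` `runs` times and return the best wall-clock duration."""
        best = float('inf')
        for _ in range(runs):
            start = time.perf_counter()
            func()
            best = min(best, time.perf_counter() - start)
        return best

    # Placeholder workload; a real perf command would time the actual operation.
    print('best of 5: %.6f seconds' % perftimer(lambda: sum(range(100000))))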
Wed, 15 Aug 2018 14:43:40 +0200 sparse-revlog: fix delta validity computation stable
Boris Feld <boris.feld@octobus.net> [Wed, 15 Aug 2018 14:43:40 +0200] rev 38770
sparse-revlog: fix delta validity computation

When considering the validity of a delta with sparse-revlog, we check the size of the largest read it implies. To do so, we use the regular chain logic together with the extra information about the candidate delta, which is not stored yet. Some of this logic was not handling that extra delta properly, confusing it with "nullrev". This confusion led to wrong results for the computation while avoiding a crash. Changeset 781b2720d2ac on default revealed this error by turning it into a crash. This changeset fixes the logic on stable so that the computation is correct (and the crash is averted). The fix is made on stable because it impacts 4.7 clients interacting with sparse-revlog repositories (e.g. repositories created by a later version).
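The shape of the problem can be sketched with a simplified, hypothetical helper (not the revlog internals): the candidate delta is not stored yet, so its size has to be accounted for explicitly instead of being treated like nullrev, which contributes nothing:

    def read_span(stored_chunks, candidate_length=None):
        """Size of the read needed to cover `stored_chunks` plus a candidate.

        `stored_chunks` is a list of (offset, length) pairs already present
        in the data file; `candidate_length` is the size of a delta that is
        being evaluated but not written yet. Confusing that candidate with
        nullrev yields a wrong span, and so a wrong validity verdict.
        """
        span = 0
        if stored_chunks:
            first_offset = stored_chunks[0][0]
            last_offset, last_length = stored_chunks[-1]
            span = last_offset + last_length - first_offset
        if candidate_length is not None:
            span += candidate_length
        return span

    # The unstored candidate must be counted explicitly:
    assert read_span([(0, 10), (10, 5)]) == 15
    assert read_span([(0, 10), (10, 5)], candidate_length=4) == 19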
Tue, 14 Aug 2018 14:00:35 -0400 convert: don't drop missing or corrupt tag entries stable
Matt Harbison <matt_harbison@yahoo.com> [Tue, 14 Aug 2018 14:00:35 -0400] rev 38769
convert: don't drop missing or corrupt tag entries

Cleaning up the tags file could be a useful feature in some cases, so maybe there should be a switch for it. However, the default hg -> hg convert tries to maintain identical hashes (which is why convert.hg.saverev is off by default for Mercurial sources, while it is on by default for other source types). It looks like _rewritesubstate() has a `continue` in it too, and therefore a similar problem.

I ran into this conversion divergence when a coworker "merged" two repositories by copy/pasting all of the files from the source repo and massaging the code, but forgetting to revert the .hg* files. That silently emptied the .hgtags file after the conversion. (This isn't the manifest node bug Yuya has been helping with; this occurred well after the bzr -> hg conversion and wasn't a merge commit, which made it extra puzzling. That bug is still an issue.)
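The "don't drop" behaviour can be illustrated with a small hypothetical rewrite loop (not the actual convert code): malformed lines and nodes without a mapping are copied through instead of being skipped:

    def rewrite_tags(lines, node_map):
        """Map old nodes to new ones in .hgtags-style lines ("<node> <tag>").

        Entries that are malformed, or whose node has no mapping, are kept
        verbatim rather than silently dropped.
        """
        out = []
        for line in lines:
            parts = line.split(' ', 1)
            if len(parts) != 2:
                out.append(line)  # corrupt entry: preserve it unchanged
                continue
            node, name = parts
            # unknown node: fall back to the original node, keep the entry
            out.append('%s %s' % (node_map.get(node, node), name))
        return out

    assert rewrite_tags(['aaa 1.0', 'garbage'], {'aaa': 'bbb'}) == ['bbb 1.0', 'garbage']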