Wed, 20 Feb 2019 09:04:54 +0100 rust-discovery: using from Python code
Georges Racinet <georges.racinet@octobus.net> [Wed, 20 Feb 2019 09:04:54 +0100] rev 42761
rust-discovery: using from Python code

As previously done in other topics, the Rust version is used if it has been built. The fully-Rust version of the partialdiscovery class (actually using the Rust MissingAncestors) has a performance advantage over the Python version if the undecided set is big enough. Otherwise no sampling occurs, and the discovery is reasonably fast anyway.

Note: it's hard to predict the size of the initial undecided set; it can depend on the kind of topological changes between the local and remote graphs. The point of the Rust version is to make the bad cases acceptable.

More specifically, the performance advantages are:

- faster sampling, especially takefullsample()
- much faster addmissings() in almost all cases (see commit message in the grandparent of the present changeset)
- no conversion cost of the undecided set at the interface between Rust and Python

== Measurements with big undecided sets

For an extreme example, discovery between mozilla-try and mozilla-unified (over one million undecided revisions, same case as in dbd0fcca6dfc), we get roughly 2.5x to 3x better performance:

- Growing sample size (5%, starting at 200): time goes down from 210 to 72 seconds.
- Constant sample size of 200: time goes down from 1853 to 659 seconds.

With a sample size computed from the number of roots and heads of the undecided set (`respectsize` is `False`), here are perfdiscovery results:

Before ! wall 9.358729 comb 9.360000 user 9.310000 sys 0.050000 (median of 50)
After ! wall 3.793819 comb 3.790000 user 3.750000 sys 0.040000 (median of 50)

In that latter case, the sample sizes are routinely in the hundreds of thousands of revisions. While still faster, the Rust iteration in addmissings has less of an advantage than with smaller sample sizes, but addcommons becomes faster, probably a consequence of not having to copy big sets back and forth.

This example is not a goal in itself, but it showcases several different areas in which the process can become slow, due to different factors, and how this full Rust version can help.

== Measurements with small undecided sets

In cases where the undecided set is small enough that no sampling occurs, the Rust version has a disadvantage at init if `targetheads` is really big (some time is lost in the translation to Rust data structures), but that is compensated by the faster `addmissings()`.

On a private repository with over one million commits, we still get a minor improvement of 6.8%:

Before ! wall 0.593585 comb 0.590000 user 0.550000 sys 0.040000 (median of 50)
After ! wall 0.553035 comb 0.550000 user 0.520000 sys 0.030000 (median of 50)

What's interesting in that case is the first addinfo() at 180ms for Rust versus 233ms for Python+C, mostly because add_missings and the children cache computation take less than 0.2ms on the Rust side versus over 40ms on the Python side.

The worst case we have on hand is with mozilla-try, prepared with discovery-helper.sh for 10 heads and depth 10: time goes up 2.2% on the median. In this case `targetheads` is really huge, with 165842 server heads.

Before ! wall 0.823884 comb 0.810000 user 0.790000 sys 0.020000 (median of 50)
After ! wall 0.842607 comb 0.840000 user 0.800000 sys 0.040000 (median of 50)

If that were considered a problem, more adjustments could be made, but they would be premature at this stage: cooking special variants of methods of the inner MissingAncestors object, or retrieving local heads directly from Rust to avoid the cost of conversion. Effort would probably be better spent at this point improving the surroundings if needed.

Here's another data point with a smaller repository, pypy, where performance is almost identical:

Before ! wall 0.015121 comb 0.030000 user 0.020000 sys 0.010000 (median of 186)
After ! wall 0.015009 comb 0.010000 user 0.010000 sys 0.000000 (median of 184)

Differential Revision: https://phab.mercurial-scm.org/D6430
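
A minimal Python sketch of the implementation selection described in this changeset. The real code goes through Mercurial's module policy rather than a bare try/except, and every name below (the rustplaceholder module and both classes) is a placeholder for illustration, not the actual API:

    class PurePythonPartialDiscovery(object):
        """Stands in for the existing pure Python partialdiscovery class."""

    def getpartialdiscoveryclass():
        """Return the Rust-backed class if the extension was built, else the
        Python fallback."""
        try:
            # only importable when the Rust extensions have been built
            import rustplaceholder as rustmod
        except ImportError:
            return PurePythonPartialDiscovery
        # fully-Rust version: faster sampling and addmissings(), and no
        # conversion cost for the undecided set at the language boundary
        return rustmod.PartialDiscovery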
Tue, 21 May 2019 12:46:38 +0200 rust-discovery: optimization of add commons/missings for empty arguments
Georges Racinet on percheron.racinet.fr <georges@racinet.fr> [Tue, 21 May 2019 12:46:38 +0200] rev 42760
rust-discovery: optimization of add commons/missings for empty arguments

These two cases have to be caught early, for different reasons. In the case of add_missing_revisions, we don't want to trigger the computation of the undecided set (and the children cache) too early: the later, the better. In the case of add_common_revisions, the inner `MissingAncestors` object wouldn't know that all ancestors of its bases have already been removed from the undecided set.

In principle, that would in itself be a lead for further improvement: this remove_ancestors_from could be made more incremental, but the current performance seems to be good enough.

Differential Revision: https://phab.mercurial-scm.org/D6429
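
A minimal Python sketch of the control flow described above; all names here (DiscoverySketch, addcommons, addmissings) are illustrative, not the actual Mercurial or Rust API:

    class DiscoverySketch(object):
        """Illustrates why empty arguments are caught early."""

        def __init__(self, targetheads):
            self._targetheads = set(targetheads)
            self._common = set()
            self._missing = set()
            self._undecided = None  # computed lazily: the later, the better

        def undecided(self):
            if self._undecided is None:
                # expensive in the real code (ancestors of targetheads, minus
                # what is already known to be common or missing)
                self._undecided = (
                    set(self._targetheads) - self._common - self._missing
                )
            return self._undecided

        def addmissings(self, revs):
            if not revs:
                # bail out before forcing the undecided set (and, in the Rust
                # version, the children cache) into existence
                return
            self._missing.update(revs)
            # the real code also removes the descendants of revs
            self.undecided().difference_update(revs)

        def addcommons(self, revs):
            if not revs:
                # nothing to remove: skipping keeps the inner ancestor
                # bookkeeping consistent with what was already taken out
                return
            self._common.update(revs)
            if self._undecided is not None:
                # the real code removes all ancestors of revs as well
                self._undecided.difference_update(revs)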
Tue, 16 Apr 2019 01:16:39 +0200 rust-discovery: using the children cache in add_missing
Georges Racinet <georges.racinet@octobus.net> [Tue, 16 Apr 2019 01:16:39 +0200] rev 42759
rust-discovery: using the children cache in add_missing

The DAG range computation often needs to get back to very old revisions, and turns out to be disproportionately long given that the end goal is to remove the descendants of the given missing revisions from the undecided set. The fast iteration capabilities available on the Rust side make it possible to avoid the DAG range entirely, at the cost of precomputing the children cache, and to simply iterate on the children of the given missing revisions. This is a case where staying on the same side of the interface between the two languages has clear benefits.

On discoveries with initial undecided sets small enough to bypass sampling entirely, the total cost of computing the children cache and the subsequent iteration becomes better than the Python + C counterpart, which relies on reachableroots2. For example, on a repo with more than one million revisions and an initial undecided set of 11 elements, we get these figures:

Rust version with simple iteration
  addcommons: 57.287us
  first undecided computation: 184.278334ms
  first children cache computation: 131.056us
  addmissings iteration: 42.766us
  first addinfo total: 185.24 ms

Python + C version
  first addcommons: 0.29 ms
  addcommons: 0.21 ms
  first undecided computation: 191.35 ms
  addmissings: 45.75 ms
  first addinfo total: 237.77 ms

On discoveries with large undecided sets, the initial price paid makes the first addinfo slower than the Python + C version, but that's more than compensated by the gain in sampling and subsequent iterations. Here's an extreme example with an undecided set of a million revisions:

Rust version:
  first undecided computation: 293.842629ms
  first children cache computation: 407.911297ms
  addmissings iteration: 34.312869ms
  first addinfo total: 776.02 ms
  taking initial sample
  query 2: sampling time: 1318.38 ms
  query 2; still undecided: 1005013, sample size is: 200
  addmissings: 143.062us

Python + C version:
  first undecided computation: 298.13 ms
  addmissings: 80.13 ms
  first addinfo total: 399.62 ms
  taking initial sample
  query 2: sampling time: 3957.23 ms
  query 2; still undecided: 1005013, sample size is: 200
  addmissings: 52.88 ms

Differential Revision: https://phab.mercurial-scm.org/D6428
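
A hedged Python sketch of the approach: build the children mapping once, then simply walk it from the given missing revisions to drop them and their descendants from the undecided set, instead of computing a DAG range. Function and variable names are made up for the example; the real implementation is the Rust add_missing path:

    def removedescendants(undecided, missing, parentsfn):
        """Remove `missing` (assumed to lie within `undecided`) and all their
        descendants from `undecided`, in place.

        `parentsfn(rev)` returns the parents of `rev`; revisions are integers
        numbered so that parents are lower than their children.
        """
        # one-shot children cache, restricted to the undecided set
        children = {rev: [] for rev in undecided}
        for rev in sorted(undecided):
            for parent in parentsfn(rev):
                if parent in children:
                    children[parent].append(rev)
        # simple iteration on children instead of a full DAG range computation
        queue = list(missing)
        while queue:
            rev = queue.pop()
            if rev not in undecided:
                continue
            undecided.discard(rev)
            queue.extend(children[rev])

    # tiny example on a linear graph 0 <- 1 <- 2 <- 3
    parents = {0: (), 1: (0,), 2: (1,), 3: (2,)}
    undecided = {0, 1, 2, 3}
    removedescendants(undecided, {2}, lambda rev: parents[rev])
    print(undecided)  # {0, 1}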
Tue, 21 May 2019 17:44:15 +0200 discovery: new devel.discovery.randomize option
Georges Racinet <georges.racinet@octobus.net> [Tue, 21 May 2019 17:44:15 +0200] rev 42758
discovery: new devel.discovery.randomize option

By default, this is True, but setting it to False is a uniform way to kill all randomness in integration tests such as test-setdiscovery.t. By "uniform" we mean that it can be passed to implementations in other languages, for which the monkey-patching of random.sample would be irrelevant.

In the above-mentioned test file, we use it right away, replacing the ad hoc extension that had the same purpose, and to derandomize a case with many round-trips that we'll need to behave consistently in the Rust version.

Differential Revision: https://phab.mercurial-scm.org/D6427
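
A short Python sketch of how such a knob can flow from configuration into the sampling code. Only the option name (devel.discovery.randomize) comes from this changeset; the samplerevs() helper and its deterministic fallback strategy are made up for the example:

    import random

    def samplerevs(ui, revs, size):
        """Pick at most `size` revisions out of `revs`, honoring the option.

        `ui` is a Mercurial ui object; everything else is illustrative.
        """
        revs = sorted(revs)
        if len(revs) <= size:
            return revs
        if ui.configbool(b'devel', b'discovery.randomize', True):
            return random.sample(revs, size)
        # derandomized path for tests: arbitrary but stable, and expressible
        # in any implementation language (no random.sample to monkey-patch)
        return revs[:size]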
Tue, 21 May 2019 17:43:55 +0200 rust-discovery: optionally don't randomize at all, for tests
Georges Racinet <georges.racinet@octobus.net> [Tue, 21 May 2019 17:43:55 +0200] rev 42757
rust-discovery: optionally don't randomize at all, for tests

As seen from Python, this is a new `randomize` kwarg in the init of the discovery object. It replaces random picking with some arbitrary yet deterministic strategy. This is the same as what test-setdiscovery.t does, with the added benefit of being usable in both the Python and Rust implementations.

Differential Revision: https://phab.mercurial-scm.org/D6426
Fri, 17 May 2019 01:56:57 +0200 rust-discovery: exposing sampling to python
Georges Racinet <georges.racinet@octobus.net> [Fri, 17 May 2019 01:56:57 +0200] rev 42756
rust-discovery: exposing sampling to python

Differential Revision: https://phab.mercurial-scm.org/D6425
Fri, 17 May 2019 01:56:57 +0200 rust-discovery: takefullsample() core implementation
Georges Racinet <georges.racinet@octobus.net> [Fri, 17 May 2019 01:56:57 +0200] rev 42755
rust-discovery: takefullsample() core implementation

take_full_sample() browses the undecided set in both directions: from its roots as well as from its heads. Following what's done on the Python side, we alter the update_sample() signature to take a closure returning an iterator: either ParentsIterator or an iterator over the children found in `children_cache`.

These constructs should probably be split off into a separate module. This is a first concrete example where a more abstract graph notion (probably a trait) would be useful, as this is nothing but an operation on the reversed DAG. A similar motivation in the context of the discovery process would be to replace the call to dagops::range in `add_missing_revisions()` with a simple iteration over descendants, again an operation on the reversed graph.

Differential Revision: https://phab.mercurial-scm.org/D6424
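
A small Python sketch of the structural point above: one sampling walk can serve both directions when the caller passes a function returning an iterator over the "next" revisions (parents for one direction, children for the other). In the Rust code that role is played by the closure yielding either a ParentsIterator or an iterator over the children cache; the names and the much-simplified walk below are made up for the example:

    def updatesample(candidates, sample, nextrevsfn, quicksamplesize=0):
        """Grow `sample` with revisions reachable from `candidates` through
        `nextrevsfn`, optionally stopping once `quicksamplesize` is reached."""
        seen = set()
        queue = list(candidates)
        while queue:
            rev = queue.pop()
            if rev in seen:
                continue
            seen.add(rev)
            sample.add(rev)
            if quicksamplesize and len(sample) >= quicksamplesize:
                break
            queue.extend(nextrevsfn(rev))
        return sample

    parents = {0: (), 1: (0,), 2: (1,), 3: (1,)}
    children = {0: (1,), 1: (2, 3), 2: (), 3: ()}
    # walk from the heads towards the roots...
    print(updatesample([2, 3], set(), lambda rev: parents[rev]))
    # ...or from the roots towards the heads, with the exact same code
    print(updatesample([0], set(), lambda rev: children[rev]))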
Fri, 17 May 2019 01:56:56 +0200 rust-discovery: core implementation for take_quick_sample()
Georges Racinet <georges.racinet@octobus.net> [Fri, 17 May 2019 01:56:56 +0200] rev 42754
rust-discovery: core implementation for take_quick_sample()

In particular, this means `rand` is no longer merely a testing dependency. We keep a seedable random generator on the `PartialDiscovery` object itself, to avoid lengthy initialization.

In take_quick_sample() itself, we had to avoid keeping a reference to `self.undecided`, to cope with the mutable reference introduced by the call to `limit_sample`, but it's still manageable without resorting to inner mutability.

Since sampling is likely to be improved in the mid-term future, testing is minimal, amounting to checking which code path got executed.

Differential Revision: https://phab.mercurial-scm.org/D6423
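
A minimal Python analogue of the design point above: a seedable random generator kept on the discovery object itself, so that repeated sampling rounds reuse it and tests can make runs reproducible. Class and method names are illustrative:

    import random

    class QuickSampler(object):
        def __init__(self, seed=None):
            # one generator for the whole discovery session: no repeated
            # initialization cost, and a fixed seed gives deterministic tests
            self._rng = random.Random(seed)

        def limitsample(self, revs, size):
            """Return at most `size` revisions picked from `revs`."""
            revs = sorted(revs)
            if len(revs) <= size:
                return revs
            return self._rng.sample(revs, size)

    # reproducible across runs thanks to the explicit seed
    sampler = QuickSampler(seed=1)
    print(sampler.limitsample(range(1000), 5))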
Wed, 12 Jun 2019 14:31:41 +0100 rust-discovery: read the index from a repo passed at init
Georges Racinet <georges.racinet@octobus.net> [Wed, 12 Jun 2019 14:31:41 +0100] rev 42753
rust-discovery: read the index from a repo passed at init

This makes the API of the Rust PartialDiscovery object the same as (or rather a subset of) that of the Python object, hence easier to control through module policy down the road.

Differential Revision: https://phab.mercurial-scm.org/D6517
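
A tiny Python sketch of the API shape this describes; only the idea (pass the repo at init and read the index from it) comes from the changeset, while the class name and attributes are placeholders:

    class PartialDiscoverySketch(object):
        def __init__(self, repo, targetheads):
            # the index is read from the repo here, instead of being handed
            # over separately by the caller; this mirrors the Python class
            self._index = repo.changelog.index
            self._targetheads = targetheads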
Wed, 12 Jun 2019 14:18:12 +0100 rust-discovery: accept the new 'respectsize' init arg
Georges Racinet <georges.racinet@octobus.net> [Wed, 12 Jun 2019 14:18:12 +0100] rev 42752
rust-discovery: accept the new 'respectsize' init arg

At this stage, we don't do anything about it: it will be meaningful in sampling methods that aren't implemented yet.

Differential Revision: https://phab.mercurial-scm.org/D6516