setdiscovery: remove unnecessary sample size limiting
Both _takequicksample() and _takefullsample() already limit their
result to the request size, so there's no need to let the caller do
that again.
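The idea the commit describes can be illustrated with a minimal sketch. This is not Mercurial's actual `_takequicksample()`/`_takefullsample()` code; `take_sample` is a hypothetical stand-in showing a sampler that already caps its own result, so the caller never needs to truncate again:

```python
import random

def take_sample(nodes, size):
    """Return at most `size` elements of `nodes`.

    Illustrative only: like the samplers in the commit, the cap is
    applied here, so callers need not re-limit the result.
    """
    nodes = list(nodes)
    if len(nodes) <= size:
        return nodes
    return random.sample(nodes, size)
```

With this invariant inside the sampler, a caller-side `sample = sample[:size]` is dead code and can be deleted.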
Differential Revision: https://phab.mercurial-scm.org/D2645
setdiscovery: remove initialsamplesize from a condition
It seems more direct to compare the actual sample size. That way we
can change the sample taken earlier in the code without breaking the
condition.
Differential Revision: https://phab.mercurial-scm.org/D2644
setdiscovery: back out changeset 5cfdf6137af8 (issue5809)
As explained in the bug report, this commit caused a performance
regression. The problem occurs when the local repo has very many
heads. Before 5cfdf6137af8, we used to get the remote's list of heads
and if these heads mostly overlapped with the local repo's heads, we
would mark these common heads as common, which would greatly reduce
the size of the set of undecided nodes.
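The effect described above can be sketched as follows. This is a simplified model, not Mercurial's discovery code: `shrink_undecided` and `ancestors_of` are hypothetical names, and real discovery works on revision numbers and revlogs rather than plain sets:

```python
def shrink_undecided(local_nodes, remote_heads, ancestors_of):
    """Sketch of the pre-5cfdf6137af8 shortcut: any remote head that we
    also have locally is common, and so are all of its ancestors.
    Marking them up front shrinks the set of nodes that the sampling
    rounds still have to classify."""
    common = set()
    for head in remote_heads:
        if head in local_nodes:
            common.add(head)
            common |= ancestors_of(head)
    return set(local_nodes) - common
```

When local and remote heads mostly overlap, nearly everything is classified in this one step, which is why removing the shortcut regressed repos with many shared heads.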
Note that a similar problem existed before 5cfdf6137af8: if the local
repo had very many heads and the server just had a few (or many heads
from a disjoint set), we would do the same kind of slow discovery as
we would with 5cfdf6137af8 in the case where local and remote repos
share a large set of common nodes.
For now, we just back out 5cfdf6137af8. We should improve the
discovery in the "local has many heads, remote has few heads" case,
but let's do that after backing this out.
Differential Revision: https://phab.mercurial-scm.org/D2643