setdiscovery: back out changeset 5cfdf6137af8 (issue5809)
As explained in the bug report, this commit caused a performance
regression. The problem occurs when the local repo has very many
heads. Before 5cfdf6137af8, we used to get the remote's list of heads,
and if those heads mostly overlapped with the local repo's heads, we
would mark the overlapping heads as common, which would greatly reduce
the size of the set of undecided nodes.
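The effect of marking overlapping heads as common can be sketched as
follows. This is an illustrative model only, not Mercurial's actual
discovery code; the function and its arguments are hypothetical:

```python
# Hypothetical sketch (not Mercurial's implementation) of why marking
# heads shared by both sides as common shrinks the undecided set.

def undecided_after_marking_common(local_nodes, local_heads,
                                   remote_heads, ancestors):
    """Return the nodes still undecided after common heads are marked.

    `ancestors` maps each node to the set of its ancestors, including
    itself. Any node reachable from a head known to both sides is
    known-common and never needs to be sampled during discovery.
    """
    common_heads = local_heads & remote_heads
    common = set()
    for h in common_heads:
        common |= ancestors[h]
    return local_nodes - common
```

When local and remote heads mostly overlap, almost every node falls
under a common head and the undecided set collapses to nearly nothing,
which is exactly the fast path this backout restores.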
Note that a similar problem existed before 5cfdf6137af8: if the local
repo had very many heads and the server just had a few (or many heads
from a disjoint set), we would do the same kind of slow discovery as
we would with 5cfdf6137af8 in the case where local and remote repos
share a large set of common nodes.
For now, we just back out 5cfdf6137af8. We should improve the
discovery in the "local has many heads, remote has few heads" case,
but let's do that after backing this out.
Differential Revision: https://phab.mercurial-scm.org/D2643
--- a/mercurial/discovery.py Sun Mar 04 13:04:12 2018 -0500
+++ b/mercurial/discovery.py Sun Mar 04 07:37:08 2018 -0800
@@ -57,7 +57,7 @@
     if all(knownnode(h) for h in heads):
         return (heads, False, heads)
 
-    res = setdiscovery.findcommonheads(repo.ui, repo, remote, heads,
+    res = setdiscovery.findcommonheads(repo.ui, repo, remote,
                                        abortwhenunrelated=not force,
                                        ancestorsof=ancestorsof)
     common, anyinc, srvheads = res
--- a/mercurial/setdiscovery.py Sun Mar 04 13:04:12 2018 -0500
+++ b/mercurial/setdiscovery.py Sun Mar 04 07:37:08 2018 -0800
@@ -130,7 +130,7 @@
         sample = set(random.sample(sample, desiredlen))
     return sample
 
-def findcommonheads(ui, local, remote, heads=None,
+def findcommonheads(ui, local, remote,
                     initialsamplesize=100,
                     fullsamplesize=200,
                     abortwhenunrelated=True,
@@ -155,15 +155,11 @@
     sample = _limitsample(ownheads, initialsamplesize)
     # indices between sample and externalized version must match
     sample = list(sample)
-    if heads:
-        srvheadhashes = heads
-        yesno = remote.known(dag.externalizeall(sample))
-    else:
-        batch = remote.iterbatch()
-        batch.heads()
-        batch.known(dag.externalizeall(sample))
-        batch.submit()
-        srvheadhashes, yesno = batch.results()
+    batch = remote.iterbatch()
+    batch.heads()
+    batch.known(dag.externalizeall(sample))
+    batch.submit()
+    srvheadhashes, yesno = batch.results()
 
     if cl.tip() == nullid:
         if srvheadhashes != [nullid]:
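For context, the batched-request pattern restored in setdiscovery.py
(queue several wire-protocol calls, submit them in one round trip, then
read the results in order) can be sketched with stand-in classes. These
are illustrative only; `Batch` and `FakeServer` are hypothetical and
are not Mercurial's peer API:

```python
# Toy model of batching two remote calls (heads + known) into a single
# round trip. Not Mercurial's implementation; names are hypothetical.

class Batch(object):
    def __init__(self, server):
        self._server = server
        self._calls = []
        self._results = None

    def heads(self):
        self._calls.append(('heads', ()))

    def known(self, nodes):
        self._calls.append(('known', (nodes,)))

    def submit(self):
        # One "round trip": execute every queued call against the server.
        self._results = [getattr(self._server, name)(*args)
                         for name, args in self._calls]

    def results(self):
        return tuple(self._results)

class FakeServer(object):
    def __init__(self, heads, known_nodes):
        self._heads = heads
        self._known = known_nodes

    def heads(self):
        return self._heads

    def known(self, nodes):
        return [n in self._known for n in nodes]
```

With batching, the client gets both the server's heads and the
membership answers for its sample without paying two network round
trips, which matters when discovery runs many sampling rounds.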