mercurial/setdiscovery.py @ 22196:23fe278bde43
largefiles: keep largefiles from colliding with normal one during linear merge
Before this patch, linear merging of a modified or newly added largefile
produces an unexpected result if (1) the largefile collides with a normal
file of the same name in the target revision and (2) the "local" largefile
is chosen, even though branch merging between such revisions does not
misbehave this way.
The expected result of such a linear merge is:
(1) the (not yet recorded) largefile is kept in the working directory
(2) the largefile is marked as (re-)"added"
(3) the colliding normal file is marked as "removed"
But the actual result is:
(1) the largefile in the working directory is unlinked
(2) the largefile is marked as "normal" (and so treated as "missing")
(3) the dirstate entry for the colliding normal file is just dropped
(1) is very serious, because there is no way to restore temporarily
modified largefiles.
(3) prevents the next commit from building a manifest with the correct
"removal of (normal) file" information for the newly created changeset.
The root cause of this problem is putting "lfile" into "actions['r']"
in the linear-merge case. In a linear merge, "actions['r']" causes:
- unlinking the "target file" in the working directory, even though in
  this case "lfile", the "target file", is also the largefile itself
- dropping the dirstate entry for the target file
"actions['f']" (= "forget") does only the latter, and this is reason
why this patch doesn't choose putting "lfile" into it instead of
"actions['r']".
This patch introduces a new action, "lfmr" (LargeFiles: Mark as
Removed), to mark the colliding normal file as "removed" without
unlinking it, as sketched below.
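
Under the same hypothetical stand-ins as the sketch above, the new
action's intended effect would look like this; only the action name
"lfmr" comes from this patch, the helper itself is illustrative:

    def apply_lfmr(repo, f):
        # 'lfmr' (LargeFiles: Mark as Removed): record f as removed in
        # the dirstate WITHOUT unlinking it, so the largefile kept in
        # the working directory survives and the next commit records
        # the removal of the colliding normal file.
        repo.dirstate.remove(f)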
This patch uses "hg debugdirstate" instead of "hg status" in the test,
because:
- choosing the "local largefile" hides the "removed" status of the
  "remote normal file" in "hg status" output, and
- "hg status" for "large2" in this case has another problem, which is
  fixed in the subsequent patch
| author | FUJIWARA Katsunori <foozy@lares.dti.ne.jp> |
|---|---|
| date | Fri, 15 Aug 2014 20:28:51 +0900 |
| parents | cdecbc5ab504 |
| children | ee45f5c2ffcc |
# setdiscovery.py - improved discovery of common nodeset for mercurial
#
# Copyright 2010 Benoit Boissinot <bboissin@gmail.com>
# and Peter Arrenbrecht <peter@arrenbrecht.ch>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""
The algorithm works in the following way. You have two repositories: local
and remote. They both contain a DAG of changelists.

The goal of the discovery protocol is to find one set of nodes, *common*,
the set of nodes shared by local and remote.

One of the issues with the original protocol was latency: it could
potentially require lots of roundtrips to discover that the local repo was
a subset of remote (which is a very common case; you usually have few
changes compared to upstream, while upstream probably had lots of
development).

The new protocol only requires one interface for the remote repo: `known()`,
which, given a set of changelists, tells you if they are present in the DAG.

The algorithm then works as follows:

 - We will be using three sets, `common`, `missing`, `unknown`. Originally
   all nodes are in `unknown`.
 - Take a sample from `unknown`, call `remote.known(sample)`
   - For each node that remote knows, move it and all its ancestors to
     `common`
   - For each node that remote doesn't know, move it and all its
     descendants to `missing`
 - Iterate until `unknown` is empty

There are a couple of optimizations. First, instead of starting with a
random sample of missing nodes, start by sending all heads; in the case
where the local repo is a subset, you computed the answer in one round
trip.

Then you can do something similar to the bisecting strategy used when
finding faulty changesets. Instead of random samples, you can try picking
nodes that will maximize the number of nodes that will be classified with
them (since all ancestors or descendants will be marked as well).
"""

from node import nullid
from i18n import _
import random
import util, dagutil

def _updatesample(dag, nodes, sample, always, quicksamplesize=0):
    # Breadth-first walk from the heads of 'nodes' (or of the whole graph),
    # adding to 'sample' the nodes whose distance from a head is a power of
    # two; nodes in 'always' are assumed to be sent anyway and are skipped.
    # if nodes is empty we scan the entire graph
    if nodes:
        heads = dag.headsetofconnecteds(nodes)
    else:
        heads = dag.heads()
    dist = {}
    visit = util.deque(heads)
    seen = set()
    factor = 1
    while visit:
        curr = visit.popleft()
        if curr in seen:
            continue
        d = dist.setdefault(curr, 1)
        if d > factor:
            factor *= 2
        if d == factor:
            if curr not in always: # need this check for the early exit below
                sample.add(curr)
                if quicksamplesize and (len(sample) >= quicksamplesize):
                    return
        seen.add(curr)
        for p in dag.parents(curr):
            if not nodes or p in nodes:
                dist.setdefault(p, d + 1)
                visit.append(p)

def _setupsample(dag, nodes, size):
    # Return (always, sample, desiredlen); 'always' holds the heads, which
    # are sent unconditionally, and a sample of None signals that no
    # further sampling is needed (everything fits in 'size') or possible
    # (the heads alone already exceed 'size').
    if len(nodes) <= size:
        return set(nodes), None, 0
    always = dag.headsetofconnecteds(nodes)
    desiredlen = size - len(always)
    if desiredlen <= 0:
        # This could be bad if there are very many heads, all unknown to the
        # server. We're counting on long request support here.
        return always, None, desiredlen
    return always, set(), desiredlen

def _takequicksample(dag, nodes, size, initial):
    # Cheap sample: a single _updatesample() pass from the heads, stopping
    # as soon as 'desiredlen' extra nodes have been collected.
    always, sample, desiredlen = _setupsample(dag, nodes, size)
    if sample is None:
        return always
    if initial:
        fromset = None
    else:
        fromset = nodes
    _updatesample(dag, fromset, sample, always, quicksamplesize=desiredlen)
    sample.update(always)
    return sample

def _takefullsample(dag, nodes, size):
    # Thorough sample: collect candidates from both the heads and (via the
    # inverted DAG) the roots, then trim or pad randomly to 'desiredlen'.
    always, sample, desiredlen = _setupsample(dag, nodes, size)
    if sample is None:
        return always
    # update from heads
    _updatesample(dag, nodes, sample, always)
    # update from roots
    _updatesample(dag.inverse(), nodes, sample, always)
    assert sample
    if len(sample) > desiredlen:
        sample = set(random.sample(sample, desiredlen))
    elif len(sample) < desiredlen:
        more = desiredlen - len(sample)
        sample.update(random.sample(list(nodes - sample - always), more))
    sample.update(always)
    return sample

def findcommonheads(ui, local, remote,
                    initialsamplesize=100,
                    fullsamplesize=200,
                    abortwhenunrelated=True):
    '''Return a tuple (common, anyincoming, remoteheads) used to identify
    missing nodes from or in remote.
    '''
    roundtrips = 0
    cl = local.changelog
    dag = dagutil.revlogdag(cl)

    # early exit if we know all the specified remote heads already
    ui.debug("query 1; heads\n")
    roundtrips += 1
    ownheads = dag.heads()
    sample = ownheads
    if remote.local():
        # stopgap until we have a proper localpeer that supports batch()
        srvheadhashes = remote.heads()
        yesno = remote.known(dag.externalizeall(sample))
    elif remote.capable('batch'):
        batch = remote.batch()
        srvheadhashesref = batch.heads()
        yesnoref = batch.known(dag.externalizeall(sample))
        batch.submit()
        srvheadhashes = srvheadhashesref.value
        yesno = yesnoref.value
    else:
        # compatibility with pre-batch, but post-known remotes during 1.9
        # development
        srvheadhashes = remote.heads()
        sample = []

    if cl.tip() == nullid:
        if srvheadhashes != [nullid]:
            return [nullid], True, srvheadhashes
        return [nullid], False, []

    # start actual discovery (we note this before the next "if" for
    # compatibility reasons)
    ui.status(_("searching for changes\n"))

    srvheads = dag.internalizeall(srvheadhashes, filterunknown=True)
    if len(srvheads) == len(srvheadhashes):
        ui.debug("all remote heads known locally\n")
        return (srvheadhashes, False, srvheadhashes,)

    if sample and util.all(yesno):
        ui.note(_("all local heads known remotely\n"))
        ownheadhashes = dag.externalizeall(ownheads)
        return (ownheadhashes, True, srvheadhashes,)

    # full blown discovery

    # own nodes where I don't know if remote knows them
    undecided = dag.nodeset()
    # own nodes I know we both know
    common = set()
    # own nodes I know remote lacks
    missing = set()

    # treat remote heads (and maybe own heads) as a first implicit sample
    # response
    common.update(dag.ancestorset(srvheads))
    undecided.difference_update(common)

    full = False
    while undecided:

        if sample:
            commoninsample = set(n for i, n in enumerate(sample) if yesno[i])
            common.update(dag.ancestorset(commoninsample, common))

            missinginsample = [n for i, n in enumerate(sample) if not yesno[i]]
            missing.update(dag.descendantset(missinginsample, missing))

            undecided.difference_update(missing)
            undecided.difference_update(common)

        if not undecided:
            break

        if full:
            ui.note(_("sampling from both directions\n"))
            sample = _takefullsample(dag, undecided, size=fullsamplesize)
        elif common:
            # use cheapish initial sample
            ui.debug("taking initial sample\n")
            sample = _takefullsample(dag, undecided, size=fullsamplesize)
        else:
            # use even cheaper initial sample
            ui.debug("taking quick initial sample\n")
            sample = _takequicksample(dag, undecided,
                                      size=initialsamplesize,
                                      initial=True)

        roundtrips += 1
        ui.progress(_('searching'), roundtrips, unit=_('queries'))
        ui.debug("query %i; still undecided: %i, sample size is: %i\n"
                 % (roundtrips, len(undecided), len(sample)))
        # indices between sample and externalized version must match
        sample = list(sample)
        yesno = remote.known(dag.externalizeall(sample))
        full = True

    result = dag.headsetofconnecteds(common)
    ui.progress(_('searching'), None)
    ui.debug("%d total queries\n" % roundtrips)
    if not result and srvheadhashes != [nullid]:
        if abortwhenunrelated:
            raise util.Abort(_("repository is unrelated"))
        else:
            ui.warn(_("warning: repository is unrelated\n"))
            return (set([nullid]), True, srvheadhashes,)

    anyincoming = (srvheadhashes != [nullid])
    return dag.externalizeall(result), anyincoming, srvheadhashes
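
To make the classification loop described in the module docstring concrete,
here is a small self-contained toy. Every name in it is hypothetical: it
uses none of Mercurial's dag/dagutil APIs and none of the head-first or
bisecting sampling optimizations above; it just partitions a linear DAG
against a remote `known()` oracle, one node per round trip:

    def toy_discovery(nodes, parents, children, remote_known):
        unknown = set(nodes)
        common, missing = set(), set()
        while unknown:
            sample = [max(unknown)]  # crude sampling: one node per query
            for node, known in zip(sample, remote_known(sample)):
                stack = [node]
                if known:
                    # remote has it: it and all its ancestors are common
                    while stack:
                        n = stack.pop()
                        if n not in common:
                            common.add(n)
                            stack.extend(parents(n))
                else:
                    # remote lacks it: it and all its descendants are missing
                    while stack:
                        n = stack.pop()
                        if n not in missing:
                            missing.add(n)
                            stack.extend(children(n))
            unknown -= common | missing
        return common, missing

    nodes = range(10)  # linear DAG: 0 -> 1 -> ... -> 9
    parents = lambda n: [n - 1] if n > 0 else []
    children = lambda n: [n + 1] if n < 9 else []
    remote_known = lambda sample: [n <= 5 for n in sample]  # remote has 0..5
    print(toy_discovery(nodes, parents, children, remote_known))
    # common = {0, 1, 2, 3, 4, 5}, missing = {6, 7, 8, 9}

The real code differs mainly in the sampling: it sends all heads first
(answering the common subset case in one round trip) and then samples many
nodes per query, so the number of round trips grows far more slowly than in
this one-node-at-a-time toy.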