mercurial/discovery.py @ 14365:a8e3931e3fb5
revlog: linearize created changegroups in generaldelta revlogs
This greatly improves the speed of the bundling process and often reduces the
bundle size considerably. (If the repository is already ordered, it has little
effect on either time or bundle size.)
For non-generaldelta clients, the reduced bundle size translates to a reduced
repository size, similar to shrinking the revlogs (which uses the exact same
algorithm). For generaldelta clients the difference is minor.
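At its heart, the reordering is a linearization of the delta graph: write each
revision after its delta base, so that chains can be resolved sequentially on
the receiving side. A minimal sketch of that idea follows; the deltaparent
callable is a hypothetical helper, and the function is an illustration rather
than the actual bundler code:

def linearize(revs, deltaparent):
    """Order revs so every revision comes after its delta base.

    revs: iterable of revision numbers in revlog order
    deltaparent: maps a rev to the rev its delta is stored against,
                 or -1 for a full snapshot (hypothetical helper)
    """
    revset = set(revs)
    children = {}
    roots = []
    for r in sorted(revs):
        p = deltaparent(r)
        if p in revset:
            children.setdefault(p, []).append(r)
        else:
            roots.append(r)  # full snapshot, or base not in this bundle

    # Depth-first emission guarantees that each delta base is written
    # before any revision whose delta refers to it.
    order = []
    stack = list(reversed(roots))
    while stack:
        r = stack.pop()
        order.append(r)
        stack.extend(reversed(children.get(r, [])))
    return order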
When the new bundle format comes, reordering will not be necessary since we
can then store the deltaparent relationships directly. The eventual default
behavior for clients and servers is presented in the table below, where "new"
implies support for GD as well as the new bundle format (footnotes and a short
code sketch of the decision follow the table):
                    old client                 new client
old server          old bundle, no reorder     old bundle, no reorder
new server, non-GD  old bundle, no reorder[1]  old bundle, no reorder[2]
new server, GD      old bundle, reorder[3]     new bundle, no reorder[4]
[1] reordering is expensive on the server in this case, skip it
[2] client can choose to do its own redelta here
[3] reordering is needed because otherwise the pull does a lot of extra work on the server
[4] reordering isn't needed because the client can get the deltabase in the bundle format
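In code form, the server-side half of that matrix reduces to a small
predicate. The following is a minimal sketch using illustrative names
(usegeneraldelta, clienthasnewbundle, reorder_setting); it is not the actual
Mercurial API:

def shouldreorder(usegeneraldelta, clienthasnewbundle, reorder_setting):
    """Decide whether the server reorders revisions while bundling.

    usegeneraldelta:    server revlogs use generaldelta (GD)
    reorder_setting:    None for 'auto', else an explicit boolean override
    clienthasnewbundle: client supports a format carrying delta parents
    """
    if reorder_setting is not None:
        return reorder_setting  # explicit bundle.reorder setting wins
    if not usegeneraldelta:
        return False            # cases [1] and [2]: reordering too expensive
    if clienthasnewbundle:
        return False            # case [4]: deltabase travels in the bundle
    return True                 # case [3]: avoid extra work on the server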
Currently, the default is to reorder on GD servers and not otherwise. A new
setting, bundle.reorder, has been added to override this default. It can be
set to 'auto' (the default) or to any standard boolean value, forcing the
reordering on or off regardless of generaldelta.
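Reading such a tri-state setting could look roughly like the sketch below.
ui.config and util.parsebool exist in Mercurial of this vintage, but the
wrapper function itself is illustrative:

from mercurial import util

def reordersetting(ui):
    # bundle.reorder: 'auto' defers to the generaldelta-based default;
    # any other value is parsed as a boolean that forces reordering on or off.
    reorder = ui.config('bundle', 'reorder', 'auto')
    if reorder == 'auto':
        return None
    return util.parsebool(reorder)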
Some timing data from a relatively branchy test repository follows. All
bundling is done with the --all --type none options.
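(Concretely, something along the lines of "hg bundle --all --type none
../test.hg", where the output path is only illustrative.)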
Non-generaldelta, non-shrunk repo:
-----------------------------------
Size: 276M
Without reorder (default):
Bundle time: 14.4 seconds
Bundle size: 939M
With reorder:
Bundle time: 1 minute, 29.3 seconds
Bundle size: 381M
Generaldelta, non-shrunk repo:
-----------------------------------
Size: 87M
Without reorder:
Bundle time: 2 minutes, 1.4 seconds
Bundle size: 939M
With reorder (default):
Bundle time: 25.5 seconds
Bundle size: 381M
author    Sune Foldager <cryo@cyanite.org>
date      Wed, 18 May 2011 23:26:26 +0200
parents   30273f0c776b
children  97d2259af787
# discovery.py - protocol changeset discovery functions
#
# Copyright 2010 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from node import nullid, short
from i18n import _
import util, setdiscovery, treediscovery

def findcommonincoming(repo, remote, heads=None, force=False):
    """Return a tuple (common, anyincoming, heads) used to identify the common
    subset of nodes between repo and remote.

    "common" is a list of (at least) the heads of the common subset.
    "anyincoming" is testable as a boolean indicating if any nodes are missing
      locally. If remote does not support getbundle, this actually is a list of
      roots of the nodes that would be incoming, to be supplied to
      changegroupsubset. No code except for pull should be relying on this fact
      any longer.
    "heads" is either the supplied heads, or else the remote's heads.

    If you pass heads and they are all known locally, the response lists just
    these heads in "common" and in "heads".

    Please use findcommonoutgoing to compute the set of outgoing nodes to give
    extensions a good hook into outgoing.
    """

    if not remote.capable('getbundle'):
        return treediscovery.findcommonincoming(repo, remote, heads, force)

    if heads:
        allknown = True
        nm = repo.changelog.nodemap
        for h in heads:
            if nm.get(h) is None:
                allknown = False
                break
        if allknown:
            return (heads, False, heads)

    res = setdiscovery.findcommonheads(repo.ui, repo, remote,
                                       abortwhenunrelated=not force)
    common, anyinc, srvheads = res
    return (list(common), anyinc, heads or list(srvheads))

def findcommonoutgoing(repo, other, onlyheads=None, force=False,
                       commoninc=None):
    '''Return a tuple (common, heads) used to identify the set of nodes
    present in repo but not in other.

    If onlyheads is given, only nodes ancestral to nodes in onlyheads
    (inclusive) are included. If you already know the local repo's heads,
    passing them in onlyheads is faster than letting them be recomputed here.

    If commoninc is given, it must be the result of a prior call to
    findcommonincoming(repo, other, force) to avoid recomputing it here.

    The returned tuple is meant to be passed to changelog.findmissing.'''
    common, _any, _hds = commoninc or findcommonincoming(repo, other,
                                                         force=force)
    return (common, onlyheads or repo.heads())

def prepush(repo, remote, force, revs, newbranch):
    '''Analyze the local and remote repositories and determine which
    changesets need to be pushed to the remote. Return value depends
    on circumstances:

    If we are not going to push anything, return a tuple (None, outgoing)
    where outgoing is 0 if there are no outgoing changesets and 1 if there
    are, but we refuse to push them (e.g. would create new remote heads).

    Otherwise, return a tuple (changegroup, remoteheads), where changegroup
    is a readable file-like object whose read() returns successive
    changegroup chunks ready to be sent over the wire and remoteheads is
    the list of remote heads.'''
    commoninc = findcommonincoming(repo, remote, force=force)
    common, revs = findcommonoutgoing(repo, remote, onlyheads=revs,
                                      commoninc=commoninc, force=force)
    _common, inc, remoteheads = commoninc

    cl = repo.changelog
    outg = cl.findmissing(common, revs)

    if not outg:
        repo.ui.status(_("no changes found\n"))
        return None, 1

    if not force and remoteheads != [nullid]:
        if remote.capable('branchmap'):
            # Check for each named branch if we're creating new remote heads.
            # To be a remote head after push, node must be either:
            # - unknown locally
            # - a local outgoing head descended from update
            # - a remote head that's known locally and not
            #   ancestral to an outgoing head

            # 1. Create set of branches involved in the push.
            branches = set(repo[n].branch() for n in outg)

            # 2. Check for new branches on the remote.
            remotemap = remote.branchmap()
            newbranches = branches - set(remotemap)
            if newbranches and not newbranch: # new branch requires --new-branch
                branchnames = ', '.join(sorted(newbranches))
                raise util.Abort(_("push creates new remote branches: %s!")
                                   % branchnames,
                                 hint=_("use 'hg push --new-branch' to create"
                                        " new remote branches"))
            branches.difference_update(newbranches)

            # 3. Construct the initial oldmap and newmap dicts.
            # They contain information about the remote heads before and
            # after the push, respectively.
            # Heads not found locally are not included in either dict,
            # since they won't be affected by the push.
            # unsynced contains all branches with incoming changesets.
            oldmap = {}
            newmap = {}
            unsynced = set()
            for branch in branches:
                remotebrheads = remotemap[branch]
                prunedbrheads = [h for h in remotebrheads if h in cl.nodemap]
                oldmap[branch] = prunedbrheads
                newmap[branch] = list(prunedbrheads)
                if len(remotebrheads) > len(prunedbrheads):
                    unsynced.add(branch)

            # 4. Update newmap with outgoing changes.
            # This will possibly add new heads and remove existing ones.
            ctxgen = (repo[n] for n in outg)
            repo._updatebranchcache(newmap, ctxgen)

        else:
            # 1-4b. old servers: Check for new topological heads.
            # Construct {old,new}map with branch = None (topological branch).
            # (code based on _updatebranchcache)
            oldheads = set(h for h in remoteheads if h in cl.nodemap)
            newheads = oldheads.union(outg)
            if len(newheads) > 1:
                for latest in reversed(outg):
                    if latest not in newheads:
                        continue
                    minhrev = min(cl.rev(h) for h in newheads)
                    reachable = cl.reachable(latest, cl.node(minhrev))
                    reachable.remove(latest)
                    newheads.difference_update(reachable)
            branches = set([None])
            newmap = {None: newheads}
            oldmap = {None: oldheads}
            unsynced = inc and branches or set()

        # 5. Check for new heads.
        # If there are more heads after the push than before, a suitable
        # error message, depending on unsynced status, is displayed.
        error = None
        for branch in branches:
            newhs = set(newmap[branch])
            oldhs = set(oldmap[branch])
            if len(newhs) > len(oldhs):
                if error is None:
                    if branch:
                        error = _("push creates new remote heads "
                                  "on branch '%s'!") % branch
                    else:
                        error = _("push creates new remote heads!")
                    if branch in unsynced:
                        hint = _("you should pull and merge or "
                                 "use push -f to force")
                    else:
                        hint = _("did you forget to merge? "
                                 "use push -f to force")
                if branch:
                    repo.ui.debug("new remote heads on branch '%s'\n" % branch)
                for h in (newhs - oldhs):
                    repo.ui.debug("new remote head %s\n" % short(h))
        if error:
            raise util.Abort(error, hint=hint)

        # 6. Check for unsynced changes on involved branches.
        if unsynced:
            repo.ui.warn(_("note: unsynced remote changes!\n"))

    if revs is None:
        # use the fast path, no race possible on push
        cg = repo._changegroup(outg, 'push')
    else:
        cg = repo.getbundle('push', heads=revs, common=common)
    return cg, remoteheads
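As a usage sketch, an extension holding a local repository repo and a
connected peer other might drive these helpers roughly as follows; the caller
shown here is hypothetical, not part of this module:

# Hypothetical caller of the discovery API above.
common, anyinc, heads = findcommonincoming(repo, other)
if anyinc:
    repo.ui.status("remote has changesets we lack\n")

# Reuse the incoming result to avoid a second discovery round-trip.
common, outheads = findcommonoutgoing(repo, other,
                                      commoninc=(common, anyinc, heads))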