view mercurial/dagutil.py @ 30779:38aa1ca97b6a
repair: migrate revlogs during upgrade
Our next step for in-place upgrade is to migrate store data. Revlogs
are the biggest source of data within the store and a store is useless
without them, so we implement their migration first.
Our strategy for migrating revlogs is to walk the store and call
`revlog.clone()` on each revlog. There are some minor complications.
Because revlogs have different storage options (e.g. changelog has
generaldelta and delta chains disabled), we need to obtain the
correct class of revlog so inserted data is encoded properly for its
type.
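As a rough illustration of that dispatch (a sketch only; the helper name and details are hypothetical, not the actual repair.py code), the upgrade can map store paths to revlog classes before cloning:

  # Hypothetical sketch: choose the revlog class from the store path so
  # cloned revisions are re-encoded with the right per-revlog options.
  from mercurial import changelog, filelog, manifest

  def _revlogforpath(repo, path):
      """Return a revlog of the appropriate class for a store path."""
      if path == '00changelog.i':
          # the changelog disables generaldelta and delta chains
          return changelog.changelog(repo.svfs)
      elif path == '00manifest.i':
          return manifest.manifestrevlog(repo.svfs)
      else:
          # filelogs live under data/ and are keyed by the tracked file path
          return filelog.filelog(repo.svfs, path[len('data/'):-len('.i')])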
I made various attempts at progress indicators that didn't lead to
frustration from false "it's almost done" readings.
I initially used a single progress bar based on the number of revlogs.
However, this quickly churned through all the filelogs, reached 99%,
then effectively froze at 99.99% when it got to the manifest.
So I converted the progress bar to total revision count. This was a
little better. But the manifest was still significantly slower than
the filelogs, and it took forever to process the last few percent.
I then tried both revision/chunk bytes and raw bytes as the
denominator. This had the opposite effect: because so much data is in
manifests, it would churn through filelogs without showing much
progress. When it got to manifests, it would fill in 90+% of the
progress bar.
I finally gave up having a unified progress bar and instead implemented
3 progress bars: 1 for filelog revisions, 1 for manifest revisions, and
1 for changelog revisions. I added extra messages indicating the total
number of revisions of each so users know there are more progress bars
coming.
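In sketch form (illustrative only; the helper and callback names below are hypothetical, not the exact repair.py code), each revlog type drives its own progress topic so no single bar stalls near the end:

  from mercurial.i18n import _

  def copyrevisions(ui, topic, revcount, copyone):
      """Copy revcount revisions while driving a dedicated progress bar."""
      ui.write(_('migrating %d %s revisions\n') % (revcount, topic))
      for i in xrange(revcount):
          ui.progress(topic, i, total=revcount, unit='revisions')
          copyone(i)
      ui.progress(topic, None)  # passing None closes this bar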
I also added extra messages before and after each stage to give extra
details about what is happening. Strictly speaking, this isn't
necessary. But the numbers are impressive. For example, when converting
a non-generaldelta mozilla-central repository, the messages you see are:
migrating 2475593 total revisions (1833043 in filelogs, 321156 in manifests, 321394 in changelog)
migrating 1.67 GB in store; 2508 GB tracked data
migrating 267868 filelogs containing 1833043 revisions (1.09 GB in store; 57.3 GB tracked data)
finished migrating 1833043 filelog revisions across 267868 filelogs; change in size: -415776 bytes
migrating 1 manifests containing 321156 revisions (518 MB in store; 2451 GB tracked data)
That "2508 GB" figure really blew me away. I had no clue that the raw
tracked data in mozilla-central was that large. Granted, 2451 GB is in
the manifest and "only" 57.3 GB is in filelogs. But still.
It's worth noting that gratuitous loading of source revlogs in order
to display numbers and progress bars does serve a purpose: it ensures
we can open all source revlogs. We don't want to spend several minutes
copying revlogs only to encounter a permissions error or similar later.
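A sketch of that pre-flight pass (hypothetical helper; it reuses the `_revlogforpath` sketch above) simply walks the store and opens each revlog before any data is copied:

  # Hypothetical pre-flight pass: opening every source revlog up front makes
  # permission or corruption errors surface before any copying starts.
  def checksourcerevlogs(repo):
      revcount = 0
      # store.walk() yields (unencoded name, encoded name, size) per file
      for unencoded, encoded, size in repo.store.walk():
          if not unencoded.endswith('.i'):
              continue
          rl = _revlogforpath(repo, unencoded)
          revcount += len(rl)
      return revcount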
As part of this commit, we also add swapping of the store directory
to the upgrade function. After revlogs are converted, we move the
old store into the backup directory then move the temporary repo's
store into the old store's location. On well-behaved systems, this
should be 2 atomic operations and the window of inconsistency should be
very narrow.
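Conceptually the swap is just two renames (a sketch with illustrative names and paths; the real code also has to deal with locks, other store files, and failure cleanup):

  import os
  from mercurial import util

  def swapstores(oldstorepath, tmpstorepath, backupdir):
      """Swap the upgraded store into place with two renames."""
      # 1) move the existing store aside into the backup directory
      util.rename(oldstorepath, os.path.join(backupdir, 'store'))
      # 2) move the freshly written store into the old location
      util.rename(tmpstorepath, oldstorepath)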
There are still a few improvements to be made to store copying and
upgrading. But this commit gets the bulk of the work out of the way.
author    Gregory Szorc <gregory.szorc@gmail.com>
date      Sun, 18 Dec 2016 17:00:15 -0800
parents   015ded095933
children  09397d0dd3b7
line source
# dagutil.py - dag utilities for mercurial
#
# Copyright 2010 Benoit Boissinot <bboissin@gmail.com>
# and Peter Arrenbrecht <peter@arrenbrecht.ch>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

from .i18n import _
from .node import nullrev

class basedag(object):
    '''generic interface for DAGs

    terms:
    "ix" (short for index) identifies a nodes internally,
    "id" identifies one externally.

    All params are ixs unless explicitly suffixed otherwise.
    Pluralized params are lists or sets.
    '''

    def __init__(self):
        self._inverse = None

    def nodeset(self):
        '''set of all node ixs'''
        raise NotImplementedError

    def heads(self):
        '''list of head ixs'''
        raise NotImplementedError

    def parents(self, ix):
        '''list of parents ixs of ix'''
        raise NotImplementedError

    def inverse(self):
        '''inverse DAG, where parents becomes children, etc.'''
        raise NotImplementedError

    def ancestorset(self, starts, stops=None):
        '''
        set of all ancestors of starts (incl), but stop walk at stops (excl)
        '''
        raise NotImplementedError

    def descendantset(self, starts, stops=None):
        '''
        set of all descendants of starts (incl), but stop walk at stops (excl)
        '''
        return self.inverse().ancestorset(starts, stops)

    def headsetofconnecteds(self, ixs):
        '''
        subset of connected list of ixs so that no node has a descendant in it

        By "connected list" we mean that if an ancestor and a descendant are in
        the list, then so is at least one path connecting them.
        '''
        raise NotImplementedError

    def externalize(self, ix):
        '''return a node id'''
        return self._externalize(ix)

    def externalizeall(self, ixs):
        '''return a list of (or set if given a set) of node ids'''
        ids = self._externalizeall(ixs)
        if isinstance(ixs, set):
            return set(ids)
        return list(ids)

    def internalize(self, id):
        '''return a node ix'''
        return self._internalize(id)

    def internalizeall(self, ids, filterunknown=False):
        '''return a list of (or set if given a set) of node ixs'''
        ixs = self._internalizeall(ids, filterunknown)
        if isinstance(ids, set):
            return set(ixs)
        return list(ixs)

class genericdag(basedag):
    '''generic implementations for DAGs'''

    def ancestorset(self, starts, stops=None):
        if stops:
            stops = set(stops)
        else:
            stops = set()
        seen = set()
        pending = list(starts)
        while pending:
            n = pending.pop()
            if n not in seen and n not in stops:
                seen.add(n)
                pending.extend(self.parents(n))
        return seen

    def headsetofconnecteds(self, ixs):
        hds = set(ixs)
        if not hds:
            return hds
        for n in ixs:
            for p in self.parents(n):
                hds.discard(p)
        assert hds
        return hds

class revlogbaseddag(basedag):
    '''generic dag interface to a revlog'''

    def __init__(self, revlog, nodeset):
        basedag.__init__(self)
        self._revlog = revlog
        self._heads = None
        self._nodeset = nodeset

    def nodeset(self):
        return self._nodeset

    def heads(self):
        if self._heads is None:
            self._heads = self._getheads()
        return self._heads

    def _externalize(self, ix):
        return self._revlog.index[ix][7]

    def _externalizeall(self, ixs):
        idx = self._revlog.index
        return [idx[i][7] for i in ixs]

    def _internalize(self, id):
        ix = self._revlog.rev(id)
        if ix == nullrev:
            raise LookupError(id, self._revlog.indexfile, _('nullid'))
        return ix

    def _internalizeall(self, ids, filterunknown):
        rl = self._revlog
        if filterunknown:
            return [r for r in map(rl.nodemap.get, ids)
                    if (r is not None
                        and r != nullrev
                        and r not in rl.filteredrevs)]
        return map(self._internalize, ids)

class revlogdag(revlogbaseddag):
    '''dag interface to a revlog'''

    def __init__(self, revlog):
        revlogbaseddag.__init__(self, revlog, set(revlog))

    def _getheads(self):
        return [r for r in self._revlog.headrevs() if r != nullrev]

    def parents(self, ix):
        rlog = self._revlog
        idx = rlog.index
        revdata = idx[ix]
        prev = revdata[5]
        if prev != nullrev:
            prev2 = revdata[6]
            if prev2 == nullrev:
                return [prev]
            return [prev, prev2]
        prev2 = revdata[6]
        if prev2 != nullrev:
            return [prev2]
        return []

    def inverse(self):
        if self._inverse is None:
            self._inverse = inverserevlogdag(self)
        return self._inverse

    def ancestorset(self, starts, stops=None):
        rlog = self._revlog
        idx = rlog.index
        if stops:
            stops = set(stops)
        else:
            stops = set()
        seen = set()
        pending = list(starts)
        while pending:
            rev = pending.pop()
            if rev not in seen and rev not in stops:
                seen.add(rev)
                revdata = idx[rev]
                for i in [5, 6]:
                    prev = revdata[i]
                    if prev != nullrev:
                        pending.append(prev)
        return seen

    def headsetofconnecteds(self, ixs):
        if not ixs:
            return set()
        rlog = self._revlog
        idx = rlog.index
        headrevs = set(ixs)
        for rev in ixs:
            revdata = idx[rev]
            for i in [5, 6]:
                prev = revdata[i]
                if prev != nullrev:
                    headrevs.discard(prev)
        assert headrevs
        return headrevs

    def linearize(self, ixs):
        '''linearize and topologically sort a list of revisions

        The linearization process tries to create long runs of revs where
        a child rev comes immediately after its first parent. This is done by
        visiting the heads of the given revs in inverse topological order,
        and for each visited rev, visiting its second parent, then its first
        parent, then adding the rev itself to the output list.
        '''
        sorted = []
        visit = list(self.headsetofconnecteds(ixs))
        visit.sort(reverse=True)
        finished = set()

        while visit:
            cur = visit.pop()
            if cur < 0:
                cur = -cur - 1
                if cur not in finished:
                    sorted.append(cur)
                    finished.add(cur)
            else:
                visit.append(-cur - 1)
                visit += [p for p in self.parents(cur)
                          if p in ixs and p not in finished]

        assert len(sorted) == len(ixs)
        return sorted

class inverserevlogdag(revlogbaseddag, genericdag):
    '''inverse of an existing revlog dag; see revlogdag.inverse()'''

    def __init__(self, orig):
        revlogbaseddag.__init__(self, orig._revlog, orig._nodeset)
        self._orig = orig
        self._children = {}
        self._roots = []
        self._walkfrom = len(self._revlog) - 1

    def _walkto(self, walkto):
        rev = self._walkfrom
        cs = self._children
        roots = self._roots
        idx = self._revlog.index
        while rev >= walkto:
            data = idx[rev]
            isroot = True
            for prev in [data[5], data[6]]: # parent revs
                if prev != nullrev:
                    cs.setdefault(prev, []).append(rev)
                    isroot = False
            if isroot:
                roots.append(rev)
            rev -= 1
        self._walkfrom = rev

    def _getheads(self):
        self._walkto(nullrev)
        return self._roots

    def parents(self, ix):
        if ix is None:
            return []
        if ix <= self._walkfrom:
            self._walkto(ix)
        return self._children.get(ix, [])

    def inverse(self):
        return self._orig
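For context, a minimal usage sketch of this module (assuming `repo` is an already-open localrepository object):

  from mercurial import dagutil

  dag = dagutil.revlogdag(repo.changelog)
  heads = dag.heads()                    # head revision indexes ("ixs")
  nodes = dag.externalizeall(heads)      # convert ixs to node ids
  order = dag.linearize(dag.nodeset())   # parents sorted before children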