repair: begin implementation of in-place upgrading
Now that all the upgrade planning work is in place, we can start
doing the real work: actually upgrading a repository.
The main goal of this commit is to establish the framework for
running in-place upgrade actions.
Rather than get too clever and low-level with in-place upgrades,
our strategy is to create a new, temporary repository, copy data
into it, then replace the old data with the new. This allows us to
reuse a lot of the store-interaction code in localrepo.py, which
should eventually carry the bulk of the upgrade work.
But we have to start small. This patch only implements adding new
repository requirements. Even so, it sets up a temporary repository
and locks both it and the source repo before performing the
requirements file swap, so all the plumbing is in place to
implement store copying in subsequent commits.
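The backup-then-swap sequence the patch performs on the requires file can be sketched as a standalone Python example. This is a minimal sketch only: plain file I/O stands in for Mercurial's vfs and scmutil helpers, and the `swap_requirements` name and `sentinel` value are illustrative, not part of the actual API.

```python
import os
import shutil
import tempfile

def swap_requirements(repopath, newreqs, sentinel='upgradeinprogress'):
    """Sketch of the in-place requirements swap.

    Backs up the old requires file, writes a sentinel requirement to
    lock out clients during the swap, then writes the final set.
    """
    requires = os.path.join(repopath, 'requires')
    oldreqs = set(open(requires).read().splitlines())

    # Back up the requires file first, as it is the first file modified.
    backupdir = tempfile.mkdtemp(prefix='upgradebackup.', dir=repopath)
    shutil.copyfile(requires, os.path.join(backupdir, 'requires'))

    # Install an arbitrary requirement no client supports, so readers
    # are locked out while the repository is in an inconsistent state.
    with open(requires, 'w') as fh:
        fh.write(''.join('%s\n' % r for r in sorted(oldreqs | {sentinel})))

    # ... the store copy would happen here ...

    # Finalize: write the real requirements, making the repo readable again.
    with open(requires, 'w') as fh:
        fh.write(''.join('%s\n' % r for r in sorted(newreqs)))
    return backupdir

# Usage sketch: add generaldelta to a toy repo's requirements.
repo = tempfile.mkdtemp()
with open(os.path.join(repo, 'requires'), 'w') as fh:
    fh.write('revlogv1\nstore\n')
backup = swap_requirements(repo, {'revlogv1', 'store', 'generaldelta'})
print(open(os.path.join(repo, 'requires')).read())
```

If the process aborts between the sentinel write and the final write, a legacy client refuses to read the repository rather than seeing half-swapped data, while the backup directory preserves the original requirements.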
--- a/mercurial/repair.py Sun Dec 18 16:51:09 2016 -0800
+++ b/mercurial/repair.py Sun Dec 18 16:59:04 2016 -0800
@@ -10,6 +10,7 @@
import errno
import hashlib
+import tempfile
from .i18n import _
from .node import short
@@ -19,6 +20,7 @@
error,
exchange,
obsolete,
+ scmutil,
util,
)
@@ -637,6 +639,50 @@
return newactions
+def _upgraderepo(ui, srcrepo, dstrepo, requirements, actions):
+ """Do the low-level work of upgrading a repository.
+
+ The upgrade is effectively performed as a copy between a source
+ repository and a temporary destination repository.
+
+ The source repository is unmodified for as long as possible so the
+ upgrade can abort at any time without causing loss of service for
+ readers and without corrupting the source repository.
+ """
+ assert srcrepo.currentwlock()
+ assert dstrepo.currentwlock()
+
+ # TODO copy store
+
+ backuppath = tempfile.mkdtemp(prefix='upgradebackup.', dir=srcrepo.path)
+ backupvfs = scmutil.vfs(backuppath)
+
+ # Make a backup of requires file first, as it is the first to be modified.
+ util.copyfile(srcrepo.join('requires'), backupvfs.join('requires'))
+
+ # We install an arbitrary requirement that clients must not support
+ # as a mechanism to lock out new clients during the data swap. This is
+ # better than allowing a client to continue while the repository is in
+ # an inconsistent state.
+ ui.write(_('marking source repository as being upgraded; clients will be '
+ 'unable to read from repository\n'))
+ scmutil.writerequires(srcrepo.vfs,
+ srcrepo.requirements | set(['upgradeinprogress']))
+
+ ui.write(_('starting in-place swap of repository data\n'))
+ ui.write(_('replaced files will be backed up at %s\n') %
+ backuppath)
+
+ # TODO do the store swap here.
+
+ # We first write the requirements file. Any new requirements will lock
+ # out legacy clients.
+ ui.write(_('finalizing requirements file and making repository readable '
+ 'again\n'))
+ scmutil.writerequires(srcrepo.vfs, requirements)
+
+ return backuppath
+
def upgraderepo(ui, repo, run=False, optimize=None):
"""Upgrade a repository in place."""
# Avoid cycle: cmdutil -> repair -> localrepo -> cmdutil
@@ -771,3 +817,43 @@
'"--optimize <name>":\n\n'))
for i in unusedoptimize:
ui.write(_('%s\n %s\n\n') % (i.name, i.description))
+ return
+
+ # Else we're in the run=true case.
+ ui.write(_('upgrade will perform the following actions:\n\n'))
+ printrequirements()
+ printupgradeactions()
+
+ ui.write(_('beginning upgrade...\n'))
+ with repo.wlock():
+ with repo.lock():
+ ui.write(_('repository locked and read-only\n'))
+ # Our strategy for upgrading the repository is to create a new,
+ # temporary repository, write data to it, then do a swap of the
+ # data. There are less heavyweight ways to do this, but it is easier
+ # to create a new repo object than to instantiate all the components
+ # (like the store) separately.
+ tmppath = tempfile.mkdtemp(prefix='upgrade.', dir=repo.path)
+ backuppath = None
+ try:
+ ui.write(_('creating temporary repository to stage migrated '
+ 'data: %s\n') % tmppath)
+ dstrepo = localrepo.localrepository(repo.baseui,
+ path=tmppath,
+ create=True)
+
+ with dstrepo.wlock():
+ with dstrepo.lock():
+ backuppath = _upgraderepo(ui, repo, dstrepo, newreqs,
+ actions)
+
+ finally:
+ ui.write(_('removing temporary repository %s\n') % tmppath)
+ repo.vfs.rmtree(tmppath, forcibly=True)
+
+ if backuppath:
+ ui.warn(_('copy of old repository backed up at %s\n') %
+ backuppath)
+ ui.warn(_('the old repository will not be deleted; remove '
+ 'it to free up disk space once the upgraded '
+ 'repository is verified\n'))
--- a/tests/test-upgrade-repo.t Sun Dec 18 16:51:09 2016 -0800
+++ b/tests/test-upgrade-repo.t Sun Dec 18 16:59:04 2016 -0800
@@ -180,3 +180,76 @@
deltas within internal storage will always be recalculated without reusing prior deltas; this will likely make execution run several times slower; this optimization is typically not needed
+ $ cd ..
+
+Upgrading a repository that is already modern essentially no-ops
+
+ $ hg init modern
+ $ hg -R modern debugupgraderepo --run
+ upgrade will perform the following actions:
+
+ requirements
+ preserved: dotencode, fncache, generaldelta, revlogv1, store
+
+ beginning upgrade...
+ repository locked and read-only
+ creating temporary repository to stage migrated data: $TESTTMP/modern/.hg/upgrade.* (glob)
+ marking source repository as being upgraded; clients will be unable to read from repository
+ starting in-place swap of repository data
+ replaced files will be backed up at $TESTTMP/modern/.hg/upgradebackup.* (glob)
+ finalizing requirements file and making repository readable again
+ removing temporary repository $TESTTMP/modern/.hg/upgrade.* (glob)
+ copy of old repository backed up at $TESTTMP/modern/.hg/upgradebackup.* (glob)
+ the old repository will not be deleted; remove it to free up disk space once the upgraded repository is verified
+
+Upgrading a repository to generaldelta works
+
+ $ hg --config format.usegeneraldelta=false init upgradegd
+ $ cd upgradegd
+ $ touch f0
+ $ hg -q commit -A -m initial
+ $ touch f1
+ $ hg -q commit -A -m 'add f1'
+ $ hg -q up -r 0
+ $ touch f2
+ $ hg -q commit -A -m 'add f2'
+
+ $ hg debugupgraderepo --run
+ upgrade will perform the following actions:
+
+ requirements
+ preserved: dotencode, fncache, revlogv1, store
+ added: generaldelta
+
+ generaldelta
+ repository storage will be able to create optimal deltas; new repository data will be smaller and read times should decrease; interacting with other repositories using this storage model should require less network and CPU resources, making "hg push" and "hg pull" faster
+
+ beginning upgrade...
+ repository locked and read-only
+ creating temporary repository to stage migrated data: $TESTTMP/upgradegd/.hg/upgrade.* (glob)
+ marking source repository as being upgraded; clients will be unable to read from repository
+ starting in-place swap of repository data
+ replaced files will be backed up at $TESTTMP/upgradegd/.hg/upgradebackup.* (glob)
+ finalizing requirements file and making repository readable again
+ removing temporary repository $TESTTMP/upgradegd/.hg/upgrade.* (glob)
+ copy of old repository backed up at $TESTTMP/upgradegd/.hg/upgradebackup.* (glob)
+ the old repository will not be deleted; remove it to free up disk space once the upgraded repository is verified
+
+Original requirements backed up
+
+ $ cat .hg/upgradebackup.*/requires
+ dotencode
+ fncache
+ revlogv1
+ store
+
+generaldelta added to original requirements files
+
+ $ cat .hg/requires
+ dotencode
+ fncache
+ generaldelta
+ revlogv1
+ store
+
+ $ cd ..