view hgext/remotefilelog/extutil.py @ 42050:03f6480bfdda
unshelve: disable unshelve during merge (issue5123)
As stated in issue5123, unshelve can destroy the second parent of the
context when it is run with an uncommitted merge. This patch makes
unshelve abort when called with an uncommitted merge.
See how shelve.mergefiles works. The commit structure looks like this:
```
... -> pctx -> tmpwctx -> shelvectx
              /
             /
      second
   merge parent

pctx      = parent before merging the working context (first merge parent)
tmpwctx   = committed working directory after the merge (with two parents)
shelvectx = shelved context
```
shelve.mergefiles first updates to pctx, then reverts the files to shelvectx on top of pctx with:
```
cmdutil.revert(ui, repo, shelvectx, repo.dirstate.parents(),
               *pathtofiles(repo, files),
               **{'no_backup': True})
```
Reverting tmpwctx files that were merged in from the second parent makes them
show up as added, because they are not present in pctx.
Changing this revert operation is crucial to restoring the parents after unshelve.
This is a complicated issue, and since it does not fix a regression, unshelve
during an uncommitted merge is simply aborted for the time being.
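For illustration, here is a minimal sketch of the kind of guard this change describes. The helper name, its placement, and the exact error message are assumptions; the real change lives in hgext/shelve.py and may be worded differently.

```python
# Hypothetical helper sketching the guard described above: refuse to run
# unshelve while the working directory has two parents, i.e. while an
# uncommitted merge is in progress.
from mercurial import error
from mercurial.i18n import _


def abortonuncommittedmerge(repo):
    # workingctx reports two parents only during an uncommitted merge
    if len(repo[None].parents()) > 1:
        raise error.Abort(_('cannot unshelve while merging'))
```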
(Details taken from http://mercurial.808500.n3.nabble.com/PATCH-V3-shelve-restore-parents-after-unshelve-issue5123-tt4036858.html#a4037408)
Differential Revision: https://phab.mercurial-scm.org/D6169
| author   | Navaneeth Suresh <navaneeths1998@gmail.com> |
| ---      | ---                                         |
| date     | Mon, 25 Mar 2019 12:33:41 +0530             |
| parents  | 3fbfbc8c9f82                                |
| children |                                             |
```python
# extutil.py - useful utility methods for extensions
#
# Copyright 2016 Facebook
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import contextlib
import errno
import os
import time

from mercurial import (
    error,
    lock as lockmod,
    util,
    vfs as vfsmod,
)


@contextlib.contextmanager
def flock(lockpath, description, timeout=-1):
    """A flock based lock object. Currently it is always non-blocking.

    Note that since it is flock based, you can accidentally take it multiple
    times within one process and the first one to be released will release all
    of them. So the caller needs to be careful to not create more than one
    instance per lock.
    """

    # best effort lightweight lock
    try:
        import fcntl
        fcntl.flock
    except ImportError:
        # fallback to Mercurial lock
        vfs = vfsmod.vfs(os.path.dirname(lockpath))
        with lockmod.lock(vfs, os.path.basename(lockpath), timeout=timeout):
            yield
        return

    # make sure lock file exists
    util.makedirs(os.path.dirname(lockpath))
    with open(lockpath, 'a'):
        pass

    lockfd = os.open(lockpath, os.O_RDONLY, 0o664)
    start = time.time()
    while True:
        try:
            fcntl.flock(lockfd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            break
        except IOError as ex:
            if ex.errno == errno.EAGAIN:
                if timeout != -1 and time.time() - start > timeout:
                    raise error.LockHeld(errno.EAGAIN, lockpath, description,
                                         '')
                else:
                    time.sleep(0.05)
                    continue
            raise

    try:
        yield
    finally:
        fcntl.flock(lockfd, fcntl.LOCK_UN)
        os.close(lockfd)
```
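For context, the flock helper above is used as a context manager. A hypothetical caller might look like the sketch below; the lock path, cache file, and update_shared_cache helper are made up for illustration.

```python
import os

from hgext.remotefilelog import extutil


def update_shared_cache(cachepath):
    # Hypothetical critical section: serialize rewrites of a shared cache
    # file across processes, waiting up to 10 seconds for the lock.
    with extutil.flock(cachepath + '.lock', 'shared cache update',
                       timeout=10):
        with open(cachepath, 'w') as f:
            f.write('refreshed\n')


update_shared_cache(os.path.join('/tmp', 'shared-cache'))
```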