view mercurial/lock.py @ 49658:523cacdfd324

delta-find: set the default candidate chunk size to 10

I ran performance and storage tests on repositories of various sizes and
shapes for the following values of the config: 5, 10, 20, 50, 100, and
no-chunking.

The performance tests do not show any statistical impact on computation times
for large pushes and pulls. For searching for an individual delta, this can
provide a significant performance improvement with a minor degradation of
space-quality of the result (see data at the end of the commit).

For overall store size, the change:

- does not have any impact on many small repositories,
- has an observable, but very negligible, impact on most larger repositories,
- results in a small size increase (1%) for one private repository we use for
  testing, in the narrower version. We will try to get more numbers on a
  larger version of that repository to make sure nothing pathological happens.

We pick "10" as the limit, as "5" seems a bit more risky. There is room to
improve the current code, by using more aggressive filtering and better (i.e.
any) sorting of the candidates. However, this is already a large improvement
for pathological cases, with little impact in common situations.

The initial motivation for this change is to fix the performance of delta
computation for a file where the previous code ended up testing 20 000
possible candidate bases in one go, which is… slow. This affected about ½ of
the file revisions, leading to atrocious performance, especially during some
push/pull operations.

Details about individual delta finding timing:
----------------------------------------------

The vast majority of benchmark cases are unchanged; only the three below
differ. The first two do not see any impact on the final delta. The last one
sees a change in delta-size that is negligible compared to the full text size.
### data-env-vars.name       = mozilla-try-2019-02-18-zstd-sparse-revlog
  # benchmark.name           = perf-delta-find
  # benchmark.variants.rev   = manifest-snapshot-many-tries-a (revision 756096)

    ∞:   5.844783
    5:   4.473523 (-23.46%)
    10:  4.970053 (-14.97%)
    20:  5.770386  (-1.27%)
    50:  5.821358
    100: 5.834887

  MANIFESTLOG: rev=756096: (no limit)
    delta-base=301840 search-rounds=6 try-count=60
    delta-type=snapshot snap-depth=7 delta-size=179
  MANIFESTLOG: rev=756096: (limit = 10)
    delta-base=301840 search-rounds=9 try-count=51
    delta-type=snapshot snap-depth=7 delta-size=179

### data-env-vars.name       = mozilla-try-2019-02-18-zstd-sparse-revlog
  # benchmark.name           = perf-delta-find
  # benchmark.variants.rev   = manifest-snapshot-many-tries-d (revision 754060)

    ∞:   5.017663
    5:   3.655931 (-27.14%)
    10:  4.095436 (-18.38%)
    20:  4.828949  (-3.76%)
    50:  4.987574
    100: 4.994889

  MANIFESTLOG: rev=754060: (no limit)
    delta-base=301840 search-rounds=5 try-count=53
    delta-type=snapshot snap-depth=7 delta-size=179
  MANIFESTLOG: rev=754060: (limit = 10)
    delta-base=301840 search-rounds=8 try-count=45
    delta-type=snapshot snap-depth=7 delta-size=179

### data-env-vars.name       = mozilla-try-2019-02-18-zstd-sparse-revlog
  # benchmark.name           = perf-delta-find
  # bin-env-vars.hg.flavor   = rust
  # benchmark.variants.rev   = manifest-snapshot-many-tries-e (revision 693368)

    ∞:   4.869282
    5:   2.039732 (-58.11%)
    10:  2.413537 (-50.43%)
    20:  4.449639  (-8.62%)
    50:  4.865863
    100: 4.882649

  MANIFESTLOG: rev=693368:
    delta-base=693336 search-rounds=6 try-count=53
    delta-type=snapshot snap-depth=6 full-test-size=131065 delta-size=199
  MANIFESTLOG: rev=693368:
    delta-base=278023 search-rounds=5 try-count=21
    delta-type=snapshot snap-depth=4 full-test-size=131065 delta-size=278

Raw data for store size (in bytes) for various chunk size values:
------------------------------------------------------------------

  store size (bytes)   chunk size   repository
       440 134 384          5       pypy/.hg/store/
       440 134 384         10       pypy/.hg/store/
       440 134 384         20       pypy/.hg/store/
       440 134 384         50       pypy/.hg/store/
       440 134 384        100       pypy/.hg/store/
       440 134 384        ...       pypy/.hg/store/

       666 987 471          5       netbsd-xsrc-2022-11-15/.hg/store/
       666 987 471         10       netbsd-xsrc-2022-11-15/.hg/store/
       666 987 471         20       netbsd-xsrc-2022-11-15/.hg/store/
       666 987 471         50       netbsd-xsrc-2022-11-15/.hg/store/
       666 987 471        100       netbsd-xsrc-2022-11-15/.hg/store/
       666 987 471        ...       netbsd-xsrc-2022-11-15/.hg/store/

       852 844 884          5       netbsd-pkgsrc-2022-11-15/.hg/store/
       852 844 884         10       netbsd-pkgsrc-2022-11-15/.hg/store/
       852 844 884         20       netbsd-pkgsrc-2022-11-15/.hg/store/
       852 844 884         50       netbsd-pkgsrc-2022-11-15/.hg/store/
       852 844 884        100       netbsd-pkgsrc-2022-11-15/.hg/store/
       852 844 884        ...       netbsd-pkgsrc-2022-11-15/.hg/store/

     1 504 227 981          5       netbeans-2018-08-01-sparse-zstd/.hg/store/
     1 504 227 871         10       netbeans-2018-08-01-sparse-zstd/.hg/store/
     1 504 227 813         20       netbeans-2018-08-01-sparse-zstd/.hg/store/
     1 504 227 813         50       netbeans-2018-08-01-sparse-zstd/.hg/store/
     1 504 227 813        100       netbeans-2018-08-01-sparse-zstd/.hg/store/
     1 504 227 813        ...       netbeans-2018-08-01-sparse-zstd/.hg/store/

     3 875 801 068          5       netbsd-src-2022-11-15/.hg/store/
     3 875 696 767         10       netbsd-src-2022-11-15/.hg/store/
     3 875 696 757         20       netbsd-src-2022-11-15/.hg/store/
     3 875 696 653         50       netbsd-src-2022-11-15/.hg/store/
     3 875 696 653        100       netbsd-src-2022-11-15/.hg/store/
     3 875 696 653        ...       netbsd-src-2022-11-15/.hg/store/

     4 531 441 314          5       mozilla-central/.hg/store/
     4 531 435 157         10       mozilla-central/.hg/store/
     4 531 432 045         20       mozilla-central/.hg/store/
     4 531 429 119         50       mozilla-central/.hg/store/
     4 531 429 119        100       mozilla-central/.hg/store/
     4 531 429 119        ...       mozilla-central/.hg/store/

     4 875 861 390          5       mozilla-unified/.hg/store/
     4 875 855 155         10       mozilla-unified/.hg/store/
     4 875 852 027         20       mozilla-unified/.hg/store/
     4 875 848 851         50       mozilla-unified/.hg/store/
     4 875 848 851        100       mozilla-unified/.hg/store/
     4 875 848 851        ...       mozilla-unified/.hg/store/

    11 498 764 601          5       mozilla-try/.hg/store/
    11 497 968 858         10       mozilla-try/.hg/store/
    11 497 958 730         20       mozilla-try/.hg/store/
    11 497 927 156         50       mozilla-try/.hg/store/
    11 497 925 963        100       mozilla-try/.hg/store/
    11 497 923 428        ...       mozilla-try/.hg/store/

    10 047 914 031          5       private-repo
     9 969 132 101         10       private-repo
     9 944 745 015         20       private-repo
     9 939 756 703         50       private-repo
     9 939 833 016        100       private-repo
     9 939 822 035        ...       private-repo
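
For reference, the limit discussed above is exposed as a revlog storage
configuration option. Below is a minimal sketch of overriding it for a single
operation, assuming the option is spelled
"storage.revlog.delta-candidate-group-chunk-size" (the exact name may differ
between releases); the snippet simply shells out to hg with --config:

    import subprocess

    # Pull while overriding the delta-candidate chunk size for this run only.
    # The config name below is an assumption, not confirmed by this page.
    subprocess.run(
        [
            "hg",
            "pull",
            "--config",
            "storage.revlog.delta-candidate-group-chunk-size=10",
        ],
        check=True,
    )
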
author Pierre-Yves David <pierre-yves.david@octobus.net>
date Wed, 23 Nov 2022 19:08:27 +0100
parents 050dc8730858
children 5586076b8030
line source

# lock.py - simple advisory locking scheme for mercurial
#
# Copyright 2005, 2006 Olivia Mackall <olivia@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.


import contextlib
import errno
import os
import signal
import socket
import time
import warnings

from .i18n import _
from .pycompat import getattr

from . import (
    encoding,
    error,
    pycompat,
    util,
)

from .utils import procutil


def _getlockprefix():
    """Return a string which is used to differentiate pid namespaces

    It's useful to detect "dead" processes and remove stale locks with
    confidence. Typically it's just the hostname. On modern Linux, we include
    an extra pid-namespace identifier.
    """
    result = encoding.strtolocal(socket.gethostname())
    if pycompat.sysplatform.startswith(b'linux'):
        try:
            result += b'/%x' % os.stat(b'/proc/self/ns/pid').st_ino
        except (FileNotFoundError, PermissionError, NotADirectoryError):
            pass
    return result
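

# Illustrative sketch, not part of the original module: how the prefix above
# is typically combined with a pid (see lock._trylock below), so that a stale
# lock can be recognised when the prefix matches this host/namespace but the
# pid is no longer alive.
def _example_lockname():
    return b'%s:%d' % (_getlockprefix(), procutil.getpid())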


@contextlib.contextmanager
def _delayedinterrupt():
    """Block signal interrupt while doing something critical

    This makes sure that the code block wrapped by this context manager won't
    be interrupted.

    For Windows developers: it does not appear possible to guard time.sleep()
    from CTRL_C_EVENT, so please don't use time.sleep() to test whether this
    is working.
    """
    assertedsigs = []
    blocked = False
    orighandlers = {}

    def raiseinterrupt(num):
        if num == getattr(signal, 'SIGINT', None) or num == getattr(
            signal, 'CTRL_C_EVENT', None
        ):
            raise KeyboardInterrupt
        else:
            raise error.SignalInterrupt

    def catchterm(num, frame):
        if blocked:
            assertedsigs.append(num)
        else:
            raiseinterrupt(num)

    try:
        # save the original handlers first so they can be restored even if
        # setup is interrupted between signal.signal() and the
        # orighandlers[num] assignment.
        for name in [
            b'CTRL_C_EVENT',
            b'SIGINT',
            b'SIGBREAK',
            b'SIGHUP',
            b'SIGTERM',
        ]:
            num = getattr(signal, name, None)
            if num and num not in orighandlers:
                orighandlers[num] = signal.getsignal(num)
        try:
            for num in orighandlers:
                signal.signal(num, catchterm)
        except ValueError:
            pass  # in a thread? no luck

        blocked = True
        yield
    finally:
        # no simple way to reliably restore all signal handlers because
        # any loops, recursive function calls, except blocks, etc. can be
        # interrupted. so instead, make catchterm() raise interrupt.
        blocked = False
        try:
            for num, handler in orighandlers.items():
                signal.signal(num, handler)
        except ValueError:
            pass  # in a thread?

    # re-raise the interrupt exception, if any; it may itself be shadowed by
    # a new interrupt occurring while the first one is re-raised
    if assertedsigs:
        raiseinterrupt(assertedsigs[0])
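

# Illustrative sketch, not part of the original module: the intended usage
# pattern for _delayedinterrupt().  Signals delivered inside the "with" block
# are recorded instead of raising immediately, so makelock() and the
# bookkeeping that follows it cannot be separated by a KeyboardInterrupt; any
# deferred signal is re-raised once the block exits.
def _example_critical_section(vfs, lockname, filename):
    held = 0
    with _delayedinterrupt():
        vfs.makelock(lockname, filename)
        held = 1
    return held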


def trylock(ui, vfs, lockname, timeout, warntimeout, *args, **kwargs):
    """return an acquired lock or raise a LockHeld exception

    This function is responsible for issuing warnings and/or debug messages
    about the held lock while trying to acquire it."""

    def printwarning(printer, locker):
        """issue the usual "waiting on lock" message through any channel"""
        # show more details for new-style locks
        if b':' in locker:
            host, pid = locker.split(b":", 1)
            msg = _(
                b"waiting for lock on %s held by process %r on host %r\n"
            ) % (
                pycompat.bytestr(l.desc),
                pycompat.bytestr(pid),
                pycompat.bytestr(host),
            )
        else:
            msg = _(b"waiting for lock on %s held by %r\n") % (
                l.desc,
                pycompat.bytestr(locker),
            )
        printer(msg)

    l = lock(vfs, lockname, 0, *args, dolock=False, **kwargs)

    debugidx = 0 if (warntimeout and timeout) else -1
    warningidx = 0
    if not timeout:
        warningidx = -1
    elif warntimeout:
        warningidx = warntimeout

    delay = 0
    while True:
        try:
            l._trylock()
            break
        except error.LockHeld as inst:
            if delay == debugidx:
                printwarning(ui.debug, inst.locker)
            if delay == warningidx:
                printwarning(ui.warn, inst.locker)
            if timeout <= delay:
                raise error.LockHeld(
                    errno.ETIMEDOUT, inst.filename, l.desc, inst.locker
                )
            time.sleep(1)
            delay += 1

    l.delay = delay
    if l.delay:
        if 0 <= warningidx <= l.delay:
            ui.warn(_(b"got lock after %d seconds\n") % l.delay)
        else:
            ui.debug(b"got lock after %d seconds\n" % l.delay)
    if l.acquirefn:
        l.acquirefn()
    return l
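

# Illustrative sketch, not part of the original module: how a caller such as
# a repository object might use trylock().  The vfs, lock name, timeouts and
# description below are placeholders, not values taken from Mercurial itself.
def _example_trylock(ui, vfs):
    l = trylock(
        ui,
        vfs,
        b'example-lock',  # lock file name, relative to the vfs
        timeout=600,  # raise LockHeld if still unavailable after ~10 minutes
        warntimeout=10,  # start warning the user after ~10 seconds
        desc=b'example lock',
    )
    try:
        pass  # ... do the work that requires the lock ...
    finally:
        l.release()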


class lock:
    """An advisory lock held by one process to control access to a set
    of files.  Non-cooperating processes or incorrectly written scripts
    can ignore Mercurial's locking scheme and stomp all over the
    repository, so don't do that.

    Typically used via localrepository.lock() to lock the repository
    store (.hg/store/) or localrepository.wlock() to lock everything
    else under .hg/."""

    # lock is symlink on platforms that support it, file on others.

    # a symlink is used because creating the directory entry and its
    # contents is atomic, even over NFS.

    # old-style lock: symlink to pid
    # new-style lock: symlink to hostname:pid

    _host = None

    def __init__(
        self,
        vfs,
        fname,
        timeout=-1,
        releasefn=None,
        acquirefn=None,
        desc=None,
        signalsafe=True,
        dolock=True,
    ):
        self.vfs = vfs
        self.f = fname
        self.held = 0
        self.timeout = timeout
        self.releasefn = releasefn
        self.acquirefn = acquirefn
        self.desc = desc
        if signalsafe:
            self._maybedelayedinterrupt = _delayedinterrupt
        else:
            self._maybedelayedinterrupt = util.nullcontextmanager
        self.postrelease = []
        self.pid = self._getpid()
        if dolock:
            self.delay = self.lock()
            if self.acquirefn:
                self.acquirefn()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, exc_tb):
        success = all(a is None for a in (exc_type, exc_value, exc_tb))
        self.release(success=success)

    def __del__(self):
        if self.held:
            warnings.warn(
                "use lock.release instead of del lock",
                category=DeprecationWarning,
                stacklevel=2,
            )

            # ensure the lock will be removed
            # even if recursive locking did occur
            self.held = 1

        self.release()

    def _getpid(self):
        # wrapper around procutil.getpid() to make testing easier
        return procutil.getpid()

    def lock(self):
        timeout = self.timeout
        while True:
            try:
                self._trylock()
                return self.timeout - timeout
            except error.LockHeld as inst:
                if timeout != 0:
                    time.sleep(1)
                    if timeout > 0:
                        timeout -= 1
                    continue
                raise error.LockHeld(
                    errno.ETIMEDOUT, inst.filename, self.desc, inst.locker
                )

    def _trylock(self):
        if self.held:
            self.held += 1
            return
        if lock._host is None:
            lock._host = _getlockprefix()
        lockname = b'%s:%d' % (lock._host, self.pid)
        retry = 5
        while not self.held and retry:
            retry -= 1
            try:
                with self._maybedelayedinterrupt():
                    self.vfs.makelock(lockname, self.f)
                    self.held = 1
            except (OSError, IOError) as why:
                if why.errno == errno.EEXIST:
                    locker = self._readlock()
                    if locker is None:
                        continue

                    locker = self._testlock(locker)
                    if locker is not None:
                        raise error.LockHeld(
                            errno.EAGAIN,
                            self.vfs.join(self.f),
                            self.desc,
                            locker,
                        )
                else:
                    raise error.LockUnavailable(
                        why.errno, why.strerror, why.filename, self.desc
                    )

        if not self.held:
            # use empty locker to mean "busy for frequent lock/unlock
            # by many processes"
            raise error.LockHeld(
                errno.EAGAIN, self.vfs.join(self.f), self.desc, b""
            )

    def _readlock(self):
        """read lock and return its value

        Returns None if no lock exists, pid for old-style locks, and host:pid
        for new-style locks.
        """
        try:
            return self.vfs.readlock(self.f)
        except FileNotFoundError:
            return None

    def _lockshouldbebroken(self, locker):
        if locker is None:
            return False
        try:
            host, pid = locker.split(b":", 1)
        except ValueError:
            return False
        if host != lock._host:
            return False
        try:
            pid = int(pid)
        except ValueError:
            return False
        if procutil.testpid(pid):
            return False
        return True

    def _testlock(self, locker):
        if not self._lockshouldbebroken(locker):
            return locker

        # if locker dead, break lock.  must do this with another lock
        # held, or can race and break valid lock.
        try:
            with lock(self.vfs, self.f + b'.break', timeout=0):
                locker = self._readlock()
                if not self._lockshouldbebroken(locker):
                    return locker
                self.vfs.unlink(self.f)
        except error.LockError:
            return locker

    def testlock(self):
        """return id of locker if lock is valid, else None.

        With an old-style lock, we cannot tell what machine the locker is on.
        With a new-style lock, if the locker is on this machine, we can see
        whether it is alive.  If the locker is on this machine but not alive,
        we can safely break the lock.

        The lock file is only deleted when None is returned.

        """
        locker = self._readlock()
        return self._testlock(locker)

    def release(self, success=True):
        """release the lock and execute callback function if any

        If the lock has been acquired multiple times, the actual release is
        delayed to the last release call."""
        if self.held > 1:
            self.held -= 1
        elif self.held == 1:
            self.held = 0
            if self._getpid() != self.pid:
                # we forked, and are not the parent
                return
            try:
                if self.releasefn:
                    self.releasefn()
            finally:
                try:
                    self.vfs.unlink(self.f)
                except OSError:
                    pass
            # The postrelease functions typically assume the lock is not held
            # at all.
            for callback in self.postrelease:
                callback(success)
            # Prevent double usage and help clear cycles.
            self.postrelease = None
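

# Illustrative sketch, not part of the original module: direct use of the
# lock class as a context manager.  __exit__ releases the lock and passes
# success=False to postrelease callbacks if an exception escaped the block.
# The vfs, lock name and timeout are placeholders.
def _example_lock_usage(vfs):
    with lock(vfs, b'example-lock', timeout=5, desc=b'example lock') as l:
        l.postrelease.append(
            # callbacks run after the lock file is removed; ``success`` tells
            # them whether the with-block exited without an exception
            lambda success: None
        )
        # ... work protected by the lock ...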


def release(*locks):
    for lock in locks:
        if lock is not None:
            lock.release()
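

# Illustrative sketch, not part of the original module: the usual pattern for
# the helper above, releasing several locks in one finally block, innermost
# first.  Locks that were never acquired stay None and are skipped.  The
# repo.wlock()/repo.lock() calls follow the localrepository API mentioned in
# the class docstring.
def _example_release_helper(repo):
    wlock = storelock = None
    try:
        wlock = repo.wlock()
        storelock = repo.lock()
        # ... work that needs both locks ...
    finally:
        release(storelock, wlock)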