view mercurial/pathutil.py @ 30451:41a8106789ca

util: implement zstd compression engine

Now that zstd is vendored and being built (in some configurations), we can
implement a compression engine for zstd!

The zstd engine is a little different from existing engines. Because it may
not always be present, we have to defer loading the module in case importing
it fails. We facilitate this via a cached property that holds a reference to
the module or None. The "available" method is implemented to reflect reality.

The zstd engine declares its ability to handle bundles using the "zstd"
human name and the "ZS" internal name. The latter was chosen because
internal names are 2 characters (by convention only, I think) and "ZS"
seems reasonable.

The engine, like others, supports specifying the compression level. However,
no consumers of this API yet pass in that argument. I have plans to change
that, so stay tuned.

Since all we need to do to support bundle generation with a new compression
engine is implement and register the compression engine, bundle generation
with zstd "just works!" Tests demonstrating this have been added.

How does performance of zstd for bundle generation compare? On the
mozilla-unified repo, `hg bundle --all -t <engine>-v2` yields the following
on my i7-6700K on Linux:

   engine         CPU time    bundle size   vs orig size  throughput
   none              97.0s  4,054,405,584      100.0%     41.8 MB/s
   bzip2 (l=9)      393.6s    975,343,098       24.0%     10.3 MB/s
   gzip (l=6)       184.0s  1,140,533,074       28.1%     22.0 MB/s
   zstd (l=1)       108.2s  1,119,434,718       27.6%     37.5 MB/s
   zstd (l=2)       111.3s  1,078,328,002       26.6%     36.4 MB/s
   zstd (l=3)       113.7s  1,011,823,727       25.0%     35.7 MB/s
   zstd (l=4)       116.0s  1,008,965,888       24.9%     35.0 MB/s
   zstd (l=5)       121.0s    977,203,148       24.1%     33.5 MB/s
   zstd (l=6)       131.7s    927,360,198       22.9%     30.8 MB/s
   zstd (l=7)       139.0s    912,808,505       22.5%     29.2 MB/s
   zstd (l=12)      198.1s    854,527,714       21.1%     20.5 MB/s
   zstd (l=18)      681.6s    789,750,690       19.5%      5.9 MB/s

For compression, zstd for bundle generation delivers:

* better compression than gzip with significantly less CPU utilization
* better-than-bzip2 compression ratios while still being significantly
  faster than bzip2
* the ability to aggressively tune the compression level to achieve
  significantly smaller bundles

That last point is important. With clone bundles, a server can pre-generate
a bundle file, upload it to a static file server, and redirect clients to
transparently download it during clone. The server could choose to produce a
zstd bundle with the highest compression settings possible. This would take
a very long time - an order of magnitude longer than a typical zstd bundle
generation - but the result would be hundreds of megabytes smaller! For the
clone volume we do at Mozilla, this could translate to petabytes of
bandwidth savings per year and faster clones (due to smaller transfer size).

I don't have detailed numbers to report on decompression. However, zstd
decompression is fast: >1 GB/s output throughput on this machine, even
through the Python bindings. And it can do that regardless of the
compression level of the input. By the time you have enough data to worry
about the overhead of decompression, you have plenty of other performance
concerns. zstd is a win all around. I can't wait to implement support for it
on the wire protocol and in revlogs.
author Gregory Szorc <gregory.szorc@gmail.com>
date Fri, 11 Nov 2016 01:10:07 -0800
parents 318a24b52eeb
children cfe66dcf45c0
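
As a hedged illustration of the engine described above, here is a minimal
sketch. It assumes util.py's compressionengine base class, a compengines
registry, a propertycache decorator for the deferred import, and the
vendored python-zstandard bindings (ZstdCompressor); the exact names and
signatures are assumptions, not a copy of the committed code:

    class _zstdengine(compressionengine):
        def name(self):
            return 'zstd'

        @propertycache
        def _module(self):
            # Deferred import: the vendored module may be absent in some
            # build configurations, so cache either the module or None.
            try:
                from . import zstd
                return zstd
            except ImportError:
                return None

        def available(self):
            # Advertise the engine only when the module actually imported.
            return bool(self._module)

        def bundletype(self):
            # "zstd" is the human name; "ZS" the 2-character internal name.
            return 'zstd', 'ZS'

        def compressstream(self, it, opts=None):
            opts = opts or {}
            # The level option is supported, but nothing passes it yet;
            # the default of 3 here is an assumption for the sketch.
            level = opts.get('level', 3)
            z = self._module.ZstdCompressor(level=level).compressobj()
            for chunk in it:
                data = z.compress(chunk)
                if data:
                    yield data
            yield z.flush()

    compengines.register(_zstdengine())

Once registered, the normal bundle machinery can select the engine (e.g.
`hg bundle --all -t zstd-v2`), which is why bundle generation "just works."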

from __future__ import absolute_import

import errno
import os
import posixpath
import stat

from .i18n import _
from . import (
    encoding,
    error,
    util,
)

def _lowerclean(s):
    return encoding.hfsignoreclean(s.lower())

class pathauditor(object):
    '''ensure that a filesystem path contains no banned components.
    The following properties of a path cause it to be rejected:

    - ends with a directory separator
    - is under the top-level .hg directory
    - starts at the root of a Windows drive
    - contains ".."

    More checks are also performed against the filesystem state:
    - traverses a symlink (e.g. a/symlink_here/b)
    - inside a nested repository (a callback can be used to approve
      some nested repositories, e.g., subrepositories)

    The filesystem checks are only performed when 'realfs' is set to True
    (the default). They should be disabled when we are auditing paths for
    operations on stored history. (An illustrative usage sketch follows
    this class definition.)
    '''

    def __init__(self, root, callback=None, realfs=True):
        self.audited = set()
        self.auditeddir = set()
        self.root = root
        self._realfs = realfs
        self.callback = callback
        if os.path.lexists(root) and not util.fscasesensitive(root):
            self.normcase = util.normcase
        else:
            self.normcase = lambda x: x

    def __call__(self, path):
        '''Check the relative path.
        path may contain a pattern (e.g. foodir/**.txt)'''

        path = util.localpath(path)
        normpath = self.normcase(path)
        if normpath in self.audited:
            return
        # AIX ignores "/" at end of path, others raise EISDIR.
        if util.endswithsep(path):
            raise error.Abort(_("path ends in directory separator: %s") % path)
        parts = util.splitpath(path)
        if (os.path.splitdrive(path)[0]
            or _lowerclean(parts[0]) in ('.hg', '.hg.', '')
            or os.pardir in parts):
            raise error.Abort(_("path contains illegal component: %s") % path)
        # Windows shortname aliases
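        # ("HG~1" can be an 8.3 short name aliasing a ".hg" directory, and
        # "HG8B6C" is believed to match the hash-mangled short-name form
        # Windows can generate for ".hg", so such components are rejected)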
        for p in parts:
            if "~" in p:
                first, last = p.split("~", 1)
                if last.isdigit() and first.upper() in ["HG", "HG8B6C"]:
                    raise error.Abort(_("path contains illegal component: %s")
                                     % path)
        if '.hg' in _lowerclean(path):
            lparts = [_lowerclean(p) for p in parts]
            for p in '.hg', '.hg.':
                if p in lparts[1:]:
                    pos = lparts.index(p)
                    base = os.path.join(*parts[:pos])
                    raise error.Abort(_("path '%s' is inside nested repo %r")
                                     % (path, base))

        normparts = util.splitpath(normpath)
        assert len(parts) == len(normparts)

        parts.pop()
        normparts.pop()
        prefixes = []
        # It's important that we check the path parts starting from the root.
        # This means we won't accidentally traverse a symlink into some other
        # filesystem (which is potentially expensive to access).
        for i in range(len(parts)):
            prefix = os.sep.join(parts[:i + 1])
            normprefix = os.sep.join(normparts[:i + 1])
            if normprefix in self.auditeddir:
                continue
            if self._realfs:
                self._checkfs(prefix, path)
            prefixes.append(normprefix)

        self.audited.add(normpath)
        # only add prefixes to the cache after checking everything: we don't
        # want to add "foo/bar/baz" before checking if there's a "foo/.hg"
        self.auditeddir.update(prefixes)

    def _checkfs(self, prefix, path):
        """raise exception if a file system backed check fails"""
        curpath = os.path.join(self.root, prefix)
        try:
            st = os.lstat(curpath)
        except OSError as err:
            # EINVAL can be raised as invalid path syntax under win32.
            # These must be ignored so that patterns can still be checked.
            if err.errno not in (errno.ENOENT, errno.ENOTDIR, errno.EINVAL):
                raise
        else:
            if stat.S_ISLNK(st.st_mode):
                msg = _('path %r traverses symbolic link %r') % (path, prefix)
                raise error.Abort(msg)
            elif (stat.S_ISDIR(st.st_mode) and
                  os.path.isdir(os.path.join(curpath, '.hg'))):
                if not self.callback or not self.callback(curpath):
                    msg = _("path '%s' is inside nested repo %r")
                    raise error.Abort(msg % (path, prefix))

    def check(self, path):
        try:
            self(path)
            return True
        except (OSError, error.Abort):
            return False
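
# Illustrative usage (a sketch based on the class above, not part of the
# module's API; the root path is hypothetical):
#
#   auditor = pathauditor('/path/to/repo')
#   auditor('src/module.py')        # returns silently when the path is ok
#   auditor('../escape')            # raises error.Abort (contains "..")
#   auditor.check('sub/.hg/data')   # returns False instead of raising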

def canonpath(root, cwd, myname, auditor=None):
    '''return the canonical path of myname, given cwd and root; see the
    illustrative examples following this function'''
    if util.endswithsep(root):
        rootsep = root
    else:
        rootsep = root + os.sep
    name = myname
    if not os.path.isabs(name):
        name = os.path.join(root, cwd, name)
    name = os.path.normpath(name)
    if auditor is None:
        auditor = pathauditor(root)
    if name != rootsep and name.startswith(rootsep):
        name = name[len(rootsep):]
        auditor(name)
        return util.pconvert(name)
    elif name == root:
        return ''
    else:
        # Determine whether `name' is in the hierarchy at or beneath `root',
        # by iterating name=dirname(name) until that causes no change (can't
        # check name == '/', because that doesn't work on windows). The list
        # `rel' holds the reversed list of components making up the relative
        # file name we want.
        rel = []
        while True:
            try:
                s = util.samefile(name, root)
            except OSError:
                s = False
            if s:
                if not rel:
                    # name was actually the same as root (maybe a symlink)
                    return ''
                rel.reverse()
                name = os.path.join(*rel)
                auditor(name)
                return util.pconvert(name)
            dirname, basename = util.split(name)
            rel.append(basename)
            if dirname == name:
                break
            name = dirname

        # A common mistake is to use -R, but specify a file relative to the repo
        # instead of cwd.  Detect that case, and provide a hint to the user.
        hint = None
        try:
            if cwd != root:
                canonpath(root, root, myname, auditor)
                hint = (_("consider using '--cwd %s'")
                        % os.path.relpath(root, cwd))
        except error.Abort:
            pass

        raise error.Abort(_("%s not under root '%s'") % (myname, root),
                         hint=hint)
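
# Illustrative examples for canonpath (hypothetical root/cwd values):
#   canonpath('/repo', '/repo/src', 'module.py')   -> 'src/module.py'
#   canonpath('/repo', '/repo', '/repo/README')    -> 'README'
#   canonpath('/repo', '/repo', '/elsewhere/f')    raises error.Abort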

def normasprefix(path):
    '''normalize the specified path as path prefix

    Returned value can be used safely for "p.startswith(prefix)",
    "p[len(prefix):]", and so on.

    For efficiency, this expects the "path" argument to be already
    normalized by "os.path.normpath", "os.path.realpath", and so on.

    See also issue3033 for details about the need for this function.

    >>> normasprefix('/foo/bar').replace(os.sep, '/')
    '/foo/bar/'
    >>> normasprefix('/').replace(os.sep, '/')
    '/'
    '''
    d, p = os.path.splitdrive(path)
    if len(p) != len(os.sep):
        return path + os.sep
    else:
        return path

# forward two functions from posixpath that do what we need, but we'd
# rather not let our internals know that we're thinking in posix terms
# - instead we'll let them be oblivious.
join = posixpath.join
dirname = posixpath.dirname