view hgext/lfs/__init__.py @ 49777:e1953a34c110

bundle: emit full snapshot as is, without doing a redelta

With the new `forced` delta-reuse policy, it becomes important to be able to send full snapshots where full snapshots are needed. Otherwise, the fallback delta will simply be used on the client side… creating monstrous delta chains, since revisions that are meant to reset a delta chain that has become too complex simply add a new full delta-tree on the leaf of another one. In the `non-forced` cases, clients process full snapshots from the bundle differently from deltas, so the client will still try to convert the full snapshot into a delta if possible. So this will not lead to a pathological storage explosion.

I have considered making this configurable, but the impact seems limited enough that it does not seem to be worth it. Especially with the current sparse-revlog format, which uses a "delta tree" with multiple snapshot levels, full snapshots are much less frequent and not that different from the other intermediate snapshots we are already sending over the wire anyway.

CPU wise, this helps the bundling side a little, as it will not need to reconstruct revisions and compute deltas. The unbundling side might save a tiny amount of CPU as it won't need to reconstruct the delta base to reconstruct the revision's full text. This is only slightly visible in some of the benchmarks and has no real impact on most of them.
### data-env-vars.name = pypy-2018-08-01-zstd-sparse-revlog

# benchmark.name = perf-bundle
# benchmark.variants.revs = last-40000
before:          11.467186 seconds
just-emit-full:  11.190576 seconds (-2.41%)
with-pull-force: 11.041091 seconds (-3.72%)

# benchmark.name = perf-unbundle
# benchmark.variants.revs = last-40000
before:          16.744862 seconds
just-emit-full:  16.561036 seconds (-1.10%)
with-pull-force: 16.389344 seconds (-2.12%)

# benchmark.name = pull
# benchmark.variants.revs = last-40000
before:          26.870569 seconds
just-emit-full:  26.391188 seconds (-1.78%)
with-pull-force: 25.633184 seconds (-4.60%)

Space wise (so network-wise) the impact is fairly small when taking compression into account. Below are the sizes of `hg bundle --all` for a handful of benchmark repositories (with bzip compression, with zstd compression, and without compression). This shows a small increase in bundle size, but nothing really significant, except maybe for mozilla-try (+12%), which nobody really pulls large chunks of anyway. Mozilla-try is also the repository that benefits the most from not having to recompute deltas client side.
### mercurial
bzip-before:  26 406 342 bytes
bzip-after:   26 691 543 bytes  +1.08%
zstd-before:  27 918 645 bytes
zstd-after:   28 075 896 bytes  +0.56%
none-before:  98 675 601 bytes
none-after:  100 411 237 bytes  +1.76%

### pypy
bzip-before: 201 295 752 bytes
bzip-after:  209 780 282 bytes  +4.21%
zstd-before: 202 974 795 bytes
zstd-after:  205 165 780 bytes  +1.08%
none-before: 871 070 261 bytes
none-after:  993 595 057 bytes  +14.07%

### netbeans
bzip-before:   601 314 330 bytes
bzip-after:    614 246 241 bytes  +2.15%
zstd-before:   604 745 136 bytes
zstd-after:    615 497 705 bytes  +1.78%
none-before: 3 338 238 571 bytes
none-after:  3 439 422 535 bytes  +3.03%

### mozilla-central
bzip-before: 1 493 006 921 bytes
bzip-after:  1 549 650 570 bytes  +3.79%
zstd-before: 1 481 910 102 bytes
zstd-after:  1 513 052 415 bytes  +2.10%
none-before: 6 535 929 910 bytes
none-after:  7 010 191 342 bytes  +7.26%

### mozilla-try
bzip-before:  6 583 425 999 bytes
bzip-after:   7 423 536 928 bytes  +12.76%
zstd-before:  6 021 009 212 bytes
zstd-after:   6 674 922 420 bytes  +10.86%
none-before: 22 954 739 558 bytes
none-after:  26 013 854 771 bytes  +13.32%
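The growth percentages above can be recomputed directly from the raw byte counts. A quick sketch (the byte counts are taken from the uncompressed "none" rows of the table):

```python
# Recompute the uncompressed ("none") bundle growth percentages from the
# byte counts listed in the table above.
sizes = {
    # repo: (none-before, none-after), in bytes
    "mercurial": (98_675_601, 100_411_237),
    "pypy": (871_070_261, 993_595_057),
    "netbeans": (3_338_238_571, 3_439_422_535),
    "mozilla-central": (6_535_929_910, 7_010_191_342),
    "mozilla-try": (22_954_739_558, 26_013_854_771),
}

def growth(before, after):
    """Size increase as a percentage of the original size."""
    return (after - before) * 100.0 / before

for repo, (before, after) in sizes.items():
    print("%-16s %+.2f%%" % (repo, growth(before, after)))
```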
author Pierre-Yves David <pierre-yves.david@octobus.net>
date Wed, 07 Dec 2022 20:12:23 +0100
parents 1672c5af1271
children dde4b55a0785

# lfs - hash-preserving large file support using Git-LFS protocol
#
# Copyright 2017 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""lfs - large file support (EXPERIMENTAL)

This extension allows large files to be tracked outside of the normal
repository storage and stored on a centralized server, similar to the
``largefiles`` extension.  The ``git-lfs`` protocol is used when
communicating with the server, so existing git infrastructure can be
harnessed.  Even though the files are stored outside of the repository,
they are still integrity checked in the same manner as normal files.

The files stored outside of the repository are downloaded on demand,
which reduces the time to clone, and possibly the local disk usage.
This changes fundamental workflows in a DVCS, so careful thought
should be given before deploying it.  :hg:`convert` can be used to
convert LFS repositories to normal repositories that no longer
require this extension, and do so without changing the commit hashes.
This allows the extension to be disabled if the centralized workflow
becomes burdensome.  However, the pre and post convert clones will
not be able to communicate with each other unless the extension is
enabled on both.

To start a new repository, or to add LFS files to an existing one, just
create an ``.hglfs`` file as described below in the root directory of
the repository.  Typically, this file should be put under version
control, so that the settings will propagate to other repositories with
push and pull.  During any commit, Mercurial will consult this file to
determine if an added or modified file should be stored externally.  The
type of storage depends on the characteristics of the file at each
commit.  A file that is near a size threshold may switch back and forth
between LFS and normal storage, as needed.

Alternately, both normal repositories and largefile controlled
repositories can be converted to LFS by using :hg:`convert` and the
``lfs.track`` config option described below.  The ``.hglfs`` file
should then be created and added, to control subsequent LFS selection.
The hashes are also unchanged in this case.  The LFS and non-LFS
repositories can be distinguished because the LFS repository will
abort any command if this extension is disabled.

Committed LFS files are held locally, until the repository is pushed.
Prior to pushing the normal repository data, the LFS files that are
tracked by the outgoing commits are automatically uploaded to the
configured central server.  No LFS files are transferred on
:hg:`pull` or :hg:`clone`.  Instead, the files are downloaded on
demand as they need to be read, if a cached copy cannot be found
locally.  Both committing and downloading an LFS file will link the
file to a usercache, to speed up future access.  See the `usercache`
config setting described below.

The extension reads its configuration from a versioned ``.hglfs``
configuration file found in the root of the working directory. The
``.hglfs`` file uses the same syntax as all other Mercurial
configuration files. It uses a single section, ``[track]``.

The ``[track]`` section specifies which files are stored as LFS (or
not). Each line is keyed by a file pattern, with a predicate value.
The first file pattern match is used, so put more specific patterns
first.  The available predicates are ``all()``, ``none()``, and
``size()``. See "hg help filesets.size" for the latter.

Example versioned ``.hglfs`` file::

  [track]
  # No Makefile or python file, anywhere, will be LFS
  **Makefile = none()
  **.py = none()

  **.zip = all()
  **.exe = size(">1MB")

  # Catchall for everything not matched above
  ** = size(">10MB")

Configs::

    [lfs]
    # Remote endpoint. Multiple protocols are supported:
    # - http(s)://user:pass@example.com/path
    #   git-lfs endpoint
    # - file:///tmp/path
    #   local filesystem, usually for testing
    # if unset, lfs will assume the remote repository also handles blob storage
    # for http(s) URLs.  Otherwise, lfs will prompt to set this when it must
    # use this value.
    # (default: unset)
    url = https://example.com/repo.git/info/lfs

    # Which files to track in LFS.  Path tests are "**.extname" for file
    # extensions, and "path:under/some/directory" for path prefix.  Both
    # are relative to the repository root.
    # File size can be tested with the "size()" fileset, and tests can be
    # joined with fileset operators.  (See "hg help filesets.operators".)
    #
    # Some examples:
    # - all()                       # everything
    # - none()                      # nothing
    # - size(">20MB")               # larger than 20MB
    # - !**.txt                     # anything not a *.txt file
    # - **.zip | **.tar.gz | **.7z  # some types of compressed files
    # - path:bin                    # files under "bin" in the project root
    # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
    #     | (path:bin & !path:/bin/README) | size(">1GB")
    # (default: none())
    #
    # This is ignored if there is a tracked '.hglfs' file, and this setting
    # will eventually be deprecated and removed.
    track = size(">10M")

    # how many times to retry before giving up on transferring an object
    retry = 5

    # the local directory to store lfs files for sharing across local clones.
    # If not set, the cache is located in an OS specific cache location.
    usercache = /path/to/global/cache
"""


import sys

from mercurial.i18n import _
from mercurial.node import bin

from mercurial import (
    bundlecaches,
    config,
    context,
    error,
    extensions,
    exthelper,
    filelog,
    filesetlang,
    localrepo,
    logcmdutil,
    minifileset,
    pycompat,
    revlog,
    scmutil,
    templateutil,
    util,
)

from mercurial.interfaces import repository

from . import (
    blobstore,
    wireprotolfsserver,
    wrapper,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = b'ships-with-hg-core'

eh = exthelper.exthelper()
eh.merge(wrapper.eh)
eh.merge(wireprotolfsserver.eh)

cmdtable = eh.cmdtable
configtable = eh.configtable
extsetup = eh.finalextsetup
uisetup = eh.finaluisetup
filesetpredicate = eh.filesetpredicate
reposetup = eh.finalreposetup
templatekeyword = eh.templatekeyword

eh.configitem(
    b'experimental',
    b'lfs.serve',
    default=True,
)
eh.configitem(
    b'experimental',
    b'lfs.user-agent',
    default=None,
)
eh.configitem(
    b'experimental',
    b'lfs.disableusercache',
    default=False,
)
eh.configitem(
    b'experimental',
    b'lfs.worker-enable',
    default=True,
)

eh.configitem(
    b'lfs',
    b'url',
    default=None,
)
eh.configitem(
    b'lfs',
    b'usercache',
    default=None,
)
# Deprecated
eh.configitem(
    b'lfs',
    b'threshold',
    default=None,
)
eh.configitem(
    b'lfs',
    b'track',
    default=b'none()',
)
eh.configitem(
    b'lfs',
    b'retry',
    default=5,
)

lfsprocessor = (
    wrapper.readfromstore,
    wrapper.writetostore,
    wrapper.bypasscheckhash,
)


def featuresetup(ui, supported):
    # don't die on seeing a repo with the lfs requirement
    supported |= {b'lfs'}


@eh.uisetup
def _uisetup(ui):
    localrepo.featuresetupfuncs.add(featuresetup)


@eh.reposetup
def _reposetup(ui, repo):
    # Nothing to do with a remote repo
    if not repo.local():
        return

    repo.svfs.lfslocalblobstore = blobstore.local(repo)
    repo.svfs.lfsremoteblobstore = blobstore.remote(repo)

    class lfsrepo(repo.__class__):
        @localrepo.unfilteredmethod
        def commitctx(self, ctx, error=False, origctx=None):
            repo.svfs.options[b'lfstrack'] = _trackedmatcher(self)
            return super(lfsrepo, self).commitctx(ctx, error, origctx=origctx)

    repo.__class__ = lfsrepo

    if b'lfs' not in repo.requirements:

        def checkrequireslfs(ui, repo, **kwargs):
            with repo.lock():
                if b'lfs' in repo.requirements:
                    return 0

                last = kwargs.get('node_last')
                if last:
                    s = repo.set(b'%n:%n', bin(kwargs['node']), bin(last))
                else:
                    s = repo.set(b'%n', bin(kwargs['node']))
                match = repo._storenarrowmatch
                for ctx in s:
                    # TODO: is there a way to just walk the files in the commit?
                    if any(
                        ctx[f].islfs()
                        for f in ctx.files()
                        if f in ctx and match(f)
                    ):
                        repo.requirements.add(b'lfs')
                        repo.features.add(repository.REPO_FEATURE_LFS)
                        scmutil.writereporequirements(repo)
                        repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush)
                        break

        ui.setconfig(b'hooks', b'commit.lfs', checkrequireslfs, b'lfs')
        ui.setconfig(
            b'hooks', b'pretxnchangegroup.lfs', checkrequireslfs, b'lfs'
        )
    else:
        repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush)


def _trackedmatcher(repo):
    """Return a function (path, size) -> bool indicating whether or not to
    track a given file with lfs."""
    if not repo.wvfs.exists(b'.hglfs'):
        # No '.hglfs' in wdir.  Fallback to config for now.
        trackspec = repo.ui.config(b'lfs', b'track')

        # deprecated config: lfs.threshold
        threshold = repo.ui.configbytes(b'lfs', b'threshold')
        if threshold:
            filesetlang.parse(trackspec)  # make sure syntax errors are confined
            trackspec = b"(%s) | size('>%d')" % (trackspec, threshold)

        return minifileset.compile(trackspec)

    data = repo.wvfs.tryread(b'.hglfs')
    if not data:
        return lambda p, s: False

    # Parse errors here will abort with a message that points to the .hglfs file
    # and line number.
    cfg = config.config()
    cfg.parse(b'.hglfs', data)

    try:
        rules = [
            (minifileset.compile(pattern), minifileset.compile(rule))
            for pattern, rule in cfg.items(b'track')
        ]
    except error.ParseError as e:
        # The original exception gives no indicator that the error is in the
        # .hglfs file, so add that.

        # TODO: See if the line number of the file can be made available.
        raise error.Abort(_(b'parse error in .hglfs: %s') % e)

    def _match(path, size):
        for pat, rule in rules:
            if pat(path, size):
                return rule(path, size)

        return False

    return _match
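The first-match-wins evaluation in ``_match`` above can be illustrated with a standalone sketch. Here ``fnmatch`` patterns and plain lambdas are simplified stand-ins for ``minifileset``-compiled rules, not the real compiler:

```python
import fnmatch

# Standalone sketch of the '[track]' evaluation order: the first pattern
# that matches a path decides, and its predicate gives the final answer.
rules = [
    ("*.py", lambda path, size: False),           # like: **.py = none()
    ("*.zip", lambda path, size: True),           # like: **.zip = all()
    ("*", lambda path, size: size > 10_000_000),  # like: ** = size(">10MB")
]

def tracked(path, size):
    """Return whether (path, size) should go to LFS storage."""
    for pattern, predicate in rules:
        if fnmatch.fnmatch(path, pattern):
            return predicate(path, size)
    return False
```

Note that ``tracked("big.py", 50_000_000)`` is ``False``: even though the catch-all size rule would match, the more specific ``*.py`` pattern matches first and wins.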


# Called by remotefilelog
def wrapfilelog(filelog):
    wrapfunction = extensions.wrapfunction

    wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
    wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
    wrapfunction(filelog, 'size', wrapper.filelogsize)


@eh.wrapfunction(localrepo, b'resolverevlogstorevfsoptions')
def _resolverevlogstorevfsoptions(orig, ui, requirements, features):
    opts = orig(ui, requirements, features)
    for name, module in extensions.extensions(ui):
        if module is sys.modules[__name__]:
            if revlog.REVIDX_EXTSTORED in opts[b'flagprocessors']:
                msg = (
                    _(b"cannot register multiple processors on flag '%#x'.")
                    % revlog.REVIDX_EXTSTORED
                )
                raise error.Abort(msg)

            opts[b'flagprocessors'][revlog.REVIDX_EXTSTORED] = lfsprocessor
            break

    return opts
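The duplicate-registration check above follows a common registry-guard pattern: refuse to install a processor on a flag bit that already has one. A minimal standalone sketch (the flag value and registry here are illustrative stand-ins for ``revlog.REVIDX_EXTSTORED`` and ``opts[b'flagprocessors']``):

```python
class DuplicateFlagError(Exception):
    pass

# Illustrative flag value and registry, standing in for the real
# revlog constant and the vfs options dict.
REVIDX_EXTSTORED = 1 << 13
flagprocessors = {}

def register_processor(flag, processor):
    """Install a (read, write, check) processor tuple for a revlog flag,
    refusing to overwrite an existing registration."""
    if flag in flagprocessors:
        raise DuplicateFlagError(
            "cannot register multiple processors on flag '%#x'" % flag
        )
    flagprocessors[flag] = processor
```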


@eh.extsetup
def _extsetup(ui):
    wrapfilelog(filelog.filelog)

    context.basefilectx.islfs = wrapper.filectxislfs

    scmutil.fileprefetchhooks.add(b'lfs', wrapper._prefetchfiles)

    # Make bundle choose changegroup3 instead of changegroup2. This affects
    # "hg bundle" command. Note: it does not cover all bundle formats like
    # "packed1". Using "packed1" with lfs will likely cause trouble.
    bundlecaches._bundlespeccontentopts[b"v2"][b"cg.version"] = b"03"


@eh.filesetpredicate(b'lfs()')
def lfsfileset(mctx, x):
    """File that uses LFS storage."""
    # i18n: "lfs" is a keyword
    filesetlang.getargs(x, 0, 0, _(b"lfs takes no arguments"))
    ctx = mctx.ctx

    def lfsfilep(f):
        return wrapper.pointerfromctx(ctx, f, removed=True) is not None

    return mctx.predicate(lfsfilep, predrepr=b'<lfs>')


@eh.templatekeyword(b'lfs_files', requires={b'ctx'})
def lfsfiles(context, mapping):
    """List of strings. All files modified, added, or removed by this
    changeset."""
    ctx = context.resource(mapping, b'ctx')

    pointers = wrapper.pointersfromctx(ctx, removed=True)  # {path: pointer}
    files = sorted(pointers.keys())

    def pointer(v):
        # In the file spec, version is first and the other keys are sorted.
        sortkeyfunc = lambda x: (x[0] != b'version', x)
        items = sorted(pointers[v].items(), key=sortkeyfunc)
        return util.sortdict(items)

    makemap = lambda v: {
        b'file': v,
        b'lfsoid': pointers[v].oid() if pointers[v] else None,
        b'lfspointer': templateutil.hybriddict(pointer(v)),
    }

    # TODO: make the separator ', '?
    f = templateutil._showcompatlist(context, mapping, b'lfs_file', files)
    return templateutil.hybrid(f, files, makemap, pycompat.identity)
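The ``sortkeyfunc`` above relies on Python tuple comparison: ``False`` sorts before ``True``, so the ``version`` key lands first while the remaining keys stay in lexicographic order. A small sketch with made-up pointer items:

```python
# Tuple sort keys: (x[0] != b'version', x) puts the 'version' entry first
# (False < True), then sorts the remaining keys lexicographically.
items = [
    (b'size', b'12'),
    (b'oid', b'sha256:abc'),
    (b'version', b'https://git-lfs.github.com/spec/v1'),
]
ordered = sorted(items, key=lambda x: (x[0] != b'version', x))
print([k for k, _ in ordered])  # → [b'version', b'oid', b'size']
```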


@eh.command(
    b'debuglfsupload',
    [(b'r', b'rev', [], _(b'upload large files introduced by REV'))],
)
def debuglfsupload(ui, repo, **opts):
    """upload lfs blobs added by the working copy parent or given revisions"""
    revs = opts.get('rev', [])
    pointers = wrapper.extractpointers(repo, logcmdutil.revrange(repo, revs))
    wrapper.uploadblobs(repo, pointers)


@eh.wrapcommand(
    b'verify',
    opts=[(b'', b'no-lfs', None, _(b'skip missing lfs blob content'))],
)
def verify(orig, ui, repo, **opts):
    skipflags = repo.ui.configint(b'verify', b'skipflags')
    no_lfs = opts.pop('no_lfs')

    if skipflags:
        # --lfs overrides the config bit, if set.
        if no_lfs is False:
            skipflags &= ~repository.REVISION_FLAG_EXTSTORED
    else:
        skipflags = 0

    if no_lfs is True:
        skipflags |= repository.REVISION_FLAG_EXTSTORED

    with ui.configoverride({(b'verify', b'skipflags'): skipflags}):
        return orig(ui, repo, **opts)
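The ``skipflags`` handling in the verify wrapper above is plain bit masking: ``&= ~FLAG`` clears a bit from a configured mask, ``|= FLAG`` sets it. A standalone sketch of that logic, using an illustrative value in place of ``repository.REVISION_FLAG_EXTSTORED``:

```python
# Illustrative flag value standing in for repository.REVISION_FLAG_EXTSTORED.
REVISION_FLAG_EXTSTORED = 1 << 13

def adjust_skipflags(skipflags, no_lfs):
    """Mirror the wrapper's logic: an explicit no_lfs=False clears the
    EXTSTORED bit from a configured mask; no_lfs=True sets it."""
    if skipflags:
        if no_lfs is False:
            skipflags &= ~REVISION_FLAG_EXTSTORED
    else:
        skipflags = 0
    if no_lfs is True:
        skipflags |= REVISION_FLAG_EXTSTORED
    return skipflags
```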