view hgext/largefiles/localstore.py @ 25912:cbbdd085c991

batching: migrate basic noop batching into peer.peer

"Real" batching only makes sense for wirepeers, but clients of peer instances are greatly simplified if they can stay ignorant of a given peer's actual batching capabilities. By moving the not-really-batched batching code into peer.peer, all peer instances now work with the batching API, which simplifies their users.

This leaves a couple of name forwards in wireproto.py. Originally I had planned to clean those up, but doing so made other bits of code that want to use batching less clear, so I think it makes sense for the names to stay exposed by wireproto. Specifically, almost nothing is currently aware of the peer module (see largefiles.proto for an example), so making clients aware of the peer module *and* the wireproto module seems like some abstraction leakage. I *think* the right long-term fix would be to make wireproto an implementation detail that clients wouldn't need to know about, but I don't really know what that would entail at the moment.

As far as I'm aware, no clients of batching in third-party extensions will need updating, which is nice icing.
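For context, a rough sketch of what a batching client looks like after this change. It assumes the batch()/submit()/future shape of the wire protocol batching API of this era and the statlfile command exposed by largefiles.proto, so read it as an illustration rather than code from the tree:

def statlfiles(remote, hashes):
    # Works against any peer now that peer.peer grows a (noop) batch().
    batch = remote.batch()
    futures = {}
    for hash in hashes:
        # Each call returns a future; nothing is sent yet.
        futures[hash] = batch.statlfile(hash)
    # A wirepeer sends a single batched request here; a local peer simply
    # runs the queued calls one by one.
    batch.submit()
    # statlfile answers 0 when the largefile is available on the remote.
    return dict((hash, futures[hash].value == 0) for hash in hashes)

On a local peer, batch() is the noop variant this change introduces, so the same client code keeps working unchanged.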
author Augie Fackler <augie@google.com>
date Wed, 05 Aug 2015 14:51:34 -0400
parents c082a4756ed7
children 40bd01be5c25
line source

# Copyright 2009-2010 Gregory P. Ward
# Copyright 2009-2010 Intelerad Medical Systems Incorporated
# Copyright 2010-2011 Fog Creek Software
# Copyright 2010-2011 Unity Technologies
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''store class for local filesystem'''

from mercurial.i18n import _

import lfutil
import basestore

class localstore(basestore.basestore):
    '''localstore first attempts to grab files out of the store in the remote
    Mercurial repository.  Failing that, it attempts to grab the files from
    the user cache.'''

    def __init__(self, ui, repo, remote):
        self.remote = remote.local()   # the localrepository behind the peer
        super(localstore, self).__init__(ui, repo, self.remote.url())

    def put(self, source, hash):
        '''Hardlink (or copy) the largefile at source into the store of the
        remote (local-disk) repository, unless it is already there.'''
        if lfutil.instore(self.remote, hash):
            return
        lfutil.link(source, lfutil.storepath(self.remote, hash))

    def exists(self, hashes):
        '''Return a dict mapping each hash to True if it is present in the
        store of the remote (local-disk) repository, False otherwise.'''
        retval = {}
        for hash in hashes:
            retval[hash] = lfutil.instore(self.remote, hash)
        return retval

    def _getfile(self, tmpfile, filename, hash):
        '''Copy the largefile identified by hash into tmpfile and return the
        hash of the copied data; raise StoreError if the file cannot be
        found locally (neither in the store nor in the user cache).'''
        path = lfutil.findfile(self.remote, hash)
        if not path:
            raise basestore.StoreError(filename, hash, self.url,
                _("can't get file locally"))
        fd = open(path, 'rb')
        try:
            return lfutil.copyandhash(fd, tmpfile)
        finally:
            fd.close()

    def _verifyfile(self, cctx, cset, contents, standin, verified):
        '''Verify the largefile referenced by standin in changeset cset;
        return True if there is a problem (missing, or corrupted when
        contents is set), False otherwise.'''
        filename = lfutil.splitstandin(standin)
        if not filename:
            return False
        fctx = cctx[standin]
        key = (filename, fctx.filenode())
        if key in verified:
            return False

        # the standin's content is the 40-hex sha1 hash of the largefile
        expecthash = fctx.data()[0:40]
        storepath, exists = lfutil.findstorepath(self.remote, expecthash)
        verified.add(key)
        if not exists:
            self.ui.warn(
                _('changeset %s: %s references missing %s\n')
                % (cset, filename, storepath))
            return True                 # failed

        if contents:
            actualhash = lfutil.hashfile(storepath)
            if actualhash != expecthash:
                self.ui.warn(
                    _('changeset %s: %s references corrupted %s\n')
                    % (cset, filename, storepath))
                return True             # failed
        return False
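
A minimal usage sketch, assuming the surrounding largefiles machinery hands us a peer whose local() is non-None; the helper name below is hypothetical and not part of this module:

# Hypothetical helper, not part of localstore.py; it only exercises methods
# defined or inherited above, and assumes remote.local() is not None.
def missinglargefiles(ui, repo, remote, hashes):
    store = localstore(ui, repo, remote)
    present = store.exists(hashes)        # {hash: True/False}
    return [hash for hash in hashes if not present[hash]]

The dict-shaped exists() mirrors what the wire-based largefiles stores return from their batched statlfile calls, which (as the changeset message above suggests) is what lets callers treat local and remote stores uniformly.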