view hgext/largefiles/wirestore.py @ 25912:cbbdd085c991
batching: migrate basic noop batching into peer.peer
"Real" batching only makes sense for wirepeers, but it greatly
simplifies the clients of peer instances if they can be ignorant to
actual batching capabilities of that peer. By moving the
not-really-batched batching code into peer.peer, all peer instances
now work with the batching API, thus simplifying users.
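For illustration, the "not-really-batched" path can be pictured roughly as below. This is a minimal sketch, not the actual peer.py code; the `future` and `localbatch` names and their exact shape are assumptions, but it shows the idea of queueing calls as futures and simply executing them one at a time on submit().

```python
# Sketch of "noop" batching (illustrative names, not the real peer.py code):
# calls are recorded as futures and simply executed one by one on submit().
class future(object):
    '''placeholder for a value that is filled in when the batch runs'''
    def set(self, value):
        self.value = value

class localbatch(object):
    '''records method calls on a peer and performs them directly on submit()'''
    def __init__(self, peer):
        self.peer = peer
        self.calls = []

    def __getattr__(self, name):
        # Instead of calling the peer method now, queue it and return a future.
        def queuecall(*args, **opts):
            resref = future()
            self.calls.append((name, args, opts, resref))
            return resref
        return queuecall

    def submit(self):
        # No wire-level combining happens here; each queued call is just
        # forwarded to the underlying peer and its result stored in the future.
        for name, args, opts, resref in self.calls:
            resref.set(getattr(self.peer, name)(*args, **opts))
```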
This leaves a couple of name forwards in wireproto.py. Originally I had
planned to clean those up, but doing so makes other bits of code that
want to use batching less clear, so I think it makes sense for the
names to stay exposed by wireproto. Specifically, almost nothing is
currently aware of peer (see largefiles.proto for an example), so
making them aware of the peer module *and* the wireproto module
seems like some abstraction leakage. I *think* the right long-term fix
would actually be to make wireproto an implementation detail that
clients wouldn't need to know about, but I don't really know what that
would entail at the moment.
As far as I'm aware, no clients of batching in third-party extensions
will need updating, which is nice icing.
| author | Augie Fackler <augie@google.com> |
| --- | --- |
| date | Wed, 05 Aug 2015 14:51:34 -0400 |
| parents | 9d33d6e0d442 |
| children | b6e71f8af5b8 |
# Copyright 2010-2011 Fog Creek Software
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''largefile store working over Mercurial's wire protocol'''

import lfutil
import remotestore

class wirestore(remotestore.remotestore):
    def __init__(self, ui, repo, remote):
        cap = remote.capable('largefiles')
        if not cap:
            raise lfutil.storeprotonotcapable([])
        storetypes = cap.split(',')
        if 'serve' not in storetypes:
            raise lfutil.storeprotonotcapable(storetypes)
        self.remote = remote
        super(wirestore, self).__init__(ui, repo, remote.url())

    def _put(self, hash, fd):
        return self.remote.putlfile(hash, fd)

    def _get(self, hash):
        return self.remote.getlfile(hash)

    def _stat(self, hashes):
        '''For each hash, return 0 if it is available, other values if not.
        It is usually 2 if the largefile is missing, but might be 1 if the
        server has a corrupted copy.'''
        # Queue one statlfile call per hash, then submit them as a batch.
        batch = self.remote.batch()
        futures = {}
        for hash in hashes:
            futures[hash] = batch.statlfile(hash)
        batch.submit()
        # Collect the resolved future values into a hash -> status mapping.
        retval = {}
        for hash in hashes:
            retval[hash] = futures[hash].value
        return retval
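As a rough caller-side sketch (a hypothetical helper, not part of the file above): any code holding a peer object can drive the same batching API that `_stat` uses and interpret the status codes described in its docstring.

```python
# Hypothetical helper (not part of wirestore.py): batch statlfile calls
# through any peer and report largefiles that are unusable on the server.
def missing_or_corrupt(remote, hashes):
    batch = remote.batch()
    futures = dict((h, batch.statlfile(h)) for h in hashes)
    batch.submit()
    # Per the _stat docstring: 0 = available, 2 = missing, 1 = corrupted copy.
    return dict((h, f.value) for h, f in futures.items() if f.value != 0)
```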