mercurial/peer.py @ 35767:5f5fb279fd39
streamclone: also stream caches to the client
When stream clone is used over bundle2, relevant cache files are also streamed.
This is expected to be a massive performance win for clone since no important
cache will have to be recomputed.
Some performance numbers:
(All times are wall-clock seconds; two runs per case. A reproduction sketch follows the numbers.)
# Mozilla-Central
## Clone over ssh over LAN
V1 streaming: 234.3 239.6
V2 streaming: 248.4 243.7
## Clone over ssh over Internet
V1 streaming: 175.5 110.9
V2 streaming: 109.1 111.0
## Clone over HTTP over LAN
V1 streaming: 105.3 105.6
V2 streaming: 112.7 111.4
## Clone over HTTP over Internet
V1 streaming: 105.6 114.6
V2 streaming: 226.7 225.9
## Hg tags
V1 streaming (no cache): 1.084 1.071
V2 streaming (with cache): 0.312 0.325
## Hg branches
V1 streaming (no cache): 14.047 14.148
V2 streaming (with cache): 0.312 0.333
# Pypy
## Clone over ssh over Internet
V1 streaming: 29.4 30.1
V2 streaming: 31.2 30.1
## Clone over HTTP over Internet
V1 streaming: 29.7 29.7
V2 streaming: 75.2 72.9
(Since ssh and LAN clones are not affected, there seems to be an issue with how we
read/write the HTTP stream on connections with latency, unrelated to the format.)
## Hg tags
V1 streaming (no cache): 1.752 1.664
V2 streaming (with cache): 0.274 0.260
## Hg branches
V1 streaming (no cache): 4.469 4.728
V2 streaming (with cache): 0.318 0.321
# Private repository
* 500K revisions
* 11K topological heads
* 28K branch heads
## Hg tags
no cache: 1543.332
with cache: 4.900
## Hg branches
no cache: 91.828
with cache: 2.955
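
A rough sketch of how the post-clone timings above could be gathered. The repository
URL and destination directory are placeholders, and the knob that selects V1 vs. V2
streaming is deliberately left out; only hg clone --uncompressed, hg tags and
hg branches are standard commands.

import subprocess
import time

def timed(cmd, cwd=None):
    # Run a command and return its wall-clock duration in seconds.
    start = time.time()
    subprocess.check_call(cmd, cwd=cwd)
    return time.time() - start

# Placeholder URL; selecting V1 vs. V2 streaming is not shown here.
clone = timed(['hg', 'clone', '--uncompressed',
               'https://hg.example.org/mozilla-central', 'mozilla-central'])
# With V2 streaming these are fast because the tags and branchmap caches
# were transferred during the clone instead of being recomputed.
tags = timed(['hg', 'tags'], cwd='mozilla-central')
branches = timed(['hg', 'branches'], cwd='mozilla-central')
print('clone: %.1fs  tags: %.3fs  branches: %.3fs' % (clone, tags, branches))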
author: Boris Feld <boris.feld@octobus.net>
date: Thu, 18 Jan 2018 00:50:12 +0100
parents: 115efdd97088
children: (none)
# peer.py - repository base classes for mercurial
#
# Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

from . import (
    error,
    pycompat,
    util,
)

# abstract batching support

class future(object):
    '''placeholder for a value to be set later'''
    def set(self, value):
        if util.safehasattr(self, 'value'):
            raise error.RepoError("future is already set")
        self.value = value

class batcher(object):
    '''base class for batches of commands submittable in a single request

    All methods invoked on instances of this class are simply queued and
    return a future for the result. Once you call submit(), all the queued
    calls are performed and the results set in their respective futures.
    '''
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def call(*args, **opts):
            resref = future()
            # Please don't invent non-ascii method names, or you will
            # give core hg a very sad time.
            self.calls.append((name.encode('ascii'), args, opts, resref,))
            return resref
        return call
    def submit(self):
        raise NotImplementedError()

class iterbatcher(batcher):

    def submit(self):
        raise NotImplementedError()

    def results(self):
        raise NotImplementedError()

class localiterbatcher(iterbatcher):
    def __init__(self, local):
        super(iterbatcher, self).__init__()
        self.local = local

    def submit(self):
        # submit for a local iter batcher is a noop
        pass

    def results(self):
        for name, args, opts, resref in self.calls:
            resref.set(getattr(self.local, name)(*args, **opts))
            yield resref.value

def batchable(f):
    '''annotation for batchable methods

    Such methods must implement a coroutine as follows:

    @batchable
    def sample(self, one, two=None):
        # Build list of encoded arguments suitable for your wire protocol:
        encargs = [('one', encode(one),), ('two', encode(two),)]
        # Create future for injection of encoded result:
        encresref = future()
        # Return encoded arguments and future:
        yield encargs, encresref
        # Assuming the future to be filled with the result from the batched
        # request now. Decode it:
        yield decode(encresref.value)

    The decorator returns a function which wraps this coroutine as a plain
    method, but adds the original method as an attribute called "batchable",
    which is used by remotebatch to split the call into separate encoding and
    decoding phases.
    '''
    def plain(*args, **opts):
        batchable = f(*args, **opts)
        encargsorres, encresref = next(batchable)
        if not encresref:
            return encargsorres # a local result in this case
        self = args[0]
        cmd = pycompat.bytesurl(f.__name__)  # ensure cmd is ascii bytestr
        encresref.set(self._submitone(cmd, encargsorres))
        return next(batchable)
    setattr(plain, 'batchable', f)
    return plain
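
A minimal sketch of the coroutine protocol described in the batchable docstring above.
The samplepeer class, its _submitone stand-in, and the toy encoding are hypothetical;
only future and batchable come from the module above, and the sketch assumes this
revision of peer.py is importable as mercurial.peer.

from mercurial.peer import batchable, future

class samplepeer(object):
    def _submitone(self, cmd, encargs):
        # Stand-in for a real wire-protocol round trip: pretend the server
        # echoes the encoded arguments back as its reply.
        return b'&'.join(b'%s=%s' % (k, v) for k, v in encargs)

    @batchable
    def sample(self, one, two=None):
        # Phase 1: yield the encoded arguments plus a future; the wrapper
        # submits the command and fills the future with the reply.
        encargs = [(b'one', one), (b'two', two or b'')]
        encresref = future()
        yield encargs, encresref
        # Phase 2: the future now holds the (encoded) reply; decode it.
        yield encresref.value.upper()

p = samplepeer()
print(p.sample(b'1', two=b'2'))  # b'ONE=1&TWO=2'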