changeset 30206:d105195436c0
wireproto: compress data from a generator
Currently, the "getbundle" wire protocol command obtains a generator of
data, converts it to a util.chunkbuffer, then converts it back to a
generator via the protocol's groupchunks() implementation. For the SSH
protocol, groupchunks() simply reads 4kb chunks then write()s the
data to a file descriptor. For the HTTP protocol, groupchunks() reads
32kb chunks, feeds those into a zlib compressor, emits compressed data
as it is available, and that is sent to the WSGI layer, where it is
likely turned into HTTP chunked transfer chunks as is or further
buffered and turned into a larger chunk.
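
For orientation, here is a minimal, self-contained sketch of that pre-patch data path. The `chunkbuffer` and `bundle_chunks` below are simplified stand-ins (the real `util.chunkbuffer` buffers lazily rather than joining eagerly); only the shape of the flow is meant to match the description above.

```python
import io


def bundle_chunks():
    # Stand-in for the changegroup generator: chunks of arbitrary size.
    yield b'a' * 10000
    yield b'b' * 100
    yield b'c' * 50000


def chunkbuffer(gen):
    # Toy re-chunking buffer exposing read(n) over a generator of bytes.
    # The real util.chunkbuffer buffers lazily instead of joining eagerly.
    return io.BytesIO(b''.join(gen))


# SSH-style groupchunks(): re-read the buffered data in 4kb pieces.
fh = chunkbuffer(bundle_chunks())
for chunk in iter(lambda: fh.read(4096), b''):
    pass  # the SSH server write()s each 4kb chunk to its output stream
```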
For both the SSH and HTTP protocols, there is inefficiency from using
util.chunkbuffer.
For SSH, emitting consistent 4kb chunks sounds nice. However, the file
descriptor it is writing to is almost certainly buffered. That means
that a Python .write() probably doesn't translate into exactly what is
written to the I/O layer.
For HTTP, we're going through an intermediate layer to zlib-compress
data. So all util.chunkbuffer is doing is ensuring that the chunks we
feed into the zlib compressor are of uniform size. This means more CPU
time in Python buffering and emitting chunks in util.chunkbuffer but
fewer function calls to zlib.
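
The point is that zlib itself does not need uniform input sizes; the re-chunking only trades Python-level buffering for fewer compress() calls. A quick illustration (not from the patch):

```python
import zlib

# zlib.compressobj() accepts chunks of wildly different sizes; compress()
# may return b'' while the compressor buffers input internally.
z = zlib.compressobj()
out = []
for chunk in (b'x' * 10, b'y' * 100000, b'z'):
    data = z.compress(chunk)
    if data:
        out.append(data)
out.append(z.flush())

assert zlib.decompress(b''.join(out)) == b'x' * 10 + b'y' * 100000 + b'z'
```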
This patch introduces and implements a new wire protocol abstract
method: compresschunks(). It is like groupchunks() except it operates
on a generator instead of something with a .read(). The SSH
implementation simply proxies chunks. The HTTP implementation uses
zlib compression.
To avoid duplicate code, the HTTP groupchunks() has been reimplemented
in terms of compresschunks().
To prove this all works, the "getbundle" wire protocol command has been
switched to compresschunks(). This removes the util.chunkbuffer from
that command. Now, data essentially streams straight from the
changegroup emitter to the wire, possibly through a zlib compressor.
Generators all the way, baby.
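
As a hedged, self-contained sketch of the post-patch shape of that pipeline: the function names below mirror the patch, but the bodies are simplified stand-ins (no repo, no ui config lookup for the zlib level).

```python
import zlib


def getbundlechunks():
    # Stand-in for exchange.getbundlechunks(repo, 'serve', **opts).
    for part in (b'one', b'two', b'three'):
        yield b'changegroup chunk ' + part


def ssh_compresschunks(chunks):
    # SSH flavour: a pure pass-through of the generator.
    for chunk in chunks:
        yield chunk


def http_compresschunks(chunks):
    # HTTP flavour: stream every chunk through a single zlib compressor.
    z = zlib.compressobj()
    for chunk in chunks:
        data = z.compress(chunk)
        if data:  # compress() may emit nothing for small inputs
            yield data
    yield z.flush()


# Generator in, generator out -- no util.chunkbuffer in the middle.
assert b''.join(ssh_compresschunks(getbundlechunks())) == \
    b''.join(getbundlechunks())
wire = b''.join(http_compresschunks(getbundlechunks()))
assert zlib.decompress(wire).startswith(b'changegroup chunk one')
```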
There was little to no performance change on the server as measured
with the mozilla-central repository. This is likely because CPU
time is dominated by reading revlogs, producing the changegroup, and
zlib compressing the output stream. Still, this brings us a little
closer to our ideal of using generators everywhere.
| author | Gregory Szorc <gregory.szorc@gmail.com> |
| --- | --- |
| date | Sun, 16 Oct 2016 11:10:21 -0700 |
| parents | b4074417b661 |
| children | abe723002509 |
| files | mercurial/hgweb/protocol.py mercurial/sshserver.py mercurial/wireproto.py |
| diffstat | 3 files changed, 26 insertions(+), 7 deletions(-) |
```diff
--- a/mercurial/hgweb/protocol.py  Mon Oct 17 19:48:36 2016 +0200
+++ b/mercurial/hgweb/protocol.py  Sun Oct 16 11:10:21 2016 -0700
@@ -73,21 +73,30 @@
         val = self.ui.fout.getvalue()
         self.ui.ferr, self.ui.fout = self.oldio
         return val
+
     def groupchunks(self, fh):
+        def getchunks():
+            while True:
+                chunk = fh.read(32768)
+                if not chunk:
+                    break
+                yield chunk
+
+        return self.compresschunks(getchunks())
+
+    def compresschunks(self, chunks):
         # Don't allow untrusted settings because disabling compression or
         # setting a very high compression level could lead to flooding
         # the server's network or CPU.
         z = zlib.compressobj(self.ui.configint('server', 'zliblevel', -1))
-        while True:
-            chunk = fh.read(32768)
-            if not chunk:
-                break
+        for chunk in chunks:
             data = z.compress(chunk)
             # Not all calls to compress() emit data. It is cheaper to inspect
             # that here than to send it via the generator.
             if data:
                 yield data
         yield z.flush()
+
     def _client(self):
         return 'remote:%s:%s:%s' % (
             self.req.env.get('wsgi.url_scheme') or 'http',
```
```diff
--- a/mercurial/sshserver.py  Mon Oct 17 19:48:36 2016 +0200
+++ b/mercurial/sshserver.py  Sun Oct 16 11:10:21 2016 -0700
@@ -71,6 +71,10 @@
     def groupchunks(self, fh):
         return iter(lambda: fh.read(4096), '')
 
+    def compresschunks(self, chunks):
+        for chunk in chunks:
+            yield chunk
+
     def sendresponse(self, v):
         self.fout.write("%d\n" % len(v))
         self.fout.write(v)
```
```diff
--- a/mercurial/wireproto.py  Mon Oct 17 19:48:36 2016 +0200
+++ b/mercurial/wireproto.py  Sun Oct 16 11:10:21 2016 -0700
@@ -85,6 +85,14 @@
         """
         raise NotImplementedError()
 
+    def compresschunks(self, chunks):
+        """Generator of possible compressed chunks to send to the client.
+
+        This is like ``groupchunks()`` except it accepts a generator as
+        its argument.
+        """
+        raise NotImplementedError()
+
 class remotebatch(peer.batcher):
     '''batches the queued calls; uses as few roundtrips as possible'''
     def __init__(self, remote):
@@ -773,9 +781,7 @@
             return ooberror(bundle2required)
 
     chunks = exchange.getbundlechunks(repo, 'serve', **opts)
-    # TODO avoid util.chunkbuffer() here since it is adding overhead to
-    # what is fundamentally a generator proxying operation.
-    return streamres(proto.groupchunks(util.chunkbuffer(chunks)))
+    return streamres(proto.compresschunks(chunks))
 
 @wireprotocommand('heads')
 def heads(repo, proto):
```