bundle2: increase payload part chunk size to 32kb
Bundle2 payload parts are framed chunks. Essentially, we obtain
data in equal-size chunks of `preferedchunksize` bytes and emit those
to a generator. That generator is fed into a compressor (which can
be the no-op compressor, which just re-emits the generator). And
the output from the compressor likely goes to a file descriptor
or socket.
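In rough terms, the producer side looks like the sketch below. This is a
minimal illustration, not the real bundle2 code; `preferedchunksize` is the
actual constant name, but the other names are made up:

  import io

  preferedchunksize = 4096  # the value before this change

  def chunked(fh, size=preferedchunksize):
      """Yield equal-size chunks read from a file-like object."""
      while True:
          chunk = fh.read(size)
          if not chunk:
              return
          yield chunk

  def noopcompressor(chunks):
      """The no-op compressor: just re-emit the generator."""
      for chunk in chunks:
          yield chunk

  payload = io.BytesIO(b'x' * 1000000)  # stand-in for part data
  out = io.BytesIO()                    # stand-in for a socket/fd
  for chunk in noopcompressor(chunked(payload)):
      # one Python object plus several function calls per chunk
      out.write(chunk)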
What this means is that small chunk sizes create more Python objects
and Python function calls than larger chunk sizes. And as we know,
Python object and function call overhead in performance-sensitive
code matters (at least with CPython).
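For a sense of scale (back-of-the-envelope arithmetic, not a measurement
from this patch):

  gib = 1 << 30
  print(gib // 4096)   # 262144 chunks to move 1 GiB at 4k
  print(gib // 32768)  # 32768 chunks at 32k: 8x fewer objects and calls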
This commit increases the bundle2 part payload chunk size from 4k
to 32k. Practically speaking, this means that the chunks we feed
into a compressor (implemented in C code) or feed directly into a
file handle or socket write() are larger. It's possible the chunks
might be larger than what the receiver can handle in one logical
operation. But at that point, we're in C code, which is much more
efficient at dealing with splitting up the chunk and making multiple
function calls than Python is.
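The change itself amounts to bumping one constant; a sketch (the exact
surrounding code in bundle2.py may differ):

  # mercurial/bundle2.py (sketch)
  preferedchunksize = 32768  # previously 4096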
A downside to larger chunks is that the receiver has to wait for that
much data to arrive (either raw or from a decompressor) before it
can process the chunk. But 32kb still feels like a small buffer to
have to wait for. And in many cases, the client will convert from
8 read(4096) to 1 read(32768). That's happening in Python land. So
we cut down on the number of Python objects and function calls,
making the client faster as well. I don't think there are any
significant concerns with increasing the payload chunk size to 32kb.
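The read-coalescing effect can be illustrated with a toy example;
`consume` is a hypothetical helper, not the actual client code path:

  import io

  def consume(fh, chunksize):
      """Count the Python-level read() calls needed to drain a stream."""
      calls = 0
      while fh.read(chunksize):
          calls += 1
      return calls

  data = b'x' * 32768
  print(consume(io.BytesIO(data), 4096))   # 8 reads
  print(consume(io.BytesIO(data), 32768))  # 1 read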
The impact of this change on performance is significant. Using `curl`
to obtain a stream clone bundle2 payload from a server on localhost
serving the mozilla-unified repository:
  before: 20.78 user; 7.71 system; 80.5 MB/s
  after:  13.90 user; 3.51 system; 132 MB/s
  legacy: 9.72 user; 8.16 system; 132 MB/s
bundle2 stream clone generation is still more resource intensive than
legacy stream clone (that's likely because of the use of a
util.chunkbuffer). But the throughput is the same. We might
be in territory where this is effectively a benchmark of the
networking stack or Python's syscall throughput.
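For context, util.chunkbuffer adapts an iterator of chunks back into a
file-like read() interface. A simplified sketch of that kind of
rebuffering (not the actual util.py implementation) shows where the
extra copying comes from:

  class chunkbuffer(object):
      """Simplified rebuffering layer: turn an iterator of byte chunks
      into an object exposing read(n). Not Mercurial's real code."""

      def __init__(self, in_iter):
          self.iter = iter(in_iter)
          self.buf = b''

      def read(self, n):
          # stitching chunks together costs byte copies, which is
          # likely the extra overhead versus the legacy path
          while len(self.buf) < n:
              try:
                  self.buf += next(self.iter)
              except StopIteration:
                  break
          data, self.buf = self.buf[:n], self.buf[n:]
          return data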
From the client perspective, `hg clone -U --stream`:
  before: 33.50 user; 7.95 system; 53.3 MB/s
  after:  22.82 user; 7.33 system; 72.7 MB/s
  legacy: 29.96 user; 7.94 system; 58.0 MB/s
And for `hg clone --stream` with a working directory update of
~230k files:
  after:  119.55 user; 26.47 system; 0:57.08 wall
  legacy: 126.98 user; 26.94 system; 1:05.56 wall
So, it appears that bundle2's stream clone is now definitively faster
than legacy stream clone!
Differential Revision: https://phab.mercurial-scm.org/D1932

  $ echo "[extensions]" >> $HGRCPATH
  $ echo "mq=" >> $HGRCPATH
  $ hg init a
  $ cd a
  $ echo a > a
  $ hg ci -Ama
  adding a
  $ hg qnew a.patch
  $ echo a >> a
  $ hg qrefresh
  $ hg qnew b.patch
  $ echo b > b
  $ hg add b
  $ hg qrefresh
  $ hg qnew c.patch
  $ echo c > c
  $ hg add c
  $ hg qrefresh
  $ hg qgoto a.patch
  popping c.patch
  popping b.patch
  now at: a.patch
  $ hg qgoto c.patch
  applying b.patch
  applying c.patch
  now at: c.patch
  $ hg qgoto b.patch
  popping c.patch
  now at: b.patch
Using index:
  $ hg qgoto 0
  popping b.patch
  now at: a.patch
  $ hg qgoto 2
  applying b.patch
  applying c.patch
  now at: c.patch
No warnings when using index ... and update from non-qtip and with pending
changes in unrelated files:
  $ hg qnew bug314159
  $ echo d >> c
  $ hg qrefresh
  $ hg qnew bug141421
  $ echo e >> b
  $ hg qrefresh
  $ hg up -r bug314159
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo f >> a
  $ echo f >> b
  $ echo f >> c
  $ hg qgoto 1
  abort: local changes found, qrefresh first
  [255]
  $ hg qgoto 1 -f
  popping bug141421
  popping bug314159
  popping c.patch
  now at: b.patch
  $ hg st
  M a
  M b
  ? c.orig
  $ hg up -qCr.
  $ hg qgoto 3
  applying c.patch
  applying bug314159
  now at: bug314159
Detect ambiguous non-index:
  $ hg qgoto 14
  patch name "14" is ambiguous:
    bug314159
    bug141421
  abort: patch 14 not in series
  [255]
  $ cd ..