Gregory Szorc <gregory.szorc@gmail.com> [Fri, 07 Sep 2018 12:14:42 -0700] rev 39567
util: allow lrucachedict to track cost of entries
Currently, lrucachedict allows tracking of arbitrary items with the
only limit being the total number of items in the cache.
Caches can be a lot more useful when they are bound by the size
of the items in them rather than the number of elements in the
cache.
In preparation for teaching lrucachedict to enforce a max size of
cached items, we teach lrucachedict to optionally associate a numeric
cost value with each node.
We purposefully let the caller define their own cost for nodes.
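A minimal usage sketch (assuming the insert() signature and totalcost
attribute added in this change; details are illustrative):

    from mercurial import util

    d = util.lrucachedict(4)

    # __setitem__ still works and implies a cost of 0.
    d['a'] = 'value a'

    # insert() lets the caller attach whatever numeric cost makes
    # sense, e.g. the encoded size of the value.
    d.insert('b', 'value b', cost=7)
    d.insert('c', 'value c', cost=1024)

    # The cache keeps a running total of the cost of live entries.
    print(d.totalcost)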
This does introduce some overhead. Most of it comes from __setitem__,
since that function now calls into insert(), thus introducing Python
function call overhead.
$ hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000
! gets
! wall 0.599552 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
! wall 0.614643 comb 0.610000 user 0.610000 sys 0.000000 (best of 17)
! inserts
! <not available>
! wall 0.655817 comb 0.650000 user 0.650000 sys 0.000000 (best of 16)
! sets
! wall 0.540448 comb 0.540000 user 0.540000 sys 0.000000 (best of 18)
! wall 0.805644 comb 0.810000 user 0.810000 sys 0.000000 (best of 13)
! mixed
! wall 0.651556 comb 0.660000 user 0.660000 sys 0.000000 (best of 15)
! wall 0.781357 comb 0.780000 user 0.780000 sys 0.000000 (best of 13)
$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000
! gets
! wall 0.621014 comb 0.620000 user 0.620000 sys 0.000000 (best of 16)
! wall 0.615146 comb 0.620000 user 0.620000 sys 0.000000 (best of 17)
! inserts
! <not available>
! wall 0.698115 comb 0.700000 user 0.700000 sys 0.000000 (best of 15)
! sets
! wall 0.560247 comb 0.560000 user 0.560000 sys 0.000000 (best of 18)
! wall 0.832495 comb 0.830000 user 0.830000 sys 0.000000 (best of 12)
! mixed
! wall 0.686172 comb 0.680000 user 0.680000 sys 0.000000 (best of 15)
! wall 0.841359 comb 0.840000 user 0.840000 sys 0.000000 (best of 12)
We're still under 1us per insert, which seems like reasonable
performance for a cache.
If we comment out the updating of self.totalcost during insert(),
the performance of insert() is identical to that of __setitem__ before
this change. However, I don't want to make total cost evaluation lazy:
computing the total on demand requires a traversal of the cache, which
could be expensive for large caches, and that price would be paid
whenever the total cost needs to be evaluated at mutation time.
Differential Revision: https://phab.mercurial-scm.org/D4502
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 05 Sep 2018 23:15:20 -0700] rev 39566
util: add a popoldest() method to lrucachedict
This allows consumers to prune the oldest item from the cache. This
could be useful for e.g. a consumer that wishes for the size of
items tracked by the cache to remain under a high water mark.
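For example, a consumer could combine this with the cost tracking from
the previous change to stay under a caller-defined budget (sketch;
maxcost is a hypothetical caller-side limit):

    def enforcebudget(cache, maxcost):
        # Evict least-recently-used entries until the tracked cost is
        # back under the budget.
        while len(cache) > 0 and cache.totalcost > maxcost:
            cache.popoldest()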
Differential Revision: https://phab.mercurial-scm.org/D4501
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 11:40:20 -0700] rev 39565
util: ability to change capacity when copying lrucachedict
This will allow us to easily replace an lrucachedict with one
with a higher or lower capacity as consumers deem necessary.
IMO it is easier to just create a new cache instance than to
muck with the capacity of an existing cache. Mutating an existing
cache's capacity feels more prone to bugs.
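Usage would look roughly like this (sketch; the exact keyword name is
illustrative):

    # The cache turned out to be too small: build a copy with double
    # the capacity and swap it in instead of mutating the original.
    cache = cache.copy(capacity=cache.capacity * 2)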
Differential Revision: https://phab.mercurial-scm.org/D4500
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 11:37:27 -0700] rev 39564
util: make capacity a public attribute on lrucachedict
So consumers can query it. This is useful for callers that want to
verify the cache has capacity for N items before performing an
operation that may cause cache eviction.
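For instance (sketch):

    # Only bulk-insert if everything fits alongside what is already
    # cached, i.e. without forcing any eviction.
    if len(cache) + len(items) <= cache.capacity:
        for k, v in items:
            cache[k] = v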
Differential Revision: https://phab.mercurial-scm.org/D4499
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 11:33:40 -0700] rev 39563
util: properly copy lrucachedict instances
Previously, copy() only worked if the cache was full. We teach
copy() to only copy defined nodes.
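A quick illustration of the fixed behavior (in the style of
test-lrucachedict.py; the assertions are illustrative):

    from mercurial import util

    d = util.lrucachedict(4)
    d['a'] = 'va'
    d['b'] = 'vb'

    # Capacity is 4 but only 2 nodes are defined; copy() now
    # reproduces just those instead of assuming the cache is full.
    dc = d.copy()
    assert len(dc) == 2
    assert dc['a'] == 'va'
    assert dc['b'] == 'vb'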
Differential Revision: https://phab.mercurial-scm.org/D4498
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 11:27:25 -0700] rev 39562
tests: rewrite test-lrucachedict.py to use unittest
This makes the code so much easier to test and debug.
Along the way, I discovered a bug in copy() and added (partial) test
coverage for it.
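Abridged sketch of the new shape of the test (not the full file):

    import unittest
    import silenttestrunner
    from mercurial import util

    class testlrucachedict(unittest.TestCase):
        def testsimple(self):
            d = util.lrucachedict(2)
            d['a'] = 'va'
            d['b'] = 'vb'
            d['c'] = 'vc'  # evicts 'a'
            self.assertNotIn('a', d)
            self.assertEqual(d['c'], 'vc')

    if __name__ == '__main__':
        silenttestrunner.main(__name__)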
Differential Revision: https://phab.mercurial-scm.org/D4497
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 29 Aug 2018 15:17:11 -0700] rev 39561
wireprotov2peer: stream decoded responses
Previously, wire protocol version 2 would buffer all response data.
Only once all data was received did we CBOR decode it and resolve
the future associated with the command. This was obviously not
desirable. With the large response payloads introduced in future
commits, this would cause significant memory bloat and slow down client
operations, since the client had to wait on the server's complete
response.
This commit refactors the response handling code so that response
data can be streamed.
Command response objects now contain a buffered CBOR decoder. As
new data arrives, it is fed into the decoder. Decoded objects are
made available to the generator as they are decoded.
Because there is a separate thread processing incoming frames and
feeding data into the response object, there is the potential for
race conditions when mutating response objects. So a lock has been
added to guard access to critical state variables.
Because the generator emitting decoded objects needs to wait on
those objects to become available, we've added an Event for the
generator to wait on so it doesn't busy loop. This does mean
there is the potential for deadlocks. And I'm pretty sure they can
occur in some scenarios. We already have a handful of TODOs around
this. But I've added some more. Fixing this will likely require
moving the background thread receiving frames into clienthandler.
We likely would have done this anyway when implementing the client
bits for the SSH transport.
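Stripped to its essentials, the pattern described above looks roughly
like this (hypothetical class and method names; the real code lives in
wireprotov2peer.py and drives a buffering CBOR decoder):

    import threading

    class commandresponse(object):
        """State for one in-flight command response (sketch)."""

        def __init__(self, decoder):
            self._lock = threading.Lock()
            self._serviced = threading.Event()
            # Assumed decoder interface: decode(bytes) to feed data,
            # getavailable() to drain fully decoded values.
            self._decoder = decoder
            self._pendingobjs = []
            self._done = False

        def _onframedata(self, data):
            # Called from the thread reading frames off the wire.
            with self._lock:
                self._decoder.decode(data)
                self._pendingobjs.extend(self._decoder.getavailable())
                self._serviced.set()

        def _oncommanddone(self):
            with self._lock:
                self._done = True
                self._serviced.set()

        def objects(self):
            # Generator consumed by the caller; yields decoded objects
            # as they become available instead of after the full
            # response has been received.
            while True:
                with self._lock:
                    pending = self._pendingobjs
                    self._pendingobjs = []
                    done = self._done
                    self._serviced.clear()
                for o in pending:
                    yield o
                if done:
                    return
                # Wait for the frame thread to deliver more data
                # rather than busy looping.
                self._serviced.wait()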
Test output changes because the initial CBOR map holding the overall
response state is now always handled internally by the response
object.
Differential Revision: https://phab.mercurial-scm.org/D4474
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 29 Aug 2018 16:43:17 -0700] rev 39560
wireprotoframing: buffer emitted data to reduce frame count
An upcoming commit introduces a wire protocol command that can emit
hundreds of thousands of small objects. Without a buffering layer,
we would emit a single, small frame for every object. Performance
profiling revealed this to be a source of significant overhead for
both client and server.
This commit introduces a very crude buffering layer so that we emit
fewer, bigger frames in such a scenario. This code will likely get
rewritten in the future to be part of the streams API, as we'll
need a similar strategy for compressing data. I don't want to think
about it too much at the moment though.
server
before: user 32.500+0.000 sys 1.160+0.000
after: user 20.230+0.010 sys 0.180+0.000
client
before: user 133.400+0.000 sys 93.120+0.000
after: user 68.370+0.000 sys 32.950+0.000
This appears to indicate we have significant overhead in the frame
processing code on both client and server. It might be worth profiling
that at some point...
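The core of the idea, reduced to a standalone sketch (names and chunk
size are arbitrary, not the actual wireprotoframing code):

    def coalesce(encodedobjs, chunksize=65536):
        """Coalesce many small byte strings into fewer, larger payloads.

        Each emitted chunk becomes the payload of a single frame,
        instead of sending one tiny frame per encoded object.
        """
        buf = []
        buflen = 0
        for data in encodedobjs:
            buf.append(data)
            buflen += len(data)
            if buflen >= chunksize:
                yield b''.join(buf)
                buf = []
                buflen = 0
        if buf:
            yield b''.join(buf)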
Differential Revision: https://phab.mercurial-scm.org/D4473
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 05 Sep 2018 09:06:40 -0700] rev 39559
wireprotov2: implement commands as a generator of objects
Previously, wire protocol version 2 inherited version 1's model of
having separate types to represent the results of different wire
protocol commands.
As I implemented more powerful commands in future commits, I found
I was using a common pattern of returning a special type to hold a
generator. This meant the command function required a closure to
do most of the work. That made logic flow more difficult to follow.
I also noticed that many commands were effectively a sequence of
objects to be CBOR encoded.
I think it makes sense to define version 2 commands as generators.
This way, commands can simply emit the data structures they wish to
send to the client. This eliminates the need for a closure in
command functions and removes encoding from the bodies of commands.
As part of this commit, the handling of response objects has been
moved into the serverreactor class. This puts the reactor in the
driver's seat with regards to CBOR encoding and error handling.
Having error handling in the function that emits frames is
particularly important because exceptions in that function can lead
to things getting in a bad state: I'm fairly certain that uncaught
exceptions in the frame generator were causing deadlocks.
I also introduced a dedicated error type for explicit error reporting
in command handlers. This will be used in subsequent commits.
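Schematically, a version 2 command now looks like this (made-up command
body; the encoding and framing calls are simplified stand-ins for the
real wireprotov2server machinery):

    # A command handler is just a generator of Python data structures.
    # It no longer builds a response object or CBOR-encodes anything.
    def headscommand(repo, proto):
        heads = repo.heads()
        yield {b'totalitems': len(heads)}
        for node in heads:
            yield node

    # The serverreactor, not the command, drives encoding and framing,
    # roughly:
    #
    #   for obj in headscommand(repo, proto):
    #       for chunk in cborencode(obj):
    #           emitframes(chunk)
    #
    # and it is also where exceptions raised by the generator are
    # caught and turned into error frames.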
There's still a bit of work to be done here, especially around
formalizing the error handling "protocol." I've added yet another
TODO to track this so we don't forget.
Test output changed because we're using generators and no longer know
we are at the end of the data until we hit the end of the generator.
This means we can't emit the end-of-stream flag until we've exhausted
the generator. Hence the introduction of 0-sized end-of-stream frames.
Differential Revision: https://phab.mercurial-scm.org/D4472
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 27 Aug 2018 13:30:44 -0700] rev 39558
internals: extract frame-based protocol docs to own document
wireprotocol.txt is quite long and difficult to digest. The
frame-based protocol is effectively a standalone concept (and could
even be used outside of Mercurial). So this commit extracts its
docs to a standalone file.
The first few paragraphs were rewritten as part of the extraction.
Section headers were adjusted accordingly.
Existing references in wireprotocol.txt were updated to refer to the
new doc / concept, which I've started referring to as `hgrpc`.
I'm on the fence as to whether to move the HTTP and SSH transport
details to the new doc as well. For now, I'm leaving them in
wireprotocol.txt.
Differential Revision: https://phab.mercurial-scm.org/D4443