Augie Fackler <augie@google.com> [Thu, 13 Sep 2018 18:09:22 -0400] rev 39607
dagop: fix typo spotted while doing unrelated investigation
Differential Revision: https://phab.mercurial-scm.org/D4584
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 12 Sep 2018 19:00:46 -0700] rev 39606
hg: don't reuse repo instance after unshare()
Unsharing a repository is a pretty invasive procedure and fundamentally
changes the behavior of the repository.
Currently, hg.unshare() calls into localrepository.__init__ to
re-initialize the repository instance. This is a bit hacky. And
future commits that refactor how localrepository instances are
constructed will make this difficult to support.
This commit changes unshare() so it constructs a new repo instance
once the unshare I/O has completed. It then poisons the old repo
instance so any further use will result in error.
Surprisingly, nothing in core appears to access a repo instance
after it has been unshared!
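A minimal sketch of the poisoning idea (the names below are illustrative, not
the actual hg.py code): swap the instance's class for one whose attribute
lookups raise, so any stale reference fails loudly instead of operating on a
half-converted repository.
  class _poisonedrepo(object):
      def __getattribute__(self, name):
          raise RuntimeError(
              'repo instance should not be used after unshare()')
  def poisonrepository(repo):
      # object.__setattr__ bypasses any __setattr__ hook on the repo
      # (e.g. a repoview), so the class swap always takes effect.
      object.__setattr__(repo, '__class__', _poisonedrepo)
  class dummyrepo(object):        # stand-in for a real repo instance
      def status(self):
          return 'ok'
  repo = dummyrepo()
  poisonrepository(repo)
  repo.status()                   # raises RuntimeError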
.. api::
``hg.unshare()`` now poisons the repo instance so it can't be used.
It also returns a new repo instance suitable for interacting with
the unshared repository.
Differential Revision: https://phab.mercurial-scm.org/D4557
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 11 Sep 2018 20:06:39 -0700] rev 39605
unionrepo: dynamically create repository type from base repository
This is basically the same thing we just did for bundlerepo except
for union repositories.
.. api::
``unionrepo.unionrepository()`` is no longer usable on its own.
To obtain an instance, call ``unionrepo.instance()`` or
``unionrepo.makeunionrepository()``.
Differential Revision: https://phab.mercurial-scm.org/D4556
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 11 Sep 2018 19:50:07 -0700] rev 39604
bundlerepo: dynamically create repository type from base repository
Previously, bundlerepository inherited from localrepo.localrepository.
You simply instantiated a bundlerepository and its __init__ called
localrepo.localrepository.__init__. Things were simple.
Unfortunately, this strategy is limiting because it assumes that
the base repository is a localrepository instance. And it assumes
various properties of localrepository, such as the arguments its
__init__ takes. And it prevents us from changing behavior of
localrepository.__init__ without also having to change derived classes.
Previous and ongoing work to abstract storage revealed these
limitations.
This commit changes the initialization strategy of bundle repositories
to dynamically create a type to represent the repository. Instead of
a static type, we instantiate a new local repo instance via
localrepo.instance(). We then combine its __class__ with
bundlerepository to produce a new type. This ensures that no matter
how localrepo.instance() decides to create a repository object, we
can derive a bundle repo object from it. That is, localrepo.instance()
could return a type that isn't a localrepository and it would "just
work."
Well, it would "just work" if bundlerepository's custom implementations
only accessed attributes in the documented repository interface. I'm
pretty sure it violates the interface contract in a handful of
places. But we can worry about that another day. This change gets us
closer to doing more clever things around instantiating repository
instances without having to worry about teaching bundlerepository about
them.
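A hedged, self-contained illustration of that approach (the class and function
names below are stand-ins, not the real bundlerepo code): derive the bundle
repository class from whatever class the base repository instance actually has
at run time, instead of statically subclassing localrepository.
  class fakelocalrepo(object):          # stand-in for whatever type
      def url(self):                    # localrepo.instance() returned
          return 'file:/some/repo'
  class bundlerepomixin(object):
      # bundle-specific overrides layered over an arbitrary base class
      def url(self):
          return 'bundle:' + super(bundlerepomixin, self).url()
  def makebundlerepository(baserepo):
      # Combine the mixin with the *runtime* class of the base repo, so
      # any repository type produced by localrepo.instance() can be used.
      cls = type('derivedbundlerepository',
                 (bundlerepomixin, baserepo.__class__), {})
      baserepo.__class__ = cls
      return baserepo
  repo = makebundlerepository(fakelocalrepo())
  print(repo.url())                     # -> bundle:file:/some/repo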
.. api::
``bundlerepo.bundlerepository`` is no longer usable on its own.
The class is combined with the class of the base repository it is
associated with at run-time.
New bundlerepository instances can be obtained by calling
``bundlerepo.instance()`` or ``bundlerepo.makebundlerepository()``.
Differential Revision: https://phab.mercurial-scm.org/D4555
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 11 Sep 2018 19:16:32 -0700] rev 39603
bundlerepo: factor out code for instantiating a bundle repository
This code will soon become a bit more complicated. So extract to its
own function.
And change both instantiators of bundlerepository to use it.
Differential Revision: https://phab.mercurial-scm.org/D4554
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 11 Sep 2018 18:45:05 -0700] rev 39602
bundlerepo: pass create=True
I don't want to know how this came to be. Maybe a holdover from the
days before Python had a bool type?
Differential Revision: https://phab.mercurial-scm.org/D4553
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 11 Sep 2018 18:41:14 -0700] rev 39601
shelve: use bundlerepo.instance() to construct a repo object
The instance() functions are preferred over cls.__init__ for
creating repo instances. It doesn't really matter now. But future
commits will refactor the bundlerepository class in ways that will
cause the old way to break.
Differential Revision: https://phab.mercurial-scm.org/D4552
Yuya Nishihara <yuya@tcha.org> [Sun, 29 Jul 2018 22:04:01 +0900] rev 39600
templatekw: add experimental {status} keyword
This is another example of fctx-based keywords. I think this is somewhat
useful in log templates.
Yuya Nishihara <yuya@tcha.org> [Sun, 29 Jul 2018 21:52:01 +0900] rev 39599
templatekw: add option to include ignored/clean/unknown files in cache
They will be necessary to provide {status} of files.
Yuya Nishihara <yuya@tcha.org> [Sun, 29 Jul 2018 22:07:42 +0900] rev 39598
templatekw: keep status tuple in cache dict and rename cache key accordingly
There's no point in dropping the tail elements, which are mostly empty lists.
Yuya Nishihara <yuya@tcha.org> [Sun, 29 Jul 2018 21:39:12 +0900] rev 39597
templatekw: extract function that computes and caches file status
Yuya Nishihara <yuya@tcha.org> [Thu, 13 Sep 2018 22:32:51 +0900] rev 39596
py3: use sysstr() to convert ProgrammingError bytes with no unicode error risk
msg.decode('utf8') may fail if msg isn't valid UTF-8, which is possible since
we sometimes embed a filename in the error message, for example.
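A hedged illustration of the fix's intent (assuming pycompat.sysstr() behaves
as a non-raising bytes-to-str conversion, as in mercurial.pycompat):
decode('utf8') can raise on arbitrary filename bytes, while sysstr() never
raises because it maps bytes one-to-one instead of validating UTF-8.
  from mercurial import pycompat
  msg = b'cannot read \xe9-file'    # bytes embedding a non-UTF-8 filename
  # msg.decode('utf8')              # would raise UnicodeDecodeError
  text = pycompat.sysstr(msg)       # always yields a native str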
Boris Feld <boris.feld@octobus.net> [Mon, 10 Sep 2018 08:31:41 +0200] rev 39595
revlog: reuse cached delta for identical base revision (issue5975)
Since 8f83a953dddf, we skip over empty deltas when choosing a delta base. Such
a delta happens when two distinct revisions have the same content.
The remote might be sending a delta against such a revision within the bundle.
In that case, the delta base is no longer considered, but the cached one could
still be used with the equivalent revision.
Not reusing the delta from the bundle can have a significant performance
impact, so we now make sure to do so when possible.
Boris Feld <boris.feld@octobus.net> [Mon, 10 Sep 2018 10:11:21 +0200] rev 39594
snapshot: fix line order when skipping over empty deltas
The code movement in 37957e07138c introduced an error.
Since 8f83a953dddf, we discard some revisions because they are identical to
their delta base (and use that delta base instead). That logic is good;
however, in 37957e07138c we mixed up the order of two lines, adding the "new"
revision to the set of already tested ones instead of the discarded one. So in
practice, we were never investigating any revisions in a chain starting with
an empty delta, creating significantly worse delta chains (e.g. Mercurial's
manifest goes from about 60MB up to about 80MB).
Matt Harbison <matt_harbison@yahoo.com> [Wed, 12 Sep 2018 23:10:59 -0400] rev 39593
tests: stabilize change for handling not quoting non-empty-directory
The change originated in cb1329738d64. I suspect the problem is with the
combination of (re) and the '\' to '/' retry on Windows. I've no idea if py3 on
Windows needs the quoting, since it can't even run `hg` with no arguments.
(It's dying somewhere on the ctypes declarations when win32.py is imported.)
Augie Fackler <augie@google.com> [Tue, 21 Aug 2018 15:25:46 -0400] rev 39592
hg: wrap the highest layer in the `hg` script possible in trace event
This should help us have a better idea of what "interpreter startup
costs" look like. This does omit the HGUNICODEPEDANTRY block and the
LIBDIR dancing to set up sys.path, but the former is usually off and
the latter is unavoidable and should be very fast. If we get worried
about those cases we can consider open-coding the tracing logic here.
Differential Revision: https://phab.mercurial-scm.org/D4346
Martin von Zweigbergk <martinvonz@google.com> [Wed, 12 Sep 2018 12:01:32 -0700] rev 39591
localrepo: use urllocalpath() for path to create repo too
It looks like this was lost in 7ce9dea3a14a (localrepo: move repo creation
logic out of localrepository.__init__ (API), 2018-09-11). I don't know when it
makes a difference (maybe on Windows, since urllocalpath() mentions something
about drive letters).
Differential Revision: https://phab.mercurial-scm.org/D4550
Martin von Zweigbergk <martinvonz@google.com> [Wed, 12 Sep 2018 08:41:00 -0700] rev 39590
localrepo: move check for existing repo into createrepository()
For symmetry with the check for existence of a repo in
localrepository.__init__, we should check for the non-existence in
createrepository(). We could alternatively move both checks into
instance().
Differential Revision: https://phab.mercurial-scm.org/D4549
Matt Harbison <matt_harbison@yahoo.com> [Wed, 12 Sep 2018 21:32:08 -0400] rev 39589
py3: add b'' to some run-tests.py strings for Windows
Things go seriously off the rails after this, so there may be more that are
missing.
# skip-blame since these are just converting to bytes literals
Augie Fackler <raf@durin42.com> [Wed, 12 Sep 2018 19:14:28 -0400] rev 39588
wireprotov1peer: forward __name__ of wrapped method in batchable decorator
Not required, but clarifies debugging when the going gets really tough.
Differential Revision: https://phab.mercurial-scm.org/D4551
Yuya Nishihara <yuya@tcha.org> [Sun, 29 Jul 2018 21:28:51 +0900] rev 39587
templatekw: add {size} keyword as an example of fctx-based keyword
I'll add {status}, and I think some lfs keywords can be migrated to this.
I'm not certain how many fctx-based keywords will be introduced into the
global space, but if there are a couple more, we'll probably need to sort
them out to the "File Keywords" section in the templater help. Until then,
fctx keywords are hidden as experimental.
Yuya Nishihara <yuya@tcha.org> [Sun, 29 Jul 2018 21:25:37 +0900] rev 39586
formatter: populate fctx from ctx and path value
Tests will be added by the next patch.
Yuya Nishihara <yuya@tcha.org> [Thu, 07 Jun 2018 21:36:13 +0900] rev 39585
formatter: factor out function that detects node change and document it
This prepares for demand loading of ctx/fctx objects. With this change,
'revcache' is also recreated if 'node' value changes, which will be needed
to support loading of ctx from (repo, node) pair.
Yuya Nishihara <yuya@tcha.org> [Sat, 01 Sep 2018 15:06:05 +0900] rev 39584
formatter: inline _gettermap and _knownkeys
Yuya Nishihara <yuya@tcha.org> [Sat, 01 Sep 2018 13:21:45 +0900] rev 39583
formatter: fill missing resources by formatter, not by resource mapper
While working on demand loading of ctx/fctx objects, I found it's weird
to support lookup in both directions. For instance, fctx can be loaded
from a (ctx, path) pair, but ctx may also be derived from fctx.changectx()
in the original mapping. If the original mapping had fctx but no ctx,
and if the new mapping provides {path}, we can't be sure if fctx should be
updated by fctx'.changectx()[path] or not.
This patch simply drops the support for the resolution in fctx -> ctx -> repo
direction.
Yuya Nishihara <yuya@tcha.org> [Thu, 07 Jun 2018 23:27:54 +0900] rev 39582
templater: remove unused context argument from most resourcemapper functions
While working on demand loading of ctx/fctx objects, I noticed that it's quite
easy to create infinite recursion by carelessly using the template context in
the resource mapper. Let's make that not happen.
Yuya Nishihara <yuya@tcha.org> [Mon, 10 Sep 2018 20:57:18 +0900] rev 39581
ancestor: remove extra generator from lazyancestors.__iter__()
Martin von Zweigbergk <martinvonz@google.com> [Wed, 12 Sep 2018 11:24:51 -0700] rev 39580
localrepo: fix a mismatched arg name in createrepository() docstring
Differential Revision: https://phab.mercurial-scm.org/D4548
Augie Fackler <augie@google.com> [Wed, 12 Sep 2018 11:37:34 -0400] rev 39579
error: ensure ProgrammingError message is always a str
Since this error is internal-only and a runtime error, let's give it a
treatment that makes it behave identically when repr()d on both Python
2 and Python 3.
Differential Revision: https://phab.mercurial-scm.org/D4545
Augie Fackler <augie@google.com> [Wed, 12 Sep 2018 11:39:48 -0400] rev 39578
py3: whitelist a test caught by the ratchet
Differential Revision: https://phab.mercurial-scm.org/D4547
Augie Fackler <augie@google.com> [Wed, 12 Sep 2018 11:38:46 -0400] rev 39577
tests: handle Python 3 not quoting non-empty-directory error
I assume this happens on Windows too, so I did the same regex on both
versions of the output. The whole message printed by these aborts
comes from Python, so if we want to exert control over the quoting
here it'll be a bit of a pain.
Differential Revision: https://phab.mercurial-scm.org/D4546
Pulkit Goyal <pulkit@yandex-team.ru> [Wed, 12 Sep 2018 17:45:43 +0300] rev 39576
context: don't count deleted files as candidates for path conflicts in IMM
This patch makes sure we don't consider the deleted files in our IMM wctx
as potential conflicts while calculating path conflicts. This fixes the bug
demonstrated in the previous patch.
Differential Revision: https://phab.mercurial-scm.org/D4543
Pulkit Goyal <pulkit@yandex-team.ru> [Wed, 12 Sep 2018 17:22:46 +0300] rev 39575
rebase: add tests showing path conflict detection needs to be smarter in IMM
This patch adds a test which shows that you can't rebase a cset which removes a
directory and adds a file with the same name as that directory, because false
positive path conflicts are reported.
I fixed the case where there is a file and we add a directory of the same name
while removing the file, but missed testing the current case. The next patch
will fix this.
Differential Revision: https://phab.mercurial-scm.org/D4544
Anton Shestakov <av6@dwimlabs.net> [Mon, 10 Sep 2018 16:47:02 +0800] rev 39574
zsh_completion: add new and remove deprecated flags
Differential Revision: https://phab.mercurial-scm.org/D4519
Anton Shestakov <av6@dwimlabs.net> [Mon, 10 Sep 2018 16:43:49 +0800] rev 39573
zsh_completion: update various arguments, descriptions, metavariables
Addition of "=" means the flag must have an argument after it.
Differential Revision: https://phab.mercurial-scm.org/D4518
Pulkit Goyal <pulkit@yandex-team.ru> [Wed, 05 Sep 2018 01:18:29 +0530] rev 39572
setup: don't support py 3.5.0, 3.5.1, 3.5.2 because of bug in codecs
codecs.escape_encode() raises SystemError if an empty bytestring is passed. We
do that in some places in our code, and because of this bug, things break.
Therefore we can't support the mentioned versions. The bug was fixed in 3.5.3
and 3.6.0 beta 2. We can't support 3.6.0 anyway because of a bug in formatting
bytestrings.
Link to the Python bug: https://bugs.python.org/issue25270
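A hedged sketch of such a guard (not the actual setup.py hunk; the message text
is illustrative):
  import sys
  if sys.version_info[:3] in ((3, 5, 0), (3, 5, 1), (3, 5, 2)):
      raise SystemExit(
          'Mercurial requires Python 3.5.3 or newer on Python 3; '
          'codecs.escape_encode(b"") is broken on this release '
          '(https://bugs.python.org/issue25270)')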
Differential Revision: https://phab.mercurial-scm.org/D4475
Gregory Szorc <gregory.szorc@gmail.com> [Fri, 07 Sep 2018 10:18:20 -0700] rev 39571
util: update lrucachedict order during get()
get() should have the same semantics as __getitem__ for item
retrieval.
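A hedged usage sketch of the new behavior (keys are illustrative; assumes a
Mercurial checkout with this change applied):
  from mercurial import util
  d = util.lrucachedict(2)
  d[b'a'] = 1
  d[b'b'] = 2
  d.get(b'a')          # now refreshes b'a' in the LRU order, like d[b'a']
  d[b'c'] = 3          # so b'b', not b'a', is evicted
  assert b'a' in d and b'b' not in d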
Differential Revision: https://phab.mercurial-scm.org/D4506
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 18:04:27 -0700] rev 39570
util: lower water mark when removing nodes after cost limit reached
See the inline comment for the reasoning here. This is a pretty
common strategy for garbage collectors and other cache-like primitives.
The performance impact is substantial:
$ hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 100
! inserts w/ cost limit
! wall 1.659181 comb 1.650000 user 1.650000 sys 0.000000 (best of 7)
! wall 1.722122 comb 1.720000 user 1.720000 sys 0.000000 (best of 6)
! mixed w/ cost limit
! wall 1.139955 comb 1.140000 user 1.140000 sys 0.000000 (best of 9)
! wall 1.182513 comb 1.180000 user 1.180000 sys 0.000000 (best of 9)
$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000
! inserts
! wall 0.679546 comb 0.680000 user 0.680000 sys 0.000000 (best of 15)
! sets
! wall 0.825147 comb 0.830000 user 0.830000 sys 0.000000 (best of 13)
! inserts w/ cost limit
! wall 25.105273 comb 25.080000 user 25.080000 sys 0.000000 (best of 3)
! wall 1.724397 comb 1.720000 user 1.720000 sys 0.000000 (best of 6)
! mixed
! wall 0.807096 comb 0.810000 user 0.810000 sys 0.000000 (best of 13)
! mixed w/ cost limit
! wall 12.104470 comb 12.070000 user 12.070000 sys 0.000000 (best of 3)
! wall 1.190563 comb 1.190000 user 1.190000 sys 0.000000 (best of 9)
$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000 --mixedgetfreq 90
! inserts
! wall 0.711177 comb 0.710000 user 0.710000 sys 0.000000 (best of 14)
! sets
! wall 0.846992 comb 0.850000 user 0.850000 sys 0.000000 (best of 12)
! inserts w/ cost limit
! wall 25.963028 comb 25.960000 user 25.960000 sys 0.000000 (best of 3)
! wall 2.184311 comb 2.180000 user 2.180000 sys 0.000000 (best of 5)
! mixed
! wall 0.728256 comb 0.730000 user 0.730000 sys 0.000000 (best of 14)
! mixed w/ cost limit
! wall 3.174256 comb 3.170000 user 3.170000 sys 0.000000 (best of 4)
! wall 0.773186 comb 0.770000 user 0.770000 sys 0.000000 (best of 13)
$ hg perflrucachedict --size 100000 --gets 1000000 --sets 1000000 --mixed 1000000 --mixedgetfreq 90 --costlimit 5000000
! gets
! wall 1.191368 comb 1.190000 user 1.190000 sys 0.000000 (best of 9)
! wall 1.195304 comb 1.190000 user 1.190000 sys 0.000000 (best of 9)
! inserts
! wall 0.950995 comb 0.950000 user 0.950000 sys 0.000000 (best of 11)
! inserts w/ cost limit
! wall 1.589732 comb 1.590000 user 1.590000 sys 0.000000 (best of 7)
! sets
! wall 1.094941 comb 1.100000 user 1.090000 sys 0.010000 (best of 9)
! mixed
! wall 0.936420 comb 0.940000 user 0.930000 sys 0.010000 (best of 10)
! mixed w/ cost limit
! wall 0.882780 comb 0.870000 user 0.870000 sys 0.000000 (best of 11)
This puts us ~2x slower than caches without cost accounting. And for
read-heavy workloads (the prime use cases for caches), performance is
nearly identical.
In the worst case (pure write workloads with cost accounting enabled),
we're looking at ~1.5us per insert on large caches. That seems "fast
enough."
Differential Revision: https://phab.mercurial-scm.org/D4505
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 12:40:30 -0700] rev 39569
util: optimize cost auditing on insert
Calling popoldest() on insert with cost auditing enabled introduces
significant overhead.
The primary reason for this overhead is that popoldest() needs to
walk the linked list to find the first non-empty node. When we
call popoldest() within a loop, this can become quadratic. The
performance impact is more pronounced on caches with large capacities.
This commit effectively inlines the popoldest() call into
_enforcecostlimit(). By doing so, we only do the backwards walk
to find the first empty node once. However, we may still
perform this work on insert when the cache is near cost capacity.
So this is only a partial performance win.
$ hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 100
! gets w/ cost limit
! wall 0.598737 comb 0.590000 user 0.590000 sys 0.000000 (best of 17)
! inserts w/ cost limit
! wall 1.694282 comb 1.700000 user 1.700000 sys 0.000000 (best of 6)
! wall 1.659181 comb 1.650000 user 1.650000 sys 0.000000 (best of 7)
! mixed w/ cost limit
! wall 1.157655 comb 1.150000 user 1.150000 sys 0.000000 (best of 9)
! wall 1.139955 comb 1.140000 user 1.140000 sys 0.000000 (best of 9)
$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000
! gets w/ cost limit
! wall 0.598526 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
! wall 0.601993 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
! inserts w/ cost limit
! wall 37.838315 comb 37.840000 user 37.840000 sys 0.000000 (best of 3)
! wall 25.105273 comb 25.080000 user 25.080000 sys 0.000000 (best of 3)
! mixed w/ cost limit
! wall 18.060198 comb 18.060000 user 18.060000 sys 0.000000 (best of 3)
! wall 12.104470 comb 12.070000 user 12.070000 sys 0.000000 (best of 3)
$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000 --mixedgetfreq 90
! gets w/ cost limit
! wall 0.600024 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
! wall 0.614439 comb 0.620000 user 0.620000 sys 0.000000 (best of 17)
! inserts w/ cost limit
! wall 37.154547 comb 37.120000 user 37.120000 sys 0.000000 (best of 3)
! wall 25.963028 comb 25.960000 user 25.960000 sys 0.000000 (best of 3)
! mixed w/ cost limit
! wall 4.381602 comb 4.380000 user 4.370000 sys 0.010000 (best of 3)
! wall 3.174256 comb 3.170000 user 3.170000 sys 0.000000 (best of 4)
Differential Revision: https://phab.mercurial-scm.org/D4504
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 14:04:46 -0700] rev 39568
util: teach lrucachedict to enforce a max total cost
Now that lrucachedict entries can have a numeric cost associated
with them and we can easily pop the oldest item in the cache, it
now becomes relatively trivial to implement support for enforcing
a high water mark on the total cost of items in the cache.
This commit teaches lrucachedict instances to have a max cost
associated with them. When items are inserted, we pop old items
until enough "cost" frees up to make room for the new item.
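A hedged usage sketch of the cost limit (assuming the constructor gains a
maxcost argument as described; the values are illustrative):
  from mercurial import util
  d = util.lrucachedict(100, maxcost=1000)
  d.insert(b'a', b'x' * 600, cost=600)
  d.insert(b'b', b'y' * 600, cost=600)   # total cost would exceed 1000,
  assert b'a' not in d                   # so the oldest entry is evicted
  assert d.totalcost <= 1000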
This feature is close to zero cost when not used (modulo the insertion
regression introduced by the previous commit):
$ ./hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000
! gets
! wall 0.607444 comb 0.610000 user 0.610000 sys 0.000000 (best of 17)
! wall 0.601653 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
! inserts
! wall 0.678261 comb 0.680000 user 0.680000 sys 0.000000 (best of 14)
! wall 0.685042 comb 0.680000 user 0.680000 sys 0.000000 (best of 15)
! sets
! wall 0.808770 comb 0.800000 user 0.800000 sys 0.000000 (best of 13)
! wall 0.834241 comb 0.830000 user 0.830000 sys 0.000000 (best of 12)
! mixed
! wall 0.782441 comb 0.780000 user 0.780000 sys 0.000000 (best of 13)
! wall 0.803804 comb 0.800000 user 0.800000 sys 0.000000 (best of 13)
$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000
! init
! wall 0.006952 comb 0.010000 user 0.010000 sys 0.000000 (best of 418)
! gets
! wall 0.613350 comb 0.610000 user 0.610000 sys 0.000000 (best of 17)
! wall 0.617415 comb 0.620000 user 0.620000 sys 0.000000 (best of 17)
! inserts
! wall 0.701270 comb 0.700000 user 0.700000 sys 0.000000 (best of 15)
! wall 0.700516 comb 0.700000 user 0.700000 sys 0.000000 (best of 15)
! sets
! wall 0.825720 comb 0.830000 user 0.830000 sys 0.000000 (best of 13)
! wall 0.837946 comb 0.840000 user 0.830000 sys 0.010000 (best of 12)
! mixed
! wall 0.821644 comb 0.820000 user 0.820000 sys 0.000000 (best of 13)
! wall 0.850559 comb 0.850000 user 0.850000 sys 0.000000 (best of 12)
I reckon the slight slowdown on insert is due to added if checks.
For caches with total cost limiting enabled:
$ hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 100
! gets w/ cost limit
! wall 0.598737 comb 0.590000 user 0.590000 sys 0.000000 (best of 17)
! inserts w/ cost limit
! wall 1.694282 comb 1.700000 user 1.700000 sys 0.000000 (best of 6)
! mixed w/ cost limit
! wall 1.157655 comb 1.150000 user 1.150000 sys 0.000000 (best of 9)
$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000
! gets w/ cost limit
! wall 0.598526 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
! inserts w/ cost limit
! wall 37.838315 comb 37.840000 user 37.840000 sys 0.000000 (best of 3)
! mixed w/ cost limit
! wall 18.060198 comb 18.060000 user 18.060000 sys 0.000000 (best of 3)
$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000 --mixedgetfreq 90
! gets w/ cost limit
! wall 0.600024 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
! inserts w/ cost limit
! wall 37.154547 comb 37.120000 user 37.120000 sys 0.000000 (best of 3)
! mixed w/ cost limit
! wall 4.381602 comb 4.380000 user 4.370000 sys 0.010000 (best of 3)
The functions we're benchmarking are slightly different, which could
move numbers by a few milliseconds. But the slowdown on insert is too
great to be explained by that. The slowness is due to insert heavy
operations needing to call popoldest() repeatedly when the cache is
at capacity. The next commit will address this.
Differential Revision: https://phab.mercurial-scm.org/D4503
Gregory Szorc <gregory.szorc@gmail.com> [Fri, 07 Sep 2018 12:14:42 -0700] rev 39567
util: allow lrucachedict to track cost of entries
Currently, lrucachedict allows tracking of arbitrary items with the
only limit being the total number of items in the cache.
Caches can be a lot more useful when they are bound by the size
of the items in them rather than the number of elements in the
cache.
In preparation for teaching lrucachedict to enforce a max size of
cached items, we teach lrucachedict to optionally associate a numeric
cost value with each node.
We purposefully let the caller define their own cost for nodes.
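A hedged usage sketch (assuming the insert(key, value, cost=...) API and
totalcost attribute described here): the caller picks whatever cost metric
makes sense, e.g. the byte length of the cached value.
  from mercurial import util
  d = util.lrucachedict(4)
  blob = b'\x00' * 4096
  d.insert(b'node0', blob, cost=len(blob))
  print(d.totalcost)                     # -> 4096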
This does introduce some overhead. Most of it comes from __setitem__,
since that function now calls into insert(), thus introducing Python
function call overhead.
$ hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000
! gets
! wall 0.599552 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
! wall 0.614643 comb 0.610000 user 0.610000 sys 0.000000 (best of 17)
! inserts
! <not available>
! wall 0.655817 comb 0.650000 user 0.650000 sys 0.000000 (best of 16)
! sets
! wall 0.540448 comb 0.540000 user 0.540000 sys 0.000000 (best of 18)
! wall 0.805644 comb 0.810000 user 0.810000 sys 0.000000 (best of 13)
! mixed
! wall 0.651556 comb 0.660000 user 0.660000 sys 0.000000 (best of 15)
! wall 0.781357 comb 0.780000 user 0.780000 sys 0.000000 (best of 13)
$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000
! gets
! wall 0.621014 comb 0.620000 user 0.620000 sys 0.000000 (best of 16)
! wall 0.615146 comb 0.620000 user 0.620000 sys 0.000000 (best of 17)
! inserts
! <not available>
! wall 0.698115 comb 0.700000 user 0.700000 sys 0.000000 (best of 15)
! sets
! wall 0.560247 comb 0.560000 user 0.560000 sys 0.000000 (best of 18)
! wall 0.832495 comb 0.830000 user 0.830000 sys 0.000000 (best of 12)
! mixed
! wall 0.686172 comb 0.680000 user 0.680000 sys 0.000000 (best of 15)
! wall 0.841359 comb 0.840000 user 0.840000 sys 0.000000 (best of 12)
We're still under 1us per insert, which seems like reasonable
performance for a cache.
If we comment out updating of self.totalcost during insert(),
performance of insert() is identical to __setitem__ before. However,
I don't want to make total cost evaluation lazy because it has
significant performance implications for when we need to evaluate the
total cost at mutation time (it requires a cache traversal, which could
be expensive for large caches).
Differential Revision: https://phab.mercurial-scm.org/D4502
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 05 Sep 2018 23:15:20 -0700] rev 39566
util: add a popoldest() method to lrucachedict
This allows consumers to prune the oldest item from the cache. This
could be useful for e.g. a consumer that wishes for the size of
items tracked by the cache to remain under a high water mark.
Differential Revision: https://phab.mercurial-scm.org/D4501
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 11:40:20 -0700] rev 39565
util: ability to change capacity when copying lrucachedict
This will allow us to easily replace an lrucachedict with one
with a higher or lower capacity as consumers deem necessary.
IMO it is easier to just create a new cache instance than to
muck with the capacity of an existing cache. Mutating an existing
cache's capacity feels more prone to bugs.
Differential Revision: https://phab.mercurial-scm.org/D4500
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 11:37:27 -0700] rev 39564
util: make capacity a public attribute on lrucachedict
So others can query it. Useful for operations that may want to verify
the cache has capacity for N items before it performs an operation that
may cause cache eviction.
Differential Revision: https://phab.mercurial-scm.org/D4499
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 11:33:40 -0700] rev 39563
util: properly copy lrucachedict instances
Previously, copy() only worked if the cache was full. We teach
copy() to only copy defined nodes.
Differential Revision: https://phab.mercurial-scm.org/D4498
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 11:27:25 -0700] rev 39562
tests: rewrite test-lrucachedict.py to use unittest
This makes the code so much easier to test and debug.
Along the way, I discovered a bug in copy(), which I kind of
added test coverage for.
Differential Revision: https://phab.mercurial-scm.org/D4497
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 29 Aug 2018 15:17:11 -0700] rev 39561
wireprotov2peer: stream decoded responses
Previously, wire protocol version 2 would buffer all response data.
Only once all data was received did we CBOR decode it and resolve
the future associated with the command. This was obviously not
desirable. With the large response payloads introduced by upcoming commits,
this caused significant memory bloat and slowed down client operations due to
waiting on the server.
This commit refactors the response handling code so that response
data can be streamed.
Command response objects now contain a buffered CBOR decoder. As
new data arrives, it is fed into the decoder. Decoded objects are
made available to the generator as they are decoded.
Because there is a separate thread processing incoming frames and
feeding data into the response object, there is the potential for
race conditions when mutating response objects. So a lock has been
added to guard access to critical state variables.
Because the generator emitting decoded objects needs to wait on
those objects to become available, we've added an Event for the
generator to wait on so it doesn't busy loop. This does mean
there is the potential for deadlocks. And I'm pretty sure they can
occur in some scenarios. We already have a handful of TODOs around
this. But I've added some more. Fixing this will likely require
moving the background thread receiving frames into clienthandler.
We likely would have done this anyway when implementing the client
bits for the SSH transport.
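A self-contained sketch of that arrangement (not the actual wireprotov2peer
code; the CBOR decoding is stubbed out): a response object fed from the
frame-receiving thread, with a lock guarding shared state and an Event so the
consuming generator can wait without busy looping.
  import threading
  class streamingresponse(object):
      def __init__(self):
          self._lock = threading.Lock()
          self._event = threading.Event()
          self._pending = []
          self._done = False
      def _onobjectsdecoded(self, objs, done=False):
          # called from the frame-receiving thread as objects are decoded
          with self._lock:
              self._pending.extend(objs)
              self._done = self._done or done
          self._event.set()
      def objects(self):
          # Generator run by the command caller; yields objects as they
          # arrive instead of waiting for the complete response.
          while True:
              self._event.wait()
              with self._lock:
                  objs, self._pending = self._pending, []
                  done = self._done
                  if not done:
                      self._event.clear()
              for obj in objs:
                  yield obj
              if done and not objs:
                  return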
Test output changes because the initial CBOR map holding the overall
response state is now always handled internally by the response
object.
Differential Revision: https://phab.mercurial-scm.org/D4474
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 29 Aug 2018 16:43:17 -0700] rev 39560
wireprotoframing: buffer emitted data to reduce frame count
An upcoming commit introduces a wire protocol command that can emit
hundreds of thousands of small objects. Without a buffering layer,
we would emit a single, small frame for every object. Performance
profiling revealed this to be a source of significant overhead for
both client and server.
This commit introduces a very crude buffering layer so that we emit
fewer, bigger frames in such a scenario. This code will likely get
rewritten in the future to be part of the streams API, as we'll
need a similar strategy for compressing data. I don't want to think
about it too much at the moment though.
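A hedged sketch of such a buffering layer (names are illustrative, not the
actual wireprotoframing code): accumulate small payloads and only emit a frame
once the buffered data passes a size threshold.
  class bufferingemitter(object):
      def __init__(self, emitframe, chunksize=64 * 1024):
          self._emitframe = emitframe    # callable taking one bytes payload
          self._chunksize = chunksize
          self._chunks = []
          self._size = 0
      def write(self, data):
          self._chunks.append(data)
          self._size += len(data)
          if self._size >= self._chunksize:
              self.flush()
      def flush(self):
          if self._chunks:
              self._emitframe(b''.join(self._chunks))
              self._chunks = []
              self._size = 0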
server
before: user 32.500+0.000 sys 1.160+0.000
after: user 20.230+0.010 sys 0.180+0.000
client
before: user 133.400+0.000 sys 93.120+0.000
after: user 68.370+0.000 sys 32.950+0.000
This appears to indicate we have significant overhead in the frame
processing code on both client and server. It might be worth profiling
that at some point...
Differential Revision: https://phab.mercurial-scm.org/D4473