Durham Goode <durham@fb.com> [Wed, 02 Nov 2016 17:33:31 -0700] rev 30290
manifest: throw LookupError if node not in revlog
When accessing a manifest via manifestlog[node], let's verify that the node
actually exists and throw a LookupError if it doesn't. This matches the old read
behavior, so we don't accidentally return invalid manifestctxs.
We do this in manifestlog instead of in the manifestctx/treemanifestctx
constructors because the treemanifest code currently relies on certain code
paths being able to produce treemanifests without touching the revlogs (it has
tests verifying that things work when certain revlogs are missing entirely, and
those tests would break if we added validation that tried to read them).
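For illustration, the check amounts to something like this (a minimal,
self-contained sketch with toy class names, not the verbatim patch; the real
code lives in mercurial/manifest.py and raises Mercurial's own
error.LookupError):

    class manifestctx(object):
        # Toy stand-in for the real manifestctx.
        def __init__(self, revlog, node):
            self._revlog = revlog
            self._node = node

    class manifestlog(object):
        def __init__(self, revlog):
            self._revlog = revlog

        def __getitem__(self, node):
            # Validate here rather than in the ctx constructors, so the
            # treemanifest code paths can still build ctx objects
            # without touching the revlogs.
            if node not in self._revlog.nodemap:
                raise LookupError('node %r not found in manifest' % (node,))
            return manifestctx(self._revlog, node)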
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 23 Oct 2016 10:40:33 -0700] rev 30289
revlog: optimize _chunkraw when startrev==endrev
In many cases, _chunkraw() is called with startrev==endrev. When
this is true, we can avoid an extra index lookup and some other
minor operations.
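The shape of the fast path is roughly the following (a simplified sketch, not
the verbatim change; it drops the inline-revlog offset handling and the
optional file handle argument, and assumes revlog.py's start()/end()/length()
and _getchunk() helpers):

    def _chunkraw(self, startrev, endrev):
        start = self.start(startrev)
        if startrev == endrev:
            # One revision: its compressed length is all we need, so
            # we skip the second index lookup end(endrev) would do.
            length = self.length(startrev)
        else:
            length = self.end(endrev) - start
        return self._getchunk(start, length)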
On the mozilla-unified repo, `hg perfrevlogchunks -c` says this
has the following impact (the first line of each pair is before, the second after):
! read w/ reused fd
! wall 0.371846 comb 0.370000 user 0.350000 sys 0.020000 (best of 27)
! wall 0.337930 comb 0.330000 user 0.300000 sys 0.030000 (best of 30)
! read batch w/ reused fd
! wall 0.014952 comb 0.020000 user 0.000000 sys 0.020000 (best of 197)
! wall 0.014866 comb 0.010000 user 0.000000 sys 0.010000 (best of 196)
So, we've gone from ~25x slower than batch to ~22.5x slower.
At this point, there's probably not much else we can do except
implement an optimized function in the index itself, including in the C
implementation.
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 22 Oct 2016 15:41:23 -0700] rev 30288
revlog: inline start() and end() for perf reasons
When I implemented `hg perfrevlogchunks`, one of the things that
stood out was that N individual _chunk() calls were ~38x slower than
a single _chunks() call. Specifically, on the mozilla-unified repo:
N*_chunk: 0.528997s
1*_chunks: 0.013735s
This repo has 352,097 changesets. So the average time per changeset
comes out to:
N*_chunk: 1.502us
1*_chunks: 0.039us
If you extrapolate these numbers to a repository with 1M changesets,
that comes out to 1.502s versus 0.039s, which is significant.
At these latencies, Python attribute lookups and function calls
matter. So, this patch inlines some code to cut down on that overhead.
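Concretely, the change replaces method calls with direct index tuple access,
along these lines (an illustrative sketch with hypothetical _chunk_slow and
_chunk_fast names, not the exact patch; the real _chunk() also decompresses
the data it reads):

    # Before: each revision pays for two method calls plus the
    # attribute lookups inside them.
    def start(self, rev):
        return int(self.index[rev][0] >> 16)

    def length(self, rev):
        return self.index[rev][1]

    def _chunk_slow(self, rev):
        return self._getchunk(self.start(rev), self.length(rev))

    # After: fetch the index entry once and index the tuple directly.
    def _chunk_fast(self, rev):
        entry = self.index[rev]
        offset = int(entry[0] >> 16)  # inlined start(rev)
        length = entry[1]             # inlined length(rev)
        return self._getchunk(offset, length)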
The impact of this patch on N*_chunk() calls is clear (before, then after):
! wall 0.528997 comb 0.520000 user 0.500000 sys 0.020000 (best of 19)
! wall 0.367723 comb 0.370000 user 0.350000 sys 0.020000 (best of 27)
So, we go from ~38x slower to ~27x. A nice improvement. But there's
still a long way to go.
It's worth noting that functionality like revsets performs changelog
lookups one revision at a time, so this code path is worth optimizing.
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 23 Oct 2016 09:34:55 -0700] rev 30287
revlog: reorder index accessors to match data structure order
Index entries are ordered tuples. We have accessors in the revlog
class to map tuple offsets to names. To help reinforce the order,
reorder the methods so they match the order of elements in the
tuple. While I'm here, also sneak in some minimal documentation.
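For reference, the tuple layout the accessors now mirror looks roughly like
this (field names paraphrased for illustration, not the authoritative format
definition):

    # index[rev] is an 8-tuple:
    #   [0] offset_flags  -- byte offset << 16 | per-revision flags
    #   [1] comp_length   -- compressed chunk length on disk
    #   [2] raw_length    -- uncompressed text length
    #   [3] chain_base    -- base revision of the delta chain
    #   [4] link_rev      -- linked changelog revision
    #   [5] p1_rev        -- first parent revision
    #   [6] p2_rev        -- second parent revision
    #   [7] node          -- node id (hash) of this revision
    def start(self, rev):    # element 0, high bits
        return int(self.index[rev][0] >> 16)

    def flags(self, rev):    # element 0, low 16 bits
        return self.index[rev][0] & 0xFFFF

    def length(self, rev):   # element 1
        return self.index[rev][1]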