changeset 30303:1f92056c4066
revlog: optimize _chunkraw when startrev==endrev
In many cases, _chunkraw() is called with startrev==endrev. When
this is true, we can avoid an extra index lookup and some other
minor operations.
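For illustration, here is a minimal sketch of the two code paths (the helper names are hypothetical, not Mercurial APIs; the index entry layout follows the diff below, where entry[0] packs the data offset above the flag bits and entry[1] holds the compressed chunk length):

    # Sketch only: segment_bounds_old/segment_bounds_new are illustrative
    # names. Each index entry packs (offset << 16 | flags) in field 0 and
    # the compressed chunk length in field 1, as in the diff below.

    def segment_bounds_old(index, startrev, endrev):
        # Always two index lookups, even when startrev == endrev.
        istart = index[startrev]
        iend = index[endrev]
        start = int(istart[0] >> 16)
        end = int(iend[0] >> 16) + iend[1]
        return start, end

    def segment_bounds_new(index, startrev, endrev):
        # Single lookup in the common startrev == endrev case.
        istart = index[startrev]
        start = int(istart[0] >> 16)
        if startrev == endrev:
            end = start + istart[1]
        else:
            iend = index[endrev]
            end = int(iend[0] >> 16) + iend[1]
        return start, end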
On the mozilla-unified repo, `hg perfrevlogchunks -c` says this
has the following impact:
! read w/ reused fd
! wall 0.371846 comb 0.370000 user 0.350000 sys 0.020000 (best of 27)
! wall 0.337930 comb 0.330000 user 0.300000 sys 0.030000 (best of 30)
! read batch w/ reused fd
! wall 0.014952 comb 0.020000 user 0.000000 sys 0.020000 (best of 197)
! wall 0.014866 comb 0.010000 user 0.000000 sys 0.010000 (best of 196)
So we've gone from ~25x slower than the batch read (0.372 / 0.015) to ~22.5x slower (0.338 / 0.015).
At this point, there's probably not much else we can do except implement an optimized function in the index itself, including in its C implementation.
author   | Gregory Szorc <gregory.szorc@gmail.com>
date     | Sun, 23 Oct 2016 10:40:33 -0700
parents  | ceddc3d94d74
children | 1a0c1ad57833
files    | mercurial/revlog.py
diffstat | 1 files changed, 5 insertions(+), 2 deletions(-)
--- a/mercurial/revlog.py	Sat Oct 22 15:41:23 2016 -0700
+++ b/mercurial/revlog.py	Sun Oct 23 10:40:33 2016 -0700
@@ -1113,9 +1113,12 @@
         # (functions are expensive).
         index = self.index
         istart = index[startrev]
-        iend = index[endrev]
         start = int(istart[0] >> 16)
-        end = int(iend[0] >> 16) + iend[1]
+        if startrev == endrev:
+            end = start + istart[1]
+        else:
+            iend = index[endrev]
+            end = int(iend[0] >> 16) + iend[1]

         if self._inline:
             start += (startrev + 1) * self._io.size
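As a quick sanity check that the shortcut computes the same bounds as the two-lookup path, here is a self-contained toy example (the offsets and lengths are invented; the entry layout matches the diff above):

    # Toy index: rev 0 occupies data bytes [0, 10), rev 1 bytes [10, 30).
    # entry = (offset << 16 | flags, compressed_length); values are made up.
    toy_index = [(0 << 16, 10), (10 << 16, 20)]

    rev = 1
    istart = toy_index[rev]
    start = int(istart[0] >> 16)

    # Old path: a second index lookup for the end revision.
    iend = toy_index[rev]
    end_old = int(iend[0] >> 16) + iend[1]

    # New path: reuse istart when startrev == endrev.
    end_new = start + istart[1]

    assert (start, end_old) == (start, end_new) == (10, 30)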