comparison mercurial/revlog.py @ 30289:1f92056c4066

revlog: optimize _chunkraw when startrev==endrev

In many cases, _chunkraw() is called with startrev==endrev. When this
is true, we can avoid an extra index lookup and some other minor
operations.

On the mozilla-unified repo, `hg perfrevlogchunks -c` says this has the
following impact:

! read w/ reused fd
! wall 0.371846 comb 0.370000 user 0.350000 sys 0.020000 (best of 27)
! wall 0.337930 comb 0.330000 user 0.300000 sys 0.030000 (best of 30)
! read batch w/ reused fd
! wall 0.014952 comb 0.020000 user 0.000000 sys 0.020000 (best of 197)
! wall 0.014866 comb 0.010000 user 0.000000 sys 0.010000 (best of 196)

So, we've gone from ~25x slower than batch to ~22.5x slower. At this
point, there's probably not much else we can do except implement an
optimized function in the index itself, including in C.
author Gregory Szorc <gregory.szorc@gmail.com>
date Sun, 23 Oct 2016 10:40:33 -0700
parents ceddc3d94d74
children 2ded17b64f09
comparison of mercurial/revlog.py, 30288:ceddc3d94d74 → 30289:1f92056c4066:

--- a/mercurial/revlog.py
+++ b/mercurial/revlog.py
@@ -1111,13 +1111,16 @@
         """
         # Inlined self.start(startrev) & self.end(endrev) for perf reasons
         # (functions are expensive).
         index = self.index
         istart = index[startrev]
-        iend = index[endrev]
         start = int(istart[0] >> 16)
-        end = int(iend[0] >> 16) + iend[1]
+        if startrev == endrev:
+            end = start + istart[1]
+        else:
+            iend = index[endrev]
+            end = int(iend[0] >> 16) + iend[1]

         if self._inline:
             start += (startrev + 1) * self._io.size
             end += (endrev + 1) * self._io.size
         length = end - start
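The optimization above can be sketched in isolation. The snippet below is a hedged, self-contained stand-in for the revlog index: each entry packs the chunk's byte offset into the high bits of field 0 (hence the `>> 16`) and stores the chunk length in field 1, matching the field accesses in the patch. The helper names (`make_entry`, `chunk_span`) are illustrative, not Mercurial's actual API.

```python
def make_entry(offset, length):
    # Mimic the revlog index packing used in the patch: the byte offset
    # lives in bits 16 and up of field 0; field 1 is the chunk length.
    return (offset << 16, length)

def chunk_span(index, startrev, endrev):
    """Return (start, length) of the byte span covering startrev..endrev."""
    istart = index[startrev]
    start = int(istart[0] >> 16)
    if startrev == endrev:
        # Fast path: one index lookup instead of two, as in the patch.
        end = start + istart[1]
    else:
        iend = index[endrev]
        end = int(iend[0] >> 16) + iend[1]
    return start, end - start

# Three chunks stored back to back at offsets 0, 10, 30.
index = [make_entry(0, 10), make_entry(10, 20), make_entry(30, 5)]
print(chunk_span(index, 1, 1))  # (10, 20) via the single-lookup fast path
print(chunk_span(index, 0, 2))  # (0, 35) via the two-lookup general path
```

Both paths return the same answer for `startrev == endrev`; the win is purely in skipping the second `index[endrev]` lookup, which matters because `_chunkraw` sits on a hot path.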