comparison mercurial/revlog.py @ 29830:92ac2baaea86

revlog: use an LRU cache for delta chain bases

Profiling using statprof revealed a hotspot during changegroup application
calculating delta chain bases on generaldelta repos. Essentially,
revlog._addrevision() was performing a lot of redundant work tracing the
delta chain as part of determining when the chain distance was acceptable.
This was most pronounced when adding revisions to manifests, which can have
delta chains thousands of revisions long.

There was a delta chain base cache on revlogs before, but it only captured a
single revision. This was acceptable before generaldelta, when _addrevision
would build deltas from the previous revision and thus we'd pretty much
guarantee a cache hit when resolving the delta chain base on a subsequent
_addrevision call. However, it isn't suitable for generaldelta because parent
revisions aren't necessarily the last processed revision.

This patch converts the delta chain base cache to an LRU dict cache. The
cache can hold multiple entries, so generaldelta repos have a higher chance
of getting a cache hit.

The impact of this change when processing changegroup additions is
significant. On a generaldelta conversion of the "mozilla-unified" repo
(which contains heads of the main Firefox repositories in chronological
order - this means there are lots of transitions between heads in revlog
order), this change has the following impact when performing an
`hg unbundle` of an uncompressed bundle of the repo:

before: 5:42 CPU time
after:  4:34 CPU time

Most of this time is saved when applying the changelog and manifest revlogs:

before: 2:30 CPU time
after:  1:17 CPU time

That's nearly a 50% reduction in CPU time applying changesets and manifests!

Applying a gzipped bundle of the same repo (effectively simulating a
`hg clone` over HTTP) showed a similar speedup:

before: 5:53 CPU time
after:  4:46 CPU time

Wall time improvements were basically the same as CPU time.

I didn't measure explicitly, but it feels like most of the time is saved when
processing manifests. This makes sense, as large manifests tend to have very
long delta chains and thus benefit the most from this cache.

So, this change effectively makes changegroup application (which is used by
`hg unbundle`, `hg clone`, `hg pull`, `hg unshelve`, and various other
commands) significantly faster when delta chains are long (which can happen
on repos with large numbers of files and thus large manifests).

In theory, this change can result in more memory utilization. However, we're
caching a dict of ints. At most we have 200 ints + Python object overhead per
revlog. And, the cache is really only populated when performing read-heavy
operations, such as adding changegroups or scanning an individual revlog. For
memory bloat to be an issue, we'd need to scan/read several revisions from
several revlogs all while having active references to several revlogs. I
don't think there are many operations that do this, so I don't think memory
bloat from the cache will be an issue.
author Gregory Szorc <gregory.szorc@gmail.com>
date Mon, 22 Aug 2016 21:48:50 -0700
parents dae97049345b
children b5e5ddf48bd2
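
For context, the sketch below illustrates what the commit message calls
"tracing the delta chain": resolving a revision's chain base means following
base pointers until reaching a revision that is its own base, which is linear
in the chain length, and the patch memoizes the answer in a small LRU
mapping. The toy index, the chainbasecache class, and the chainbase helper
are illustrative names only, not Mercurial's actual structures; the real code
uses the revlog index and util.lrucachedict, as shown in the diff below.

# Illustrative sketch only: a toy index and an OrderedDict-backed LRU mapping.
# Mercurial's real code uses the revlog index and util.lrucachedict.
import collections

class chainbasecache(object):
    """Bounded rev -> chain base mapping with least-recently-used eviction."""

    def __init__(self, maxsize=100):
        self._d = collections.OrderedDict()
        self._maxsize = maxsize

    def get(self, rev):
        base = self._d.get(rev)
        if base is not None:
            # refresh recency on a hit
            del self._d[rev]
            self._d[rev] = base
        return base

    def __setitem__(self, rev, base):
        if rev in self._d:
            del self._d[rev]
        elif len(self._d) >= self._maxsize:
            # evict the least recently used entry
            self._d.popitem(last=False)
        self._d[rev] = base

def chainbase(index, cache, rev):
    """Resolve the base of rev's delta chain, consulting the cache first.

    index[r] names the revision r delta-chains against; a full snapshot
    points at itself (the same convention as index[rev][3] in revlog.py).
    """
    base = cache.get(rev)
    if base is not None:
        return base
    r = rev
    base = index[r]
    while base != r:            # walk until a revision that is its own base
        r = base
        base = index[r]
    cache[rev] = base
    return base

# Revisions 3 and 4 both chain back to the full snapshot at revision 0.
index = {0: 0, 1: 0, 2: 1, 3: 2, 4: 2}
cache = chainbasecache(maxsize=100)
assert chainbase(index, cache, 4) == 0   # walks 4 -> 2 -> 1 -> 0
assert chainbase(index, cache, 4) == 0   # second call is a cache hit

On a manifest whose delta chain is thousands of revisions long, every
uncached call repeats that walk, which is the redundant work the profile
exposed.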
diff -r dae97049345b -r 92ac2baaea86 mercurial/revlog.py
--- a/mercurial/revlog.py
+++ b/mercurial/revlog.py
@@ -223,13 +223,12 @@
         self.indexfile = indexfile
         self.datafile = indexfile[:-2] + ".d"
         self.opener = opener
         # 3-tuple of (node, rev, text) for a raw revision.
         self._cache = None
-        # 2-tuple of (rev, baserev) defining the base revision the delta chain
-        # begins at for a revision.
-        self._basecache = None
+        # Maps rev to chain base rev.
+        self._chainbasecache = util.lrucachedict(100)
         # 2-tuple of (offset, data) of raw data from the revlog at an offset.
         self._chunkcache = (0, '')
         # How much data to read and cache into the raw revlog data cache.
         self._chunkcachesize = 65536
         self._maxchainlen = None
@@ -338,11 +337,11 @@
         except KeyError:
             return False

     def clearcaches(self):
         self._cache = None
-        self._basecache = None
+        self._chainbasecache.clear()
         self._chunkcache = (0, '')
         self._pcache = {}

         try:
             self._nodecache.clearcaches()
@@ -388,15 +387,21 @@
     def end(self, rev):
         return self.start(rev) + self.length(rev)
     def length(self, rev):
         return self.index[rev][1]
     def chainbase(self, rev):
+        base = self._chainbasecache.get(rev)
+        if base is not None:
+            return base
+
         index = self.index
         base = index[rev][3]
         while base != rev:
             rev = base
             base = index[rev][3]
+
+        self._chainbasecache[rev] = base
         return base
     def chainlen(self, rev):
         return self._chaininfo(rev)[0]

     def _chaininfo(self, rev):
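
The new chainbase() follows a check-then-fill pattern: consult the cache,
fall back to the index walk on a miss, and record the result. The snippet
below exercises only the cache operations the patch itself uses
(construction with a capacity, get(), item assignment, and clear()); it
assumes util.lrucachedict behaves as a size-bounded mapping whose get()
returns None on a miss, as the "if base is not None" check implies, and it
runs only where the mercurial package is importable.

# Sketch only: exercises just the cache operations the patch uses.
# Assumes util.lrucachedict is a size-bounded mapping whose get() returns
# None on a miss; requires the mercurial package to be importable.
from mercurial import util

cache = util.lrucachedict(100)     # holds at most 100 rev -> base entries
cache[5] = 0                       # record that rev 5's chain starts at rev 0
assert cache.get(5) == 0           # later lookups skip the index walk
cache.get(42)                      # an uncached rev: chainbase() falls back
                                   # to walking index[rev][3]
cache.clear()                      # dropped wholesale by clearcaches()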
@@ -1428,14 +1433,11 @@
                     fh = dfh
                 ptext = self.revision(self.node(rev), _df=fh)
                 delta = mdiff.textdiff(ptext, t)
             data = self.compress(delta)
             l = len(data[1]) + len(data[0])
-            if basecache[0] == rev:
-                chainbase = basecache[1]
-            else:
-                chainbase = self.chainbase(rev)
+            chainbase = self.chainbase(rev)
             dist = l + offset - self.start(chainbase)
             if self._generaldelta:
                 base = rev
             else:
                 base = chainbase
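
Replacing the "if basecache[0] == rev" fast path with a plain
self.chainbase(rev) call works because the lookup now happens inside
chainbase() against the multi-entry cache. The toy simulation below, with a
made-up sequence of delta parents, illustrates why the old single-entry
scheme misses under generaldelta while a mapping keyed by rev still hits;
the names and counts are illustrative only.

# Toy simulation with a made-up sequence of delta parents; counts are
# illustrative only.
parents = [10, 11, 10, 12, 11, 10]   # generaldelta: bases alternate

single = None              # old scheme: remembers only the last (rev, base)
single_hits = 0
for rev in parents:
    if single is not None and single[0] == rev:
        single_hits += 1
    single = (rev, rev)    # pretend the resolved base equals rev

multi = {}                 # new scheme: a bounded mapping keyed by rev
multi_hits = 0
for rev in parents:
    if rev in multi:
        multi_hits += 1
    multi[rev] = rev

print(single_hits, multi_hits)   # 0 vs. 3 hits on this sequence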
@@ -1446,13 +1448,10 @@

         curr = len(self)
         prev = curr - 1
         offset = self.end(prev)
         delta = None
-        if self._basecache is None:
-            self._basecache = (prev, self.chainbase(prev))
-        basecache = self._basecache
         p1r, p2r = self.rev(p1), self.rev(p2)

         # full versions are inserted when the needed deltas
         # become comparable to the uncompressed text
         if text is None:
@@ -1512,11 +1511,11 @@
         if alwayscache and text is None:
             text = buildtext()

         if type(text) == str: # only accept immutable objects
             self._cache = (node, curr, text)
-        self._basecache = (curr, chainbase)
+        self._chainbasecache[curr] = chainbase
         return node

     def _writeentry(self, transaction, ifh, dfh, entry, data, link, offset):
         # Files opened in a+ mode have inconsistent behavior on various
         # platforms. Windows requires that a file positioning call be made