comparison mercurial/revlog.py @ 33207:895ecec31c70

revlog: add an experimental option to mitigate delta issues (issue5480)

The general delta heuristic used to select a delta base does not scale with the number of branches. The delta base is frequently too far away to reuse a chain according to the "distance" criteria. This leads to the insertion of larger deltas (or even full texts) that themselves push the bases for subsequent deltas further away, producing more large deltas and full texts. These full texts and the frequent recomputations throw Mercurial performance into disarray.

For example, in a slightly large repository with:

  280 000 files (2 150 000 versions)
  430 000 changesets (10 000 topological heads)

the numbers below compare the repository with and without the distance criteria:

  manifest size:
    with:    21.4 GB
    without:  0.3 GB

  store size:
    with:    28.7 GB
    without:  7.4 GB

  bundle of the last 15 000 revisions:
    with:    800 seconds, 971 MB
    without:  50 seconds,  73 MB

  unbundle time (of the last 15K revisions):
    with:    1150 seconds (~19 minutes)
    without:   35 seconds

Similar issues have been observed in other repositories.

Adding a new option or "feature" on stable is uncommon. However, given that this issue makes Mercurial practically unusable, I'm exceptionally targeting this patch at stable.

What is actually needed is a full rework of the delta building and reading logic. However, that will be a longer process, with churn not suitable for stable. In the meantime, we introduce a quick and dirty mitigation in the 'experimental' config space.

The new option provides a way to set the maximum amount of memory usable to store a diff in memory. This extends Mercurial's ability to create chains without removing all safeguards regarding memory access. The option should be phased out once core has a proper solution available. Setting the limit to '0' removes all limits; setting it to '-1' uses the default limit (textsize x 4).
author Pierre-Yves David <pierre-yves.david@octobus.net>
date Fri, 23 Jun 2017 13:49:34 +0200
parents 6d678ab1b10d
children 85d1ac011582
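The mitigation in the patch below can be sketched as a standalone function. This is a minimal illustration, not the real implementation: the names `isgooddelta`, `deltalen`, and the explicit parameters are illustrative, whereas the actual code lives in a revlog method and reads `self._maxdeltachainspan` set from the opener options.

```python
# Sketch of the revised delta-acceptance check (illustrative names; the
# real code is a revlog method using self._maxdeltachainspan).
def isgooddelta(dist, deltalen, textlen, chainlen, compresseddeltalen,
                maxdeltachainspan=-1, maxchainlen=None):
    """Return True if a candidate delta is acceptable.

    maxdeltachainspan semantics, per the commit message:
      0  -> no span limit at all
      -1 -> default limit (textlen * 4)
      n  -> span capped at max(n, textlen * 4)
    """
    defaultmax = textlen * 4
    maxdist = maxdeltachainspan
    if not maxdist:
        # 0 disables the check: make maxdist equal to dist so the
        # distance comparison below can never trigger
        maxdist = dist
    maxdist = max(maxdist, defaultmax)
    if (dist > maxdist or deltalen > textlen or
            compresseddeltalen > textlen * 2 or
            (maxchainlen and chainlen > maxchainlen)):
        return False
    return True
```

Note how `-1` (or any negative value) falls through to `defaultmax` via `max()`, preserving the historical `textlen * 4` bound, while a large positive value relaxes the distance criteria without touching the size and chain-length safeguards.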
comparison
33206:45d6e2767a93 vs 33207:895ecec31c70
@@ -290,10 +290,11 @@
         self._pcache = {}
         # Mapping of revision integer to full node.
         self._nodecache = {nullid: nullrev}
         self._nodepos = None
         self._compengine = 'zlib'
+        self._maxdeltachainspan = -1

         v = REVLOG_DEFAULT_VERSION
         opts = getattr(opener, 'options', None)
         if opts is not None:
             if 'revlogv2' in opts:
@@ -311,10 +312,12 @@
             if 'aggressivemergedeltas' in opts:
                 self._aggressivemergedeltas = opts['aggressivemergedeltas']
             self._lazydeltabase = bool(opts.get('lazydeltabase', False))
             if 'compengine' in opts:
                 self._compengine = opts['compengine']
+            if 'maxdeltachainspan' in opts:
+                self._maxdeltachainspan = opts['maxdeltachainspan']

         if self._chunkcachesize <= 0:
             raise RevlogError(_('revlog chunk cache size %r is not greater '
                                 'than 0') % self._chunkcachesize)
         elif self._chunkcachesize & (self._chunkcachesize - 1):
@@ -1657,11 +1660,17 @@
         # - 'dist' is the distance from the base revision -- bounding it limits
         #   the amount of I/O we need to do.
         # - 'compresseddeltalen' is the sum of the total size of deltas we need
         #   to apply -- bounding it limits the amount of CPU we consume.
         dist, l, data, base, chainbase, chainlen, compresseddeltalen = d
-        if (dist > textlen * 4 or l > textlen or
+
+        defaultmax = textlen * 4
+        maxdist = self._maxdeltachainspan
+        if not maxdist:
+            maxdist = dist # ensure the conditional pass
+        maxdist = max(maxdist, defaultmax)
+        if (dist > maxdist or l > textlen or
             compresseddeltalen > textlen * 2 or
             (self._maxchainlen and chainlen > self._maxchainlen)):
             return False

         return True
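For completeness, the new knob would presumably be set from an hgrc file. The section and option name below are inferred from the commit message ('experimental' config space) and the opener option key in the patch; treat the exact spelling as an assumption rather than documented configuration.

```ini
[experimental]
# 0 removes the span limit entirely; -1 keeps the default (textsize * 4)
maxdeltachainspan = 0
```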