tests/test-filelog.py
author Pierre-Yves David <pierre-yves.david@ens-lyon.org>
Mon, 23 Dec 2013 15:29:51 -0800
changeset 20207 cd62532c62a1
parent 17486 73e3e368bd42
child 20684 2761a791b113
permissions -rwxr-xr-x
obsolete: order of magnitude speedup in _computebumpedset

Reminder: a changeset is said to be "bumped" if it tries to obsolete an
immutable changeset.

The previous algorithm for computing bumped changesets was:

1) Get all public changesets
2) Find all their successors
3) Search for those eligible to be "bumped" (mutable and non-obsolete)

The entry size of this algorithm is `O(len(public))`, which is mostly the
same as `O(len(repo))`. Even though this approach means fewer obsolescence
markers are traversed, it is not very scalable.

The new algorithm is:

1) For each potential bumped changeset (non-obsolete, mutable)
2) iterate over its precursors
3) if a precursor is public, the changeset is bumped

We traverse more obsolescence markers, but the entry size is much smaller,
since the number of potential bumped changesets should remain mostly stable
over time: `O(1)`.

On one confidential gigantic repo this moves the bumped computation from
15.19s to 0.46s (×33 speedup…). On "smaller" repos (mercurial, cubicweb's
review) no significant gain was seen; the additional traversal of
obsolescence markers probably counterbalances the advantage there.

Other optimisations could be done in the future (e.g. sharing the precursors
cache with divergence detection).
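
As a rough illustration of the new approach (a self-contained model, not the
actual Mercurial implementation: the repository is reduced to plain sets plus
a precursor mapping, and all names here are assumptions):

def compute_bumped(revs, public, obsolete, precursors):
    """Return the set of "bumped" revisions.

    revs       - all revisions in the repo
    public     - the immutable (public) revisions
    obsolete   - the obsolete revisions
    precursors - maps a revision to the revisions it rewrites
    """
    bumped = set()
    # 1) only walk the candidates (mutable, non-obsolete); this set stays
    #    small and roughly stable as the repository grows
    for rev in revs - public - obsolete:
        # 2) iterate over the (transitive) precursors of the candidate
        stack = list(precursors.get(rev, []))
        seen = set(stack)
        while stack:
            prec = stack.pop()
            # 3) a public precursor means the candidate is bumped
            if prec in public:
                bumped.add(rev)
                break
            for p in precursors.get(prec, []):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
    return bumped

# rev 2 rewrites public rev 1, so it is bumped; rev 4 rewrites draft rev 3
assert compute_bumped(set([1, 2, 3, 4]), set([1]), set([3]),
                      {2: [1], 4: [3]}) == set([2])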

#!/usr/bin/env python
"""
Tests the behaviour of filelog w.r.t. data starting with '\1\n'
"""
from mercurial import ui, hg
from mercurial.node import nullid, hex

myui = ui.ui()
repo = hg.repository(myui, path='.', create=True)

fl = repo.file('foobar')

def addrev(text, renamed=False):
    if renamed:
        # data doesn't matter. Just make sure filelog.renamed() returns True
        meta = dict(copyrev=hex(nullid), copy='bar')
    else:
        meta = {}

    lock = t = None
    try:
        lock = repo.lock()
        t = repo.transaction('commit')
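        # filelog.add(text, meta, transaction, linkrev, p1, p2)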
        node = fl.add(text, meta, t, 0, nullid, nullid)
        return node
    finally:
        if t:
            t.close()
        if lock:
            lock.release()

def error(text):
    print 'ERROR: ' + text

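# filelog stores copy metadata between two '\1\n' markers at the start of a
# revision, so file data that itself begins with '\1\n' has to be escaped;
# these checks verify that the escaping round-trips correctly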
textwith = '\1\nfoo'
without = 'foo'

node = addrev(textwith)
if fl.read(node) != textwith:
    error('filelog.read for data starting with \\1\\n')
if fl.cmp(node, textwith) or not fl.cmp(node, without):
    error('filelog.cmp for data starting with \\1\\n')
if fl.size(0) != len(textwith):
    error('FIXME: This is a known failure of filelog.size for data starting '
        'with \\1\\n')

node = addrev(textwith, renamed=True)
if fl.read(node) != textwith:
    error('filelog.read for a renaming + data starting with \\1\\n')
if fl.cmp(node, textwith) or not fl.cmp(node, without):
    error('filelog.cmp for a renaming + data starting with \\1\\n')
if fl.size(1) != len(textwith):
    error('filelog.size for a renaming + data starting with \\1\\n')

print 'OK.'