view mercurial/pure/mpatch.py @ 23785:cb99bacb9b4e

branchcache: introduce revbranchcache for caching of revision branch names

It is expensive to retrieve the branch name of a revision. Very expensive when creating a changectx and calling .branch() every time, slightly less so when using changelog.branchinfo().

Now, to speed things up, provide a way to cache the results on disk in an efficient format. Each branch name is assigned a number, and for each revision we store the number of the corresponding branch name. The branch names themselves are stored in a dedicated file which is strictly append-only.

Branch names are usually reused across several revisions, and the total list of branch names will thus be so small that it is feasible to read the whole set of names before using the cache. Reading the whole set does, however, mean that it might be more efficient to use the changelog when retrieving the branch info for just a single revision.

The revision entries are stored in another file. This file is usually append-only, but if the repository has been modified, the file will be truncated and the relevant parts rewritten on demand.

The entries for each revision are 8 bytes each, and the whole revision file will thus be 1/8 the size of 00changelog.i. Each revision entry contains the first 4 bytes of the corresponding node hash, used as a checksum that is always verified before the entry is used. That check is relatively expensive, but it makes sure history modification is detected and handled correctly. It will also detect and handle most revision file corruptions. (A sketch of this record layout follows this header.)

This is just a cache. A new format can always be introduced if other requirements or ideas make that seem like a good idea. Rebuilding the cache is not really more expensive than it was to run, for example, 'hg log -b branchname' before this cache was introduced.

This new method is still unused but promises to make some operations several times faster once it actually is used.

Abandoning Python 2.4 would make it possible to implement this more efficiently by using struct classes and pack_into. The Python code could probably also be micro-optimized, or it could be implemented very efficiently in C, where it would be easy to control the data access.
author Mads Kiilerich <madski@unity3d.com>
date Thu, 08 Jan 2015 00:01:03 +0100
parents 525fdb738975
children 9a17576103a4
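
A hedged sketch of the 8-byte revision record described in the commit message above. The packing matches the description (4-byte node prefix plus a branch name index); the helper names are illustrative, not the cache's actual code:

    import struct

    rbcrecfmt = ">4sI"  # assumed: 4-byte node-hash prefix + branch name index
    assert struct.calcsize(rbcrecfmt) == 8

    def packrecord(node, branchidx):
        # the node prefix doubles as a checksum against history rewrites
        return struct.pack(rbcrecfmt, node[:4], branchidx)

    def unpackrecord(record, node):
        # verify the stored prefix before trusting the cached branch index
        prefix, branchidx = struct.unpack(rbcrecfmt, record)
        return branchidx if prefix == node[:4] else None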

# mpatch.py - Python implementation of mpatch.c
#
# Copyright 2009 Matt Mackall <mpm@selenic.com> and others
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

import struct
try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO

# This attempts to apply a series of patches in time proportional to
# the total size of the patches, rather than patches * len(text). This
# means rather than shuffling strings around, we shuffle around
# pointers to fragments with fragment lists.
#
# When the fragment lists get too long, we collapse them. To do this
# efficiently, we do all our operations inside a single buffer (an
# mmap'ed region in the C implementation; a StringIO here) and simply
# move bytes within it. This avoids creating a bunch of large
# temporary string buffers.
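
# For illustration (not part of the original code): a 10-byte text at
# offset 0 starts as the single fragment (10, 0). Replacing bytes 2..4
# with 3 new bytes that live at offset 30 turns the fragment list into
# (2, 0), (3, 30), (6, 4) -- no text is copied until the list is
# collapsed.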

def patches(a, bins):
    if not bins:
        return a

    plens = [len(x) for x in bins]
    pl = sum(plens)        # total size of all patches
    bl = len(a) + pl       # upper bound on the size of any working text
    tl = bl + bl + pl # enough for the patches and two working texts
    b1, b2 = 0, bl         # offsets of the two working texts in the buffer

    if not tl:
        return a

    m = StringIO()
    def move(dest, src, count):
        """move count bytes from src to dest

        The file pointer is left at the end of dest.
        """
        m.seek(src)
        buf = m.read(count)
        m.seek(dest)
        m.write(buf)

    # load our original text
    m.write(a)
    frags = [(len(a), b1)]

    # copy all the patches into our segment so we can memmove from them
    pos = b2 + bl
    m.seek(pos)
    for p in bins:
        m.write(p)

    # fragment lists hold (length, offset) pairs in reverse text order,
    # so the fragment at the front of the remaining text is popped first
    def pull(dst, src, l): # pull l bytes from src
        while l:
            f = src.pop()
            if f[0] > l: # do we need to split?
                src.append((f[0] - l, f[1] + l))
                dst.append((l, f[1]))
                return
            dst.append(f)
            l -= f[0]

    def collect(buf, list):
        """copy the fragments in list (reverse text order) to offset buf

        Returns the result as a single (length, offset) fragment.
        """
        start = buf
        for l, p in reversed(list):
            move(buf, p, l)
            buf += l
        return (buf - start, start)

    for plen in plens:
        # if our list gets too long, collapse it
        if len(frags) > 128:
            b2, b1 = b1, b2
            frags = [collect(b1, frags)]

        new = []
        end = pos + plen
        last = 0
        while pos < end:
            m.seek(pos)
            # hunk header: replace text[p1:p2] with the l bytes that follow
            p1, p2, l = struct.unpack(">lll", m.read(12))
            pull(new, frags, p1 - last) # what didn't change
            pull([], frags, p2 - p1)    # what got deleted
            new.append((l, pos + 12))   # what got added
            pos += l + 12
            last = p2
        frags.extend(reversed(new))     # what was left at the end

    t = collect(b2, frags)

    m.seek(t[1])
    return m.read(t[0])
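
# Usage sketch (illustrative, not part of the original module): each
# binary patch is a sequence of hunks, and each hunk is a big-endian
# (start, end, newlength) header followed by newlength bytes of
# replacement data:
#
#   hunk = struct.pack(">lll", 6, 11, 5) + "there"
#   patches("hello world", [hunk])  ->  "hello there"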

def patchedsize(orig, delta):
    """scan a binary patch and return the size of the patched text

    orig is the length of the original text; the patch is not applied.
    """
    outlen, last, bin = 0, 0, 0
    binend = len(delta)
    data = 12 # offset of the first hunk's data

    while data <= binend:
        decode = delta[bin:bin + 12] # hunk header
        start, end, length = struct.unpack(">lll", decode)
        if start > end:
            break
        bin = data + length    # offset of the next hunk header
        data = bin + 12        # offset of the next hunk's data
        outlen += start - last # unchanged bytes before this hunk
        last = end
        outlen += length       # bytes added by this hunk

    if bin != binend:
        raise ValueError("patch cannot be decoded")

    outlen += orig - last
    return outlen
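
# Usage sketch (illustrative, not part of the original module):
# patchedsize() predicts the patched length without building the result:
#
#   hunk = struct.pack(">lll", 6, 11, 5) + "there"
#   patchedsize(len("hello world"), hunk)  ->  11 == len("hello there")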