view mercurial/pure/mpatch.py @ 35218:d61f2a3d5e53

hgweb: only include graph-related data in jsdata variable on /graph pages (BC)

Historically, the client-side graph code did not only render the graph itself; it also added all of the changeset information to the page. That meant the JavaScript code had to construct valid HTML as a string (although proper escaping was done server-side). It wasn't too clunky, even though a lot of server-side logic was duplicated client-side for no good reason, but the worst thing about it was the data format it used. It was somewhat future-proof, but not human-friendly, because it was just a tuple: it was possible to append things to it (as was done in e.g. 270f57d35525), but you then had to remember the indices, and reading the resulting JS code wasn't easy, because cur[8] is not descriptive at all.

So what would need to happen for the graph to gain more features, such as more changeset information or a different vertex style (branch-closing, obsolete)? First you'd need to take some property, process it (e.g. escape it and pass it through a templatefilters function, minding the encoding too), append it to jsdata and remember its index, then add nearly identical JavaScript code to 4 different hgweb themes that use jsdata to render HTML, and finally try to forget how brittle it all felt. On top of that, the indices reach double digits if we add just 2 more items, say phase and obsolescence, and there are more to come. Rendering a vertex in a different style would need yet another property (say, the character "o", "_", or "x"), except that, to stay backwards-compatible, it would have to go after tags and bookmarks, and that just doesn't feel right.

So here I'm trying to fix both the duplication of code and the data format:

- changesets will be rendered by hgweb templates the same way as on changelog and similar pages, so jsdata won't need any information that isn't needed for rendering the graph itself
- jsdata will be a dict (an Object in JS), which is much nicer for humans and more future-proof in the long run, because it doesn't use numeric indices

What about hgweb themes? Obviously, this will break all hgweb themes that render the graph in JavaScript, including 3rd-party custom ones. But it will also reduce the size of the client-side code and make it more uniform, so that it can be shared across hgweb themes, reducing its size further. The next few patches demonstrate that it's not hard to adapt a theme to these changes. And in a later series, I'm planning to move the duplicated JS code from */graph.tmpl to mercurial.js and leave only 4 lines of code embedded in those <script> elements, and even those would exist only to allow redefining the graph.vertex function. So adapting a custom 3rd-party theme to these changes would mean:

- creating or copying graphnode.tmpl and adding it to the map file (if the theme doesn't already use __base__)
- modifying one line in graph.tmpl and simply removing the bigger part of the JavaScript code there

Making these changes in this patch without updating every hgweb theme that uses jsdata at the same time is a bit of a cheat to keep this series manageable: /graph pages that use jsdata are broken by this patch, but since there are no tests that would detect it, bisect still works fine; the themes are updated separately, in the next 4 patches of this series, to ease reviewing.
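
To illustrate the data-format point above (the field names and values here are hypothetical, not the exact jsdata layout before or after this change), the server-side entries roughly go from positional tuples to dicts:

    # old style: one positional tuple per changeset; JS reads it as cur[0],
    # cur[8], ..., and every added field is one more magic index in every theme
    old_entry = ("1d22e65f027e", 0, (1, 0), [(0, 0, 1, -1, "")],
                 "commit summary", "a user", "3 days ago", "default", [], [])

    # new style: a dict, emitted as a JS Object, carrying only what the graph
    # renderer needs; client code reads fields by name (cur.node, cur.edges)
    new_entry = {
        "node": "1d22e65f027e",
        "vertex": 0,
        "color": 1,
        "edges": [(0, 0, 1, -1, "")],
    }
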
author Anton Shestakov <av6@dwimlabs.net>
date Fri, 01 Dec 2017 16:00:40 +0800
parents 5326e4ef1dab
children 644a02f6b34f
line source

# mpatch.py - Python implementation of mpatch.c
#
# Copyright 2009 Matt Mackall <mpm@selenic.com> and others
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import struct

from .. import pycompat
stringio = pycompat.stringio

class mpatchError(Exception):
    """error raised when a delta cannot be decoded
    """

# This attempts to apply a series of patches in time proportional to
# the total size of the patches, rather than patches * len(text). This
# means rather than shuffling strings around, we shuffle around
# pointers to fragments with fragment lists.
#
# When the fragment lists get too long, we collapse them. To do this
# efficiently, the C implementation does all of its work inside a
# buffer created by mmap and simply uses memmove; this pure-Python
# port uses a stringio buffer and seek/read/write (_move) instead.
# Either way, this avoids creating a bunch of large temporary string
# buffers.

def _pull(dst, src, l): # pull l bytes from src
    """move the first l bytes of fragment list src onto the end of dst

    Fragments are (length, offset) pairs; src is kept in reverse order,
    so pop() yields the leftmost fragment, which is split if only part
    of it is needed.
    """
    while l:
        f = src.pop()
        if f[0] > l: # do we need to split?
            src.append((f[0] - l, f[1] + l))
            dst.append((l, f[1]))
            return
        dst.append(f)
        l -= f[0]
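
# An illustrative sketch (not part of the original module; the values are
# made up): fragments are (length, offset) pairs and the source list is in
# reverse order, so pop() takes the leftmost fragment first.
#
#   frags = [(5, 6), (6, 0)]   # 6 bytes at offset 0, then 5 bytes at offset 6
#   taken = []
#   _pull(taken, frags, 8)     # take the first 8 bytes
#   # now taken == [(6, 0), (2, 6)] and frags == [(3, 8)]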

def _move(m, dest, src, count):
    """move count bytes from src to dest

    The file pointer is left at the end of dest.
    """
    m.seek(src)
    buf = m.read(count)
    m.seek(dest)
    m.write(buf)

def _collect(m, buf, list):
    """copy the fragments in list into one contiguous run starting at buf

    Returns the collected text as a single (length, offset) fragment.
    """
    start = buf
    for l, p in reversed(list):
        _move(m, buf, p, l)
        buf += l
    return (buf - start, start)
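
# An illustrative sketch (hypothetical buffer contents): if m[0:6] holds
# b'hello ' and m[76:85] holds b'mercurial', then
#   _collect(m, 32, [(9, 76), (6, 0)])
# copies the fragments in left-to-right order to offsets 32..46 and returns
# (15, 32), i.e. one fragment covering the assembled b'hello mercurial'.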

def patches(a, bins):
    """apply the list of binary deltas in bins to text a and return the result"""
    if not bins:
        return a

    plens = [len(x) for x in bins]
    pl = sum(plens)
    bl = len(a) + pl
    tl = bl + bl + pl # enough for the patches and two working texts
    b1, b2 = 0, bl

    if not tl:
        return a

    m = stringio()

    # load our original text
    m.write(a)
    frags = [(len(a), b1)]

    # copy all the patches into our segment so we can memmove from them
    pos = b2 + bl
    m.seek(pos)
    for p in bins:
        m.write(p)

    for plen in plens:
        # if our list gets too long, execute it
        if len(frags) > 128:
            b2, b1 = b1, b2
            frags = [_collect(m, b1, frags)]

        new = []
        end = pos + plen
        last = 0
        while pos < end:
            m.seek(pos)
            try:
                p1, p2, l = struct.unpack(">lll", m.read(12))
            except struct.error:
                raise mpatchError("patch cannot be decoded")
            _pull(new, frags, p1 - last) # what didn't change
            _pull([], frags, p2 - p1)    # what got deleted
            new.append((l, pos + 12))   # what got added
            pos += l + 12
            last = p2
        frags.extend(reversed(new))     # what was left at the end

    t = _collect(m, b2, frags)

    m.seek(t[1])
    return m.read(t[0])
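
# A minimal usage sketch (not part of the original module). Each hunk of a
# binary delta is a 12-byte big-endian header -- start, end, and length of
# the replacement data -- followed by that data; patches() splices the
# hunks into the original text:
#
#   orig = b'hello world'
#   new = b'mercurial'
#   delta = struct.pack(">lll", 6, 11, len(new)) + new  # replace orig[6:11]
#   patches(orig, [delta])          # -> b'hello mercurial'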

def patchedsize(orig, delta):
    """return the size a text of length orig will have after applying delta

    Only the hunk headers of the binary delta are examined; the hunk data
    itself is never touched.
    """
    outlen, last, bin = 0, 0, 0
    binend = len(delta)
    data = 12 # size of binary data header

    while data <= binend:
        decode = delta[bin:bin + 12]
        start, end, length = struct.unpack(">lll", decode)
        if start > end:
            break
        bin = data + length
        data = bin + 12
        outlen += start - last
        last = end
        outlen += length

    if bin != binend:
        raise mpatchError("patch cannot be decoded")

    outlen += orig - last
    return outlen
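
# Continuing the sketch above (hypothetical values): patchedsize() predicts
# the result length from the hunk headers alone, without copying any data:
#
#   delta = struct.pack(">lll", 6, 11, 9) + b'mercurial'
#   patchedsize(len(b'hello world'), delta)   # -> 15 == len(b'hello mercurial')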