view mercurial/graphmod.py @ 46582:b0a3ca02d17a

copies-rust: implement PartialEqual manually

Now that we know that each (dest, rev) pair has at most one unique CopySource,
we can simplify the comparison a lot. This "simple" step buys back a good share
of the previous slowdown in some cases:

Repo Case Source-Rev Dest-Rev # of revisions old time new time Difference Factor time per rev
---------------------------------------------------------------------------------------------
mozilla-try x00000_revs_x00000_added_x000_copies 9b2a99adc05e 8e29777b48e6 : 382065 revs, 43.304637 s, 34.443661 s, -8.860976 s, × 0.7954, 90 µs/rev

Full benchmark:

Repo Case Source-Rev Dest-Rev # of revisions old time new time Difference Factor time per rev
---------------------------------------------------------------------------------------------
mercurial x_revs_x_added_0_copies ad6b123de1c7 39cfcef4f463 : 1 revs, 0.000043 s, 0.000043 s, +0.000000 s, × 1.0000, 43 µs/rev
mercurial x_revs_x_added_x_copies 2b1c78674230 0c1d10351869 : 6 revs, 0.000114 s, 0.000117 s, +0.000003 s, × 1.0263, 19 µs/rev
mercurial x000_revs_x000_added_x_copies 81f8ff2a9bf2 dd3267698d84 : 1032 revs, 0.004937 s, 0.004892 s, -0.000045 s, × 0.9909, 4 µs/rev
pypy x_revs_x_added_0_copies aed021ee8ae8 099ed31b181b : 9 revs, 0.000339 s, 0.000196 s, -0.000143 s, × 0.5782, 21 µs/rev
pypy x_revs_x000_added_0_copies 4aa4e1f8e19a 359343b9ac0e : 1 revs, 0.000049 s, 0.000050 s, +0.000001 s, × 1.0204, 50 µs/rev
pypy x_revs_x_added_x_copies ac52eb7bbbb0 72e022663155 : 7 revs, 0.000202 s, 0.000117 s, -0.000085 s, × 0.5792, 16 µs/rev
pypy x_revs_x00_added_x_copies c3b14617fbd7 ace7255d9a26 : 1 revs, 0.000409 s, 0.000322 s, -0.000087 s, × 0.7873, 322 µs/rev
pypy x_revs_x000_added_x000_copies df6f7a526b60 a83dc6a2d56f : 6 revs, 0.011984 s, 0.011949 s, -0.000035 s, × 0.9971, 1991 µs/rev
pypy x000_revs_xx00_added_0_copies 89a76aede314 2f22446ff07e : 4785 revs, 0.050820 s, 0.050802 s, -0.000018 s, × 0.9996, 10 µs/rev
pypy x000_revs_x000_added_x_copies 8a3b5bfd266e 2c68e87c3efe : 6780 revs, 0.087953 s, 0.088090 s, +0.000137 s, × 1.0016, 12 µs/rev
pypy x000_revs_x000_added_x000_copies 89a76aede314 7b3dda341c84 : 5441 revs, 0.062902 s, 0.062079 s, -0.000823 s, × 0.9869, 11 µs/rev
pypy x0000_revs_x_added_0_copies d1defd0dc478 c9cb1334cc78 : 43645 revs, 0.679234 s, 0.635337 s, -0.043897 s, × 0.9354, 14 µs/rev
pypy x0000_revs_xx000_added_0_copies bf2c629d0071 4ffed77c095c : 2 revs, 0.013095 s, 0.013262 s, +0.000167 s, × 1.0128, 6631 µs/rev
pypy x0000_revs_xx000_added_x000_copies 08ea3258278e d9fa043f30c0 : 11316 revs, 0.120910 s, 0.120085 s, -0.000825 s, × 0.9932, 10 µs/rev
netbeans x_revs_x_added_0_copies fb0955ffcbcd a01e9239f9e7 : 2 revs, 0.000087 s, 0.000085 s, -0.000002 s, × 0.9770, 42 µs/rev
netbeans x_revs_x000_added_0_copies 6f360122949f 20eb231cc7d0 : 2 revs, 0.000107 s, 0.000110 s, +0.000003 s, × 1.0280, 55 µs/rev
netbeans x_revs_x_added_x_copies 1ada3faf6fb6 5a39d12eecf4 : 3 revs, 0.000186 s, 0.000177 s, -0.000009 s, × 0.9516, 59 µs/rev
netbeans x_revs_x00_added_x_copies 35be93ba1e2c 9eec5e90c05f : 9 revs, 0.000754 s, 0.000743 s, -0.000011 s, × 0.9854, 82 µs/rev
netbeans x000_revs_xx00_added_0_copies eac3045b4fdd 51d4ae7f1290 : 1421 revs, 0.010443 s, 0.010168 s, -0.000275 s, × 0.9737, 7 µs/rev
netbeans x000_revs_x000_added_x_copies e2063d266acd 6081d72689dc : 1533 revs, 0.015697 s, 0.015946 s, +0.000249 s, × 1.0159, 10 µs/rev
netbeans x000_revs_x000_added_x000_copies ff453e9fee32 411350406ec2 : 5750 revs, 0.063528 s, 0.062712 s, -0.000816 s, × 0.9872, 10 µs/rev
netbeans x0000_revs_xx000_added_x000_copies 588c2d1ced70 1aad62e59ddd : 66949 revs, 0.545515 s, 0.523832 s, -0.021683 s, × 0.9603, 7 µs/rev
mozilla-central x_revs_x_added_0_copies 3697f962bb7b 7015fcdd43a2 : 2 revs, 0.000089 s, 0.000090 s, +0.000001 s, × 1.0112, 45 µs/rev
mozilla-central x_revs_x000_added_0_copies dd390860c6c9 40d0c5bed75d : 8 revs, 0.000265 s, 0.000264 s, -0.000001 s, × 0.9962, 33 µs/rev
mozilla-central x_revs_x_added_x_copies 8d198483ae3b 14207ffc2b2f : 9 revs, 0.000381 s, 0.000187 s, -0.000194 s, × 0.4908, 20 µs/rev
mozilla-central x_revs_x00_added_x_copies 98cbc58cc6bc 446a150332c3 : 7 revs, 0.000672 s, 0.000665 s, -0.000007 s, × 0.9896, 95 µs/rev
mozilla-central x_revs_x000_added_x000_copies 3c684b4b8f68 0a5e72d1b479 : 3 revs, 0.003497 s, 0.003556 s, +0.000059 s, × 1.0169, 1185 µs/rev
mozilla-central x_revs_x0000_added_x0000_copies effb563bb7e5 c07a39dc4e80 : 6 revs, 0.073204 s, 0.071345 s, -0.001859 s, × 0.9746, 11890 µs/rev
mozilla-central x000_revs_xx00_added_0_copies 6100d773079a 04a55431795e : 1593 revs, 0.006482 s, 0.006551 s, +0.000069 s, × 1.0106, 4 µs/rev
mozilla-central x000_revs_x000_added_x_copies 9f17a6fc04f9 2d37b966abed : 41 revs, 0.005066 s, 0.005078 s, +0.000012 s, × 1.0024, 123 µs/rev
mozilla-central x000_revs_x000_added_x000_copies 7c97034feb78 4407bd0c6330 : 7839 revs, 0.065707 s, 0.065823 s, +0.000116 s, × 1.0018, 8 µs/rev
mozilla-central x0000_revs_xx000_added_0_copies 9eec5917337d 67118cc6dcad : 615 revs, 0.026800 s, 0.027050 s, +0.000250 s, × 1.0093, 43 µs/rev
mozilla-central x0000_revs_xx000_added_x000_copies f78c615a656c 96a38b690156 : 30263 revs, 0.203856 s, 0.202443 s, -0.001413 s, × 0.9931, 6 µs/rev
mozilla-central x00000_revs_x0000_added_x0000_copies 6832ae71433c 4c222a1d9a00 : 153721 revs, 1.293394 s, 1.261583 s, -0.031811 s, × 0.9754, 8 µs/rev
mozilla-central x00000_revs_x00000_added_x000_copies 76caed42cf7c 1daa622bbe42 : 204976 revs, 1.698239 s, 1.643869 s, -0.054370 s, × 0.9680, 8 µs/rev
mozilla-try x_revs_x_added_0_copies aaf6dde0deb8 9790f499805a : 2 revs, 0.000875 s, 0.000868 s, -0.000007 s, × 0.9920, 434 µs/rev
mozilla-try x_revs_x000_added_0_copies d8d0222927b4 5bb8ce8c7450 : 2 revs, 0.000891 s, 0.000887 s, -0.000004 s, × 0.9955, 443 µs/rev
mozilla-try x_revs_x_added_x_copies 092fcca11bdb 936255a0384a : 4 revs, 0.000292 s, 0.000168 s, -0.000124 s, × 0.5753, 42 µs/rev
mozilla-try x_revs_x00_added_x_copies b53d2fadbdb5 017afae788ec : 2 revs, 0.003939 s, 0.001160 s, -0.002779 s, × 0.2945, 580 µs/rev
mozilla-try x_revs_x000_added_x000_copies 20408ad61ce5 6f0ee96e21ad : 1 revs, 0.033027 s, 0.033016 s, -0.000011 s, × 0.9997, 33016 µs/rev
mozilla-try x_revs_x0000_added_x0000_copies effb563bb7e5 c07a39dc4e80 : 6 revs, 0.073703 s, 0.073312 s, -0.000391 s, × 0.9947, 12218 µs/rev
mozilla-try x000_revs_xx00_added_0_copies 6100d773079a 04a55431795e : 1593 revs, 0.006469 s, 0.006485 s, +0.000016 s, × 1.0025, 4 µs/rev
mozilla-try x000_revs_x000_added_x_copies 9f17a6fc04f9 2d37b966abed : 41 revs, 0.005278 s, 0.005494 s, +0.000216 s, × 1.0409, 134 µs/rev
mozilla-try x000_revs_x000_added_x000_copies 1346fd0130e4 4c65cbdabc1f : 6657 revs, 0.064995 s, 0.064879 s, -0.000116 s, × 0.9982, 9 µs/rev
mozilla-try x0000_revs_x_added_0_copies 63519bfd42ee a36a2a865d92 : 40314 revs, 0.301041 s, 0.301469 s, +0.000428 s, × 1.0014, 7 µs/rev
mozilla-try x0000_revs_x_added_x_copies 9fe69ff0762d bcabf2a78927 : 38690 revs, 0.285575 s, 0.297113 s, +0.011538 s, × 1.0404, 7 µs/rev
mozilla-try x0000_revs_xx000_added_x_copies 156f6e2674f2 4d0f2c178e66 : 8598 revs, 0.085597 s, 0.085890 s, +0.000293 s, × 1.0034, 9 µs/rev
mozilla-try x0000_revs_xx000_added_0_copies 9eec5917337d 67118cc6dcad : 615 revs, 0.027118 s, 0.027718 s, +0.000600 s, × 1.0221, 45 µs/rev
mozilla-try x0000_revs_xx000_added_x000_copies 89294cd501d9 7ccb2fc7ccb5 : 97052 revs, 2.119204 s, 2.048949 s, -0.070255 s, × 0.9668, 21 µs/rev
mozilla-try x0000_revs_x0000_added_x0000_copies e928c65095ed e951f4ad123a : 52031 revs, 0.701479 s, 0.685924 s, -0.015555 s, × 0.9778, 13 µs/rev
mozilla-try x00000_revs_x_added_0_copies 6a320851d377 1ebb79acd503 : 363753 revs, 4.482399 s, 4.482891 s, +0.000492 s, × 1.0001, 12 µs/rev
mozilla-try x00000_revs_x00000_added_0_copies dc8a3ca7010e d16fde900c9c : 34414 revs, 0.574082 s, 0.577633 s, +0.003551 s, × 1.0062, 16 µs/rev
mozilla-try x00000_revs_x_added_x_copies 5173c4b6f97c 95d83ee7242d : 362229 revs, 4.480366 s, 4.397816 s, -0.082550 s, × 0.9816, 12 µs/rev
mozilla-try x00000_revs_x000_added_x_copies 9126823d0e9c ca82787bb23c : 359344 revs, 4.369070 s, 4.370538 s, +0.001468 s, × 1.0003, 12 µs/rev
mozilla-try x00000_revs_x0000_added_x0000_copies 8d3fafa80d4b eb884023b810 : 192665 revs, 1.592506 s, 1.570439 s, -0.022067 s, × 0.9861, 8 µs/rev
mozilla-try x00000_revs_x00000_added_x0000_copies 1b661134e2ca 1ae03d022d6d : 228985 revs, 87.824489 s, 88.388512 s, +0.564023 s, × 1.0064, 386 µs/rev
mozilla-try x00000_revs_x00000_added_x000_copies 9b2a99adc05e 8e29777b48e6 : 382065 revs, 43.304637 s, 34.443661 s, -8.860976 s, × 0.7954, 90 µs/rev
private : 459513 revs, 33.853687 s, 27.370148 s, -6.483539 s, × 0.8085, 59 µs/rev

Differential Revision: https://phab.mercurial-scm.org/D9653
author Pierre-Yves David <pierre-yves.david@octobus.net>
date Wed, 16 Dec 2020 11:11:05 +0100
parents 9d2b2df2c2ba
children 6000f5b25c9b

# Revision graph generator for Mercurial
#
# Copyright 2008 Dirkjan Ochtman <dirkjan@ochtman.nl>
# Copyright 2007 Joel Rosdahl <joel@rosdahl.net>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""supports walking the history as DAGs suitable for graphical output

The most basic format we use is that of::

  (id, type, data, [parentids])

The node and parent ids are arbitrary integers which identify a node in the
context of the graph returned. Type is a constant specifying the node type.
Data depends on type.
"""

from __future__ import absolute_import

from .node import nullrev
from .thirdparty import attr
from . import (
    dagop,
    pycompat,
    smartset,
    util,
)

CHANGESET = b'C'
PARENT = b'P'
GRANDPARENT = b'G'
MISSINGPARENT = b'M'
# Style of line to draw. None signals a line that ends and is removed at this
# point. A number prefix means only the last N characters of the current block
# will use that style, the rest will use the PARENT style. Add a - sign
# (so making N negative) and all but the first N characters use that style.
EDGES = {PARENT: b'|', GRANDPARENT: b':', MISSINGPARENT: None}


def dagwalker(repo, revs):
    """cset DAG generator yielding (id, CHANGESET, ctx, [parentinfo]) tuples

    This generator function walks through revisions (which should be ordered
    from highest to lowest revision number). It yields a tuple for each node.

    Each parentinfo entry is a tuple with (edgetype, parentid), where edgetype
    is one of PARENT, GRANDPARENT or MISSINGPARENT. The node and parent ids
    are arbitrary integers which identify a node in the context of the graph
    returned.

    """
    gpcache = {}

    for rev in revs:
        ctx = repo[rev]
        # partition into parents in the rev set and missing parents, then
        # augment the lists with markers, to inform graph drawing code about
        # what kind of edge to draw between nodes.
        pset = {p.rev() for p in ctx.parents() if p.rev() in revs}
        mpars = [
            p.rev()
            for p in ctx.parents()
            if p.rev() != nullrev and p.rev() not in pset
        ]
        parents = [(PARENT, p) for p in sorted(pset)]

        for mpar in mpars:
            gp = gpcache.get(mpar)
            if gp is None:
                # precompute slow query as we know reachableroots() goes
                # through all revs (issue4782)
                if not isinstance(revs, smartset.baseset):
                    revs = smartset.baseset(revs)
                gp = gpcache[mpar] = sorted(
                    set(dagop.reachableroots(repo, revs, [mpar]))
                )
            if not gp:
                parents.append((MISSINGPARENT, mpar))
                pset.add(mpar)
            else:
                parents.extend((GRANDPARENT, g) for g in gp if g not in pset)
                pset.update(gp)

        yield (ctx.rev(), CHANGESET, ctx, parents)
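
# A minimal usage sketch for dagwalker() (assuming ``repo`` is an already-open
# localrepository; the revset is purely illustrative): build a descending
# smartset, walk it and print each node with its typed parent edges.
#
#   revs = smartset.baseset(sorted(repo.revs(b'all()'), reverse=True))
#   for rev, kind, ctx, parentinfo in dagwalker(repo, revs):
#       assert kind == CHANGESET
#       print(rev, parentinfo)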


def nodes(repo, nodes):
    """cset DAG generator yielding (id, CHANGESET, ctx, [parentids]) tuples

    This generator function walks the given nodes. It only returns parents
    that are in nodes, too.
    """
    include = set(nodes)
    for node in nodes:
        ctx = repo[node]
        parents = {
            (PARENT, p.rev()) for p in ctx.parents() if p.node() in include
        }
        yield (ctx.rev(), CHANGESET, ctx, sorted(parents))


def colored(dag, repo):
    """annotates a DAG with colored edge information

    For each DAG node this function emits tuples::

      (id, type, data, (col, color), [(col, nextcol, color, width, bcolor)])

    with the following new elements:

      - Tuple (col, color) with column and color index for the current node
      - A list of tuples indicating the edges between the current node and its
        parents; each entry also carries the branch-configured line width and
        color (-1 and b'' when no [graph] override applies).
    """
    seen = []
    colors = {}
    newcolor = 1
    config = {}

    for key, val in repo.ui.configitems(b'graph'):
        if b'.' in key:
            branch, setting = key.rsplit(b'.', 1)
            # Validation
            if setting == b"width" and val.isdigit():
                config.setdefault(branch, {})[setting] = int(val)
            elif setting == b"color" and val.isalnum():
                config.setdefault(branch, {})[setting] = val

    if config:
        getconf = util.lrucachefunc(
            lambda rev: config.get(repo[rev].branch(), {})
        )
    else:
        getconf = lambda rev: {}
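
    # The configuration consumed above lives in a [graph] section keyed by
    # branch name; a hypothetical example (branch and values invented):
    #
    #   [graph]
    #   default.width = 3
    #   default.color = FF0000
    #
    # would make edges of changesets on the 'default' branch three cells wide
    # and colored FF0000 in renderers that honour these hints.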

    for (cur, type, data, parents) in dag:

        # Compute seen and next
        if cur not in seen:
            seen.append(cur)  # new head
            colors[cur] = newcolor
            newcolor += 1

        col = seen.index(cur)
        color = colors.pop(cur)
        next = seen[:]

        # Add parents to next
        addparents = [p for pt, p in parents if p not in next]
        next[col : col + 1] = addparents

        # Set colors for the parents
        for i, p in enumerate(addparents):
            if not i:
                colors[p] = color
            else:
                colors[p] = newcolor
                newcolor += 1

        # Add edges to the graph
        edges = []
        for ecol, eid in enumerate(seen):
            if eid in next:
                bconf = getconf(eid)
                edges.append(
                    (
                        ecol,
                        next.index(eid),
                        colors[eid],
                        bconf.get(b'width', -1),
                        bconf.get(b'color', b''),
                    )
                )
            elif eid == cur:
                for ptype, p in parents:
                    bconf = getconf(p)
                    edges.append(
                        (
                            ecol,
                            next.index(p),
                            color,
                            bconf.get(b'width', -1),
                            bconf.get(b'color', b''),
                        )
                    )

        # Yield and move on
        yield (cur, type, data, (col, color), edges)
        seen = next


def asciiedges(type, char, state, rev, parents):
    """adds edge info to changelog DAG walk suitable for ascii()"""
    seen = state.seen
    if rev not in seen:
        seen.append(rev)
    nodeidx = seen.index(rev)

    knownparents = []
    newparents = []
    for ptype, parent in parents:
        if parent == rev:
            # self reference (should only be seen in null rev)
            continue
        if parent in seen:
            knownparents.append(parent)
        else:
            newparents.append(parent)
            state.edges[parent] = state.styles.get(ptype, b'|')

    ncols = len(seen)
    width = 1 + ncols * 2
    nextseen = seen[:]
    nextseen[nodeidx : nodeidx + 1] = newparents
    edges = [(nodeidx, nextseen.index(p)) for p in knownparents]

    seen[:] = nextseen
    while len(newparents) > 2:
        # ascii() only knows how to add or remove a single column between two
        # calls. Nodes with more than two parents break this constraint so we
        # introduce intermediate expansion lines to grow the active node list
        # slowly.
        edges.append((nodeidx, nodeidx))
        edges.append((nodeidx, nodeidx + 1))
        nmorecols = 1
        width += 2
        yield (type, char, width, (nodeidx, edges, ncols, nmorecols))
        char = b'\\'
        nodeidx += 1
        ncols += 1
        edges = []
        del newparents[0]

    if len(newparents) > 0:
        edges.append((nodeidx, nodeidx))
    if len(newparents) > 1:
        edges.append((nodeidx, nodeidx + 1))
    nmorecols = len(nextseen) - ncols
    if nmorecols > 0:
        width += 2
    # remove current node from edge characters, no longer needed
    state.edges.pop(rev, None)
    yield (type, char, width, (nodeidx, edges, ncols, nmorecols))


def _fixlongrightedges(edges):
    for (i, (start, end)) in enumerate(edges):
        if end > start:
            edges[i] = (start, end + 1)


def _getnodelineedgestail(echars, idx, pidx, ncols, coldiff, pdiff, fix_tail):
    if fix_tail and coldiff == pdiff and coldiff != 0:
        # Still going in the same non-vertical direction.
        if coldiff == -1:
            start = max(idx + 1, pidx)
            tail = echars[idx * 2 : (start - 1) * 2]
            tail.extend([b"/", b" "] * (ncols - start))
            return tail
        else:
            return [b"\\", b" "] * (ncols - idx - 1)
    else:
        remainder = ncols - idx - 1
        return echars[-(remainder * 2) :] if remainder > 0 else []


def _drawedges(echars, edges, nodeline, interline):
    for (start, end) in edges:
        if start == end + 1:
            interline[2 * end + 1] = b"/"
        elif start == end - 1:
            interline[2 * start + 1] = b"\\"
        elif start == end:
            interline[2 * start] = echars[2 * start]
        else:
            if 2 * end >= len(nodeline):
                continue
            nodeline[2 * end] = b"+"
            if start > end:
                (start, end) = (end, start)
            for i in range(2 * start + 1, 2 * end):
                if nodeline[i] != b"+":
                    nodeline[i] = b"-"


def _getpaddingline(echars, idx, ncols, edges):
    # all edges up to the current node
    line = echars[: idx * 2]
    # an edge for the current node, if there is one
    if (idx, idx - 1) in edges or (idx, idx) in edges:
        # (idx, idx - 1)      (idx, idx)
        # | | | |           | | | |
        # +---o |           | o---+
        # | | X |           | X | |
        # | |/ /            | |/ /
        # | | |             | | |
        line.extend(echars[idx * 2 : (idx + 1) * 2])
    else:
        line.extend([b' ', b' '])
    # all edges to the right of the current node
    remainder = ncols - idx - 1
    if remainder > 0:
        line.extend(echars[-(remainder * 2) :])
    return line


def _drawendinglines(lines, extra, edgemap, seen, state):
    """Draw ending lines for missing parent edges

    None indicates an edge that ends between this node and the next one.
    Replace it with a short line ending in '~' and add '/' lines to any
    edges to its right.

    """
    if None not in edgemap.values():
        return

    # Check for more edges to the right of our ending edges.
    # We need enough space to draw adjustment lines for these.
    edgechars = extra[::2]
    while edgechars and edgechars[-1] is None:
        edgechars.pop()
    shift_size = max((edgechars.count(None) * 2) - 1, 0)
    minlines = 3 if not state.graphshorten else 2
    while len(lines) < minlines + shift_size:
        lines.append(extra[:])

    if shift_size:
        empties = []
        toshift = []
        first_empty = extra.index(None)
        for i, c in enumerate(extra[first_empty::2], first_empty // 2):
            if c is None:
                empties.append(i * 2)
            else:
                toshift.append(i * 2)
        targets = list(range(first_empty, first_empty + len(toshift) * 2, 2))
        positions = toshift[:]
        for line in lines[-shift_size:]:
            line[first_empty:] = [b' '] * (len(line) - first_empty)
            for i in range(len(positions)):
                pos = positions[i] - 1
                positions[i] = max(pos, targets[i])
                line[pos] = b'/' if pos > targets[i] else extra[toshift[i]]

    map = {1: b'|', 2: b'~'} if not state.graphshorten else {1: b'~'}
    for i, line in enumerate(lines):
        if None not in line:
            continue
        line[:] = [c or map.get(i, b' ') for c in line]

    # remove edges that ended
    remove = [p for p, c in edgemap.items() if c is None]
    for parent in remove:
        del edgemap[parent]
        seen.remove(parent)


@attr.s
class asciistate(object):
    """State of ascii() graph rendering"""

    seen = attr.ib(init=False, default=attr.Factory(list))
    edges = attr.ib(init=False, default=attr.Factory(dict))
    lastcoldiff = attr.ib(init=False, default=0)
    lastindex = attr.ib(init=False, default=0)
    styles = attr.ib(init=False, default=attr.Factory(EDGES.copy))
    graphshorten = attr.ib(init=False, default=False)


def outputgraph(ui, graph):
    """outputs an ASCII graph of a DAG

    this is a helper function for 'ascii' below.

    takes the following arguments:

    - ui to write to
    - graph data: list of { graph nodes/edges, text }

    this function can be monkey-patched by extensions to alter graph display
    without needing to mimic all of the edge-fixup logic in ascii()
    """
    for (ln, logstr) in graph:
        ui.write((ln + logstr).rstrip() + b"\n")
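
# A minimal sketch of the monkey-patching mentioned in the docstring above
# (extension behaviour invented for illustration): an extension could wrap
# outputgraph() to prefix every emitted graph line.
#
#   from mercurial import extensions, graphmod
#
#   def prefixedoutputgraph(orig, ui, graph):
#       return orig(ui, [(b'>> ' + ln, logstr) for ln, logstr in graph])
#
#   def uisetup(ui):
#       extensions.wrapfunction(graphmod, 'outputgraph', prefixedoutputgraph)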


def ascii(ui, state, type, char, text, coldata):
    """prints an ASCII graph of the DAG

    takes the following arguments (one call per node in the graph):

      - ui to write to
      - Somewhere to keep the needed state in (init to asciistate())
      - Column of the current node in the set of ongoing edges.
      - Type indicator of node data, usually 'C' for changesets.
      - Payload: (char, lines):
        - Character to use as node's symbol.
        - List of lines to display as the node's text.
      - Edges; a list of (col, next_col) indicating the edges between
        the current node and its parents.
      - Number of columns (ongoing edges) in the current revision.
      - The difference between the number of columns (ongoing edges)
        in the next revision and the number of columns (ongoing edges)
        in the current revision. That is: -1 means one column removed;
        0 means no columns added or removed; 1 means one column added.
    """
    idx, edges, ncols, coldiff = coldata
    assert -2 < coldiff < 2
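    # A hypothetical coldata value (numbers invented): a node in column 1 of
    # a three-column graph whose two parents already occupy columns 0 and 2
    # arrives as idx=1, edges=[(1, 0), (1, 1)], ncols=3, coldiff=-1 (the
    # node's own column disappears, so the next row has one column less).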

    edgemap, seen = state.edges, state.seen
    # Be tolerant of history issues; make sure we have at least ncols + coldiff
    # elements to work with. See test-glog.t for broken history test cases.
    echars = [c for p in seen for c in (edgemap.get(p, b'|'), b' ')]
    echars.extend((b'|', b' ') * max(ncols + coldiff - len(seen), 0))

    if coldiff == -1:
        # Transform
        #
        #     | | |        | | |
        #     o | |  into  o---+
        #     |X /         |/ /
        #     | |          | |
        _fixlongrightedges(edges)

    # add_padding_line says whether to rewrite
    #
    #     | | | |        | | | |
    #     | o---+  into  | o---+
    #     |  / /         |   | |  # <--- padding line
    #     o | |          |  / /
    #                    o | |
    add_padding_line = (
        len(text) > 2 and coldiff == -1 and [x for (x, y) in edges if x + 1 < y]
    )

    # fix_nodeline_tail says whether to rewrite
    #
    #     | | o | |        | | o | |
    #     | | |/ /         | | |/ /
    #     | o | |    into  | o / /   # <--- fixed nodeline tail
    #     | |/ /           | |/ /
    #     o | |            o | |
    fix_nodeline_tail = len(text) <= 2 and not add_padding_line

    # nodeline is the line containing the node character (typically o)
    nodeline = echars[: idx * 2]
    nodeline.extend([char, b" "])

    nodeline.extend(
        _getnodelineedgestail(
            echars,
            idx,
            state.lastindex,
            ncols,
            coldiff,
            state.lastcoldiff,
            fix_nodeline_tail,
        )
    )

    # shift_interline is the line containing the non-vertical
    # edges between this entry and the next
    shift_interline = echars[: idx * 2]
    for i in pycompat.xrange(2 + coldiff):
        shift_interline.append(b' ')
    count = ncols - idx - 1
    if coldiff == -1:
        for i in pycompat.xrange(count):
            shift_interline.extend([b'/', b' '])
    elif coldiff == 0:
        shift_interline.extend(echars[(idx + 1) * 2 : ncols * 2])
    else:
        for i in pycompat.xrange(count):
            shift_interline.extend([b'\\', b' '])

    # draw edges from the current node to its parents
    _drawedges(echars, edges, nodeline, shift_interline)

    # lines is the list of all graph lines to print
    lines = [nodeline]
    if add_padding_line:
        lines.append(_getpaddingline(echars, idx, ncols, edges))

    # If the 'graphshorten' config is set, only draw shift_interline
    # when there is some non-vertical flow in the graph.
    if state.graphshorten:
        if any(c in br'\/' for c in shift_interline if c):
            lines.append(shift_interline)
    # Else, no 'graphshorten' config so draw shift_interline.
    else:
        lines.append(shift_interline)

    # make sure that there are as many graph lines as there are
    # log strings
    extra_interline = echars[: (ncols + coldiff) * 2]
    if len(lines) < len(text):
        while len(lines) < len(text):
            lines.append(extra_interline[:])

    _drawendinglines(lines, extra_interline, edgemap, seen, state)

    while len(text) < len(lines):
        text.append(b"")

    # print lines
    indentation_level = max(ncols, ncols + coldiff)
    lines = [
        b"%-*s " % (2 * indentation_level, b"".join(line)) for line in lines
    ]
    outputgraph(ui, zip(lines, text))

    # ... and start over
    state.lastcoldiff = coldiff
    state.lastindex = idx