revlog: use an LRU cache for delta chain bases

Profiling using statprof revealed a hotspot during changegroup
application calculating delta chain bases on generaldelta repos.
Essentially, revlog._addrevision() was performing a lot of redundant
work tracing the delta chain as part of determining when the chain
distance was acceptable. This was most pronounced when adding
revisions to manifests, which can have delta chains thousands of
revisions long.

There was a delta chain base cache on revlogs before, but it only
captured a single revision. This was acceptable before generaldelta,
when _addrevision would build deltas from the previous revision and
thus we'd pretty much guarantee a cache hit when resolving the delta
chain base on a subsequent _addrevision call. However, it isn't
suitable for generaldelta because parent revisions aren't necessarily
the last processed revision.

This patch converts the delta chain base cache to an LRU dict cache.
The cache can hold multiple entries, so generaldelta repos have a
higher chance of getting a cache hit.
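
For illustration only, an LRU dict keyed by revision number and
holding that revision's resolved chain base is all the cache needs to
be (Mercurial's util.lrucachedict is the natural in-tree type for
this). The class name, capacity, and methods in the sketch below are
hypothetical, not the actual revlog code:

import collections

class chainbasecache(object):
    """Bounded LRU mapping of revision number -> chain base (sketch)."""

    def __init__(self, maxsize=100):
        self._maxsize = maxsize
        self._entries = collections.OrderedDict()

    def get(self, rev):
        if rev not in self._entries:
            return None
        # Re-insert so this revision becomes the most recently used entry.
        base = self._entries.pop(rev)
        self._entries[rev] = base
        return base

    def set(self, rev, base):
        self._entries.pop(rev, None)
        self._entries[rev] = base
        if len(self._entries) > self._maxsize:
            # Evict the least recently used entry.
            self._entries.popitem(last=False)

The win is on the hit path: resolving the chain base of a recently
seen revision becomes a dictionary lookup instead of a walk over a
delta chain that can be thousands of revisions long.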
The impact of this change when processing changegroup additions is
significant. On a generaldelta conversion of the "mozilla-unified"
repo (which contains heads of the main Firefox repositories in
chronological order - this means there are lots of transitions between
heads in revlog order), this change has the following impact when
performing an `hg unbundle` of an uncompressed bundle of the repo:

before: 5:42 CPU time
after:  4:34 CPU time

Most of this time is saved when applying the changelog and manifest
revlogs:

before: 2:30 CPU time
after:  1:17 CPU time
That's nearly a 50% reduction in CPU time applying changesets and
manifests!

Applying a gzipped bundle of the same repo (effectively simulating a
`hg clone` over HTTP) showed a similar speedup:

before: 5:53 CPU time
after:  4:46 CPU time

Wall time improvements were basically the same as CPU time.

I didn't measure explicitly, but it feels like most of the time
is saved when processing manifests. This makes sense, as large
manifests tend to have very long delta chains and thus benefit the
most from this cache.

So, this change effectively makes changegroup application (which is
used by `hg unbundle`, `hg clone`, `hg pull`, `hg unshelve`, and
various other commands) significantly faster when delta chains are
long (which can happen on repos with large numbers of files and thus
large manifests).

In theory, this change can result in more memory utilization. However,
we're caching a dict of ints. At most we have 200 ints + Python object
overhead per revlog. And, the cache is really only populated when
performing read-heavy operations, such as adding changegroups or
scanning an individual revlog. For memory bloat to be an issue, we'd
need to scan/read several revisions from several revlogs all while
having active references to several revlogs. I don't think there are
many operations that do this, so I don't think memory bloat from the
cache will be an issue.
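
As a purely illustrative sanity check on that claim (exact sizes vary
by Python version and platform), roughly 200 small ints plus the dict
holding them come to about ten kilobytes:

import sys

# ~100 cache entries, i.e. roughly 200 ints (revision -> chain base).
entries = {rev: rev - 1 for rev in range(100)}
ints = sum(sys.getsizeof(k) + sys.getsizeof(v) for k, v in entries.items())
table = sys.getsizeof(entries)   # the dict structure, excluding contents
print(ints + table)              # on the order of 10 KB on 64-bit CPython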
/*
 base85 codec

 Copyright 2006 Brendan Cully <brendan@kublai.com>

 This software may be used and distributed according to the terms of
 the GNU General Public License, incorporated herein by reference.

 Largely based on git's implementation
*/

#define PY_SSIZE_T_CLEAN
#include <Python.h>

#include "util.h"

static const char b85chars[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
	"abcdefghijklmnopqrstuvwxyz!#$%&()*+-;<=>?@^_`{|}~";
static char b85dec[256];
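
/*
 * Reverse lookup table filled in by b85prep(): b85dec[c] holds the
 * index of character c in b85chars plus one, and 0 for characters that
 * are not in the alphabet, so the decoder can detect bad input by
 * checking for a negative value after subtracting one.
 */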
static void b85prep(void)
{
	unsigned i;

	memset(b85dec, 0, sizeof(b85dec));
	for (i = 0; i < sizeof(b85chars); i++)
		b85dec[(int)(b85chars[i])] = i + 1;
}
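
/*
 * Encode a byte string: every 4 input bytes become 5 output characters.
 * Without padding, a trailing group of n bytes (1-3) emits n + 1
 * characters; with padding, the output length is always a multiple of 5.
 */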
static PyObject *b85encode(PyObject *self, PyObject *args)
{
	const unsigned char *text;
	PyObject *out;
	char *dst;
	Py_ssize_t len, olen, i;
	unsigned int acc, val, ch;
	int pad = 0;

	if (!PyArg_ParseTuple(args, "s#|i", &text, &len, &pad))
		return NULL;

	if (pad)
		olen = ((len + 3) / 4 * 5) - 3;
	else {
		olen = len % 4;
		if (olen)
			olen++;
		olen += len / 4 * 5;
	}

	/* The output buffer is over-allocated by 3 bytes because the loop
	 * below always writes a full 5-character group; the surplus is
	 * trimmed off at the end when padding was not requested. */
	if (!(out = PyBytes_FromStringAndSize(NULL, olen + 3)))
		return NULL;

	dst = PyBytes_AsString(out);

	while (len) {
		acc = 0;
		for (i = 24; i >= 0; i -= 8) {
			ch = *text++;
			acc |= ch << i;
			if (--len == 0)
				break;
		}
		for (i = 4; i >= 0; i--) {
			val = acc % 85;
			acc /= 85;
			dst[i] = b85chars[val];
		}
		dst += 5;
	}

	if (!pad)
		_PyBytes_Resize(&out, olen);

	return out;
}
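
/*
 * Decode base85 text produced by b85encode: every 5 characters yield 4
 * bytes, and a trailing group of n characters (2-4) yields n - 1 bytes.
 * Characters outside the alphabet and groups that overflow 32 bits
 * raise ValueError.
 */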
static PyObject *b85decode(PyObject *self, PyObject *args)
{
	PyObject *out;
	const char *text;
	char *dst;
	Py_ssize_t len, i, j, olen, cap;
	int c;
	unsigned int acc;

	if (!PyArg_ParseTuple(args, "s#", &text, &len))
		return NULL;

	olen = len / 5 * 4;
	i = len % 5;
	if (i)
		olen += i - 1;
	if (!(out = PyBytes_FromStringAndSize(NULL, olen)))
		return NULL;

	dst = PyBytes_AsString(out);

	i = 0;
	while (i < len)
	{
		acc = 0;
		/* Accumulate up to four characters of the group; the final
		 * character is handled separately below so the 32-bit
		 * overflow check can be applied. */
		cap = len - i - 1;
		if (cap > 4)
			cap = 4;
		for (j = 0; j < cap; i++, j++)
		{
			c = b85dec[(int)*text++] - 1;
			if (c < 0)
				return PyErr_Format(
					PyExc_ValueError,
					"bad base85 character at position %d",
					(int)i);
			acc = acc * 85 + c;
		}
		if (i++ < len)
		{
			c = b85dec[(int)*text++] - 1;
			if (c < 0)
				return PyErr_Format(
					PyExc_ValueError,
					"bad base85 character at position %d",
					(int)i);
			/* overflow detection: 0xffffffff == "|NsC0",
			 * "|NsC" == 0x03030303 */
			if (acc > 0x03030303 || (acc *= 85) > 0xffffffff - c)
				return PyErr_Format(
					PyExc_ValueError,
					"bad base85 sequence at position %d",
					(int)i);
			acc += c;
		}

		/* Emit the decoded bytes, most significant first, by
		 * rotating the accumulator one byte at a time. */
		cap = olen < 4 ? olen : 4;
		olen -= cap;
		for (j = 0; j < 4 - cap; j++)
			acc *= 85;
		if (cap && cap < 4)
			acc += 0xffffff >> (cap - 1) * 8;
		for (j = 0; j < cap; j++)
		{
			acc = (acc << 8) | (acc >> 24);
			*dst++ = acc;
		}
	}

	return out;
}

static char base85_doc[] = "Base85 Data Encoding";

static PyMethodDef methods[] = {
	{"b85encode", b85encode, METH_VARARGS,
	 "Encode text in base85.\n\n"
	 "If the second parameter is true, pad the result to a multiple of "
	 "five characters.\n"},
	{"b85decode", b85decode, METH_VARARGS, "Decode base85 text.\n"},
	{NULL, NULL}
};

#ifdef IS_PY3K
static struct PyModuleDef base85_module = {
	PyModuleDef_HEAD_INIT,
	"base85",
	base85_doc,
	-1,
	methods
};

PyMODINIT_FUNC PyInit_base85(void)
{
	b85prep();

	return PyModule_Create(&base85_module);
}
#else
PyMODINIT_FUNC initbase85(void)
{
	Py_InitModule3("base85", methods, base85_doc);

	b85prep();
}
#endif
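
/*
 * Example usage from Python 2, assuming the module is built as
 * mercurial.base85 (illustrative only; the unpadded form round-trips
 * exactly):
 *
 *   from mercurial import base85
 *
 *   encoded = base85.b85encode('hello world')   # 14 base85 characters
 *   assert base85.b85decode(encoded) == 'hello world'
 */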