revlog: use an LRU cache for delta chain bases
Profiling with statprof revealed a hotspot during changegroup
application: calculating delta chain bases on generaldelta repos.
Essentially, revlog._addrevision() was performing a lot of redundant
work tracing the delta chain as part of determining whether the chain
distance was acceptable. This was most pronounced when adding
revisions to manifests, which can have delta chains thousands of
revisions long.
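To make the redundant work concrete, here is a minimal sketch, with
hypothetical, simplified names (not revlog's actual index layout), of
the chain walk that was being repeated for every incoming revision:

    from collections import namedtuple

    # Simplified index entry: ``deltabase`` is the revision this one's
    # delta was computed against; a full snapshot deltas against itself.
    # The real revlog index entry stores many more fields.
    IndexEntry = namedtuple('IndexEntry', ['deltabase'])

    def chainbase_uncached(index, rev):
        # O(chain length) walk. Repeating it for every revision added
        # by a changegroup makes the total work quadratic in the chain
        # length, which is what the profile showed.
        base = index[rev].deltabase
        while base != rev:
            rev = base
            base = index[rev].deltabase
        return base

    # rev 0 is a snapshot; revs 1-4 each delta against their predecessor.
    index = [IndexEntry(0), IndexEntry(0), IndexEntry(1),
             IndexEntry(2), IndexEntry(3)]
    assert chainbase_uncached(index, 4) == 0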
There was a delta chain base cache on revlogs before, but it only
captured a single revision. This was acceptable before generaldelta,
when _addrevision would build deltas from the previous revision and
thus we'd pretty much guarantee a cache hit when resolving the delta
chain base on a subsequent _addrevision call. However, it isn't
suitable for generaldelta, because a revision's delta parent isn't
necessarily the last processed revision.
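The old behavior amounted to a one-entry cache along these lines
(hypothetical names, for illustration only):

    class SingleEntryChainBaseCache:
        # Remembers the chain base of exactly one revision. With
        # pre-generaldelta ordering, the next _addrevision asks about
        # the revision that was just cached, so this almost always
        # hits; with generaldelta it often asks about some other
        # parent, so it usually misses.
        def __init__(self):
            self._rev = None
            self._base = None

        def get(self, rev):
            return self._base if rev == self._rev else None

        def set(self, rev, base):
            self._rev, self._base = rev, base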
This patch converts the delta chain base cache to an LRU dict cache.
The cache can hold multiple entries, so generaldelta repos have a
higher chance of getting a cache hit.
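A rough sketch of the scheme, here using collections.OrderedDict for
the LRU bookkeeping (Mercurial ships a util.lrucachedict helper for
this; the capacity of 100 below is an assumption consistent with the
"200 ints" figure later in this message):

    from collections import OrderedDict

    class LRUChainBaseCache:
        def __init__(self, maxsize=100):  # capacity is an assumption
            self._cache = OrderedDict()
            self._maxsize = maxsize

        def get(self, rev):
            base = self._cache.get(rev)
            if base is not None:
                self._cache.move_to_end(rev)  # mark as recently used
            return base

        def set(self, rev, base):
            self._cache[rev] = base
            self._cache.move_to_end(rev)
            if len(self._cache) > self._maxsize:
                self._cache.popitem(last=False)  # evict oldest entry

    def chainbase(index, rev, cache):
        # Consult the cache before falling back to the full chain walk
        # from the earlier sketch.
        base = cache.get(rev)
        if base is None:
            base = chainbase_uncached(index, rev)
            cache.set(rev, base)
        return base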
The impact of this change when processing changegroup additions is
significant. On a generaldelta conversion of the "mozilla-unified"
repo (which contains heads of the main Firefox repositories in
chronological order - this means there are lots of transitions between
heads in revlog order), this change has the following impact when
performing an `hg unbundle` of an uncompressed bundle of the repo:
before: 5:42 CPU time
after: 4:34 CPU time
Most of this time is saved when applying the changelog and manifest
revlogs:
before: 2:30 CPU time
after: 1:17 CPU time
That's nearly a 50% reduction in CPU time when applying changesets and
manifests!
Applying a gzipped bundle of the same repo (effectively simulating a
`hg clone` over HTTP) showed a similar speedup:
before: 5:53 CPU time
after: 4:46 CPU time
Wall time improvements were basically the same as CPU time.
I didn't measure explicitly, but it feels like most of the time
is saved when processing manifests. This makes sense, as large
manifests tend to have very long delta chains and thus benefit the
most from this cache.
So, this change effectively makes changegroup application (which is
used by `hg unbundle`, `hg clone`, `hg pull`, `hg unshelve`, and
various other commands) significantly faster when delta chains are
long (which can happen on repos with large numbers of files and thus
large manifests).
In theory, this change can result in more memory utilization. However,
we're caching a dict of ints. At most we have 200 ints + Python object
overhead per revlog. And, the cache is really only populated when
performing read-heavy operations, such as adding changegroups or
scanning an individual revlog. For memory bloat to be an issue, we'd
need to scan/read several revisions from several revlogs all while
having active references to several revlogs. I don't think there are
many operations that do this, so I don't think memory bloat from the
cache will be an issue.
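As a back-of-envelope check of that claim (assuming a fully populated
100-entry cache, which matches the 200-int figure above):

    import sys

    # 100 int keys + 100 int values = 200 ints per revlog cache.
    cache = {rev: rev + 1000 for rev in range(100)}
    ints = sum(sys.getsizeof(k) + sys.getsizeof(v)
               for k, v in cache.items())
    total = sys.getsizeof(cache) + ints
    print('approximate bytes per revlog cache:', total)
    # On CPython this prints roughly 10KB, i.e. negligible per revlog.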
#require no-msys # MSYS will translate web paths as if they were file paths
This tests if CGI files from after d0db3462d568 but
before d74fc8dec2b4 still work.
$ hg init test
$ cat >hgweb.cgi <<HGWEB
> #!/usr/bin/env python
> #
> # An example CGI script to use hgweb, edit as necessary
>
> import cgitb
> cgitb.enable()
>
> from mercurial import demandimport; demandimport.enable()
> from mercurial.hgweb import hgweb
> from mercurial.hgweb import wsgicgi
> from mercurial.hgweb.request import wsgiapplication
>
> def make_web_app():
>     return hgweb("test", "Empty test repository")
>
> wsgicgi.launch(wsgiapplication(make_web_app))
> HGWEB
$ chmod 755 hgweb.cgi
$ cat >hgweb.config <<HGWEBDIRCONF
> [paths]
> test = test
> HGWEBDIRCONF
$ cat >hgwebdir.cgi <<HGWEBDIR
> #!/usr/bin/env python
> #
> # An example CGI script to export multiple hgweb repos, edit as necessary
>
> import cgitb
> cgitb.enable()
>
> from mercurial import demandimport; demandimport.enable()
> from mercurial.hgweb import hgwebdir
> from mercurial.hgweb import wsgicgi
> from mercurial.hgweb.request import wsgiapplication
>
> def make_web_app():
>     return hgwebdir("hgweb.config")
>
> wsgicgi.launch(wsgiapplication(make_web_app))
> HGWEBDIR
$ chmod 755 hgwebdir.cgi
$ . "$TESTDIR/cgienv"
$ python hgweb.cgi > page1
$ python hgwebdir.cgi > page2
$ PATH_INFO="/test/"; export PATH_INFO
$ PATH_TRANSLATED="/var/something/test.cgi"; export PATH_TRANSLATED
$ REQUEST_URI="/test/test/"; export REQUEST_URI
$ SCRIPT_URI="http://hg.omnifarious.org/test/test/"; export SCRIPT_URI
$ SCRIPT_URL="/test/test/"; export SCRIPT_URL
$ python hgwebdir.cgi > page3
$ grep -i error page1 page2 page3
[1]