tests/test-oldcgi.t @ 34824:e2ad93bcc084
revlog: introduce an experimental flag to slice chunk reads when too sparse
Delta chains can become quite sparse if there is a lot of unrelated data between
relevant pieces. Right now, revlog always reads all the necessary data for the
delta chain in one single read. This can lead to a lot of unrelated data being
read (see issue5482 for more details).
One can use the `experimental.maxdeltachainspan` option with a large value
(or -1) to easily produce a very sparse delta chain.
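For instance, a minimal hgrc snippet to that effect (with -1 lifting the span
limit entirely, as described above) might look like:

  [experimental]
  maxdeltachainspan = -1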
This change introduces the ability to slice the chunk retrieval into multiple
reads, skipping large sections of unrelated data. Preliminary testing shows
promising results. For example, the peak memory consumption when reading a
manifest on a large repository is reduced from 600MB to 250MB (200MB without
maxdeltachainspan). However, the slicing itself and the multiple reads can have
a negative impact on performance, which is why the new feature is hidden behind
an experimental flag.
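As a rough sketch of the heuristic only (this is not the actual revlog code:
the function name slice_chunk_reads, the list-of-(offset, length) chunk
representation, and the max_gap threshold are all illustrative assumptions),
the gap-based slicing can be pictured as:

  def slice_chunk_reads(chunks, max_gap):
      """Split a sorted list of (offset, length) reads into groups.

      A new group starts whenever the hole between the end of the
      previous chunk and the start of the next one exceeds max_gap
      bytes, so each group can be fetched with a single contiguous
      read while the large holes between groups are skipped.
      """
      groups = []
      current = []
      prev_end = 0
      for offset, length in chunks:
          if current and offset - prev_end > max_gap:
              groups.append(current)
              current = []
          current.append((offset, length))
          prev_end = offset + length
      if current:
          groups.append(current)
      return groups

  # Three chunks with a ~10MB hole before the last one:
  chain = [(0, 4096), (5000, 2048), (10000000, 4096)]
  for group in slice_chunk_reads(chain, max_gap=65536):
      start = group[0][0]
      end = group[-1][0] + group[-1][1]
      print('read %d bytes at offset %d' % (end - start, start))
  # -> read 7048 bytes at offset 0
  # -> read 4096 bytes at offset 10000000

With a single read, the chain above would pull in roughly 10MB; sliced, it
becomes two reads totalling about 11KB.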
Future changesets will add various parameters to control the slicing heuristics.
We hope to experiment with a wide variety of repositories during the 4.4 cycle
and hopefully turn the feature on by default in 4.5.
As a first attempt, the algorithm itself is likely to change significantly.
However, we wish to define the APIs and have a baseline to build on.
author:   Paul Morelle <paul.morelle@octobus.net>
date:     Tue, 10 Oct 2017 17:50:27 +0200
parents:  8e6f4939a69a
children: 23b749b84b8a
#require no-msys # MSYS will translate web paths as if they were file paths

This tests if CGI files from before d0db3462d568 still work.

  $ hg init test
  $ cat >hgweb.cgi <<HGWEB
  > #!$PYTHON
  > #
  > # An example CGI script to use hgweb, edit as necessary
  >
  > import cgitb, os, sys
  > cgitb.enable()
  >
  > # sys.path.insert(0, "/path/to/python/lib") # if not a system-wide install
  > from mercurial import hgweb
  >
  > h = hgweb.hgweb("test", "Empty test repository")
  > h.run()
  > HGWEB
  $ chmod 755 hgweb.cgi

  $ cat >hgweb.config <<HGWEBDIRCONF
  > [paths]
  > test = test
  > HGWEBDIRCONF
  $ cat >hgwebdir.cgi <<HGWEBDIR
  > #!$PYTHON
  > #
  > # An example CGI script to export multiple hgweb repos, edit as necessary
  >
  > import cgitb, sys
  > cgitb.enable()
  >
  > # sys.path.insert(0, "/path/to/python/lib") # if not a system-wide install
  > from mercurial import hgweb
  >
  > # The config file looks like this. You can have paths to individual
  > # repos, collections of repos in a directory tree, or both.
  > #
  > # [paths]
  > # virtual/path = /real/path
  > # virtual/path = /real/path
  > #
  > # [collections]
  > # /prefix/to/strip/off = /root/of/tree/full/of/repos
  > #
  > # collections example: say directory tree /foo contains repos /foo/bar,
  > # /foo/quux/baz. Give this config section:
  > # [collections]
  > # /foo = /foo
  > # Then repos will list as bar and quux/baz.
  >
  > # Alternatively you can pass a list of ('virtual/path', '/real/path') tuples
  > # or use a dictionary with entries like 'virtual/path': '/real/path'
  >
  > h = hgweb.hgwebdir("hgweb.config")
  > h.run()
  > HGWEBDIR
  $ chmod 755 hgwebdir.cgi

  $ . "$TESTDIR/cgienv"
  $ $PYTHON hgweb.cgi > page1
  $ $PYTHON hgwebdir.cgi > page2

  $ PATH_INFO="/test/"
  $ PATH_TRANSLATED="/var/something/test.cgi"
  $ REQUEST_URI="/test/test/"
  $ SCRIPT_URI="http://hg.omnifarious.org/test/test/"
  $ SCRIPT_URL="/test/test/"
  $ $PYTHON hgwebdir.cgi > page3

  $ grep -i error page1 page2 page3
  [1]
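For reference, assuming a Mercurial source checkout, a .t test like this one
is normally executed with the project's test runner from the tests/ directory:

  $ python run-tests.py test-oldcgi.t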