util: lower water mark when removing nodes after cost limit reached
See the inline comment for the reasoning here. This is a pretty
common strategy for garbage collectors and other cache-like primitives.
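
For illustration, here is a minimal sketch of the strategy, assuming a toy
cache class. This is not the actual lrucachedict in mercurial/util.py, and
the 0.8 water mark ratio is an invented example value:

    import collections

    class costlimitedlru(object):
        def __init__(self, maxcost, watermarkratio=0.8):
            self.maxcost = maxcost
            # Evict down to this lower water mark, not merely below maxcost.
            self.watermark = maxcost * watermarkratio
            self.totalcost = 0
            self._entries = collections.OrderedDict()  # oldest entries first

        def insert(self, key, value, cost):
            if key in self._entries:
                self.totalcost -= self._entries.pop(key)[1]
            self._entries[key] = (value, cost)
            self.totalcost += cost
            if self.totalcost <= self.maxcost:
                return
            # Overshoot the eviction so the next several inserts don't each
            # have to evict again; this amortizes the removal cost.
            while self.totalcost > self.watermark and len(self._entries) > 1:
                _k, (_v, evictedcost) = self._entries.popitem(last=False)
                self.totalcost -= evictedcost
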
The performance impact is substantial:
$ hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 100
! inserts w/ cost limit
! wall 1.659181 comb 1.650000 user 1.650000 sys 0.000000 (best of 7)
! wall 1.722122 comb 1.720000 user 1.720000 sys 0.000000 (best of 6)
! mixed w/ cost limit
! wall 1.139955 comb 1.140000 user 1.140000 sys 0.000000 (best of 9)
! wall 1.182513 comb 1.180000 user 1.180000 sys 0.000000 (best of 9)
$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000
! inserts
! wall 0.679546 comb 0.680000 user 0.680000 sys 0.000000 (best of 15)
! sets
! wall 0.825147 comb 0.830000 user 0.830000 sys 0.000000 (best of 13)
! inserts w/ cost limit
! wall 25.105273 comb 25.080000 user 25.080000 sys 0.000000 (best of 3)
! wall 1.724397 comb 1.720000 user 1.720000 sys 0.000000 (best of 6)
! mixed
! wall 0.807096 comb 0.810000 user 0.810000 sys 0.000000 (best of 13)
! mixed w/ cost limit
! wall 12.104470 comb 12.070000 user 12.070000 sys 0.000000 (best of 3)
! wall 1.190563 comb 1.190000 user 1.190000 sys 0.000000 (best of 9)
$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000 --costlimit 10000 --mixedgetfreq 90
! inserts
! wall 0.711177 comb 0.710000 user 0.710000 sys 0.000000 (best of 14)
! sets
! wall 0.846992 comb 0.850000 user 0.850000 sys 0.000000 (best of 12)
! inserts w/ cost limit
! wall 25.963028 comb 25.960000 user 25.960000 sys 0.000000 (best of 3)
! wall 2.184311 comb 2.180000 user 2.180000 sys 0.000000 (best of 5)
! mixed
! wall 0.728256 comb 0.730000 user 0.730000 sys 0.000000 (best of 14)
! mixed w/ cost limit
! wall 3.174256 comb 3.170000 user 3.170000 sys 0.000000 (best of 4)
! wall 0.773186 comb 0.770000 user 0.770000 sys 0.000000 (best of 13)
$ hg perflrucachedict --size 100000 --gets 1000000 --sets 1000000 --mixed 1000000 --mixedgetfreq 90 --costlimit 5000000
! gets
! wall 1.191368 comb 1.190000 user 1.190000 sys 0.000000 (best of 9)
! wall 1.195304 comb 1.190000 user 1.190000 sys 0.000000 (best of 9)
! inserts
! wall 0.950995 comb 0.950000 user 0.950000 sys 0.000000 (best of 11)
! inserts w/ cost limit
! wall 1.589732 comb 1.590000 user 1.590000 sys 0.000000 (best of 7)
! sets
! wall 1.094941 comb 1.100000 user 1.090000 sys 0.010000 (best of 9)
! mixed
! wall 0.936420 comb 0.940000 user 0.930000 sys 0.010000 (best of 10)
! mixed w/ cost limit
! wall 0.882780 comb 0.870000 user 0.870000 sys 0.000000 (best of 11)
This puts us ~2x slower than caches without cost accounting. And for
read-heavy workloads (the prime use cases for caches), performance is
nearly identical.
In the worst case (pure write workloads with cost accounting enabled),
we're looking at ~1.6us per insert on large caches (1.59s wall for
1,000,000 inserts). That seems "fast enough."
Differential Revision: https://phab.mercurial-scm.org/D4505
Tests if hgweb can run without touching sys.stdin, as is required
by the WSGI standard and strictly implemented by mod_wsgi.
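
For context, a hypothetical WSGI application (not part of Mercurial or of
the test that follows) must take the request body from environ['wsgi.input']
rather than sys.stdin, which a server such as mod_wsgi may restrict:

    def application(environ, start_response):
        # Correct: read the body via the WSGI environ, never via sys.stdin.
        length = int(environ.get('CONTENT_LENGTH') or 0)
        body = environ['wsgi.input'].read(length)
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'read %d bytes\n' % len(body)]
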
$ hg init repo
$ cd repo
$ echo foo > bar
$ hg add bar
$ hg commit -m "test"
$ cat > request.py <<EOF
> from __future__ import absolute_import, print_function
> import os
> import sys
> from mercurial import (
>     dispatch,
>     hg,
>     ui as uimod,
>     util,
> )
> ui = uimod.ui
> from mercurial.hgweb.hgweb_mod import (
>     hgweb,
> )
> stringio = util.stringio
>
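> # Wrap stdin so any attempt by hgweb to touch it is reported on the real stdout.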
> class FileLike(object):
>     def __init__(self, real):
>         self.real = real
>     def fileno(self):
>         print('FILENO', file=sys.__stdout__)
>         return self.real.fileno()
>     def read(self):
>         print('READ', file=sys.__stdout__)
>         return self.real.read()
>     def readline(self):
>         print('READLINE', file=sys.__stdout__)
>         return self.real.readline()
>
> sys.stdin = FileLike(sys.stdin)
> errors = stringio()
> input = stringio()
> output = stringio()
>
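> # start_response callable: dump status and headers (minus ETag); body writes go to the output buffer.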
> def startrsp(status, headers):
>     print('---- STATUS')
>     print(status)
>     print('---- HEADERS')
>     print([i for i in headers if i[0] != 'ETag'])
>     print('---- DATA')
>     return output.write
>
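> # Minimal WSGI environ describing a GET of the repository root.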
> env = {
>     'wsgi.version': (1, 0),
>     'wsgi.url_scheme': 'http',
>     'wsgi.errors': errors,
>     'wsgi.input': input,
>     'wsgi.multithread': False,
>     'wsgi.multiprocess': False,
>     'wsgi.run_once': False,
>     'REQUEST_METHOD': 'GET',
>     'SCRIPT_NAME': '',
>     'PATH_INFO': '',
>     'QUERY_STRING': '',
>     'SERVER_NAME': '$LOCALIP',
>     'SERVER_PORT': os.environ['HGPORT'],
>     'SERVER_PROTOCOL': 'HTTP/1.0'
> }
>
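> # Instantiate the hgweb WSGI application and drain its response iterator.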
> i = hgweb('.')
> for c in i(env, startrsp):
>     pass
> print('---- ERRORS')
> print(errors.getvalue())
> print('---- OS.ENVIRON wsgi variables')
> print(sorted([x for x in os.environ if x.startswith('wsgi')]))
> print('---- request.ENVIRON wsgi variables')
> with i._obtainrepo() as repo:
>     print(sorted([x for x in repo.ui.environ if x.startswith('wsgi')]))
> EOF
$ $PYTHON request.py
---- STATUS
200 Script output follows
---- HEADERS
[('Content-Type', 'text/html; charset=ascii')]
---- DATA
---- ERRORS
---- OS.ENVIRON wsgi variables
[]
---- request.ENVIRON wsgi variables
['wsgi.errors', 'wsgi.input', 'wsgi.multiprocess', 'wsgi.multithread', 'wsgi.run_once', 'wsgi.url_scheme', 'wsgi.version']
$ cd ..