Tue, 16 Aug 2016 17:15:54 +0900 check-code: allow assignment to hasattr variable
Yuya Nishihara <yuya@tcha.org> [Tue, 16 Aug 2016 17:15:54 +0900] rev 29796
check-code: allow assignment to hasattr variable
Mon, 15 Aug 2016 16:07:55 +0900 debugobsolete: add formatter support (issue5134)
Yuya Nishihara <yuya@tcha.org> [Mon, 15 Aug 2016 16:07:55 +0900] rev 29795
debugobsolete: add formatter support (issue5134)

It appears that computing the index isn't cheap if --rev is specified. That's why the "index" field is available only if --index is specified. I've named marker.flags() as "flag" because "flags" implies a list or dict in the template world.

Thanks to Piotr Listkiewicz for the initial implementation of this patch.
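The description above follows the usual formatter pattern for Mercurial debug commands. Below is a minimal, hedged sketch of that pattern, not the real debugobsolete code: get_markers(), marker_flag() and marker_index() are hypothetical helpers, and only the ui.formatter()/startitem()/write()/plain()/end() calls reflect the actual formatter API.

    # Hedged sketch of the formatter pattern described above; helper
    # functions and field wiring are illustrative only.
    def debugmarkers(ui, repo, **opts):
        fm = ui.formatter('debugmarkers', opts)   # plain, JSON or template output
        for marker in get_markers(repo, opts.get('rev')):   # hypothetical helper
            fm.startitem()
            # singular "flag": "flags" would suggest a list/dict to templates
            fm.write('flag', '%d ', marker_flag(marker))    # hypothetical helper
            if opts.get('index'):
                # the index is expensive to compute, so only emit it on request
                fm.write('index', '%i ', marker_index(marker))   # hypothetical helper
            fm.plain('\n')
        fm.end()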
Mon, 15 Aug 2016 12:58:33 +0900 formatter: add function to convert dict to appropriate format
Yuya Nishihara <yuya@tcha.org> [Mon, 15 Aug 2016 12:58:33 +0900] rev 29794
formatter: add function to convert dict to appropriate format

This will be used by the formatter to process key-value pairs. The default field names and format are derived from the {extras} template keyword. Tests will be added later.
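As a rough, standalone illustration of the conversion this message describes (not Mercurial's actual implementation): plain output flattens a dict into formatted key=value pairs, while structured output keeps the mapping intact.

    # Hedged sketch; the defaults mirror the {extras}-style "key=value"
    # rendering mentioned above.
    def formatdict(data, fmt='%s=%s', sep=' ', plain=True):
        if plain:
            return sep.join(fmt % (k, v) for k, v in sorted(data.items()))
        return dict(data)

    # Example:
    #   formatdict({'branch': 'default', 'close': '1'})
    #   -> 'branch=default close=1'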
Mon, 15 Aug 2016 17:17:39 +0900 check-code: make dict() pattern less invasive
Yuya Nishihara <yuya@tcha.org> [Mon, 15 Aug 2016 17:17:39 +0900] rev 29793
check-code: make dict() pattern less invasive

'foodict(x=y)' should be allowed.
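A hedged illustration of the idea (the real check-code pattern may differ): without an anchor, a rule that flags dict(key=value) construction also trips on identifiers that merely end in "dict", so the pattern needs to require that dict( is not preceded by a name character.

    import re

    loose = re.compile(r'dict\(\w+=')             # also matches foodict(x=y)
    strict = re.compile(r'(?<![\w.])dict\(\w+=')  # dict( must not follow a name char

    assert loose.search('foodict(x=y)')           # false positive
    assert not strict.search('foodict(x=y)')      # now allowed
    assert strict.search('dict(x=y)')             # still flagged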
Sun, 14 Aug 2016 21:29:46 -0700 hgweb: tweak zlib chunking behavior
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 14 Aug 2016 21:29:46 -0700] rev 29792
hgweb: tweak zlib chunking behavior

When doing streaming compression with zlib, zlib appears to emit a chunk of data only after ~20-30kb of input is available, on average. In other words, most calls to compress() return an empty string. On the mozilla-unified repo, only 48,433 of 921,167 (5.26%) calls to compress() returned data. We were therefore sending hundreds of thousands of empty chunks via a generator, where they touched who knows how many frames (my guess is millions). Filtering out the empty chunks from the generator cuts down on overhead.

In addition, we were previously feeding 8kb chunks into zlib compression. Since compress() tends to emit *compressed* data only after 20-30kb is available, it would take several calls before data was produced. We increase the amount of data fed in at a time to 32kb. This reduces the number of calls to compress() from 921,167 to 115,146. It also reduces the number of output chunks from 48,433 to 31,377. This does increase the average output chunk size a little, but I don't think this will matter in most scenarios.

The combination of these 2 changes appears to shave ~6s CPU time, or ~3%, from a server serving the mozilla-unified repo.
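Both changes fit into a small streaming-compression loop. The sketch below is a hedged approximation of the shape described above, not hgweb's actual code; the function name and read size are illustrative.

    import zlib

    def compressed_chunks(fileobj, readsize=32768):
        """Yield zlib-compressed chunks, skipping empty compress() results."""
        z = zlib.compressobj()
        while True:
            data = fileobj.read(readsize)   # feed larger 32kb blocks
            if not data:
                break
            chunk = z.compress(data)
            if chunk:                       # most calls return b''; don't yield those
                yield chunk
        tail = z.flush()
        if tail:
            yield tail

Each yielded chunk typically becomes one write on the WSGI response, so dropping the empty strings avoids paying per-chunk overhead for data that doesn't exist.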
Sun, 14 Aug 2016 17:07:05 +0900 test-gpg: run migration of v1 secret keys beforehand
Yuya Nishihara <yuya@tcha.org> [Sun, 14 Aug 2016 17:07:05 +0900] rev 29791
test-gpg: run migration of v1 secret keys beforehand

This suppresses unwanted output at "hg sign".
Sun, 14 Aug 2016 17:01:33 +0900 test-gpg: start gpg-agent under control of the test runner
Yuya Nishihara <yuya@tcha.org> [Sun, 14 Aug 2016 17:01:33 +0900] rev 29790
test-gpg: start gpg-agent under control of the test runner

GnuPG v2 automatically starts gpg-agent. We should kill the daemon process.
Sun, 14 Aug 2016 16:49:47 +0900 test-gpg: make temporary copy of GNUPGHOME
Yuya Nishihara <yuya@tcha.org> [Sun, 14 Aug 2016 16:49:47 +0900] rev 29789
test-gpg: make temporary copy of GNUPGHOME

GnuPG v2 will convert v1 secret keys and create a socket under $GNUPGHOME. This patch makes sure no state persists. We no longer need to verify trustdb.gpg, which was added by aae219a99a6e.
Mon, 15 Aug 2016 20:39:33 -0700 hgweb: document why we don't allow untrusted settings to control zlib
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 15 Aug 2016 20:39:33 -0700] rev 29788
hgweb: document why we don't allow untrusted settings to control zlib

Added comment per discussion on mercurial-devel.
Sun, 14 Aug 2016 18:37:24 -0700 hgweb: profile HTTP requests
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 14 Aug 2016 18:37:24 -0700] rev 29787
hgweb: profile HTTP requests

Currently, running `hg serve --profile` doesn't yield anything useful: when the process is terminated the profiling output displays results from the main thread, which typically spends most of its time in select.select(). Furthermore, it has no meaningful results from mercurial.* modules because the threads serving HTTP requests don't actually get profiled.

This patch teaches the hgweb wsgi applications to profile individual requests. If profiling is enabled, the profiler kicks in after HTTP/WSGI environment processing but before Mercurial's main request processing. The profile results are printed to the configured profiling output. If running `hg serve` from a shell, they will be printed to stderr, just before the HTTP request line is logged. If profiling to a file, we only write a single profile to the file because the file is not opened in append mode. We could add support for appending to files in a future patch if someone wants it.

Per-request profiling doesn't work with the statprof profiler because internally that profiler collects samples from the thread that *initially* requested profiling be enabled. I have plans to address this by vendoring Facebook's customized statprof and then improving it.
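A generic sketch of the per-request approach described above, using the standard-library cProfile rather than Mercurial's profiling module (the function name and wiring are illustrative): the profiler is enabled in the worker thread handling the request, wrapped around the main request processing, and the results are dumped to the configured output afterwards.

    import cProfile
    import pstats
    import sys

    def profiled_request(run_request, environ, start_response, output=sys.stderr):
        # enable profiling after WSGI environment processing, in the thread
        # that actually serves the request
        profiler = cProfile.Profile()
        profiler.enable()
        try:
            return run_request(environ, start_response)
        finally:
            profiler.disable()
            stats = pstats.Stats(profiler, stream=output)
            stats.sort_stats('cumulative').print_stats(20)

Because the profiler only samples the thread it was enabled in, running it inside the request handler is what makes the mercurial.* call stacks show up in the output.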
Sun, 14 Aug 2016 16:03:30 -0700 hgweb: abstract call to hgwebdir wsgi function
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 14 Aug 2016 16:03:30 -0700] rev 29786
hgweb: abstract call to hgwebdir wsgi function

The function names and behavior now match hgweb. The reason for this will be obvious in the next patch.
Sun, 14 Aug 2016 18:28:43 -0700 profiling: don't error with statprof when profiling has already started
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 14 Aug 2016 18:28:43 -0700] rev 29785
profiling: don't error with statprof when profiling has already started

statprof.reset() asserts if profiling has already started, so don't call it if profiling is already running.
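The fix amounts to a guard around the reset call. A hedged sketch of that guard; profiling_active() is a hypothetical stand-in for whatever check the real code uses to detect a running statprof session.

    # Hedged sketch: only reset when no profiling session is active, then start.
    def start_statprof(statprof, profiling_active, frequency=1000):
        if not profiling_active():
            statprof.reset(frequency)   # would assert if profiling were running
        statprof.start()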