Mon, 15 Aug 2016 16:07:55 +0900 debugobsolete: add formatter support (issue5134)
Yuya Nishihara <yuya@tcha.org> [Mon, 15 Aug 2016 16:07:55 +0900] rev 29806
debugobsolete: add formatter support (issue5134)

It appears that computing the index isn't cheap if --rev is specified, which is why the "index" field is available only if --index is specified.

I've named marker.flags() "flag" because "flags" would imply a list or dict in the template world.

Thanks to Piotr Listkiewicz for the initial implementation of this patch.
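For illustration, a minimal sketch of what formatter-driven marker output could look like. The fm.startitem()/fm.write()/fm.plain() calls follow Mercurial's formatter API and marker.flags() is mentioned in the commit message above, but the showmarkers() function, its opts argument, and the field layout are assumptions, not the actual debugobsolete code.

    def showmarkers(fm, markers, opts):
        # Hypothetical sketch: push each marker through a formatter so the
        # same code can produce plain text, JSON, or templated output.
        for i, marker in enumerate(markers):
            fm.startitem()
            if opts.get('index'):
                # emitted only with --index, since computing the index is
                # not cheap when --rev narrows the marker set
                fm.write('index', '%i ', i)
            # exposed as "flag" (singular): "flags" would suggest a list or
            # dict in the template world rather than an integer bit field
            fm.write('flag', 'flag=%d ', marker.flags())
            fm.plain('\n')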
Mon, 15 Aug 2016 12:58:33 +0900 formatter: add function to convert dict to appropriate format
Yuya Nishihara <yuya@tcha.org> [Mon, 15 Aug 2016 12:58:33 +0900] rev 29805
formatter: add function to convert dict to appropriate format

This will be used by the formatter to process key-value pairs. The default field names and format are derived from the {extras} template keyword. Tests will be added later.
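A rough sketch of the idea, not the function actually added here: a helper that renders a dict of key-value pairs either as a plain "key=value" string, mirroring the {extras} template keyword, or as structured data that template/JSON output can iterate over. The name convertdict and its signature are hypothetical.

    def convertdict(data, fmt='%s=%s', sep=' ', plain=True):
        # Hypothetical helper, for illustration only.
        items = sorted(data.items())
        if plain:
            # default text output: "key1=value1 key2=value2"
            return sep.join(fmt % (k, v) for k, v in items)
        # structured output keeps the pairs intact for templates/JSON
        return [{'key': k, 'value': v} for k, v in items]

    print(convertdict({'branch': 'default', 'close': '1'}))
    # branch=default close=1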
Mon, 15 Aug 2016 17:17:39 +0900 check-code: make dict() pattern less invasive
Yuya Nishihara <yuya@tcha.org> [Mon, 15 Aug 2016 17:17:39 +0900] rev 29804
check-code: make dict() pattern less invasive

'foodict(x=y)' should be allowed.
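The actual check-code pattern is not reproduced here, but the fix amounts to the same idea as this sketch: require that "dict(" is not preceded by an identifier character, so that calls such as foodict(x=y) are no longer flagged.

    import re

    too_broad     = re.compile(r'dict\(\w+=')         # also matches foodict(x=y)
    less_invasive = re.compile(r'(?<!\w)dict\(\w+=')  # only a bare dict(x=y) call

    assert too_broad.search('foodict(x=y)')
    assert not less_invasive.search('foodict(x=y)')
    assert less_invasive.search('dict(x=y)')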
Sun, 14 Aug 2016 21:29:46 -0700 hgweb: tweak zlib chunking behavior
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 14 Aug 2016 21:29:46 -0700] rev 29803
hgweb: tweak zlib chunking behavior

When doing streaming compression with zlib, zlib appears to emit a chunk of data only after roughly 20-30kb of input (on average) is available; most calls to compress() return an empty string. On the mozilla-unified repo, only 48,433 of 921,167 (5.26%) calls to compress() returned data. In other words, we were sending hundreds of thousands of empty chunks through a generator, where they touched who knows how many frames (my guess is millions). Filtering the empty chunks out of the generator cuts down on that overhead.

In addition, we were previously feeding 8kb chunks into zlib compression. Since compress() tends to emit *compressed* data only after 20-30kb of input is available, it would take several calls before any data was produced. We increase the amount of data fed in at a time to 32kb. This reduces the number of calls to compress() from 921,167 to 115,146. It also reduces the number of output chunks from 48,433 to 31,377. This does increase the average output chunk size slightly, but that shouldn't matter in most scenarios.

The combination of these two changes appears to shave ~6s of CPU time, or ~3%, from a server serving the mozilla-unified repo.
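A simplified sketch of the approach described above, not the hgweb code itself: buffer input into larger chunks before handing it to a streaming zlib compressor, and drop the empty strings that most compress() calls return so downstream consumers only see chunks that carry data. The function name and the 32kb default are taken from the commit message; everything else is illustrative.

    import zlib

    def zlib_chunks(source, chunksize=32768):
        """Yield non-empty compressed chunks for an iterable of byte strings."""
        z = zlib.compressobj()
        buf = b''
        for data in source:
            buf += data
            while len(buf) >= chunksize:
                out = z.compress(buf[:chunksize])
                buf = buf[chunksize:]
                if out:  # most compress() calls return b''; skip them
                    yield out
        tail = z.compress(buf) + z.flush()
        if tail:
            yield tail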
Sun, 14 Aug 2016 17:07:05 +0900 test-gpg: run migration of v1 secret keys beforehand
Yuya Nishihara <yuya@tcha.org> [Sun, 14 Aug 2016 17:07:05 +0900] rev 29802
test-gpg: run migration of v1 secret keys beforehand

This suppresses unwanted output from "hg sign".
Sun, 14 Aug 2016 17:01:33 +0900 test-gpg: start gpg-agent under control of the test runner
Yuya Nishihara <yuya@tcha.org> [Sun, 14 Aug 2016 17:01:33 +0900] rev 29801
test-gpg: start gpg-agent under control of the test runner

GnuPG v2 automatically starts gpg-agent. We should kill the daemon process.