Sun, 13 Nov 2016 06:06:23 +0900 util: add utility function to skip avoiding file stat ambiguity if EPERM stable
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Sun, 13 Nov 2016 06:06:23 +0900] rev 30319
util: add utility function to skip avoiding file stat ambiguity if EPERM

Now, advancing stat.st_mtime by os.utime() is used to avoid file stat ambiguity. But according to the POSIX specification, utime(2) with explicit time information is permitted only for a process with:

- the effective user ID equal to the user ID of the file, or
- appropriate privileges

http://pubs.opengroup.org/onlinepubs/9699919799/functions/utime.html

Therefore, just having group write access to a file causes EPERM when applying os.utime() on it (e.g. when working on a repository shared via group access permission).

This patch adds the utility function avoidambig() to the filestat class, which avoids file stat ambiguity but skips the adjustment on EPERM. It is reasonable to always ignore EPERM, because utime(2) causes EPERM only in the case described above (EACCES is used only for utime(2) with NULL).
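A minimal sketch of the idea in Python, not Mercurial's actual filestat method (the real signature and bookkeeping differ): advance the file's mtime with os.utime() to disambiguate the stat data, and silently skip the adjustment when the kernel refuses with EPERM.

    import errno
    import os

    def avoidambig(path, oldstat):
        """Advance path's st_mtime past oldstat's, tolerating EPERM.

        Only the file's owner (or a privileged process) may set an
        explicit timestamp, so a merely group-writable file owned by
        someone else raises EPERM; in that case the timestamp is left
        alone and the ambiguity is simply accepted.
        """
        advanced = oldstat.st_mtime + 1
        try:
            os.utime(path, (advanced, advanced))
        except OSError as err:
            if err.errno != errno.EPERM:
                raise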
Sun, 06 Nov 2016 18:51:57 -0800 bdiff: replace hash algorithm
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 06 Nov 2016 18:51:57 -0800] rev 30318
bdiff: replace hash algorithm

This patch replaces lyhash with the hash algorithm used by diffutils. The algorithm has its origins in Git commit 2e9d1410, which dates all the way back to 1992. The license header in the code at that revision is GPL v2.

I have not performed an extensive analysis of the distribution (and therefore the bucketing) of the hash output. However, `hg perfbdiff` shows some clear wins. I'd like to think that if it is good enough for diffutils it is good enough for us.

From the mozilla-unified repository:

$ perfbdiff -m 3041e4d59df2
! wall 0.053271 comb 0.060000 user 0.060000 sys 0.000000 (best of 100)
! wall 0.035827 comb 0.040000 user 0.040000 sys 0.000000 (best of 100)

$ perfbdiff 0e9928989e9c --alldata --count 100
! wall 6.204277 comb 6.200000 user 6.200000 sys 0.000000 (best of 3)
! wall 4.309710 comb 4.300000 user 4.300000 sys 0.000000 (best of 3)

From the hg repo:

$ perfbdiff 35000 --alldata --count 1000
! wall 0.660358 comb 0.660000 user 0.660000 sys 0.000000 (best of 15)
! wall 0.534092 comb 0.530000 user 0.530000 sys 0.000000 (best of 19)

Looking at the generated assembly and at statistical profiler output from the kernel level, I believe there is room to make this function even faster. Namely, we're still consuming data character by character instead of at the word level, which translates to more loop iterations and more instructions.

At this juncture, though, the real performance killer is that we're hashing every line. We should get a significant speedup if we change the algorithm to find the longest common prefix and longest common suffix, treat those as single "lines", and then only do the line splitting and hashing on the parts that differ. That will require a lot of C code, however. I'm optimistic this approach could result in a ~2x speedup.
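For illustration only, here is a Python rendering of a rotate-and-add line hash in the diffutils style (rotate the accumulator left by 7 bits, then add the next byte). The actual change lives in C inside bdiff, so treat this as a sketch of the scheme rather than the patched code.

    def rol32(v, n):
        """Rotate a 32-bit value v left by n bits."""
        return ((v << n) | (v >> (32 - n))) & 0xffffffff

    def hashline(line):
        """Hash one line (a bytes object) with the rotate-and-add scheme."""
        h = 0
        for byte in line:
            h = (rol32(h, 7) + byte) & 0xffffffff
        return h

    # Identical lines hash identically, so bdiff can bucket matching
    # lines of the two inputs before looking for common runs.
    assert hashline(b"int main(void)\n") == hashline(b"int main(void)\n")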
Fri, 04 Nov 2016 21:44:25 -0700 profiling: make statprof the default profiler (BC)
Gregory Szorc <gregory.szorc@gmail.com> [Fri, 04 Nov 2016 21:44:25 -0700] rev 30317
profiling: make statprof the default profiler (BC)

The statprof sampling profiler runs with significantly less overhead. Its data is therefore more useful. Furthermore, its default output shows the hotpath, which I've found to be way more useful than the default profiler's function time table.

There is one behavioral regression with this change worth noting: the statprof profiler currently doesn't profile individual hgweb requests like lsprof does. This is because the current implementation of statprof only profiles the thread that started profiling. The ability for lsprof to profile individual hgweb requests is relatively new and likely not widely used. Furthermore, I have plans to modify statprof to support profiling multiple threads. I expect that change to go through several iterations. I'm submitting this patch first so there is more time to test statprof. Perfect is the enemy of good.
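As a rough illustration (this is not Mercurial's statprof) of why a sampling profiler has low overhead: the profiled code runs at full speed and is only interrupted a few hundred times per second by a timer signal whose handler records where execution currently is. Since CPython delivers signals to the main thread only, a sketch like this also shows the single-thread limitation mentioned above.

    import collections
    import signal

    samples = collections.Counter()

    def _sample(signum, frame):
        # Record the function and line the interrupted thread is executing.
        samples[(frame.f_code.co_name, frame.f_lineno)] += 1

    def profile(func, interval=0.001):
        """Run func while sampling its stack roughly every millisecond."""
        signal.signal(signal.SIGPROF, _sample)
        signal.setitimer(signal.ITIMER_PROF, interval, interval)
        try:
            func()
        finally:
            signal.setitimer(signal.ITIMER_PROF, 0, 0)  # stop sampling
        return samples.most_common(10)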