Thu, 21 Apr 2016 22:04:11 -0500 bdiff: balance recursion to avoid quadratic behavior (issue4704) stable
Matt Mackall <mpm@selenic.com> [Thu, 21 Apr 2016 22:04:11 -0500] rev 29014
bdiff: balance recursion to avoid quadratic behavior (issue4704)

For highly structured files like JSON or XML dumps with large numbers of duplicate lines (e.g. braces) and isolated matching lines, bdiff could find large numbers of equally good spans. Because it prefers earlier matches, this resulted in pathologically unbalanced recursion and quadratic performance.

This patch makes bdiff prefer matches closer to the middle, which tends to balance the recursion. The change improves the speed of a pathological test case from 1100s to 9s.

Included is a smaller test with a roughly 50x safety margin on the performance it accepts. It is likely to fail on pure builds because difflib also has a recursion-balancing problem.
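A minimal sketch of the tie-breaking idea in C (the helper and its parameters are illustrative, not taken from bdiff.c): among equally good matches, prefer the one whose start lies closest to the midpoint of the current window, so the recursive calls on either side of the chosen match stay roughly balanced.

  #include <stdlib.h>

  /* Hypothetical helper: given two equally long candidate matches,
   * keep the one whose start is closer to the middle of the window
   * [lo, hi), so the recursion on either side stays balanced. */
  static int prefer_balanced(int cur_best, int candidate, int lo, int hi)
  {
          int mid = lo + (hi - lo) / 2;

          if (abs(candidate - mid) < abs(cur_best - mid))
                  return candidate;
          return cur_best;
  }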
Thu, 21 Apr 2016 21:05:26 -0500 bdiff: deal better with duplicate lines stable
Matt Mackall <mpm@selenic.com> [Thu, 21 Apr 2016 21:05:26 -0500] rev 29013
bdiff: deal better with duplicate lines

The longest_match code compares all the possible positions in two files to find the best match. Given a pair of sequences, it effectively searches a grid like this:

    a b b b c . d e . f
    0 1 2 3 4 5 6 7 8 9
  a 1 - - - - - - - - -
  b - 2 1 1 - - - - - -
  b - 1 3 2 - - - - - -
  b - 1 2 4 - - - - - -
  . - - - - - 1 - - 1 -

Here, the 4 in the middle says "the first four lines of the file match", which it can compute by comparing the fourth lines and then adding one to the result found when comparing the third lines (the entry to the upper left).

We generally avoid the quadratic worst case by only looking at lines that match, which is precomputed. We also avoid quadratic storage by keeping only a single column vector and tracking the best match as we go.

Unfortunately, this can get us into trouble with the sequences above. Because we want to reuse the '3' value when calculating the '4', we need to be careful not to overwrite it with the '2' we calculate immediately before. If we scan left to right, top to bottom, we have a problem: we overwrite our 3 before we use it and calculate a suboptimal best match.

To address this, we can either keep two column vectors and swap between them (which significantly complicates the bookkeeping) or change our scanning order. If we instead scan from left to right, bottom to top, we never overwrite values we will still need.

This unfortunately needs several changes to be made simultaneously:

- change the order we build the initial hash chains for the b sequence
- change the sentinel values from INT_MAX to -1
- change the visit order in the longest_match inner loop
- add a tie-breaker preference for earlier matches

The last is needed because we previously had an implicit tie-breaker, provided by our visitation order, that our test suite relies on. Later matches can also trigger a bug in the normalization code in diff().
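The scan-order trick is easiest to see on the plain dynamic program the grid above describes. The sketch below is a simplified model, not the hash-chain code in bdiff.c: a single column vector is reused across outer iterations, so the inner index must be visited in reverse (the grid's bottom-to-top order) to keep the upper-left value alive until it is read. The strict comparison keeps whichever equal-length match the scan order reaches first; the real patch adds an explicit preference for earlier matches because that implicit tie-break changes with the scan order.

  #include <string.h>

  struct match_sketch { int i, j, len; };

  /* a and b are hashed line values; col must have room for lb ints. */
  static struct match_sketch
  longest_match_sketch(const int *a, int la, const int *b, int lb, int *col)
  {
          struct match_sketch best = {0, 0, 0};

          memset(col, 0, lb * sizeof(*col));
          for (int i = 0; i < la; i++) {
                  /* reverse inner order: col[j - 1] still holds the value
                   * computed for the previous i, i.e. the entry to the
                   * upper left in the grid */
                  for (int j = lb - 1; j >= 0; j--) {
                          if (a[i] != b[j]) {
                                  col[j] = 0;
                                  continue;
                          }
                          col[j] = (j > 0 ? col[j - 1] : 0) + 1;
                          /* strict '>' keeps the first equally long match
                           * found in scan order (an implicit tie-break) */
                          if (col[j] > best.len) {
                                  best.i = i - col[j] + 1;
                                  best.j = j - col[j] + 1;
                                  best.len = col[j];
                          }
                  }
          }
          return best;
  }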
Thu, 21 Apr 2016 21:53:18 -0500 bdiff: fix latent normalization bug stable
Matt Mackall <mpm@selenic.com> [Thu, 21 Apr 2016 21:53:18 -0500] rev 29012
bdiff: fix latent normalization bug

This bug is hidden by the current bias towards matches at the beginning of the file. When that bias is tweaked later to address recursion balancing, the normalization code could cause the next block to shrink to a negative length, creating invalid delta chunks. We add checks here to disallow that.

This bug requires test cases that are an awkwardly large size for the test suite, but it is picked up very rapidly by the included torture tester.
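A hypothetical sketch of the kind of guard this describes (the hunk fields and helper name are assumed for illustration, not copied from bdiff.c): before sliding a hunk forward over identical lines, clamp the shift so the following hunk can never be left with a negative number of lines.

  struct hunk_sketch { int a1, a2, b1, b2; struct hunk_sketch *next; };

  /* Clamp a proposed forward shift of curr so that the next hunk,
   * which starts at next->a1/next->b1, is never shrunk below zero
   * lines on either side of the diff. */
  static int clamp_shift(const struct hunk_sketch *curr, int shift)
  {
          const struct hunk_sketch *next = curr->next;

          if (!next)
                  return shift;
          if (shift > next->a1 - curr->a2)
                  shift = next->a1 - curr->a2;
          if (shift > next->b1 - curr->b2)
                  shift = next->b1 - curr->b2;
          return shift;
  }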
Thu, 21 Apr 2016 21:46:31 -0500 bdiff: fold in shift calculation in normalize stable
Matt Mackall <mpm@selenic.com> [Thu, 21 Apr 2016 21:46:31 -0500] rev 29011
bdiff: fold in shift calculation in normalize

The separate shift calculation just makes the code harder to read without any performance advantage. We're going to make the check here more complex, so let's make it simpler first.
Thu, 21 Apr 2016 21:37:13 -0500 bdiff: unify duplicate normalize loops stable
Matt Mackall <mpm@selenic.com> [Thu, 21 Apr 2016 21:37:13 -0500] rev 29010
bdiff: unify duplicate normalize loops

We're about to make the while loop check more complicated, so let's simplify first.
Mon, 25 Apr 2016 16:34:02 -0700 make: backout changeset 51f5fae84e43 stable
Siddharth Agarwal <sid0@fb.com> [Mon, 25 Apr 2016 16:34:02 -0700] rev 29009
make: backout changeset 51f5fae84e43

Support for '!=' was only added in GNU Make 4.0, and CentOS versions as new as CentOS 7 only carry 3.82. I will leave figuring out compatibility with BSD make as an exercise for interested folks.
Thu, 01 Jan 1970 00:00:00 +0000 tests: test-lock-badness.t message could come later stable
timeless <timeless@mozdev.org> [Thu, 01 Jan 1970 00:00:00 +0000] rev 29008
tests: test-lock-badness.t message could come later

I got this on gcc112:

  @@ -58,8 +58,8 @@
     $ hg -R b ci -A -m b --config hooks.precommit="python:`pwd`/hooks.py:sleepone" > stdout &
     $ hg -R b up -q --config hooks.pre-update="python:`pwd`/hooks.py:sleephalf"
     waiting for lock on working directory of b held by '*:*' (glob)
  -  got lock after ? seconds (glob)
     $ wait
  +  got lock after 1 seconds
     $ cat stdout
     adding b