Adam Simpkins <simpkins@fb.com> [Tue, 26 Apr 2016 15:32:59 -0700] rev 29017
util: fix race in makedirs()
Update makedirs() to ignore EEXIST in case someone else has already created the
directory in question. Previously the ensuredirs() function existed, and was
nearly identical to makedirs() except that it fixed this race. Unfortunately
ensuredirs() was only used in 3 places, and most code uses the racy makedirs()
function. This fixes makedirs() to be non-racy, and replaces calls to
ensuredirs() with makedirs().
In particular, mercurial.scmutil.origpath() used the racy makedirs() code,
which could cause failures during "hg update" as it tried to create backup
directories.
This does slightly change the behavior of call sites using ensuredirs():
previously ensuredirs() would throw EEXIST if the path existed but was a
regular file instead of a directory. It did this by explicitly checking
os.path.isdir() after getting EEXIST. The new makedirs() code does not do this
and simply swallows all EEXIST errors. I kept the makedirs() behavior, since it seemed
preferable to avoid the extra stat call in the common case where this directory
already exists. If the path does happen to be a file, the caller will almost
certainly fail with an ENOTDIR error shortly afterwards anyway. I checked
the 3 existing call sites of ensuredirs(), and this seems to be the case for
them.
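For illustration, a rough Python sketch of the non-racy pattern described
above (simplified, and not the actual mercurial.util.makedirs() implementation):

  import errno
  import os

  def makedirs(name, mode=None):
      # Ignore EEXIST at every level: another process may create the
      # directory between any existence check and our mkdir().
      try:
          os.mkdir(name)
      except OSError as err:
          if err.errno == errno.EEXIST:
              return
          if err.errno != errno.ENOENT:
              raise
          parent = os.path.dirname(os.path.abspath(name))
          if parent == name:
              raise
          makedirs(parent, mode)
          try:
              os.mkdir(name)
          except OSError as err:
              # someone else won the race after we created the parent
              if err.errno != errno.EEXIST:
                  raise
      if mode is not None:
          os.chmod(name, mode)

Note there is deliberately no os.path.isdir() check after EEXIST, matching the
behavior described above.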
Yuya Nishihara <yuya@tcha.org> [Sun, 24 Apr 2016 21:35:30 +0900] rev 29016
chg: initialize sockdirfd to -1 instead of AT_FDCWD
As we don't use sockdirfd yet, this is the simplest workaround to compile chg
on old Unices where AT_FDCWD does not exist. Foozy pointed out Mac OS X 10.10
is required for AT_FDCWD as well as xxxat() functions.
Matt Mackall <mpm@selenic.com> [Fri, 22 Apr 2016 13:38:02 -0500] rev 29015
bdiff: further restrain potential quadratic performance
This causes the longest_match search to limit itself to a window of
30000 lines during search (roughly 1MB), thus avoiding a full O(N*M)
search that might occur in repetitive structured inputs. For a
particular class of many MB pathological test cases, this generated
the following timings:
  size    before   after
  10x     1.25s    1.24s
  100x    57s      33s
  1000x   >8400s   400s
The times on the right are dramatically faster and grow roughly linearly
instead of quadratically.
While windowing means deltas are no longer "optimal", the resulting
deltas were within a couple percent of expected size. While we've yet
to have a report of a file with the level of repetition necessary to
hit this case, some JSON/XML database dump scenario is fairly likely
to hit it.
This may also slightly improve the average-case performance for deltas
of large binaries.
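For illustration, a rough Python sketch of the windowing idea (the real code
is C in mercurial/bdiff.c; names and structure here are simplified):

  def longest_match(a, b, a1, a2, b1, b2, window=30000):
      # Find the longest run of identical lines between a[a1:a2] and
      # b[b1:b2], but never scan more than `window` lines of b, so a
      # highly repetitive input cannot force a full O(N*M) search.
      b2 = min(b2, b1 + window)
      best = (a1, b1, 0)  # (start in a, start in b, length)
      for i in range(a1, a2):
          for j in range(b1, b2):
              k = 0
              while i + k < a2 and j + k < b2 and a[i + k] == b[j + k]:
                  k += 1
              if k > best[2]:
                  best = (i, j, k)
      return best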
Matt Mackall <mpm@selenic.com> [Thu, 21 Apr 2016 22:04:11 -0500] rev 29014
bdiff: balance recursion to avoid quadratic behavior (issue4704)
For highly structured files like JSON or XML dumps with large numbers
of duplicate lines (eg braces) and isolated matching lines, bdiff
could find large numbers of equally good spans. Because it prefers
earlier matches, this would result in pathologically unbalanced
recursion and, in turn, quadratic performance.
This patch makes it prefer matches closer to the middle that tend to
balance recursion. This change improves the speed of a pathological
test case from 1100s to 9s.
Included is a smaller test that has a roughly 50x safety margin on the
performance it accepts. It's likely to fail on pure-Python builds because
difflib also has a recursion-balancing problem.
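For illustration, a rough Python sketch of the tie-break (not the C code in
mercurial/bdiff.c; the brute-force scan here stands in for the real
hash-chain search):

  def longest_match(a, b, a1, a2, b1, b2):
      # Among equally long matches, prefer the one closest to the middle
      # of the range, so the recursion on either side of the chosen match
      # splits the remaining work roughly in half instead of peeling off
      # a sliver at a time.
      half = (a1 + a2) // 2
      best_i, best_j, best_len = a1, b1, 0
      for i in range(a1, a2):
          for j in range(b1, b2):
              k = 0
              while i + k < a2 and j + k < b2 and a[i + k] == b[j + k]:
                  k += 1
              if k > best_len or (k == best_len and k and
                                  abs(i - half) < abs(best_i - half)):
                  best_i, best_j, best_len = i, j, k
      return best_i, best_j, best_len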
Matt Mackall <mpm@selenic.com> [Thu, 21 Apr 2016 21:05:26 -0500] rev 29013
bdiff: deal better with duplicate lines
The longest_match code compares all the possible positions in two
files to find the best match. Given a pair of sequences, it
effectively searches a grid like this:
      a b b b c . d e . f
      0 1 2 3 4 5 6 7 8 9
  a   1 - - - - - - - - -
  b   - 2 1 1 - - - - - -
  b   - 1 3 2 - - - - - -
  b   - 1 2 4 - - - - - -
  .   - - - - - 1 - - 1 -
Here, the 4 in the middle says "the first four lines of the
file match", which it can compute by comparing the fourth lines and
then adding one to the result found when comparing the third lines in
the entry to the upper left.
We generally avoid the quadratic worst case by only looking at lines
that match, which is precomputed. We also avoid quadratic storage by
only keeping a single column vector and then keeping track of the best
match.
Unfortunately, this can get us into trouble with the sequences above.
Because we want to reuse the '3' value when calculating the '4', we
need to be careful not to overwrite it with the '2' we calculate
immediately before. If we scan left to right, top to bottom, we're
going to have a problem: we'll overwrite our 3 before we use it and
calculate a suboptimal best match.
To address this, we can either keep two column vectors and swap
between them (which significantly complicates bookkeeping), or change
our scanning order. If we instead scan from left to right, bottom to
top, we'll avoid ever overwriting values we'll need in the future.
This unfortunately needs several changes to be made simultaneously:
- change the order we build the initial hash chains for the b sequence
- change the sentinel values from INT_MAX to -1
- change the visit order in the longest_match inner loop
- add a tie-breaker preference for earlier matches
This last is needed because we previously had an implicit tie-breaker
from our visitation order that our test suite relies on. Later matches
can also trigger a bug in the normalization code in diff().
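For illustration, a rough Python version of the single-vector scheme (the
real code is C and only visits positions whose lines are known to match via
the precomputed hash chains):

  def longest_match(a, b):
      # col[j] holds the length of the matching run ending at a[i], b[j]
      # for the current i.  Visiting j from high to low in the inner loop
      # (the "bottom to top" order described above) means col[j - 1] still
      # holds the value from the previous i when we read it, so nothing is
      # overwritten before it is needed.
      col = [0] * len(b)
      best_i = best_j = best_len = 0
      for i, line in enumerate(a):
          for j in range(len(b) - 1, -1, -1):
              if b[j] != line:
                  col[j] = 0
                  continue
              k = col[j - 1] + 1 if j > 0 else 1
              col[j] = k
              if k > best_len:
                  best_i, best_j, best_len = i - k + 1, j - k + 1, k
      return best_i, best_j, best_len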
Matt Mackall <mpm@selenic.com> [Thu, 21 Apr 2016 21:53:18 -0500] rev 29012
bdiff: fix latent normalization bug
This bug is hidden by the current bias towards matches at the
beginning of the file. When this bias is tweaked later to address
recursion balancing, the normalization code could cause the next block
to shrink to a negative length, thus creating invalid delta chunks. We
add checks here to disallow that.
This bug requires test cases that are awkwardly large for the test
suite, but it is picked up very rapidly by the included torture tester.
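For illustration, a rough Python sketch of the guarded normalization (not
the C code in mercurial/bdiff.c):

  def normalize(hunks, a, b):
      # Each hunk [a1, a2, b1, b2] is a matching block, in order.  A block
      # may be slid forward while the line just past it still matches, and
      # the nxt[0] < nxt[1] / nxt[2] < nxt[3] checks stop the slide before
      # the following block would shrink to a negative length.
      for curr, nxt in zip(hunks, hunks[1:]):
          while (curr[1] < len(a) and curr[3] < len(b)
                 and nxt[0] < nxt[1] and nxt[2] < nxt[3]
                 and a[curr[1]] == b[curr[3]]):
              curr[1] += 1   # grow the current block...
              curr[3] += 1
              nxt[0] += 1    # ...at the expense of the next one
              nxt[2] += 1
      return hunks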
Matt Mackall <mpm@selenic.com> [Thu, 21 Apr 2016 21:46:31 -0500] rev 29011
bdiff: fold in shift calculation in normalize
Keeping the shift calculation separate just makes the code harder to read
without any performance advantage. We're going to make the check here more
complex, so let's simplify it first.
Matt Mackall <mpm@selenic.com> [Thu, 21 Apr 2016 21:37:13 -0500] rev 29010
bdiff: unify duplicate normalize loops
We're about to make the while loop check more complicated, so let's simplify
first.