Martin von Zweigbergk <martinvonz@google.com> [Wed, 17 Apr 2019 23:10:29 -0700] rev 42260
copies: filter out copies from non-existent source later in _chain()
_changesetforwardcopies() repeatedly calls _chain(). That is very
expensive because _chain() does lookups in the manifest. I hope to
split the function into two parts: 1) simple chaining, not
considering end points, and 2) filtering out files that don't exist in
the end points (and ping-pong copies/renames).
This patch gets us closer to that by moving the check for a
non-existent source later in the function. Now there are no more
checks for "src" and "dst" in the first loop; all the filtering of
invalid copies is done in the second loop. The code also looks much
more consistent now.
No measurable impact on `hg debugpathcopies 4.0 4.8`. That shouldn't
be surprising, since the only case where we now do more checks is
chained copies/renames, which are quite rare in practice.
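For context, here is a rough sketch (illustrative only, not the actual
Mercurial code; names like 'srcfiles' and 'dstfiles' are placeholders)
of the shape the function is moving toward, with plain chaining in the
first loop and all filtering in the second:

  def _chain_sketch(a, b, srcfiles, dstfiles):
      # 'a' and 'b' map destination -> source; chain 'b' on top of 'a'
      t = dict(a)
      for dst, src in b.items():
          if src in t:
              # chained copy/rename: w -> src -> dst
              t[dst] = t[src]
          else:
              t[dst] = src
      # all filtering of invalid copies happens here, in one place
      for dst, src in list(t.items()):
          if src not in srcfiles or dst not in dstfiles:
              del t[dst]  # source or target missing in the end points
          elif src == dst:
              del t[dst]  # ping-pong rename back onto itself
      return t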
Differential Revision: https://phab.mercurial-scm.org/D6277
Martin von Zweigbergk <martinvonz@google.com> [Thu, 18 Apr 2019 00:12:56 -0700] rev 42259
copies: clarify mutually exclusive cases in _chain() with a s/if/elif/
If the 'b' dict has a rename from 'x' to 'y', it shouldn't be possible
for 'x' to be both (a key) in 'a' and in 'src'. That would mean that
'x' is a file in the source commit and also a rename destination in
the intermediate commit. But we currently don't allow renaming files
onto existing files, so that shouldn't happen. So let's clarify that
by using an "elif" instead of an "if". And if we did allow renaming
files onto existing files, we should prefer to use the rename
destination in the intermediate commit as source anyway.
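Schematically (a simplified sketch of the loop in question, not the real
code; 'srcctx' stands in for the actual source-commit membership test):

  def _chain_elif_sketch(a, b, srcctx):
      t = dict(a)
      for dst, src in b.items():
          if src in a:
              # 'src' is itself a rename destination in the
              # intermediate commit; chain through it
              t[dst] = a[src]
          elif src in srcctx:
              # 'src' is a plain file present in the source commit;
              # mutually exclusive with the branch above, since
              # renaming onto an existing file is not allowed
              t[dst] = src
      return t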
Differential Revision: https://phab.mercurial-scm.org/D6276
Martin von Zweigbergk <martinvonz@google.com> [Thu, 18 Apr 2019 00:05:05 -0700] rev 42258
copies: delete a redundant cleanup step in _chain()
The check is redundant since d5edb5d3a337 (copies: filter out copies
when target is not in destination manifest, 2019-02-14). To test that
hypothesis, I made this change on top of that commit, but all tests
still passed. I think the case was necessary before then; we just
didn't have tests for it.
Differential Revision: https://phab.mercurial-scm.org/D6275
Martin von Zweigbergk <martinvonz@google.com> [Wed, 17 Apr 2019 23:10:14 -0700] rev 42257
copies: document cases in _chain()
Differential Revision: https://phab.mercurial-scm.org/D6274
Martin von Zweigbergk <martinvonz@google.com> [Wed, 17 Apr 2019 14:44:18 -0700] rev 42256
copies: ignore heuristics copytracing when using changeset-centric algos
Differential Revision: https://phab.mercurial-scm.org/D6269
Martin von Zweigbergk <martinvonz@google.com> [Wed, 17 Apr 2019 14:42:23 -0700] rev 42255
copies: move check for experimental.copytrace==<falsy> earlier
I'm going to ignore experimental.copytrace when changeset-centric
algorithms are required. This little refactoring makes that easier to
add.
Differential Revision: https://phab.mercurial-scm.org/D6268
Martin von Zweigbergk <martinvonz@google.com> [Wed, 17 Apr 2019 14:11:54 -0700] rev 42254
copies: replace .items() by .values() where appropriate
As pointed out by Pierre-Yves.
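For illustration (not the actual call sites), the kind of change this
refers to:

  # before: the key is unpacked but never used
  for dst, src in copies.items():
      sources.add(src)

  # after: iterate over the values directly
  for src in copies.values():
      sources.add(src)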
Differential Revision: https://phab.mercurial-scm.org/D6266
Martin von Zweigbergk <martinvonz@google.com> [Fri, 12 Apr 2019 10:44:37 -0700] rev 42253
copies: inline _computenonoverlap() in mergecopies()
We now call pathcopies() from the base to each of the commits, and
that calls _computeforwardmissing(), which does file prefetching (in
the remotefilelog override). So the call to _computenonoverlap() is
now pointless (the sets of files from _computenonoverlap() are subsets
of the sets of files from _computeforwardmissing()).
This somehow also fixes a broken remotefilelog test.
Differential Revision: https://phab.mercurial-scm.org/D6256
Martin von Zweigbergk <martinvonz@google.com> [Thu, 11 Apr 2019 23:22:54 -0700] rev 42252
copies: calculate mergecopies() based on pathcopies()
When copies are stored in changesets, we need a changeset-centric
version of mergecopies() just like we have a changeset-centric version
of pathcopies(). I think the natural way of thinking about
mergecopies() is in terms of pathcopies() from the base to each of the
commits. So if we can rewrite mergecopies() based on two such
pathcopies() calls, we'll get the changeset-centric version for
free. That's what this patch does.
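Conceptually (a rough sketch with made-up names, not the real function
signature or return value), the rewrite amounts to:

  from mercurial.copies import pathcopies

  def mergecopies_sketch(base, c1, c2):
      # copies/renames on each side of the merge, both relative to base
      copies1 = pathcopies(base, c1)  # dst -> src for base..c1
      copies2 = pathcopies(base, c2)  # dst -> src for base..c2
      # e.g. a source renamed to different names on each side diverges
      bysource1 = {src: dst for dst, src in copies1.items()}
      diverge = {}
      for dst2, src in copies2.items():
          dst1 = bysource1.get(src)
          if dst1 is not None and dst1 != dst2:
              diverge[src] = [dst1, dst2]
      return copies1, copies2, diverge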
A nice bonus is that it ends up being a lot simpler. mergecopies() has
accumulated a lot of technical debt over time. One good example is the
code for dealing with grafts (the "partial/incomplete/dirty"
stuff). Since pathcopies() already deals with backwards renames and
ping-pong renames, we get that for free.
I've run tests with hard-coded debug logging for "fullcopy" and while
I haven't looked at every difference it produces, all the ones I have
looked at seemed reasonable to me. I'm a little surprised that no more
tests fail when run with '--extra-config-opt
experimental.copies.read-from=compatibility' compared to before this
patch. This patch also fixes the broken cases in test-annotate.t and
test-fastannotate.t. It also enables the part of test-copies.t that
was previously disabled exactly because mergecopies() needed to get a
changeset-centric version.
One drawback of the rewritten code is that we may now make
remotefilelog prefetch more files. We used to prefetch files that were
unique to either side of the merge compared to the other. We now
prefetch files that are unique to either side of the merge compared to
the base. This means that if you added the same file to each side, we
would not prefetch it before, but we would now. Such cases are
probably quite rare, but one likely scenario where they happen is when
moving from a commit to its successor (or the other way around). The
user will probably already have the files in the cache in such cases,
so it's probably not a big deal.
Some timings for calculating mergecopies between two revisions
(revisions shown on each line, all using the common ancestor as base):
In the hg repo:
4.8 4.9: 0.21s -> 0.21s
4.0 4.8: 0.35s -> 0.63s
In an old copy of the mozilla-unified repo:
FIREFOX_BETA_60_BASE^ FIREFOX_BETA_60_BASE: 0.82s -> 0.82s
FIREFOX_NIGHTLY_59_END FIREFOX_BETA_60_BASE: 2.5s -> 2.6s
FIREFOX_BETA_59_END FIREFOX_BETA_60_BASE: 3.9s -> 4.1s
FIREFOX_AURORA_50_BASE FIREFOX_BETA_60_BASE: 31s -> 33s
So it's measurably slower in most cases. The most significant
difference is in the hg repo between revisions 4.0 and 4.8. In that
case it seems to come from the fact that pathcopies() uses
fctx.isintroducedafter() (in _tracefile), while the old mergecopies()
used fctx.linkrev() (in _checkcopies()). That results in a single call
to filectx._adjustlinkrev(), which is responsible for the entire
difference in time (in my repo). So we pay a performance penalty but
we get more correct code (see change in
test-mv-cp-st-diff.t). Deleting the "== f.filenode()" in _tracefile()
recovers the lost performance in the hg repo.
There were a few other optimizations in _checkcopies() that I could
not measure any impact from. One was the "seen" set. Another was a
"continue" when the file was not in the destination manifest
(corresponding to "am" in _tracefile).
Also note that merge copies are not calculated when updating with a
clean working copy, which is probably the most common case. I
therefore think the much simpler code is worth the slowdown.
Differential Revision: https://phab.mercurial-scm.org/D6255
Martin von Zweigbergk <martinvonz@google.com> [Mon, 29 Apr 2019 14:38:54 -0700] rev 42251
tests: add test where copy source is deleted and added back
This shows another difference between pathcopies() and mergecopies():
mergecopies() considers files that have been deleted and then added
back as different files, but pathcopies() does not.
Differential Revision: https://phab.mercurial-scm.org/D6330