perf: allow specifying the base of the merge in perfmergecalculate
We can now test the rebase case.
perf: add a --from flag to perfmergecalculate
Before this change, `perfmergecalculate` always benchmarked the merge of the
working copy with another revision. We can now benchmark the
`mergecalculate` call for any arbitrary pair of revisions.
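For illustration only (this is not the contrib/perf.py implementation; the
repository path and the REV1/REV2 labels are placeholders), the benchmarked
call looks roughly like this once both sides of the merge can be arbitrary
revisions:

    import time
    from mercurial import hg, merge as mergemod, ui as uimod

    repo = hg.repository(uimod.ui.load(), b'.')  # any local repository
    fromctx = repo[b'REV1']  # with --from, this need not be the working copy
    toctx = repo[b'REV2']
    ancestor = fromctx.ancestor(toctx)

    start = time.time()
    # acceptremote avoids interactive prompts in the middle of a benchmark
    mergemod.calculateupdates(repo, fromctx, toctx, [ancestor],
                              branchmerge=False, force=False,
                              acceptremote=True, followcopies=True)
    print('mergecalculate: %.3fs' % (time.time() - start))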
py3: fix test-narrow* which started failing because of recent changes
#skip-blame because it just adds r'' prefixes
Differential Revision: https://phab.mercurial-scm.org/D6447
manifest: add some documentation to _lazymanifest python code
It was not particularly easy figuring out the design of this class and keeping
track of how the pieces work. So might as well write some of it down for the
next person.
manifest: avoid corruption by dropping removed files with pure (issue5801)
Previously, removed files would simply be marked by overwriting the first byte
with NUL and dropping their entry in `self.positions`. But no effort was made
to ignore them when compacting the dictionary into text form. This allowed
them to slip into the manifest revision, since the code seems to be trying to
minimize string operations by copying as large a chunk as possible. As part of
this, `_compact()` walks the existing text based on entries in the `positions`
list, and consumes everything up to the next position entry. This typically
resulted in a ValueError complaining about unsorted manifest entries.
Sometimes files do seem to get dropped in large repos; it appears to
correspond to a new entry taking the same slot. A much more trivial problem is
that if the only changes were removals, `_compact()` didn't even run, because
`__delitem__` doesn't add anything to `self.extradata`. Now there's an
explicit variable to flag this, both to allow `_compact()` to run and to avoid
searching the manifest in cases where there are no removals.
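A condensed sketch of the pattern just described (attribute and method names
are approximations, not the actual mercurial/manifest.py code):

    class lazymanifestsketch(object):
        def __init__(self, data):
            self.data = data        # the manifest text being edited in place
            self.positions = []     # offsets of live entries in self.data
            self.extradata = []     # added/changed entries, filled elsewhere
            self.hasremovals = False

        def __delitem__(self, key):
            # the entry's first byte is overwritten with NUL and its offset
            # dropped from self.positions; nothing lands in self.extradata,
            # so remember explicitly that a removal happened
            self.hasremovals = True

        def _compact(self):
            if not self.extradata and not self.hasremovals:
                return  # nothing to rewrite
            # when copying large chunks of the old text, stop a chunk early
            # whenever the next entry starts with NUL, so removed files never
            # slip back into the new manifest text
            ...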
In practice, this behavior was mostly obscured by the check in fastdelta(),
which takes a different path that explicitly drops removed files if there are
fewer than 1000 changes. However, timeless has a repo where, after rebasing
tens of commits, a totally different path[1] is taken that bypasses the change
count check and hits this problem.
[1] https://www.mercurial-scm.org/repo/hg/file/2338bdea4474/mercurial/manifest.py#l1511
tests: demonstrate broken manifest generation with the pure module
This will be fixed next. But I don't fully understand how 'b.txt' is actually
removed properly in the second test, given what's broken. Also, I'm not sure
why 'bb.txt' is flagged as not being in the manifest, when it clearly appears
to be.
tests: add test for {file_mods}, {file_adds}, {file_dels} on merge commit
Differential Revision: https://phab.mercurial-scm.org/D6368
context: add ctx.files{modified,added,removed}() methods
Changeset-centric copy tracing is currently very slow because it often
reads manifests. One place it needs the manifest is in _chain(), where
it removes a copy X->Y if Y has subsequently gotten removed. I want to
speed that up by keeping track directly in the changeset of which
files are removed in the changeset. These methods will be similar to
ctx.p[12]copies() in that way: they will either read from the
changeset or otherwise calculate the information from the manifests.
Note that these are different from ctx.{modified,added,removed}() on
merge commits. Those functions always compare to p1, but the new ones
compare to both parents. filesadded() means "file does not exist in
either parent but exists now", filesremoved() means "file existed in
either parent but does not exist now", and filesmodified() means "file
existed in either parent and still exists". The set of files in
ctx.files() is the union of the files from the three new functions
(and the three new sets are pairwise disjoint).
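A self-contained illustration of those definitions, with plain sets standing
in for the manifests and for ctx.files() (this is not the context.py
implementation):

    p1 = {'a', 'b', 'c'}        # files present in the first parent
    p2 = {'a', 'b', 'd'}        # files present in the second parent
    now = {'a', 'b', 'd', 'e'}  # files present in the merge commit itself
    changed = {'b', 'c', 'e'}   # ctx.files(): files touched by the commit

    filesadded = {f for f in changed
                  if f in now and f not in p1 and f not in p2}
    filesremoved = {f for f in changed
                    if f not in now and (f in p1 or f in p2)}
    filesmodified = {f for f in changed
                     if f in now and (f in p1 or f in p2)}

    # the three sets partition ctx.files() and are pairwise disjoint
    assert changed == filesadded | filesremoved | filesmodified
    assert not (filesadded & filesremoved | filesadded & filesmodified
                | filesremoved & filesmodified)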
Also note that uncommitted merges are weird as usual. The invariant
mentioned above still holds, but the functions compare to p1 (and are
thus identical to the existing methods).
Differential Revision: https://phab.mercurial-scm.org/D6367
copies: split up _chain() in naive chaining and filtering steps
The function now has two clearly defined steps. The first step is the
actual chaining. This step is very cheap. The second step is filtering
out invalid copies. This step is expensive. For changeset-centric copy
tracing, I want to do the filtering step only at the end. This patch
prepares for that.
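A minimal sketch of that split, using plain dicts in place of the real copy
data (the helper names are made up and this is not the actual copies.py code):

    def _chain_naive(a_to_b, b_to_c):
        # cheap step: compose the two {dst: src} copy mappings so that
        # destinations in "c" point back to their ultimate source in "a"
        chained = dict(a_to_b)
        for dst, src in b_to_c.items():
            chained[dst] = a_to_b.get(src, src)
        return chained

    def _filter_invalid(copies, files_in_c, files_in_a):
        # expensive step in real life: drop copies whose destination no
        # longer exists or whose source never existed on the "a" side
        return {dst: src for dst, src in copies.items()
                if dst in files_in_c and src in files_in_a}

    chained = _chain_naive({'y': 'x'}, {'z': 'y'})
    print(_filter_invalid(chained, files_in_c={'z'}, files_in_a={'x'}))
    # {'z': 'x'}; the stale 'y' entry from the first step has been dropped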
Differential Revision: https://phab.mercurial-scm.org/D6418