upgrade: add information about sparse-revlog
Show information about sparse-revlog in debugformat, just like other
requirements.
sparse-revlog: implement algorithm to write sparse delta chains (issue5480)
The classic behavior of revlog._isgooddeltainfo is to consider the span size
of the whole delta chain, and limit it to 4 * textlen.
Once sparse-revlog writing is allowed (and enforced with a requirement),
revlog._isgooddeltainfo considers the span of the largest chunk as the
distance used in the verification, instead of using the span of the whole
delta chain.
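A minimal sketch of the resulting check, assuming a `deltainfo` object that
carries both measures (the attribute names here are illustrative, not the
exact upstream ones):

    def _isgooddeltainfo(revlog, deltainfo, textlen):
        if revlog._sparserevlog:
            # sparse-revlog: only the largest chunk of the sliced chain
            # counts as the read distance
            distance = deltainfo.largestchunkspan
        else:
            # classic behavior: byte span of the whole delta chain
            distance = deltainfo.distance
        return distance <= 4 * textlen

The slicing that produces those chunks is sketched further below.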
In order to compute the span of the largest chunk, we need to slice the
chain, with the new revision at the top of the revlog, into chunks, and take
the maximal span of these chunks. The sparse read density is a parameter to
the slicing: it stops when the global read density reaches this threshold.
For instance, a density of 50% means that 2 of 4 read bytes are actually used
for the reconstruction of the revision (the others are part of other chains).
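A simplified, hypothetical sketch of this density-driven slicing (`revdata`,
mapping a revision to its (offset, length) in the data file, is an assumption
of the sketch; the real implementation handles more cases):

    def slicechain(revs, revdata, targetdensity=0.5):
        """Slice an ascending delta chain into contiguous read chunks.

        The largest gaps between interesting revisions are skipped first,
        until the bytes used for reconstruction reach `targetdensity` of
        the bytes actually read.
        """
        start = lambda r: revdata[r][0]
        length = lambda r: revdata[r][1]
        readbytes = start(revs[-1]) + length(revs[-1]) - start(revs[0])
        usedbytes = sum(length(r) for r in revs)
        # gaps between consecutive revisions, sorted so pop() yields the
        # largest remaining one
        gaps = sorted(
            (start(b) - (start(a) + length(a)), i)
            for i, (a, b) in enumerate(zip(revs, revs[1:]))
        )
        splits = set()
        while gaps and usedbytes < targetdensity * readbytes:
            gapsize, i = gaps.pop()
            readbytes -= gapsize
            splits.add(i + 1)
        chunks, current = [], [revs[0]]
        for i, rev in enumerate(revs[1:], 1):
            if i in splits:
                chunks.append(current)
                current = []
            current.append(rev)
        chunks.append(current)
        return chunks

The span of the largest returned chunk is then the distance compared against
4 * textlen.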
This allows a new revision to potentially be stored as a diff against another
revision anywhere in the history, instead of being forced within the last
4 * textlen bytes. The result is much better compression on repositories with
many concurrent branches. Here is a comparison between deltas from current
upstream (aggressive-merge-deltas on by default) and deltas from a
sparse-revlog.
Comparison of `.hg/store/` size:

  mercurial (6.74% merges):
    before:    46,831,873 bytes
    after:     46,795,992 bytes (no relevant change)
  pypy (8.30% merges):
    before:   333,524,651 bytes
    after:    308,417,511 bytes (-8%)
  netbeans (34.21% merges):
    before: 1,141,847,554 bytes
    after:  1,131,093,161 bytes (-1%)
  mozilla-central (4.84% merges):
    before: 2,344,248,850 bytes
    after:  2,328,459,258 bytes (-1%)
  large-private-repo-A (19.73% merges):
    before: 41,510,550,163 bytes
    after:   8,121,763,428 bytes (-80%)
  large-private-repo-B (23.77% merges):
    before: 58,702,221,709 bytes
    after:   8,351,588,828 bytes (-86%)
Comparison of `00manifest.d` size:

  mercurial (6.74% merges):
    before:     6,143,044 bytes
    after:      6,107,163 bytes (no relevant change)
  pypy (8.30% merges):
    before:    52,941,780 bytes
    after:     27,834,082 bytes (-48%)
  netbeans (34.21% merges):
    before:   130,088,982 bytes
    after:    119,337,636 bytes (-8%)
  mozilla-central (4.84% merges):
    before:   215,096,339 bytes
    after:    199,496,863 bytes (-8%)
  large-private-repo-A (19.73% merges):
    before: 33,725,285,081 bytes
    after:     390,302,545 bytes (-99%)
  large-private-repo-B (23.77% merges):
    before: 49,457,701,645 bytes
    after:   1,366,752,187 bytes (-97%)
The better delta chains provide a performance boost in relevant repositories:

  pypy, bundling 1000 revisions:
    before: 1.670s
    after:  1.149s (-31%)
Unbundling got a bit slower, probably because the sparse algorithm is still
pure Python.

  pypy, unbundling 1000 revisions:
    before: 4.062s
    after:  4.507s (+10%)
Performance of bundle/unbundle in repositories with few concurrent branches
(e.g. mercurial) is unaffected.

No significant difference has been noticed when timing `hg push` and `hg
pull` locally. More stable timings are being gathered.
As with aggressive-merge-delta, better deltas come with longer delta chains,
and longer chains have a performance impact. For example, the length of the
chain needed to restore the manifest of pypy's tip moves from 82 items to
1929 items, which moves the restore time from 3.88ms to 11.3ms.

Delta chain length is an independent issue that affects repositories without
this change. It will be dealt with independently.
No significant differences have been observed on repositories where
`sparse-revlog` does not have much effect (mercurial, unity, netbeans). On
pypy, small differences have been observed on some operations affected by
delta chain building and retrieval:
  pypy, perfmanifest:
    before: 0.006162s
    after:  0.017899s (+190%)
  pypy, commit:
    before: 0.382s
    after:  0.376s (-1%)
  pypy, status:
    before: 0.157s
    after:  0.168s (+7%)
More comprehensive and stable timing comparisons are in progress.
sparse-revlog: new requirement enabled with format.sparse-revlog
The meaning of the new 'sparse-revlog' requirement is that the revlogs are
allowed to contain wider delta chains with larger holes between the interesting
chunks. These sparse delta chains should be read in several chunks to avoid a
potential explosion of memory usage.
Former versions do not know how to read a delta chain in several chunks. They
would keep reading it in a single read, and therefore would be subject to the
potential memory explosion. Hence this new requirement: only versions with
support for sparse-revlog reading should be allowed to read such a revlog.
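A sketch of how such gating usually works; the requirement string, set
contents, and helper name here are assumptions, not the exact upstream ones:

    # hypothetical set of requirements this version understands
    SUPPORTED_REQUIREMENTS = {'revlogv1', 'store', 'generaldelta',
                              'sparserevlog'}

    def checkrequirements(requirements):
        # refuse to open a repository that asks for features we lack
        missing = set(requirements) - SUPPORTED_REQUIREMENTS
        if missing:
            raise ValueError('repository requires features unknown to this '
                             'version: %s' % ', '.join(sorted(missing)))

An old client without 'sparserevlog' in its supported set thus refuses to
open the repository instead of misreading it.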
Implementation of this new algorithm and tools to enable or disable the
requirement will follow in the next changesets.
revlog: extract `deltainfo.distance` for future conditional redefinition
This commit exists to make the next one clearer.
shelve: pick the most recent shelve if none specified for --patch/--stat
Differential Revision: https://phab.mercurial-scm.org/D3950
shelve: improve help text for --patch and --stat
It's not currently obvious why "hg shelve -p" fails, since -p doesn't take an argument.
Differential Revision: https://phab.mercurial-scm.org/D3949
ssh: avoid reading beyond the end of stream when using compression
Compressed streams can be used as part of getbundle. The normal read()
operation of bufferedinputpipe will try to fulfill the request exactly
and can deadlock if the server sends less because it is done. At the same
time, the bundle2 logic will stop reading when it believes it has gotten
all parts of the bundle, which can leave behind end of stream markers as
used by bzip2 and zstd.
To solve this, introduce a new optional unbufferedread interface and provide
it in bufferedinputpipe and doublepipe. If there is buffered data left, it
will be returned; otherwise it will issue a single read request and return
whatever it obtains.
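A minimal sketch of that contract (the buffering details are simplified
assumptions; the real bufferedinputpipe tracks its buffer differently):

    import os

    def unbufferedread(pipe, size):
        """Return buffered data if any, else issue exactly one read."""
        if pipe._buffer:
            # serve already-buffered data without blocking for more
            chunk = pipe._buffer.pop(0)
            if len(chunk) > size:
                pipe._buffer.insert(0, chunk[size:])
            return chunk[:size]
        # nothing buffered: a single read, returning whatever arrives
        return os.read(pipe._input.fileno(), size)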
Reorganize the decompression handlers to try harder to read until the end of
stream, especially if the requested read can already be fulfilled. Checking
for end of stream is messy with Python 2; none of the standard compression
modules properly exposes it. At least with zstd and bzip2, decompressing will
remember EOS and fail for empty input after the EOS has been seen. For zlib,
the only way to detect it with Python 2 is to duplicate the decompressobj and
force some additional data into it. The common handler can be further
optimized, but works as a PoC.
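For zlib, that duplication trick could look roughly like this sketch (not the
exact upstream helper):

    import zlib

    def _zlibeos(decomp):
        # after EOS, a zlib decompressobj leaves extra input untouched in
        # unused_data; probe a copy so the real stream state is unharmed
        probe = decomp.copy()
        try:
            probe.decompress(b'x')
            probe.flush()
        except zlib.error:
            return False
        return probe.unused_data == b'x'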
Differential Revision: https://phab.mercurial-scm.org/D3937
revset: add larger test for heads(ancestors(…))
It is important not to regress on this benchmark, so we move it into the
"base" file. We also add another benchmark with more than two revisions.
revset-benchmark: use a generic revset to test `heads(commonancestors())`
This allows benchmarking revset performance in repositories other than just
the mercurial one.
revlog: reintroduce `revlog.descendant` as deprecated
Reintroduce `revlog.descendant` to help extension authors update their
extensions to use the new API.
context: reintroduce `ctx.descendant` as deprecated
Reintroduce `ctx.descendant` to help extension authors update their
extensions to use the new API.
obsolete: explode if metadata contains invalid UTF-8 sequence (API)
The current metadata API can be a source of bugs since it forces callers to
handle encoding conversion by themselves. So let's make it reject bad data as
a last-ditch check. I assume there's no metadata field which is supposed to
store an arbitrary BLOB the way transplant_source does.
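Such a last-ditch check might look like this sketch (assuming bytes keys and
values; the exact location and exception type are not shown here):

    def checkutf8metadata(metadata):
        # reject metadata that is not valid UTF-8 instead of storing it
        for key, value in metadata.items():
            try:
                key.decode('utf-8')
                value.decode('utf-8')
            except UnicodeDecodeError:
                raise ValueError('obsstore metadata must be valid UTF-8')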
obsolete: store user name and note in UTF-8 (issue5754) (BC)
Before, user names were stored in local encoding and transferred across
repositories, which made it impossible to restore non-ASCII user names on
different platforms. This patch fixes new markers to be encoded in UTF-8
and decoded back to local encoding when displaying. Existing markers are
unfixable so they may result in mojibake.
I don't like the API that requires the metadata dict to be UTF-8 encoded,
which is a source of bugs, but there's no abstraction layer to process the
encoding thingy efficiently. So we apply the same rule as the extras dict to
obsstore metadata.
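Sketched with mercurial's encoding helpers, the convention for e.g. the user
field looks like this (the function names are illustrative):

    from mercurial import encoding

    def storeduser(user):
        # local encoding -> UTF-8 for storage in the marker
        return encoding.fromlocal(user)

    def displayuser(rawuser):
        # UTF-8 from the marker -> local encoding for display
        return encoding.tolocal(rawuser)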
revset: special case commonancestors(none()) to be empty set
This matches the behavior of ancestor(none()).
From an implementation perspective, ancestor() and commonancestors() are
intersections, and ancestors() is a union, so it would make some sense for
commonancestors(none()) to return all revisions. However, ancestor(none())
isn't implemented that way, and returning everything here would break
ancestor(x) == max(commonancestors(x)).
From a user perspective, the ancestors of nothing are nothing, whichever kind
of operation the ancestor predicate performs.
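The set-theoretic intuition, as a small illustrative sketch with plain Python
sets standing in for revsets:

    from functools import reduce

    def ancestors_union(sets):
        # ancestors(): union of the inputs' ancestor sets
        return set().union(*sets)

    def commonancestors_intersection(sets):
        # commonancestors(): intersection of the inputs' ancestor sets;
        # mathematically an empty intersection would be "everything", but
        # the revset now special-cases it to the empty set, matching
        # ancestor(none())
        if not sets:
            return set()
        return reduce(set.intersection, sets)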