Boris Feld <boris.feld@octobus.net> [Fri, 07 Sep 2018 11:17:33 -0400] rev 39498
snapshot: add refining logic at the findeltainfo level
Once we find a delta, we want the candidates logic to challenge it,
searching for a better candidate.
The logic at the lower level is still missing. We'll introduce it later.
Adding small changes in individual commits makes it simpler to explain the
code change.
This is another small step toward turning `_refinedgroups` into a co-routine.
Boris Feld <boris.feld@octobus.net> [Fri, 07 Sep 2018 11:17:32 -0400] rev 39497
snapshot: use None as a stop value when looking for a good delta
Having a clear stop value should help keep the logic around the co-routine
clear. The alternative of using a StopIteration exception gives a messier
result.
This is one small step toward turning `_refinedgroups` into a co-routine.
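A minimal, self-contained sketch of that protocol (stand-in code, not the
actual revlog logic): the consumer send()s its current best pick back into
the generator, and the generator yields None once it has nothing more to
offer.
    def _refinedgroups(groups):
        # Yield candidate groups; the consumer send()s back its current
        # best once one is found, which is our cue to stop (or, later,
        # to start refining around that base).
        for group in groups:
            good = yield group
            if good is not None:
                break
        # None is the explicit stop value, instead of letting a
        # StopIteration reach the consumer.
        yield None

    def findbest(groups):
        gen = _refinedgroups(iter(groups))
        group = next(gen)
        best = None
        while group is not None:
            best = min(group)  # stand-in for the real quality check
            group = gen.send(best)
        return best

    print(findbest([[5, 3], [2, 1]]))  # 3 -- stops after the first group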
Boris Feld <boris.feld@octobus.net> [Fri, 07 Sep 2018 11:17:32 -0400] rev 39496
snapshot: introduce an intermediate `_refinedgroups` generator
This method will be used to improve the search for a good snapshot base. To
keep things simple, we introduce the necessary function before making any
change to the delta base logic. The next handful of commits will focus on
refactoring the code to let that new logic land as clearly as possible.
# General Idea
Right now, the search for a good delta base stops as soon as we find a good
one. However, when using sparse-revlog, we should probably try a bit harder.
We make a significant effort to increase delta reuse by jumping onto
"unrelated" delta chains that provide better results. Moving to another chain
for a better result is good, but we have no guarantee we jump in at a
reasonable point in that new chain. When we consider the chains related to
the parents, we start from the higher-level snapshots. This is a way to
consider the snapshots closest to the current revision, the ones with the
best chance to produce a small delta. We do not benefit from this walk order
when jumping to a better "unrelated" stack.
To counter-balance this, we'll introduce a way to "refine" the result. After
a good delta has been found, we'll keep searching for a better delta, using
the current best one as a starting point.
# Target Setup
The `finddeltainfo` method is responsible for the general search for a good
delta. It requests candidate bases from `_candidategroups` and decides which
ones are usable.
The `_candidategroups` generator acts as a top-level filter; it does not care
about how we pick candidates, it just does basic filtering, excluding
revisions that have already been tested or that are an obvious misfit.
The `_rawgroups` generator is the one with the actual ancestor-walking logic.
It does not care about what would make a good delta or what has already been
tested; it just issues the initial candidates.
We introduce a new `_refinedgroups` function to bridge the gap between
`_candidategroups` and `_rawgroups`. It delegates the initial iteration logic
and then performs the relevant refining of the valid base once it is found.
(This refining logic is yet to be added to the function.)
All these layers of logic are fairly independent and easier to understand
when standing alone, not mixed with each other. This also makes it easy to
test and try different approaches for one of those four layers without
affecting the other ones.
# Technical details
To communicate `finddeltainfo`'s choice of "current best delta base" to the
`_refinedgroups` logic, we plan to use Python's co-routine feature. The
`_candidategroups` and `_refinedgroups` generators will become co-routines.
This will allow `_refinedgroups` to detect when a good delta has been found
and trigger the various refining steps.
For now, `_candidategroups` will just pass the value down the stack.
After poking at various options, the co-routine approach appears to be the
best way to keep each layer focused on its duty, without spreading
implementation details across layers.
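For now, that pass-through layer can be pictured as follows (hedged sketch
with simplified signatures, composing with the `_refinedgroups` sketch
earlier; the real generator also does filtering on top of this):
    def _candidategroups(refined):
        # Top-level filter layer: forward each group upward and relay
        # the consumer's "current best base" feedback back down.
        group = next(refined)
        while group is not None:
            good = yield group          # feedback from finddeltainfo
            group = refined.send(good)  # pass it down the stack
        yield None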
Boris Feld <boris.feld@octobus.net> [Fri, 07 Sep 2018 11:17:31 -0400] rev 39495
snapshot: consider unrelated snapshots at a similar level first
This new step is inserted before considering using a level-N snapshot as a
base for a level-N+1 snapshot. We first check if existing level-N+1 snapshots
using the same base would be a suitable base for a level-N+2 snapshot.
This increases snapshot reuse and limits the risk of snapshot explosion in
very branchy repositories.
Using a "deeper" snapshot as the base also results in a smaller snapshot since
it builds a level-N+2 intermediate snapshot instead of an N+1 one.
This logic is similar to the one we added in a previous commit; that previous
commit only applied it to level-0 "siblings".
We can see this effect in the test repository. Snapshots moved from lower
levels to higher levels.
Boris Feld <boris.feld@octobus.net> [Fri, 07 Sep 2018 11:17:30 -0400] rev 39494
snapshot: consider all snapshots in the parents' chains
There is no reason to only consider full snapshots as possible bases for an
intermediate snapshot. Now that the basic principles have been set, we can
start adding more levels of snapshots.
We now consider all snapshots in the parents' chains (full or intermediate).
This creates a chain of intermediate snapshots, each smaller than the previous
one.
# Effect On The Test Repository
In the test repository, we can see a decrease in the revlog size and slightly
shorter delta chain.
However, that approach creates snapshots more frequently, increasing the risk
of running into problematic cases in very branchy repositories (not triggered
by the test repository). The next changesets will remove that risk by adding
logic that increases delta reuse.
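The candidate walk over a parent's chain can be pictured like this (sketch;
`issnapshot` and `deltaparent` are used in the spirit of the revlog API, so
treat the exact calls as assumptions):
    def snapshotsinchain(revlog, rev):
        # Walk rev's delta chain toward its base, yielding every
        # snapshot on the way; the last one yielded is the level-0 full
        # snapshot (deltaparent() returns nullrev, -1, for a full text).
        nullrev = -1
        while rev != nullrev:
            if revlog.issnapshot(rev):
                yield rev
            rev = revlog.deltaparent(rev)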
Boris Feld <boris.feld@octobus.net> [Fri, 07 Sep 2018 11:17:30 -0400] rev 39493
snapshot: search for unrelated but reusable full-snapshot
# New Strategy Step: Reusing Snapshot Outside Of Parents' Chain.
If no suitable base was found in the parents' chains, see if we could reuse a
full snapshot not directly related to the current revision. Such a search can
be expensive, so we only search for snapshots appended to the revlog *after*
the bases used by the parents of the current revision (the ones we just
tested). We assume the parents' bases were created because the previous
snapshots were unsuitable, so the odds are low that those would be useful
now.
This search gives us a chance to reuse a delta chain unrelated to the current
revision. Without this reuse, topological branches would keep opening new
full chains, creating more and more snapshots as the repository grows.
In repositories with many topological branches, the lack of delta reuse can
create too many snapshots, reducing overall compression to nothing. This
results in a very large repository and other usability issues.
For now, we still focus on creating level-1 snapshots. However, this principle
will play a large part in how we avoid snapshot explosion once we have more
snapshot levels.
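The bounded search can be sketched as follows (hypothetical helper;
`issnapshot` and `snapshotdepth` are assumed revlog APIs, and the real code
avoids a naive linear rescan where it can):
    def unrelatedfullsnapshots(revlog, parentbases):
        # Only scan revisions appended *after* the bases the parents
        # use: anything older was presumably already found unsuitable.
        floor = max(parentbases) + 1
        for rev in range(floor, len(revlog)):
            if revlog.issnapshot(rev) and revlog.snapshotdepth(rev) == 0:
                yield rev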
# Effects On The Test Repository
In the test repository we created, we can see the beneficial effect of such
reuse. We need very few level-0 snapshots and the overall revlog size has
decreased.
The `hg debugrevlog` call shows a "lvl-2" snapshot. It comes from the
existing delta logic using the `prev` revision (the revlog's tip) as the
base. In this specific case, it turns out the tip was a level-1 snapshot.
This is a coincidence that can be ignored.
Finding and testing against all these unrelated snapshots can have a
performance impact at write time. For now, we focus on building good delta
chains; performance concerns will be dealt with later, in another series.
Boris Feld <boris.feld@octobus.net> [Fri, 07 Sep 2018 11:17:29 -0400] rev 39492
snapshot: try intermediate snapshot against parents' base
# Regarding The Series Started By This Changeset
This is the first changeset of a group adjusting the delta chain strategy to
build a useful chain of intermediate snapshots. The series will introduce a
full strategy producing chains of multiple snapshots on top of which a
"usual" delta chain will be built.
That strategy will have multiple steps to maximize snapshot reuse, avoid
pathological cases, and improve overall compression in very branchy
repositories. An important property of sparse-revlogs using such snapshot
chains is that they can use very short delta chains without a problematic
impact on the resulting compression. Shorter delta chains are important to
achieve good performance.
To make each step clear, we'll introduce them one by one.
See the end of this series for full details.
# Regarding This Changeset
Before this change, if we could not store the current revision as a delta
against a "simple" candidate (p1, p2, prev), we created a new level-0
snapshot (also called a full snapshot).
As the first step, we introduce a simple strategy: try an intermediate
level-1 snapshot against the chain base of the parents of the "current
revision".
The "current revision" is the one we are currently trying to store in the
revlog, triggering this search for a good delta base.
The first item in the chain is always a level-0 snapshot.
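In sketch form (assumed internal helpers; in the real revlog, `_deltachain(rev)`
returns the chain leading to `rev`, whose first item is that level-0
snapshot):
    def parentchainbases(revlog, p1, p2):
        # Candidate bases for an intermediate level-1 snapshot: the
        # chain base (a full snapshot) of each existing parent.
        nullrev = -1
        for p in (p1, p2):
            if p == nullrev:
                continue
            chain, _stopped = revlog._deltachain(p)
            yield chain[0]  # first item: the level-0 snapshot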
# Effect On The Test Repository
We can already see the effect on the test repository: most of the snapshots
have shifted from level 0 to level 1, and the overall size has slightly
decreased. (However, keep in mind that this repository only emulates real
data.)
# Regarding Statistics
The current series focuses on improving the chains we build. Improving the
performance of this logic will be done as a second step. Sparse-revlog is
still experimental and disabled by default.
We'll provide more statistics about the resulting sizes and delta chains at
the end of this series.
Boris Feld <boris.feld@octobus.net> [Mon, 10 Sep 2018 09:08:24 -0700] rev 39491
sparse-revlog: add a test checking revlog deltas for a churning file
The test repository contains 5000 revisions and is therefore slow to build:
five minutes with CHG, over fifteen minutes without. It is too slow to build
during the test. Bundling all the content produces a sizeable result, 20MB,
too large to be committed. Instead, we commit a script to build the expected
bundle, and the test checks whether the bundle is available. Any run of the
script will produce the same repository content, resulting in the same
hashes.
We tried using smaller repositories; however, they miss most of the cases we
are planning to improve. Having them appear in a 5000-revision repository is
already nice; we usually see these cases in repositories on the order of one
million revisions.
This test will be very useful for checking various strategies for building
the deltas stored in a sparse-revlog.
In this series, we will focus our attention on the following metrics:
The ones that impact the final storage performance (size, space):
* size of the revlog data file (".hg/store/data/*.d")
* chain length info
The ones that describe the delta patterns:
* number of snapshot revisions (and their levels)
* size taken by snapshot revisions (and their levels)
Boris Feld <boris.feld@octobus.net> [Sat, 18 Aug 2018 12:45:44 +0200] rev 39490
tests: add a `tests/artifacts/` directory
That directory is meant to cache large items used by tests that are slow to
generate. See the 'PURPOSE' file for details, and the next changesets for a
first user.
Meirambek Omyrzak <meirambek77@gmail.com> [Wed, 05 Sep 2018 01:19:48 +0300] rev 39489
verify: make output less confusing (issue5924)
Output before: "500 files, 2035 changesets, 2622 total revisions"
Output after: "checked 2035 changesets with 2622 changes to 500 files"
The new wording was suggested in the comments on the issue.
Differential Revision: https://phab.mercurial-scm.org/D4476
Boris Feld <boris.feld@octobus.net> [Tue, 04 Sep 2018 21:28:28 +0200] rev 39488
revlog: clarify the comment attached to delta reuse
The previous version was a bit complicated and referred to a deprecated
configuration option.
Boris Feld <boris.feld@octobus.net> [Tue, 04 Sep 2018 21:05:21 +0200] rev 39487
revlog: drop duplicated code
This code probably got duplicated by a rebase/evolve conflict. We drop the
extra copy; the other copy is right below.
This had no real effect, since other logic ensures that we never test the
same revision twice.
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 05 Sep 2018 09:04:40 -0700] rev 39486
wireprotov2peer: properly format errors
formatrichmessage() expects an iterable containing dicts with
well-defined keys. We were passing in something else. This caused
an exception.
Change the code to call formatrichmessage() with the proper argument.
And add a TODO to potentially emit the proper data structure from
the server in the first place.
Differential Revision: https://phab.mercurial-scm.org/D4441
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 23 Aug 2018 13:50:47 -0700] rev 39485
wireprotov2peer: report exceptions in frame handling against request future
Otherwise the future may never resolve, which could cause deadlock.
Differential Revision: https://phab.mercurial-scm.org/D4440
Anton Shestakov <av6@dwimlabs.net> [Sat, 08 Sep 2018 21:58:51 +0800] rev 39484
httppeer: use util.readexactly() to abort on incomplete responses
Plain resp.read(n) may not return exactly the n bytes we need, and to detect
such cases before trying to interpret whatever has been read, we can use
util.readexactly(), which raises an Abort when the stream ends unexpectedly.
In the first case here, readexactly() prevents a traceback with struct.error;
in the second, it avoids looking for invalid compression engines.
In this test case, _wraphttpresponse doesn't catch the problem (presumably
because it doesn't know the transfer encoding), and the code continues
reading the response until it gets to the compression engine data. Maybe
there should be checks before execution gets there, but I'm not sure where
(httplib?).
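util.readexactly()'s contract is roughly the following (simplified sketch
with a stand-in exception type):
    class Abort(Exception):
        pass

    def readexactly(stream, n):
        # Read exactly n bytes, aborting on a short read instead of
        # letting callers misinterpret a truncated response.
        s = stream.read(n)
        if len(s) < n:
            raise Abort('stream ended unexpectedly'
                        ' (got %d bytes, expected %d)' % (len(s), n))
        return s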
Anton Shestakov <av6@dwimlabs.net> [Sat, 08 Sep 2018 23:57:07 +0800] rev 39483
httppeer: calculate total expected bytes correctly
User-facing error messages that handled httplib.IncompleteRead errors in
Mercurial used to look like this:
abort: HTTP request error (incomplete response; expected 3 bytes got 1)
But the errors being handled underneath the UI look like this:
IncompleteRead(1 bytes read, 3 more expected)
I.e. the error actually counts the total number of expected bytes minus the
bytes already received.
Before, users could see weird messages like "expected 10 bytes got 10", while
in reality httplib expected 10 _more_ bytes (20 in total).
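In terms of the exception's fields, the fix amounts to something like this
(sketch with stand-in names; `e.partial` holds the bytes received and
`e.expected` the bytes still missing):
    try:
        import httplib          # Python 2
    except ImportError:
        import http.client as httplib  # Python 3

    def saferead(resp, amt):
        try:
            return resp.read(amt)
        except httplib.IncompleteRead as e:
            # e.expected counts the *additional* bytes httplib wanted,
            # so the user-facing total is partial + expected.
            got = len(e.partial)
            raise IOError('incomplete response; expected %d bytes got %d'
                          % (got + e.expected, got))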
Martin von Zweigbergk <martinvonz@google.com> [Fri, 07 Sep 2018 23:36:09 -0700] rev 39482
lazyancestors: reuse __iter__ implementation in __contains__
There was a comment in the code that said "Trying to do both __iter__
and __contains__ using the same visit heap and seen set is complex
enough that it slows down both. Keep them separate.". However, it
seems easy and efficient to make __contains__ keep an iterator across
calls.
I couldn't measure any slowdown from `hg bundle --all` (which seems to
call lazyancestors.__contains__ frequently).
Differential Revision: https://phab.mercurial-scm.org/D4508
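A minimal sketch of the pattern (self-contained stand-in; the real class
also deals with inclusiveness, stop revisions, and building the iterator
lazily):
    class lazyancestors(object):
        # Membership tests share one iterator and remember everything
        # it has produced so far, instead of restarting the walk.
        def __init__(self, ancestoriter):
            self._containsiter = ancestoriter
            self._containsseen = set()

        def __contains__(self, target):
            seen = self._containsseen
            if target in seen:
                return True
            # Resume where the previous call left off.
            for rev in self._containsiter:
                seen.add(rev)
                if rev == target:
                    return True
            return False

    la = lazyancestors(iter([7, 5, 3, 2, 0]))
    assert 5 in la        # walks 7, 5
    assert 5 in la        # answered from the seen set
    assert 4 not in la    # exhausts the rest of the walk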
Martin von Zweigbergk <martinvonz@google.com> [Sun, 09 Sep 2018 23:16:55 -0700] rev 39481
lazyancestors: extract __iter__ to free function
The next patch will keep a reference to the returned iterator in a
field, which would otherwise result in a reference cycle.
Differential Revision: https://phab.mercurial-scm.org/D4517
Boris Feld <boris.feld@octobus.net> [Thu, 30 Aug 2018 01:53:21 +0200] rev 39480
phase: report number of non-public changeset alongside the new range
When interacting with non-publishing repository or bundle, it is useful to
have some information about the phase of the changeset we just pulled.
This changeset updates the "new changesets MIN:MAX" output to also includes
phases information for non-public changesets. Displaying extra data about
non-public changesets means the output for exchange with publishing repository
(the default) is unaffected.
Matt Harbison <matt_harbison@yahoo.com> [Fri, 07 Sep 2018 23:54:42 -0400] rev 39479
tests: disable test-nointerrupt on Windows
Per the followup discussion[1], proc.send_signal(INT) in timeout.py raises a
ValueError because of an unsupported signal. I don't like missing test
coverage for this on Windows, but this is the last test failing on Windows,
and red all the time hides new failures.
[1] https://phab.mercurial-scm.org/D3716
Matt Harbison <matt_harbison@yahoo.com> [Fri, 07 Sep 2018 23:39:49 -0400] rev 39478
tests: conditionalize an error message about unlinking a non-empty directory
The message on Windows comes from win32.unlink(). It looks like os.unlink()
on posix platforms is a simple call to unlink(3), which turns into
unlinkat(2).
Since there's a comment in one of the tests that the message should be
improved, I don't think it's worth adding a check in win32.unlink() to see if
the directory is empty, if that function is always going to fail on a
directory. (It seems like the POSIX spec allows unlinking directories,
though.)
Martin von Zweigbergk <martinvonz@google.com> [Fri, 07 Sep 2018 14:48:38 -0700] rev 39477
ancestors: add nullrev to set from the beginning
Differential Revision: https://phab.mercurial-scm.org/D4507
Yuya Nishihara <yuya@tcha.org> [Sat, 08 Sep 2018 10:59:24 +0900] rev 39476
ancestor: filter out initial revisions lower than stoprev
Yuya Nishihara <yuya@tcha.org> [Sat, 08 Sep 2018 10:48:42 +0900] rev 39475
ancestor: add test showing inconsistency between __iter__ and __contains__
Boris Feld <boris.feld@octobus.net> [Thu, 06 Sep 2018 19:37:38 -0400] rev 39474
ancestors: ensure a consistent order even in the "inclusive" case
It seems odd to first issue the "source" revs and then the other ancestors.
In addition, doing so can break the other contract of always issuing a child
before its parent. We update the code to apply the same logic to all yielded
revisions. No tests break, so we seem to be in the clear, except where we
explicitly test the order.
Boris Feld <boris.feld@octobus.net> [Thu, 06 Sep 2018 17:00:28 -0400] rev 39473
ancestors: actually iterate over ancestors in topological order (issue5979)
This code previously used deque-based logic: the first ancestors seen were
the first ancestors to be emitted. In branching/merging situations, this can
result in ancestors being yielded before their descendants, breaking the
object's contract.
We were affected by this issue while working on the copy tracing code. At
about the same time, Axel Hecht <axel@mozilla.com> reported the issue and
provided the test case used in this changeset. Thanks, Axel.
Running `hg perfancestors` does not show a significant difference between the
old and the new version.
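A sketch of the repaired iteration (relying on the revlog invariant that a
parent's revision number is always smaller than its child's, so yielding in
descending revision order never emits an ancestor before its descendant):
    import heapq

    def topoancestors(parentsfn, revs):
        # Max-heap via negated revision numbers.
        visit = [-r for r in revs]
        heapq.heapify(visit)
        seen = set(revs)
        while visit:
            rev = -heapq.heappop(visit)
            yield rev
            for p in parentsfn(rev):
                if p != -1 and p not in seen:  # -1 is nullrev
                    seen.add(p)
                    heapq.heappush(visit, -p)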
Yuya Nishihara <yuya@tcha.org> [Thu, 06 Sep 2018 22:12:21 +0900] rev 39472
doc: use modern import style in runrst
Yuya Nishihara <yuya@tcha.org> [Sun, 26 Aug 2018 22:18:09 +0900] rev 39471
hgweb: do not audit URL path as working-directory path
Since hgweb is an interface to repository data, we don't need to prohibit
paths conflicting within the filesystem. Access to working files is still
audited by filectx.
Yuya Nishihara <yuya@tcha.org> [Sun, 26 Aug 2018 22:23:25 +0900] rev 39470
hgweb: map Abort to 403 error to report inaccessible path for example
Abort is so common in our codebase. We could instead introduce a dedicated
type for path auditing errors, but we'll probably have to catch error.Abort
anyway.
As you can see, an abort message may include a full path to the repository,
which might be considered an information leak. If that matters, we should
hide the message and send it to the server log instead.
Yuya Nishihara <yuya@tcha.org> [Fri, 07 Sep 2018 22:19:28 +0900] rev 39469
hgweb: add error template to json so it won't crash
Yuya Nishihara <yuya@tcha.org> [Fri, 07 Sep 2018 22:12:46 +0900] rev 39468
hgweb: show shortlog by default in json output (issue5978)
Augie Fackler <augie@google.com> [Fri, 07 Sep 2018 11:35:43 -0400] rev 39467
merge with stable
Pulkit Goyal <pulkit@yandex-team.ru> [Tue, 04 Sep 2018 15:16:22 +0300] rev 39466
tests: improve the widening testing in test-narrow-widen*
Before this patch, we were testing `hg tracked --addinclude` with a file
which had not been introduced in any changeset at that point.
If you look closely at the tests, wider/f was introduced on the server after
the narrow clone was done, so extending the existing clone to include wider/f
does not make sense. We should test extending with a file which exists.
Differential Revision: https://phab.mercurial-scm.org/D4452
Pulkit Goyal <pulkit@yandex-team.ru> [Tue, 04 Sep 2018 19:26:50 +0300] rev 39465
narrow: use util.readfile() and improve error message using --narrowspec
This patch improves the error message and uses util.readfile() for reading
the narrowspec file while cloning.
Differential Revision: https://phab.mercurial-scm.org/D4462
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 04 Sep 2018 15:55:23 -0700] rev 39464
merge: use vfs methods for I/O
All I/O is supposed to be performed via vfs instances so filesystems
can be abstracted. The previous commit moved over the old code from purge,
which didn't go through the vfs layer. This commit ports that purge code to
use the vfs layer.
The vfs layer didn't have a method to remove a single directory, so
it was added as part of implementing this.
Differential Revision: https://phab.mercurial-scm.org/D4478
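A sketch of the shape such a vfs method takes (assumed names; the real vfs
class has many more helpers):
    import os

    class vfs(object):
        def __init__(self, base):
            self.base = base

        def join(self, path):
            return os.path.join(self.base, path)

        def rmdir(self, path):
            # Remove a single (empty) directory under the vfs root.
            return os.rmdir(self.join(path))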
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 18:30:12 -0700] rev 39463
merge: move purge logic from extension
Working directory purging feels like functionality that should be
in core rather than in an extension.
This commit effectively moves the core purging logic from the
purge extension to merge.py.
Code was refactored slightly. Rather than dealing with printing in this
function, the function is now a generator of paths, and the caller can worry
about printing.
Differential Revision: https://phab.mercurial-scm.org/D4477
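A minimal sketch of that generator shape (hypothetical signature; the real
function computes the unwanted paths itself):
    def purge(vfs, paths):
        # Remove each path and yield it; reporting is left entirely
        # to the caller.
        for p in paths:
            vfs.unlink(p)  # assumed vfs API, as in the previous commit
            yield p

    # The caller decides about printing:
    #   for path in purge(repo.wvfs, unwantedpaths):
    #       ui.write('removing file %s\n' % path)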
Matt Harbison <matt_harbison@yahoo.com> [Thu, 06 Sep 2018 23:37:24 -0400] rev 39462
tests: stabilize test-removeemptydirs.t on Windows
Yuya Nishihara <yuya@tcha.org> [Thu, 06 Sep 2018 21:55:30 +0900] rev 39461
help: add internals.wireprotocolv2 to the table, and remove redundant header
Kyle Lippincott <spectral@google.com> [Fri, 17 Aug 2018 19:18:53 -0700] rev 39460
match: improve includematcher.visitchildrenset to be much faster and cached
This improves the speed of visitchildrenset considerably, especially when there
are complicated matchers involved that may have many entries in _dirs or
_parents.
Unfortunately the benchmark isn't easily upstreamed due to its reliance on
https://github.com/vstinner/perf (primarily due to the conflict when importing
it if I were to contribute the benchmark as contrib/matcherbenchmarks.py)
instead of asv or some other perf measurement system.
To describe the benchmark briefly: I generated an includematcher of either 5 or
3500 "rootfilesin:prefix1/prefix2/prefix3/<randomsubdirs, 1-8 levels deep>"
items in the 'setup' function, and then called
`im.visitchildrenset('prefix1/prefix2')` in the 'stmt' function in perf.timeit.
For the set of 5:
- before: 15.3 us +- 2.9 us
- after: 1.59 us +- 0.02 us
For the set of 3500:
- before: 3.90 ms +- 0.10 ms
- after: 3.15 us +- 0.09 us (note the m->u change)
Differential Revision: https://phab.mercurial-scm.org/D4351
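The caching half of the change boils down to per-directory memoization; a
minimal sketch (hypothetical wrapper, the real change caches on the matcher
instance and also speeds up the computation itself):
    def cachedvisitchildrenset(matcher):
        cache = {}
        def visitchildrenset(dirname):
            # visitchildrenset() is pure for a given matcher, so its
            # per-directory answers can be memoized.
            if dirname not in cache:
                cache[dirname] = matcher.visitchildrenset(dirname)
            return cache[dirname]
        return visitchildrenset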
Pulkit Goyal <pulkit@yandex-team.ru> [Thu, 06 Sep 2018 03:21:05 +0530] rev 39459
py3: add new passing tests spotted by the buildbot
Differential Revision: https://phab.mercurial-scm.org/D4495
Pulkit Goyal <pulkit@yandex-team.ru> [Thu, 06 Sep 2018 03:24:27 +0530] rev 39458
tests: order the imports in test-fastannotate-hg.t
The wrong ordering breaks test-check-module-imports.t on Python 3. I am not
sure why that test is so much more active on py3.
Differential Revision: https://phab.mercurial-scm.org/D4496
Matt Harbison <matt_harbison@yahoo.com> [Thu, 06 Sep 2018 00:51:21 -0400] rev 39457
lfs: ensure the blob is linked to the remote store on skipped uploads
I noticed a "missing" blob when pushing two repositories with common blobs to a
fresh server, and then running `hg verify` as a user different from the one
running the web server. When pushing the second repo, several of the blobs
already existed in the user cache, so the server indicated to the client that it
doesn't need to upload the blobs. That's good enough for the web server process
to serve up in the future. But a different user has a different cache by
default, so verify complains that `lfs.url` needs to be set, because it wants to
fetch the missing blobs.
Aside from that corner case, it's better to keep all of the blobs in the repo
whenever possible, especially since the largefiles wiki says the user cache
can be deleted at any time to reclaim disk space; users switching over may
have the same expectations.
Matt Harbison <matt_harbison@yahoo.com> [Tue, 04 Sep 2018 22:29:38 -0400] rev 39456
cbor: teach the encoder to handle python `long` type for Windows
The tests for 2**32 and -7000000000 were blowing up, complaining about not
knowing how to encode type 'long'. sys.maxint tops out at 2**31-1 on Windows,
but I guess it is 2**63-1 on Linux? I *think* we're OK on the decode side, as
there is an assertion that the decoded value is equal to the original
primitive value.
I opted for the pycompat alias instead of swallowing the NameError because
the vendored cbor package uses an alias, and I see that at least pywatchman
and templatefilters open-code their own aliases.
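The alias boils down to a small shim like this (sketch in the style of
pycompat; names are illustrative):
    import sys

    if sys.version_info[0] >= 3:
        long = int  # Python 3 has a single unified integer type

    def encodable(value):
        # On 32-bit Python 2 builds (and Windows), values past
        # sys.maxint arrive as `long` and must be accepted too.
        return isinstance(value, (int, long))

    assert encodable(2**32) and encodable(-7000000000)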
Augie Fackler <raf@durin42.com> [Wed, 05 Sep 2018 16:39:47 -0400] rev 39455
buildrpm: remove TODO that looks done to me
Differential Revision: https://phab.mercurial-scm.org/D4494
Augie Fackler <raf@durin42.com> [Wed, 05 Sep 2018 16:29:31 -0400] rev 39454
buildrpm: fix embarrassing bug in shell variable expansions
`$distance_$node` looks up and concatenates `$distance_` and
`$node`. `$distance_` is empty, so we were getting the node without
the distance. Using curly braces makes our intent explicit and
produces better-versioned RPMs.
Differential Revision: https://phab.mercurial-scm.org/D4493
Matt Harbison <matt_harbison@yahoo.com> [Tue, 04 Sep 2018 20:48:22 -0400] rev 39453
absorb: clarify the reason for not finding changesets to modify
I'm used to pushing to non-publishing repos, so this was the last thing I
thought to check.
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 05 Sep 2018 09:36:31 -0700] rev 39452
merge with stable
Includes an extra bump of the version number for parsers because the
merge produces a new, distinct version of the code.
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 04 Sep 2018 10:22:42 -0700] rev 39451
state: use our CBOR module
This was the last consumer of the vendored CBOR package in
core.
Differential Revision: https://phab.mercurial-scm.org/D4471