Gregory Szorc <gregory.szorc@gmail.com> [Wed, 05 Sep 2018 09:04:40 -0700] rev 39486
wireprotov2peer: properly format errors
formatrichmessage() expects an iterable containing dicts with
well-defined keys. We were passing in something else. This caused
an exception.
Change the code to call formatrichmessage() with the proper argument, and add
a TODO about potentially emitting the proper data structure from the server in
the first place.
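For illustration, roughly the shape of argument formatrichmessage() wants; the
b'msg'/b'args' keys and the joining behavior are my reading of the framing
code, so treat the details here as assumptions:

    # Each atom is a dict carrying a format string and optional arguments.
    atoms = [
        {b'msg': b'error occurred: %s', b'args': [b'remote went away']},
        {b'msg': b' (retry later)'},
    ]

    def format_atoms(atoms):
        # Approximation of what a rich-message formatter does with the atoms.
        chunks = []
        for atom in atoms:
            msg = atom[b'msg']
            if b'args' in atom:
                msg = msg % tuple(atom[b'args'])
            chunks.append(msg)
        return b''.join(chunks)

    print(format_atoms(atoms))
    # b'error occurred: remote went away (retry later)'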
Differential Revision: https://phab.mercurial-scm.org/D4441
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 23 Aug 2018 13:50:47 -0700] rev 39485
wireprotov2peer: report exceptions in frame handling against request future
Otherwise the future may never resolve, which could cause deadlock.
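As a generic illustration (not the actual wireprotov2peer code), attaching the
exception to the request's future is what keeps a blocked caller from hanging:

    from concurrent.futures import Future

    def handleframe(frame, fut):
        # If this raised without resolving the future, anyone blocked on
        # fut.result() would wait forever.
        try:
            if frame is None:
                raise ValueError('malformed frame')
            fut.set_result(frame)
        except Exception as exc:
            fut.set_exception(exc)

    fut = Future()
    handleframe(None, fut)
    try:
        fut.result(timeout=1)  # raises ValueError instead of blocking
    except ValueError as exc:
        print('request failed: %s' % exc)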
Differential Revision: https://phab.mercurial-scm.org/D4440
Anton Shestakov <av6@dwimlabs.net> [Sat, 08 Sep 2018 21:58:51 +0800] rev 39484
httppeer: use util.readexactly() to abort on incomplete responses
Plain resp.read(n) may not return exactly n bytes when we need them. To detect
such cases before trying to interpret whatever has been read, we can use
util.readexactly(), which raises an Abort when the stream ends unexpectedly. In
the first case here, readexactly() prevents a traceback with struct.error; in
the second it avoids looking for invalid compression engines.
In this test case, _wraphttpresponse doesn't catch the problem (presumably
because it doesn't know the transfer encoding), and the code continues reading
the response until it gets to the compression engine data. Maybe there should
be checks before the execution gets there, but I'm not sure where (httplib?).
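For reference, a minimal sketch of what a readexactly-style helper does; the
real util.readexactly() raises error.Abort rather than IOError:

    import io

    def readexactly(stream, n):
        """Read n bytes from a stream, failing loudly on a short read."""
        s = stream.read(n)
        if len(s) < n:
            raise IOError('stream ended unexpectedly '
                          '(got %d bytes, expected %d)' % (len(s), n))
        return s

    readexactly(io.BytesIO(b'abcd'), 4)   # fine
    readexactly(io.BytesIO(b'ab'), 4)     # raises instead of returning 2 bytes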
Anton Shestakov <av6@dwimlabs.net> [Sat, 08 Sep 2018 23:57:07 +0800] rev 39483
httppeer: calculate total expected bytes correctly
User-facing error messages that handled httplib.IncompleteRead errors in
Mercurial used to look like this:
abort: HTTP request error (incomplete response; expected 3 bytes got 1)
But the errors that are being handled underneath the UI look like this:
IncompleteRead(1 bytes read, 3 more expected)
I.e. the number in the error is the count of bytes still expected: the total
expected minus the bytes already received.
Before, users could see weird messages like "expected 10 bytes got 10", while
in reality httplib expected 10 _more_ bytes (20 in total).
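The two counts involved, shown with the standard library's exception (httplib
is http.client on Python 3):

    from http.client import IncompleteRead

    err = IncompleteRead(b'x', expected=3)   # repr: 1 bytes read, 3 more expected
    received = len(err.partial)              # bytes actually read: 1
    total = received + err.expected          # total the server promised: 4
    print('incomplete response; expected %d bytes got %d' % (total, received))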
Martin von Zweigbergk <martinvonz@google.com> [Fri, 07 Sep 2018 23:36:09 -0700] rev 39482
lazyancestors: reuse __iter__ implementation in __contains__
There was a comment in the code that said "Trying to do both __iter__
and __contains__ using the same visit heap and seen set is complex
enough that it slows down both. Keep them separate." However, it seems
easy and efficient to make __contains__ keep an iterator across calls.
I couldn't measure any slowdown from `hg bundle --all` (which seems to
call lazyancestors.__contains__ frequently).
Differential Revision: https://phab.mercurial-scm.org/D4508
Martin von Zweigbergk <martinvonz@google.com> [Sun, 09 Sep 2018 23:16:55 -0700] rev 39481
lazyancestors: extract __iter__ to free function
The next patch will keep a reference to the returned iterator in a
field, which would otherwise result in a reference cycle.
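A minimal illustration of the cycle being avoided; the class and function
names here are made up:

    class cycling(object):
        def __init__(self, revs):
            self._revs = revs
            # Storing a generator produced by a method of self creates a
            # reference cycle: self -> generator frame -> self.
            self._it = self.__iter__()

        def __iter__(self):
            for r in self._revs:
                yield r

    def _iterrevs(revs):
        for r in revs:
            yield r

    class noncycling(object):
        def __init__(self, revs):
            self._revs = revs
            # The free function only receives the data it needs, so the
            # stored generator no longer references self.
            self._it = _iterrevs(revs)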
Differential Revision: https://phab.mercurial-scm.org/D4517
Boris Feld <boris.feld@octobus.net> [Thu, 30 Aug 2018 01:53:21 +0200] rev 39480
phase: report number of non-public changeset alongside the new range
When interacting with a non-publishing repository or bundle, it is useful to
have some information about the phase of the changesets we just pulled.
This changeset updates the "new changesets MIN:MAX" output to also include
phase information for non-public changesets. Since the extra data is only
displayed for non-public changesets, the output for exchanges with publishing
repositories (the default) is unaffected.
Matt Harbison <matt_harbison@yahoo.com> [Fri, 07 Sep 2018 23:54:42 -0400] rev 39479
tests: disable test-nointerrupt on Windows
Per the followup discussion[1], proc.send_signal(INT) in timeout.py raises a
ValueError because of an unsupported signal. I don't like missing test coverage
for this on Windows, but this is the last test failing on Windows, and a test
that is red all the time hides new failures.
[1] https://phab.mercurial-scm.org/D3716
Matt Harbison <matt_harbison@yahoo.com> [Fri, 07 Sep 2018 23:39:49 -0400] rev 39478
tests: conditionalize an error message about unlinking a non-empty directory
The message on Windows comes from win32.unlink(). It looks like os.unlink() on
posix platforms is a simple call to unlink(3), which turns into unlinkat(2).
Since there's a comment in one of the tests that the message should be improved,
I don't think it's worth adding a check in win32.unlink() to see whether the
directory is empty, given that the function is always going to fail on a
directory anyway. (It seems like the POSIX spec allows unlinking directories,
though.)
Martin von Zweigbergk <martinvonz@google.com> [Fri, 07 Sep 2018 14:48:38 -0700] rev 39477
ancestors: add nullrev to set from the beginning
Differential Revision: https://phab.mercurial-scm.org/D4507
Yuya Nishihara <yuya@tcha.org> [Sat, 08 Sep 2018 10:59:24 +0900] rev 39476
ancestor: filter out initial revisions lower than stoprev
Yuya Nishihara <yuya@tcha.org> [Sat, 08 Sep 2018 10:48:42 +0900] rev 39475
ancestor: add test showing inconsistency between __iter__ and __contains__
Boris Feld <boris.feld@octobus.net> [Thu, 06 Sep 2018 19:37:38 -0400] rev 39474
ancestors: ensure a consistent order even in the "inclusive" case
It seems odd to first issue the "source" revs and then the other ancestors.
In addition, doing so can break the other contract of always issuing a child
before its parent. We update the code to apply the same logic to all yielded
revisions. No tests break, so we seem to be in the clear except where we
explicitly test the order.
Boris Feld <boris.feld@octobus.net> [Thu, 06 Sep 2018 17:00:28 -0400] rev 39473
ancestors: actually iterate over ancestors in topological order (issue5979)
This code previously used deque (FIFO) logic: the first ancestors seen were
the first ancestors to be emitted. In branching/merging situations, this can
result in ancestors being yielded before their descendants, breaking the
object's contract.
We were affected by this issue while working on the copy tracing code. At about
the same time, Axel Hecht <axel@mozilla.com> reported the issue and provided
the test case used in this changeset. Thanks Axel.
Running `hg perfancestors` does not show a significant difference between the
old and the new version.
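Not the actual ancestor code; a sketch of the ordering idea, relying on the
usual Mercurial property that a revision's number is always higher than its
parents' numbers. Popping the largest pending revision first guarantees a
child is emitted before its parents, which a plain FIFO queue does not:

    import heapq

    def topoancestors(parentsfn, revs, inclusive=False):
        """Yield ancestors of revs, highest revision number first."""
        visit = [-r for r in revs]          # max-heap via negated revnums
        heapq.heapify(visit)
        seen = set(revs)
        while visit:
            rev = -heapq.heappop(visit)
            if inclusive or rev not in revs:
                yield rev
            for p in parentsfn(rev):
                if p >= 0 and p not in seen:
                    seen.add(p)
                    heapq.heappush(visit, -p)

    # Example graph: 4 descends from 2, 3 and 2 descend from 1, 1 from 0.
    parents = {4: [2], 3: [1], 2: [1], 1: [0], 0: []}
    print(list(topoancestors(parents.get, [4, 3], inclusive=True)))
    # [4, 3, 2, 1, 0]; every child appears before its parents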
Yuya Nishihara <yuya@tcha.org> [Thu, 06 Sep 2018 22:12:21 +0900] rev 39472
doc: use modern import style in runrst
Yuya Nishihara <yuya@tcha.org> [Sun, 26 Aug 2018 22:18:09 +0900] rev 39471
hgweb: do not audit URL path as working-directory path
Since hgweb is an interface to repository data, we don't need to prohibit
paths that conflict within the filesystem. Access to working files is still
audited by filectx.
Yuya Nishihara <yuya@tcha.org> [Sun, 26 Aug 2018 22:23:25 +0900] rev 39470
hgweb: map Abort to 403 error to report inaccessible path for example
Abort is so common in our codebase. We could instead introduce a dedicated
type for path auditing errors, but we'll probably have to catch error.Abort
anyway.
As you can see, an abort message may include a full path to the repository,
which might be considered an information leak. If that matters, we should hide
the message and send it to the server log instead.
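Not hgweb's actual dispatch code; a self-contained sketch of the mapping, with
Abort and auditpath() standing in for the real pieces:

    class Abort(Exception):
        """Stand-in for Mercurial's error.Abort."""

    def auditpath(repopath, path):
        if '.hg' in path.split('/'):
            raise Abort('path %r is inside the repository store at %s'
                        % (path, repopath))

    def handlerequest(repopath, path):
        try:
            auditpath(repopath, path)
        except Abort as exc:
            # 403 Forbidden.  Note the message can echo a full filesystem
            # path, which is the potential information leak mentioned above;
            # hiding it would mean sending the detail to the server log.
            return 403, str(exc)
        return 200, 'ok'

    print(handlerequest('/srv/repo', 'a/.hg/hgrc'))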
Yuya Nishihara <yuya@tcha.org> [Fri, 07 Sep 2018 22:19:28 +0900] rev 39469
hgweb: add error template to json so it won't crash
Yuya Nishihara <yuya@tcha.org> [Fri, 07 Sep 2018 22:12:46 +0900] rev 39468
hgweb: show shortlog by default in json output (issue5978)
Augie Fackler <augie@google.com> [Fri, 07 Sep 2018 11:35:43 -0400] rev 39467
merge with stable
Pulkit Goyal <pulkit@yandex-team.ru> [Tue, 04 Sep 2018 15:16:22 +0300] rev 39466
tests: improve the widening testing in test-narrow-widen*
Before this patch, we were testing `hg tracked --addinclude` by adding a path
that has not been introduced in any changeset up to that point.
If you look closely at the tests, wider/f was introduced on the server after
the narrow clone was done, and extending the existing clone to include wider/f
does not make sense. We should test extending a file which exists.
Differential Revision: https://phab.mercurial-scm.org/D4452
Pulkit Goyal <pulkit@yandex-team.ru> [Tue, 04 Sep 2018 19:26:50 +0300] rev 39465
narrow: use util.readfile() and improve error message using --narrowspec
This patch improves the error message and uses util.readfile() for reading the
narrowspec file while cloning.
Differential Revision: https://phab.mercurial-scm.org/D4462
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 04 Sep 2018 15:55:23 -0700] rev 39464
merge: use vfs methods for I/O
All I/O is supposed to be performed via vfs instances so filesystems
can be abstracted. The previous commit moved the old purge code into core
without going through the vfs layer; this commit ports that code to use the
vfs layer.
The vfs layer didn't have a method to remove a single directory, so one was
added as part of implementing this.
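Not Mercurial's vfs class; a minimal sketch of why I/O goes through a vfs-like
object: paths resolve against one base, so the backing store can be swapped or
audited in a single place. The rmdir() here mirrors the method this changeset
added to the real layer:

    import os

    class simplevfs(object):
        def __init__(self, base):
            self._base = base

        def join(self, path):
            return os.path.join(self._base, path)

        def unlink(self, path):
            os.unlink(self.join(path))

        def rmdir(self, path):
            # remove a single (empty) directory, relative to the base
            os.rmdir(self.join(path))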
Differential Revision: https://phab.mercurial-scm.org/D4478
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 06 Sep 2018 18:30:12 -0700] rev 39463
merge: move purge logic from extension
Working directory purging feels like functionality that should be
in core rather than in an extension.
This commit effectively moves the core purging logic from the
purge extension to merge.py.
The code was refactored slightly: rather than dealing with printing in this
function, the function is now a generator of paths, and the caller can worry
about printing.
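A generic sketch of that pattern; the names are made up, not the functions
added to merge.py:

    import os

    def removablepaths(root, keep):
        """Yield paths under root that are not in the keep set."""
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if path not in keep:
                    yield path

    # The caller decides what to do about each path: print, count, or abort.
    for path in removablepaths('.', keep=set()):
        print('would remove %s' % path)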
Differential Revision: https://phab.mercurial-scm.org/D4477
Matt Harbison <matt_harbison@yahoo.com> [Thu, 06 Sep 2018 23:37:24 -0400] rev 39462
tests: stabilize test-removeemptydirs.t on Windows
Yuya Nishihara <yuya@tcha.org> [Thu, 06 Sep 2018 21:55:30 +0900] rev 39461
help: add internals.wireprotocolv2 to the table, and remove redundant header
Kyle Lippincott <spectral@google.com> [Fri, 17 Aug 2018 19:18:53 -0700] rev 39460
match: improve includematcher.visitchildrenset to be much faster and cached
This improves the speed of visitchildrenset considerably, especially when there
are complicated matchers involved that may have many entries in _dirs or
_parents.
Unfortunately the benchmark isn't easily upstreamed because it relies on
https://github.com/vstinner/perf rather than asv or some other perf measurement
system (importing it would conflict if I contributed the benchmark as
contrib/matcherbenchmarks.py).
To describe the benchmark briefly: I generated an includematcher of either 5 or
3500 "rootfilesin:prefix1/prefix2/prefix3/<randomsubdirs, 1-8 levels deep>"
items in the 'setup' function, and then called
`im.visitchildrenset('prefix1/prefix2')` in the 'stmt' function in perf.timeit.
For the set of 5:
- before: 15.3 us +- 2.9 us
- after: 1.59 us +- 0.02 us
For the set of 3500:
- before: 3.90 ms +- 0.10 ms
- after: 3.15 us +- 0.09 us (note the m->u change)
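For illustration, a rough sketch of the benchmark's shape, using the standard
library's timeit instead of the perf package; the match() constructor
arguments here are assumptions, not the author's actual script:

    import timeit

    setup = (
        "from mercurial import match as matchmod\n"
        "include = [b'rootfilesin:prefix1/prefix2/prefix3/d%d' % i"
        " for i in range(3500)]\n"
        "im = matchmod.match(b'/repo', b'', include=include)\n"
    )

    timings = timeit.repeat("im.visitchildrenset(b'prefix1/prefix2')",
                            setup=setup, number=1000, repeat=5)
    print('best of 5: %.2f us per call' % (min(timings) / 1000 * 1e6))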
Differential Revision: https://phab.mercurial-scm.org/D4351
Pulkit Goyal <pulkit@yandex-team.ru> [Thu, 06 Sep 2018 03:21:05 +0530] rev 39459
py3: add new passing tests spotted by the buildbot
Differential Revision: https://phab.mercurial-scm.org/D4495
Pulkit Goyal <pulkit@yandex-team.ru> [Thu, 06 Sep 2018 03:24:27 +0530] rev 39458
tests: order the imports in test-fastannotate-hg.t
The wrong ordering breaks test-check-module-imports.t on Python 3. I am not
sure why that test is so much more active on py3.
Differential Revision: https://phab.mercurial-scm.org/D4496
Matt Harbison <matt_harbison@yahoo.com> [Thu, 06 Sep 2018 00:51:21 -0400] rev 39457
lfs: ensure the blob is linked to the remote store on skipped uploads
I noticed a "missing" blob when pushing two repositories with common blobs to a
fresh server, and then running `hg verify` as a user different from the one
running the web server. When pushing the second repo, several of the blobs
already existed in the user cache, so the server indicated to the client that
it didn't need to upload them. That's good enough for the web server process
to serve up in the future. But a different user has a different cache by
default, so verify complains that `lfs.url` needs to be set, because it wants to
fetch the missing blobs.
Aside from that corner case, it's better to keep all of the blobs in the repo
whenever possible, especially since the largefiles wiki says the user cache can
be deleted at any time to reclaim disk space; users switching over may have the
same expectations.