Matt Harbison <matt_harbison@yahoo.com> [Wed, 24 Jan 2018 22:26:28 -0500] rev 35800
minifileset: note the unsupported file pattern when raising a parse error
This was useful in debugging, because I stupidly quoted it out of habit from the
command line. This isn't a great example that clearly shows the problem, but I
don't know how to improve it. The problem *is* obvious once a complex statement
or a clearly bogus string is used.
Matt Harbison <matt_harbison@yahoo.com> [Tue, 23 Jan 2018 21:29:45 -0500] rev 35799
lfs: don't automatically exclude '.hg*' files from external tracking
The only reason I did this in the first place was that tracking these files
externally seems like it would always be a mistake, and the eol extension does
the same thing. Yuya and Jun thought it might be better not to do this[1], so
I'll defer to them on this. If a problem with, say, .hgtags or .hgeol does
arise, it can be
added back without breaking existing repos.
[1] https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-January/110371.html
Matt Harbison <matt_harbison@yahoo.com> [Tue, 23 Jan 2018 20:50:02 -0500] rev 35798
lfs: rename {oid} to {lfsoid}
Per Yuya, for consistency with {lfspointer}. It might be slightly confusing for
there to be {lfsoid} and {lfspointer.oid}. But preventing ambiguity seems more
important, and the latter is controlled by the git-lfs spec.
Matt Harbison <matt_harbison@yahoo.com> [Mon, 22 Jan 2018 17:47:40 -0500] rev 35797
lfs: rename {pointer} to {lfspointer}
Per Martin von Zweigbergk's suggestion to keep this unambiguous, for when it is
migrated to {files} and friends.
Augie Fackler <raf@durin42.com> [Mon, 22 Jan 2018 18:08:50 -0500] rev 35796
Added signature for changeset
27b6df1b5adb
Augie Fackler <raf@durin42.com> [Mon, 22 Jan 2018 18:08:49 -0500] rev 35795
Added tag 4.5-rc for changeset
27b6df1b5adb
Augie Fackler <augie@google.com> [Mon, 22 Jan 2018 17:53:02 -0500] rev 35794
merge with stable to begin 4.5 freeze
# no-check-commit because it's a clean merge
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 20 Jan 2018 22:55:42 -0800] rev 35793
bundle2: increase payload part chunk size to 32kb
Bundle2 payload parts are framed chunks. Essentially, we obtain
data in equal size chunks of size `preferedchunksize` and emit those
to a generator. That generator is fed into a compressor (which can
be the no-op compressor, which just re-emits the generator). And
the output from the compressor likely goes to a file descriptor
or socket.
What this means is that small chunk sizes create more Python objects
and Python function calls than larger chunk sizes. And as we know,
Python object and function call overhead in performance sensitive
code matters (at least with CPython).
This commit increases the bundle2 part payload chunk size from 4k
to 32k. Practically speaking, this means that the chunks we feed
into a compressor (implemented in C code) or feed directly into a
file handle or socket write() are larger. It's possible the chunks
might be larger than what the receiver can handle in one logical
operation. But at that point, we're in C code, which is much more
efficient at dealing with splitting up the chunk and making multiple
function calls than Python is.
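As a rough illustration (a standalone sketch, not the actual bundle2 code),
the number of Python-level loop iterations and calls scales inversely with
the chunk size:

    import io

    def iterchunks(fh, chunksize=32768):
        """Yield fixed-size chunks from a file object until EOF."""
        while True:
            chunk = fh.read(chunksize)
            if not chunk:
                break
            yield chunk

    payload = io.BytesIO(b'x' * (1024 * 1024))  # 1 MB of dummy payload
    # With 4k chunks this loop runs 256 times; with 32k chunks only 32 times,
    # so per-chunk Python object and call overhead is paid 8x less often.
    for chunk in iterchunks(payload, chunksize=32768):
        pass  # e.g. compressor.compress(chunk) or fh.write(chunk)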
A downside to larger chunks is that the receiver has to wait for that
much data to arrive (either raw or from a decompressor) before it
can process the chunk. But 32kb still feels like a small buffer to
have to wait for. And in many cases, the client will convert from
8 read(4096) to 1 read(32768). That's happening in Python land. So
we cut down on the number of Python objects and function calls,
making the client faster as well. I don't think there are any
significant concerns to increasing the payload chunk size to 32kb.
The impact of this change on performance is significant. Using `curl`
to obtain a stream clone bundle2 payload from a server on localhost
serving the mozilla-unified repository:
before: 20.78 user; 7.71 system; 80.5 MB/s
after: 13.90 user; 3.51 system; 132 MB/s
legacy: 9.72 user; 8.16 system; 132 MB/s
bundle2 stream clone generation is still more resource intensive than
legacy stream clone (that's likely because of the use of a
util.chunkbuffer). But the throughput is the same. We might
be in territory where this is effectively a benchmark of the
networking stack or Python's syscall throughput.
From the client perspective, `hg clone -U --stream`:
before: 33.50 user; 7.95 system; 53.3 MB/s
after: 22.82 user; 7.33 system; 72.7 MB/s
legacy: 29.96 user; 7.94 system; 58.0 MB/s
And for `hg clone --stream` with a working directory update of
~230k files:
after: 119.55 user; 26.47 system; 0:57.08 wall
legacy: 126.98 user; 26.94 system; 1:05.56 wall
So, it appears that bundle2's stream clone is now definitively faster
than legacy stream clone!
Differential Revision: https://phab.mercurial-scm.org/D1932
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 22 Jan 2018 12:23:47 -0800] rev 35792
bundle2: always advertise client support for stream parts
Previously, enabling of stream clone over bundle2 was a server-side
only change. And clients would only advertise bundle2 support for
stream clones if an experimental config option was set.
This situation wasn't forward compatible because in the future
(when the feature is enabled on servers by default), an old client
would send a request to the server but it wouldn't send its own
bundle2 capability support for stream parts. Servers would have to
infer that clients not sending this capability were old Mercurial
clients that only sent the capability if the feature was
explicitly enabled. Implicit and inferred behavior makes implementing
servers hard. It's much better to be explicit about client features.
We should either make the config option for bundle2 stream clones
disable the feature client-side as well (so a server doesn't see
a request from a client not advertising stream support), or we
should always advertise stream support if a client is willing
to accept stream parts.
Since I anticipate stabilizing stream clone support in bundle2
quickly, I think it's safe to always advertise client support
in the bundle2 capabilities. So this commit changes things to
do that.
Because capabilities now vary between client and server, we had
to create variations of the variable substitutions for those
strings.
Differential Revision: https://phab.mercurial-scm.org/D1931
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 22 Jan 2018 12:22:01 -0800] rev 35791
exchange: don't send stream data when server.uncompressed is set
Previously, bundle2 stream support would send out data even though
the streaming clone feature was disabled. This commit changes
the part handler to respect the server config.
Differential Revision: https://phab.mercurial-scm.org/D1930
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 22 Jan 2018 12:21:15 -0800] rev 35790
bundle2: don't advertise stream bundle2 capability when feature disabled
The server.uncompressed config option can be used to disable streaming
clones. While the top-level capability to advertise streaming clone
support isn't advertised when this option is set, we were still sending
the bundle2-level capabilities advertising support for stream parts.
It makes sense to not advertise that support when streaming clones
are globally disabled.
If the structure of the new code seems a bit odd, it is because we'll
have to further tweak the behavior in commit(s) ahead.
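A minimal sketch of the intended behavior (the helper and capability names
here are illustrative, not the actual patch):

    def advertised_bundle2_caps(config, basecaps):
        # Drop the stream capability when streaming clones are globally
        # disabled, so clients never see support we won't honor.
        caps = dict(basecaps)
        if not config.get('server.uncompressed', True):
            caps.pop('stream', None)
        return caps

    caps = advertised_bundle2_caps({'server.uncompressed': False},
                                   {'changegroup': ['01', '02'],
                                    'stream': ['v2']})
    assert 'stream' not in caps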
Differential Revision: https://phab.mercurial-scm.org/D1929
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 22 Jan 2018 12:19:45 -0800] rev 35789
tests: add more testing around server.uncompressed
We already have testing for server.uncompressed in test-http*.t.
However, it doesn't cover the new bundle2 use case. And, we don't
have comprehensive testing of advertised capabilities.
We add tests to test-clone-uncompressed.t that demonstrate
behavior for both legacy and bundle2 configurations.
If you look closely, the bundle2 capabilities are advertising
stream support when it isn't enabled. That's a bug.
In addition, while the client is smart enough to not request
a stream clone when the server doesn't have the feature enabled,
the getbundle wire protocol command is still sending stream
clone data. This doesn't match the behavior of the legacy
stream_out wire protocol command. That's also a bug. Tests
have been added.
While I was here, I also changed how the PID is recorded in
$DAEMON_PIDS. If we kill a process, the PID formerly in
$DAEMON_PIDS no longer exists. So we should replace that file
instead of appending to it.
Differential Revision: https://phab.mercurial-scm.org/D1928
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 22 Jan 2018 12:19:49 -0800] rev 35788
bundle2: move version of stream clone into part name
I don't like having version numbers as part parameters. It means
that parts can theoretically vary wildly in their generation and
processing semantics. I think that a named part should have consistent
behavior over time. In other words, if you need to introduce new
functionality or behavior, that should be expressed by inventing
a new bundle2 part, not adding functionality to an existing part.
This commit applies this advice to the just-introduced stream clone
via bundle2 feature.
The "version" part parameter is removed. The name of the bundle2 part
is now "stream2" instead of "stream" with "version=v2".
Differential Revision: https://phab.mercurial-scm.org/D1927
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 22 Jan 2018 12:12:29 -0800] rev 35787
exchange: send bundle2 stream clones uncompressed
Stream clones don't compress well. And compression undermines
a main point of stream clones, which is to trade a larger transfer
size for a significant reduction in CPU usage.
Building upon our introduction of metadata to communicate bundle
information back to callers of exchange.getbundlechunks(), we add
an attribute to the bundler that communicates whether the bundle is
best left uncompressed. We return this attribute as part of the bundle
metadata. And the wire protocol honors it when determining whether
to compress the wire protocol response.
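A sketch of the idea (the metadata key and engine names below are
illustrative, not the actual attribute names in the patch):

    def choose_compression(bundleinfo, engines=('zstd', 'zlib', 'none')):
        # If the bundle asked to stay uncompressed (e.g. a stream clone),
        # fall back to the identity "compressor" for the wire response.
        if bundleinfo.get('prefersuncompressed'):
            return 'none'
        return engines[0]

    assert choose_compression({'prefersuncompressed': True}) == 'none'
    assert choose_compression({}) == 'zstd'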
The added test demonstrates that the raw result from the wire
protocol is not compressed. It also demonstrates that the server
will serve stream responses when the feature isn't enabled. We'll
address that in another commit.
The effect of this change is that server-side CPU usage for bundle2
stream clones is significantly reduced by removing zstd compression.
For the mozilla-unified repository:
before: 37.69 user 8.01 system
after: 27.38 user 7.34 system
Assuming things are CPU bound, that ~10s reduction would translate to
faster clones on the client. zstd can decompress at >1 GB/s. So the
overhead from decompression on the client is small in the grand scheme
of things. But if zlib compression were being used, the overhead would
be much greater.
Differential Revision: https://phab.mercurial-scm.org/D1926
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 22 Jan 2018 12:38:04 -0800] rev 35786
tests: update test to work with Git 2.16
It looks like Git 2.16 removed the "..." from some strings.
Glob over those characters in the test output.
Differential Revision: https://phab.mercurial-scm.org/D1935
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 20 Jan 2018 13:41:57 -0800] rev 35785
exchange: return bundle info from getbundlechunks() (API)
We generally want a mechanism to pass information about the
generated bundle back to callers (in addition to the byte stream).
Ideally we would return a bundler from this function and have the
caller code to an interface. But the bundling APIs are not great
and getbundlechunks() is the best API we have for obtaining bundle
contents in a unified manner.
We change getbundlechunks() to also return a dict that we can use to
communicate metadata, alongside the existing iterator of chunks.
We populate that dict with the bundle version number to demonstrate
some value.
.. api::

   exchange.getbundlechunks() now returns a 2-tuple instead of just
   an iterator.
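Caller-side, the adaptation looks roughly like the stub below (the tuple
order and metadata key are assumptions for illustration, not taken verbatim
from the patch):

    def getbundlechunks_stub(repo, source):
        # Toy stand-in mirroring the new return shape: (metadata, chunks).
        info = {'bundleversion': 2}
        chunks = iter([b'HG20', b'...payload...'])
        return info, chunks

    info, chunks = getbundlechunks_stub(None, 'serve')
    assert info['bundleversion'] == 2
    payload = b''.join(chunks)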
Differential Revision: https://phab.mercurial-scm.org/D1925
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 20 Jan 2018 15:26:31 -0800] rev 35784
exchange: make stream bundle part deterministic
repo.requirements is a set. We need to sort it so the part
content is deterministic.
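A minimal illustration of the fix:

    requirements = {'revlogv1', 'generaldelta', 'store', 'fncache'}
    # Set iteration order is not stable across runs, so the emitted part
    # would differ byte-for-byte; sorting makes the output deterministic.
    partdata = b','.join(r.encode('ascii') for r in sorted(requirements))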
Differential Revision: https://phab.mercurial-scm.org/D1924
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 20 Jan 2018 13:54:36 -0800] rev 35783
bundle2: specify what capabilities will be used for
We currently assume there is a symmetric relationship of bundle2
capabilities between client and server. However, this may not always be
the case.
We need a bundle2 capability to advertise bundle2 streaming clone support
on servers to differentiate it from the existing, legacy streaming clone
support.
However, servers may wish to disable streaming clone support. If bundle2
capabilities were the same between client and server, a client (which
may also be a server) that has disabled streaming clone support would
not be able to perform a streaming clone itself!
This commit introduces a "role" argument to bundle2.getrepocaps() that
explicitly defines the role being performed. This will allow us (and
extensions) to alter bundle2 capabilities depending on the operation
being performed.
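A sketch of role-aware capability computation (the function body and
capability names are illustrative; only the role argument comes from this
change):

    def getrepocaps(config, role):
        assert role in ('client', 'server')
        caps = {'changegroup': ['01', '02']}
        if role == 'client':
            # A client is always willing to *receive* stream parts...
            caps['stream'] = ['v2']
        elif config.get('server.uncompressed', True):
            # ...but only advertises *serving* them when not disabled.
            caps['stream'] = ['v2']
        return caps

    assert 'stream' in getrepocaps({'server.uncompressed': False}, 'client')
    assert 'stream' not in getrepocaps({'server.uncompressed': False}, 'server')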
Differential Revision: https://phab.mercurial-scm.org/D1923
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 20 Jan 2018 15:43:02 -0800] rev 35782
wireproto: don't compress errors from getbundle()
Errors should be small. There's no real need to compress them.
Truth be told, there's no good reason to not compress them either.
But leaving them uncompressed makes it easier to test failures
by looking at the raw HTTP response. This makes it easier for us
to write tests. It may make it easier for people writing their
own clients.
Differential Revision: https://phab.mercurial-scm.org/D1922
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 20 Jan 2018 16:08:07 -0800] rev 35781
tests: teach get-with-headers.py some new tricks
We add the ability to specify arbitrary HTTP request headers and
to save the HTTP response body to a file. These will be used in
upcoming commits.
Differential Revision: https://phab.mercurial-scm.org/D1921
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 20 Jan 2018 14:59:08 -0800] rev 35780
tests: use argparse in get-with-headers.py
I'm about to add another flag and I don't want to deal with this
organic, artisanal argument parser.
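Roughly the shape it takes with argparse (the flag names here are an
assumption, not a copy of the script):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--twice', action='store_true',
                        help='issue the request twice')
    parser.add_argument('--headeronly', action='store_true',
                        help='only print the response headers')
    parser.add_argument('host')
    parser.add_argument('path')
    args = parser.parse_args(['--headeronly', 'localhost:8000', 'raw-file/tip/a'])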
Differential Revision: https://phab.mercurial-scm.org/D1920
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 21 Jan 2018 17:11:31 -0800] rev 35779
convert: use a collections.deque
This function was doing a list.pop(0) on a list whose length
was the number of revisions to convert. Popping an early element
from long lists is not an efficient operation.
collections.deque supports efficient inserts and pops at both
ends. So we switch to that data structure.
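For illustration, the difference boils down to:

    import collections

    pending = collections.deque(range(445748))  # revisions left to convert
    # deque.popleft() is O(1); list.pop(0) shifts every remaining element,
    # turning the whole loop into O(n^2) in the number of revisions.
    while pending:
        rev = pending.popleft()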
When converting the mozilla-unified repository, which has 445,748
revisions, this change makes the "sorting..." step of
`hg convert --sourcesort` significantly faster:
before: ~59.2s
after: ~1.3s
Differential Revision: https://phab.mercurial-scm.org/D1934
Martin von Zweigbergk <martinvonz@google.com> [Sat, 20 Jan 2018 23:21:59 -0800] rev 35778
repair: invalidate volatile sets after stripping
Matt Harbison reported that some tests were broken on Windows after
1a09dad8b85a (evolution: report new unstable changesets,
2018-01-14). The failures were exactly as seen in this patch. The
failures actually seemed correct, which made me wonder why they didn't
fail the same way on Linux. It turned out to be a cache invalidation
problem.
The new orphan mentioned in the test case actually does get created
when we're re-applying the temporary bundle that's created while
stripping. However, without the invalidation, it appears that there
was already an orphan before applying the temporary bundle.
The warnings about unknown working parent appear because the
aforementioned changeset means that we're now accessing the dirstate
while it's invalid.
We may want to suppress these messages that happen in the intermediate
strip state, but they're technically correct (although confusing to
the user), so I think just fixing the cache invalidation is fine for
now.
I haven't figured out why the caches seemed to get correctly
invalidated on Windows.
Differential Revision: https://phab.mercurial-scm.org/D1933
Matt Harbison <matt_harbison@yahoo.com> [Sun, 21 Jan 2018 13:54:05 -0500] rev 35777
subrepo: handle 'C:' style paths on the command line (issue5770)
If you think 'C:' and 'C:\' are equivalent paths, see the inline comment before
proceeding.
The problem here was that several commands that take a URL argument (incoming,
outgoing, pull, and push) will use that value to set 'repo._subtoppath' on the
repository object after command specific manipulation of it, but before
converting it to an absolute path. When an operation is performed on a relative
subrepo, subrepo._abssource() will posixpath.join() this value with the relative
subrepo path. That adds a '/' after the drive letter, changing how it is
evaluated by abspath()/realpath() in vfsmod.vfs(..., realpath=True) as the
subrepo is instantiated.
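For illustration (runnable on any platform; the comments describe how Windows
then resolves the results):

    import posixpath, ntpath

    posixpath.join('C:', 'sub')  # -> 'C:/sub' (rooted at the top of drive C:)
    ntpath.join('C:', 'sub')     # -> 'C:sub'  (relative to the cwd on drive C:)
    # abspath()/realpath() on Windows resolve these to different locations,
    # which is how the subrepo ended up being opened at the wrong path.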
I initially tried sanitizing the path in url.localpath(), because url.isabs()
only checks that it starts with a drive letter. By the sample behavior, this is
clearly not an absolute path. (Though the comment in isabs() is weaselly: this
style path can't be joined either.) But not everything funnels through there,
and it required explicitly calling localpath() in hg.parseurl() and assigning to
url.path to fix. But then tests failed with urls like 'a#0'.
Next up was sanitizing the path in the url constructor. That caused doctest
failures, because there are drive letter tests, so those got expanded in system
specific ways. Yuya correctly pointed out that util.url is a parser, and
shouldn't be substituting the path too.
Rather than fixing every command call site, just convert it in the common
subrepo location. I don't see any sanitizing on the path config options, so I
fixed those too. Note that while the behavior is fixed here, there are still
places where 'comparing with C:' gets printed out, and that's not great for
debugging purposes. (Specifically I saw it in `hg incoming -B C:`, without
subrepos.) While clone will write out an absolute default path, I wonder what
would happen if a user edited that path to be 'C:'. (I don't think supporting
relative paths in .hgrc is a sane thing to do, but while we're poking holes in
things...)
Since this is such an oddball case, it still leaks through in places, and there
seems to be a lot of duplicate url parsing. Maybe the url parsing should be
moved to dispatch, which could then provide the command with a url object? Then
we could convert this to an absolute path once, and not have to worry about it
in the rest of the code.
I also checked '--cwd C:' on the command line, and it was previously working
because os.chdir() will DTRT.
Finally, one other note from the url.localpath() experimenting. I don't see any
cases where 'self._hostport' can hold a drive letter. So I'm wondering if that
is wrong/old code.
Matt Harbison <matt_harbison@yahoo.com> [Mon, 22 Jan 2018 00:39:42 -0500] rev 35776
dummysmtpd: don't die on client connection errors
The connection refused error in test-patchbomb-tls.t[1] is sporadic, but it is
one of the more frequently seen errors on Windows. I added enough logging to a
file and
dumped it out at the end to make the following observations:
- The listening socket is successfully created and bound to the port, and the
"listening at..." message is always logged.
- Generally, the following is the entire log output, with the "accepted ..."
message having been added after `sslutil.wrapserversocket`:
listening at localhost:$HGPORT
$LOCALIP ssl error
accepted connect
accepted connect
$LOCALIP from=quux to=foo, bar
$LOCALIP ssl error
- In the cases that fail, asyncore.loop() in the run() method is exiting, but
not with an exception.
- In the cases that fail, the following is logged right after "listening ...":
Traceback (most recent call last):
File "c:\\Python27\\lib\\asyncore.py", line 83, in read
obj.handle_read_event()
File "c:\\Python27\\lib\\asyncore.py", line 443, in handle_read_event
self.handle_accept()
File "../tests/dummysmtpd.py", line 80, in handle_accept
conn = sslutil.wrapserversocket(conn, ui, certfile=self._certfile)
File "..\\mercurial\\sslutil.py", line 570, in wrapserversocket
return sslcontext.wrap_socket(sock, server_side=True)
File "c:\\Python27\\lib\\ssl.py", line 363, in wrap_socket
_context=self)
File "c:\\Python27\\lib\\ssl.py", line 611, in __init__
self.do_handshake()
File "c:\\Python27\\lib\\ssl.py", line 840, in do_handshake
self._sslobj.do_handshake()
error: [Errno 10054] $ECONNRESET$
- If the base class handler is overridden completely, the first "ssl
error" line is replaced by the stacktrace, but the other lines are
unchanged. The client behaves no differently, whether or not the server
stacktraced.
In general, `./run-tests.py --local -j9 -t9000 test-patchbomb-tls.t
--runs-per-test 20` would show the issue after a run or two. With this change,
`./run-tests.py --local -j9 -t9000 test-patchbomb-tls.t --loop` ran 800 times
without a hiccup. This makes me wonder if the other connection refused messages
that bubble up on occasion are caused by a similar issue. It seems a bit
drastic to kill the whole server on account of a single communication failure
with a client.
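A rough sketch of the approach, not the actual dummysmtpd change (asyncore
was removed in Python 3.12; this mirrors the Python 2.7 setup the test uses):

    import asyncore, socket, traceback

    class resilientlistener(asyncore.dispatcher):
        def __init__(self, localaddr):
            asyncore.dispatcher.__init__(self)
            self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
            self.set_reuse_addr()
            self.bind(localaddr)
            self.listen(5)

        def handle_accept(self):
            pair = self.accept()
            if pair is None:
                return
            conn, addr = pair
            # TLS wrapping (where the ECONNRESET surfaced) would go here.

        def handle_error(self):
            # The default handler closes the dispatcher, which for the
            # listening socket kills the whole server. Log and carry on.
            traceback.print_exc()

    # Usage: resilientlistener(('localhost', 8025)); asyncore.loop()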
# no-check-commit because of handle_error()
[1] https://buildbot.mercurial-scm.org/builders/Win7%20x86_64%20hg%20tests/builds/421/steps/run-tests.py%20%28python%202.7.13%29/logs/stdio
André Sintzoff <andre.sintzoff@gmail.com> [Sun, 21 Jan 2018 15:39:48 +0100] rev 35775
cext: define MIN macro only if it is not yet defined
The MIN macro is defined in <sys/param.h> on macOS Sierra. Therefore, as
HAVE_BSD_STATFS is defined in osutil.c, a "'MIN' macro redefined" warning is
emitted.
Anton Shestakov <av6@dwimlabs.net> [Sun, 21 Jan 2018 14:47:45 +0800] rev 35774
copyright: update to 2018
January seems to be a good month to do this.
Anton Shestakov <av6@dwimlabs.net> [Sun, 21 Jan 2018 14:46:26 +0800] rev 35773
tests: glob copyright years in test-extension.t
Other tests already do this.