Mon, 19 Feb 2018 13:20:17 -0800 tests: store protocol payload in files
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 19 Feb 2018 13:20:17 -0800] rev 36368
tests: store protocol payload in files

Upcoming changes to version 2 of the SSH protocol will introduce binary components to the protocol. It will be easier to eliminate trailing newlines and use binary in the tests if the protocol payload is being generated by Python. So use inline Python to write payloads to files and pipe those files to server processes instead of shell strings/variables.

Differential Revision: https://phab.mercurial-scm.org/D2381
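(For illustration only: a minimal sketch of the technique this commit describes, assuming a made-up file name and an example payload. Generating the bytes in Python keeps trailing newlines and binary sequences explicit instead of hiding them in shell quoting.)

    # Write the exact request bytes to a file; the payload shown here is
    # only an example of an SSH protocol command.
    with open('payload', 'wb') as fh:
        fh.write(b'hello\n')

    # The file is then redirected into the server process from the shell,
    # e.g.: hg -R server serve --stdio < payload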
Wed, 21 Feb 2018 08:35:48 -0800 sshpeer: return framed file object when needed
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 21 Feb 2018 08:35:48 -0800] rev 36367
sshpeer: return framed file object when needed

Currently, wireproto.wirepeer has a default implementation of _submitbatch() and sshv1peer has a very similar implementation. The main difference is that sshv1peer is aware of the total amount of bytes it can read, whereas the default implementation reads the stream until no more data is returned. The default implementation works for HTTP, since there is a known end to HTTP responses (either Content-Length or a 0 sized chunk).

This commit teaches sshv1peer to use our just-introduced "cappedreader" class for wrapping a file object to limit the number of bytes that can be read. We do this by introducing an argument to specify whether the response is framed. If set, we return a cappedreader instance instead of the raw pipe.

_call() always has framed responses. So we set this argument unconditionally and then .read() the entirety of the result. Strictly speaking, we don't need to use cappedreader in this case and could inline the frame decoding/read logic. But I like when things are consistent. The overhead should be negligible.

_callstream() and _callcompressable() are special: whether framing is used depends on the specific command. So we define a set of commands that have framed responses. It currently only contains "batch".

As a result of this change, the one-off implementation of _submitbatch() in sshv1peer can be removed, since it is now safe to .read() the response's file object until end of stream. cappedreader takes care of not overrunning the frame.

Differential Revision: https://phab.mercurial-scm.org/D2380
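(A hedged sketch of the decision described above; FRAMEDCOMMANDS and responsereader are illustrative names, not the real sshpeer attributes, and the cappedreader signature is taken from the util commit further down this page.)

    from mercurial import util

    FRAMEDCOMMANDS = {b'batch'}

    def responsereader(cmd, pipe, framed=None):
        # Commands without framed responses hand back the raw pipe; the
        # caller reads the stream however the command's format dictates.
        if framed is None:
            framed = cmd in FRAMEDCOMMANDS
        if not framed:
            return pipe
        # A framed response is "<decimal length>\n<payload>"; the capped
        # reader guarantees .read() never overruns the frame.
        size = int(pipe.readline())
        return util.cappedreader(pipe, size)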
Wed, 21 Feb 2018 08:33:50 -0800 sshpeer: move logic for sending a request into a new function
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 21 Feb 2018 08:33:50 -0800] rev 36366
sshpeer: move logic for sending a request into a new function

Using **args to pass arbitrary command arguments is limiting because it makes it harder to control the behavior of the function. We factor most of _callstream() into a new function that doesn't use **args.

Differential Revision: https://phab.mercurial-scm.org/D2379
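(The pattern being described, sketched with made-up names rather than the real sshpeer code: the **args entry point becomes a thin wrapper around a helper that takes a plain dict, so new keyword parameters can be added to the helper without colliding with command arguments.)

    class examplepeer(object):
        def _callstream(self, cmd, **args):
            # Thin compatibility wrapper; command arguments arrive as **args.
            return self._sendrequest(cmd, args)

        def _sendrequest(self, cmd, args, framed=False):
            # args is an ordinary dict here, so keyword-only behavior such
            # as framed= can be expressed without ambiguity.
            print('sending %s with %s (framed=%s)' % (cmd, sorted(args), framed))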
Wed, 21 Feb 2018 16:51:09 -0500 help: fix wording describing SSH requirements stable
Josef 'Jeff' Sipek <jeffpc@josefsipek.net> [Wed, 21 Feb 2018 16:51:09 -0500] rev 36365
help: fix wording describing SSH requirements
Thu, 22 Feb 2018 15:18:44 +0800 graphlog: document what "_" and "*" mean stable
Anton Shestakov <av6@dwimlabs.net> [Thu, 22 Feb 2018 15:18:44 +0800] rev 36364
graphlog: document what "_" and "*" mean

Documenting "*" should've been a part of 9b3f95d9783d, but I somehow didn't notice that the symbols are explained in the command's help text.
Mon, 19 Feb 2018 15:57:28 -0800 sshpeer: rename _recv and _send to _readframed and _writeframed
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 19 Feb 2018 15:57:28 -0800] rev 36363
sshpeer: rename _recv and _send to _readframed and _writeframed

Because they read and write a chunk of data with a well-defined size. "recv" and "send" make it sound like things are a direct proxy to the underlying pipe, which they aren't.

Differential Revision: https://phab.mercurial-scm.org/D2378
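(A hedged sketch of the framing these helpers deal with, as described in this series: a frame is the decimal payload length, a newline, then exactly that many bytes. The function bodies below are illustrative, not the actual sshpeer implementation.)

    def _readframed(pipe):
        # "<decimal length>\n" followed by exactly that many payload bytes.
        size = int(pipe.readline())
        return pipe.read(size)

    def _writeframed(pipe, data, flush=False):
        pipe.write(b'%d\n' % len(data))
        pipe.write(data)
        if flush:
            pipe.flush()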
Wed, 21 Feb 2018 13:41:20 -0800 util: add a file object proxy that can read at most N bytes
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 21 Feb 2018 13:41:20 -0800] rev 36362
util: add a file object proxy that can read at most N bytes

Sometimes we have data of a known size within a stream. For performance reasons, we don't want to pre-read this data (we want to allow consumers to read on demand). For simplicity reasons, we don't want callers to necessarily know their data is coming from within an outer stream and that there is a limit to how much they should read.

The class introduced by this commit provides a very simple proxy around an underlying file object that allows the consumer to .read() up to N bytes from the file object. Attempts to read past this many bytes result in a simulated EOF.

Differential Revision: https://phab.mercurial-scm.org/D2377
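(A minimal sketch of the idea, not the exact util.cappedreader code: a proxy that serves at most `limit` bytes from the wrapped file object and then behaves as if it hit EOF.)

    class cappedreadersketch(object):
        def __init__(self, fh, limit):
            self._fh = fh
            self._left = limit

        def read(self, size=-1):
            if self._left <= 0:
                return b''                  # simulated EOF past the cap
            if size < 0 or size > self._left:
                size = self._left
            data = self._fh.read(size)
            self._left -= len(data)
            return data

Wrapped around io.BytesIO(b'abcdef') with a limit of 4, .read() returns b'abcd' and a second .read() returns b'' even though the underlying stream has more data.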
Mon, 05 Feb 2018 15:03:51 +0100 patches: release the GIL while applying the patch
Boris Feld <boris.feld@octobus.net> [Mon, 05 Feb 2018 15:03:51 +0100] rev 36361
patches: release the GIL while applying the patch

This will allow multiple threads to apply patches at the same time.
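(The GIL release itself happens inside the C extension, so it cannot be shown in Python; the sketch below only illustrates the caller-side effect, with applypatch standing in for a call into the patching code rather than any real Mercurial API: once the extension drops the GIL, worker threads can run patch application in parallel.)

    from concurrent.futures import ThreadPoolExecutor

    def applypatch(patchdata):
        # Placeholder for a call into C-level patching code that releases
        # the GIL for the duration of the work.
        return len(patchdata)

    patches = [b'patch-one', b'patch-two', b'patch-three']
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(applypatch, patches))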