Mon, 21 Nov 2016 15:38:56 +0530 py3: update test-check-py3-compat.t output
Pulkit Goyal <7895pulkit@gmail.com> [Mon, 21 Nov 2016 15:38:56 +0530] rev 30469
py3: update test-check-py3-compat.t output This part remains unchanged because it runs in Python 3 only.
Mon, 21 Nov 2016 15:35:22 +0530 py3: use pycompat.sysargv in dispatch.run()
Pulkit Goyal <7895pulkit@gmail.com> [Mon, 21 Nov 2016 15:35:22 +0530] rev 30468
py3: use pycompat.sysargv in dispatch.run() Another change that switches to pycompat.sysargv to get a bytes result from sys.argv on Python 3. This is also part of making `hg version` run on Python 3.
Mon, 21 Nov 2016 15:26:47 +0530 py3: use pycompat.sysargv in scmposix.systemrcpath()
Pulkit Goyal <7895pulkit@gmail.com> [Mon, 21 Nov 2016 15:26:47 +0530] rev 30467
py3: use pycompat.sysargv in scmposix.systemrcpath() sys.argv contains unicode strings on Python 3. We have pycompat.sysargv, which returns bytes encoded with os.fsencode(). After this patch, scmposix.systemrcpath() returns bytes on Python 3. This change is also part of making `hg version` run on Python 3.
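A minimal Python sketch of the idea behind pycompat.sysargv, assuming only what the messages above state (bytes produced by running os.fsencode() over sys.argv); the module layout here is illustrative, not Mercurial's actual code:

    import os
    import sys

    if sys.version_info[0] >= 3:
        # sys.argv holds str on Python 3; encode with the filesystem
        # encoding so callers keep receiving bytes.
        sysargv = list(map(os.fsencode, sys.argv))
    else:
        # On Python 2, sys.argv is already a list of byte strings.
        sysargv = sys.argv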
Sun, 20 Nov 2016 13:50:45 -0800 wireproto: perform chunking and compression at protocol layer (API)
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Nov 2016 13:50:45 -0800] rev 30466
wireproto: perform chunking and compression at protocol layer (API)

Currently, the "streamres" response type is populated with a generator of chunks with compression possibly already applied. This puts the onus on commands to perform chunking and compression. Architecturally, I think this is the wrong place to perform this work. I think commands should say "here is the data" and the protocol layer should take care of encoding the final bytes to put on the wire.

Additionally, upcoming commits will improve wire protocol support for compression. Having a central place for performing compression in the protocol transport layer will be easier than having to deal with compression at the commands layer.

This commit refactors the "streamres" response type to accept either a generator or an object with "read". Additionally, the type now accepts a flag indicating whether the response is a "version 1 compressible" response. This basically identifies all commands currently performing compression. I could have used a special type for this, but a flag works just as well. The argument name foreshadows the introduction of wire protocol changes, hence the "v1".

The code for chunking and compressing has been moved to the output generation function for each protocol transport. Some code has been inlined, resulting in the deletion of now unused methods.
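A hedged sketch of the shape this refactor describes: the command hands back data (a generator or a reader) plus a compressibility flag, and the transport performs the chunking and zlib compression. Class and function names below are illustrative, not the real wireproto API:

    import zlib

    class streamres(object):
        def __init__(self, gen=None, reader=None, v1compressible=False):
            self.gen = gen                        # iterator of byte chunks
            self.reader = reader                  # or an object with read()
            self.v1compressible = v1compressible

    def emitwire(res, chunksize=32768):
        """Yield the final bytes to put on the wire for a streamres."""
        if res.gen is not None:
            chunks = res.gen
        else:
            def readchunks():
                while True:
                    data = res.reader.read(chunksize)
                    if not data:
                        break
                    yield data
            chunks = readchunks()
        if not res.v1compressible:
            for chunk in chunks:
                yield chunk
            return
        # Compression now lives in the transport, not in each command.
        z = zlib.compressobj()
        for chunk in chunks:
            out = z.compress(chunk)
            if out:
                yield out
        yield z.flush()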
Sun, 20 Nov 2016 13:55:53 -0800 httppeer: use compression engine API for decompressing responses
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Nov 2016 13:55:53 -0800] rev 30465
httppeer: use compression engine API for decompressing responses

In preparation for supporting multiple compression formats on the wire protocol, we need all users of the wire protocol to use compression engine APIs. This commit ports the HTTP wire protocol client to use the compression engine API.

The code for handling the HTTPException is a bit hacky. Essentially, HTTPException could be thrown by any read() from the socket. However, as part of porting the API, we no longer have a generator wrapping the socket and we don't have a single place where we can trap the exception. We solve this by introducing a proxy class that intercepts read() and converts the exception appropriately.

In the future, we could introduce a new compression engine API that supports emitting a generator of decompressed chunks. This would eliminate the need for the proxy class. As I said when I introduced the decompressorreader() API, I'm not fond of it and would support transitioning to something better. This can be done as a follow-up, preferably once all the code is using the compression engine API and we have a better idea of the API needs of all the consumers.
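A hedged sketch of the proxy class idea mentioned above: wrap the HTTP response so that an HTTPException raised from read() is converted into an error the caller already handles, letting a decompressor consume the object directly. The class name and the substituted error type are illustrative:

    try:
        import http.client as httplib      # Python 3
    except ImportError:
        import httplib                      # Python 2

    class readproxy(object):
        def __init__(self, resp):
            self._resp = resp

        def read(self, size=-1):
            try:
                return self._resp.read(size)
            except httplib.HTTPException:
                # Surface the failure as a plain I/O error instead of
                # letting the socket-level exception escape.
                raise IOError('HTTP request error (incomplete response)')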
Sat, 19 Nov 2016 18:31:40 -0800 httppeer: do decompression inside _callstream
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 19 Nov 2016 18:31:40 -0800] rev 30464
httppeer: do decompression inside _callstream The current HTTP transport protocol only compresses certain command responses and requires callers of those commands to go through "_callcompressable", which transparently zlib-decompresses the response. Upcoming changes will enable *any* response to be compressed with varying compression formats. In order to handle this better, this commit moves the decompression bits to the main function performing the HTTP request. We introduce an underscore-prefixed argument to denote this behavior so it doesn't conflict with a named argument to a command.
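A toy sketch of the calling convention this describes, assuming a stand-in peer class; the underscore-prefixed keyword is the point of the example, everything else here is illustrative:

    import zlib

    class demopeer(object):
        def _sendrequest(self, cmd, args):
            # Stand-in for the real HTTP round trip; returns compressed
            # bytes so the example runs end to end.
            return zlib.compress(b'response to ' + cmd)

        def _callstream(self, cmd, _compressible=False, **args):
            # The underscore prefix is chosen so the flag is unlikely to
            # collide with a command's own named arguments.
            resp = self._sendrequest(cmd, args)
            if _compressible:
                resp = zlib.decompress(resp)
            return resp

        def _callcompressable(self, cmd, **args):
            # The old entry point becomes a thin wrapper.
            return self._callstream(cmd, _compressible=True, **args)

    print(demopeer()._callcompressable(b'branchmap'))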
Sat, 19 Nov 2016 17:11:12 -0800 keepalive: reorder header precedence
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 19 Nov 2016 17:11:12 -0800] rev 30463
keepalive: reorder header precedence

There are 3 sources of headers used by this function:

* The default headers defined by the URL opener
* Headers that are copied on redirects
* Headers that aren't copied on redirects

Previously, we applied the default headers from the URL opener last. This feels wrong to me as those headers are the most low level and something built on top of the URL opener may wish to override them. So, this commit changes the order to apply them with the least precedence. While I was here, I removed a Python version test that is no longer necessary.
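A minimal sketch of the precedence change using plain dicts: the opener's defaults are applied first so the other header sets can override them. The helper name is made up, and the relative order of the two non-default sets is an assumption here:

    def buildheaders(opener_defaults, redirected, unredirected):
        # Lowest precedence first: later update() calls win.
        headers = dict(opener_defaults)
        headers.update(redirected)
        headers.update(unredirected)
        return headers

    # A caller-supplied User-agent now wins over the opener default.
    print(buildheaders({'User-agent': 'Python-urllib'},
                       {'User-agent': 'mercurial/proto-1.0'},
                       {'Connection': 'keep-alive'}))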
Sat, 19 Nov 2016 10:54:21 -0800 debuginstall: print compression engine support
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 19 Nov 2016 10:54:21 -0800] rev 30462
debuginstall: print compression engine support Since compression engines may be provided by extensions and since not all registered compression engines may be available to use, it seems useful to provide a mechanism to see the state of known compression engines. This commit teaches `hg debuginstall` to print info on known and available compression engines.
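A hedged sketch of what such a report could look like, assuming a made-up engine registry with name() and available() methods (not Mercurial's real compression engine API):

    import sys

    class demoengine(object):
        def __init__(self, name, available):
            self._name, self._available = name, available

        def name(self):
            return self._name

        def available(self):
            return self._available

    def printcompengines(write, engines):
        write('checking registered compression engines (%s)\n'
              % ', '.join(sorted(e.name() for e in engines)))
        write('checking available compression engines (%s)\n'
              % ', '.join(sorted(e.name() for e in engines
                                 if e.available())))

    printcompengines(sys.stdout.write,
                     [demoengine('zlib', True), demoengine('zstd', False)])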
Sun, 20 Nov 2016 16:56:21 -0800 bdiff: don't check border condition in loop
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Nov 2016 16:56:21 -0800] rev 30461
bdiff: don't check border condition in loop

This is pretty much a copy of d500ddae7494, just to a different loop. The condition `p == plast` (`plast == a + len - 1`) was only true on the final iteration of the loop. So it was wasteful to check for it on every iteration. We decrease the iteration count by 1 and add an explicit check for `p == plast` after the loop.

Again, we see modest wins. From the mozilla-unified repository:

$ perfbdiff -m 3041e4d59df2
! wall 0.035502 comb 0.040000 user 0.040000 sys 0.000000 (best of 100)
! wall 0.030480 comb 0.030000 user 0.030000 sys 0.000000 (best of 100)

$ perfbdiff 0e9928989e9c --alldata --count 100
! wall 4.097394 comb 4.100000 user 4.100000 sys 0.000000 (best of 3)
! wall 3.597798 comb 3.600000 user 3.600000 sys 0.000000 (best of 3)

The 2nd example throws a total of ~3.3GB of data at bdiff. This change increases the throughput from ~811 MB/s to ~924 MB/s.
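A Python illustration of the loop transformation (the actual change is in Mercurial's C bdiff code): the border condition can only hold on the final iteration, so it is cheaper to run one fewer iteration and test the last element once, after the loop. The line-counting example is simplified and illustrative:

    def countlines_naive(buf):
        count = 0
        last = len(buf) - 1
        for i, ch in enumerate(buf):
            # The border check runs on every iteration even though it can
            # only matter for the final byte.
            if ch == ord('\n') or i == last:
                count += 1
        return count

    def countlines_hoisted(buf):
        count = 0
        for ch in buf[:-1]:        # one fewer iteration, no border check
            if ch == ord('\n'):
                count += 1
        if buf:
            count += 1             # the last byte always closes a line
        return count

    assert countlines_naive(b'a\nb') == countlines_hoisted(b'a\nb') == 2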
Sat, 19 Nov 2016 15:41:37 -0800 conflicts: make spacing consistent in conflict markers
Kostia Balytskyi <ikostia@fb.com> [Sat, 19 Nov 2016 15:41:37 -0800] rev 30460
conflicts: make spacing consistent in conflict markers The way the default marker template was defined before this patch, the spacing before the dash in conflict markers depended on whether the changeset is the tip or not. This is the relevant part of the template: '{ifeq(tags, "tip", "", "{tags} ")}' If the revision is the tip revision with no other tags, this resolves to an empty string, but for revisions which are not tip and don't have any other tags, this resolves to a single-space string. In the end this causes weirdness like the ones you can see in the affected tests. This is not a big deal, but double spacing may be visually less pleasant. Please note that test changes where commit hashes change are the result of marking files as resolved without removing markers.
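A small Python mock-up of the spacing issue (the real logic lives in Mercurial's template language, so this only mimics the ifeq() branch; the surrounding label text is invented):

    def markerlabel(tags):
        # Mimics '{ifeq(tags, "tip", "", "{tags} ")}' from the default
        # marker template.
        tagpart = '' if tags == 'tip' else tags + ' '
        return 'working copy: 1234567890ab %s- test: description' % tagpart

    print(repr(markerlabel('tip')))   # one space before the dash
    print(repr(markerlabel('')))      # two spaces before the dash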