Danek Duvall <danek.duvall@oracle.com> [Tue, 22 Nov 2016 13:32:05 -0800] rev 30498
zstd: fix compilation with Solaris Studio
Without these changes, Solaris Studio (12.4) gives us "syntax error: empty
declaration" on these two lines.
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 21:36:46 -0500] rev 30497
cmdutil: turn forward of checkunresolved into a deprecation warning
As with dirstateguard, I really doubt anyone outside core was using
this; a grep over the repositories I keep locally suggests nobody
was. If others are comfortable with it, let's drop the forward
entirely.
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 21:32:55 -0500] rev 30496
localrepo: refer to checkunresolved by its new name
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 21:32:39 -0500] rev 30495
rebase: refer to checkunresolved by its new name
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 21:31:45 -0500] rev 30494
checkunresolved: move to new package to help avoid import cycles
This will allow localrepo to stop using cmdutil, which should avoid
some future import cycles. There's room for an adventurous soul to
delve deeper into merge.py and figure out how to disentangle more of
it - it appears to be a nexus of cycle problems. Some of it might be
able to move into this new mergeutil package.
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 21:16:54 -0500] rev 30493
cmdutil: mark dirstateguard as deprecated
I sincerely doubt this is used in external code, as grepping the
extensions I keep locally (including Facebook's hgexperimental and
evolve) indicates nobody outside of core uses this. As such, I'd also
welcome just dropping this name forward entirely.
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 21:06:34 -0500] rev 30492
localrepo: refer to dirstateguard by its new name
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 21:06:22 -0500] rev 30491
commands: refer to dirstateguard by its new name
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 21:27:12 -0500] rev 30490
rebase: refer to dirstateguard by its new name
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 21:05:52 -0500] rev 30489
mq: refer to dirstateguard by its new name
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 21:29:32 -0500] rev 30488
dirstateguard: move to new module so I can break some layering violations
Recently in a review I noticed that localrepo has almost no reason to
import cmdutil anymore. Also, cmdutil is a little on the enormous
side, so breaking this class out strikes me as a win.
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 22:17:45 -0500] rev 30487
keepalive: discard legacy Python support for error handling
We never changed the behavior defined by this attribute anyway, so
just jettison all of this support.
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 21:52:19 -0500] rev 30486
mergemod: drop support for merge.update without a target
This was to be deleted after 3.9.
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 21:51:23 -0500] rev 30485
dispatch: stop supporting non-use of @command
We said we'd delete this after 3.8. It's time.
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 21 Nov 2016 20:12:51 -0800] rev 30484
httppeer: document why super() isn't used
Adding a follow-up to document the lack of super(), per Augie's
request.
Stanislau Hlebik <stash@fb.com> [Thu, 17 Nov 2016 00:59:41 -0800] rev 30483
exchange: add `_getbookmarks()` function
This function will be used to generate the bookmarks bundle2 part.
It is a separate function in order to make it easy to override in
extensions. Passing `kwargs` to the function makes it easy to add
new parameters in extensions.
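A rough sketch of such a hook point (the body below is an assumption;
only the function name and the `kwargs` convention come from the patch):

    def _getbookmarks(repo, **kwargs):
        # Yield (bookmark, binary node) pairs; extensions can wrap this
        # and pick up new parameters from **kwargs later.
        for book, node in sorted(repo._bookmarks.iteritems()):
            yield book, node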
Stanislau Hlebik <stash@fb.com> [Thu, 17 Nov 2016 00:59:41 -0800] rev 30482
bookmarks: use listbinbookmarks() in listbookmarks()
Stanislau Hlebik <stash@fb.com> [Thu, 17 Nov 2016 00:59:41 -0800] rev 30481
bookmarks: introduce listbinbookmarks()
The `bookmarks` bundle2 part will work with binary nodes. To avoid
unnecessary conversions between binary and hex nodes, let's add
`listbinbookmarks()`, which returns binary nodes. For now this function
is a copy-paste of listbookmarks(); the next patch removes the
duplication.
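The split could then look roughly like this (a sketch; the module
layout is an assumption):

    from mercurial.node import hex

    def listbookmarks(repo):
        # Hex-encoding wrapper around the binary variant, kept for
        # existing callers that expect hex nodes.
        return dict((book, hex(node))
                    for book, node in listbinbookmarks(repo))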
Kostia Balytskyi <ikostia@fb.com> [Mon, 21 Nov 2016 16:22:26 -0800] rev 30480
ui: add configoverride context manager
I feel like this idea might've been discussed before, so please
feel free to point me to the right mailing list entry to read
about why it should not be done.
We have a common pattern of the following code:

    backup = ui.backupconfig(section, name)
    try:
        ui.setconfig(section, name, temporaryvalue, source)
        do_something()
    finally:
        ui.restoreconfig(backup)

IMO, this looks better:

    with ui.configoverride({(section, name): temporaryvalue}, source):
        do_something()
This becomes especially convenient when one has to back up multiple
config values before doing something. In that case, adding a new value
to back up requires a codemod in three places.
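For reference, one plausible implementation on top of the existing
primitives (a sketch, not necessarily the committed code):

    import contextlib

    @contextlib.contextmanager
    def configoverride(self, overrides, source=''):
        # Back up every affected value, apply the temporary ones, and
        # restore all backups on the way out - even on exceptions.
        backups = [self.backupconfig(section, name)
                   for section, name in overrides]
        try:
            for (section, name), value in overrides.items():
                self.setconfig(section, name, value, source)
            yield
        finally:
            for backup in backups:
                self.restoreconfig(backup)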
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 18:17:02 -0500] rev 30479
archival: simplify code and drop message about Python 2.5
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 17:52:32 -0500] rev 30478
bugzilla: stop mentioning Pythons older than 2.6
We don't support those anyway.
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 17:51:39 -0500] rev 30477
tests: update sitecustomize to use uuid1() instead of randrange()
The comments mention that uuid would be better, so let's go ahead and
make good on an old idea.
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 17:48:13 -0500] rev 30476
win32mbcs: drop code that was catering to Python 2.3 and earlier
Augie Fackler <augie@google.com> [Mon, 21 Nov 2016 17:47:11 -0500] rev 30475
httppeer: drop an except block that says it happens only on Python 2.3
Yuya Nishihara <yuya@tcha.org> [Fri, 21 Oct 2016 00:03:46 +0900] rev 30474
windows: do not replace sys.stdout by winstdout
Now we use util.stdout everywhere.
Yuya Nishihara <yuya@tcha.org> [Thu, 20 Oct 2016 23:53:36 +0900] rev 30473
py3: bulk replace sys.stdin/out/err by util's
Almost all sys.stdin/out/err in hgext/ and mercurial/ are replaced by util's.
There are a few exceptions:
- lsprof.py and statprof.py are untouched since they are a kind of vendor
code and they never import mercurial modules right now.
- ui._readline() needs to replace sys.stdin and stdout to pass them to
raw_input(). We'll need another workaround here.
Yuya Nishihara <yuya@tcha.org> [Thu, 20 Oct 2016 23:40:24 +0900] rev 30472
py3: provide bytes stdin/out/err through util module
Since standard streams are TextIO on Python 3, we can't use sys.stdin/out/err
directly. Fortunately we can get the underlying binary buffer via
.buffer, as long as the streams aren't replaced by e.g. StringIO.
stdin/out/err are provided through util so we can wrap them by platform API.
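The core of the idea is small; roughly (a sketch, not the exact
pycompat code):

    import sys

    # On Python 3 the standard streams are TextIO; .buffer exposes the
    # underlying binary stream. On Python 2 they are bytes I/O already.
    stdin = getattr(sys.stdin, 'buffer', sys.stdin)
    stdout = getattr(sys.stdout, 'buffer', sys.stdout)
    stderr = getattr(sys.stderr, 'buffer', sys.stderr)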
Yuya Nishihara <yuya@tcha.org> [Fri, 21 Oct 2016 00:09:38 +0900] rev 30471
util: rewrite pycompat imports to make pyflakes always happy
I'll add more imports which would confuse pyflakes.
Yuya Nishihara <yuya@tcha.org> [Thu, 20 Oct 2016 23:27:09 +0900] rev 30470
windows: do not replace sys.__stdout__
Now we don't use sys.__stdout__ except for getting its fileno(), so we no
longer have to wrap it by winstdout.
This helps adding pycompat.stdin/out/err.
Pulkit Goyal <7895pulkit@gmail.com> [Mon, 21 Nov 2016 15:38:56 +0530] rev 30469
py3: update test-check-py3-compat.t output
This part remains unchanged because it runs in Python 3 only.
Pulkit Goyal <7895pulkit@gmail.com> [Mon, 21 Nov 2016 15:35:22 +0530] rev 30468
py3: use pycompat.sysargv in dispatch.run()
Another change to get a bytes result from sys.argv on Python 3.
This is also part of making `hg version` run on Python 3.
Pulkit Goyal <7895pulkit@gmail.com> [Mon, 21 Nov 2016 15:26:47 +0530] rev 30467
py3: use pycompat.sysargv in scmposix.systemrcpath()
sys.argv contains unicode strings on Python 3. We have
pycompat.sysargv, which returns bytes encoded using os.fsencode().
After this patch scmposix.systemrcpath() returns bytes in the Python 3
world. This change is also part of making `hg version` run on Python 3.
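The idea behind pycompat.sysargv, roughly (a sketch):

    import os
    import sys

    if sys.version_info[0] >= 3:
        # os.fsencode() mirrors the decoding Python applied to argv,
        # so this recovers the original bytes on POSIX.
        sysargv = list(map(os.fsencode, sys.argv))
    else:
        sysargv = sys.argv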
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Nov 2016 13:50:45 -0800] rev 30466
wireproto: perform chunking and compression at protocol layer (API)
Currently, the "streamres" response type is populated with a generator
of chunks with compression possibly already applied. This puts the onus
on commands to perform chunking and compression. Architecturally, I
think this is the wrong place to perform this work. I think commands
should say "here is the data" and the protocol layer should take care
of encoding the final bytes to put on the wire.
Additionally, upcoming commits will improve wire protocol support for
compression. Having a central place for performing compression in the
protocol transport layer will be easier than having to deal with
compression at the commands layer.
This commit refactors the "streamres" response type to accept either
a generator or an object with a read() method. Additionally, the type now
accepts a flag indicating whether the response is a "version 1
compressible" response. This basically identifies all commands
currently performing compression. I could have used a special type
for this, but a flag works just as well. The argument name
foreshadows the introduction of wire protocol changes, hence the "v1."
The code for chunking and compressing has been moved to the output
generation function for each protocol transport. Some code has been
inlined, resulting in the deletion of now unused methods.
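In outline, the refactored type becomes a plain container, something
like this (a sketch matching the description above; the defaults are
assumptions):

    class streamres(object):
        """wire protocol response streamed to the client

        Accepts either a generator of chunks or an object with a read()
        method; chunking and compression now happen in the transport.
        """
        def __init__(self, gen=None, reader=None, v1compressible=False):
            self.gen = gen
            self.reader = reader
            self.v1compressible = v1compressible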
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Nov 2016 13:55:53 -0800] rev 30465
httppeer: use compression engine API for decompressing responses
In preparation for supporting multiple compression formats on the
wire protocol, we need all users of the wire protocol to use
compression engine APIs.
This commit ports the HTTP wire protocol client to use the
compression engine API.
The code for handling the HTTPException is a bit hacky. Essentially,
HTTPException could be thrown by any read() from the socket. However,
as part of porting the API, we no longer have a generator wrapping
the socket and we don't have a single place where we can trap the
exception. We solve this by introducing a proxy class that intercepts
read() and converts the exception appropriately.
In the future, we could introduce a new compression engine API that
supports emitting a generator of decompressed chunks. This would
eliminate the need for the proxy class. As I said when I introduced
the decompressorreader() API, I'm not fond of it and would support
transitioning to something better. This can be done as a follow-up,
preferably once all the code is using the compression engine API and
we have a better idea of the API needs of all the consumers.
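The proxy is conceptually simple; a minimal sketch (the class name and
error message are assumptions):

    import httplib  # http.client on Python 3

    class _readproxy(object):
        # Forward read() to the real response and translate
        # HTTPException - which any read() from the socket can raise -
        # into an error callers already handle.
        def __init__(self, fh):
            self._fh = fh

        def read(self, size=-1):
            try:
                return self._fh.read(size)
            except httplib.HTTPException:
                raise IOError(None, 'connection ended unexpectedly')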
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 19 Nov 2016 18:31:40 -0800] rev 30464
httppeer: do decompression inside _callstream
The current HTTP transport protocol only compresses certain command
responses and requires callers of those commands to use
"_callcompressable", which transparently zlib-decompresses the
response.
Upcoming changes will enable *any* response to be compressed with
varying compression formats. In order to handle this better, this
commit moves the decompression bits to the main function performing
the HTTP request. We introduce an underscore-prefixed argument to
denote this behavior so it doesn't conflict with a named argument
to a command.
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 19 Nov 2016 17:11:12 -0800] rev 30463
keepalive: reorder header precedence
There are 3 sources of headers used by this function:
* The default headers defined by the URL opener
* Headers that are copied on redirects
* Headers that aren't copied on redirects
Previously, we applied the default headers from the URL
opener last. This feels wrong to me as those headers are
the most low level and something built on top of the URL
opener may wish to override them. So, this commit changes
the order to apply them with the least precedence.
While I was here, I removed a Python version test that is
no longer necessary.
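The resulting precedence boils down to three dict updates, lowest
precedence first (a sketch; the attribute names follow urllib2
conventions):

    def _requestheaders(opener, req):
        # Opener defaults first, then headers copied on redirects
        # (req.headers), then headers that aren't copied on redirects
        # (req.unredirected_hdrs) - later updates win.
        headers = dict(opener.addheaders)  # list of (name, value)
        headers.update(req.headers)
        headers.update(req.unredirected_hdrs)
        return headers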
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 19 Nov 2016 10:54:21 -0800] rev 30462
debuginstall: print compression engine support
Since compression engines may be provided by extensions and since
not all registered compression engines may be available to use,
it seems useful to provide a mechanism to see the state of known
compression engines.
This commit teaches `hg debuginstall` to print info on known and
available compression engines.
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Nov 2016 16:56:21 -0800] rev 30461
bdiff: don't check border condition in loop
This is pretty much a copy of d500ddae7494, just to a different loop.
The condition `p == plast` (`plast == a + len - 1`) was only true on
the final iteration of the loop. So it was wasteful to check for it
on every iteration. We decrease the iteration count by 1 and add an
explicit check for `p == plast` after the loop.
Again, we see modest wins.
From the mozilla-unified repository:
$ perfbdiff -m 3041e4d59df2
! wall 0.035502 comb 0.040000 user 0.040000 sys 0.000000 (best of 100)
! wall 0.030480 comb 0.030000 user 0.030000 sys 0.000000 (best of 100)
$ perfbdiff 0e9928989e9c --alldata --count 100
! wall 4.097394 comb 4.100000 user 4.100000 sys 0.000000 (best of 3)
! wall 3.597798 comb 3.600000 user 3.600000 sys 0.000000 (best of 3)
The 2nd example throws a total of ~3.3GB of data at bdiff. This
change increases the throughput from ~811 MB/s to ~924 MB/s.
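Transliterated to Python for illustration (the real code is C; `step`
and `finalize` here are stand-ins):

    def process(items, step, finalize):
        # The loop previously tested for the final element on every
        # iteration; looping over one element less and handling the
        # last element explicitly pays that check only once.
        for item in items[:-1]:
            step(item)
        if items:
            finalize(items[-1])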
Kostia Balytskyi <ikostia@fb.com> [Sat, 19 Nov 2016 15:41:37 -0800] rev 30460
conflicts: make spacing consistent in conflict markers
The way the default marker template was defined before this patch,
the spacing before the dash in conflict markers depended on whether
the changeset was a tip one or not. This is the relevant part of the
template:
'{ifeq(tags, "tip", "", "{tags} ")}'
If the revision is a tip revision with no other tags, this resolves
to an empty string, but for revisions which are not tip and don't have
any other tags, it resolves to a single-space string. In the end this
causes weirdnesses like the ones you can see in the affected tests.
This is not a big deal, but the double spacing may be visually less
pleasant.
Please note that test changes where commit hashes change are
the result of marking files as resolved without removing markers.
Durham Goode <durham@fb.com> [Thu, 10 Nov 2016 09:21:41 -0800] rev 30459
rebase: move bookmark update to before rebase clearing
Bookmark fixing should probably happen before the rebase starts to clean up, so
let's move it before clearrebased. This will also help a future patch where we
want to add more clearing logic to the existing clear section.
Gábor Stefanik <gabor.stefanik@nng.com> [Fri, 28 Oct 2016 17:44:28 +0200] rev 30458
setup: include a dummy $PATH in the custom environment used by build.py
This is required for building with pypiwin32, the pip-installable replacement
for pywin32.
Kostia Balytskyi <ikostia@fb.com> [Fri, 11 Nov 2016 07:01:27 -0800] rev 30457
shelve: move unshelve-finishing logic to a separate function
Finishing unshelve involves two steps now:
- stripping the changelog
- aborting the transaction
Obs-based shelve will not require these things, so isolating this logic
into a separate function, where the normal/obs-shelve branching is
going to be implemented, seems like a nice idea.
Behavior-wise, this change moves 'unshelvecleanup' from between
changelog stripping and transaction abort to after them.
I don't think this has any negative effects.
Kostia Balytskyi <ikostia@fb.com> [Thu, 10 Nov 2016 11:02:39 -0800] rev 30456
shelve: move file-forgetting logic to a separate function
This is just a readability improvement.
Kostia Balytskyi <ikostia@fb.com> [Thu, 10 Nov 2016 10:57:10 -0800] rev 30455
shelve: move rebasing logic to a separate function
Rebasing the restored shelved commit onto the right destination is
done differently in traditional and obs-based unshelve:
- for traditional, we just rebase it
- for obs-based, we need to check whether a successor of
the restored commit already exists in the destination (this
might happen when unshelving twice on the same destination)
This is the reason why this piece of logic should be in its own
function: to not have excessive complexity in the main function.
Kostia Balytskyi <ikostia@fb.com> [Thu, 10 Nov 2016 10:51:06 -0800] rev 30454
shelve: move commit restoration logic to a separate function
Kostia Balytskyi <ikostia@fb.com> [Sun, 13 Nov 2016 03:35:52 -0800] rev 30453
shelve: move temporary commit creation to a separate function
Committing working copy changes before rebasing a shelved commit
on top of them is an independent piece of behavior, which fits
into its own function.
Similar to the previous series, this and a couple of following
patches are for unshelve refactoring.
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 17 Nov 2016 20:30:00 -0800] rev 30452
commands: print chunk type in debugrevlog
Each data entry ("chunk") in a revlog has a type based on the first
byte of the data. This type indicates how to interpret the data.
This seems like a useful thing to be able to query through a debug
command. So let's add that to `hg debugrevlog`.
This does make `hg debugrevlog` slightly slower, as it has to read
more than just the index. However, even on the mozilla-unified
manifest (which is ~200MB spread over ~350K revisions), this takes
<400ms.
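For context, the classic type-byte scheme looks roughly like this
(a sketch):

    def chunktype(chunk):
        # The first byte of a revlog chunk says how to interpret it:
        #   '\0' - raw data, stored as-is
        #   'u'  - uncompressed data behind a one-byte marker
        #   'x'  - zlib-compressed data (zlib streams begin with 'x')
        return chunk[0:1]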
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 17 Nov 2016 20:17:51 -0800] rev 30451
perf: add command for measuring revlog chunk operations
Upcoming commits will teach revlogs to leverage the new compression
engine API so that new compression formats can more easily be
leveraged in revlogs. We want to be sure this refactoring doesn't
regress performance. So this commit introduces "perfrevlogchunks" to
explicitly test performance of reading, decompressing, and
recompressing revlog chunks.
Here is output when run on the mozilla-unified repo:
$ hg perfrevlogchunks -c
! read
! wall 0.346603 comb 0.350000 user 0.340000 sys 0.010000 (best of 28)
! read w/ reused fd
! wall 0.337707 comb 0.340000 user 0.320000 sys 0.020000 (best of 30)
! read batch
! wall 0.013206 comb 0.020000 user 0.000000 sys 0.020000 (best of 221)
! read batch w/ reused fd
! wall 0.013259 comb 0.030000 user 0.010000 sys 0.020000 (best of 222)
! chunk
! wall 1.909939 comb 1.910000 user 1.900000 sys 0.010000 (best of 6)
! chunk batch
! wall 1.750677 comb 1.760000 user 1.740000 sys 0.020000 (best of 6)
! compress
! wall 5.668004 comb 5.670000 user 5.670000 sys 0.000000 (best of 3)
$ hg perfrevlogchunks -m
! read
! wall 0.365834 comb 0.370000 user 0.350000 sys 0.020000 (best of 26)
! read w/ reused fd
! wall 0.350160 comb 0.350000 user 0.320000 sys 0.030000 (best of 28)
! read batch
! wall 0.024777 comb 0.020000 user 0.000000 sys 0.020000 (best of 119)
! read batch w/ reused fd
! wall 0.024895 comb 0.030000 user 0.000000 sys 0.030000 (best of 118)
! chunk
! wall 2.514061 comb 2.520000 user 2.480000 sys 0.040000 (best of 4)
! chunk batch
! wall 2.380788 comb 2.380000 user 2.360000 sys 0.020000 (best of 5)
! compress
! wall 9.815297 comb 9.820000 user 9.820000 sys 0.000000 (best of 3)
We already see some interesting data, such as how much slower
non-batched chunk reading is and that zlib compression appears to be
>2x slower than decompression.
I didn't have the data when I wrote this commit message, but I ran this
on Mozilla's NFS-based Mercurial server and the time for reading with a
reused file descriptor was faster. So I think it is worth testing both
with and without file descriptor reuse so we can make informed
decisions about recycling file descriptors.
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 17 Nov 2016 20:09:10 -0800] rev 30450
setup: add flag to build_ext to control building zstd
Downstream packagers will inevitably want to disable building the
vendored python-zstandard Python package. Rather than force them
to patch setup.py, let's give them a knob to use.
distutils Command classes support defining custom options. It requires
setting certain class attributes (yes, class attributes: instance
attributes don't work because the class type is consulted before it
is instantiated).
We already have a custom child class of build_ext, so we set these
class attributes, implement some scaffolding, and override
build_extensions to filter the Extension instance for the zstd
extension if the `--no-zstd` argument is specified.
Example usage:
$ python setup.py build_ext --no-zstd
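For packagers curious about the mechanics, the distutils side looks
roughly like this (a sketch; the zstd extension's module name is an
assumption):

    from distutils.command.build_ext import build_ext as _build_ext

    class build_ext(_build_ext):
        # distutils consults these *class* attributes before the
        # command is instantiated, hence no instance attributes here.
        user_options = _build_ext.user_options + [
            ('no-zstd', None, 'do not build the zstd extension'),
        ]
        boolean_options = _build_ext.boolean_options + ['no-zstd']

        def initialize_options(self):
            self.no_zstd = False
            return _build_ext.initialize_options(self)

        def build_extensions(self):
            if self.no_zstd:
                self.extensions = [e for e in self.extensions
                                   if e.name != 'mercurial.zstd']
            return _build_ext.build_extensions(self)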
Jun Wu <quark@fb.com> [Wed, 09 Nov 2016 16:01:34 +0000] rev 30449
drawdag: update test repos by drawing the changelog DAG in ASCII
Currently, we have "debugbuilddag", which is a powerful tool to build
test cases but is not intuitive. We may end up running "hg log" in the
test to make the test more readable.
This patch adds a "drawdag" extension with a "debugdrawdag" command for
a similar testing purpose. Unlike the cryptic "debugbuilddag" command,
it reads an ASCII graph that is intuitive to humans, so the test case
can be more readable. A small example follows below.
Unlike "debugbuilddag", "drawdag" does not require an empty repo, so it
can be used to add new changesets to an existing repo.
Since the "drawdag" logic is not trivial and only makes sense for
testing purposes, the extension is added to the "tests" directory, to
keep the core logic clean. If we find it useful (for example, to
demonstrate cases and help users understand some cases) and want to
ship it by default in the future, we can move it to a ship-by-default
"debugdrawdag" at that time.
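For flavor, here is a small hypothetical graph of the kind the command
reads, with ancestors at the bottom (the exact grammar accepted by
drawdag may differ):

    c d
    |/
    b
    |
    a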
Mads Kiilerich <madski@unity3d.com> [Wed, 14 Jan 2015 01:15:26 +0100] rev 30448
posix: give checklink a fast path that caches the check file and is read-only
util.checklink would create a symlink and remove it again. That would
sometimes happen multiple times. Write operations are relatively
expensive and cause disk wear and noise for applications monitoring
file system activity.
Instead of creating a symlink and deleting it again, just create it
once and leave it in .hg/cache/check-link. If the file exists, just
verify that os.path.islink reports true. We will assume that this
check is as good as symlink creation not failing.
Note: the symlink left in .hg/cache has to resolve to a file -
otherwise 'make dist' will fail ...
test-symlink-os-yes-fs-no.py does some monkey patching to simulate a
platform without symlink support. The slightly different testing
method requires additional monkeying.
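A condensed sketch of the fast path (the target-file handling and
names beyond .hg/cache/check-link are assumptions):

    import os

    def checklink(path):
        # Fast path: a symlink cached by an earlier run is proof enough.
        cache = os.path.join(path, '.hg', 'cache', 'check-link')
        if os.path.islink(cache):
            return True
        # Slow path: create the link once and leave it behind. It must
        # resolve to a real file, or 'make dist' breaks.
        target = os.path.join(path, '.hg', 'cache', 'checklink-target')
        try:
            open(target, 'a').close()
            os.symlink(os.path.basename(target), cache)
            return True
        except (OSError, AttributeError):
            return False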
Mads Kiilerich <madski@unity3d.com> [Thu, 17 Nov 2016 12:59:36 +0100] rev 30447
posix: move checklink test file to .hg/cache
This avoids unnecessary churn in the working directory.
It is not necessarily a fully valid assumption that .hg/cache is on the same
filesystem as the working directory, but I think it is an acceptable
approximation. It could also be the case that different parts of the
working directory are on different mount points, so checking in the
root folder could also be wrong.
Mads Kiilerich <madski@unity3d.com> [Wed, 14 Jan 2015 01:15:26 +0100] rev 30446
posix: give checkexec a fast path; keep the check files and test read only
Before, Mercurial would create a new temporary file every time, stat it, change
its exec mode, stat it again, and delete it. Most of this dance was done to
handle the rare and not-so-essential case of VFAT mounts on unix. The cost of
that was paid by the much more common and important case of using normal file
systems.
Instead, try to create and preserve .hg/cache/checkisexec and
.hg/cache/checknoexec with and without exec flag set. If the files exist and
have correct exec flags set, we can conclude that that file system supports the
exec flag. Best case, the whole exec check can thus be done with two stat
calls. Worst case, we delete the wrong files and check as usual. That
can happen because of temporary loss of the exec bit, or on file
systems without support for the exec bit. In that case we check as we
did before, with the additional overhead of one extra stat call.
It is possible that this different test algorithm will, in some cases
on odd file systems, give different behaviour. Again, I think those
will be rare and special cases, and I think it is worth the risk.
test-clone.t happens to show the situation where checkisexec is left
behind from the old-style check, while checknoexec will only be
created the next time an exec check is performed.
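In outline, the fast path costs two stat calls (a sketch; the function
name is an assumption):

    import os
    import stat

    def fastcheckexec(cachedir):
        # Returns True if the cached files prove exec support, or None
        # to fall back to the full create-and-chmod check (not shown).
        try:
            m = os.stat(os.path.join(cachedir, 'checkisexec')).st_mode
            n = os.stat(os.path.join(cachedir, 'checknoexec')).st_mode
        except OSError:
            return None
        if m & stat.S_IXUSR and not n & stat.S_IXUSR:
            return True
        return None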
Mads Kiilerich <madski@unity3d.com> [Wed, 14 Jan 2015 01:15:26 +0100] rev 30445
posix: simplify checkexec check
Use a slightly simpler logic that in some cases can avoid an unnecessary chmod
and stat.
Instead of flipping the X bits, make it more clear that we rely on no X bits
being set on initial file creation, and that at least some of them stick after
they all have been set.
Mads Kiilerich <madski@unity3d.com> [Thu, 17 Nov 2016 12:59:36 +0100] rev 30444
posix: move checkexec test file to .hg/cache
This avoids unnecessary churn in the working directory.
It is not necessarily a fully valid assumption that .hg/cache is on the same
filesystem as the working directory, but I think it is an acceptable
approximation. It could also be the case that different parts of the
working directory are on different mount points, so checking in the
root folder could also be wrong.
Durham Goode <durham@fb.com> [Thu, 17 Nov 2016 15:31:19 -0800] rev 30443
manifest: move manifestctx creation into manifestlog.get()
Most manifestctx creation already happened in manifestlog.get(), but there was
one spot in the manifestctx class itself that created an instance manually. This
patch makes that one instance go through the manifestlog. This means extensions
can just wrap manifestlog.get() and it will cover all manifestctx creations. It
also means this code path now hits the manifestlog cache.
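This means an extension can observe every manifestctx creation with a
single wrapper, roughly (a sketch; the get() signature reflects this
era of the API and may differ):

    from mercurial import extensions, manifest

    def _wrappedget(orig, self, dir, node):
        ctx = orig(self, dir, node)
        # every manifestctx now passes through here
        return ctx

    def extsetup(ui):
        extensions.wrapfunction(manifest.manifestlog, 'get', _wrappedget)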
Gregory Szorc <gregory.szorc@gmail.com> [Fri, 11 Nov 2016 01:10:07 -0800] rev 30442
util: implement zstd compression engine
Now that zstd is vendored and being built (in some configurations), we
can implement a compression engine for zstd!
The zstd engine is a little different from existing engines. Because
it may not always be present, we have to defer loading the module in case
importing it fails. We facilitate this via a cached property that holds
a reference to the module or None. The "available" method is
implemented to reflect reality.
The zstd engine declares its ability to handle bundles using the
"zstd" human name and the "ZS" internal name. The latter was chosen
because internal names are 2 characters (by convention only, I think)
and "ZS" seems reasonable.
The engine, like others, supports specifying the compression level.
However, there are no consumers of this API that yet pass in that
argument. I have plans to change that, so stay tuned.
Since all we need to do to support bundle generation with a new
compression engine is implement and register the compression engine,
bundle generation with zstd "just works!" Tests demonstrating this
have been added.
How does performance of zstd for bundle generation compare? On the
mozilla-unified repo, `hg bundle --all -t <engine>-v2` yields the
following on my i7-6700K on Linux:
engine       CPU time  bundle size    vs orig size  throughput
none         97.0s     4,054,405,584  100.0%        41.8 MB/s
bzip2 (l=9)  393.6s    975,343,098    24.0%         10.3 MB/s
gzip (l=6)   184.0s    1,140,533,074  28.1%         22.0 MB/s
zstd (l=1)   108.2s    1,119,434,718  27.6%         37.5 MB/s
zstd (l=2)   111.3s    1,078,328,002  26.6%         36.4 MB/s
zstd (l=3)   113.7s    1,011,823,727  25.0%         35.7 MB/s
zstd (l=4)   116.0s    1,008,965,888  24.9%         35.0 MB/s
zstd (l=5)   121.0s    977,203,148    24.1%         33.5 MB/s
zstd (l=6)   131.7s    927,360,198    22.9%         30.8 MB/s
zstd (l=7)   139.0s    912,808,505    22.5%         29.2 MB/s
zstd (l=12)  198.1s    854,527,714    21.1%         20.5 MB/s
zstd (l=18)  681.6s    789,750,690    19.5%         5.9 MB/s
On compression, zstd for bundle generation delivers:
* better compression than gzip with significantly less CPU utilization
* better than bzip2 compression ratios while still being significantly
faster than gzip
* ability to aggressively tune compression level to achieve
significantly smaller bundles
That last point is important. With clone bundles, a server can
pre-generate a bundle file, upload it to a static file server, and
redirect clients to transparently download it during clone. The server
could choose to produce a zstd bundle with the highest compression
settings possible. This would take a very long time - an order of
magnitude longer than a typical zstd bundle generation - but the
result would
be hundreds of megabytes smaller! For the clone volume we do at
Mozilla, this could translate to petabytes of bandwidth savings
per year and faster clones (due to smaller transfer size).
I don't have detailed numbers to report on decompression. However,
zstd decompression is fast: >1 GB/s output throughput on this machine,
even through the Python bindings. And it can do that regardless of the
compression level of the input. By the time you have enough data to
worry about overhead of decompression, you have plenty of other things
to worry about performance wise.
zstd wins all around. I can't wait to implement support for it
on the wire protocol and in revlogs.
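Put together with the previous patches, an engine ends up shaped
roughly like this (a sketch against the engine API described in this
series; the exact base-class and registry names are assumptions):

    from mercurial import util

    class _zstdengine(util.compressionengine):
        def name(self):
            return 'zstd'

        @util.propertycache
        def _module(self):
            # Defer the import: pure-Python installs won't have the
            # compiled extension.
            try:
                from mercurial import zstd
                return zstd
            except ImportError:
                return None

        def available(self):
            return bool(self._module)

        def bundletype(self):
            return 'zstd', 'ZS'

    util.compengines.register(_zstdengine())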
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 10 Nov 2016 23:38:41 -0800] rev 30441
hghave: add check for zstd support
Not all configurations will support zstd. Add a check so we can
conditionalize tests.
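In hghave's idiom the check is tiny; roughly (a sketch using hghave's
@check decorator):

    @check("zstd", "zstd Python module available")
    def has_zstd():
        try:
            import mercurial.zstd
            mercurial.zstd.__version__
            return True
        except ImportError:
            return False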
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 10 Nov 2016 23:34:15 -0800] rev 30440
exchange: obtain compression engines from the registrar
util.compengines has knowledge of all registered compression engines
and the metadata that associates them with various bundle types.
This patch removes the now redundant declaration of this metadata from
exchange.py and obtains it from the new source.
The effect of this patch is that once a new compression engine is
registered with util.compengines, `hg bundle -t <engine>` will just
work.
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 10 Nov 2016 23:29:01 -0800] rev 30439
bundle2: equate 'UN' with no compression
An upcoming patch will change the "alg" argument passed to this
function from None to "UN" when no compression is wanted.
The existing implementation of bundle2 does not set a "Compression"
parameter if no compression is used. In theory, setting
"Compression=UN" should work. But I haven't audited the code to see if
all client versions supporting bundle2 will accept this.
Rather than take the risk, avoid the BC breakage and treat "UN"
the same as None.
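A sketch of the equivalence (the helper name and addparam usage here
are illustrative):

    def _addcompressionparam(bundler, alg):
        # 'UN' behaves exactly like None: no Compression parameter is
        # emitted, which is what existing bundle2 clients expect.
        if alg not in (None, 'UN'):
            bundler.addparam('Compression', alg)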
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 10 Nov 2016 23:15:02 -0800] rev 30438
util: check for compression engine availability before returning
If a requested compression engine is registered but not available,
requesting it will now abort.
To be honest, I'm not sure if this is the appropriate mechanism
for handling optional compression engines. I won't know until
all uses of compression (bundles, wire protocol, revlogs, etc)
are using the new API and zstd (our planned optional engine)
is implemented. So this API could change.
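In registry terms, the behavior might look like this method sketch
(the method name, attribute, and message are assumptions):

    from mercurial import error
    from mercurial.i18n import _

    def forbundletype(self, bundletype):
        # Look up the engine for a bundle type, refusing engines that
        # are registered but cannot actually be loaded.
        engine = self._bundletypes[bundletype]
        if not engine.available():
            raise error.Abort(_('compression engine %s could not be '
                                'loaded') % engine.name())
        return engine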
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 10 Nov 2016 23:03:48 -0800] rev 30437
util: expose an "available" API on compression engines
When the zstd compression engine is introduced, it won't work in all
installations, namely pure Python installs. So, we need a mechanism to
declare whether a compression engine is available. We don't want to
conditionally register the compression engine, because it is sometimes
useful to distinguish a compression engine that is known but
unavailable from one that is unknown entirely.
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 10 Nov 2016 22:26:35 -0800] rev 30436
setup: compile zstd C extension
Now that zstd and python-zstandard are vendored, we can start compiling
them as part of the install.
python-zstandard provides a self-contained Python function that returns
a distutils.extension.Extension, so it is really easy to add zstd
to our setup.py without having to worry about defining source files,
include paths, etc. The function even allows specifying the module
name the extension should be compiled as. This conveniently allows us
to compile the module into the "mercurial" package so "our" version
won't collide with a version installed under the canonical "zstd"
module name.
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 10 Nov 2016 22:15:58 -0800] rev 30435
zstd: vendor python-zstandard 0.5.0
As the commit message for the previous changeset says, we wish
for zstd to be a 1st class citizen in Mercurial. To make that
happen, we need to enable Python to talk to the zstd C API. And
that requires bindings.
This commit vendors a copy of existing Python bindings. Why do we
need to vendor? As the commit message of the previous commit says,
relying on systems in the wild to have the bindings or zstd present
is a losing proposition. By distributing the zstd and bindings with
Mercurial, we significantly increase our chances that zstd will
work. Since zstd will deliver a better end-user experience by
achieving better performance, this benefits our users. Another
reason is that the Python bindings still aren't stable and the
API is somewhat fluid. While Mercurial could be coded to target
multiple versions of the Python bindings, it is safer to bundle
an explicit, known working version.
The added Python bindings are mostly a fully-featured interface
to the zstd C API. They allow one-shot operations, streaming,
reading from and writing to objects implementing the file object
protocol, dictionary compression, control over low-level compression
parameters, and more. The Python bindings work on Python 2.6,
2.7, and 3.3+ and have been tested on Linux and Windows. There are
CFFI bindings, but they are lacking compared to the C extension.
Upstream work will be needed before we can support zstd with PyPy.
But it will be possible.
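For a taste of the one-shot API (a sketch against the documented 0.5.0
bindings; parameter details may vary between versions, and within
Mercurial the module is compiled as mercurial.zstd):

    import zstd

    data = b'data to compress' * 1024
    cctx = zstd.ZstdCompressor(level=3)
    compressed = cctx.compress(data)

    dctx = zstd.ZstdDecompressor()
    # 0.5.0 frames don't always embed the content size, so cap the
    # output size explicitly.
    assert dctx.decompress(compressed, max_output_size=len(data)) == data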
The files added in this commit come from Git commit
e637c1b214d5f869cf8116c550dcae23ec13b677 from
https://github.com/indygreg/python-zstandard and are added without
modifications. Some files from the upstream repository have been
omitted, namely files related to continuous integration.
In the spirit of full disclosure, I'm the maintainer of the
"python-zstandard" project and have authored 100% of the code
added in this commit. Unfortunately, the Python bindings have
not been formally code reviewed by anyone. While I've tested
much of the code thoroughly (I even have tests that fuzz APIs),
there's a good chance there are bugs, memory leaks, not well
thought out APIs, etc. If someone wants to review the code and
send feedback to the GitHub project, it would be greatly
appreciated.
Despite my involvement with both projects, my opinions of code
style differ from Mercurial's. The code in this commit introduces
numerous code style violations in Mercurial's linters. So, the code
is excluded from most lints. However, some violations I agree with.
These have been added to the known violations ignore list for now.