Tue, 07 Feb 2017 13:11:30 -0800 destutil: remove dead code about non-linear updates
Martin von Zweigbergk <martinvonz@google.com> [Tue, 07 Feb 2017 13:11:30 -0800] rev 30933
destutil: remove dead code about non-linear updates

IIUC, the non-linear updates no longer happen by default since 6b1fc09c699a (update: change default destination to tipmost descendant (issue4673) (BC), 2016-02-02), and it was only if they happened by default that we used to error out, so there is no longer a need to handle this case.
Thu, 09 Feb 2017 09:55:31 -0800 update: fix typo/stale comment to match code
Martin von Zweigbergk <martinvonz@google.com> [Thu, 09 Feb 2017 09:55:31 -0800] rev 30932
update: fix typo/stale comment to match code

The comment about "obsolete.background" seems to have been about obsolete.foreground ever since it was introduced in a59e575c6ff8 (update: allow dirty update to foreground (successors), 2013-04-16), so let's change it.
Wed, 08 Feb 2017 23:03:33 -0800 merge: remove unused handling of default destination in merge.update()
Martin von Zweigbergk <martinvonz@google.com> [Wed, 08 Feb 2017 23:03:33 -0800] rev 30931
merge: remove unused handling of default destination in merge.update()

As far as I can tell, all the callers of merge.update() have been migrated over to use destutil to find the default destination when the revision was not specified. So it's time to delete the code for handling a node value of None. Let's add an assertion that node is not None, so any extensions relying on it will not silently misbehave.
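A minimal sketch of the tightened contract, using a hypothetical wrapper rather than Mercurial's actual merge.update() signature:

  def update(repo, node, branchmerge=False, force=False):
      """Hypothetical stand-in for merge.update() after this change: the
      caller resolves the destination (e.g. via destutil) before calling."""
      # A node of None used to mean "pick a default destination here"; that
      # fallback is gone, so fail loudly instead of silently misbehaving.
      assert node is not None, "caller must pass a resolved node (see destutil)"
      # ... the real merge/update work would follow here ...
      return node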
Wed, 08 Feb 2017 14:49:37 -0800 update: localize logic around which ancestor to use
Martin von Zweigbergk <martinvonz@google.com> [Wed, 08 Feb 2017 14:49:37 -0800] rev 30930
update: localize logic around which ancestor to use

The merge code works by applying the changes between an ancestor and the working copy onto the destination. To achieve an update, it sets the ancestor to be the parent of the working copy. To achieve a clean update (update -C), it sets the ancestor to be the working copy itself (so there are no changes to carry over). The logic for this was spread out a bit. Let's move it all to one place so it's easier to follow the reason for it. Also add some documentation.
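A rough Python sketch of the ancestor choice described above (the names are illustrative, not Mercurial's internals):

  def pick_merge_ancestor(wc_parent, wc, clean):
      """Choose the ancestor whose diff against the working copy is replayed
      onto the update destination.

      - normal update: ancestor is the working copy's parent, so local
        changes (working copy vs. its parent) are carried over
      - clean update (-C): ancestor is the working copy itself, so the diff
        is empty and nothing is carried over
      """
      return wc if clean else wc_parent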
Wed, 08 Feb 2017 22:12:27 -0800 tests: add test for updating to null revision
Martin von Zweigbergk <martinvonz@google.com> [Wed, 08 Feb 2017 22:12:27 -0800] rev 30929
tests: add test for updating to null revision

While working on merge.py, I realized that we don't (as far as I could tell) have any tests for updating to the null revision with a dirty working copy. This adds some simple tests for that.
Fri, 10 Feb 2017 15:26:03 -0800 import: mention "stdin" (abbreviated) and add example
Martin von Zweigbergk <martinvonz@google.com> [Fri, 10 Feb 2017 15:26:03 -0800] rev 30928
import: mention "stdin" (abbreviated) and add example I actually didn't even think it was possible because I searched the help text for "stdin", and didn't even think of searching for "standard input". Let's mention the abbreviated form too to help others like me. (When importing from stdin, we actually print a message saying "applying patch from stdin".) This patch also adds an example showing how to import from stdin.
Thu, 09 Feb 2017 09:32:25 -0800 merge: print status message before launching external merge tool
Martin von Zweigbergk <martinvonz@google.com> [Thu, 09 Feb 2017 09:32:25 -0800] rev 30927
merge: print status message before launching external merge tool

It seems fairly common for people to run into a merge conflict and not notice the launched merge tool, instead thinking that hg has hung. Let's print a message for each file that we launch a GUI merge tool for.
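A generic sketch of the idea (not Mercurial's filemerge code): announce each file before blocking on the external tool, so a silent wait is not mistaken for a hang:

  import subprocess

  def run_merge_tool(toolcmd, conflicted_files):
      """Hypothetical helper launching an external merge tool per file."""
      for path in conflicted_files:
          # Print *before* the blocking call so the user knows a GUI tool
          # has been launched for this file.
          print("running merge tool %s for %s" % (toolcmd[0], path))
          subprocess.run(toolcmd + [path], check=False)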
Wed, 08 Feb 2017 07:44:10 -0800 pager: exit cleanly on SIGPIPE (BC)
Simon Farnsworth <simonfar@fb.com> [Wed, 08 Feb 2017 07:44:10 -0800] rev 30926
pager: exit cleanly on SIGPIPE (BC)

Changeset aaa751585325 removed SIGPIPE handling completely. This is wrong, as it means that Mercurial does not exit when the pager does. Instead, raise SignalInterrupt when SIGPIPE happens with a pager attached, to trigger the normal exit path. This will cause "killed!" to be printed to stderr (hence the BC warning), but in the normal pager use case (where the pager gets both stderr and stdout), this message is lost because we only get SIGPIPE when the pager quits.
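The shape of the fix, sketched with Python's signal module; SignalInterrupt here is a stand-in for Mercurial's own exception, and SIGPIPE is only available on POSIX platforms:

  import signal

  class SignalInterrupt(KeyboardInterrupt):
      """Stand-in for Mercurial's error.SignalInterrupt."""
      def __init__(self, signum):
          self.signum = signum

  def _sigpipe_to_interrupt(signum, frame):
      # The pager went away; unwind through the normal exit path instead of
      # dying on an unhandled broken pipe.
      raise SignalInterrupt(signum)

  def install_pager_sigpipe_handler():
      # Installed only while a pager is attached (sketch of the idea).
      signal.signal(signal.SIGPIPE, _sigpipe_to_interrupt)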
Fri, 10 Feb 2017 04:09:06 -0800 runtests: catch EPROTONOSUPPORT in checkportisavailable
Jun Wu <quark@fb.com> [Fri, 10 Feb 2017 04:09:06 -0800] rev 30925
runtests: catch EPROTONOSUPPORT in checkportisavailable

This is a follow-up to "runtests: check ports on IPv6 address". On some platforms, "socket.AF_INET6" exists, but that does not necessarily mean the platform supports IPv6: initializing a socket with "socket.socket" can still fail with EPROTONOSUPPORT. So treat that as "port unavailable".
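A simplified sketch of the check (not the exact run-tests.py code): a protocol-not-supported error is reported the same way as a busy port:

  import errno
  import socket

  def checkportisavailable(port, family=socket.AF_INET6):
      """Return True if the port can be bound on the given address family."""
      try:
          s = socket.socket(family, socket.SOCK_STREAM)
          s.bind(("localhost", port))
          s.close()
          return True
      except socket.error as exc:
          # socket.AF_INET6 may be defined even when the kernel lacks IPv6
          # support; socket.socket() then fails with EPROTONOSUPPORT, which
          # we treat as "port unavailable" rather than an error.
          if exc.errno not in (errno.EADDRINUSE, errno.EADDRNOTAVAIL,
                               errno.EPROTONOSUPPORT):
              raise
          return False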
Tue, 07 Feb 2017 23:24:47 -0800 zstd: vendor python-zstandard 0.7.0
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 07 Feb 2017 23:24:47 -0800] rev 30924
zstd: vendor python-zstandard 0.7.0

Commit 3054ae3a66112970a091d3939fee32c2d0c1a23e from https://github.com/indygreg/python-zstandard is imported without modifications (other than removing unwanted files). The vendored zstd library within has been upgraded from 1.1.2 to 1.1.3. This version introduced new APIs for threads, thread pools, multi-threaded compression, and a new dictionary builder (COVER). These features are not yet used by python-zstandard (or Mercurial for that matter). However, that will likely change in the next python-zstandard release (and I think there are opportunities for Mercurial to take advantage of the multi-threaded APIs).

Relevant to Mercurial, the CFFI bindings are now fully implemented. This means zstd should "just work" with PyPy (although I haven't tried). The python-zstandard test suite also runs all tests against both the C extension and CFFI bindings to ensure feature parity.

There is also a "decompress_content_dict_chain()" API. This was derived from discussions with Yann Collet on list about alternate ways of encoding delta chains.

The change most relevant to Mercurial is a performance enhancement in the simple decompression API to reuse a data structure across operations. This makes decompression of multiple inputs significantly faster. (This scenario occurs when reading revlog delta chains, for example.)

Using python-zstandard's bench.py to measure the performance difference...

On changelog chunks in the mozilla-unified repo:

  decompress discrete
  1.262243 wall; 1.260000 CPU; 1.260000 user; 0.000000 sys 170.43 MB/s (best of 3)
  decompress() reuse zctx
  0.949106 wall; 0.950000 CPU; 0.950000 user; 0.000000 sys 226.66 MB/s (best of 4)

  decompress discrete dict
  0.692170 wall; 0.690000 CPU; 0.690000 user; 0.000000 sys 310.80 MB/s (best of 5)
  decompress() reuse zctx
  0.437088 wall; 0.440000 CPU; 0.440000 user; 0.000000 sys 492.17 MB/s (best of 7)

On manifest chunks in the mozilla-unified repo:

  decompress discrete
  1.367284 wall; 1.370000 CPU; 1.370000 user; 0.000000 sys 274.01 MB/s (best of 3)
  decompress() reuse zctx
  1.086831 wall; 1.080000 CPU; 1.080000 user; 0.000000 sys 344.72 MB/s (best of 3)

  decompress discrete dict
  0.993272 wall; 0.990000 CPU; 0.990000 user; 0.000000 sys 377.19 MB/s (best of 3)
  decompress() reuse zctx
  0.678651 wall; 0.680000 CPU; 0.680000 user; 0.000000 sys 552.06 MB/s (best of 5)

That should make reads on zstd revlogs a bit faster ;)

# no-check-commit
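A hedged sketch of the reuse pattern those numbers measure, written against the standalone zstandard package's simple decompression API:

  import zstandard as zstd

  def decompress_chunks(chunks):
      """Decompress many frames with one reusable decompressor context.

      Creating a single ZstdDecompressor and calling decompress() repeatedly
      avoids re-allocating internal state per chunk, which is why the
      "reuse zctx" rows above beat the "discrete" ones.
      """
      dctx = zstd.ZstdDecompressor()
      # Assumes each frame embeds its content size; otherwise pass
      # max_output_size= to decompress().
      return [dctx.decompress(chunk) for chunk in chunks]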