Tue, 18 Oct 2016 14:13:06 -0500 merge with i18n stable
Kevin Bullock <kbullock+mercurial@ringworld.org> [Tue, 18 Oct 2016 14:13:06 -0500] rev 30214
merge with i18n
Tue, 11 Oct 2016 20:39:47 -0300 i18n-pt_BR: synchronized with 149433e68974 stable
Wagner Bruna <wbruna@softwareexpress.com.br> [Tue, 11 Oct 2016 20:39:47 -0300] rev 30213
i18n-pt_BR: synchronized with 149433e68974
Sun, 16 Oct 2016 13:35:23 -0700 changegroup: increase write buffer size to 128k
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 16 Oct 2016 13:35:23 -0700] rev 30212
changegroup: increase write buffer size to 128k By default, Python defers to the operating system for choosing the default buffer size on opened files. On my Linux machine, the default is 4k, which is really small for 2016. This patch bumps the write buffer size when writing changegroups/bundles to 128k. This matches the 128k read buffer we already use on revlogs. It's worth noting that this only has an impact when writing to an explicit file (such as during `hg bundle`). Buffers when writing to bundle files via the repo vfs or to a temporary file are not impacted. When producing a none-v2 bundle file of the mozilla-unified repository, this change caused the number of write() system calls to drop from 952,449 to 29,788. After this change, the most frequent system calls are fstat(), read(), lseek(), and open(). There were 2,523,672 system calls after this patch (so a net decrease of ~950k is significant). This change shows no performance change on my system. But I have a high-end system with a fast SSD. It is quite possible this change will have a significant impact on network file systems, where extra network round trips due to excessive I/O system calls could introduce significant latency.
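To illustrate the mechanism described above, here is a minimal sketch (not the actual Mercurial patch; the helper name and constant are merely illustrative) of passing an explicit buffer size when writing a bundle-like stream to a file:

    WRITE_BUFFER_SIZE = 131072  # 128k, matching the revlog read buffer size

    def writechunks(path, chunks):
        # An explicit buffering argument overrides the platform default
        # (often 4k), so many small chunk writes coalesce into far fewer
        # write() system calls.
        with open(path, 'wb', WRITE_BUFFER_SIZE) as fh:
            for chunk in chunks:
                fh.write(chunk)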
Fri, 14 Oct 2016 01:31:11 +0200 changegroup: skip delta when the underlying revlog does not use them
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Fri, 14 Oct 2016 01:31:11 +0200] rev 30211
changegroup: skip delta when the underlying revlog does not use them Revlogs can now be configured to store only full snapshots. This is used for the changelog. However, changegroup packing was still recomputing deltas to be sent over the wire. We now reuse the full snapshot directly in this case, skipping delta computation. This provides us with a large speedup (-30%):
    # perfchangegroupchangelog on mercurial
    ! wall 2.010326 comb 2.020000 user 2.000000 sys 0.020000 (best of 5)
    ! wall 1.382039 comb 1.380000 user 1.370000 sys 0.010000 (best of 8)
    # perfchangegroupchangelog on pypy
    ! wall 5.792589 comb 5.780000 user 5.780000 sys 0.000000 (best of 3)
    ! wall 3.911158 comb 3.920000 user 3.900000 sys 0.020000 (best of 3)
    # perfchangegroupchangelog on mozilla central
    ! wall 20.683727 comb 20.680000 user 20.630000 sys 0.050000 (best of 3)
    ! wall 14.190204 comb 14.190000 user 14.150000 sys 0.040000 (best of 3)
Many tests had to be updated because of the change in bundle content; all of these updates have been verified. Because delta-compressing the changelog was not very effective, the resulting bundles have a similar size (often a bit smaller):
    # full bundle of mozilla central
    with delta:    1142740533B
    without delta: 1142173300B
So this is a win across the board.
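A hedged sketch of the idea (simplified, not the real changegroup code; the helper name is assumed): when the revlog does not store delta chains, emit the stored full text instead of computing a delta.

    def chunkforrev(rl, rev, prev):
        # 'storedeltachains' is the revlog attribute made public in rev 30210.
        if not rl.storedeltachains:
            # The revlog only stores full snapshots (e.g. the changelog),
            # so reuse the stored text directly and skip delta computation.
            return ('fulltext', rl.revision(rev))
        # Otherwise compute a delta against the previous revision as usual.
        return ('delta', rl.revdiff(prev, rev))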
Fri, 14 Oct 2016 02:25:08 +0200 revlog: make 'storedeltachains' a "public" attribute
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Fri, 14 Oct 2016 02:25:08 +0200] rev 30210
revlog: make 'storedeltachains' a "public" attribute The next changeset will make that attribute read by the changegroup packer. We make it "public" beforehand.
Mon, 17 Oct 2016 22:51:22 -0700 manifest: don't store None in fulltextcache
Martin von Zweigbergk <martinvonz@google.com> [Mon, 17 Oct 2016 22:51:22 -0700] rev 30209
manifest: don't store None in fulltextcache When we read a value from fulltextcache, we expect it to be an array, so we should not store None in it. Found while working on narrowhg.
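A tiny illustration of the invariant being enforced (the helper below is hypothetical, not manifest.py code): readers of fulltextcache expect an array, so a missing text should simply not be cached.

    def cachefulltext(fulltextcache, node, text):
        # Callers reading from the cache expect an array-like value,
        # so never store None in it.
        if text is None:
            return
        fulltextcache[node] = bytearray(text)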
Tue, 18 Oct 2016 02:09:08 +0200 copies: improve assertions during copy recombination
Gábor Stefanik <gabor.stefanik@nng.com> [Tue, 18 Oct 2016 02:09:08 +0200] rev 30208
copies: improve assertions during copy recombination
- Make sure there is nothing to recombine in non-graftlike scenarios
- More pythonic assert syntax
Mon, 17 Oct 2016 16:12:12 -0700 treemanifest: fix bad argument order to treemanifestctx
Martin von Zweigbergk <martinvonz@google.com> [Mon, 17 Oct 2016 16:12:12 -0700] rev 30207
treemanifest: fix bad argument order to treemanifestctx Found by running tests with _treeinmem (both of them) modified to be True.
Sun, 16 Oct 2016 11:10:21 -0700 wireproto: compress data from a generator
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 16 Oct 2016 11:10:21 -0700] rev 30206
wireproto: compress data from a generator Currently, the "getbundle" wire protocol command obtains a generator of data, converts it to a util.chunkbuffer, then converts it back to a generator via the protocol's groupchunks() implementation. For the SSH protocol, groupchunks() simply reads 4kb chunks then write()s the data to a file descriptor. For the HTTP protocol, groupchunks() reads 32kb chunks, feeds those into a zlib compressor, emits compressed data as it is available, and that is sent to the WSGI layer, where it is likely turned into HTTP chunked transfer chunks as is or further buffered and turned into a larger chunk. For both the SSH and HTTP protocols, there is inefficiency from using util.chunkbuffer. For SSH, emitting consistent 4kb chunks sounds nice. However, the file descriptor it is writing to is almost certainly buffered. That means that a Python .write() probably doesn't translate into exactly what is written to the I/O layer. For HTTP, we're going through an intermediate layer to zlib compress data. So all util.chunkbuffer is doing is ensuring that the chunks we feed into the zlib compressor are of uniform size. This means more CPU time in Python buffering and emitting chunks in util.chunkbuffer but fewer function calls to zlib. This patch introduces and implements a new wire protocol abstract method: compresschunks(). It is like groupchunks() except it operates on a generator instead of something with a .read(). The SSH implementation simply proxies chunks. The HTTP implementation uses zlib compression. To avoid duplicate code, the HTTP groupchunks() has been reimplemented in terms of compresschunks(). To prove this all works, the "getbundle" wire protocol command has been switched to compresschunks(). This removes the util.chunkbuffer from that command. Now, data essentially streams straight from the changegroup emitter to the wire, possibly through a zlib compressor. Generators all the way, baby. There were slim to no performance changes on the server as measured with the mozilla-central repository. This is likely because CPU time is dominated by reading revlogs, producing the changegroup, and zlib compressing the output stream. Still, this brings us a little closer to our ideal of using generators everywhere.
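A minimal sketch of the compresschunks() idea for the HTTP case (assumed signature, not the actual wireproto code): feed chunks from a generator into a zlib compressor and yield compressed data as it becomes available. The SSH counterpart would simply yield the chunks unchanged.

    import zlib

    def compresschunks(chunks):
        z = zlib.compressobj()
        for chunk in chunks:
            data = z.compress(chunk)
            if data:
                yield data
        yield z.flush()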
Mon, 17 Oct 2016 19:48:36 +0200 revset: optimize for destination() being "inefficient"
Mads Kiilerich <madski@unity3d.com> [Mon, 17 Oct 2016 19:48:36 +0200] rev 30205
revset: optimize for destination() being "inefficient" destination() will scan through the whole subset and read extras for each revision to get its source.
Tue, 11 Oct 2016 04:39:47 +0200 copies: make _checkcopies handle copy sequences spanning the TCA (issue4028)
Gábor Stefanik <gabor.stefanik@nng.com> [Tue, 11 Oct 2016 04:39:47 +0200] rev 30204
copies: make _checkcopies handle copy sequences spanning the TCA (issue4028) When working in a rotated DAG (for a graftlike merge), there can be files that are renamed both between the base and the topological CA, and between the TCA and the endpoint farther from the base. Such renames span the TCA (and thus need both passes of _checkcopies to be fully detected), but may not necessarily be divergent. Make _checkcopies return "incomplete copies" and "incomplete divergences" in this case, and let mergecopies recombine them once data from both passes of _checkcopies is available. With this patch, all known cases involving renames and grafts pass. (Developed together with Pierre-Yves David)
Tue, 11 Oct 2016 04:25:59 +0200 checkcopies: add logic to handle remotebase
Gábor Stefanik <gabor.stefanik@nng.com> [Tue, 11 Oct 2016 04:25:59 +0200] rev 30203
checkcopies: add logic to handle remotebase As the two _checkcopies passes' ranges are separated by tca, not base, only one of the two passes will actually encounter the base. Pass "remotebase" to the other pass to let it know not to expect passing over the base. This is required for handling a few unusual rename cases.
Tue, 04 Oct 2016 12:51:54 +0200 mergecopies: add logic to process incomplete data
Gábor Stefanik <gabor.stefanik@nng.com> [Tue, 04 Oct 2016 12:51:54 +0200] rev 30202
mergecopies: add logic to process incomplete data We first combine incomplete copies on the two sides of the topological CA into complete copies. Any leftover incomplete copies are then combined with the incomplete divergences to reconstruct divergences spanning over the topological CA. Finally we promote any divergences falsely flagged as incomplete to full divergences. Right now, there is nothing generating incomplete copy/divergence data, so this code does nothing. Changes to _checkcopies to populate these dicts are coming later in this series.
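A hedged illustration of the recombination step (dict names and shapes are assumptions, not the actual mergecopies data structures): incomplete copies from the two _checkcopies passes are keyed by the file's name at the topological CA, so matching keys compose into a complete copy spanning the TCA.

    def recombinecopies(incompletefrombase, incompletefromother):
        copies = {}
        for tcaname, basename in incompletefrombase.items():
            othername = incompletefromother.get(tcaname)
            if othername is not None:
                # The same file was tracked up to the TCA from both sides;
                # joining the two halves gives the full copy record.
                copies[othername] = basename
        return copies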
Wed, 12 Oct 2016 11:54:03 +0200 checkcopies: handle divergences contained entirely in tca::ctx
Gábor Stefanik <gabor.stefanik@nng.com> [Wed, 12 Oct 2016 11:54:03 +0200] rev 30201
checkcopies: handle divergences contained entirely in tca::ctx During a graftlike merge, _checkcopies runs from ctx to tca, possibly passing over the merge base. If there is a rename both before and after the base, then we're actually dealing with divergent renames. If there is no rename on the other side of tca, then the divergence is contained entirely in the range of one _checkcopies invocation, and should be detected "in the loop" without having to rely on the other _checkcopies pass.
Thu, 25 Aug 2016 22:02:26 +0200 update: enable copy tracing for backwards and non-linear updates
Gábor Stefanik <gabor.stefanik@nng.com> [Thu, 25 Aug 2016 22:02:26 +0200] rev 30200
update: enable copy tracing for backwards and non-linear updates As a follow-up to the issue4028 series, this fixes a variant of the issue that can occur when updating with uncommitted local changes. The duplicated .hgsub warning is coming from wc.dirty(). We would previously skip this call because it's only relevant when we're going to perform copy tracing, which we didn't do before. The change to the update summary line is because we now treat the rename as a proper rename (which counts as a change), rather than an add+delete pair (which counts as a change and a delete).
Mon, 26 Sep 2016 10:47:37 +0200 bashcompletion: allow skipping completion for 'hg status'
Mathias De Maré <mathias.de_mare@nokia.com> [Mon, 26 Sep 2016 10:47:37 +0200] rev 30199
bashcompletion: allow skipping completion for 'hg status' On systems with large repositories and slow disks, the calls to 'hg status' make autocomplete annoyingly slow. This fix makes it possible to avoid the slowdown.
Sun, 21 Aug 2016 01:12:00 +0200 tests: add more test coverage of phase changes when pushing
Mads Kiilerich <madski@unity3d.com> [Sun, 21 Aug 2016 01:12:00 +0200] rev 30198
tests: add more test coverage of phase changes when pushing Prepare for test coverage of phase updates with future push --readonly option, both with and without actually pushing changesets.
Thu, 13 Oct 2016 02:19:43 +0200 mergecopies: invoke _computenonoverlap for both base and tca during merges
Gábor Stefanik <gabor.stefanik@nng.com> [Thu, 13 Oct 2016 02:19:43 +0200] rev 30197
mergecopies: invoke _computenonoverlap for both base and tca during merges The algorithm of _checkcopies can only walk backwards in the DAG, never forward. Because of this, the two _checkcopies passes need to run from their respective endpoints to the TCA to cover the entire subgraph where the merge is being performed. However, detection of files new in both endpoints, as well as directory rename detection, needs to run with respect to the merge base, so we need lists of new files both from the TCA's and the merge base's viewpoint to correctly detect renames in a graft-like merge scenario. (Series reworked by Pierre-Yves David)
Tue, 18 Oct 2016 00:00:43 +0200 copies: make it possible to distinguish between _computenonoverlap invocations
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Tue, 18 Oct 2016 00:00:43 +0200] rev 30196
copies: make it possible to distinguish between _computenonoverlap invocations _computenonoverlap needs to be invoked twice during a graft, and debugging messages should be distinguishable between the two invocations.
Thu, 13 Oct 2016 02:03:54 +0200 copies: make _checkcopies handle simple renames in a rotated DAG
Gábor Stefanik <gabor.stefanik@nng.com> [Thu, 13 Oct 2016 02:03:54 +0200] rev 30195
copies: make _checkcopies handle simple renames in a rotated DAG This introduces a distinction between "merge base" and "topological common ancestor". During a regular merge, these two are identical. Graft, however, performs a merge in a rotated DAG, where the merge base will not be a common ancestor at all in the original DAG. To correctly find copies in case of a graft, we need to take both the merge base and the topological CA into account, and track any renames between them in reverse. Fortunately we can detect this in advance, see comment in the code about "backwards". This patch only supports finding non-divergent renames contained entirely between the merge base and the topological CA. Further patches are coming to support more complex cases. (Pierre-Yves David was involved in the cleanup of this patch.)
Thu, 13 Oct 2016 02:03:49 +0200 copies: compute a suitable TCA if base turns out to be unsuitable
Gábor Stefanik <gabor.stefanik@nng.com> [Thu, 13 Oct 2016 02:03:49 +0200] rev 30194
copies: compute a suitable TCA if base turns out to be unsuitable This will be used later in an update to _checkcopies. (Pierre-Yves David was involved in the cleanup of this patch.)
Thu, 13 Oct 2016 01:47:33 +0200 copies: detect graft-like merges
Gábor Stefanik <gabor.stefanik@nng.com> [Thu, 13 Oct 2016 01:47:33 +0200] rev 30193
copies: detect graft-like merges Right now, nothing changes as a result of this, but we want to handle grafts differently from ordinary merges later. (Series developed together with Pierre-Yves David)
Wed, 12 Oct 2016 12:41:28 +0200 tests: introduce tests for grafting through renames
Gábor Stefanik <gabor.stefanik@nng.com> [Wed, 12 Oct 2016 12:41:28 +0200] rev 30192
tests: introduce tests for grafting through renames These cover all currently known cases of renames being grafted, or changes being grafted through renames. Right now, most of these cases are broken. Later patches in this series will make them behave correctly. The testcases heavily rely on each other, which would make it very difficult to separate them and add them one-by-one for each case fixed by a patch. Separating them should perhaps be a 4.1 task, if it doesn't slow down the tests too much. (Developed together with Pierre-Yves David)
Mon, 17 Oct 2016 17:12:24 +0200 largefiles: fix 'deleted' files sometimes persistently appearing with R status stable
Mads Kiilerich <madski@unity3d.com> [Mon, 17 Oct 2016 17:12:24 +0200] rev 30191
largefiles: fix 'deleted' files sometimes persistently appearing with R status A code snippet that has been around since largefiles was introduced was wrong: standins no longer found in lfdirstate have *not* been removed - they have probably just been deleted ... or not created. This wrong reporting meant that 'up -C' didn't undo the change and didn't sync the two dirstates. Instead of reporting such files as removed, propagate the deletion to the standin file and report the file as deleted.
Sun, 16 Oct 2016 02:29:45 +0200 largefiles: more safe handling of interruptions while updating modifications stable
Mads Kiilerich <madski@unity3d.com> [Sun, 16 Oct 2016 02:29:45 +0200] rev 30190
largefiles: more safe handling of interruptions while updating modifications Largefiles are fragile with the design where dirstate and lfdirstate must be kept in sync. To be less fragile, mark all clean largefiles as unsure ("normallookup") before updating standins. After the standins have been updated and we know exactly which largefile standins actually changed, mark the unchanged largefiles back to clean ("normal"). This makes the failure mode safer: if interrupted, the next command will perform extra hashing of all largefiles, so any largefile that is out of sync with its standin will be marked dirty, show up in status, and can be cleaned with update --clean.
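The pattern described above could be sketched roughly as follows (function and variable names are assumptions, not the real largefiles code):

    def updatestandinssafely(lfdirstate, largefiles, updatestandin):
        # 1. Mark every clean largefile as unsure so that, if we are
        #    interrupted, the next command re-hashes it instead of trusting
        #    stale state.
        for lfile in largefiles:
            lfdirstate.normallookup(lfile)
        lfdirstate.write()
        # 2. Update the standins; only now do we know which ones changed.
        changed = set(lfile for lfile in largefiles if updatestandin(lfile))
        # 3. Mark the untouched largefiles back to clean.
        for lfile in largefiles:
            if lfile not in changed:
                lfdirstate.normal(lfile)
        lfdirstate.write()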
Sun, 16 Oct 2016 02:26:38 +0200 largefiles: test coverage of fatal interruption of update stable
Mads Kiilerich <madski@unity3d.com> [Sun, 16 Oct 2016 02:26:38 +0200] rev 30189
largefiles: test coverage of fatal interruption of update Test using existing changesets in a clean working directory, revealing problems with files that don't show up as modified, or do show up as removed, when they just have not been written yet.
Wed, 12 Oct 2016 21:33:45 +0200 checkcopies: add a sanity check against false-positive copies
Gábor Stefanik <gabor.stefanik@nng.com> [Wed, 12 Oct 2016 21:33:45 +0200] rev 30188
checkcopies: add a sanity check against false-positive copies When grafting a copy backwards through a rename, a copy is wrongly detected, which causes the graft to be applied inappropriately, in a destructive way. Make sure that the old file name really exists in the common ancestor, and bail out if it doesn't. This fixes the aggravated case of bug 5343, although the basic issue (failure to duplicate the copy information) still occurs.
Sun, 16 Oct 2016 10:38:52 -0700 exchange: refactor APIs to obtain bundle data (API)
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 16 Oct 2016 10:38:52 -0700] rev 30187
exchange: refactor APIs to obtain bundle data (API) Currently, exchange.getbundle() returns either a cg1unpacker or a util.chunkbuffer (in the case of bundle2). This is kinda OK, as both expose a .read() to consumers. However, localpeer.getbundle() has code inferring what the response type is based on arguments and converts the util.chunkbuffer returned in the bundle2 case to a bundle2.unbundle20 instance. This is a sign that the API for exchange.getbundle() is not ideal because it doesn't consistently return an "unbundler" instance. In addition, unbundlers mask the fact that there is an underlying generator of changegroup data. In both cg1 and bundle2, this generator is being fed into a util.chunkbuffer so it can be re-exposed as a file object. util.chunkbuffer is a nice abstraction. However, it should only be used "at the edges." This is because keeping data as a generator is more efficient than converting it to a chunkbuffer, especially if we convert that chunkbuffer back to a generator (as is the case in some code paths currently). This patch refactors exchange.getbundle() into exchange.getbundlechunks(). The new API returns an iterator of chunks instead of a file-like object. Callers of exchange.getbundle() have been updated to use the new API. There is a minor change of behavior in test-getbundle.t. This is because `hg debuggetbundle` isn't defining bundlecaps. As a result, a cg1 data stream and unpacker is being produced. This is getting fed into a new bundle20 instance via bundle2.writebundle(), which uses a backchannel mechanism between changegroup generation to add the "nbchanges" part parameter. I never liked this backchannel mechanism and I plan to remove it someday. `hg bundle` still produces the "nbchanges" part parameter, so there should be no user-visible change of behavior. I consider this "regression" a bug in `hg debuggetbundle`. And that bug is captured by an existing "TODO" in the code to use bundle2 capabilities.
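A small sketch of the reshaped API (names assumed; the real code uses util.chunkbuffer, which is lazy, whereas BytesIO below is eager and used only to keep the sketch self-contained): the core stays a generator of chunks, and a file-like object is only constructed at the edge that needs .read().

    import io

    def getbundlechunks(parts):
        # Core API: yield raw chunks straight from the changegroup emitter.
        for chunk in parts:
            yield chunk

    def asfileobject(chunks):
        # Edge adapter for callers that still expect a .read() interface.
        return io.BytesIO(b''.join(chunks))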
Thu, 13 Oct 2016 01:30:14 +0200 mergecopies: rename 'ca' to 'base'
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Thu, 13 Oct 2016 01:30:14 +0200] rev 30186
mergecopies: rename 'ca' to 'base' This variable was named after the common ancestor. It is actually the merge base, which might differ from the common ancestor in the graft case. We rename the variable before a larger refactoring to clarify the situation. A similar rename was also applied to 'checkcopies' in a prior changeset.
Thu, 13 Oct 2016 01:26:33 +0200 copies: move variable documentation from checkcopies to mergecopies
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Thu, 13 Oct 2016 01:26:33 +0200] rev 30185
copies: move variable documentation from checkcopies to mergecopies It appears that 'mergecopies' is the function consuming this data, so we move the documentation there.