Wed, 10 Jun 2020 13:02:39 +0200 py3: make stdout line-buffered if connected to a TTY
Manuel Jacob <me@manueljacob.de> [Wed, 10 Jun 2020 13:02:39 +0200] rev 44991
py3: make stdout line-buffered if connected to a TTY

Status messages that are to be shown on the terminal should be written to the file descriptor before anything further is done, to keep the user updated. One common way to achieve this is to make stdout line-buffered if it is connected to a TTY. This is done on Python 2 (except on Windows, where libc, which the CPython 2 streams depend on, does not properly support this).

Python 3 rolls its own I/O streams. On Python 3, buffered binary streams can't be set line-buffered. The previous code (added in 227ba1afcb65) incorrectly assumed that on Python 3, pycompat.stdout (sys.stdout.buffer) is already line-buffered. However, the interpreter initializes it with a block-buffered stream or an unbuffered stream (when the -u option or the PYTHONUNBUFFERED environment variable is set), never with a line-buffered stream.

One example where the current behavior is unacceptable is when running `hg pull https://www.mercurial-scm.org/repo/hg` on Python 3, where the line "pulling from https://www.mercurial-scm.org/repo/hg" does not appear on the terminal before the hg process blocks while waiting for the server.

Various approaches to fix this problem are possible, including:

1. Weaken the contract of procutil.stdout to not give any guarantees about buffering behavior. In this case, users of procutil.stdout need to be changed to do enough flushes. In particular,
   1. either ui must insert enough flushes for ui.write() and friends, or
   2. ui.write() and friends get split into flushing and fully buffered methods, or
   3. users of ui.write() and friends must flush explicitly.
2. Make stdout unbuffered.
3. Make stdout line-buffered. Since Python 3 does not natively support that for binary streams, we must implement it ourselves.

(2.) is problematic because using unbuffered I/O changes the performance characteristics significantly compared to line-buffered (which is used on Python 2) and this would be a regression. (1.2.) and (1.3.) are a substantial amount of work. It’s unclear whether the added complexity would be justified, given that raw performance doesn’t matter that much when writing to a terminal much faster than the user could read it. (1.1.) pushes complexity into the ui class instead of separating the concern of how stdout is buffered. Other users of procutil.stdout would still need to take care of the flushes.

This patch implements (3.). The general performance considerations are very similar to (1.1.). The extra method invocation and method forwarding add a little more overhead if the class is used. In exchange, it doesn’t add overhead if not used.

For the benchmarks, I compared the previous implementation (incorrect on Python 3), (1.1.), (3.) and (2.). The command was chosen so that the streams were configured as if they were writing to a TTY, but actually write to a pager, which is also the default:

HGRCPATH=/dev/null python3 ./hg --cwd ~/vcs/mozilla-central --time --pager yes --config pager.pager='cat > /dev/null' status --all

previous:
time: real 7.880 secs (user 7.290+0.050 sys 0.580+0.170)
time: real 7.830 secs (user 7.220+0.070 sys 0.590+0.140)
time: real 7.800 secs (user 7.210+0.050 sys 0.570+0.170)

(1.1.) using Yuya Nishihara’s patch:
time: real 9.860 secs (user 8.670+0.350 sys 1.160+0.830)
time: real 9.540 secs (user 8.430+0.370 sys 1.100+0.770)
time: real 9.830 secs (user 8.630+0.370 sys 1.180+0.840)

(3.) using this patch:
time: real 9.580 secs (user 8.480+0.350 sys 1.090+0.770)
time: real 9.670 secs (user 8.480+0.330 sys 1.170+0.860)
time: real 9.640 secs (user 8.500+0.350 sys 1.130+0.810)

(2.) using a previous patch by me:
time: real 10.480 secs (user 8.850+0.720 sys 1.590+1.500)
time: real 10.490 secs (user 8.750+0.750 sys 1.710+1.470)
time: real 10.240 secs (user 8.600+0.700 sys 1.590+1.510)

As expected, there’s no difference on Python 2, as exactly the same code paths are used:

previous:
time: real 6.950 secs (user 5.870+0.330 sys 1.070+0.770)
time: real 7.040 secs (user 6.040+0.360 sys 0.980+0.750)
time: real 7.070 secs (user 5.950+0.360 sys 1.100+0.760)

this patch:
time: real 7.010 secs (user 5.900+0.390 sys 1.070+0.730)
time: real 7.000 secs (user 5.850+0.350 sys 1.120+0.760)
time: real 7.000 secs (user 5.790+0.380 sys 1.170+0.710)
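As a rough illustration of option (3.), here is a minimal sketch, not the actual procutil code, of how a binary stdout can be made line-buffered by wrapping it in a class that flushes whenever the written data contains a newline (the class and variable names are assumptions):

    import sys

    class LineBufferedWrapper(object):
        """Flush the wrapped binary stream whenever a newline is written."""

        def __init__(self, orig):
            self.orig = orig

        def __getattr__(self, attr):
            # Forward everything else (fileno, isatty, close, ...) unchanged.
            return getattr(self.orig, attr)

        def write(self, s):
            res = self.orig.write(s)
            if b'\n' in s:
                self.orig.flush()
            return res

    stdout = sys.stdout.buffer
    if stdout.isatty():
        stdout = LineBufferedWrapper(stdout)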
Tue, 02 Jun 2020 21:44:57 +0900 simplemerge: rewrite flag merging loop as expression
Yuya Nishihara <yuya@tcha.org> [Tue, 02 Jun 2020 21:44:57 +0900] rev 44990
simplemerge: rewrite flag merging loop as expression I feel binary operations are more readable.
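A hypothetical illustration of the kind of rewrite described above (the function name and semantics are assumptions, not the actual simplemerge code), replacing a flag-merging loop with a single set expression:

    def mergedflags(base, local, other):
        # Keep a flag if both sides have it, or if either side newly added it.
        return (local & other) | (local - base) | (other - base)

    # Base had 'x', one side dropped it and added 'l': the removal wins,
    # the addition is kept.
    assert mergedflags({b'x'}, {b'x', b'l'}, set()) == {b'l'}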
Tue, 02 Jun 2020 21:40:49 +0900 simplemerge: leverage pycompat function to convert byte string to set
Yuya Nishihara <yuya@tcha.org> [Tue, 02 Jun 2020 21:40:49 +0900] rev 44989
simplemerge: leverage pycompat function to convert byte string to set
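For context, this is the Python 3 pitfall such a helper works around; the sketch below assumes pycompat.iterbytestr is the helper in question:

    from mercurial import pycompat

    # On Python 3, iterating a bytes object yields integers, so a plain
    # set() call produces byte values rather than one-character byte strings:
    assert set(b'xl') == {120, 108}

    # pycompat.iterbytestr yields length-1 byte strings, so the resulting
    # set keeps the expected element type:
    assert set(pycompat.iterbytestr(b'xl')) == {b'x', b'l'}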
Tue, 02 Jun 2020 21:39:07 +0900 simplemerge: fix function name that tests if ctx is not null revision
Yuya Nishihara <yuya@tcha.org> [Tue, 02 Jun 2020 21:39:07 +0900] rev 44988
simplemerge: fix function name that tests if ctx is not null revision
Tue, 09 Jun 2020 13:18:21 -0700 git: decode node IDs back into Python strings (issue6349)
Hollis Blanchard <hollis_blanchard@mentor.com> [Tue, 09 Jun 2020 13:18:21 -0700] rev 44987
git: decode node IDs back into Python strings (issue6349) db.text_factory = bytes, so the database contains only byte strings. The object IDs we get from pygit2 are Python strings. b'foo' != 'foo'. This change allows the "don't reindex" optimization to work by allowing the "cur_cache_heads == cache_heads" comparison a few lines down to succeed. Differential Revision: https://phab.mercurial-scm.org/D8622
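A minimal, self-contained reproduction of the mismatch and of the fix described above (the table name and values are hypothetical, not the git extension's actual schema):

    import sqlite3

    db = sqlite3.connect(':memory:')
    db.text_factory = bytes  # TEXT columns now come back as byte strings
    db.execute('CREATE TABLE heads (node TEXT)')
    db.execute("INSERT INTO heads VALUES ('deadbeef')")

    cache_heads = {row[0] for row in db.execute('SELECT node FROM heads')}
    cur_cache_heads = {'deadbeef'}  # pygit2 object IDs are Python str

    assert cur_cache_heads != cache_heads  # b'deadbeef' != 'deadbeef'

    # The fix: decode the stored node IDs back into Python strings so the
    # "don't reindex" comparison can succeed.
    cache_heads = {node.decode('ascii') for node in cache_heads}
    assert cur_cache_heads == cache_heads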
Tue, 09 Jun 2020 22:02:09 +0530 phabricator: make it clear what happens when there is no response
Sushil khanchi <sushilkhanchi97@gmail.com> [Tue, 09 Jun 2020 22:02:09 +0530] rev 44986
phabricator: make it clear what happens when there is no response Differential Revision: https://phab.mercurial-scm.org/D8621
Mon, 08 Jun 2020 11:43:07 +0530 tests: make it clear what happens when no response is entered
Sushil khanchi <sushilkhanchi97@gmail.com> [Mon, 08 Jun 2020 11:43:07 +0530] rev 44985
tests: make it clear what happens when no response is entered Differential Revision: https://phab.mercurial-scm.org/D8620
Sat, 18 Jan 2020 10:07:07 -0800 localrepo: handle ValueError during repository opening
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 18 Jan 2020 10:07:07 -0800] rev 44984
localrepo: handle ValueError during repository opening Python 3.8 can raise ValueError when an I/O operation is attempted against an illegal path. This was causing test-remotefilelog-gc.t to fail on Python 3.8. This commit teaches repository opening to handle ValueError and re-raise it as an Abort on failure. An arguably better solution would be to implement this logic in the vfs layer. But that seems like a bag of worms and I don't want to go down that rabbit hole. Until users report uncaught ValueError exceptions in the wild, I think it is fine to patch this at the only occurrence our test harness finds. Differential Revision: https://phab.mercurial-scm.org/D7944
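A hedged sketch of the pattern described here (the helper name and message handling are assumptions, not the actual localrepo code), converting a Python 3.8 ValueError raised during repository opening into the usual Abort:

    from mercurial import error
    from mercurial.utils import stringutil

    def readrequires(vfs):
        # Hypothetical helper: read the 'requires' file while opening a
        # repository, treating a ValueError from the I/O layer (raised by
        # Python 3.8 for an illegal path) like any other opening failure.
        try:
            return set(vfs.read(b'requires').splitlines())
        except ValueError as e:
            raise error.Abort(stringutil.forcebytestr(e))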
Wed, 27 May 2020 12:56:13 +0200 metadata: filter the `removed` set to only contain relevant data
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 27 May 2020 12:56:13 +0200] rev 44983
metadata: filter the `removed` set to only contain relevant data The `files` entry can be bogus and contain too many entries. This can combine badly with the computation of `removed`, inflating the set size. That can lead the changeset-centric rename computation to process much more data than needed, slowing it down (and increasing the space taken by data storage). In practice, newer commits already use that reduced set; this applies the "fix" to older changesets. Differential Revision: https://phab.mercurial-scm.org/D8589
Wed, 27 May 2020 12:45:39 +0200 files: extract code for extra filtering of the `removed` entry into copies
Pierre-Yves David <pierre-yves.david@octobus.net> [Wed, 27 May 2020 12:45:39 +0200] rev 44982
files: extract code for extra filtering of the `removed` entry into copies We want to reduce the set of `removed` files to the set of files actually removed. That `removed` set is used as part of the changeset-centric algorithm; having smaller sets means less processing and faster computation. In this changeset we extract the code into a function of its own. We will make use of it in the next changesets. Differential Revision: https://phab.mercurial-scm.org/D8588
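A hedged sketch of the kind of helper the last two entries describe (the function name is an assumption): keep only the entries of `files` that are genuinely absent from the changeset's manifest, so the changeset-centric copy algorithm works on the smallest possible `removed` set.

    def filteredremoved(ctx):
        # ctx.files() may over-report; a file is really removed only if it
        # is no longer present in the changeset's manifest.
        return {f for f in ctx.files() if f not in ctx}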