Joerg Sonnenberger <joerg@bec.de> [Wed, 18 Sep 2019 06:07:09 +0200] rev 44280
hgext: initial version of fastexport extension

Differential Revision: https://phab.mercurial-scm.org/D7733
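The entry itself gives no usage details. As a hedged sketch (assuming the
command is named after the extension and that it emits a git fast-import
stream on stdout, which matches the extension as later shipped), a
one-shot conversion into an empty git repository could look like:

  $ hg --config extensions.fastexport= fastexport > ../repo.fi
  $ git init ../mirror && git -C ../mirror fast-import < ../repo.fi
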
Julien Cristau <jcristau@mozilla.com> [Fri, 07 Feb 2020 15:55:21 +0100] rev 44279
hghave: cache the result of gethgversion

hghave --test-features calls gethgversion 90 times, and each call runs
`hg --version`, which takes about a tenth of a second on my workstation.
Caching the result saves roughly 10s on test-hghave.t.

Fixes https://bugs.debian.org/939756

Differential Revision: https://phab.mercurial-scm.org/D8092
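A minimal sketch of the caching involved (hedged: hghave's real helper
parses the version into a tuple and differs in detail; functools.lru_cache
is one standard way to memoize a zero-argument function whose result
cannot change during a run):

  import functools
  import subprocess

  @functools.lru_cache(maxsize=1)
  def gethgversion():
      # Spawn `hg --version` once; every later feature check reuses the
      # cached output instead of paying for another subprocess.
      return subprocess.check_output(['hg', '--version']).decode('utf-8')
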
Augie Fackler <augie@google.com> [Mon, 03 Feb 2020 11:56:02 -0500] rev 44278 (branch: stable)
resourceutil: blacken

Martin von Zweigbergk <martinvonz@google.com> [Fri, 24 Jan 2020 14:11:43 -0800] rev 44277
clean: delete obsolete unlinking of .hg/graftstate

The responsibility for clearing it is now in `cmdutil.clearunfinished()`,
so we shouldn't have to unlink it in `hg.clean()`.

Differential Revision: https://phab.mercurial-scm.org/D7992
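A sketch of the registry pattern this relies on (hedged: the names below
are illustrative, not Mercurial's exact internals): each in-progress
operation registers its state file once, and a single helper clears
whatever is present, so callers like `hg.clean()` no longer unlink
individual files such as .hg/graftstate themselves.

  import os

  # Illustrative registry of (operation, state file under .hg); the real
  # registry carries more metadata per state.
  UNFINISHED_STATES = [
      ('graft', 'graftstate'),
      ('rebase', 'rebasestate'),
  ]

  def clearunfinished(dothg):
      # One code path removes every registered state file that exists,
      # replacing scattered per-caller unlink calls.
      for _op, fname in UNFINISHED_STATES:
          path = os.path.join(dothg, fname)
          if os.path.lexists(path):
              os.unlink(path)
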
Martin von Zweigbergk <martinvonz@google.com> [Tue, 04 Feb 2020 10:16:30 -0800] rev 44276
copies: avoid filtering by short-circuiting dirstate-only copies earlier

The call to `y.ancestor(x)` triggered repo filtering, which we'd like to
avoid in the simple `hg status --copies` case.

Differential Revision: https://phab.mercurial-scm.org/D8071
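A sketch of the short-circuit (hedged: this is not Mercurial's code; the
helpers are hypothetical stand-ins): when the "to" context is the working
directory and the "from" context is its parent, copy data can come
straight from the dirstate, so the ancestor computation, and the repo
filtering it triggers, is skipped entirely.

  def pathcopies(x, y):
      # x and y are context-like objects; rev() is None for the working
      # directory, p1() is its parent. dirstate_copies() and
      # forward_copies_from() are hypothetical helpers.
      if y.rev() is None and x == y.p1():
          # Dirstate-only case: answer without calling ancestor(), so no
          # repo filter is ever computed for plain `hg status --copies`.
          return y.dirstate_copies()
      common = y.ancestor(x)  # the call that triggered filtering
      return forward_copies_from(common, y)
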
Martin von Zweigbergk <martinvonz@google.com> [Tue, 04 Feb 2020 10:14:44 -0800] rev 44275
tests: add test showing that repo filter is calculated for `hg st --copies`

Differential Revision: https://phab.mercurial-scm.org/D8070
Matt Harbison <matt_harbison@yahoo.com> [Tue, 21 Jan 2020 11:40:15 -0500] rev 44274
lfs: enable workers by default

With the stall issue seemingly fixed, there's no reason not to use
workers. The setting is left for now to keep the test output
deterministic, and in case other issues come up. If none do, this can be
converted to a developer setting for usage with the tests.

Differential Revision: https://phab.mercurial-scm.org/D7963
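A hedged configuration sketch: the setting referred to here is assumed to
be the experimental knob from the earlier worker-gating work,
`experimental.lfs.worker-enable` (verify the name against your hg
version), so opting back out of workers would look like:

  [experimental]
  lfs.worker-enable = no
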
Matt Harbison <matt_harbison@yahoo.com> [Tue, 21 Jan 2020 11:32:33 -0500] rev 44273
lfs: fix the stall and corruption issue when concurrently uploading blobs

We've avoided the issue up to this point by gating worker usage with an
experimental config. See 10e62d5efa73, and the thread linked there, for
some of the initial diagnosis. Essentially, some data was read from the
blob before an error occurred; `keepalive` then retried but didn't rewind
the file pointer. So the leading data was lost from the blob on the
server, and the connection stalled, trying to send more data than was
available.

In trying to recreate this, I was unable to do so uploading from Windows
to CentOS 7. But it reproduced every time going from CentOS 7 to another
CentOS 7 over https.

I found recent fixes in the Facebook repo to address this[1][2]. The
commit message for the first is:

    The KeepAlive HTTP implementation is bugged in its retry logic: it
    supports reading from a file pointer, but doesn't support rewinding
    of the seek cursor when it performs a retry. So it can happen that
    an upload fails for whatever reason and will then 'hang' on the
    retry event.

    The sequence of events that get triggered are:
      - Upload file A, goes OK. Keep-Alive caches connection.
      - Upload file B, fails due to (for example) failing Keep-Alive,
        but LFS file pointer has been consumed for the upload and fd
        has been closed.
      - Retry for file B starts, sets the Content-Length properly to
        the expected file size, but since the file pointer has been
        consumed no data will be uploaded, causing the server to wait
        for the uploaded data until either client or server reaches a
        timeout, making it seem as if our Mercurial process hangs.

    This is just a stop-gap measure to prevent this behavior from
    blocking Mercurial (LFS has retry logic). A proper solution needs
    to be built on top of this stop-gap measure: for uploads from file
    pointers, we should support fseek() on the interface. Since we
    expect to consume the whole file always anyway, this should be
    safe. This way we can seek back to the beginning on a retry.

I ported those two patches, and it works. But I see that
`url._sendfile()` does a rewind on `httpsendfile` objects[3], so maybe
it's better to keep this all in one place and avoid a second seek. We may
still want the first Facebook patch as extra protection for this problem
in general. The other two uses of `httpsendfile` are in the wire protocol
to upload bundles, and to upload largefiles. Neither of these appears to
use a worker, and I'm not sure why workers seem to trigger this, or if
this could have happened without a worker.

Since `httpsendfile` already has a `close()` method, that is dropped.
That class also explicitly says there's no `__len__` attribute, so that
is removed too. The override for `read()` is necessary to avoid the
progressbar usage per file.

[1] https://github.com/facebookexperimental/eden/commit/c350d6536d90c044c837abdd3675185644481469
[2] https://github.com/facebookexperimental/eden/commit/77f0d3fd0415e81b63e317e457af9c55c46103ee
[3] https://www.mercurial-scm.org/repo/hg/file/5.2.2/mercurial/url.py#l176

Differential Revision: https://phab.mercurial-scm.org/D7962
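A minimal, self-contained sketch of the rewind-on-retry idea (hedged:
this is not keepalive's actual code; names are illustrative): before each
attempt to send a request whose body is a seekable file-like object, seek
back to the start, so a retry transmits the whole payload instead of only
whatever a failed first attempt hadn't already consumed.

  import io

  def send_with_retry(send, body, retries=2):
      # send is a callable that transmits `body`; body is a seekable
      # file-like object (e.g. io.BytesIO or an open binary file).
      for attempt in range(retries + 1):
          body.seek(0)  # rewind: a prior attempt may have consumed data
          try:
              return send(body)
          except OSError:
              if attempt == retries:
                  raise

  # Usage: a flaky sender that fails once after consuming the body,
  # mimicking the partial-upload-then-retry failure described above.
  state = {'calls': 0}
  def flaky_send(fp):
      state['calls'] += 1
      data = fp.read()
      if state['calls'] == 1:
          raise OSError('connection reset after partial upload')
      return len(data)

  assert send_with_retry(flaky_send, io.BytesIO(b'blob contents')) == 13
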