Wed, 26 Oct 2022 18:24:34 +0200 dirstate-v2: highlight a bug when Python-packed but used in `rhg` stable
Raphaël Gomès <rgomes@octobus.net> [Wed, 26 Oct 2022 18:24:34 +0200] rev 49541
dirstate-v2: highlight a bug when Python-packed but used in `rhg` The Python packer creates unsorted entries in the edge case where a file's name starts with the name of a sibling folder. This bug has no effect on the Python `hg status` since Python ignores directories. `rhg` assumes that all on-disk entries, including folders, are sorted (which is a property of the format), hence the issue highlighted here. This is also technically broken in Rust-augmented `hg status`, but demonstrating it there would make the test setup more complex than necessary, since it requires the packing to be done by Python only (which it isn't if you have Rust extensions). The fix is in the next commit.
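For illustration, here is a minimal, self-contained sketch of the kind of ordering mismatch described above. The file and folder names are made up and the comparison is deliberately simplified; it is not the actual packer logic.

```
# Hypothetical names, simplified comparison: a flat lexicographic sort of
# full paths can disagree with the per-sibling ordering a tree-shaped
# format expects when a file name shares a prefix with a sibling folder.
paths = [b"foo/inner", b"foo-file"]

# Flat sort: b"-" (0x2d) < b"/" (0x2f), so the file sorts before anything
# under the sibling folder.
print(sorted(paths))                  # [b'foo-file', b'foo/inner']

# Sibling order at the root of the tree: the folder node b"foo" compares
# *before* the file b"foo-file".
print(sorted([b"foo", b"foo-file"]))  # [b'foo', b'foo-file']

# A packer that derives sibling order from the flat sort would emit the
# b"foo" folder node after b"foo-file", i.e. unsorted siblings from the
# point of view of a reader that binary-searches sorted children, as
# `rhg` does.
```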
Wed, 26 Oct 2022 12:20:23 +0200 dirstate-v2: correct documented return values of `pack_dirstate` stable
Raphaël Gomès <rgomes@octobus.net> [Wed, 26 Oct 2022 12:20:23 +0200] rev 49540
dirstate-v2: correct documented return values of `pack_dirstate`
Wed, 26 Oct 2022 12:19:47 +0200 dirstate-v2: fix typos in docstrings stable
Raphaël Gomès <rgomes@octobus.net> [Wed, 26 Oct 2022 12:19:47 +0200] rev 49539
dirstate-v2: fix typos in docstrings
Fri, 04 Nov 2022 14:52:16 -0400 dirstate-v2: update constant that wasn't kept in sync stable
Raphaël Gomès <rgomes@octobus.net> [Fri, 04 Nov 2022 14:52:16 -0400] rev 49538
dirstate-v2: update constant that wasn't kept in sync Despite the best efforts of the comment, this constant wasn't kept in sync when the flags were being rewritten. The reason this doesn't actually break much in the Rust implementation (which does use directories) is that all nodes can have children and that dirstate traversal is not driven by that flag; the flag only feeds metadata used for optimizations. However, the bug could become more serious should we start encoding stronger guarantees using a combination of flags that includes this one.
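One common way to avoid this class of drift, sketched below with made-up flag names and bit positions (not the real dirstate-v2 values), is to derive the aggregate constant from the individual flags instead of writing the mask out by hand:

```
# Hypothetical flag values for illustration only; the real dirstate-v2 bit
# assignments live in the format documentation and the Rust code.
WDIR_TRACKED      = 1 << 0
P1_TRACKED        = 1 << 1
DIRECTORY         = 1 << 5
HAS_MODE_AND_SIZE = 1 << 6

# Deriving the "all known flags" mask from the individual constants means
# that renumbering or adding a flag cannot leave the mask stale.
ALL_KNOWN_FLAGS = WDIR_TRACKED | P1_TRACKED | DIRECTORY | HAS_MODE_AND_SIZE
```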
Tue, 18 Oct 2022 13:56:45 -0400 lfs: avoid closing connections when the worker doesn't fork stable
Matt Harbison <matt_harbison@yahoo.com> [Tue, 18 Oct 2022 13:56:45 -0400] rev 49537
lfs: avoid closing connections when the worker doesn't fork Probably not much more than a minor optimization, but could be useful in the case of `hg verify`, where missing blobs are fetched one at a time.
Tue, 18 Oct 2022 13:36:33 -0400 lfs: fix blob corruption when transferring with workers on posix stable
Matt Harbison <matt_harbison@yahoo.com> [Tue, 18 Oct 2022 13:36:33 -0400] rev 49536
lfs: fix blob corruption when transferring with workers on posix The problem seems to be that the connection used to request the location of the blobs is sitting in the connection pool, and then when workers are forked, they all see and attempt to use the same connection. This garbles everything. I have no clue how this ever worked reliably (but it seems to, even on Linux, with SCM Manager 1.58). See the previous discussion from when worker support was added[1]. It shouldn't be a problem on Windows, since the workers are just threads in the same process, and can see which connections are marked available and which are in use. (The fact that `mercurial.keepalive.ConnectionManager.set_ready()` doesn't acquire a lock does give me some pause though.) [1] https://phab.mercurial-scm.org/D1568#31621
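A minimal sketch of the mitigation implied here, assuming the fix is to drop pooled keep-alive connections in the parent before forking so each worker opens its own socket. The `close_all()` handler walk mirrors the debugging snippet quoted in the `keepalive: add __repr__()` entry below; the worker loop and function names are made up.

```
import os

def _transfer_with_workers(urlopener, chunks, transfer_one):
    # Drop pooled keep-alive connections before forking so children do not
    # share (and interleave reads on) the parent's sockets.
    for handler in urlopener.handlers:
        if hasattr(handler, "close_all"):
            handler.close_all()

    pids = []
    for chunk in chunks:
        pid = os.fork()
        if pid == 0:              # child: opens fresh connections only
            for item in chunk:
                transfer_one(item)
            os._exit(0)
        pids.append(pid)

    for pid in pids:              # parent: wait for all workers
        os.waitpid(pid, 0)
```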
Tue, 18 Oct 2022 12:58:34 -0400 keepalive: add `__repr__()` to the HTTPConnection class to ease debugging stable
Matt Harbison <matt_harbison@yahoo.com> [Tue, 18 Oct 2022 12:58:34 -0400] rev 49535
keepalive: add `__repr__()` to the HTTPConnection class to ease debugging By default, this just printed the class name and memory address. By displaying the address and ports on both sides of the socket, it makes it easier to figure out what's in the ConnectionManager, and correlate with Wireshark traces. It looks like the two connections mentioned in the previous commit come about because the LFS POST request to access the blobs opens connection 1, and gets a 401. Then for some reason, the follow-up with credentials opens a new socket, instead of using the existing one in the pool. I have no clue why. This can be seen with something like this in the blobstore:

```
for h in self.urlopener.handlers:
    if hasattr(h, "close_all"):
        print('open connections on %s in pid %d' % (type(h), os.getpid()))
        for host, conns in h._cm.get_all().items():
            for c in conns:
                print('connection: %r' % c)
```
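A sketch of what such a `__repr__()` might look like; the subclass name is hypothetical, and it relies on the standard-library behaviour that `http.client.HTTPConnection` exposes the underlying socket as `.sock` once connected:

```
import http.client

class DebuggableHTTPConnection(http.client.HTTPConnection):
    """Hypothetical subclass whose repr() shows both ends of the socket."""

    def __repr__(self):
        if self.sock is not None:
            local = self.sock.getsockname()    # (addr, port) on our side
            remote = self.sock.getpeername()   # (addr, port) on the server
            return '<%s %s:%d -> %s:%d>' % (
                self.__class__.__name__,
                local[0], local[1], remote[0], remote[1])
        return '<%s (not connected)>' % self.__class__.__name__
```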
Tue, 18 Oct 2022 11:54:58 -0400 keepalive: ensure `close_all()` actually closes all cached connections stable
Matt Harbison <matt_harbison@yahoo.com> [Tue, 18 Oct 2022 11:54:58 -0400] rev 49534
keepalive: ensure `close_all()` actually closes all cached connections While debugging why LFS blob downloads were getting corrupted with workers, I noticed that prior to spinning up the workers, the ConnectionManager had 2 connections to the server and calling `KeepAliveHandler.close_all()` left one behind. The reason is that the value component of `self._cm.get_all().items()` is a list, and `self._cm.remove()` modifies said list while the caller is iterating over it. Now `get_all()` returns a deep copy of both the dict and its lists in all cases.
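A standalone sketch of the underlying pitfall and the fix described above (the pool contents and helper are made up; only the iterate-while-removing pattern is the point):

```
import copy

pool = {'example.com:443': ['conn-1', 'conn-2']}

def remove(conn):
    for conns in pool.values():
        if conn in conns:
            conns.remove(conn)

# Buggy: iterating the live list while remove() shrinks it skips elements,
# so one "connection" survives.
for host, conns in pool.items():
    for c in conns:
        remove(c)
print(pool)   # {'example.com:443': ['conn-2']}

# Fixed: iterate over a deep copy so removals cannot affect the walk.
pool = {'example.com:443': ['conn-1', 'conn-2']}
for host, conns in copy.deepcopy(pool).items():
    for c in conns:
        remove(c)
print(pool)   # {'example.com:443': []}
```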
Wed, 02 Nov 2022 16:46:46 -0400 localrepo: byteify the requirements.DIRSTATE_TRACKED_HINT_Vx warning message stable
Matt Harbison <matt_harbison@yahoo.com> [Wed, 02 Nov 2022 16:46:46 -0400] rev 49533
localrepo: byteify the requirements.DIRSTATE_TRACKED_HINT_Vx warning message Flagged by PyCharm.
Mon, 31 Oct 2022 16:15:54 +0000 rhg: fallback to slow path on invalid patterns in hgignore stable
Arseniy Alekseyev <aalekseyev@janestreet.com> [Mon, 31 Oct 2022 16:15:54 +0000] rev 49532
rhg: fallback to slow path on invalid patterns in hgignore
Mon, 31 Oct 2022 16:15:30 +0000 rhg: add a test involving hgignore lookaround stable
Arseniy Alekseyev <aalekseyev@janestreet.com> [Mon, 31 Oct 2022 16:15:30 +0000] rev 49531
rhg: add a test involving hgignore lookaround
Mon, 24 Oct 2022 18:07:22 +0200 relnotes: add 6.3 stable
Raphaël Gomès <rgomes@octobus.net> [Mon, 24 Oct 2022 18:07:22 +0200] rev 49530
relnotes: add 6.3
Mon, 24 Oct 2022 17:30:44 +0200 Added signature for changeset a3356ab610fc stable
Raphaël Gomès <rgomes@octobus.net> [Mon, 24 Oct 2022 17:30:44 +0200] rev 49529
Added signature for changeset a3356ab610fc
Mon, 24 Oct 2022 17:30:19 +0200 Added tag 6.3rc0 for changeset a3356ab610fc stable
Raphaël Gomès <rgomes@octobus.net> [Mon, 24 Oct 2022 17:30:19 +0200] rev 49528
Added tag 6.3rc0 for changeset a3356ab610fc
Mon, 24 Oct 2022 15:32:14 +0200 branching: merge default into stable stable 6.3rc0
Raphaël Gomès <rgomes@octobus.net> [Mon, 24 Oct 2022 15:32:14 +0200] rev 49527
branching: merge default into stable This marks the feature freeze for the 6.3 release