Raphaël Gomès <rgomes@octobus.net> [Wed, 26 Oct 2022 18:46:56 +0200] rev 49550
dirstate-v2: fix edge case where entries aren't sorted
See previous commit for more details.
Raphaël Gomès <rgomes@octobus.net> [Wed, 26 Oct 2022 18:24:34 +0200] rev 49549
dirstate-v2: highlight a bug when Python-packed but used in `rhg`
The Python packer creates unsorted entries in the edge case where a file's
name starts with the name of a sibling folder.
This bug has no effect on the Python `hg status`, since Python ignores
directories. `rhg`, however, assumes that all on-disk entries, including
folders, are sorted (which is a property of the format), hence the issue
highlighted here.
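As a standalone illustration (made-up paths, not Mercurial code), a flat
lexicographic sort of full paths can disagree with a per-component (tree)
ordering exactly when a file's name shares a prefix with a sibling folder:
```
# Standalone sketch (made-up paths, not Mercurial code): '-' sorts before '/',
# so a flat sort of full paths and a per-component (tree) ordering disagree.
paths = ["foo-bar", "foo/baz"]

flat_order = sorted(paths)                               # ['foo-bar', 'foo/baz']
tree_order = sorted(paths, key=lambda p: p.split("/"))   # ['foo/baz', 'foo-bar']

assert flat_order != tree_order
```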
This is also technically broken in the Rust-augmented `hg status`, but
demonstrating it there would make the test setup more complex than necessary,
since it requires the packing to be done by Python only (which it isn't when
the Rust extensions are in use).
Fix is in the next commit.
Raphaël Gomès <rgomes@octobus.net> [Wed, 26 Oct 2022 12:20:23 +0200] rev 49548
dirstate-v2: correct documented return values of `pack_dirstate`
Raphaël Gomès <rgomes@octobus.net> [Wed, 26 Oct 2022 12:19:47 +0200] rev 49547
dirstate-v2: fix typos in docstrings
Raphaël Gomès <rgomes@octobus.net> [Fri, 04 Nov 2022 14:52:16 -0400] rev 49546
dirstate-v2: update constant that wasn't kept in sync
Despite the best efforts of the comment, this constant wasn't kept in sync
when the flags were being rewritten.
The fact that this doesn't break much in the Rust implementation (which does
use directories) relies on all nodes being able to have children and on
dirstate traversal not depending on that flag, which is only used as metadata
for optimizations. However, the bug could become more serious should we start
encoding stronger guarantees using a combination of flags that includes this
one.
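As a rough sketch of the failure mode (the flag names below are invented, not
the real dirstate-v2 constants): a hand-maintained "all known flags" mask
silently goes stale when a flag is added, unless it is derived from or checked
against the individual definitions:
```
# Rough sketch with invented flag names (not the real dirstate-v2 constants).
HAS_MODE_AND_SIZE = 1 << 0
HAS_FILE_MTIME = 1 << 1
DIRECTORY = 1 << 2  # imagine this flag was added during a rewrite

# Hand-maintained mask that was not updated for the new flag:
ALL_KNOWN_FLAGS_STALE = HAS_MODE_AND_SIZE | HAS_FILE_MTIME

# Deriving the mask from the definitions avoids the drift entirely:
ALL_KNOWN_FLAGS = HAS_MODE_AND_SIZE | HAS_FILE_MTIME | DIRECTORY
assert ALL_KNOWN_FLAGS_STALE != ALL_KNOWN_FLAGS  # the kind of drift fixed here
```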
Matt Harbison <matt_harbison@yahoo.com> [Tue, 18 Oct 2022 13:56:45 -0400] rev 49545
lfs: avoid closing connections when the worker doesn't fork
Probably not much more than a minor optimization, but it could be useful in
the case of `hg verify`, where missing blobs are fetched one at a time.
Matt Harbison <matt_harbison@yahoo.com> [Tue, 18 Oct 2022 13:36:33 -0400] rev 49544
lfs: fix blob corruption when transferring with workers on posix
The problem seems to be that the connection used to request the location of the
blobs is sitting in the connection pool, and then when workers are forked, they
all see and attempt to use the same connection. This garbles everything. I
have no clue how this ever worked reliably (but it seems to, even on Linux, with
SCM Manager 1.58). See previous discussion when worker support was added[1].
It shouldn't be a problem on Windows, since the workers are just threads in the
same process, and can see which connections are marked available and which are
in use. (The fact that `mercurial.keepalive.ConnectionManager.set_ready()`
doesn't acquire a lock does give me some pause though.)
[1] https://phab.mercurial-scm.org/D1568#31621
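A self-contained sketch of the failure mode (an assumed illustration, not the
actual lfs code): after `fork()`, every worker inherits the same socket, so
their traffic interleaves on a single stream:
```
# Assumed illustration, not Mercurial code: forked workers inherit the same
# socket, so their writes end up interleaved on one stream (POSIX only).
import os
import socket

server_end, pooled_conn = socket.socketpair()  # stands in for a pooled keep-alive connection

pids = []
for i in range(2):
    pid = os.fork()
    if pid == 0:
        # Worker: unknowingly reuses the inherited "pooled" connection.
        pooled_conn.sendall(b"request-from-worker-%d " % i)
        os._exit(0)
    pids.append(pid)

for pid in pids:
    os.waitpid(pid, 0)

# Both workers' bytes arrive on the same stream, so request/response framing
# can no longer be trusted. Hence: drop pooled connections before forking.
print(server_end.recv(1024))
```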
Matt Harbison <matt_harbison@yahoo.com> [Tue, 18 Oct 2022 12:58:34 -0400] rev 49543
keepalive: add `__repr__()` to the HTTPConnection class to ease debugging
By default, this just printed the class name and memory address. Displaying
the address and port on both sides of the socket makes it easier to figure
out what's in the ConnectionManager, and to correlate with Wireshark traces.
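A minimal sketch of the idea (assuming the connection object exposes its
socket; this is not the actual patch):
```
# Minimal sketch, not the actual patch: show both socket endpoints in repr().
import http.client


class DebuggableHTTPConnection(http.client.HTTPConnection):
    def __repr__(self):
        base = "<%s at %#x" % (self.__class__.__name__, id(self))
        if self.sock is not None:
            try:
                local = "%s:%d" % self.sock.getsockname()[:2]
                remote = "%s:%d" % self.sock.getpeername()[:2]
                return "%s, local=%s, remote=%s>" % (base, local, remote)
            except OSError:
                pass
        return base + ", unconnected>"
```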
It looks like the two connections mentioned in the previous commit come about
because the LFS POST request to access the blobs opens connection 1 and gets a
401. Then, for some reason, the follow-up with credentials opens a new socket
instead of using the existing one in the pool. I have no clue why.
This can be seen with something like this in the blobstore:
```
for h in self.urlopener.handlers:
    if hasattr(h, "close_all"):
        print('open connections on %s in pid %d' % (type(h), os.getpid()))
        for host, conns in h._cm.get_all().items():
            for c in conns:
                print('connection: %r' % c)
```
Matt Harbison <matt_harbison@yahoo.com> [Tue, 18 Oct 2022 11:54:58 -0400] rev 49542
keepalive: ensure `close_all()` actually closes all cached connections
While debugging why LFS blob downloads were getting corrupted with workers, I
noticed that prior to spinning up the workers, the ConnectionManager had 2
connections to the server, and calling `KeepAliveHandler.close_all()` left one
of them behind. The reason is that the value component of
`self._cm.get_all().items()` is a list, and `self._cm.remove()` modifies that
list while the caller is iterating over it. Now `get_all()` returns a deep
copy of both the dict and its lists in all cases.
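The underlying pattern is easy to reproduce in isolation (a standalone sketch,
not the keepalive code):
```
# Standalone sketch, not the keepalive code: removing from a list while
# iterating over it skips every other entry.
pool = {"host": ["conn1", "conn2"]}

for c in pool["host"]:       # iterating the live list...
    pool["host"].remove(c)   # ...while shrinking it
print(pool["host"])          # ['conn2'] is left behind

# Iterating over a copy (what a deep-copied get_all() provides) closes both:
pool = {"host": ["conn1", "conn2"]}
for c in list(pool["host"]):
    pool["host"].remove(c)
print(pool["host"])          # []
```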
Pierre-Yves David <pierre-yves.david@octobus.net> [Thu, 22 Sep 2022 16:27:17 +0200] rev 49541
tests: remove non-python3 line matching and tests block
We don't support Python 2 anymore.