changeset 49544:abf471862b8e stable
lfs: fix blob corruption when transferring with workers on posix
The problem seems to be that the connection used to request the location of the
blobs is sitting in the connection pool, and then when workers are forked, they
all see and attempt to use the same connection. This garbles everything. I
have no clue how this ever worked reliably (but it seems to, even on Linux, with
SCM Manager 1.58). See previous discussion when worker support was added[1].
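
To make the failure mode concrete, here is a minimal sketch (not Mercurial code) of why a pooled connection breaks under fork(): the parent and the forked child inherit the same underlying socket, so each consumes bytes meant for the other. The socketpair stands in for a pooled HTTP connection with responses already in flight.

```python
import os
import socket

# A connected pair: "server" plays the remote end of a pooled connection.
server, client = socket.socketpair()

# Pretend the server queued two complete responses back to back.
server.sendall(b"RESPONSE-FOR-A\n" + b"RESPONSE-FOR-B\n")

pid = os.fork()  # POSIX only, mirroring the forked-worker model
who = "child " if pid == 0 else "parent"

# Both processes now read from the shared connection.  Whichever runs
# first consumes the first response; the other receives a reply it never
# asked for -- the "garbling" described above.
data = client.recv(15)
print(who, "read:", data)

if pid == 0:
    os._exit(0)
os.waitpid(pid, 0)
```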
It shouldn't be a problem on Windows, since the workers are just threads in the
same process, and can see which connections are marked available and which are
in use. (The fact that `mercurial.keepalive.ConnectionManager.set_ready()`
doesn't acquire a lock does give me some pause though.)
[1] https://phab.mercurial-scm.org/D1568#31621
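
The locking worry can also be made concrete. Below is a hedged sketch with illustrative names (not the actual internals of `mercurial.keepalive.ConnectionManager`) of pool bookkeeping where the test-and-claim step happens under a single lock, closing the window in which two threads could both observe the same connection as free:

```python
import threading

class ReadyMap:
    """Illustrative thread-safe pool bookkeeping; not the real
    mercurial.keepalive.ConnectionManager API."""

    def __init__(self):
        self._lock = threading.Lock()
        self._ready = {}  # connection -> True when free for reuse

    def set_ready(self, conn, ready):
        # This lock is the piece noted above as missing from
        # ConnectionManager.set_ready().
        with self._lock:
            self._ready[conn] = ready

    def claim(self):
        # Test and set atomically, so two threads can never both see
        # the same connection as available and claim it.
        with self._lock:
            for conn, ready in self._ready.items():
                if ready:
                    self._ready[conn] = False
                    return conn
        return None
```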
| author | Matt Harbison <matt_harbison@yahoo.com> |
| --- | --- |
| date | Tue, 18 Oct 2022 13:36:33 -0400 |
| parents | 76fbb1b6692a |
| children | 3556f0392808 |
| files | hgext/lfs/blobstore.py |
| diffstat | 1 files changed, 13 insertions(+), 0 deletions(-) |
```diff
--- a/hgext/lfs/blobstore.py	Tue Oct 18 12:58:34 2022 -0400
+++ b/hgext/lfs/blobstore.py	Tue Oct 18 13:36:33 2022 -0400
@@ -599,6 +599,19 @@
         # Until https multiplexing gets sorted out
         if self.ui.configbool(b'experimental', b'lfs.worker-enable'):
+            # The POSIX workers are forks of this process, so before spinning
+            # them up, close all pooled connections.  Otherwise, there's no way
+            # to coordinate between them about who is using what, and the
+            # transfers will get corrupted.
+            #
+            # TODO: add a function to keepalive.ConnectionManager to mark all
+            # ready connections as in use, and roll that back after the fork?
+            # That would allow the existing pool of connections in this process
+            # to be preserved.
+            if not pycompat.iswindows:
+                for h in self.urlopener.handlers:
+                    getattr(h, "close_all", lambda: None)()
+
             oids = worker.worker(
                 self.ui,
                 0.1,
```
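
The TODO in the added comment might look something like the sketch below. `reserve_all()` and `release_reserved()` are hypothetical names that do not exist in Mercurial's keepalive module; the sketch assumes the manager keeps a lock and a ready-map along the lines shown earlier. The parent would call `reserve_all()` just before forking the workers and `release_reserved()` once they are running, preserving its pool instead of closing it.

```python
# Hypothetical keepalive.ConnectionManager methods sketching the TODO
# above; these do not exist upstream.
def reserve_all(self):
    """Mark every ready connection as in use; return those reserved."""
    with self._lock:
        reserved = [c for c, ready in self._readymap.items() if ready]
        for c in reserved:
            self._readymap[c] = False
        return reserved

def release_reserved(self, reserved):
    """Roll back reserve_all() after the workers have been forked."""
    with self._lock:
        for c in reserved:
            self._readymap[c] = True
```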