changeset 19006:0b3b84222a2d

largefiles: getlfile must hit end of HTTP chunked streams to reuse connections

We read exactly the right number of bytes from the response body. But if the response came in chunked encoding, the HTTP layer still had not read the final zero-sized chunk and expected the application layer to read more data from the stream. The application layer, however, was satisfied and sent another request, which then had to go out on a new HTTP connection while the old one lingered until some other event closed it.

Adding an extra read where we expect to hit end of file makes the HTTP connection ready for reuse and thus plugs a real socket leak.

To distinguish HTTP from SSH we look at self's class, just as putlfile does.
author Mads Kiilerich <madski@unity3d.com>
date Tue, 16 Apr 2013 04:35:10 +0200
parents 1b84047e7d16
children 266b5fb72f26
files hgext/largefiles/proto.py
diffstat 1 file changed, 7 insertions(+), 0 deletions(-)
--- a/hgext/largefiles/proto.py	Tue Apr 16 01:55:57 2013 +0200
+++ b/hgext/largefiles/proto.py	Tue Apr 16 04:35:10 2013 +0200
@@ -126,6 +126,13 @@
             # SSH streams will block if reading more than length
             for chunk in util.filechunkiter(stream, 128 * 1024, length):
                 yield chunk
+            # HTTP streams must hit the end to process the last empty
+            # chunk of Chunked-Encoding so the connection can be reused.
+            if issubclass(self.__class__, httppeer.httppeer):
+                chunk = stream.read(1)
+                if chunk:
+                    self._abort(error.ResponseError(_("unexpected response:"),
+                                                    chunk))
 
         @batchable
         def statlfile(self, sha):