Mads Kiilerich <madski@unity3d.com> [Tue, 16 Apr 2013 04:35:10 +0200] rev 19006
largefiles: getlfile must hit end of HTTP chunked streams to reuse connections
We did read exactly the right number of bytes from the response body. But
if the response came in chunked encoding, that meant that the HTTP layer
still hadn't read the last 0-sized chunk and expected the app layer to read
more data from the stream. The app layer was however happy and sent another
request, which had to go out on another HTTP connection while the old one
lingered until some other event closed it.
Adding an extra read where we expect to hit the end of file makes the HTTP
connection ready for reuse and thus plugs a real socket leak.
To distinguish HTTP from SSH we look at self's class, just as is done in
putlfile.
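As an illustration only, a minimal sketch of the extra read, assuming a
hypothetical "httpstore" class for the HTTP case and a response stream
"infile" of known "length":

    from mercurial import util

    def _getchunks(self, infile, length):
        # Yield exactly `length` bytes from the response body.
        for chunk in util.filechunkiter(infile, limit=length):
            yield chunk
        # A chunked HTTP response is only finished once the terminating
        # 0-sized chunk has been consumed; reading once more where EOF is
        # expected lets the HTTP layer mark the connection as reusable.
        if issubclass(self.__class__, httpstore):  # hypothetical class name
            if infile.read(1):
                raise util.Abort('unexpected data after largefile content')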
Mads Kiilerich <madski@unity3d.com> [Tue, 16 Apr 2013 01:55:57 +0200] rev 19005
largefiles: drop limitreader, use filechunkiter limit
filechunkiter.close was a noop.
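For illustration only, a small sketch (not the actual patch) of bounding a
read with the "limit" argument of util.filechunkiter instead of wrapping
the stream in a limitreader; the chunk size here is arbitrary:

    from mercurial import util

    def read_exactly(stream, length):
        # filechunkiter stops after `limit` bytes, so no wrapper object
        # around the stream is needed.
        return ''.join(util.filechunkiter(stream, size=128 * 1024,
                                          limit=length))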
Mads Kiilerich <madski@unity3d.com> [Tue, 16 Apr 2013 01:46:39 +0200] rev 19004
largefiles: move protocol conversion into getlfile and make it an iterable
Avoid the intermediate limitreader and filechunkiter between getlfile and
copyandhash - return the right protocol and put the complexity where it can
better be managed.
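Roughly, and only as a sketch with hypothetical helper names, getlfile now
hands back something copyandhash can iterate over directly:

    from mercurial import util

    def getlfile(self, sha):
        # Hypothetical shape: fetch the response stream and its length,
        # then expose the content as an iterator of byte chunks.
        stream, length = self._fetch(sha)  # hypothetical helper
        return util.filechunkiter(stream, limit=length)

The consuming side is sketched under rev 18999 below.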
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 23:47:04 +0200] rev 19003
largefiles: don't close the fd passed to store._getfile
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 23:43:50 +0200] rev 19002
largefiles: remove blecch from lfutil.copyandhash - don't close the passed fd
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 23:43:44 +0200] rev 19001
largefiles: drop lfutil.blockstream - use filechunkiter like everybody else
The old chunk size is kept - just to avoid changing it.
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 23:35:43 +0200] rev 19000
largefiles: refactoring - use findfile in localstore._getfile
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 23:35:18 +0200] rev 18999
largefiles: refactoring - return hex from _getfile and copyandhash
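A hedged sketch of the consuming side under this scheme, returning the hex
digest (names are illustrative, not the actual lfutil code):

    import hashlib

    def copyandhash(instream, outfile):
        # Consume any iterable of byte chunks; the caller owns `outfile`
        # and is responsible for closing it (see rev 19002 above).
        hasher = hashlib.sha1()
        for chunk in instream:
            hasher.update(chunk)
            outfile.write(chunk)
        return hasher.hexdigest()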
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 23:32:33 +0200] rev 18998
largefiles: refactoring - create destination dir in lfutil.link
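For illustration, a minimal sketch of the idea (assumed names, not the
actual lfutil code):

    import os
    from mercurial import util

    def link(src, dest):
        # Ensure the destination directory exists before hard-linking,
        # falling back to a plain copy where hard links are unsupported.
        util.makedirs(os.path.dirname(dest))
        try:
            os.link(src, dest)
        except OSError:
            util.copyfile(src, dest)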
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Tue, 09 Apr 2013 23:40:11 +0900] rev 18997
summary: clear "commonincoming" also if branches are different
Before this patch, the "commonincoming" calculated by
"discovery.findcommonincoming()" is cleared only if the "default" URL
without its branch part (the trailing "#branch" of the URL) differs from
the "default-push" URL without its branch part.
But the common revisions in a "commonincoming" calculated for one branch
don't include those for another branch, even if the URLs without their
branch parts are the same. Invoking "discovery.findcommonoutgoing()"
with such a "commonincoming" then gives incorrect results in some cases.
This patch also clears "commonincoming" if the branch part of "default"
differs from that of "default-push".
To avoid redundant lookups:
- "ui.expandpath('default')" and "ui.expandpath('default-push',
  'default')" are not compared directly, even though they contain
  branch information, because they are not yet normalized by
  "hg.parseurl()" (a trailing "/" on the path, for example)
- "commonincoming" is not cleared if no branch is specified in the
  "default" URL, because such a "commonincoming" contains common
  revisions for all branches
This patch also tests the "different path, same branch" pattern to guard
against careless regressions in the comparison between source and
destination.
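In outline, and only as a hedged sketch of the comparison described above
(the surrounding "hg summary" code supplies ui and commonincoming; names
are illustrative), the clearing condition becomes roughly:

    from mercurial import hg

    surl, sbranches = hg.parseurl(ui.expandpath('default'))
    durl, dbranches = hg.parseurl(ui.expandpath('default-push', 'default'))
    sbranch, dbranch = sbranches[0], dbranches[0]
    # Clear commonincoming when the normalized URLs differ, or when
    # "default" names a branch that differs from the "default-push" one;
    # with no branch on "default", commonincoming already covers all
    # branches and can be kept.
    if surl != durl or (sbranch is not None and sbranch != dbranch):
        commonincoming = None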
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Tue, 09 Apr 2013 23:40:11 +0900] rev 18996
summary: make "incoming" information sensitive to branch in URL (
issue3830)
Before this patch, "incoming" information of "hg summary --remote" is
not sensitive to the branch specified in the URL of the destination
repository, even though "hg pull"/"hg incoming" are so.
Invocation of "discovery.findcommonincoming()" without "heads"
argument treats revisions on branches other than the one specified in
the URL as incoming ones unexpectedly.
This patch looks up the head revisions already detected from the URL by
"hg.addbranchrevs()" against the "other" repository, and invokes
"discovery.findcommonincoming()" with that list as "heads" to limit the
calculation of incoming revisions.
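A hedged sketch of the shape of that call (the surrounding "hg summary"
code provides repo and ui; names are illustrative):

    from mercurial import hg, discovery

    source, branches = hg.parseurl(ui.expandpath('default'))
    other = hg.peer(repo, {}, source)
    # Resolve the branch named in the URL to its heads on the remote and
    # limit the incoming calculation to those heads.
    revs, checkout = hg.addbranchrevs(repo, other, branches, None)
    heads = None
    if revs:
        heads = [other.lookup(rev) for rev in revs]
    commoninc = discovery.findcommonincoming(repo, other, heads=heads)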
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Tue, 09 Apr 2013 23:40:10 +0900] rev 18995
histedit: make "hg histedit" sensitive to branch in URL
Before this patch, "hg histedit" are not sensitive to the branch
specified in the URL of the destination repository, even though "hg
push"/"hg outgoing" are so:
Invocation of "discovery.findcommonoutgoing()" without "onlyheads"
argument treats revisions on branches other than the one specified in
the URL as outgoing ones unexpectedly.
This patch passes the list of head revisions, already detected from the
URL by "hg.addbranchrevs()" and looked up against the local repository,
as "onlyheads" to "discovery.findcommonoutgoing()" to limit the
calculation of outgoing revisions.
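Under the same assumptions (illustrative names; the histedit code supplies
repo, ui and the destination URL), the outgoing side looks roughly like:

    from mercurial import hg, discovery

    dest, branches = hg.parseurl(ui.expandpath('default-push', 'default'))
    other = hg.peer(repo, {}, dest)
    # Branch heads named in the URL are looked up in the local repository
    # and passed as onlyheads, so only those branches count as outgoing.
    revs, checkout = hg.addbranchrevs(repo, other, branches, None)
    onlyheads = None
    if revs:
        onlyheads = [repo.lookup(rev) for rev in revs]
    outgoing = discovery.findcommonoutgoing(repo, other,
                                            onlyheads=onlyheads)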