Sat, 25 Oct 2014 21:34:49 -0400 httppeer: close the temporary bundle file after two-way streaming it stable
Matt Harbison <matt_harbison@yahoo.com> [Sat, 25 Oct 2014 21:34:49 -0400] rev 23086
httppeer: close the temporary bundle file after two-way streaming it

This fixes several push tests in test-bundle2-exchange.t that were failing on Windows with messages like the following:

  $ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403 \
      --bookmark book_32af
  pushing to http://localhost:$HGPORT2/
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: 1 new obsolescence markers
  updating bookmark book_32af
  abort: The process cannot access the file because it is being used by another process: 'C:\path\to\tmp\bundle.hg'
  [255]
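The underlying issue is Windows file-handle semantics: os.unlink() fails while another handle to the file is still open. A minimal sketch of the pattern, with illustrative names rather than Mercurial's actual httppeer code:

  import os
  import tempfile

  def push_bundle(send, bundledata):
      # Hypothetical helper illustrating the fix: write the bundle to a
      # temporary file, stream it to the server, and close the file object
      # *before* unlinking it.  On Windows, deleting a still-open file fails
      # with "The process cannot access the file because it is being used by
      # another process".
      fd, tempname = tempfile.mkstemp(prefix='hg-bundle-', suffix='.hg')
      fp = os.fdopen(fd, 'wb+')
      try:
          fp.write(bundledata)
          fp.seek(0)
          return send(fp)    # two-way streaming: upload the bundle, read the reply
      finally:
          fp.close()         # the close that has to happen before the unlink
          os.unlink(tempname)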
Fri, 24 Oct 2014 14:24:28 -0700 status: make 'hg status --rev' faster when there are deleted files stable
Martin von Zweigbergk <martinvonz@google.com> [Fri, 24 Oct 2014 14:24:28 -0700] rev 23085
status: make 'hg status --rev' faster when there are deleted files To avoid listing files as both added and deleted, for example, we check for every file in the manifest whether it is in the _list_ of deleted files. This can get quite slow when there are many deleted files. Change the list to a set to make the containment check faster. On a somewhat contrived example of the Mozilla repo with the entire testing/ directory deleted (~14k files), this makes 'hg status --rev .^' go from 26s to 2s.
Mon, 27 Oct 2014 17:52:33 +0100 setdiscovery: limit the size of the initial sample (issue4411) stable
Pierre-Yves David <pierre-yves.david@fb.com> [Mon, 27 Oct 2014 17:52:33 +0100] rev 23084
setdiscovery: limit the size of the initial sample (issue4411) Set discovery starts by sending a "known" command with all local heads. When the number of local heads is massive (e.g. when using hidden changesets), such a request becomes too large, leading to a 414 error over HTTP and aborting the whole process. We fix this by limiting the size of the sample used by the first query. The tests are impacted because they exercise a massive number of heads, but they do not do so over a real-world HTTP setup.
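A sketch of where the cap is applied, assuming a fixed initial sample size; the constant, the names and the peer API are illustrative, not the actual setdiscovery code:

  import random

  initialsamplesize = 100  # illustrative cap on the first query

  def initial_known_query(remote, ownheads):
      # Send at most 'initialsamplesize' heads in the very first "known"
      # round so the request stays small enough for HTTP front ends; a huge
      # head list is what triggered the 414 error.
      sample = ownheads
      if len(sample) > initialsamplesize:
          sample = random.sample(list(sample), initialsamplesize)
      return remote.known(sample)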
Mon, 27 Oct 2014 17:40:32 +0100 setdiscovery: extract sample limitation in a `_limitsample` function stable
Pierre-Yves David <pierre-yves.david@fb.com> [Mon, 27 Oct 2014 17:40:32 +0100] rev 23083
setdiscovery: extract sample limitation in a `_limitsample` function We need to reuse this logic for the initial query. We extract it into a function to ensure sample limiting is applied consistently in all cases.
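A sketch of what such a helper can look like, assuming the desired behaviour is "return a random subset of at most desiredlen items":

  import random

  def _limitsample(sample, desiredlen):
      # Samples that are already small enough pass through unchanged; larger
      # ones are cut down to a random subset of 'desiredlen' items.
      if len(sample) > desiredlen:
          sample = set(random.sample(list(sample), desiredlen))
      return sample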
Fri, 24 Oct 2014 17:24:46 -0500 exchange: don't report failure from identical bookmarks stable
Gregory Szorc <gregory.szorc@gmail.com> [Fri, 24 Oct 2014 17:24:46 -0500] rev 23082
exchange: don't report failure from identical bookmarks b901645a8784 regressed the behavior of pushing an unchanged bookmark to a remote. Before that commit, pushing an unchanged bookmark would result in "exporting bookmark @" being printed. After that commit, we instead see the incorrect message "bookmark %s does not exist on the local or remote repository!" This patch fixes the regression introduced by b901645a8784 by having the bookmark error-reporting code filter out identical bookmarks, and adds a test for the behavior.
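A sketch of the filtering, assuming the bookmark comparison exposes a 'same' bucket of (name, srcnode, dstnode) triples (see the next entry); the helper and its names are illustrative:

  def unmatched_bookmarks(explicit, same):
      # 'explicit' is the set of bookmark names the user asked to push;
      # 'same' holds the bookmarks already identical on both sides.  Dropping
      # the identical ones first means an unchanged bookmark is treated as a
      # no-op instead of triggering the "does not exist on the local or
      # remote repository!" warning.
      return explicit - {name for name, _src, _dst in same}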
Fri, 24 Oct 2014 10:40:37 -0700 bookmarks: explicitly track identical bookmarks stable
Gregory Szorc <gregory.szorc@gmail.com> [Fri, 24 Oct 2014 10:40:37 -0700] rev 23081
bookmarks: explicitly track identical bookmarks bookmarks.compare() previously lumped identical bookmarks in the "invalid" bucket. This patch adds a "same" bucket. An 8-tuple for holding this state is pretty gnarly. The return value should probably be converted into a class to increase readability. But that is beyond the scope of a patch intended to be a late arrival to stable.
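A minimal sketch of the bucketing idea, reduced to four buckets rather than the real eight and with an illustrative signature (plain dicts mapping bookmark name to node):

  def compare(srcmarks, dstmarks):
      # The point of the change is the dedicated 'same' bucket for bookmarks
      # that point at the same node on both sides, instead of lumping them
      # into 'invalid'.
      addsrc, adddst, invalid, same = [], [], [], []
      for name in sorted(set(srcmarks) | set(dstmarks)):
          scid = srcmarks.get(name)
          dcid = dstmarks.get(name)
          if scid == dcid:
              same.append((name, scid, dcid))
          elif scid and not dcid:
              addsrc.append((name, scid, dcid))
          elif dcid and not scid:
              adddst.append((name, scid, dcid))
          else:
              invalid.append((name, scid, dcid))
      return addsrc, adddst, invalid, same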
Fri, 24 Oct 2014 15:52:20 -0500 context.status: remove incorrect swapping of added/removed in workingctx stable
Martin von Zweigbergk <martinvonz@google.com> [Fri, 24 Oct 2014 15:52:20 -0500] rev 23080
context.status: remove incorrect swapping of added/removed in workingctx The comment in workingctx.status() says that "calling 'super' subtly reversed the contexts", but that is simply not true, so we should not be swapping the added and removed fields.