Gregory Szorc <gregory.szorc@gmail.com> [Fri, 07 Aug 2015 19:51:55 -0700] rev 25918
branchmap: use absolute_import
Gregory Szorc <gregory.szorc@gmail.com> [Fri, 07 Aug 2015 19:49:21 -0700] rev 25917
bookmarks: use absolute_import
Gregory Szorc <gregory.szorc@gmail.com> [Fri, 07 Aug 2015 19:47:49 -0700] rev 25916
archival: use absolute_import
Gregory Szorc <gregory.szorc@gmail.com> [Fri, 07 Aug 2015 19:45:48 -0700] rev 25915
ancestor: use absolute_import

A few months ago, import-checker.py was taught to enforce a more
well-defined import style for files with absolute_import. However, we
stopped short of actually converting source files to use absolute_import
because of problems with certain files.

Investigation revealed the following problems with switching to
absolute_import universally:

1) import cycles result in import failure on Python 2.6
2) undetermined way to import C/pure modules

While these problems need to be solved, they can be put off. This patch
starts a series of converting files to absolute_import that won't exhibit
any of the aforementioned problems.
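For reference, the import style enforced by import-checker.py for
absolute_import files looks roughly like the following sketch (an
illustrative example, not the contents of any particular converted
module; the imported names are assumptions): standard library modules
are imported absolutely, one per line, and sibling modules in the
mercurial package are imported relative to the package root.

    from __future__ import absolute_import

    import heapq
    import os

    from . import (
        error,
        util,
    )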
Augie Fackler <augie@google.com> [Wed, 05 Aug 2015 14:21:46 -0400] rev 25914
discovery: always use batching now that all peers support batching

Some peers will transparently downgrade batched requests to non-batched
ones, but being able to batch unconditionally simplifies the code for
everyone using batching.
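A rough sketch of the calling pattern this enables (simplified, with
illustrative variable names rather than the exact setdiscovery.py code):
callers can batch unconditionally instead of first checking
remote.capable('batch').

    batch = remote.batch()          # no capability check needed
    headsfut = batch.heads()        # queued calls hand back future objects
    knownfut = batch.known(sample)
    batch.submit()                  # one round trip when the peer really batches
    srvheads = headsfut.value
    yesno = knownfut.value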
Augie Fackler <augie@google.com> [Wed, 05 Aug 2015 14:15:17 -0400] rev 25913
wireproto: make wirepeer look-before-you-leap on batching

This means that users of request batching don't need to concern
themselves with capability checking. Instead, they can just use batching,
and if the remote server doesn't support batching for some reason the
wirepeer code will transparently un-batch the requests. This will allow
for some slight simplification in a handful of places.

Prior to this change, largefiles would have been silently broken against
a server which did not support batching.
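A minimal sketch of the look-before-you-leap idea (illustrative, not the
actual wirepeer code; it assumes remotebatch encodes queued calls into a
single wire request while localbatch issues them one at a time, both with
the same interface):

    def batch(self):
        # Check the remote's advertised capabilities once, up front,
        # and hand callers whichever batcher will actually work.
        if self.capable('batch'):
            return remotebatch(self)
        return localbatch(self)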
Augie Fackler <augie@google.com> [Wed, 05 Aug 2015 14:51:34 -0400] rev 25912
batching: migrate basic noop batching into peer.peer

"Real" batching only makes sense for wirepeers, but it greatly simplifies
the clients of peer instances if they can be ignorant of the actual
batching capabilities of that peer. By moving the not-really-batched
batching code into peer.peer, all peer instances now work with the
batching API, thus simplifying users.

This leaves a couple of name forwards in wirepeer.py. Originally I had
planned to clean those up, but it kind of unclarifies other bits of code
that want to use batching, so I think it makes sense for the names to
stay exposed by wireproto. Specifically, almost nothing is currently
aware of peer (see largefiles.proto for an example), so making those
clients aware of the peer module *and* the wireproto module seems like
some abstraction leakage. I *think* the right long-term fix would
actually be to make wireproto an implementation detail that clients
wouldn't need to know about, but I don't really know what that would
entail at the moment.

As far as I'm aware, no clients of batching in third-party extensions
will need updating, which is nice icing.
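A self-contained sketch of what "not-really-batched" batching means
(illustrative names, not the exact peer.py code): calls are queued just
as with a real batch, but submit() simply runs them one at a time against
the peer, so every peer can expose the same batching API.

    class future(object):
        """Placeholder for a result that submit() fills in later."""
        def set(self, value):
            self.value = value

    class localbatch(object):
        """Queue calls like a real batcher, but execute them serially."""
        def __init__(self, peer):
            self._peer = peer
            self._calls = []

        def __getattr__(self, name):
            # Record the call instead of performing it, returning a
            # future that will hold the result after submit().
            def record(*args, **kwargs):
                f = future()
                self._calls.append((name, args, kwargs, f))
                return f
            return record

        def submit(self):
            # "Noop" batching: no wire encoding, just run each queued
            # call directly and stash its result in the future.
            for name, args, kwargs, f in self._calls:
                f.set(getattr(self._peer, name)(*args, **kwargs))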
Laurent Charignon <lcharignon@fb.com> [Thu, 06 Aug 2015 22:54:28 -0700] rev 25911 (stable branch)
parsers: fix memory leak in compute_phases_map_sets

PySet_Add increments the reference count of the object added to the set,
see: https://hg.python.org/cpython/file/2.6/Objects/setobject.c#l379

Before this patch we were forgetting to decrement the reference count
after adding objects to the phaseset. This patch fixes the issue and
makes the reference count right so that these objects can be properly
garbage collected.
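Because this entry concerns the C implementation in parsers.c, here is
the ownership pattern in CPython C API terms (a hedged sketch with
illustrative variable names and labels, not the actual
compute_phases_map_sets code): PySet_Add takes its own reference on
success, so the caller must still drop the reference it created.

    PyObject *rev = PyInt_FromLong(i);   /* new reference owned by us */
    if (rev == NULL)
        goto release;
    if (PySet_Add(phaseset, rev) == -1) {
        Py_DECREF(rev);                  /* still ours on failure */
        goto release;
    }
    Py_DECREF(rev);   /* the set holds its own reference now; drop ours */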