Wed, 09 Aug 2017 21:51:45 -0700 wireproto: properly implement batchable checking
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 09 Aug 2017 21:51:45 -0700] rev 33764
wireproto: properly implement batchable checking

remoteiterbatcher (unlike remotebatcher) only supports batchable commands. This claim can be verified by comparing their implementations of submit() and noting how remoteiterbatcher assumes the invoked method has a "batchable" attribute, which is set by @peer.batchable.

remoteiterbatcher has a custom __getitem__ that was trying to validate that only batchable methods are called. However, it was only validating that the called method exists, not that it is batchable. This wasn't a big deal in practice, since remoteiterbatcher.submit() would raise an AttributeError when attempting to call `mtd.batchable(...)`. Let's fix the check and convert the failure to a ProgrammingError, which may not have existed when this code was originally written.

Differential Revision: https://phab.mercurial-scm.org/D317
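The corrected check boils down to requiring the "batchable" marker before a command may be queued. Below is a minimal, standalone sketch of that idea; the stand-in ProgrammingError class, the _calls list, and the lambda wrapper are illustrative and not Mercurial's actual implementation:

    class ProgrammingError(Exception):
        """Stand-in for mercurial.error.ProgrammingError."""

    class remoteiterbatcher(object):
        """Illustrative sketch of the corrected lookup check."""

        def __init__(self, remote):
            self._remote = remote
            self._calls = []

        def __getitem__(self, cmd):
            # The old check only verified that the method exists; the fix
            # also requires the "batchable" attribute set by @peer.batchable.
            method = getattr(self._remote, cmd, None)
            if method is None or not getattr(method, 'batchable', False):
                raise ProgrammingError(
                    'Attempted to batch a non-batchable call to %r' % cmd)
            return lambda *args, **opts: self._calls.append((cmd, args, opts))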
Wed, 09 Aug 2017 21:04:03 -0700 largefiles: remove remotestore.batch()
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 09 Aug 2017 21:04:03 -0700] rev 33763
largefiles: remove remotestore.batch()

This method was added in 9e1616307c4c. AFAICT it didn't do anything at inception. If it did, there was no test coverage for it because changing it to raise doesn't fail any tests at that revision.

b6e71f8af5b8 later refactored all remote.batch() calls to remote.iterbatch(). So if this was somehow used, it isn't called any more because there are no calls to .batch() remaining in the repo.

I suspect the original patch author got confused by the distinction between the peer/remote interface and the largefiles store. The lf store is a gateway to a peer instance. It exposes additional lf-specific methods to execute against a peer. However, it is not a peer and doesn't need to implement batch() because peer itself does that.

Differential Revision: https://phab.mercurial-scm.org/D316
Fri, 11 Aug 2017 15:20:41 +0200 histedit: check first changeset for verb "roll" or "fold" (issue5498)
André Klitzing <aklitzing@gmail.com> [Fri, 11 Aug 2017 15:20:41 +0200] rev 33762
histedit: check first changeset for verb "roll" or "fold" (issue5498)

If someone changes "pick" to "roll" or "fold" for the first changeset in a histedit rule, Mercurial could remove the wrong changeset if the phase is non-public. "roll" or "fold" for the first changeset should be invalid.
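As a minimal illustration of the validation this change adds (the function name, error class, and example node hashes below are hypothetical, not histedit's actual code):

    class ParseError(Exception):
        """Stand-in for the error histedit raises on an invalid plan."""

    def verifyfirstrule(rules):
        # The first changeset in a histedit plan has nothing earlier to be
        # folded into, so "roll" and "fold" are rejected for it.
        if rules and rules[0][0] in ('roll', 'fold'):
            raise ParseError(
                'first changeset cannot use verb "%s"' % rules[0][0])

    try:
        # A plan that starts with "roll" is rejected.
        verifyfirstrule([('roll', 'c8e68270e35a'), ('pick', '08d98a8350f3')])
    except ParseError as err:
        print('histedit: %s' % err)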
Mon, 31 Jul 2017 23:13:47 +0900 encoding: drop circular import by proxying through '<policy>.charencode'
Yuya Nishihara <yuya@tcha.org> [Mon, 31 Jul 2017 23:13:47 +0900] rev 33761
encoding: drop circular import by proxying through '<policy>.charencode'

I decided not to split charencode.c into a new C extension module because that would unnecessarily duplicate binary code.
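A simplified sketch of the proxying idea: callers import the character-encoding functions through a policy-level loader instead of importing an implementation module directly, which keeps encoding.py out of any import cycle with the implementation modules. The package names and fallback order below are illustrative rather than a copy of Mercurial's policy module:

    import importlib

    _packages = {
        'cext': 'mercurial.cext',  # compiled implementations
        'pure': 'mercurial.pure',  # pure-Python fallbacks
    }

    def importmod(modname, preferred=('cext', 'pure')):
        # Return the first implementation of ``modname`` that imports cleanly.
        for key in preferred:
            try:
                return importlib.import_module(
                    '%s.%s' % (_packages[key], modname))
            except ImportError:
                continue
        raise ImportError('no implementation of %s found' % modname)

    # encoding.py would then go through the proxy, e.g.:
    # charencode = importmod('charencode')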
Mon, 31 Jul 2017 23:40:36 +0900 policy: reroute proxy modules internally
Yuya Nishihara <yuya@tcha.org> [Mon, 31 Jul 2017 23:40:36 +0900] rev 33760
policy: reroute proxy modules internally

This allows us to split encoding functions from pure.parsers without doing that for cext.parsers. See the next patch for why.
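One way to picture the internal rerouting (the table name and resolve() helper below are hypothetical, chosen to match the charencode split described further down this list): a request for the C variant of "charencode" can be redirected to "parsers", since that C code lives inside parsers.so, while the pure-Python side keeps a genuine charencode module:

    # Hypothetical redirect table: (variant, requested module) -> actual module.
    _redirects = {
        ('cext', 'charencode'): ('cext', 'parsers'),
    }

    def resolve(variant, modname):
        # Return the (variant, module name) that will actually be imported.
        return _redirects.get((variant, modname), (variant, modname))

    assert resolve('cext', 'charencode') == ('cext', 'parsers')
    assert resolve('pure', 'charencode') == ('pure', 'charencode')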
Mon, 31 Jul 2017 22:58:06 +0900 cext: modernize charencode.c to use Py_ssize_t
Yuya Nishihara <yuya@tcha.org> [Mon, 31 Jul 2017 22:58:06 +0900] rev 33759
cext: modernize charencode.c to use Py_ssize_t
Sun, 21 May 2017 14:23:22 +0900 cext: factor out header for charencode.c
Yuya Nishihara <yuya@tcha.org> [Sun, 21 May 2017 14:23:22 +0900] rev 33758
cext: factor out header for charencode.c

This merges part of util.h into the header that should exist for charencode.c.
Mon, 31 Jul 2017 22:28:27 +0900 cext: split character encoding functions to new compilation unit
Yuya Nishihara <yuya@tcha.org> [Mon, 31 Jul 2017 22:28:27 +0900] rev 33757
cext: split character encoding functions to new compilation unit

This extracts charencode.c from parsers.c, which has grown big enough that I hesitate to add new JSON functions to it. charencode.o is still linked into parsers.so to avoid duplicating binary code.
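A minimal setup.py-style sketch of the resulting build layout (Mercurial's real setup.py carries far more configuration; the paths here simply mirror the description above): both C files are listed as sources of a single Extension, so only one shared object is produced and the character-encoding code is not duplicated into a second module:

    from setuptools import Extension, setup

    parsers_ext = Extension(
        'mercurial.cext.parsers',
        sources=[
            'mercurial/cext/parsers.c',
            'mercurial/cext/charencode.c',  # compiled and linked into parsers.so
        ],
        include_dirs=['mercurial/cext'],
    )

    setup(name='example', ext_modules=[parsers_ext])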