Tue, 19 Jul 2016 21:16:44 +0900 hghave: fix typo of sslutil.supportedprotocols stable
Yuya Nishihara <yuya@tcha.org> [Tue, 19 Jul 2016 21:16:44 +0900] rev 29611
hghave: fix typo of sslutil.supportedprotocols
Tue, 19 Jul 2016 03:29:53 -0700 rebase: turn rebase revs into set before filtering obsolete stable
Simon Farnsworth <simonfar@fb.com> [Tue, 19 Jul 2016 03:29:53 -0700] rev 29610
rebase: turn rebase revs into set before filtering obsolete

When the inhibit extension from mutable-history is enabled, it attempts to iterate over the rebaseset to prevent the nodes being rebased from being marked obsolete. This happens at the same time as rebase's _filterobsoleterevs function trying to iterate over the rebaseset to figure out which ones are obsolete. The two of them iterating over the same revset generatorset causes a 'generator already executing' exception. This is probably a flaw in the revset implementation, since iterating over the same set twice should be supported.

This regression was introduced in 5d16ebe7b14, since it changed _filterobsoleterevs to be called before the rebaseset was turned into a set(). For now let's just make the rebaseset an actual set again before calling that function.

This was caught by the inhibit tests. The relevant call stack from test-inhibit.t:

  File "/tmp/hgtests.jgjrN5/install/lib/python/hgext/rebase.py", line 285, in _preparenewrebase
    obsrevs = _filterobsoleterevs(self.repo, rebaseset)
  File "/data/hgbuild/facebook-hg-rpms/mutable-history/hgext/inhibit.py", line 197, in _filterobsoleterevswrap
    r = orig(repo, rebasesetrevs, *args, **kwargs)
  File "/tmp/hgtests.jgjrN5/install/lib/python/hgext/rebase.py", line 1380, in _filterobsoleterevs
    return set(r for r in revs if repo[r].obsolete())
  File "/tmp/hgtests.jgjrN5/install/lib/python/hgext/rebase.py", line 1380, in <genexpr>
    return set(r for r in revs if repo[r].obsolete())
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/revset.py", line 3079, in _iterordered
    val2 = next(iter2)
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/revset.py", line 3417, in gen
    yield nextrev()
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/revset.py", line 3424, in _consumegen
    for item in self._gen:
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/revset.py", line 71, in iterate
    cl = repo.changelog
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/repoview.py", line 319, in changelog
    revs = filterrevs(unfi, self.filtername)
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/repoview.py", line 261, in filterrevs
    repo.filteredrevcache[filtername] = func(repo.unfiltered())
  File "/data/hgbuild/facebook-hg-rpms/mutable-history/hgext/directaccess.py", line 65, in _computehidden
    hidden = repoview.filterrevs(repo, 'visible')
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/repoview.py", line 261, in filterrevs
    repo.filteredrevcache[filtername] = func(repo.unfiltered())
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/repoview.py", line 175, in computehidden
    hideable = hideablerevs(repo)
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/repoview.py", line 33, in hideablerevs
    return obsolete.getrevs(repo, 'obsolete')
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/obsolete.py", line 1097, in getrevs
    repo.obsstore.caches[name] = cachefuncs[name](repo)
  File "/data/hgbuild/facebook-hg-rpms/mutable-history/hgext/inhibit.py", line 255, in _computeobsoleteset
    if getrev(n) not in blacklist:
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/revset.py", line 3264, in __contains__
    return x in self._r1 or x in self._r2
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/revset.py", line 3348, in __contains__
    for l in self._consumegen():
  File "/tmp/hgtests.jgjrN5/install/lib/python/mercurial/revset.py", line 3424, in _consumegen
    for item in self._gen:
  ValueError: generator already executing
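For context, here is a minimal sketch in plain Python (not Mercurial's revset classes; lazy_revs and inhibit_hook are made-up names) of the failure mode: producing the next element of a lazy rev generator triggers a hook that does a membership test on that same generator, re-entering it while its frame is still running, and materializing the revs into a real set first avoids it.

    # Sketch only: illustrates "generator already executing", not Mercurial code.
    def lazy_revs(source, hook):
        for r in source:
            hook(r)            # e.g. an extension peeking at the rebaseset
            yield r

    def main():
        revs = None

        def inhibit_hook(r):
            return r in revs   # membership test iterates `revs` again

        revs = lazy_revs(range(5), inhibit_hook)
        try:
            set(r for r in revs)           # what the filtering step did
        except ValueError as exc:
            print("broken:", exc)          # generator already executing

        # The fix, in miniature: turn the revs into a real set first, so a
        # nested membership test never touches a half-consumed generator.
        revs = set(range(5))
        print("fixed:", set(r for r in revs if inhibit_hook(r)))

    main()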
Mon, 18 Jul 2016 15:59:08 +0100 commandserver: update comment about setpgid stable
Jun Wu <quark@fb.com> [Mon, 18 Jul 2016 15:59:08 +0100] rev 29609
commandserver: update comment about setpgid setpgid now serves two main purposes: better handling of terminal-generated SIGTSTP and SIGINT, and of process-exit-generated SIGHUP. Update the comment to explain this more clearly.
Sun, 17 Jul 2016 22:55:47 +0100 chg: forward SIGINT, SIGHUP to process group stable
Jun Wu <quark@fb.com> [Sun, 17 Jul 2016 22:55:47 +0100] rev 29608
chg: forward SIGINT, SIGHUP to process group These signals are meant to be sent to a process group rather than to a single process: SIGINT is usually emitted by the terminal and delivered to the foreground process group, and SIGHUP is delivered to a process group when termination of a process causes that group to become orphaned. Before this patch, chg would only forward these signals to the single server process; this patch forwards them to the server's process group instead. This allows us to properly kill processes started by the forked server process, such as an ssh process. The behavior difference can be observed by setting SSH_ASKPASS to a dummy script that runs "sleep 100" and then running "chg push ssh://dest-need-password-auth". Before this patch, the first Ctrl+C would kill the hg process while ssh-askpass and ssh remained alive; with this patch they are killed properly.
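A hedged sketch of the mechanism (Python and POSIX-only; chg itself is C, and the names here are illustrative, not chg's actual code): the server side lives in its own process group, and the client forwards SIGINT/SIGHUP to that whole group so children spawned by the server, such as ssh, are killed along with it.

    import os
    import signal
    import subprocess
    import sys

    # long-running stand-in for the command server, in its own process group
    server = subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(60)"],
        preexec_fn=os.setpgrp)

    def forward(signum, frame):
        # deliver the signal to every process in the server's group,
        # not just to the single server process
        os.killpg(os.getpgid(server.pid), signum)

    for sig in (signal.SIGINT, signal.SIGHUP):
        signal.signal(sig, forward)

    server.wait()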
Mon, 18 Jul 2016 23:31:51 -0500 Added signature for changeset 519bb4f9d3a4 stable
Matt Mackall <mpm@selenic.com> [Mon, 18 Jul 2016 23:31:51 -0500] rev 29607
Added signature for changeset 519bb4f9d3a4
Mon, 18 Jul 2016 23:31:50 -0500 Added tag 3.9-rc for changeset 519bb4f9d3a4 stable
Matt Mackall <mpm@selenic.com> [Mon, 18 Jul 2016 23:31:50 -0500] rev 29606
Added tag 3.9-rc for changeset 519bb4f9d3a4
Mon, 18 Jul 2016 23:28:14 -0500 merge default into stable for 3.9 code freeze stable 3.9-rc
Matt Mackall <mpm@selenic.com> [Mon, 18 Jul 2016 23:28:14 -0500] rev 29605
merge default into stable for 3.9 code freeze
Mon, 18 Jul 2016 22:22:38 +0200 rbc: fix invalid rbc-revs entries caused by missing cache growth
Mads Kiilerich <madski@unity3d.com> [Mon, 18 Jul 2016 22:22:38 +0200] rev 29604
rbc: fix invalid rbc-revs entries caused by missing cache growth In some cases it was possible to end up writing to the cache file without growing it first. The range assignment in _setcachedata would then append instead of writing at the requested position, and thus write the new record in the wrong place. To fix this, we avoid looking up in caches that are too small, and when growing the cache, we do it right before writing the new record so we know the growth has actually happened.
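A quick illustration of the described failure mode (plain Python, not the real _setcachedata): slice-assigning past the end of a buffer appends instead of writing at the requested offset, which is why the cache has to be grown to the needed size right before the record is written.

    RECSIZE = 8
    cache = bytearray(2 * RECSIZE)                # room for two records only

    off = 5 * RECSIZE                             # record #5, beyond the buffer
    cache[off:off + RECSIZE] = b"\xaa" * RECSIZE
    print(len(cache))                             # 24, not 48: the record was appended

    # The fix in spirit: grow the buffer first, so the assignment lands at `off`.
    cache = bytearray(2 * RECSIZE)
    cache.extend(b"\x00" * (off + RECSIZE - len(cache)))
    cache[off:off + RECSIZE] = b"\xaa" * RECSIZE
    print(len(cache))                             # 48, record written at offset 40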
Mon, 18 Jul 2016 22:21:42 +0200 rbc: test case for cache file not growing correctly, causing bad new entries
Mads Kiilerich <madski@unity3d.com> [Mon, 18 Jul 2016 22:21:42 +0200] rev 29603
rbc: test case for cache file not growing correctly, causing bad new entries
Mon, 18 Jul 2016 18:55:06 +0100 chg: handle EOF reading data block
Jun Wu <quark@fb.com> [Mon, 18 Jul 2016 18:55:06 +0100] rev 29602
chg: handle EOF reading data block

We recently discovered a case in production where chg uses 100% CPU and tries to read data forever:

  recvfrom(4, "", 1814012019, 0, NULL, NULL) = 0

Using gdb, readchannel() had apparently received wrong data. It was reading in an infinite loop because rsize == 0 does not exit the loop, while the server process had ended.

  (gdb) bt
  #0 ... in recv () at /lib64/libc.so.6
  #1 ... in readchannel (...) at /usr/include/bits/socket2.h:45
  #2 ... in readchannel (hgc=...) at hgclient.c:129
  #3 ... in handleresponse (hgc=...) at hgclient.c:255
  #4 ... in hgc_runcommand (hgc=..., args=<optimized>, argsize=<optimized>)
  #5 ... in main (argc=...486922636, argv=..., envp=...) at chg.c:661
  (gdb) frame 2
  (gdb) p *hgc
  $1 = {sockfd = 4, pid = 381152, ctx = {ch = 108 'l',
      data = 0x7fb05164f010 "st):\nTraceback (most recent call last):\n"
             "Traceback (most recent call last):\ne",
      maxdatasize = 1814065152, datasize = 1814064225}, capflags = 16131}

This patch addresses the infinite loop by detecting consecutive empty responses and aborting in that case.

Note that datasize can be translated to ['l', ' ', 'l', 'a']. Concatenated, datasize and data form part of "Traceback (most recent call last):". This may indicate a server-side channeledoutput issue. If it is a race condition, we may want to use flock to protect the channels.
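For reference, a hedged sketch in Python (chg itself is C) of the idea behind the fix: a read loop must treat a zero-length read as end-of-file instead of retrying forever, because recv() returning 0 means the peer has closed the connection.

    def read_exact(sock, size):
        """Read exactly `size` bytes from a connected socket."""
        buf = b""
        while len(buf) < size:
            chunk = sock.recv(size - len(buf))
            if not chunk:
                # the server went away; without this check the loop spins forever
                raise EOFError("unexpected EOF while reading data block")
            buf += chunk
        return buf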