view mercurial/wireprotov2server.py @ 46057:e0313b0a6f7e
copies-rust: parse the changed-file sidedata directly in rust
It does not make much sense to parse the data into Python objects using slow
Python code, only to turn them into Rust objects later. Instead, we pass the
binary blob to Rust and use it there directly.
Ideally we would read the sidedata directly in Rust, using a revlog implemented
in Rust. However, we do not have that ready to use yet.
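As a rough illustration of consuming the binary blob directly in Rust, the
sketch below parses changed-file entries straight out of a byte slice without
any intermediate Python round-trip. The entry layout used here (one flag byte,
a 2-byte big-endian path length, then the path bytes) is invented for the
example and is not Mercurial's actual sidedata encoding.

```rust
/// One changed-file entry borrowed straight from the blob (no copies).
#[derive(Debug, PartialEq)]
struct Entry<'a> {
    flags: u8,
    path: &'a [u8],
}

/// Walk the raw bytes and yield entries; errors on truncated input.
fn parse_entries(mut data: &[u8]) -> Result<Vec<Entry<'_>>, &'static str> {
    let mut out = Vec::new();
    while !data.is_empty() {
        if data.len() < 3 {
            return Err("truncated entry header");
        }
        let flags = data[0];
        let len = u16::from_be_bytes([data[1], data[2]]) as usize;
        let rest = &data[3..];
        if rest.len() < len {
            return Err("truncated path");
        }
        out.push(Entry { flags, path: &rest[..len] });
        data = &rest[len..];
    }
    Ok(out)
}

fn main() {
    // Build a blob with two entries: (flags=1, "a.txt") and (flags=2, "dir/b.rs").
    let mut blob = Vec::new();
    for (flags, path) in [(1u8, &b"a.txt"[..]), (2u8, &b"dir/b.rs"[..])] {
        blob.push(flags);
        blob.extend_from_slice(&(path.len() as u16).to_be_bytes());
        blob.extend_from_slice(path);
    }
    let entries = parse_entries(&blob).unwrap();
    assert_eq!(entries.len(), 2);
    assert_eq!(entries[0].path, b"a.txt");
    assert_eq!(entries[1].flags, 2);
    println!("parsed {} entries", entries.len());
}
```

The entries borrow from the input buffer, so no per-file allocation is needed
until the caller decides which paths it actually cares about.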
This more direct approach provides a nice speedup across the board. In
particular, five cases that were too slow to finish with the previous changeset
are now able to complete.
Notably, we are now significantly faster than the Python version of this code in
all the meaningful cases.
I looked at the various cases that remain significantly slower than the filelog
version; there are currently three main sources of slowness:
* The isancestor computation: even though we cache the results, if the revs
  span a large amount of history the ancestry checking is still quite
  expensive. A different, more graph-centered approach that we are currently
  considering might yield a significant speedup.
* Merging the maps from the two parents: in some cases, this climbs up to ⅔ of
  the time spent in copy tracing. See the inline comment for ideas on how to
  handle this better.
* Extracting data from the filelog: I would like to think this mostly comes
  from the fact that my test repositories pre-date Valentin Gatien-Baron's
  improvement of the `files` field (99ebde4fec99), and that more recent
  revisions will be faster to fetch. Further testing on this aspect is needed.
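To make the first bullet concrete, here is a small sketch of a cached ancestry
check of the kind copy tracing issues repeatedly. The parent map, the u32
revision numbering, and the (ancestor, descendant) result cache are simplified
stand-ins for Mercurial's real revlog index, not its actual API; the point is
that even with memoization, a cache miss still walks a potentially large span
of history.

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical ancestry oracle over a parent map, with memoized answers.
struct AncestryCache {
    parents: HashMap<u32, Vec<u32>>,
    cache: HashMap<(u32, u32), bool>,
}

impl AncestryCache {
    fn new(parents: HashMap<u32, Vec<u32>>) -> Self {
        Self { parents, cache: HashMap::new() }
    }

    /// Is `anc` an ancestor of (or equal to) `desc`? Answers are cached,
    /// but each miss still does a DFS towards the roots, which is the
    /// cost called out above when revs span a lot of history.
    fn is_ancestor(&mut self, anc: u32, desc: u32) -> bool {
        if let Some(&hit) = self.cache.get(&(anc, desc)) {
            return hit;
        }
        let mut seen = HashSet::new();
        let mut stack = vec![desc];
        let mut found = false;
        while let Some(rev) = stack.pop() {
            if rev == anc {
                found = true;
                break;
            }
            // Revision numbers only decrease towards the roots, so any
            // rev below `anc` cannot lead back to it: prune the walk.
            if rev < anc || !seen.insert(rev) {
                continue;
            }
            if let Some(ps) = self.parents.get(&rev) {
                stack.extend(ps.iter().copied());
            }
        }
        self.cache.insert((anc, desc), found);
        found
    }
}

fn main() {
    // Small DAG, topologically numbered: 0 <- 1 <- 2, and 0 <- 3.
    let parents = HashMap::from([(1, vec![0]), (2, vec![1]), (3, vec![0])]);
    let mut ac = AncestryCache::new(parents);
    assert!(ac.is_ancestor(0, 2));
    assert!(!ac.is_ancestor(3, 2));
    assert!(ac.is_ancestor(0, 2)); // second query is served from the cache
    println!("ancestry checks ok");
}
```

A graph-centered alternative would batch many such queries into one traversal
instead of answering them one pair at a time.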
This revision compared to the previous one:
===========================================
Repo Case Source-Rev Dest-Rev # of revisions old time new time Difference Factor time per rev
--------------------------------------------------------------------------------------------------------------------------------------------------------------
mercurial x_revs_x_added_0_copies ad6b123de1c7 39cfcef4f463 : 1 revs, 0.000047 s, 0.000049 s, +0.000002 s, × 1.0426, 49 µs/rev
mercurial x_revs_x_added_x_copies 2b1c78674230 0c1d10351869 : 6 revs, 0.000181 s, 0.000114 s, -0.000067 s, × 0.6298, 19 µs/rev
mercurial x000_revs_x000_added_x_copies 81f8ff2a9bf2 dd3267698d84 : 1032 revs, 0.005852 s, 0.004223 s, -0.001629 s, × 0.7216, 4 µs/rev
pypy x_revs_x_added_0_copies aed021ee8ae8 099ed31b181b : 9 revs, 0.000229 s, 0.000305 s, +0.000076 s, × 1.3319, 33 µs/rev
pypy x_revs_x000_added_0_copies 4aa4e1f8e19a 359343b9ac0e : 1 revs, 0.000058 s, 0.000060 s, +0.000002 s, × 1.0345, 60 µs/rev
pypy x_revs_x_added_x_copies ac52eb7bbbb0 72e022663155 : 7 revs, 0.000146 s, 0.000173 s, +0.000027 s, × 1.1849, 24 µs/rev
pypy x_revs_x00_added_x_copies c3b14617fbd7 ace7255d9a26 : 1 revs, 0.001206 s, 0.000446 s, -0.000760 s, × 0.3698, 446 µs/rev
pypy x_revs_x000_added_x000_copies df6f7a526b60 a83dc6a2d56f : 6 revs, 0.025275 s, 0.010360 s, -0.014915 s, × 0.4099, 1726 µs/rev
pypy x000_revs_xx00_added_0_copies 89a76aede314 2f22446ff07e : 4785 revs, 0.080303 s, 0.048002 s, -0.032301 s, × 0.5978, 10 µs/rev
pypy x000_revs_x000_added_x_copies 8a3b5bfd266e 2c68e87c3efe : 6780 revs, 0.152641 s, 0.075705 s, -0.076936 s, × 0.4960, 11 µs/rev
pypy x000_revs_x000_added_x000_copies 89a76aede314 7b3dda341c84 : 5441 revs, 0.099107 s, 0.056705 s, -0.042402 s, × 0.5722, 10 µs/rev
pypy x0000_revs_x_added_0_copies d1defd0dc478 c9cb1334cc78 : 43646 revs, 2.137894 s, 0.794685 s, -1.343209 s, × 0.3717, 18 µs/rev
pypy x0000_revs_xx000_added_0_copies bf2c629d0071 4ffed77c095c : 26389 revs, 0.022202 s, 0.020209 s, -0.001993 s, × 0.9102, 0 µs/rev
pypy x0000_revs_xx000_added_x000_copies 08ea3258278e d9fa043f30c0 : 11316 revs, 0.228946 s, 0.122475 s, -0.106471 s, × 0.5350, 10 µs/rev
netbeans x_revs_x_added_0_copies fb0955ffcbcd a01e9239f9e7 : 2 revs, 0.000186 s, 0.000142 s, -0.000044 s, × 0.7634, 71 µs/rev
netbeans x_revs_x000_added_0_copies 6f360122949f 20eb231cc7d0 : 2 revs, 0.000133 s, 0.000113 s, -0.000020 s, × 0.8496, 56 µs/rev
netbeans x_revs_x_added_x_copies 1ada3faf6fb6 5a39d12eecf4 : 3 revs, 0.000320 s, 0.000241 s, -0.000079 s, × 0.7531, 80 µs/rev
netbeans x_revs_x00_added_x_copies 35be93ba1e2c 9eec5e90c05f : 9 revs, 0.001339 s, 0.000729 s, -0.000610 s, × 0.5444, 81 µs/rev
netbeans x000_revs_xx00_added_0_copies eac3045b4fdd 51d4ae7f1290 : 1421 revs, 0.015694 s, 0.010198 s, -0.005496 s, × 0.6498, 7 µs/rev
netbeans x000_revs_x000_added_x_copies e2063d266acd 6081d72689dc : 1533 revs, 0.018457 s, 0.015312 s, -0.003145 s, × 0.8296, 9 µs/rev
netbeans x000_revs_x000_added_x000_copies ff453e9fee32 411350406ec2 : 5750 revs, 0.111691 s, 0.060517 s, -0.051174 s, × 0.5418, 10 µs/rev
netbeans x0000_revs_xx000_added_x000_copies 588c2d1ced70 1aad62e59ddd : 67005 revs, 1.166017 s, 0.611102 s, -0.554915 s, × 0.5241, 9 µs/rev
mozilla-central x_revs_x_added_0_copies 3697f962bb7b 7015fcdd43a2 : 2 revs, 0.000197 s, 0.000164 s, -0.000033 s, × 0.8325, 82 µs/rev
mozilla-central x_revs_x000_added_0_copies dd390860c6c9 40d0c5bed75d : 8 revs, 0.000626 s, 0.000334 s, -0.000292 s, × 0.5335, 41 µs/rev
mozilla-central x_revs_x_added_x_copies 8d198483ae3b 14207ffc2b2f : 9 revs, 0.000303 s, 0.000463 s, +0.000160 s, × 1.5281, 51 µs/rev
mozilla-central x_revs_x00_added_x_copies 98cbc58cc6bc 446a150332c3 : 7 revs, 0.001679 s, 0.000730 s, -0.000949 s, × 0.4348, 104 µs/rev
mozilla-central x_revs_x000_added_x000_copies 3c684b4b8f68 0a5e72d1b479 : 3 revs, 0.006947 s, 0.003522 s, -0.003425 s, × 0.5070, 1174 µs/rev
mozilla-central x_revs_x0000_added_x0000_copies effb563bb7e5 c07a39dc4e80 : 6 revs, 0.133070 s, 0.072518 s, -0.060552 s, × 0.5450, 12086 µs/rev
mozilla-central x000_revs_xx00_added_0_copies 6100d773079a 04a55431795e : 1593 revs, 0.008705 s, 0.005760 s, -0.002945 s, × 0.6617, 3 µs/rev
mozilla-central x000_revs_x000_added_x_copies 9f17a6fc04f9 2d37b966abed : 8315 revs, 0.005913 s, 0.005720 s, -0.000193 s, × 0.9674, 0 µs/rev
mozilla-central x000_revs_x000_added_x000_copies 7c97034feb78 4407bd0c6330 : 7839 revs, 0.101373 s, 0.063310 s, -0.038063 s, × 0.6245, 8 µs/rev
mozilla-central x0000_revs_xx000_added_0_copies 9eec5917337d 67118cc6dcad : 45299 revs, 0.046526 s, 0.043608 s, -0.002918 s, × 0.9373, 0 µs/rev
mozilla-central x0000_revs_xx000_added_x000_copies f78c615a656c 96a38b690156 : 30263 revs, 0.313954 s, 0.204831 s, -0.109123 s, × 0.6524, 6 µs/rev
mozilla-central x00000_revs_x0000_added_x0000_copies 6832ae71433c 4c222a1d9a00 : 153721 revs, 3.367395 s, 2.161906 s, -1.205489 s, × 0.6420, 14 µs/rev
mozilla-central x00000_revs_x00000_added_x000_copies 76caed42cf7c 1daa622bbe42 : 210546 revs, 4.691820 s, 3.291831 s, -1.399989 s, × 0.7016, 15 µs/rev
mozilla-try x_revs_x_added_0_copies aaf6dde0deb8 9790f499805a : 2 revs, 0.001199 s, 0.001213 s, +0.000014 s, × 1.0117, 606 µs/rev
mozilla-try x_revs_x000_added_0_copies d8d0222927b4 5bb8ce8c7450 : 2 revs, 0.001216 s, 0.001225 s, +0.000009 s, × 1.0074, 612 µs/rev
mozilla-try x_revs_x_added_x_copies 092fcca11bdb 936255a0384a : 4 revs, 0.000613 s, 0.000564 s, -0.000049 s, × 0.9201, 141 µs/rev
mozilla-try x_revs_x00_added_x_copies b53d2fadbdb5 017afae788ec : 2 revs, 0.001906 s, 0.001549 s, -0.000357 s, × 0.8127, 774 µs/rev
mozilla-try x_revs_x000_added_x000_copies 20408ad61ce5 6f0ee96e21ad : 1 revs, 0.092766 s, 0.035918 s, -0.056848 s, × 0.3872, 35918 µs/rev
mozilla-try x_revs_x0000_added_x0000_copies effb563bb7e5 c07a39dc4e80 : 6 revs, 0.136074 s, 0.073788 s, -0.062286 s, × 0.5423, 12298 µs/rev
mozilla-try x000_revs_xx00_added_0_copies 6100d773079a 04a55431795e : 1593 revs, 0.009067 s, 0.006151 s, -0.002916 s, × 0.6784, 3 µs/rev
mozilla-try x000_revs_x000_added_x_copies 9f17a6fc04f9 2d37b966abed : 8315 revs, 0.006243 s, 0.006165 s, -0.000078 s, × 0.9875, 0 µs/rev
mozilla-try x000_revs_x000_added_x000_copies 1346fd0130e4 4c65cbdabc1f : 6657 revs, 0.114463 s, 0.065421 s, -0.049042 s, × 0.5715, 9 µs/rev
mozilla-try x0000_revs_x_added_0_copies 63519bfd42ee a36a2a865d92 : 40314 revs, 0.433683 s, 0.313749 s, -0.119934 s, × 0.7235, 7 µs/rev
mozilla-try x0000_revs_x_added_x_copies 9fe69ff0762d bcabf2a78927 : 38690 revs, 0.411278 s, 0.297867 s, -0.113411 s, × 0.7242, 7 µs/rev
mozilla-try x0000_revs_xx000_added_x_copies 156f6e2674f2 4d0f2c178e66 : 54487 revs, 0.155133 s, 0.111300 s, -0.043833 s, × 0.7174, 2 µs/rev
mozilla-try x0000_revs_xx000_added_0_copies 9eec5917337d 67118cc6dcad : 45299 revs, 0.048933 s, 0.046202 s, -0.002731 s, × 0.9442, 1 µs/rev
mozilla-try x0000_revs_xx000_added_x000_copies 89294cd501d9 7ccb2fc7ccb5 : 97052 revs, 8.100385 s, 1.999640 s, -6.100745 s, × 0.2469, 20 µs/rev
mozilla-try x0000_revs_x0000_added_x0000_copies e928c65095ed e951f4ad123a : 52031 revs, 1.446720 s, 0.809134 s, -0.637586 s, × 0.5593, 15 µs/rev
mozilla-try x00000_revs_x_added_0_copies 6a320851d377 1ebb79acd503 : 363753 revs, killed , 47.406785 s, , , 130 µs/rev
mozilla-try x00000_revs_x00000_added_0_copies dc8a3ca7010e d16fde900c9c : 444327 revs, 1.369537 s, 0.996219 s, -0.373318 s, × 0.7274, 2 µs/rev
mozilla-try x00000_revs_x_added_x_copies 5173c4b6f97c 95d83ee7242d : 362229 revs, killed , 47.273399 s, , , 130 µs/rev
mozilla-try x00000_revs_x000_added_x_copies 9126823d0e9c ca82787bb23c : 359344 revs, killed , 47.419099 s, , , 131 µs/rev
mozilla-try x00000_revs_x0000_added_x0000_copies 8d3fafa80d4b eb884023b810 : 192665 revs, 5.186079 s, 3.512653 s, -1.673426 s, × 0.6773, 18 µs/rev
mozilla-try x00000_revs_x00000_added_x0000_copies 1b661134e2ca 1ae03d022d6d : 237259 revs, killed , 44.459049 s, , , 187 µs/rev
mozilla-try x00000_revs_x00000_added_x000_copies 9b2a99adc05e 8e29777b48e6 : 391148 revs, killed , 52.837926 s, , , 135 µs/rev
This revision compared to the python code:
==========================================
Repo Case Source-Rev Dest-Rev # of revisions Python-Time Rust-Time Difference Factor time per rev
--------------------------------------------------------------------------------------------------------------------------------------------------------------
mercurial x_revs_x_added_0_copies ad6b123de1c7 39cfcef4f463 : 1 revs, 0.000044 s, 0.000049 s, +0.000005 s, × 1.1136, 49 µs/rev
mercurial x_revs_x_added_x_copies 2b1c78674230 0c1d10351869 : 6 revs, 0.000138 s, 0.000114 s, -0.000024 s, × 0.8261, 19 µs/rev
mercurial x000_revs_x000_added_x_copies 81f8ff2a9bf2 dd3267698d84 : 1032 revs, 0.005052 s, 0.004223 s, -0.000829 s, × 0.8359, 4 µs/rev
pypy x_revs_x_added_0_copies aed021ee8ae8 099ed31b181b : 9 revs, 0.000219 s, 0.000305 s, +0.000086 s, × 1.3927, 33 µs/rev
pypy x_revs_x000_added_0_copies 4aa4e1f8e19a 359343b9ac0e : 1 revs, 0.000055 s, 0.000060 s, +0.000005 s, × 1.0909, 60 µs/rev
pypy x_revs_x_added_x_copies ac52eb7bbbb0 72e022663155 : 7 revs, 0.000128 s, 0.000173 s, +0.000045 s, × 1.3516, 24 µs/rev
pypy x_revs_x00_added_x_copies c3b14617fbd7 ace7255d9a26 : 1 revs, 0.001089 s, 0.000446 s, -0.000643 s, × 0.4096, 446 µs/rev
pypy x_revs_x000_added_x000_copies df6f7a526b60 a83dc6a2d56f : 6 revs, 0.017407 s, 0.010360 s, -0.007047 s, × 0.5952, 1726 µs/rev
pypy x000_revs_xx00_added_0_copies 89a76aede314 2f22446ff07e : 4785 revs, 0.094175 s, 0.048002 s, -0.046173 s, × 0.5097, 10 µs/rev
pypy x000_revs_x000_added_x_copies 8a3b5bfd266e 2c68e87c3efe : 6780 revs, 0.238009 s, 0.075705 s, -0.162304 s, × 0.3181, 11 µs/rev
pypy x000_revs_x000_added_x000_copies 89a76aede314 7b3dda341c84 : 5441 revs, 0.125876 s, 0.056705 s, -0.069171 s, × 0.4505, 10 µs/rev
pypy x0000_revs_x_added_0_copies d1defd0dc478 c9cb1334cc78 : 43646 revs, 3.581556 s, 0.794685 s, -2.786871 s, × 0.2219, 18 µs/rev
pypy x0000_revs_xx000_added_0_copies bf2c629d0071 4ffed77c095c : 26389 revs, 0.016721 s, 0.020209 s, +0.003488 s, × 1.2086, 0 µs/rev
pypy x0000_revs_xx000_added_x000_copies 08ea3258278e d9fa043f30c0 : 11316 revs, 0.242367 s, 0.122475 s, -0.119892 s, × 0.5053, 10 µs/rev
netbeans x_revs_x_added_0_copies fb0955ffcbcd a01e9239f9e7 : 2 revs, 0.000165 s, 0.000142 s, -0.000023 s, × 0.8606, 71 µs/rev
netbeans x_revs_x000_added_0_copies 6f360122949f 20eb231cc7d0 : 2 revs, 0.000114 s, 0.000113 s, -0.000001 s, × 0.9912, 56 µs/rev
netbeans x_revs_x_added_x_copies 1ada3faf6fb6 5a39d12eecf4 : 3 revs, 0.000296 s, 0.000241 s, -0.000055 s, × 0.8142, 80 µs/rev
netbeans x_revs_x00_added_x_copies 35be93ba1e2c 9eec5e90c05f : 9 revs, 0.001124 s, 0.000729 s, -0.000395 s, × 0.6486, 81 µs/rev
netbeans x000_revs_xx00_added_0_copies eac3045b4fdd 51d4ae7f1290 : 1421 revs, 0.013060 s, 0.010198 s, -0.002862 s, × 0.7809, 7 µs/rev
netbeans x000_revs_x000_added_x_copies e2063d266acd 6081d72689dc : 1533 revs, 0.017112 s, 0.015312 s, -0.001800 s, × 0.8948, 9 µs/rev
netbeans x000_revs_x000_added_x000_copies ff453e9fee32 411350406ec2 : 5750 revs, 0.660350 s, 0.060517 s, -0.599833 s, × 0.0916, 10 µs/rev
netbeans x0000_revs_xx000_added_x000_copies 588c2d1ced70 1aad62e59ddd : 67005 revs, 10.032499 s, 0.611102 s, -9.421397 s, × 0.0609, 9 µs/rev
mozilla-central x_revs_x_added_0_copies 3697f962bb7b 7015fcdd43a2 : 2 revs, 0.000189 s, 0.000164 s, -0.000025 s, × 0.8677, 82 µs/rev
mozilla-central x_revs_x000_added_0_copies dd390860c6c9 40d0c5bed75d : 8 revs, 0.000462 s, 0.000334 s, -0.000128 s, × 0.7229, 41 µs/rev
mozilla-central x_revs_x_added_x_copies 8d198483ae3b 14207ffc2b2f : 9 revs, 0.000270 s, 0.000463 s, +0.000193 s, × 1.7148, 51 µs/rev
mozilla-central x_revs_x00_added_x_copies 98cbc58cc6bc 446a150332c3 : 7 revs, 0.001474 s, 0.000730 s, -0.000744 s, × 0.4953, 104 µs/rev
mozilla-central x_revs_x000_added_x000_copies 3c684b4b8f68 0a5e72d1b479 : 3 revs, 0.004806 s, 0.003522 s, -0.001284 s, × 0.7328, 1174 µs/rev
mozilla-central x_revs_x0000_added_x0000_copies effb563bb7e5 c07a39dc4e80 : 6 revs, 0.085150 s, 0.072518 s, -0.012632 s, × 0.8517, 12086 µs/rev
mozilla-central x000_revs_xx00_added_0_copies 6100d773079a 04a55431795e : 1593 revs, 0.007064 s, 0.005760 s, -0.001304 s, × 0.8154, 3 µs/rev
mozilla-central x000_revs_x000_added_x_copies 9f17a6fc04f9 2d37b966abed : 8315 revs, 0.004741 s, 0.005720 s, +0.000979 s, × 1.2065, 0 µs/rev
mozilla-central x000_revs_x000_added_x000_copies 7c97034feb78 4407bd0c6330 : 7839 revs, 0.190133 s, 0.063310 s, -0.126823 s, × 0.3330, 8 µs/rev
mozilla-central x0000_revs_xx000_added_0_copies 9eec5917337d 67118cc6dcad : 45299 revs, 0.035651 s, 0.043608 s, +0.007957 s, × 1.2232, 0 µs/rev
mozilla-central x0000_revs_xx000_added_x000_copies f78c615a656c 96a38b690156 : 30263 revs, 0.440694 s, 0.204831 s, -0.235863 s, × 0.4648, 6 µs/rev
mozilla-central x00000_revs_x0000_added_x0000_copies 6832ae71433c 4c222a1d9a00 : 153721 revs, 18.454163 s, 2.161906 s, -16.292257 s, × 0.1172, 14 µs/rev
mozilla-central x00000_revs_x00000_added_x000_copies 76caed42cf7c 1daa622bbe42 : 210546 revs, 31.562719 s, 3.291831 s, -28.270888 s, × 0.1043, 15 µs/rev
mozilla-try x_revs_x_added_0_copies aaf6dde0deb8 9790f499805a : 2 revs, 0.001189 s, 0.001213 s, +0.000024 s, × 1.0202, 606 µs/rev
mozilla-try x_revs_x000_added_0_copies d8d0222927b4 5bb8ce8c7450 : 2 revs, 0.001204 s, 0.001225 s, +0.000021 s, × 1.0174, 612 µs/rev
mozilla-try x_revs_x_added_x_copies 092fcca11bdb 936255a0384a : 4 revs, 0.000586 s, 0.000564 s, -0.000022 s, × 0.9625, 141 µs/rev
mozilla-try x_revs_x00_added_x_copies b53d2fadbdb5 017afae788ec : 2 revs, 0.001845 s, 0.001549 s, -0.000296 s, × 0.8396, 774 µs/rev
mozilla-try x_revs_x000_added_x000_copies 20408ad61ce5 6f0ee96e21ad : 1 revs, 0.063822 s, 0.035918 s, -0.027904 s, × 0.5628, 35918 µs/rev
mozilla-try x_revs_x0000_added_x0000_copies effb563bb7e5 c07a39dc4e80 : 6 revs, 0.088038 s, 0.073788 s, -0.014250 s, × 0.8381, 12298 µs/rev
mozilla-try x000_revs_xx00_added_0_copies 6100d773079a 04a55431795e : 1593 revs, 0.007389 s, 0.006151 s, -0.001238 s, × 0.8325, 3 µs/rev
mozilla-try x000_revs_x000_added_x_copies 9f17a6fc04f9 2d37b966abed : 8315 revs, 0.004868 s, 0.006165 s, +0.001297 s, × 1.2664, 0 µs/rev
mozilla-try x000_revs_x000_added_x000_copies 1346fd0130e4 4c65cbdabc1f : 6657 revs, 0.222450 s, 0.065421 s, -0.157029 s, × 0.2941, 9 µs/rev
mozilla-try x0000_revs_x_added_0_copies 63519bfd42ee a36a2a865d92 : 40314 revs, 0.370675 s, 0.313749 s, -0.056926 s, × 0.8464, 7 µs/rev
mozilla-try x0000_revs_x_added_x_copies 9fe69ff0762d bcabf2a78927 : 38690 revs, 0.358020 s, 0.297867 s, -0.060153 s, × 0.8320, 7 µs/rev
mozilla-try x0000_revs_xx000_added_x_copies 156f6e2674f2 4d0f2c178e66 : 54487 revs, 0.145235 s, 0.111300 s, -0.033935 s, × 0.7663, 2 µs/rev
mozilla-try x0000_revs_xx000_added_0_copies 9eec5917337d 67118cc6dcad : 45299 revs, 0.037606 s, 0.046202 s, +0.008596 s, × 1.2286, 1 µs/rev
mozilla-try x0000_revs_xx000_added_x000_copies 89294cd501d9 7ccb2fc7ccb5 : 97052 revs, 7.382439 s, 1.999640 s, -5.382799 s, × 0.2709, 20 µs/rev
mozilla-try x0000_revs_x0000_added_x0000_copies e928c65095ed e951f4ad123a : 52031 revs, 7.273506 s, 0.809134 s, -6.464372 s, × 0.1112, 15 µs/rev
mozilla-try x00000_revs_x_added_0_copies 6a320851d377 1ebb79acd503 : 363753 revs, killed , 47.406785 s, , , 130 µs/rev
mozilla-try x00000_revs_x00000_added_0_copies dc8a3ca7010e d16fde900c9c : 444327 revs, 1.074593 s, 0.996219 s, -0.078374 s, × 0.9271, 2 µs/rev
mozilla-try x00000_revs_x_added_x_copies 5173c4b6f97c 95d83ee7242d : 362229 revs, killed , 47.273399 s, , , 130 µs/rev
mozilla-try x00000_revs_x000_added_x_copies 9126823d0e9c ca82787bb23c : 359344 revs, killed , 47.419099 s, , , 131 µs/rev
mozilla-try x00000_revs_x0000_added_x0000_copies 8d3fafa80d4b eb884023b810 : 192665 revs, 27.746195 s, 3.512653 s, -24.233542 s, × 0.1266, 18 µs/rev
mozilla-try x00000_revs_x00000_added_x0000_copies 1b661134e2ca 1ae03d022d6d : 237259 revs, killed , 44.459049 s, , , 187 µs/rev
mozilla-try x00000_revs_x00000_added_x000_copies 9b2a99adc05e 8e29777b48e6 : 391148 revs, killed , 52.837926 s, , , 135 µs/rev
This revision compared to the filelog algorithm:
================================================
Repo Case Source-Rev Dest-Rev # of revisions filelog sidedata Difference Factor time per rev
--------------------------------------------------------------------------------------------------------------------------------------------------------------
mercurial x_revs_x_added_0_copies ad6b123de1c7 39cfcef4f463 : 1 revs, 0.000906 s, 0.000049 s, -0.000857 s, × 0.0540, 48 µs/rev
mercurial x_revs_x_added_x_copies 2b1c78674230 0c1d10351869 : 6 revs, 0.001844 s, 0.000114 s, -0.001730 s, × 0.0618, 18 µs/rev
mercurial x000_revs_x000_added_x_copies 81f8ff2a9bf2 dd3267698d84 : 1032 revs, 0.018577 s, 0.004223 s, -0.014354 s, × 0.2273, 4 µs/rev
pypy x_revs_x_added_0_copies aed021ee8ae8 099ed31b181b : 9 revs, 0.005009 s, 0.000305 s, -0.004704 s, × 0.0608, 33 µs/rev
pypy x_revs_x000_added_0_copies 4aa4e1f8e19a 359343b9ac0e : 1 revs, 0.209606 s, 0.000060 s, -0.209546 s, × 0.0002, 59 µs/rev
pypy x_revs_x_added_x_copies ac52eb7bbbb0 72e022663155 : 7 revs, 0.017008 s, 0.000173 s, -0.016835 s, × 0.0101, 24 µs/rev
pypy x_revs_x00_added_x_copies c3b14617fbd7 ace7255d9a26 : 1 revs, 0.019227 s, 0.000446 s, -0.018781 s, × 0.0231, 445 µs/rev
pypy x_revs_x000_added_x000_copies df6f7a526b60 a83dc6a2d56f : 6 revs, 0.765782 s, 0.010360 s, -0.755422 s, × 0.0135, 1726 µs/rev
pypy x000_revs_xx00_added_0_copies 89a76aede314 2f22446ff07e : 4785 revs, 1.186068 s, 0.048002 s, -1.138066 s, × 0.0404, 10 µs/rev
pypy x000_revs_x000_added_x_copies 8a3b5bfd266e 2c68e87c3efe : 6780 revs, 1.266745 s, 0.075705 s, -1.191040 s, × 0.0597, 11 µs/rev
pypy x000_revs_x000_added_x000_copies 89a76aede314 7b3dda341c84 : 5441 revs, 1.666389 s, 0.056705 s, -1.609684 s, × 0.0340, 10 µs/rev
pypy x0000_revs_x_added_0_copies d1defd0dc478 c9cb1334cc78 : 43646 revs, 0.001070 s, 0.794685 s, +0.793615 s, × 742.69, 18 µs/rev
pypy x0000_revs_xx000_added_0_copies bf2c629d0071 4ffed77c095c : 26389 revs, 1.076269 s, 0.020209 s, -1.056060 s, × 0.0187, 0 µs/rev
pypy x0000_revs_xx000_added_x000_copies 08ea3258278e d9fa043f30c0 : 11316 revs, 1.355085 s, 0.122475 s, -1.232610 s, × 0.0903, 10 µs/rev
netbeans x_revs_x_added_0_copies fb0955ffcbcd a01e9239f9e7 : 2 revs, 0.028551 s, 0.000142 s, -0.028409 s, × 0.0049, 70 µs/rev
netbeans x_revs_x000_added_0_copies 6f360122949f 20eb231cc7d0 : 2 revs, 0.157319 s, 0.000113 s, -0.157206 s, × 0.0007, 56 µs/rev
netbeans x_revs_x_added_x_copies 1ada3faf6fb6 5a39d12eecf4 : 3 revs, 0.025722 s, 0.000241 s, -0.025481 s, × 0.0093, 80 µs/rev
netbeans x_revs_x00_added_x_copies 35be93ba1e2c 9eec5e90c05f : 9 revs, 0.053374 s, 0.000729 s, -0.052645 s, × 0.0136, 80 µs/rev
netbeans x000_revs_xx00_added_0_copies eac3045b4fdd 51d4ae7f1290 : 1421 revs, 0.038146 s, 0.010198 s, -0.027948 s, × 0.2673, 7 µs/rev
netbeans x000_revs_x000_added_x_copies e2063d266acd 6081d72689dc : 1533 revs, 0.229215 s, 0.015312 s, -0.213903 s, × 0.0668, 9 µs/rev
netbeans x000_revs_x000_added_x000_copies ff453e9fee32 411350406ec2 : 5750 revs, 0.974484 s, 0.060517 s, -0.913967 s, × 0.0621, 10 µs/rev
netbeans x0000_revs_xx000_added_x000_copies 588c2d1ced70 1aad62e59ddd : 67005 revs, 3.924308 s, 0.611102 s, -3.313206 s, × 0.1557, 9 µs/rev
mozilla-central x_revs_x_added_0_copies 3697f962bb7b 7015fcdd43a2 : 2 revs, 0.035563 s, 0.000164 s, -0.035399 s, × 0.0046, 81 µs/rev
mozilla-central x_revs_x000_added_0_copies dd390860c6c9 40d0c5bed75d : 8 revs, 0.145766 s, 0.000334 s, -0.145432 s, × 0.0022, 41 µs/rev
mozilla-central x_revs_x_added_x_copies 8d198483ae3b 14207ffc2b2f : 9 revs, 0.026283 s, 0.000463 s, -0.025820 s, × 0.0176, 51 µs/rev
mozilla-central x_revs_x00_added_x_copies 98cbc58cc6bc 446a150332c3 : 7 revs, 0.087403 s, 0.000730 s, -0.086673 s, × 0.0083, 104 µs/rev
mozilla-central x_revs_x000_added_x000_copies 3c684b4b8f68 0a5e72d1b479 : 3 revs, 0.209484 s, 0.003522 s, -0.205962 s, × 0.0168, 1173 µs/rev
mozilla-central x_revs_x0000_added_x0000_copies effb563bb7e5 c07a39dc4e80 : 6 revs, 2.197867 s, 0.072518 s, -2.125349 s, × 0.0329, 12084 µs/rev
mozilla-central x000_revs_xx00_added_0_copies 6100d773079a 04a55431795e : 1593 revs, 0.090142 s, 0.005760 s, -0.084382 s, × 0.0638, 3 µs/rev
mozilla-central x000_revs_x000_added_x_copies 9f17a6fc04f9 2d37b966abed : 8315 revs, 0.742658 s, 0.005720 s, -0.736938 s, × 0.0077, 0 µs/rev
mozilla-central x000_revs_x000_added_x000_copies 7c97034feb78 4407bd0c6330 : 7839 revs, 1.166159 s, 0.063310 s, -1.102849 s, × 0.0542, 8 µs/rev
mozilla-central x0000_revs_xx000_added_0_copies 9eec5917337d 67118cc6dcad : 45299 revs, 6.721719 s, 0.043608 s, -6.678111 s, × 0.0064, 0 µs/rev
mozilla-central x0000_revs_xx000_added_x000_copies f78c615a656c 96a38b690156 : 30263 revs, 3.356523 s, 0.204831 s, -3.151692 s, × 0.0610, 6 µs/rev
mozilla-central x00000_revs_x0000_added_x0000_copies 6832ae71433c 4c222a1d9a00 : 153721 revs, 15.880822 s, 2.161906 s, -13.718916 s, × 0.1361, 14 µs/rev
mozilla-central x00000_revs_x00000_added_x000_copies 76caed42cf7c 1daa622bbe42 : 210546 revs, 20.781275 s, 3.291831 s, -17.489444 s, × 0.1584, 15 µs/rev
mozilla-try x_revs_x_added_0_copies aaf6dde0deb8 9790f499805a : 2 revs, 0.084165 s, 0.001213 s, -0.082952 s, × 0.0144, 606 µs/rev
mozilla-try x_revs_x000_added_0_copies d8d0222927b4 5bb8ce8c7450 : 2 revs, 0.503744 s, 0.001225 s, -0.502519 s, × 0.0024, 612 µs/rev
mozilla-try x_revs_x_added_x_copies 092fcca11bdb 936255a0384a : 4 revs, 0.021545 s, 0.000564 s, -0.020981 s, × 0.0261, 140 µs/rev
mozilla-try x_revs_x00_added_x_copies b53d2fadbdb5 017afae788ec : 2 revs, 0.240699 s, 0.001549 s, -0.239150 s, × 0.0064, 774 µs/rev
mozilla-try x_revs_x000_added_x000_copies 20408ad61ce5 6f0ee96e21ad : 1 revs, 1.100682 s, 0.035918 s, -1.064764 s, × 0.0326, 35882 µs/rev
mozilla-try x_revs_x0000_added_x0000_copies effb563bb7e5 c07a39dc4e80 : 6 revs, 2.234809 s, 0.073788 s, -2.161021 s, × 0.0330, 12295 µs/rev
mozilla-try x000_revs_xx00_added_0_copies 6100d773079a 04a55431795e : 1593 revs, 0.091222 s, 0.006151 s, -0.085071 s, × 0.0674, 3 µs/rev
mozilla-try x000_revs_x000_added_x_copies 9f17a6fc04f9 2d37b966abed : 8315 revs, 0.764722 s, 0.006165 s, -0.758557 s, × 0.0080, 0 µs/rev
mozilla-try x000_revs_x000_added_x000_copies 1346fd0130e4 4c65cbdabc1f : 6657 revs, 1.185655 s, 0.065421 s, -1.120234 s, × 0.0551, 9 µs/rev
mozilla-try x0000_revs_x_added_0_copies 63519bfd42ee a36a2a865d92 : 40314 revs, 0.089736 s, 0.313749 s, +0.224013 s, × 3.4963, 7 µs/rev
mozilla-try x0000_revs_x_added_x_copies 9fe69ff0762d bcabf2a78927 : 38690 revs, 0.084132 s, 0.297867 s, +0.213735 s, × 3.5404, 7 µs/rev
mozilla-try x0000_revs_xx000_added_x_copies 156f6e2674f2 4d0f2c178e66 : 54487 revs, 7.581932 s, 0.111300 s, -7.470632 s, × 0.0146, 2 µs/rev
mozilla-try x0000_revs_xx000_added_0_copies 9eec5917337d 67118cc6dcad : 45299 revs, 6.671144 s, 0.046202 s, -6.624942 s, × 0.0069, 1 µs/rev
mozilla-try x0000_revs_xx000_added_x000_copies 89294cd501d9 7ccb2fc7ccb5 : 97052 revs, 7.674771 s, 1.999640 s, -5.675131 s, × 0.2605, 20 µs/rev
mozilla-try x0000_revs_x0000_added_x0000_copies e928c65095ed e951f4ad123a : 52031 revs, 9.870343 s, 0.809134 s, -9.061209 s, × 0.0819, 15 µs/rev
mozilla-try x00000_revs_x_added_0_copies 6a320851d377 1ebb79acd503 : 363753 revs, 0.094781 s, 47.406785 s, +47.312004 s, × 500.17, 130 µs/rev
mozilla-try x00000_revs_x00000_added_0_copies dc8a3ca7010e d16fde900c9c : 444327 revs, 26.690029 s, 0.996219 s, -25.693810 s, × 0.0373, 2 µs/rev
mozilla-try x00000_revs_x_added_x_copies 5173c4b6f97c 95d83ee7242d : 362229 revs, 0.094941 s, 47.273399 s, +47.178458 s, × 497.92, 130 µs/rev
mozilla-try x00000_revs_x000_added_x_copies 9126823d0e9c ca82787bb23c : 359344 revs, 0.233811 s, 47.419099 s, +47.185288 s, × 202.80, 131 µs/rev
mozilla-try x00000_revs_x0000_added_x0000_copies 8d3fafa80d4b eb884023b810 : 192665 revs, 19.321750 s, 3.512653 s, -15.809097 s, × 0.1817, 18 µs/rev
mozilla-try x00000_revs_x00000_added_x0000_copies 1b661134e2ca 1ae03d022d6d : 237259 revs, 21.358350 s, 44.459049 s, +23.100699 s, × 2.0815, 187 µs/rev
mozilla-try x00000_revs_x00000_added_x000_copies 9b2a99adc05e 8e29777b48e6 : 391148 revs, 25.328737 s, 52.837926 s, +27.509189 s, × 2.0860, 135 µs/rev
Differential Revision: https://phab.mercurial-scm.org/D9307
author:   Pierre-Yves David <pierre-yves.david@octobus.net>
date:     Thu, 12 Nov 2020 15:54:10 +0100
parents:  89a2afe31e82
children: 14ff4929ca8c
# Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import collections
import contextlib

from .i18n import _
from .node import (
    hex,
    nullid,
)
from . import (
    discovery,
    encoding,
    error,
    match as matchmod,
    narrowspec,
    pycompat,
    streamclone,
    templatefilters,
    util,
    wireprotoframing,
    wireprototypes,
)
from .interfaces import util as interfaceutil
from .utils import (
    cborutil,
    hashutil,
    stringutil,
)

FRAMINGTYPE = b'application/mercurial-exp-framing-0006'

HTTP_WIREPROTO_V2 = wireprototypes.HTTP_WIREPROTO_V2

COMMANDS = wireprototypes.commanddict()

# Value inserted into cache key computation function. Change the value to
# force new cache keys for every command request. This should be done when
# there is a change to how caching works, etc.
GLOBAL_CACHE_VERSION = 1


def handlehttpv2request(rctx, req, res, checkperm, urlparts):
    from .hgweb import common as hgwebcommon

    # URL space looks like: <permissions>/<command>, where <permission> can
    # be ``ro`` or ``rw`` to signal read-only or read-write, respectively.

    # Root URL does nothing meaningful... yet.
    if not urlparts:
        res.status = b'200 OK'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(_(b'HTTP version 2 API handler'))
        return

    if len(urlparts) == 1:
        res.status = b'404 Not Found'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(
            _(b'do not know how to process %s\n') % req.dispatchpath
        )
        return

    permission, command = urlparts[0:2]

    if permission not in (b'ro', b'rw'):
        res.status = b'404 Not Found'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(_(b'unknown permission: %s') % permission)
        return

    if req.method != b'POST':
        res.status = b'405 Method Not Allowed'
        res.headers[b'Allow'] = b'POST'
        res.setbodybytes(_(b'commands require POST requests'))
        return

    # At some point we'll want to use our own API instead of recycling the
    # behavior of version 1 of the wire protocol...
    # TODO return reasonable responses - not responses that overload the
    # HTTP status line message for error reporting.
    try:
        checkperm(rctx, req, b'pull' if permission == b'ro' else b'push')
    except hgwebcommon.ErrorResponse as e:
        res.status = hgwebcommon.statusmessage(e.code, pycompat.bytestr(e))
        for k, v in e.headers:
            res.headers[k] = v
        res.setbodybytes(b'permission denied')
        return

    # We have a special endpoint to reflect the request back at the client.
    if command == b'debugreflect':
        _processhttpv2reflectrequest(rctx.repo.ui, rctx.repo, req, res)
        return

    # Extra commands that we handle that aren't really wire protocol
    # commands. Think extra hard before making this hackery available to
    # extensions.
    extracommands = {b'multirequest'}

    if command not in COMMANDS and command not in extracommands:
        res.status = b'404 Not Found'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(_(b'unknown wire protocol command: %s\n') % command)
        return

    repo = rctx.repo
    ui = repo.ui

    proto = httpv2protocolhandler(req, ui)

    if (
        not COMMANDS.commandavailable(command, proto)
        and command not in extracommands
    ):
        res.status = b'404 Not Found'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(_(b'invalid wire protocol command: %s') % command)
        return

    # TODO consider cases where proxies may add additional Accept headers.
    if req.headers.get(b'Accept') != FRAMINGTYPE:
        res.status = b'406 Not Acceptable'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(
            _(b'client MUST specify Accept header with value: %s\n')
            % FRAMINGTYPE
        )
        return

    if req.headers.get(b'Content-Type') != FRAMINGTYPE:
        res.status = b'415 Unsupported Media Type'
        # TODO we should send a response with appropriate media type,
        # since client does Accept it.
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(
            _(b'client MUST send Content-Type header with value: %s\n')
            % FRAMINGTYPE
        )
        return

    _processhttpv2request(ui, repo, req, res, permission, command, proto)


def _processhttpv2reflectrequest(ui, repo, req, res):
    """Reads unified frame protocol request and dumps out state to client.

    This special endpoint can be used to help debug the wire protocol.

    Instead of routing the request through the normal dispatch mechanism,
    we instead read all frames, decode them, and feed them into our state
    tracker. We then dump the log of all that activity back out to the
    client.
    """
    # Reflection APIs have a history of being abused, accidentally disclosing
    # sensitive data, etc. So we have a config knob.
    if not ui.configbool(b'experimental', b'web.api.debugreflect'):
        res.status = b'404 Not Found'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(_(b'debugreflect service not available'))
        return

    # We assume we have a unified framing protocol request body.

    reactor = wireprotoframing.serverreactor(ui)
    states = []

    while True:
        frame = wireprotoframing.readframe(req.bodyfh)
        if not frame:
            states.append(b'received: <no frame>')
            break

        states.append(
            b'received: %d %d %d %s'
            % (frame.typeid, frame.flags, frame.requestid, frame.payload)
        )

        action, meta = reactor.onframerecv(frame)
        states.append(templatefilters.json((action, meta)))

    action, meta = reactor.oninputeof()
    meta[b'action'] = action
    states.append(templatefilters.json(meta))

    res.status = b'200 OK'
    res.headers[b'Content-Type'] = b'text/plain'
    res.setbodybytes(b'\n'.join(states))


def _processhttpv2request(ui, repo, req, res, authedperm, reqcommand, proto):
    """Post-validation handler for HTTPv2 requests.

    Called when the HTTP request contains unified frame-based protocol
    frames for evaluation.
    """
    # TODO Some HTTP clients are full duplex and can receive data before
    # the entire request is transmitted. Figure out a way to indicate support
    # for that so we can opt into full duplex mode.
    reactor = wireprotoframing.serverreactor(ui, deferoutput=True)
    seencommand = False

    outstream = None

    while True:
        frame = wireprotoframing.readframe(req.bodyfh)
        if not frame:
            break

        action, meta = reactor.onframerecv(frame)

        if action == b'wantframe':
            # Need more data before we can do anything.
            continue
        elif action == b'runcommand':
            # Defer creating output stream because we need to wait for
            # protocol settings frames so proper encoding can be applied.
            if not outstream:
                outstream = reactor.makeoutputstream()

            sentoutput = _httpv2runcommand(
                ui,
                repo,
                req,
                res,
                authedperm,
                reqcommand,
                reactor,
                outstream,
                meta,
                issubsequent=seencommand,
            )

            if sentoutput:
                return

            seencommand = True

        elif action == b'error':
            # TODO define proper error mechanism.
            res.status = b'200 OK'
            res.headers[b'Content-Type'] = b'text/plain'
            res.setbodybytes(meta[b'message'] + b'\n')
            return
        else:
            raise error.ProgrammingError(
                b'unhandled action from frame processor: %s' % action
            )

    action, meta = reactor.oninputeof()
    if action == b'sendframes':
        # We assume we haven't started sending the response yet. If we're
        # wrong, the response type will raise an exception.
        res.status = b'200 OK'
        res.headers[b'Content-Type'] = FRAMINGTYPE
        res.setbodygen(meta[b'framegen'])
    elif action == b'noop':
        pass
    else:
        raise error.ProgrammingError(
            b'unhandled action from frame processor: %s' % action
        )


def _httpv2runcommand(
    ui,
    repo,
    req,
    res,
    authedperm,
    reqcommand,
    reactor,
    outstream,
    command,
    issubsequent,
):
    """Dispatch a wire protocol command made from HTTPv2 requests.

    The authenticated permission (``authedperm``) along with the original
    command from the URL (``reqcommand``) are passed in.
    """
    # We already validated that the session has permissions to perform the
    # actions in ``authedperm``. In the unified frame protocol, the canonical
    # command to run is expressed in a frame. However, the URL also requested
    # to run a specific command. We need to be careful that the command we
    # run doesn't have permissions requirements greater than what was granted
    # by ``authedperm``.
    #
    # Our rule for this is we only allow one command per HTTP request and
    # that command must match the command in the URL. However, we make
    # an exception for the ``multirequest`` URL. This URL is allowed to
    # execute multiple commands. We double check permissions of each command
    # as it is invoked to ensure there is no privilege escalation.
    # TODO consider allowing multiple commands to regular command URLs
    # iff each command is the same.

    proto = httpv2protocolhandler(req, ui, args=command[b'args'])

    if reqcommand == b'multirequest':
        if not COMMANDS.commandavailable(command[b'command'], proto):
            # TODO proper error mechanism
            res.status = b'200 OK'
            res.headers[b'Content-Type'] = b'text/plain'
            res.setbodybytes(
                _(b'wire protocol command not available: %s')
                % command[b'command']
            )
            return True

        # TODO don't use assert here, since it may be elided by -O.
        assert authedperm in (b'ro', b'rw')
        wirecommand = COMMANDS[command[b'command']]
        assert wirecommand.permission in (b'push', b'pull')

        if authedperm == b'ro' and wirecommand.permission != b'pull':
            # TODO proper error mechanism
            res.status = b'403 Forbidden'
            res.headers[b'Content-Type'] = b'text/plain'
            res.setbodybytes(
                _(b'insufficient permissions to execute command: %s')
                % command[b'command']
            )
            return True

        # TODO should we also call checkperm() here? Maybe not if we're going
        # to overhaul that API. The granted scope from the URL check should
        # be good enough.
    else:
        # Don't allow multiple commands outside of ``multirequest`` URL.
        if issubsequent:
            # TODO proper error mechanism
            res.status = b'200 OK'
            res.headers[b'Content-Type'] = b'text/plain'
            res.setbodybytes(
                _(b'multiple commands cannot be issued to this URL')
            )
            return True

        if reqcommand != command[b'command']:
            # TODO define proper error mechanism
            res.status = b'200 OK'
            res.headers[b'Content-Type'] = b'text/plain'
            res.setbodybytes(_(b'command in frame must match command in URL'))
            return True

    res.status = b'200 OK'
    res.headers[b'Content-Type'] = FRAMINGTYPE

    try:
        objs = dispatch(repo, proto, command[b'command'], command[b'redirect'])

        action, meta = reactor.oncommandresponsereadyobjects(
            outstream, command[b'requestid'], objs
        )

    except error.WireprotoCommandError as e:
        action, meta = reactor.oncommanderror(
            outstream, command[b'requestid'], e.message, e.messageargs
        )

    except Exception as e:
        action, meta = reactor.onservererror(
            outstream,
            command[b'requestid'],
            _(b'exception when invoking command: %s')
            % stringutil.forcebytestr(e),
        )

    if action == b'sendframes':
        res.setbodygen(meta[b'framegen'])
        return True
    elif action == b'noop':
        return False
    else:
        raise error.ProgrammingError(
            b'unhandled event from reactor: %s' % action
        )


def getdispatchrepo(repo, proto, command):
    viewconfig = repo.ui.config(b'server', b'view')
    return repo.filtered(viewconfig)


def dispatch(repo, proto, command, redirect):
    """Run a wire protocol command.

    Returns an iterable of objects that will be sent to the client.
    """
    repo = getdispatchrepo(repo, proto, command)

    entry = COMMANDS[command]
    func = entry.func
    spec = entry.args

    args = proto.getargs(spec)

    # There is some duplicate boilerplate code here for calling the command and
    # emitting objects. It is either that or a lot of indented code that looks
    # like a pyramid (since there are a lot of code paths that result in not
    # using the cacher).
    callcommand = lambda: func(repo, proto, **pycompat.strkwargs(args))

    # Request is not cacheable. Don't bother instantiating a cacher.
    if not entry.cachekeyfn:
        for o in callcommand():
            yield o
        return

    if redirect:
        redirecttargets = redirect[b'targets']
        redirecthashes = redirect[b'hashes']
    else:
        redirecttargets = []
        redirecthashes = []

    cacher = makeresponsecacher(
        repo,
        proto,
        command,
        args,
        cborutil.streamencode,
        redirecttargets=redirecttargets,
        redirecthashes=redirecthashes,
    )

    # But we have no cacher. Do default handling.
    if not cacher:
        for o in callcommand():
            yield o
        return

    with cacher:
        cachekey = entry.cachekeyfn(
            repo, proto, cacher, **pycompat.strkwargs(args)
        )

        # No cache key or the cacher doesn't like it. Do default handling.
        if cachekey is None or not cacher.setcachekey(cachekey):
            for o in callcommand():
                yield o
            return

        # Serve it from the cache, if possible.
        cached = cacher.lookup()

        if cached:
            for o in cached[b'objs']:
                yield o
            return

        # Else call the command and feed its output into the cacher, allowing
        # the cacher to buffer/mutate objects as it desires.
        for o in callcommand():
            for o in cacher.onobject(o):
                yield o

        for o in cacher.onfinished():
            yield o


@interfaceutil.implementer(wireprototypes.baseprotocolhandler)
class httpv2protocolhandler(object):
    def __init__(self, req, ui, args=None):
        self._req = req
        self._ui = ui
        self._args = args

    @property
    def name(self):
        return HTTP_WIREPROTO_V2

    def getargs(self, args):
        # First look for args that were passed but aren't registered on this
        # command.
        extra = set(self._args) - set(args)
        if extra:
            raise error.WireprotoCommandError(
                b'unsupported argument to command: %s'
                % b', '.join(sorted(extra))
            )

        # And look for required arguments that are missing.
        missing = {a for a in args if args[a][b'required']} - set(self._args)
        if missing:
            raise error.WireprotoCommandError(
                b'missing required arguments: %s' % b', '.join(sorted(missing))
            )

        # Now derive the arguments to pass to the command, taking into
        # account the arguments specified by the client.
        data = {}
        for k, meta in sorted(args.items()):
            # This argument wasn't passed by the client.
            if k not in self._args:
                data[k] = meta[b'default']()
                continue

            v = self._args[k]

            # Sets may be expressed as lists. Silently normalize.
            if meta[b'type'] == b'set' and isinstance(v, list):
                v = set(v)

            # TODO consider more/stronger type validation.

            data[k] = v

        return data

    def getprotocaps(self):
        # Protocol capabilities are currently not implemented for HTTP V2.
        return set()

    def getpayload(self):
        raise NotImplementedError

    @contextlib.contextmanager
    def mayberedirectstdio(self):
        raise NotImplementedError

    def client(self):
        raise NotImplementedError

    def addcapabilities(self, repo, caps):
        return caps

    def checkperm(self, perm):
        raise NotImplementedError


def httpv2apidescriptor(req, repo):
    proto = httpv2protocolhandler(req, repo.ui)

    return _capabilitiesv2(repo, proto)


def _capabilitiesv2(repo, proto):
    """Obtain the set of capabilities for version 2 transports.

    These capabilities are distinct from the capabilities for version 1
    transports.
    """
    caps = {
        b'commands': {},
        b'framingmediatypes': [FRAMINGTYPE],
        b'pathfilterprefixes': set(narrowspec.VALID_PREFIXES),
    }

    for command, entry in COMMANDS.items():
        args = {}

        for arg, meta in entry.args.items():
            args[arg] = {
                # TODO should this be a normalized type using CBOR's
                # terminology?
                b'type': meta[b'type'],
                b'required': meta[b'required'],
            }

            if not meta[b'required']:
                args[arg][b'default'] = meta[b'default']()

            if meta[b'validvalues']:
                args[arg][b'validvalues'] = meta[b'validvalues']

        # TODO this type of check should be defined in a per-command callback.
        if (
            command == b'rawstorefiledata'
            and not streamclone.allowservergeneration(repo)
        ):
            continue

        caps[b'commands'][command] = {
            b'args': args,
            b'permissions': [entry.permission],
        }

        if entry.extracapabilitiesfn:
            extracaps = entry.extracapabilitiesfn(repo, proto)
            caps[b'commands'][command].update(extracaps)

    caps[b'rawrepoformats'] = sorted(repo.requirements & repo.supportedformats)

    targets = getadvertisedredirecttargets(repo, proto)
    if targets:
        caps[b'redirect'] = {
            b'targets': [],
            b'hashes': [b'sha256', b'sha1'],
        }

        for target in targets:
            entry = {
                b'name': target[b'name'],
                b'protocol': target[b'protocol'],
                b'uris': target[b'uris'],
            }

            for key in (b'snirequired', b'tlsversions'):
                if key in target:
                    entry[key] = target[key]

            caps[b'redirect'][b'targets'].append(entry)

    return proto.addcapabilities(repo, caps)


def getadvertisedredirecttargets(repo, proto):
    """Obtain a list of content redirect targets.

    Returns a list containing potential redirect targets that will be
    advertised in capabilities data. Each dict MUST have the following
    keys:

    name
       The name of this redirect target. This is the identifier clients
       use to refer to a target. It is transferred as part of every
       command request.

    protocol
       Network protocol used by this target. Typically this is the string
       in front of the ``://`` in a URL. e.g. ``https``.

    uris
       List of representative URIs for this target. Clients can use the
       URIs to test parsing for compatibility or for ordering preference
       for which target to use.

    The following optional keys are recognized:

    snirequired
       Bool indicating if Server Name Indication (SNI) is required to
       connect to this target.

    tlsversions
       List of bytes indicating which TLS versions are supported by this
       target.

    By default, clients reflect the target order advertised by servers
    and servers will use the first client-advertised target when picking
    a redirect target. So targets should be advertised in the order the
    server prefers they be used.
""" return [] def wireprotocommand( name, args=None, permission=b'push', cachekeyfn=None, extracapabilitiesfn=None, ): """Decorator to declare a wire protocol command. ``name`` is the name of the wire protocol command being provided. ``args`` is a dict defining arguments accepted by the command. Keys are the argument name. Values are dicts with the following keys: ``type`` The argument data type. Must be one of the following string literals: ``bytes``, ``int``, ``list``, ``dict``, ``set``, or ``bool``. ``default`` A callable returning the default value for this argument. If not specified, ``None`` will be the default value. ``example`` An example value for this argument. ``validvalues`` Set of recognized values for this argument. ``permission`` defines the permission type needed to run this command. Can be ``push`` or ``pull``. These roughly map to read-write and read-only, respectively. Default is to assume command requires ``push`` permissions because otherwise commands not declaring their permissions could modify a repository that is supposed to be read-only. ``cachekeyfn`` defines an optional callable that can derive the cache key for this request. ``extracapabilitiesfn`` defines an optional callable that defines extra command capabilities/parameters that are advertised next to the command in the capabilities data structure describing the server. The callable receives as arguments the repository and protocol objects. It returns a dict of extra fields to add to the command descriptor. Wire protocol commands are generators of objects to be serialized and sent to the client. If a command raises an uncaught exception, this will be translated into a command error. All commands can opt in to being cacheable by defining a function (``cachekeyfn``) that is called to derive a cache key. 
    This function receives the same arguments as the command itself plus a
    ``cacher`` argument containing the active cacher for the request and
    returns bytes containing the cache key under which the response to this
    command may be cached.
    """
    transports = {
        k for k, v in wireprototypes.TRANSPORTS.items() if v[b'version'] == 2
    }

    if permission not in (b'push', b'pull'):
        raise error.ProgrammingError(
            b'invalid wire protocol permission; '
            b'got %s; expected "push" or "pull"' % permission
        )

    if args is None:
        args = {}

    if not isinstance(args, dict):
        raise error.ProgrammingError(
            b'arguments for version 2 commands must be declared as dicts'
        )

    for arg, meta in args.items():
        if arg == b'*':
            raise error.ProgrammingError(
                b'* argument name not allowed on version 2 commands'
            )

        if not isinstance(meta, dict):
            raise error.ProgrammingError(
                b'arguments for version 2 commands '
                b'must declare metadata as a dict'
            )

        if b'type' not in meta:
            raise error.ProgrammingError(
                b'%s argument for command %s does not '
                b'declare type field' % (arg, name)
            )

        if meta[b'type'] not in (
            b'bytes',
            b'int',
            b'list',
            b'dict',
            b'set',
            b'bool',
        ):
            raise error.ProgrammingError(
                b'%s argument for command %s has '
                b'illegal type: %s' % (arg, name, meta[b'type'])
            )

        if b'example' not in meta:
            raise error.ProgrammingError(
                b'%s argument for command %s does not '
                b'declare example field' % (arg, name)
            )

        meta[b'required'] = b'default' not in meta

        meta.setdefault(b'default', lambda: None)
        meta.setdefault(b'validvalues', None)

    def register(func):
        if name in COMMANDS:
            raise error.ProgrammingError(
                b'%s command already registered for version 2' % name
            )

        COMMANDS[name] = wireprototypes.commandentry(
            func,
            args=args,
            transports=transports,
            permission=permission,
            cachekeyfn=cachekeyfn,
            extracapabilitiesfn=extracapabilitiesfn,
        )

        return func

    return register


def makecommandcachekeyfn(command, localversion=None, allargs=False):
    """Construct a cache key derivation function with common features.
    By default, the cache key is a hash of:

    * The command name.
    * A global cache version number.
    * A local cache version number (passed via ``localversion``).
    * All the arguments passed to the command.
    * The media type used.
    * Wire protocol version string.
    * The repository path.
    """
    if not allargs:
        raise error.ProgrammingError(
            b'only allargs=True is currently supported'
        )

    if localversion is None:
        raise error.ProgrammingError(b'must set localversion argument value')

    def cachekeyfn(repo, proto, cacher, **args):
        spec = COMMANDS[command]

        # Commands that mutate the repo can not be cached.
        if spec.permission == b'push':
            return None

        # TODO config option to disable caching.

        # Our key derivation strategy is to construct a data structure
        # holding everything that could influence cacheability and to hash
        # the CBOR representation of that. Using CBOR seems like it might
        # be overkill. However, simpler hashing mechanisms are prone to
        # duplicate input issues. e.g. if you just concatenate two values,
        # "foo"+"bar" is identical to "fo"+"obar". Using CBOR provides
        # "padding" between values and prevents these problems.

        # Seed the hash with various data.
        state = {
            # To invalidate all cache keys.
            b'globalversion': GLOBAL_CACHE_VERSION,
            # More granular cache key invalidation.
            b'localversion': localversion,
            # Cache keys are segmented by command.
            b'command': command,
            # Throw in the media type and API version strings so changes
            # to exchange semantics invalidate the cache.
            b'mediatype': FRAMINGTYPE,
            b'version': HTTP_WIREPROTO_V2,
            # So same requests for different repos don't share cache keys.
            b'repo': repo.root,
        }

        # The arguments passed to us will have already been normalized.
        # Default values will be set, etc. This is important because it
        # means that it doesn't matter if clients send an explicit argument
        # or rely on the default value: it will all normalize to the same
        # set of arguments on the server and therefore the same cache key.
        #
        # Arguments by their very nature must support being encoded to CBOR.
        # And the CBOR encoder is deterministic. So we hash the arguments
        # by feeding the CBOR of their representation into the hasher.
        if allargs:
            state[b'args'] = pycompat.byteskwargs(args)

        cacher.adjustcachekeystate(state)

        hasher = hashutil.sha1()
        for chunk in cborutil.streamencode(state):
            hasher.update(chunk)

        return pycompat.sysbytes(hasher.hexdigest())

    return cachekeyfn


def makeresponsecacher(
    repo, proto, command, args, objencoderfn, redirecttargets, redirecthashes
):
    """Construct a cacher for a cacheable command.

    Returns an ``iwireprotocolcommandcacher`` instance.

    Extensions can monkeypatch this function to provide custom caching
    backends.
    """
    return None


def resolvenodes(repo, revisions):
    """Resolve nodes from a revisions specifier data structure."""
    cl = repo.changelog
    clhasnode = cl.hasnode

    seen = set()
    nodes = []

    if not isinstance(revisions, list):
        raise error.WireprotoCommandError(
            b'revisions must be defined as an array'
        )

    for spec in revisions:
        if b'type' not in spec:
            raise error.WireprotoCommandError(
                b'type key not present in revision specifier'
            )

        typ = spec[b'type']

        if typ == b'changesetexplicit':
            if b'nodes' not in spec:
                raise error.WireprotoCommandError(
                    b'nodes key not present in changesetexplicit revision '
                    b'specifier'
                )

            for node in spec[b'nodes']:
                if node not in seen:
                    nodes.append(node)
                    seen.add(node)

        elif typ == b'changesetexplicitdepth':
            for key in (b'nodes', b'depth'):
                if key not in spec:
                    raise error.WireprotoCommandError(
                        b'%s key not present in changesetexplicitdepth '
                        b'revision specifier',
                        (key,),
                    )

            for rev in repo.revs(
                b'ancestors(%ln, %s)', spec[b'nodes'], spec[b'depth'] - 1
            ):
                node = cl.node(rev)

                if node not in seen:
                    nodes.append(node)
                    seen.add(node)

        elif typ == b'changesetdagrange':
            for key in (b'roots', b'heads'):
                if key not in spec:
                    raise error.WireprotoCommandError(
                        b'%s key not present in changesetdagrange revision '
                        b'specifier',
                        (key,),
                    )

            if not spec[b'heads']:
                raise error.WireprotoCommandError(
                    b'heads key in changesetdagrange cannot be empty'
                )

            if spec[b'roots']:
                common = [n for n in spec[b'roots'] if clhasnode(n)]
            else:
                common = [nullid]

            for n in discovery.outgoing(repo, common, spec[b'heads']).missing:
                if n not in seen:
                    nodes.append(n)
                    seen.add(n)

        else:
            raise error.WireprotoCommandError(
                b'unknown revision specifier type: %s', (typ,)
            )

    return nodes


@wireprotocommand(b'branchmap', permission=b'pull')
def branchmapv2(repo, proto):
    yield {
        encoding.fromlocal(k): v
        for k, v in pycompat.iteritems(repo.branchmap())
    }


@wireprotocommand(b'capabilities', permission=b'pull')
def capabilitiesv2(repo, proto):
    yield _capabilitiesv2(repo, proto)


@wireprotocommand(
    b'changesetdata',
    args={
        b'revisions': {
            b'type': b'list',
            b'example': [
                {
                    b'type': b'changesetexplicit',
                    b'nodes': [b'abcdef...'],
                }
            ],
        },
        b'fields': {
            b'type': b'set',
            b'default': set,
            b'example': {b'parents', b'revision'},
            b'validvalues': {b'bookmarks', b'parents', b'phase', b'revision'},
        },
    },
    permission=b'pull',
)
def changesetdata(repo, proto, revisions, fields):
    # TODO look for unknown fields and abort when they can't be serviced.
    # This could probably be validated by dispatcher using validvalues.

    cl = repo.changelog
    outgoing = resolvenodes(repo, revisions)
    publishing = repo.publishing()

    if outgoing:
        repo.hook(b'preoutgoing', throw=True, source=b'serve')

    yield {
        b'totalitems': len(outgoing),
    }

    # The phases of nodes already transferred to the client may have changed
    # since the client last requested data. We send phase-only records
    # for these revisions, if requested.
    # TODO actually do this. We'll probably want to emit phase heads
    # in the ancestry set of the outgoing revisions. This will ensure
    # that phase updates within that set are seen.
    if b'phase' in fields:
        pass

    nodebookmarks = {}
    for mark, node in repo._bookmarks.items():
        nodebookmarks.setdefault(node, set()).add(mark)

    # It is already topologically sorted by revision number.
    for node in outgoing:
        d = {
            b'node': node,
        }

        if b'parents' in fields:
            d[b'parents'] = cl.parents(node)

        if b'phase' in fields:
            if publishing:
                d[b'phase'] = b'public'
            else:
                ctx = repo[node]
                d[b'phase'] = ctx.phasestr()

        if b'bookmarks' in fields and node in nodebookmarks:
            d[b'bookmarks'] = sorted(nodebookmarks[node])
            del nodebookmarks[node]

        followingmeta = []
        followingdata = []

        if b'revision' in fields:
            revisiondata = cl.rawdata(node)
            followingmeta.append((b'revision', len(revisiondata)))
            followingdata.append(revisiondata)

        # TODO make it possible for extensions to wrap a function or register
        # a handler to service custom fields.

        if followingmeta:
            d[b'fieldsfollowing'] = followingmeta

        yield d

        for extra in followingdata:
            yield extra

    # If requested, send bookmarks from nodes that didn't have revision
    # data sent so receiver is aware of any bookmark updates.
    if b'bookmarks' in fields:
        for node, marks in sorted(pycompat.iteritems(nodebookmarks)):
            yield {
                b'node': node,
                b'bookmarks': sorted(marks),
            }


class FileAccessError(Exception):
    """Represents an error accessing a specific file."""

    def __init__(self, path, msg, args):
        self.path = path
        self.msg = msg
        self.args = args


def getfilestore(repo, proto, path):
    """Obtain a file storage object for use with wire protocol.

    Exists as a standalone function so extensions can monkeypatch to add
    access control.
    """
    # This seems to work even if the file doesn't exist. So catch
    # "empty" files and return an error.
    fl = repo.file(path)

    if not len(fl):
        raise FileAccessError(path, b'unknown file: %s', (path,))

    return fl


def emitfilerevisions(repo, path, revisions, linknodes, fields):
    for revision in revisions:
        d = {
            b'node': revision.node,
        }

        if b'parents' in fields:
            d[b'parents'] = [revision.p1node, revision.p2node]

        if b'linknode' in fields:
            d[b'linknode'] = linknodes[revision.node]

        followingmeta = []
        followingdata = []

        if b'revision' in fields:
            if revision.revision is not None:
                followingmeta.append((b'revision', len(revision.revision)))
                followingdata.append(revision.revision)
            else:
                d[b'deltabasenode'] = revision.basenode
                followingmeta.append((b'delta', len(revision.delta)))
                followingdata.append(revision.delta)

        if followingmeta:
            d[b'fieldsfollowing'] = followingmeta

        yield d

        for extra in followingdata:
            yield extra


def makefilematcher(repo, pathfilter):
    """Construct a matcher from a path filter dict."""

    # Validate values.
    if pathfilter:
        for key in (b'include', b'exclude'):
            for pattern in pathfilter.get(key, []):
                if not pattern.startswith((b'path:', b'rootfilesin:')):
                    raise error.WireprotoCommandError(
                        b'%s pattern must begin with `path:` or '
                        b'`rootfilesin:`; got %s',
                        (key, pattern),
                    )

    if pathfilter:
        matcher = matchmod.match(
            repo.root,
            b'',
            include=pathfilter.get(b'include', []),
            exclude=pathfilter.get(b'exclude', []),
        )
    else:
        matcher = matchmod.match(repo.root, b'')

    # Requested patterns could include files not in the local store. So
    # filter those out.
    return repo.narrowmatch(matcher)


@wireprotocommand(
    b'filedata',
    args={
        b'haveparents': {
            b'type': b'bool',
            b'default': lambda: False,
            b'example': True,
        },
        b'nodes': {
            b'type': b'list',
            b'example': [b'0123456...'],
        },
        b'fields': {
            b'type': b'set',
            b'default': set,
            b'example': {b'parents', b'revision'},
            b'validvalues': {b'parents', b'revision', b'linknode'},
        },
        b'path': {
            b'type': b'bytes',
            b'example': b'foo.txt',
        },
    },
    permission=b'pull',
    # TODO censoring a file revision won't invalidate the cache.
    # Figure out a way to take censoring into account when deriving
    # the cache key.
    cachekeyfn=makecommandcachekeyfn(b'filedata', 1, allargs=True),
)
def filedata(repo, proto, haveparents, nodes, fields, path):
    # TODO this API allows access to file revisions that are attached to
    # secret changesets. filesdata does not have this problem. Maybe this
    # API should be deleted?

    try:
        # Extensions may wish to access the protocol handler.
        store = getfilestore(repo, proto, path)
    except FileAccessError as e:
        raise error.WireprotoCommandError(e.msg, e.args)

    clnode = repo.changelog.node
    linknodes = {}

    # Validate requested nodes.
    for node in nodes:
        try:
            store.rev(node)
        except error.LookupError:
            raise error.WireprotoCommandError(
                b'unknown file node: %s', (hex(node),)
            )

        # TODO by creating the filectx against a specific file revision
        # instead of changeset, linkrev() is always used. This is wrong for
        # cases where linkrev() may refer to a hidden changeset. But since
        # this API doesn't know anything about changesets, we're not sure
        # how to disambiguate the linknode. Perhaps we should delete this
        # API?
        fctx = repo.filectx(path, fileid=node)
        linknodes[node] = clnode(fctx.introrev())

    revisions = store.emitrevisions(
        nodes,
        revisiondata=b'revision' in fields,
        assumehaveparentrevisions=haveparents,
    )

    yield {
        b'totalitems': len(nodes),
    }

    for o in emitfilerevisions(repo, path, revisions, linknodes, fields):
        yield o


def filesdatacapabilities(repo, proto):
    batchsize = repo.ui.configint(
        b'experimental', b'server.filesdata.recommended-batch-size'
    )
    return {
        b'recommendedbatchsize': batchsize,
    }


@wireprotocommand(
    b'filesdata',
    args={
        b'haveparents': {
            b'type': b'bool',
            b'default': lambda: False,
            b'example': True,
        },
        b'fields': {
            b'type': b'set',
            b'default': set,
            b'example': {b'parents', b'revision'},
            b'validvalues': {
                b'firstchangeset',
                b'linknode',
                b'parents',
                b'revision',
            },
        },
        b'pathfilter': {
            b'type': b'dict',
            b'default': lambda: None,
            b'example': {b'include': [b'path:tests']},
        },
        b'revisions': {
            b'type': b'list',
            b'example': [
                {
                    b'type': b'changesetexplicit',
                    b'nodes': [b'abcdef...'],
                }
            ],
        },
    },
    permission=b'pull',
    # TODO censoring a file revision won't invalidate the cache.
    # Figure out a way to take censoring into account when deriving
    # the cache key.
    cachekeyfn=makecommandcachekeyfn(b'filesdata', 1, allargs=True),
    extracapabilitiesfn=filesdatacapabilities,
)
def filesdata(repo, proto, haveparents, fields, pathfilter, revisions):
    # TODO This should operate on a repo that exposes obsolete changesets.
    # There is a race between a client making a push that obsoletes a
    # changeset and another client fetching files data for that changeset.
    # If a client has a changeset, it should probably be allowed to access
    # files data for that changeset.
    outgoing = resolvenodes(repo, revisions)
    filematcher = makefilematcher(repo, pathfilter)

    # path -> {fnode: linknode}
    fnodes = collections.defaultdict(dict)

    # We collect the set of relevant file revisions by iterating the changeset
    # revisions and either walking the set of files recorded in the changeset
    # or by walking the manifest at that revision. There is probably room for a
    # storage-level API to request this data, as it can be expensive to compute
    # and would benefit from caching or alternate storage from what revlogs
    # provide.
    for node in outgoing:
        ctx = repo[node]
        mctx = ctx.manifestctx()
        md = mctx.read()

        if haveparents:
            checkpaths = ctx.files()
        else:
            checkpaths = md.keys()

        for path in checkpaths:
            fnode = md[path]

            if path in fnodes and fnode in fnodes[path]:
                continue

            if not filematcher(path):
                continue

            fnodes[path].setdefault(fnode, node)

    yield {
        b'totalpaths': len(fnodes),
        b'totalitems': sum(len(v) for v in fnodes.values()),
    }

    for path, filenodes in sorted(fnodes.items()):
        try:
            store = getfilestore(repo, proto, path)
        except FileAccessError as e:
            raise error.WireprotoCommandError(e.msg, e.args)

        yield {
            b'path': path,
            b'totalitems': len(filenodes),
        }

        revisions = store.emitrevisions(
            filenodes.keys(),
            revisiondata=b'revision' in fields,
            assumehaveparentrevisions=haveparents,
        )

        for o in emitfilerevisions(repo, path, revisions, filenodes, fields):
            yield o


@wireprotocommand(
    b'heads',
    args={
        b'publiconly': {
            b'type': b'bool',
            b'default': lambda: False,
            b'example': False,
        },
    },
    permission=b'pull',
)
def headsv2(repo, proto, publiconly):
    if publiconly:
        repo = repo.filtered(b'immutable')

    yield repo.heads()


@wireprotocommand(
    b'known',
    args={
        b'nodes': {
            b'type': b'list',
            b'default': list,
            b'example': [b'deadbeef'],
        },
    },
    permission=b'pull',
)
def knownv2(repo, proto, nodes):
    result = b''.join(b'1' if n else b'0' for n in repo.known(nodes))
    yield result


@wireprotocommand(
    b'listkeys',
    args={
        b'namespace': {
            b'type': b'bytes',
            b'example': b'ns',
        },
    },
    permission=b'pull',
)
def listkeysv2(repo, proto, namespace):
    keys = repo.listkeys(encoding.tolocal(namespace))
    keys = {
        encoding.fromlocal(k): encoding.fromlocal(v)
        for k, v in pycompat.iteritems(keys)
    }

    yield keys


@wireprotocommand(
    b'lookup',
    args={
        b'key': {
            b'type': b'bytes',
            b'example': b'foo',
        },
    },
    permission=b'pull',
)
def lookupv2(repo, proto, key):
    key = encoding.tolocal(key)

    # TODO handle exception.
    node = repo.lookup(key)

    yield node


def manifestdatacapabilities(repo, proto):
    batchsize = repo.ui.configint(
        b'experimental', b'server.manifestdata.recommended-batch-size'
    )

    return {
        b'recommendedbatchsize': batchsize,
    }


@wireprotocommand(
    b'manifestdata',
    args={
        b'nodes': {
            b'type': b'list',
            b'example': [b'0123456...'],
        },
        b'haveparents': {
            b'type': b'bool',
            b'default': lambda: False,
            b'example': True,
        },
        b'fields': {
            b'type': b'set',
            b'default': set,
            b'example': {b'parents', b'revision'},
            b'validvalues': {b'parents', b'revision'},
        },
        b'tree': {
            b'type': b'bytes',
            b'example': b'',
        },
    },
    permission=b'pull',
    cachekeyfn=makecommandcachekeyfn(b'manifestdata', 1, allargs=True),
    extracapabilitiesfn=manifestdatacapabilities,
)
def manifestdata(repo, proto, haveparents, nodes, fields, tree):
    store = repo.manifestlog.getstorage(tree)

    # Validate the node is known and abort on unknown revisions.
    for node in nodes:
        try:
            store.rev(node)
        except error.LookupError:
            raise error.WireprotoCommandError(b'unknown node: %s', (node,))

    revisions = store.emitrevisions(
        nodes,
        revisiondata=b'revision' in fields,
        assumehaveparentrevisions=haveparents,
    )

    yield {
        b'totalitems': len(nodes),
    }

    for revision in revisions:
        d = {
            b'node': revision.node,
        }

        if b'parents' in fields:
            d[b'parents'] = [revision.p1node, revision.p2node]

        followingmeta = []
        followingdata = []

        if b'revision' in fields:
            if revision.revision is not None:
                followingmeta.append((b'revision', len(revision.revision)))
                followingdata.append(revision.revision)
            else:
                d[b'deltabasenode'] = revision.basenode
                followingmeta.append((b'delta', len(revision.delta)))
                followingdata.append(revision.delta)

        if followingmeta:
            d[b'fieldsfollowing'] = followingmeta

        yield d

        for extra in followingdata:
            yield extra


@wireprotocommand(
    b'pushkey',
    args={
        b'namespace': {
            b'type': b'bytes',
            b'example': b'ns',
        },
        b'key': {
            b'type': b'bytes',
            b'example': b'key',
        },
        b'old': {
            b'type': b'bytes',
            b'example': b'old',
        },
        b'new': {
            b'type': b'bytes',
            b'example': b'new',
        },
    },
    permission=b'push',
)
def pushkeyv2(repo, proto, namespace, key, old, new):
    # TODO handle ui output redirection
    yield repo.pushkey(
        encoding.tolocal(namespace),
        encoding.tolocal(key),
        encoding.tolocal(old),
        encoding.tolocal(new),
    )


@wireprotocommand(
    b'rawstorefiledata',
    args={
        b'files': {
            b'type': b'list',
            b'example': [b'changelog', b'manifestlog'],
        },
        b'pathfilter': {
            b'type': b'list',
            b'default': lambda: None,
            b'example': {b'include': [b'path:tests']},
        },
    },
    permission=b'pull',
)
def rawstorefiledata(repo, proto, files, pathfilter):
    if not streamclone.allowservergeneration(repo):
        raise error.WireprotoCommandError(b'stream clone is disabled')

    # TODO support dynamically advertising what store files "sets" are
    # available. For now, we support changelog, manifestlog, and files.
    files = set(files)
    allowedfiles = {b'changelog', b'manifestlog'}

    unsupported = files - allowedfiles
    if unsupported:
        raise error.WireprotoCommandError(
            b'unknown file type: %s', (b', '.join(sorted(unsupported)),)
        )

    with repo.lock():
        topfiles = list(repo.store.topfiles())

    sendfiles = []
    totalsize = 0

    # TODO this is a bunch of storage layer interface abstractions because
    # it assumes revlogs.
    for name, encodedname, size in topfiles:
        if b'changelog' in files and name.startswith(b'00changelog'):
            pass
        elif b'manifestlog' in files and name.startswith(b'00manifest'):
            pass
        else:
            continue

        sendfiles.append((b'store', name, size))
        totalsize += size

    yield {
        b'filecount': len(sendfiles),
        b'totalsize': totalsize,
    }

    for location, name, size in sendfiles:
        yield {
            b'location': location,
            b'path': name,
            b'size': size,
        }

        # We have to use a closure for this to ensure the context manager is
        # closed only after sending the final chunk.
        def getfiledata():
            with repo.svfs(name, b'rb', auditpath=False) as fh:
                for chunk in util.filechunkiter(fh, limit=size):
                    yield chunk

        yield wireprototypes.indefinitebytestringresponse(getfiledata())
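The cache-key strategy in ``makecommandcachekeyfn`` above — hash a deterministic, self-delimiting encoding of a state dict rather than concatenating raw values — can be illustrated with a small self-contained sketch. This is not Mercurial code and the names are hypothetical: ``json`` with ``sort_keys=True`` stands in for the CBOR encoding used by the real implementation, but it demonstrates the same properties (key order does not affect the key, and adjacent values cannot collide the way naive ``"foo"+"bar"`` vs ``"fo"+"obar"`` concatenation can).

```python
import hashlib
import json


def examplecachekey(command, args, repo_root, globalversion=1, localversion=1):
    """Sketch of the cache-key derivation idea (names are hypothetical).

    Everything that could influence the response goes into one dict, which
    is then serialized with a deterministic, self-delimiting encoding and
    hashed. json with sort_keys=True plays the role CBOR plays in the real
    code.
    """
    state = {
        # Bump to invalidate every cache key at once.
        'globalversion': globalversion,
        # Bump to invalidate keys for this command only.
        'localversion': localversion,
        # Keys are segmented by command name.
        'command': command,
        # Same request against a different repo must not share a key.
        'repo': repo_root,
        # Normalized command arguments (defaults already applied).
        'args': args,
    }
    encoded = json.dumps(state, sort_keys=True).encode('utf-8')
    return hashlib.sha1(encoded).hexdigest()
```

Because the encoding is deterministic, a client sending an explicit default value and a client omitting it (after server-side normalization) derive the identical key, while any substantive change to an argument, version number, or repository path yields a different one.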