view mercurial/revlogutils/deltas.py @ 45972:8b99c473aae2
copies-rust: move is_ancestor caching within the rust code
Now that the OrdMap merging is fast, smaller things start to matter.
We move the caching of `is_ancestor` calls into the Rust code. This avoids
round-trips to Python and helps us shave more time off our slower cases:
Repo Case Source-Rev Dest-Rev Old-Time New-Time Difference Factor
------------------------------------------------------------------------------------------------------------------------------------
pypy x0000_revs_x_added_0_copies d1defd0dc478 c9cb1334cc78 : 2.780174 s, 2.137894 s, -0.642280 s, × 0.7690
mozilla-try x0000_revs_xx000_added_x000_copies 89294cd501d9 7ccb2fc7ccb5 : 9.843481 s, 8.100385 s, -1.743096 s, × 0.8229
Note: I would happily have used native code for the ancestors computation;
however, I failed (without trying very hard) to create a Rust version that is
as fast as the current C version.
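The heart of the change is simply to memoize the `is_ancestor` oracle on the
Rust side, so that each (ancestor, descendant) pair crosses the Python/Rust
boundary at most once. Below is a minimal, self-contained sketch of that
pattern; `AncestorOracle` and `Rev` are illustrative names for this sketch,
not the actual hg-core API:

    use std::cell::Cell;
    use std::collections::HashMap;

    type Rev = i32;

    /// Memoizes an expensive `is_ancestor` oracle so that each
    /// (ancestor, descendant) pair is computed at most once.
    struct AncestorOracle<F: FnMut(Rev, Rev) -> bool> {
        /// The underlying check, e.g. a callback that round-trips to Python.
        inner: F,
        /// Cached answers, keyed by the query pair.
        cache: HashMap<(Rev, Rev), bool>,
    }

    impl<F: FnMut(Rev, Rev) -> bool> AncestorOracle<F> {
        fn new(inner: F) -> Self {
            Self {
                inner,
                cache: HashMap::new(),
            }
        }

        /// Returns whether `anc` is an ancestor of `desc`, consulting
        /// the cache before falling back to the expensive callback.
        fn is_ancestor(&mut self, anc: Rev, desc: Rev) -> bool {
            let inner = &mut self.inner;
            *self
                .cache
                .entry((anc, desc))
                .or_insert_with(|| inner(anc, desc))
        }
    }

    fn main() {
        // Stand-in for the Python callback: pretend lower revision
        // numbers are always ancestors of higher ones.
        let calls = Cell::new(0);
        let mut oracle = AncestorOracle::new(|a, d| {
            calls.set(calls.get() + 1);
            a <= d
        });
        assert!(oracle.is_ancestor(1, 5));
        assert!(oracle.is_ancestor(1, 5)); // answered from the cache
        assert_eq!(calls.get(), 1);
    }

Keeping the cache on the Rust side means a repeated query never reaches the
Python callback at all, which is where the round-trip cost was going.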
Below are full tables for:
- this change compared to the previous change
- this change compared to filelog performance
Repo Case Source-Rev Dest-Rev Old-Time New-Time Difference Factor
------------------------------------------------------------------------------------------------------------------------------------
mercurial x_revs_x_added_0_copies ad6b123de1c7 39cfcef4f463 : 0.000049 s, 0.000047 s, -0.000002 s, × 0.9592
mercurial x_revs_x_added_x_copies 2b1c78674230 0c1d10351869 : 0.000182 s, 0.000181 s, -0.000001 s, × 0.9945
mercurial x000_revs_x000_added_x_copies 81f8ff2a9bf2 dd3267698d84 : 0.005872 s, 0.005852 s, -0.000020 s, × 0.9966
pypy x_revs_x_added_0_copies aed021ee8ae8 099ed31b181b : 0.000229 s, 0.000229 s, +0.000000 s, × 1.0000
pypy x_revs_x000_added_0_copies 4aa4e1f8e19a 359343b9ac0e : 0.000058 s, 0.000058 s, +0.000000 s, × 1.0000
pypy x_revs_x_added_x_copies ac52eb7bbbb0 72e022663155 : 0.000148 s, 0.000146 s, -0.000002 s, × 0.9865
pypy x_revs_x00_added_x_copies c3b14617fbd7 ace7255d9a26 : 0.001205 s, 0.001206 s, +0.000001 s, × 1.0008
pypy x_revs_x000_added_x000_copies df6f7a526b60 a83dc6a2d56f : 0.025662 s, 0.025275 s, -0.000387 s, × 0.9849
pypy x000_revs_xx00_added_0_copies 89a76aede314 2f22446ff07e : 0.080113 s, 0.080303 s, +0.000190 s, × 1.0024
pypy x000_revs_x000_added_x_copies 8a3b5bfd266e 2c68e87c3efe : 0.153030 s, 0.152641 s, -0.000389 s, × 0.9975
pypy x000_revs_x000_added_x000_copies 89a76aede314 7b3dda341c84 : 0.098774 s, 0.099107 s, +0.000333 s, × 1.0034
pypy x0000_revs_x_added_0_copies d1defd0dc478 c9cb1334cc78 : 2.780174 s, 2.137894 s, -0.642280 s, × 0.7690
pypy x0000_revs_xx000_added_0_copies bf2c629d0071 4ffed77c095c : 0.022218 s, 0.022202 s, -0.000016 s, × 0.9993
pypy x0000_revs_xx000_added_x000_copies 08ea3258278e d9fa043f30c0 : 0.252125 s, 0.228946 s, -0.023179 s, × 0.9081
netbeans x_revs_x_added_0_copies fb0955ffcbcd a01e9239f9e7 : 0.000186 s, 0.000186 s, +0.000000 s, × 1.0000
netbeans x_revs_x000_added_0_copies 6f360122949f 20eb231cc7d0 : 0.000133 s, 0.000133 s, +0.000000 s, × 1.0000
netbeans x_revs_x_added_x_copies 1ada3faf6fb6 5a39d12eecf4 : 0.000320 s, 0.000320 s, +0.000000 s, × 1.0000
netbeans x_revs_x00_added_x_copies 35be93ba1e2c 9eec5e90c05f : 0.001336 s, 0.001339 s, +0.000003 s, × 1.0022
netbeans x000_revs_xx00_added_0_copies eac3045b4fdd 51d4ae7f1290 : 0.015573 s, 0.015694 s, +0.000121 s, × 1.0078
netbeans x000_revs_x000_added_x_copies e2063d266acd 6081d72689dc : 0.018667 s, 0.018457 s, -0.000210 s, × 0.9888
netbeans x000_revs_x000_added_x000_copies ff453e9fee32 411350406ec2 : 0.112534 s, 0.111691 s, -0.000843 s, × 0.9925
netbeans x0000_revs_xx000_added_x000_copies 588c2d1ced70 1aad62e59ddd : 1.231869 s, 1.166017 s, -0.065852 s, × 0.9465
mozilla-central x_revs_x_added_0_copies 3697f962bb7b 7015fcdd43a2 : 0.000197 s, 0.000197 s, +0.000000 s, × 1.0000
mozilla-central x_revs_x000_added_0_copies dd390860c6c9 40d0c5bed75d : 0.000637 s, 0.000626 s, -0.000011 s, × 0.9827
mozilla-central x_revs_x_added_x_copies 8d198483ae3b 14207ffc2b2f : 0.000303 s, 0.000303 s, +0.000000 s, × 1.0000
mozilla-central x_revs_x00_added_x_copies 98cbc58cc6bc 446a150332c3 : 0.001663 s, 0.001679 s, +0.000016 s, × 1.0096
mozilla-central x_revs_x000_added_x000_copies 3c684b4b8f68 0a5e72d1b479 : 0.007008 s, 0.006947 s, -0.000061 s, × 0.9913
mozilla-central x_revs_x0000_added_x0000_copies effb563bb7e5 c07a39dc4e80 : 0.127385 s, 0.133070 s, +0.005685 s, × 1.0446
mozilla-central x000_revs_xx00_added_0_copies 6100d773079a 04a55431795e : 0.008740 s, 0.008705 s, -0.000035 s, × 0.9960
mozilla-central x000_revs_x000_added_x_copies 9f17a6fc04f9 2d37b966abed : 0.005783 s, 0.005913 s, +0.000130 s, × 1.0225
mozilla-central x000_revs_x000_added_x000_copies 7c97034feb78 4407bd0c6330 : 0.102184 s, 0.101373 s, -0.000811 s, × 0.9921
mozilla-central x0000_revs_xx000_added_0_copies 9eec5917337d 67118cc6dcad : 0.046220 s, 0.046526 s, +0.000306 s, × 1.0066
mozilla-central x0000_revs_xx000_added_x000_copies f78c615a656c 96a38b690156 : 0.315271 s, 0.313954 s, -0.001317 s, × 0.9958
mozilla-central x00000_revs_x0000_added_x0000_copies 6832ae71433c 4c222a1d9a00 : 3.478747 s, 3.367395 s, -0.111352 s, × 0.9680
mozilla-central x00000_revs_x00000_added_x000_copies 76caed42cf7c 1daa622bbe42 : 4.766435 s, 4.691820 s, -0.074615 s, × 0.9843
mozilla-try x_revs_x_added_0_copies aaf6dde0deb8 9790f499805a : 0.001214 s, 0.001199 s, -0.000015 s, × 0.9876
mozilla-try x_revs_x000_added_0_copies d8d0222927b4 5bb8ce8c7450 : 0.001221 s, 0.001216 s, -0.000005 s, × 0.9959
mozilla-try x_revs_x_added_x_copies 092fcca11bdb 936255a0384a : 0.000613 s, 0.000613 s, +0.000000 s, × 1.0000
mozilla-try x_revs_x00_added_x_copies b53d2fadbdb5 017afae788ec : 0.001904 s, 0.001906 s, +0.000002 s, × 1.0011
mozilla-try x_revs_x000_added_x000_copies 20408ad61ce5 6f0ee96e21ad : 0.093000 s, 0.092766 s, -0.000234 s, × 0.9975
mozilla-try x_revs_x0000_added_x0000_copies effb563bb7e5 c07a39dc4e80 : 0.132194 s, 0.136074 s, +0.003880 s, × 1.0294
mozilla-try x000_revs_xx00_added_0_copies 6100d773079a 04a55431795e : 0.009069 s, 0.009067 s, -0.000002 s, × 0.9998
mozilla-try x000_revs_x000_added_x_copies 9f17a6fc04f9 2d37b966abed : 0.006169 s, 0.006243 s, +0.000074 s, × 1.0120
mozilla-try x000_revs_x000_added_x000_copies 1346fd0130e4 4c65cbdabc1f : 0.115540 s, 0.114463 s, -0.001077 s, × 0.9907
mozilla-try x0000_revs_x_added_0_copies 63519bfd42ee a36a2a865d92 : 0.435381 s, 0.433683 s, -0.001698 s, × 0.9961
mozilla-try x0000_revs_x_added_x_copies 9fe69ff0762d bcabf2a78927 : 0.415461 s, 0.411278 s, -0.004183 s, × 0.9899
mozilla-try x0000_revs_xx000_added_x_copies 156f6e2674f2 4d0f2c178e66 : 0.155946 s, 0.155133 s, -0.000813 s, × 0.9948
mozilla-try x0000_revs_xx000_added_0_copies 9eec5917337d 67118cc6dcad : 0.048521 s, 0.048933 s, +0.000412 s, × 1.0085
mozilla-try x0000_revs_xx000_added_x000_copies 89294cd501d9 7ccb2fc7ccb5 : 9.843481 s, 8.100385 s, -1.743096 s, × 0.8229
mozilla-try x0000_revs_x0000_added_x0000_copies e928c65095ed e951f4ad123a : 1.465128 s, 1.446720 s, -0.018408 s, × 0.9874
mozilla-try x00000_revs_x00000_added_0_copies dc8a3ca7010e d16fde900c9c : 1.374283 s, 1.369537 s, -0.004746 s, × 0.9965
mozilla-try x00000_revs_x0000_added_x0000_copies 8d3fafa80d4b eb884023b810 : 5.255158 s, 5.186079 s, -0.069079 s, × 0.9869
Repo Case Source-Rev Dest-Rev filelog sidedata Difference Factor
--------------------------------------------------------------------------------------------------------------------------------------
mercurial x_revs_x_added_0_copies ad6b123de1c7 39cfcef4f463 : 0.000892 s, 0.000047 s, -0.000845 s, × 0.052691
mercurial x_revs_x_added_x_copies 2b1c78674230 0c1d10351869 : 0.001823 s, 0.000181 s, -0.001642 s, × 0.099287
mercurial x000_revs_x000_added_x_copies 81f8ff2a9bf2 dd3267698d84 : 0.018063 s, 0.005852 s, -0.012211 s, × 0.323977
pypy x_revs_x_added_0_copies aed021ee8ae8 099ed31b181b : 0.001505 s, 0.000229 s, -0.001276 s, × 0.152159
pypy x_revs_x000_added_0_copies 4aa4e1f8e19a 359343b9ac0e : 0.205895 s, 0.000058 s, -0.205837 s, × 0.000282
pypy x_revs_x_added_x_copies ac52eb7bbbb0 72e022663155 : 0.017021 s, 0.000146 s, -0.016875 s, × 0.008578
pypy x_revs_x00_added_x_copies c3b14617fbd7 ace7255d9a26 : 0.019422 s, 0.001206 s, -0.018216 s, × 0.062095
pypy x_revs_x000_added_x000_copies df6f7a526b60 a83dc6a2d56f : 0.767740 s, 0.025275 s, -0.742465 s, × 0.032921
pypy x000_revs_xx00_added_0_copies 89a76aede314 2f22446ff07e : 1.188515 s, 0.080303 s, -1.108212 s, × 0.067566
pypy x000_revs_x000_added_x_copies 8a3b5bfd266e 2c68e87c3efe : 1.251968 s, 0.152641 s, -1.099327 s, × 0.121921
pypy x000_revs_x000_added_x000_copies 89a76aede314 7b3dda341c84 : 1.616799 s, 0.099107 s, -1.517692 s, × 0.061298
pypy x0000_revs_x_added_0_copies d1defd0dc478 c9cb1334cc78 : 0.001057 s, 2.137894 s, +2.136837 s, × 2022.605487
pypy x0000_revs_xx000_added_0_copies bf2c629d0071 4ffed77c095c : 1.069485 s, 0.022202 s, -1.047283 s, × 0.020760
pypy x0000_revs_xx000_added_x000_copies 08ea3258278e d9fa043f30c0 : 1.350162 s, 0.228946 s, -1.121216 s, × 0.169569
netbeans x_revs_x_added_0_copies fb0955ffcbcd a01e9239f9e7 : 0.028008 s, 0.000186 s, -0.027822 s, × 0.006641
netbeans x_revs_x000_added_0_copies 6f360122949f 20eb231cc7d0 : 0.132281 s, 0.000133 s, -0.132148 s, × 0.001005
netbeans x_revs_x_added_x_copies 1ada3faf6fb6 5a39d12eecf4 : 0.025311 s, 0.000320 s, -0.024991 s, × 0.012643
netbeans x_revs_x00_added_x_copies 35be93ba1e2c 9eec5e90c05f : 0.052957 s, 0.001339 s, -0.051618 s, × 0.025285
netbeans x000_revs_xx00_added_0_copies eac3045b4fdd 51d4ae7f1290 : 0.038011 s, 0.015694 s, -0.022317 s, × 0.412880
netbeans x000_revs_x000_added_x_copies e2063d266acd 6081d72689dc : 0.198639 s, 0.018457 s, -0.180182 s, × 0.092917
netbeans x000_revs_x000_added_x000_copies ff453e9fee32 411350406ec2 : 0.955713 s, 0.111691 s, -0.844022 s, × 0.116867
netbeans x0000_revs_xx000_added_x000_copies 588c2d1ced70 1aad62e59ddd : 3.838886 s, 1.166017 s, -2.672869 s, × 0.303738
mozilla-central x_revs_x_added_0_copies 3697f962bb7b 7015fcdd43a2 : 0.024548 s, 0.000197 s, -0.024351 s, × 0.008025
mozilla-central x_revs_x000_added_0_copies dd390860c6c9 40d0c5bed75d : 0.143394 s, 0.000626 s, -0.142768 s, × 0.004366
mozilla-central x_revs_x_added_x_copies 8d198483ae3b 14207ffc2b2f : 0.026046 s, 0.000303 s, -0.025743 s, × 0.011633
mozilla-central x_revs_x00_added_x_copies 98cbc58cc6bc 446a150332c3 : 0.085440 s, 0.001679 s, -0.083761 s, × 0.019651
mozilla-central x_revs_x000_added_x000_copies 3c684b4b8f68 0a5e72d1b479 : 0.195656 s, 0.006947 s, -0.188709 s, × 0.035506
mozilla-central x_revs_x0000_added_x0000_copies effb563bb7e5 c07a39dc4e80 : 2.190874 s, 0.133070 s, -2.057804 s, × 0.060738
mozilla-central x000_revs_xx00_added_0_copies 6100d773079a 04a55431795e : 0.090208 s, 0.008705 s, -0.081503 s, × 0.096499
mozilla-central x000_revs_x000_added_x_copies 9f17a6fc04f9 2d37b966abed : 0.747367 s, 0.005913 s, -0.741454 s, × 0.007912
mozilla-central x000_revs_x000_added_x000_copies 7c97034feb78 4407bd0c6330 : 1.152863 s, 0.101373 s, -1.051490 s, × 0.087932
mozilla-central x0000_revs_xx000_added_0_copies 9eec5917337d 67118cc6dcad : 6.598336 s, 0.046526 s, -6.551810 s, × 0.007051
mozilla-central x0000_revs_xx000_added_x000_copies f78c615a656c 96a38b690156 : 3.255015 s, 0.313954 s, -2.941061 s, × 0.096452
mozilla-central x00000_revs_x0000_added_x0000_copies 6832ae71433c 4c222a1d9a00 : 15.668041 s, 3.367395 s, -12.300646 s, × 0.214921
mozilla-central x00000_revs_x00000_added_x000_copies 76caed42cf7c 1daa622bbe42 : 20.439638 s, 4.691820 s, -15.747818 s, × 0.229545
mozilla-try x_revs_x_added_0_copies aaf6dde0deb8 9790f499805a : 0.080923 s, 0.001199 s, -0.079724 s, × 0.014817
mozilla-try x_revs_x000_added_0_copies d8d0222927b4 5bb8ce8c7450 : 0.498456 s, 0.001216 s, -0.497240 s, × 0.002440
mozilla-try x_revs_x_added_x_copies 092fcca11bdb 936255a0384a : 0.020798 s, 0.000613 s, -0.020185 s, × 0.029474
mozilla-try x_revs_x00_added_x_copies b53d2fadbdb5 017afae788ec : 0.226930 s, 0.001906 s, -0.225024 s, × 0.008399
mozilla-try x_revs_x000_added_x000_copies 20408ad61ce5 6f0ee96e21ad : 1.113005 s, 0.092766 s, -1.020239 s, × 0.083347
mozilla-try x_revs_x0000_added_x0000_copies effb563bb7e5 c07a39dc4e80 : 2.230671 s, 0.136074 s, -2.094597 s, × 0.061001
mozilla-try x000_revs_xx00_added_0_copies 6100d773079a 04a55431795e : 0.089672 s, 0.009067 s, -0.080605 s, × 0.101113
mozilla-try x000_revs_x000_added_x_copies 9f17a6fc04f9 2d37b966abed : 0.740221 s, 0.006243 s, -0.733978 s, × 0.008434
mozilla-try x000_revs_x000_added_x000_copies 1346fd0130e4 4c65cbdabc1f : 1.185881 s, 0.114463 s, -1.071418 s, × 0.096521
mozilla-try x0000_revs_x_added_0_copies 63519bfd42ee a36a2a865d92 : 0.086072 s, 0.433683 s, +0.347611 s, × 5.038607
mozilla-try x0000_revs_x_added_x_copies 9fe69ff0762d bcabf2a78927 : 0.081321 s, 0.411278 s, +0.329957 s, × 5.057464
mozilla-try x0000_revs_xx000_added_x_copies 156f6e2674f2 4d0f2c178e66 : 7.528370 s, 0.155133 s, -7.373237 s, × 0.020606
mozilla-try x0000_revs_xx000_added_0_copies 9eec5917337d 67118cc6dcad : 6.757368 s, 0.048933 s, -6.708435 s, × 0.007241
mozilla-try x0000_revs_xx000_added_x000_copies 89294cd501d9 7ccb2fc7ccb5 : 7.643752 s, 8.100385 s, +0.456633 s, × 1.059739
mozilla-try x0000_revs_x0000_added_x0000_copies e928c65095ed e951f4ad123a : 9.704242 s, 1.446720 s, -8.257522 s, × 0.149081
mozilla-try x00000_revs_x_added_0_copies 6a320851d377 1ebb79acd503 : 0.092845 s, killed
mozilla-try x00000_revs_x00000_added_0_copies dc8a3ca7010e d16fde900c9c : 26.626870 s, 1.369537 s, -25.257333 s, × 0.051434
mozilla-try x00000_revs_x_added_x_copies 5173c4b6f97c 95d83ee7242d : 0.092953 s, killed
mozilla-try x00000_revs_x000_added_x_copies 9126823d0e9c ca82787bb23c : 0.227131 s, killed
mozilla-try x00000_revs_x0000_added_x0000_copies 8d3fafa80d4b eb884023b810 : 18.884666 s, 5.186079 s, -13.698587 s, × 0.274619
mozilla-try x00000_revs_x00000_added_x0000_copies 1b661134e2ca 1ae03d022d6d : 21.451622 s, killed
mozilla-try x00000_revs_x00000_added_x000_copies 9b2a99adc05e 8e29777b48e6 : 25.152558 s, killed
Differential Revision: https://phab.mercurial-scm.org/D9303
author    Pierre-Yves David <pierre-yves.david@octobus.net>
date      Fri, 02 Oct 2020 15:41:31 +0200
parents   c59eb1560c44
children  d4ba4d51f85f
# revlogdeltas.py - Logic around delta computation for revlog
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
# Copyright 2018 Octobus <contact@octobus.net>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Helper class to compute deltas stored inside revlogs"""

from __future__ import absolute_import

import collections
import struct

# import stuff from node for others to import from revlog
from ..node import nullrev
from ..i18n import _
from ..pycompat import getattr

from .constants import (
    REVIDX_ISCENSORED,
    REVIDX_RAWTEXT_CHANGING_FLAGS,
)

from ..thirdparty import attr

from .. import (
    error,
    mdiff,
    util,
)

from . import flagutil

# maximum <delta-chain-data>/<revision-text-length> ratio
LIMIT_DELTA2TEXT = 2


class _testrevlog(object):
    """minimalist fake revlog to use in doctests"""

    def __init__(self, data, density=0.5, mingap=0, snapshot=()):
        """data is a list of revision payload boundaries"""
        self._data = data
        self._srdensitythreshold = density
        self._srmingapsize = mingap
        self._snapshot = set(snapshot)
        self.index = None

    def start(self, rev):
        if rev == nullrev:
            return 0
        if rev == 0:
            return 0
        return self._data[rev - 1]

    def end(self, rev):
        if rev == nullrev:
            return 0
        return self._data[rev]

    def length(self, rev):
        return self.end(rev) - self.start(rev)

    def __len__(self):
        return len(self._data)

    def issnapshot(self, rev):
        if rev == nullrev:
            return True
        return rev in self._snapshot


def slicechunk(revlog, revs, targetsize=None):
    """slice revs to reduce the amount of unrelated data to be read from disk.

    ``revs`` is sliced into groups that should be read in one time.
    Assume that revs are sorted.

    The initial chunk is sliced until the overall density (payload/chunks-span
    ratio) is above `revlog._srdensitythreshold`. No gap smaller than
    `revlog._srmingapsize` is skipped.

    If `targetsize` is set, no chunk larger than `targetsize` will be yielded.
    For consistency with other slicing choices, this limit won't go lower than
    `revlog._srmingapsize`.

    If individual revision chunks are larger than this limit, they will still
    be raised individually.

    >>> data = [
    ...  5,  #00 (5)
    ...  10, #01 (5)
    ...  12, #02 (2)
    ...  12, #03 (empty)
    ...  27, #04 (15)
    ...  31, #05 (4)
    ...  31, #06 (empty)
    ...  42, #07 (11)
    ...  47, #08 (5)
    ...  47, #09 (empty)
    ...  48, #10 (1)
    ...  51, #11 (3)
    ...  74, #12 (23)
    ...  85, #13 (11)
    ...  86, #14 (1)
    ...  91, #15 (5)
    ... ]
    >>> revlog = _testrevlog(data, snapshot=range(16))

    >>> list(slicechunk(revlog, list(range(16))))
    [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]]
    >>> list(slicechunk(revlog, [0, 15]))
    [[0], [15]]
    >>> list(slicechunk(revlog, [0, 11, 15]))
    [[0], [11], [15]]
    >>> list(slicechunk(revlog, [0, 11, 13, 15]))
    [[0], [11, 13, 15]]
    >>> list(slicechunk(revlog, [1, 2, 3, 5, 8, 10, 11, 14]))
    [[1, 2], [5, 8, 10, 11], [14]]

    Slicing with a maximum chunk size
    >>> list(slicechunk(revlog, [0, 11, 13, 15], targetsize=15))
    [[0], [11], [13], [15]]
    >>> list(slicechunk(revlog, [0, 11, 13, 15], targetsize=20))
    [[0], [11], [13, 15]]

    Slicing involving nullrev
    >>> list(slicechunk(revlog, [-1, 0, 11, 13, 15], targetsize=20))
    [[-1, 0], [11], [13, 15]]
    >>> list(slicechunk(revlog, [-1, 13, 15], targetsize=5))
    [[-1], [13], [15]]
    """
    if targetsize is not None:
        targetsize = max(targetsize, revlog._srmingapsize)
    # targetsize should not be specified when evaluating delta candidates:
    # * targetsize is used to ensure we stay within specification when reading,
    densityslicing = getattr(revlog.index, 'slicechunktodensity', None)
    if densityslicing is None:
        densityslicing = lambda x, y, z: _slicechunktodensity(revlog, x, y, z)
    for chunk in densityslicing(
        revs, revlog._srdensitythreshold, revlog._srmingapsize
    ):
        for subchunk in _slicechunktosize(revlog, chunk, targetsize):
            yield subchunk


def _slicechunktosize(revlog, revs, targetsize=None):
    """slice revs to match the target size

    This is intended to be used on chunks that density slicing selected but
    that are still too large compared to the read guarantee of revlogs. This
    might happen when the "minimal gap size" interrupted the slicing, or when
    chains are built in a way that creates large blocks next to each other.

    >>> data = [
    ...  3,  #0 (3)
    ...  5,  #1 (2)
    ...  6,  #2 (1)
    ...  8,  #3 (2)
    ...  8,  #4 (empty)
    ...  11, #5 (3)
    ...  12, #6 (1)
    ...  13, #7 (1)
    ...  14, #8 (1)
    ... ]

    == All snapshots cases ==
    >>> revlog = _testrevlog(data, snapshot=range(9))

    Cases where chunk is already small enough
    >>> list(_slicechunktosize(revlog, [0], 3))
    [[0]]
    >>> list(_slicechunktosize(revlog, [6, 7], 3))
    [[6, 7]]
    >>> list(_slicechunktosize(revlog, [0], None))
    [[0]]
    >>> list(_slicechunktosize(revlog, [6, 7], None))
    [[6, 7]]

    cases where we need actual slicing
    >>> list(_slicechunktosize(revlog, [0, 1], 3))
    [[0], [1]]
    >>> list(_slicechunktosize(revlog, [1, 3], 3))
    [[1], [3]]
    >>> list(_slicechunktosize(revlog, [1, 2, 3], 3))
    [[1, 2], [3]]
    >>> list(_slicechunktosize(revlog, [3, 5], 3))
    [[3], [5]]
    >>> list(_slicechunktosize(revlog, [3, 4, 5], 3))
    [[3], [5]]
    >>> list(_slicechunktosize(revlog, [5, 6, 7, 8], 3))
    [[5], [6, 7, 8]]
    >>> list(_slicechunktosize(revlog, [0, 1, 2, 3, 4, 5, 6, 7, 8], 3))
    [[0], [1, 2], [3], [5], [6, 7, 8]]

    Case with too large individual chunk (must return valid chunk)
    >>> list(_slicechunktosize(revlog, [0, 1], 2))
    [[0], [1]]
    >>> list(_slicechunktosize(revlog, [1, 3], 1))
    [[1], [3]]
    >>> list(_slicechunktosize(revlog, [3, 4, 5], 2))
    [[3], [5]]

    == No Snapshot cases ==
    >>> revlog = _testrevlog(data)

    Cases where chunk is already small enough
    >>> list(_slicechunktosize(revlog, [0], 3))
    [[0]]
    >>> list(_slicechunktosize(revlog, [6, 7], 3))
    [[6, 7]]
    >>> list(_slicechunktosize(revlog, [0], None))
    [[0]]
    >>> list(_slicechunktosize(revlog, [6, 7], None))
    [[6, 7]]

    cases where we need actual slicing
    >>> list(_slicechunktosize(revlog, [0, 1], 3))
    [[0], [1]]
    >>> list(_slicechunktosize(revlog, [1, 3], 3))
    [[1], [3]]
    >>> list(_slicechunktosize(revlog, [1, 2, 3], 3))
    [[1], [2, 3]]
    >>> list(_slicechunktosize(revlog, [3, 5], 3))
    [[3], [5]]
    >>> list(_slicechunktosize(revlog, [3, 4, 5], 3))
    [[3], [4, 5]]
    >>> list(_slicechunktosize(revlog, [5, 6, 7, 8], 3))
    [[5], [6, 7, 8]]
    >>> list(_slicechunktosize(revlog, [0, 1, 2, 3, 4, 5, 6, 7, 8], 3))
    [[0], [1, 2], [3], [5], [6, 7, 8]]

    Case with too large individual chunk (must return valid chunk)
    >>> list(_slicechunktosize(revlog, [0, 1], 2))
    [[0], [1]]
    >>> list(_slicechunktosize(revlog, [1, 3], 1))
    [[1], [3]]
    >>> list(_slicechunktosize(revlog, [3, 4, 5], 2))
    [[3], [5]]

    == mixed case ==
    >>> revlog = _testrevlog(data, snapshot=[0, 1, 2])
    >>> list(_slicechunktosize(revlog, list(range(9)), 5))
    [[0, 1], [2], [3, 4, 5], [6, 7, 8]]
    """
    assert targetsize is None or 0 <= targetsize
    startdata = revlog.start(revs[0])
    enddata = revlog.end(revs[-1])
    fullspan = enddata - startdata
    if targetsize is None or fullspan <= targetsize:
        yield revs
        return

    startrevidx = 0
    endrevidx = 1
    iterrevs = enumerate(revs)
    next(iterrevs)  # skip first rev.

    # first step: get snapshots out of the way
    for idx, r in iterrevs:
        span = revlog.end(r) - startdata
        snapshot = revlog.issnapshot(r)
        if span <= targetsize and snapshot:
            endrevidx = idx + 1
        else:
            chunk = _trimchunk(revlog, revs, startrevidx, endrevidx)
            if chunk:
                yield chunk
            startrevidx = idx
            startdata = revlog.start(r)
            endrevidx = idx + 1
        if not snapshot:
            break

    # for the others, we use binary slicing to quickly converge toward valid
    # chunks (otherwise, we might end up looking for start/end of many
    # revisions). This logic is not looking for the perfect slicing point, it
    # focuses on quickly converging toward valid chunks.
    nbitem = len(revs)
    while (enddata - startdata) > targetsize:
        endrevidx = nbitem
        if nbitem - startrevidx <= 1:
            break  # protect against individual chunk larger than limit
        localenddata = revlog.end(revs[endrevidx - 1])
        span = localenddata - startdata
        while span > targetsize:
            if endrevidx - startrevidx <= 1:
                break  # protect against individual chunk larger than limit
            endrevidx -= (endrevidx - startrevidx) // 2
            localenddata = revlog.end(revs[endrevidx - 1])
            span = localenddata - startdata
        chunk = _trimchunk(revlog, revs, startrevidx, endrevidx)
        if chunk:
            yield chunk
        startrevidx = endrevidx
        startdata = revlog.start(revs[startrevidx])

    chunk = _trimchunk(revlog, revs, startrevidx)
    if chunk:
        yield chunk


def _slicechunktodensity(revlog, revs, targetdensity=0.5, mingapsize=0):
    """slice revs to reduce the amount of unrelated data to be read from disk.

    ``revs`` is sliced into groups that should be read in one time.
    Assume that revs are sorted.

    The initial chunk is sliced until the overall density (payload/chunks-span
    ratio) is above `targetdensity`. No gap smaller than `mingapsize` is
    skipped.

    >>> revlog = _testrevlog([
    ...  5,  #00 (5)
    ...  10, #01 (5)
    ...  12, #02 (2)
    ...  12, #03 (empty)
    ...  27, #04 (15)
    ...  31, #05 (4)
    ...  31, #06 (empty)
    ...  42, #07 (11)
    ...  47, #08 (5)
    ...  47, #09 (empty)
    ...  48, #10 (1)
    ...  51, #11 (3)
    ...  74, #12 (23)
    ...  85, #13 (11)
    ...  86, #14 (1)
    ...  91, #15 (5)
    ... ])

    >>> list(_slicechunktodensity(revlog, list(range(16))))
    [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]]
    >>> list(_slicechunktodensity(revlog, [0, 15]))
    [[0], [15]]
    >>> list(_slicechunktodensity(revlog, [0, 11, 15]))
    [[0], [11], [15]]
    >>> list(_slicechunktodensity(revlog, [0, 11, 13, 15]))
    [[0], [11, 13, 15]]
    >>> list(_slicechunktodensity(revlog, [1, 2, 3, 5, 8, 10, 11, 14]))
    [[1, 2], [5, 8, 10, 11], [14]]
    >>> list(_slicechunktodensity(revlog, [1, 2, 3, 5, 8, 10, 11, 14],
    ...                           mingapsize=20))
    [[1, 2, 3, 5, 8, 10, 11], [14]]
    >>> list(_slicechunktodensity(revlog, [1, 2, 3, 5, 8, 10, 11, 14],
    ...                           targetdensity=0.95))
    [[1, 2], [5], [8, 10, 11], [14]]
    >>> list(_slicechunktodensity(revlog, [1, 2, 3, 5, 8, 10, 11, 14],
    ...                           targetdensity=0.95, mingapsize=12))
    [[1, 2], [5, 8, 10, 11], [14]]
    """
    start = revlog.start
    length = revlog.length

    if len(revs) <= 1:
        yield revs
        return

    deltachainspan = segmentspan(revlog, revs)

    if deltachainspan < mingapsize:
        yield revs
        return

    readdata = deltachainspan
    chainpayload = sum(length(r) for r in revs)

    if deltachainspan:
        density = chainpayload / float(deltachainspan)
    else:
        density = 1.0

    if density >= targetdensity:
        yield revs
        return

    # Collect the gaps, sorted so the largest can be popped first
    gaps = []
    prevend = None
    for i, rev in enumerate(revs):
        revstart = start(rev)
        revlen = length(rev)

        # Skip empty revisions to form larger holes
        if revlen == 0:
            continue

        if prevend is not None:
            gapsize = revstart - prevend
            # only consider holes that are large enough
            if gapsize > mingapsize:
                gaps.append((gapsize, i))

        prevend = revstart + revlen
    # sort the gaps to pop them from largest to smallest
    gaps.sort()

    # Collect the indices of the largest holes until the density is acceptable
    selected = []
    while gaps and density < targetdensity:
        gapsize, gapidx = gaps.pop()

        selected.append(gapidx)

        # skipping this gap reduces the amount of data to read, which
        # increases the density
        readdata -= gapsize
        if readdata > 0:
            density = chainpayload / float(readdata)
        else:
            density = 1.0
    selected.sort()

    # Cut the revs at collected indices
    previdx = 0
    for idx in selected:

        chunk = _trimchunk(revlog, revs, previdx, idx)
        if chunk:
            yield chunk

        previdx = idx

    chunk = _trimchunk(revlog, revs, previdx)
    if chunk:
        yield chunk


def _trimchunk(revlog, revs, startidx, endidx=None):
    """returns revs[startidx:endidx] without empty trailing revs

    Doctest Setup
    >>> revlog = _testrevlog([
    ...  5,  #0
    ...  10, #1
    ...  12, #2
    ...  12, #3 (empty)
    ...  17, #4
    ...  21, #5
    ...  21, #6 (empty)
    ... ])

    Contiguous cases:
    >>> _trimchunk(revlog, [0, 1, 2, 3, 4, 5, 6], 0)
    [0, 1, 2, 3, 4, 5]
    >>> _trimchunk(revlog, [0, 1, 2, 3, 4, 5, 6], 0, 5)
    [0, 1, 2, 3, 4]
    >>> _trimchunk(revlog, [0, 1, 2, 3, 4, 5, 6], 0, 4)
    [0, 1, 2]
    >>> _trimchunk(revlog, [0, 1, 2, 3, 4, 5, 6], 2, 4)
    [2]
    >>> _trimchunk(revlog, [0, 1, 2, 3, 4, 5, 6], 3)
    [3, 4, 5]
    >>> _trimchunk(revlog, [0, 1, 2, 3, 4, 5, 6], 3, 5)
    [3, 4]

    Discontiguous cases:
    >>> _trimchunk(revlog, [1, 3, 5, 6], 0)
    [1, 3, 5]
    >>> _trimchunk(revlog, [1, 3, 5, 6], 0, 2)
    [1]
    >>> _trimchunk(revlog, [1, 3, 5, 6], 1, 3)
    [3, 5]
    >>> _trimchunk(revlog, [1, 3, 5, 6], 1)
    [3, 5]
    """
    length = revlog.length

    if endidx is None:
        endidx = len(revs)

    # If we have a non-empty delta candidate, there is nothing to trim
    if revs[endidx - 1] < len(revlog):
        # Trim empty revs at the end, except the very first revision of a chain
        while (
            endidx > 1 and endidx > startidx and length(revs[endidx - 1]) == 0
        ):
            endidx -= 1

    return revs[startidx:endidx]


def segmentspan(revlog, revs):
    """Get the byte span of a segment of revisions

    revs is a sorted array of revision numbers

    >>> revlog = _testrevlog([
    ...  5,  #0
    ...  10, #1
    ...  12, #2
    ...  12, #3 (empty)
    ...  17, #4
    ... ])

    >>> segmentspan(revlog, [0, 1, 2, 3, 4])
    17
    >>> segmentspan(revlog, [0, 4])
    17
    >>> segmentspan(revlog, [3, 4])
    5
    >>> segmentspan(revlog, [1, 2, 3,])
    7
    >>> segmentspan(revlog, [1, 3])
    7
    """
    if not revs:
        return 0
    end = revlog.end(revs[-1])
    return end - revlog.start(revs[0])


def _textfromdelta(fh, revlog, baserev, delta, p1, p2, flags, expectednode):
    """build full text from a (base, delta) pair and other metadata"""
    # special case deltas which replace entire base; no need to decode
    # base revision. this neatly avoids censored bases, which throw when
    # they're decoded.
    hlen = struct.calcsize(b">lll")
    if delta[:hlen] == mdiff.replacediffheader(
        revlog.rawsize(baserev), len(delta) - hlen
    ):
        fulltext = delta[hlen:]
    else:
        # deltabase is rawtext before changed by flag processors, which is
        # equivalent to non-raw text
        basetext = revlog.revision(baserev, _df=fh, raw=False)
        fulltext = mdiff.patch(basetext, delta)

    try:
        validatehash = flagutil.processflagsraw(revlog, fulltext, flags)
        if validatehash:
            revlog.checkhash(fulltext, expectednode, p1=p1, p2=p2)
        if flags & REVIDX_ISCENSORED:
            raise error.StorageError(
                _(b'node %s is not censored') % expectednode
            )
    except error.CensoredNodeError:
        # must pass the censored index flag to add censored revisions
        if not flags & REVIDX_ISCENSORED:
            raise
    return fulltext


@attr.s(slots=True, frozen=True)
class _deltainfo(object):
    distance = attr.ib()
    deltalen = attr.ib()
    data = attr.ib()
    base = attr.ib()
    chainbase = attr.ib()
    chainlen = attr.ib()
    compresseddeltalen = attr.ib()
    snapshotdepth = attr.ib()


def isgooddeltainfo(revlog, deltainfo, revinfo):
    """Returns True if the given delta is good. Good means that it is within
    the disk span, disk size, and chain length bounds that we know to be
    performant."""
    if deltainfo is None:
        return False

    # - 'deltainfo.distance' is the distance from the base revision --
    #   bounding it limits the amount of I/O we need to do.
    # - 'deltainfo.compresseddeltalen' is the sum of the total size of
    #   deltas we need to apply -- bounding it limits the amount of CPU
    #   we consume.

    textlen = revinfo.textlen
    defaultmax = textlen * 4
    maxdist = revlog._maxdeltachainspan
    if not maxdist:
        maxdist = deltainfo.distance  # ensure the conditional passes
    maxdist = max(maxdist, defaultmax)

    # Bad delta from read span:
    #
    #   If the span of data read is larger than the maximum allowed.
    #
    #   In the sparse-revlog case, we rely on the associated "sparse reading"
    #   to avoid issues related to the span of data. In theory, it would be
    #   possible to build a pathological revlog where the delta pattern would
    #   lead to too many reads. However, they do not happen in practice at
    #   all. So we skip the span check entirely.
    if not revlog._sparserevlog and maxdist < deltainfo.distance:
        return False

    # Bad delta from new delta size:
    #
    #   If the delta size is larger than the target text, storing the
    #   delta will be inefficient.
    if textlen < deltainfo.deltalen:
        return False

    # Bad delta from cumulated payload size:
    #
    #   If the sum of the deltas gets larger than K * target text length.
    if textlen * LIMIT_DELTA2TEXT < deltainfo.compresseddeltalen:
        return False

    # Bad delta from chain length:
    #
    #   If the number of deltas in the chain gets too high.
    if revlog._maxchainlen and revlog._maxchainlen < deltainfo.chainlen:
        return False

    # bad delta from intermediate snapshot size limit
    #
    #   If an intermediate snapshot size is higher than the limit. The limit
    #   exists to prevent endless chains of intermediate deltas from being
    #   created.
    if (
        deltainfo.snapshotdepth is not None
        and (textlen >> deltainfo.snapshotdepth) < deltainfo.deltalen
    ):
        return False

    # bad delta if new intermediate snapshot is larger than the previous
    # snapshot
    if (
        deltainfo.snapshotdepth
        and revlog.length(deltainfo.base) < deltainfo.deltalen
    ):
        return False

    return True


# If a revision's full text is that much bigger than a base candidate full
# text's, it is very unlikely that it will produce a valid delta. We no longer
# consider these candidates.
LIMIT_BASE2TEXT = 500


def _candidategroups(revlog, textlen, p1, p2, cachedelta):
    """Provides groups of revisions to be tested as delta bases

    This top level function focuses on emitting groups with unique and
    worthwhile content. See _raw_candidate_groups for details about the group
    order.
    """
    # should we try to build a delta?
    if not (len(revlog) and revlog._storedeltachains):
        yield None
        return

    deltalength = revlog.length
    deltaparent = revlog.deltaparent
    sparse = revlog._sparserevlog
    good = None

    deltas_limit = textlen * LIMIT_DELTA2TEXT

    tested = {nullrev}
    candidates = _refinedgroups(revlog, p1, p2, cachedelta)
    while True:
        temptative = candidates.send(good)
        if temptative is None:
            break
        group = []
        for rev in temptative:
            # skip over empty delta (no need to include them in a chain)
            while revlog._generaldelta and not (
                rev == nullrev or rev in tested or deltalength(rev)
            ):
                tested.add(rev)
                rev = deltaparent(rev)
            # no need to try a delta against nullrev, this will be done as a
            # last resort.
            if rev == nullrev:
                continue
            # filter out revisions we tested already
            if rev in tested:
                continue
            tested.add(rev)
            # filter out delta bases that will never produce a good delta
            if deltas_limit < revlog.length(rev):
                continue
            if sparse and revlog.rawsize(rev) < (textlen // LIMIT_BASE2TEXT):
                continue
            # no delta for rawtext-changing revs (see "candelta" for why)
            if revlog.flags(rev) & REVIDX_RAWTEXT_CHANGING_FLAGS:
                continue
            # If we reach here, we are about to build and test a delta.
            # The delta building process will compute the chaininfo in all
            # cases; since that computation is cached, it is fine to access
            # it here too.
            chainlen, chainsize = revlog._chaininfo(rev)
            # if chain will be too long, skip base
            if revlog._maxchainlen and chainlen >= revlog._maxchainlen:
                continue
            # if the chain already has too much data, skip base
            if deltas_limit < chainsize:
                continue
            if sparse and revlog.upperboundcomp is not None:
                maxcomp = revlog.upperboundcomp
                basenotsnap = (p1, p2, nullrev)
                if rev not in basenotsnap and revlog.issnapshot(rev):
                    snapshotdepth = revlog.snapshotdepth(rev)
                    # If text is significantly larger than the base, we can
                    # expect the resulting delta to be proportional to the size
                    # difference
                    revsize = revlog.rawsize(rev)
                    rawsizedistance = max(textlen - revsize, 0)
                    # use an estimate of the compression upper bound.
                    lowestrealisticdeltalen = rawsizedistance // maxcomp

                    # check the absolute constraint on the delta size
                    snapshotlimit = textlen >> snapshotdepth
                    if snapshotlimit < lowestrealisticdeltalen:
                        # delta lower bound is larger than accepted upper bound
                        continue

                    # check the relative constraint on the delta size
                    revlength = revlog.length(rev)
                    if revlength < lowestrealisticdeltalen:
                        # delta probable lower bound is larger than target base
                        continue

            group.append(rev)
        if group:
            # XXX: in the sparse revlog case, group can become large,
            #      impacting performance. Some bounding or slicing mechanism
            #      would help to reduce this impact.
            good = yield tuple(group)
    yield None


def _findsnapshots(revlog, cache, start_rev):
    """find snapshot from start_rev to tip"""
    if util.safehasattr(revlog.index, b'findsnapshots'):
        revlog.index.findsnapshots(cache, start_rev)
    else:
        deltaparent = revlog.deltaparent
        issnapshot = revlog.issnapshot
        for rev in revlog.revs(start_rev):
            if issnapshot(rev):
                cache[deltaparent(rev)].append(rev)


def _refinedgroups(revlog, p1, p2, cachedelta):
    good = None
    # First we try to reuse the delta contained in the bundle.
    # (or from the source revlog)
    #
    # This logic only applies to general delta repositories and can be
    # disabled through configuration. Disabling reuse of the source delta is
    # useful when we want to make sure we recomputed "optimal" deltas.
    if cachedelta and revlog._generaldelta and revlog._lazydeltabase:
        # Assume what we received from the server is a good choice
        # build delta will reuse the cache
        good = yield (cachedelta[0],)
        if good is not None:
            yield None
            return
    snapshots = collections.defaultdict(list)
    for candidates in _rawgroups(revlog, p1, p2, cachedelta, snapshots):
        good = yield candidates
        if good is not None:
            break

    # If sparse revlog is enabled, we can try to refine the available deltas
    if not revlog._sparserevlog:
        yield None
        return

    # if we have a refinable value, try to refine it
    if good is not None and good not in (p1, p2) and revlog.issnapshot(good):
        # refine snapshot down
        previous = None
        while previous != good:
            previous = good
            base = revlog.deltaparent(good)
            if base == nullrev:
                break
            good = yield (base,)
        # refine snapshot up
        if not snapshots:
            _findsnapshots(revlog, snapshots, good + 1)
        previous = None
        while good != previous:
            previous = good
            children = tuple(sorted(c for c in snapshots[good]))
            good = yield children

    # we have found nothing
    yield None


def _rawgroups(revlog, p1, p2, cachedelta, snapshots=None):
    """Provides groups of revisions to be tested as delta bases

    This lower level function focuses on emitting deltas that are
    theoretically interesting, without looking at any practical details. The
    group order aims at providing fast or small candidates first.
    """
    gdelta = revlog._generaldelta
    # gate sparse behind general-delta because of issue6056
    sparse = gdelta and revlog._sparserevlog
    curr = len(revlog)
    prev = curr - 1
    deltachain = lambda rev: revlog._deltachain(rev)[0]

    if gdelta:
        # exclude already lazy tested base if any
        parents = [p for p in (p1, p2) if p != nullrev]

        if not revlog._deltabothparents and len(parents) == 2:
            parents.sort()
            # To minimize the chance of having to build a fulltext,
            # pick first whichever parent is closest to us (max rev)
            yield (parents[1],)
            # then the other one (min rev) if the first did not fit
            yield (parents[0],)
        elif len(parents) > 0:
            # Test all parents (1 or 2), and keep the best candidate
            yield parents

    if sparse and parents:
        if snapshots is None:
            # map: base-rev: snapshot-rev
            snapshots = collections.defaultdict(list)
        # See if we can use an existing snapshot in the parent chains to use as
        # a base for a new intermediate-snapshot
        #
        # search for snapshot in parents delta chain
        # map: snapshot-level: snapshot-rev
        parents_snaps = collections.defaultdict(set)
        candidate_chains = [deltachain(p) for p in parents]
        for chain in candidate_chains:
            for idx, s in enumerate(chain):
                if not revlog.issnapshot(s):
                    break
                parents_snaps[idx].add(s)
        snapfloor = min(parents_snaps[0]) + 1
        _findsnapshots(revlog, snapshots, snapfloor)
        # search for the highest "unrelated" revision
        #
        # Adding snapshots used by "unrelated" revisions increases the odds
        # that we reuse an independent, yet better, snapshot chain.
        #
        # XXX instead of building a set of revisions, we could lazily
        # enumerate over the chains. That would be more efficient, however we
        # stick to simple code for now.
        all_revs = set()
        for chain in candidate_chains:
            all_revs.update(chain)
        other = None
        for r in revlog.revs(prev, snapfloor):
            if r not in all_revs:
                other = r
                break
        if other is not None:
            # To avoid unfair competition, we won't use unrelated intermediate
            # snapshots that are deeper than the ones from the parent delta
            # chain.
            max_depth = max(parents_snaps.keys())
            chain = deltachain(other)
            for idx, s in enumerate(chain):
                if s < snapfloor:
                    continue
                if max_depth < idx:
                    break
                if not revlog.issnapshot(s):
                    break
                parents_snaps[idx].add(s)
        # Test them as possible intermediate snapshot bases
        # We test them from highest to lowest level. High level ones are more
        # likely to result in a small delta
        floor = None
        for idx, snaps in sorted(parents_snaps.items(), reverse=True):
            siblings = set()
            for s in snaps:
                siblings.update(snapshots[s])
            # Before considering making a new intermediate snapshot, we check
            # if an existing snapshot, a child of the base we consider, would
            # be suitable.
            #
            # It gives a chance to reuse a delta chain "unrelated" to the
            # current revision instead of starting our own. Without such
            # re-use, topological branches would keep reopening new chains,
            # creating more and more snapshots as the repository grows.

            if floor is not None:
                # We only do this for siblings created after the one in our
                # parent's delta chain. Those created before have less chance
                # of being a valid base, since our ancestors had to create a
                # new snapshot.
                siblings = [r for r in siblings if floor < r]
            yield tuple(sorted(siblings))
            # then test the bases from our parent's delta chain.
            yield tuple(sorted(snaps))
            floor = min(snaps)
        # No suitable base found in the parent chain, search if any full
        # snapshots emitted since parent's base would be a suitable base for an
        # intermediate snapshot.
        #
        # It gives a chance to reuse a delta chain unrelated to the current
        # revision instead of starting our own. Without such re-use,
        # topological branches would keep reopening new full chains, creating
        # more and more snapshots as the repository grows.
        yield tuple(snapshots[nullrev])

    if not sparse:
        # other approaches failed, try against prev to hopefully save us a
        # fulltext.
        yield (prev,)


class deltacomputer(object):
    def __init__(self, revlog):
        self.revlog = revlog

    def buildtext(self, revinfo, fh):
        """Builds a fulltext version of a revision

        revinfo: _revisioninfo instance that contains all needed info
        fh:      file handle to either the .i or the .d revlog file,
                 depending on whether it is inlined or not
        """
        btext = revinfo.btext
        if btext[0] is not None:
            return btext[0]

        revlog = self.revlog
        cachedelta = revinfo.cachedelta
        baserev = cachedelta[0]
        delta = cachedelta[1]

        fulltext = btext[0] = _textfromdelta(
            fh,
            revlog,
            baserev,
            delta,
            revinfo.p1,
            revinfo.p2,
            revinfo.flags,
            revinfo.node,
        )
        return fulltext

    def _builddeltadiff(self, base, revinfo, fh):
        revlog = self.revlog
        t = self.buildtext(revinfo, fh)
        if revlog.iscensored(base):
            # deltas based on a censored revision must replace the
            # full content in one patch, so delta works everywhere
            header = mdiff.replacediffheader(revlog.rawsize(base), len(t))
            delta = header + t
        else:
            ptext = revlog.rawdata(base, _df=fh)
            delta = mdiff.textdiff(ptext, t)

        return delta

    def _builddeltainfo(self, revinfo, base, fh):
        # can we use the cached delta?
        revlog = self.revlog
        chainbase = revlog.chainbase(base)
        if revlog._generaldelta:
            deltabase = base
        else:
            deltabase = chainbase
        snapshotdepth = None
        if revlog._sparserevlog and deltabase == nullrev:
            snapshotdepth = 0
        elif revlog._sparserevlog and revlog.issnapshot(deltabase):
            # A delta chain should always be one full snapshot,
            # zero or more semi-snapshots, and zero or more deltas
            p1, p2 = revlog.rev(revinfo.p1), revlog.rev(revinfo.p2)
            if deltabase not in (p1, p2) and revlog.issnapshot(deltabase):
                snapshotdepth = len(revlog._deltachain(deltabase)[0])
        delta = None
        if revinfo.cachedelta:
            cachebase, cachediff = revinfo.cachedelta
            # check if the diff still applies
            currentbase = cachebase
            while (
                currentbase != nullrev
                and currentbase != base
                and self.revlog.length(currentbase) == 0
            ):
                currentbase = self.revlog.deltaparent(currentbase)
            if self.revlog._lazydelta and currentbase == base:
                delta = revinfo.cachedelta[1]
        if delta is None:
            delta = self._builddeltadiff(base, revinfo, fh)
        # snapshotdepth needs to be neither None nor a 0-level snapshot
        if revlog.upperboundcomp is not None and snapshotdepth:
            lowestrealisticdeltalen = len(delta) // revlog.upperboundcomp
            snapshotlimit = revinfo.textlen >> snapshotdepth
            if snapshotlimit < lowestrealisticdeltalen:
                return None
            if revlog.length(base) < lowestrealisticdeltalen:
                return None
        header, data = revlog.compress(delta)
        deltalen = len(header) + len(data)
        offset = revlog.end(len(revlog) - 1)
        dist = deltalen + offset - revlog.start(chainbase)
        chainlen, compresseddeltalen = revlog._chaininfo(base)
        chainlen += 1
        compresseddeltalen += deltalen

        return _deltainfo(
            dist,
            deltalen,
            (header, data),
            deltabase,
            chainbase,
            chainlen,
            compresseddeltalen,
            snapshotdepth,
        )

    def _fullsnapshotinfo(self, fh, revinfo):
        curr = len(self.revlog)
        rawtext = self.buildtext(revinfo, fh)
        data = self.revlog.compress(rawtext)
        compresseddeltalen = deltalen = dist = len(data[1]) + len(data[0])
        deltabase = chainbase = curr
        snapshotdepth = 0
        chainlen = 1

        return _deltainfo(
            dist,
            deltalen,
            data,
            deltabase,
            chainbase,
            chainlen,
            compresseddeltalen,
            snapshotdepth,
        )

    def finddeltainfo(self, revinfo, fh):
        """Find an acceptable delta against a candidate revision

        revinfo: information about the revision (instance of _revisioninfo)
        fh:      file handle to either the .i or the .d revlog file,
                 depending on whether it is inlined or not

        Returns the first acceptable candidate revision, as ordered by
        _candidategroups

        If no suitable deltabase is found, we return delta info for a full
        snapshot.
        """
        if not revinfo.textlen:
            return self._fullsnapshotinfo(fh, revinfo)

        # no delta for flag processor revision (see "candelta" for why)
        # not calling candelta since only one revision needs test, also to
        # avoid overhead fetching flags again.
        if revinfo.flags & REVIDX_RAWTEXT_CHANGING_FLAGS:
            return self._fullsnapshotinfo(fh, revinfo)

        cachedelta = revinfo.cachedelta
        p1 = revinfo.p1
        p2 = revinfo.p2
        revlog = self.revlog

        deltainfo = None
        p1r, p2r = revlog.rev(p1), revlog.rev(p2)
        groups = _candidategroups(
            self.revlog, revinfo.textlen, p1r, p2r, cachedelta
        )
        candidaterevs = next(groups)
        while candidaterevs is not None:
            nominateddeltas = []
            if deltainfo is not None:
                # if we already found a good delta,
                # challenge it against refined candidates
                nominateddeltas.append(deltainfo)
            for candidaterev in candidaterevs:
                candidatedelta = self._builddeltainfo(revinfo, candidaterev, fh)
                if candidatedelta is not None:
                    if isgooddeltainfo(self.revlog, candidatedelta, revinfo):
                        nominateddeltas.append(candidatedelta)
            if nominateddeltas:
                deltainfo = min(nominateddeltas, key=lambda x: x.deltalen)
            if deltainfo is not None:
                candidaterevs = groups.send(deltainfo.base)
            else:
                candidaterevs = next(groups)

        if deltainfo is None:
            deltainfo = self._fullsnapshotinfo(fh, revinfo)
        return deltainfo