copies: iterate over children directly (instead of parents)
Before this change we would gather all parent → child edges and iterate over
all parents, gathering copy information for their children and aggregating it
from there.
There is no strict requirement for edges to be processed in that specific
order. We can just as well iterate over all "children" revisions and
aggregate data from both parents at the same time. This patch does that.
This makes various things simpler:
* since both parents are processed at the same time, we no longer need to
  cache data for merges (see the next changeset for details),
* we no longer need a nested loop to process the data,
* we no longer need to carry partial merge data for a revision from one loop
  iteration to the next when processing merges,
* we no longer need to build a full parent -> children mapping; we only rely
  on a simpler "parent -> number of children" map, which is lighter on memory,
* the data access pattern is now simpler (from lower revisions to higher
  revisions) and entirely predictable. That predictability opens the way to
  prefetching and parallel processing.
So the new iteration order requires simpler code and opens the way to
interesting optimisations (illustrated by the sketch below).
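A minimal sketch of the child-first aggregation, assuming revisions are
numbered so that parents always come before children; repo.parentrevs() and
new_copies_in() are hypothetical stand-ins for the real accessors:

    def gather_copies(repo, revs):
        # simple "parent -> number of children" map; lighter on memory
        # than a full parent -> children mapping
        nchildren = {}
        for rev in revs:
            for p in repo.parentrevs(rev):
                nchildren[p] = nchildren.get(p, 0) + 1
        copies = {}  # rev -> aggregated copy data
        for rev in sorted(revs):  # lower to higher: predictable accesses
            data = {}
            for p in repo.parentrevs(rev):
                # both parents are handled in the same pass, so merges
                # need no cached partial data
                data.update(copies.get(p, {}))
                nchildren[p] -= 1
                if not nchildren[p]:
                    copies.pop(p, None)  # last child seen: free the data
            data.update(new_copies_in(repo, rev))  # copies added by rev
            copies[rev] = data
        return copies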
The effect on performance is quite good. In the worst case, we don't see any
significant negative impact, and in the best case the reduction of round
trips to Python provides a significant speedup. Some examples below:
Repo  Case  Source-Rev  Dest-Rev  # of revisions  old time  new time  Difference  Factor  time per rev
---------------------------------------------------------------------------------------------------------------------------------------------------------------
mozilla-try  x00000_revs_x00000_added_0_copies  dc8a3ca7010e d16fde900c9c : 34414 revs, 0.962867 s, 0.502584 s, -0.460283 s, × 0.5220, 14 µs/rev
mozilla-try  x0000_revs_xx000_added_x_copies  156f6e2674f2 4d0f2c178e66 : 8598 revs, 0.110717 s, 0.076323 s, -0.034394 s, × 0.6894, 8 µs/rev
# full comparison between the previous changeset and this one
Repo  Case  Source-Rev  Dest-Rev  # of revisions  old time  new time  Difference  Factor  time per rev
---------------------------------------------------------------------------------------------------------------------------------------------------------------
mercurial  x_revs_x_added_0_copies  ad6b123de1c7 39cfcef4f463 : 1 revs, 0.000048 s, 0.000041 s, -0.000007 s, × 0.8542, 41 µs/rev
mercurial  x_revs_x_added_x_copies  2b1c78674230 0c1d10351869 : 6 revs, 0.000153 s, 0.000102 s, -0.000051 s, × 0.6667, 17 µs/rev
mercurial  x000_revs_x000_added_x_copies  81f8ff2a9bf2 dd3267698d84 : 1032 revs, 0.004209 s, 0.004254 s, +0.000045 s, × 1.0107, 4 µs/rev
pypy  x_revs_x_added_0_copies  aed021ee8ae8 099ed31b181b : 9 revs, 0.000203 s, 0.000282 s, +0.000079 s, × 1.3892, 31 µs/rev
pypy  x_revs_x000_added_0_copies  4aa4e1f8e19a 359343b9ac0e : 1 revs, 0.000059 s, 0.000048 s, -0.000011 s, × 0.8136, 48 µs/rev
pypy  x_revs_x_added_x_copies  ac52eb7bbbb0 72e022663155 : 7 revs, 0.000194 s, 0.000211 s, +0.000017 s, × 1.0876, 30 µs/rev
pypy  x_revs_x00_added_x_copies  c3b14617fbd7 ace7255d9a26 : 1 revs, 0.000380 s, 0.000375 s, -0.000005 s, × 0.9868, 375 µs/rev
pypy  x_revs_x000_added_x000_copies  df6f7a526b60 a83dc6a2d56f : 6 revs, 0.010588 s, 0.010574 s, -0.000014 s, × 0.9987, 1762 µs/rev
pypy  x000_revs_xx00_added_0_copies  89a76aede314 2f22446ff07e : 4785 revs, 0.048961 s, 0.049974 s, +0.001013 s, × 1.0207, 10 µs/rev
pypy  x000_revs_x000_added_x_copies  8a3b5bfd266e 2c68e87c3efe : 6780 revs, 0.083612 s, 0.084300 s, +0.000688 s, × 1.0082, 12 µs/rev
pypy  x000_revs_x000_added_x000_copies  89a76aede314 7b3dda341c84 : 5441 revs, 0.058579 s, 0.060128 s, +0.001549 s, × 1.0264, 11 µs/rev
pypy  x0000_revs_x_added_0_copies  d1defd0dc478 c9cb1334cc78 : 43645 revs, 0.736783 s, 0.686542 s, -0.050241 s, × 0.9318, 15 µs/rev
pypy  x0000_revs_xx000_added_0_copies  bf2c629d0071 4ffed77c095c : 2 revs, 0.022050 s, 0.009277 s, -0.012773 s, × 0.4207, 4638 µs/rev
pypy  x0000_revs_xx000_added_x000_copies  08ea3258278e d9fa043f30c0 : 11316 revs, 0.120800 s, 0.114733 s, -0.006067 s, × 0.9498, 10 µs/rev
netbeans  x_revs_x_added_0_copies  fb0955ffcbcd a01e9239f9e7 : 2 revs, 0.000140 s, 0.000081 s, -0.000059 s, × 0.5786, 40 µs/rev
netbeans  x_revs_x000_added_0_copies  6f360122949f 20eb231cc7d0 : 2 revs, 0.000114 s, 0.000107 s, -0.000007 s, × 0.9386, 53 µs/rev
netbeans  x_revs_x_added_x_copies  1ada3faf6fb6 5a39d12eecf4 : 3 revs, 0.000224 s, 0.000173 s, -0.000051 s, × 0.7723, 57 µs/rev
netbeans  x_revs_x00_added_x_copies  35be93ba1e2c 9eec5e90c05f : 9 revs, 0.000723 s, 0.000698 s, -0.000025 s, × 0.9654, 77 µs/rev
netbeans  x000_revs_xx00_added_0_copies  eac3045b4fdd 51d4ae7f1290 : 1421 revs, 0.009665 s, 0.009248 s, -0.000417 s, × 0.9569, 6 µs/rev
netbeans  x000_revs_x000_added_x_copies  e2063d266acd 6081d72689dc : 1533 revs, 0.014820 s, 0.015446 s, +0.000626 s, × 1.0422, 10 µs/rev
netbeans  x000_revs_x000_added_x000_copies  ff453e9fee32 411350406ec2 : 5750 revs, 0.076049 s, 0.074373 s, -0.001676 s, × 0.9780, 12 µs/rev
netbeans  x0000_revs_xx000_added_x000_copies  588c2d1ced70 1aad62e59ddd : 66949 revs, 0.683603 s, 0.639870 s, -0.043733 s, × 0.9360, 9 µs/rev
mozilla-central  x_revs_x_added_0_copies  3697f962bb7b 7015fcdd43a2 : 2 revs, 0.000161 s, 0.000088 s, -0.000073 s, × 0.5466, 44 µs/rev
mozilla-central  x_revs_x000_added_0_copies  dd390860c6c9 40d0c5bed75d : 8 revs, 0.000234 s, 0.000199 s, -0.000035 s, × 0.8504, 24 µs/rev
mozilla-central  x_revs_x_added_x_copies  8d198483ae3b 14207ffc2b2f : 9 revs, 0.000247 s, 0.000171 s, -0.000076 s, × 0.6923, 19 µs/rev
mozilla-central  x_revs_x00_added_x_copies  98cbc58cc6bc 446a150332c3 : 7 revs, 0.000630 s, 0.000592 s, -0.000038 s, × 0.9397, 84 µs/rev
mozilla-central  x_revs_x000_added_x000_copies  3c684b4b8f68 0a5e72d1b479 : 3 revs, 0.003286 s, 0.003151 s, -0.000135 s, × 0.9589, 1050 µs/rev
mozilla-central  x_revs_x0000_added_x0000_copies  effb563bb7e5 c07a39dc4e80 : 6 revs, 0.062441 s, 0.061612 s, -0.000829 s, × 0.9867, 10268 µs/rev
mozilla-central  x000_revs_xx00_added_0_copies  6100d773079a 04a55431795e : 1593 revs, 0.005423 s, 0.005381 s, -0.000042 s, × 0.9923, 3 µs/rev
mozilla-central  x000_revs_x000_added_x_copies  9f17a6fc04f9 2d37b966abed : 41 revs, 0.005919 s, 0.003742 s, -0.002177 s, × 0.6322, 91 µs/rev
mozilla-central  x000_revs_x000_added_x000_copies  7c97034feb78 4407bd0c6330 : 7839 revs, 0.062597 s, 0.061983 s, -0.000614 s, × 0.9902, 7 µs/rev
mozilla-central  x0000_revs_xx000_added_0_copies  9eec5917337d 67118cc6dcad : 615 revs, 0.043551 s, 0.019861 s, -0.023690 s, × 0.4560, 32 µs/rev
mozilla-central  x0000_revs_xx000_added_x000_copies  f78c615a656c 96a38b690156 : 30263 revs, 0.192475 s, 0.188101 s, -0.004374 s, × 0.9773, 6 µs/rev
mozilla-central  x00000_revs_x0000_added_x0000_copies  6832ae71433c 4c222a1d9a00 : 153721 revs, 1.955575 s, 1.806696 s, -0.148879 s, × 0.9239, 11 µs/rev
mozilla-central  x00000_revs_x00000_added_x000_copies  76caed42cf7c 1daa622bbe42 : 204976 revs, 2.886501 s, 2.682987 s, -0.203514 s, × 0.9295, 13 µs/rev
mozilla-try  x_revs_x_added_0_copies  aaf6dde0deb8 9790f499805a : 2 revs, 0.001181 s, 0.000852 s, -0.000329 s, × 0.7214, 426 µs/rev
mozilla-try  x_revs_x000_added_0_copies  d8d0222927b4 5bb8ce8c7450 : 2 revs, 0.001189 s, 0.000859 s, -0.000330 s, × 0.7225, 429 µs/rev
mozilla-try  x_revs_x_added_x_copies  092fcca11bdb 936255a0384a : 4 revs, 0.000563 s, 0.000150 s, -0.000413 s, × 0.2664, 37 µs/rev
mozilla-try  x_revs_x00_added_x_copies  b53d2fadbdb5 017afae788ec : 2 revs, 0.001548 s, 0.001158 s, -0.000390 s, × 0.7481, 579 µs/rev
mozilla-try  x_revs_x000_added_x000_copies  20408ad61ce5 6f0ee96e21ad : 1 revs, 0.027782 s, 0.027240 s, -0.000542 s, × 0.9805, 27240 µs/rev
mozilla-try  x_revs_x0000_added_x0000_copies  effb563bb7e5 c07a39dc4e80 : 6 revs, 0.062781 s, 0.062824 s, +0.000043 s, × 1.0007, 10470 µs/rev
mozilla-try  x000_revs_xx00_added_0_copies  6100d773079a 04a55431795e : 1593 revs, 0.005778 s, 0.005463 s, -0.000315 s, × 0.9455, 3 µs/rev
mozilla-try  x000_revs_x000_added_x_copies  9f17a6fc04f9 2d37b966abed : 41 revs, 0.006192 s, 0.004238 s, -0.001954 s, × 0.6844, 103 µs/rev
mozilla-try  x000_revs_x000_added_x000_copies  1346fd0130e4 4c65cbdabc1f : 6657 revs, 0.065391 s, 0.064113 s, -0.001278 s, × 0.9805, 9 µs/rev
mozilla-try  x0000_revs_x_added_0_copies  63519bfd42ee a36a2a865d92 : 40314 revs, 0.317216 s, 0.294063 s, -0.023153 s, × 0.9270, 7 µs/rev
mozilla-try  x0000_revs_x_added_x_copies  9fe69ff0762d bcabf2a78927 : 38690 revs, 0.303119 s, 0.281493 s, -0.021626 s, × 0.9287, 7 µs/rev
mozilla-try  x0000_revs_xx000_added_x_copies  156f6e2674f2 4d0f2c178e66 : 8598 revs, 0.110717 s, 0.076323 s, -0.034394 s, × 0.6894, 8 µs/rev
mozilla-try  x0000_revs_xx000_added_0_copies  9eec5917337d 67118cc6dcad : 615 revs, 0.045739 s, 0.020390 s, -0.025349 s, × 0.4458, 33 µs/rev
mozilla-try  x0000_revs_xx000_added_x000_copies  89294cd501d9 7ccb2fc7ccb5 : 97052 revs, 3.098021 s, 3.023879 s, -0.074142 s, × 0.9761, 31 µs/rev
mozilla-try  x0000_revs_x0000_added_x0000_copies  e928c65095ed e951f4ad123a : 52031 revs, 0.771480 s, 0.735549 s, -0.035931 s, × 0.9534, 14 µs/rev
mozilla-try  x00000_revs_x_added_0_copies  6a320851d377 1ebb79acd503 : 363753 revs, 18.813422 s, 18.568900 s, -0.244522 s, × 0.9870, 51 µs/rev
mozilla-try  x00000_revs_x00000_added_0_copies  dc8a3ca7010e d16fde900c9c : 34414 revs, 0.962867 s, 0.502584 s, -0.460283 s, × 0.5220, 14 µs/rev
mozilla-try  x00000_revs_x_added_x_copies  5173c4b6f97c 95d83ee7242d : 362229 revs, 18.684923 s, 18.356645 s, -0.328278 s, × 0.9824, 50 µs/rev
mozilla-try  x00000_revs_x000_added_x_copies  9126823d0e9c ca82787bb23c : 359344 revs, 18.296305 s, 18.250393 s, -0.045912 s, × 0.9975, 50 µs/rev
mozilla-try  x00000_revs_x0000_added_x0000_copies  8d3fafa80d4b eb884023b810 : 192665 revs, 3.061887 s, 2.792459 s, -0.269428 s, × 0.9120, 14 µs/rev
mozilla-try  x00000_revs_x00000_added_x0000_copies  1b661134e2ca 1ae03d022d6d : 228985 revs, 103.869641 s, 107.697264 s, +3.827623 s, × 1.0369, 470 µs/rev
mozilla-try  x00000_revs_x00000_added_x000_copies  9b2a99adc05e 8e29777b48e6 : 382065 revs, 64.262957 s, 63.961040 s, -0.301917 s, × 0.9953, 167 µs/rev
Differential Revision: https://phab.mercurial-scm.org/D9422
# encoding.py - character transcoding support for Mercurial
#
# Copyright 2005-2009 Matt Mackall <mpm@selenic.com> and others
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
from __future__ import absolute_import, print_function
import locale
import os
import unicodedata
from .pycompat import getattr
from . import (
error,
policy,
pycompat,
)
from .pure import charencode as charencodepure
if pycompat.TYPE_CHECKING:
from typing import (
Any,
Callable,
List,
Text,
Type,
TypeVar,
Union,
)
# keep pyflakes happy
for t in (Any, Callable, List, Text, Type, Union):
assert t
_Tlocalstr = TypeVar('_Tlocalstr', bound='localstr')
charencode = policy.importmod('charencode')
isasciistr = charencode.isasciistr
asciilower = charencode.asciilower
asciiupper = charencode.asciiupper
_jsonescapeu8fast = charencode.jsonescapeu8fast
_sysstr = pycompat.sysstr
if pycompat.ispy3:
unichr = chr
# These unicode characters are ignored by HFS+ (Apple Technote 1150,
# "Unicode Subtleties"), so we need to ignore them in some places for
# sanity.
_ignore = [
unichr(int(x, 16)).encode("utf-8")
for x in b"200c 200d 200e 200f 202a 202b 202c 202d 202e "
b"206a 206b 206c 206d 206e 206f feff".split()
]
# verify the next function will work
assert all(i.startswith((b"\xe2", b"\xef")) for i in _ignore)
def hfsignoreclean(s):
# type: (bytes) -> bytes
"""Remove codepoints ignored by HFS+ from s.
>>> hfsignoreclean(u'.h\u200cg'.encode('utf-8'))
'.hg'
>>> hfsignoreclean(u'.h\ufeffg'.encode('utf-8'))
'.hg'
"""
if b"\xe2" in s or b"\xef" in s:
for c in _ignore:
s = s.replace(c, b'')
return s
# encoding.environ is provided read-only; it may not be used to modify
# the process environment
_nativeenviron = not pycompat.ispy3 or os.supports_bytes_environ
if not pycompat.ispy3:
environ = os.environ # re-exports
elif _nativeenviron:
environ = os.environb # re-exports
else:
# preferred encoding isn't known yet; use utf-8 to avoid unicode error
# and recreate it once encoding is settled
environ = {
k.encode('utf-8'): v.encode('utf-8')
for k, v in os.environ.items() # re-exports
}
_encodingrewrites = {
b'646': b'ascii',
b'ANSI_X3.4-1968': b'ascii',
}
# cp65001 is a Windows variant of utf-8, which isn't supported on Python 2.
# No idea if it should be rewritten to the canonical name 'utf-8' on Python 3.
# https://bugs.python.org/issue13216
if pycompat.iswindows and not pycompat.ispy3:
_encodingrewrites[b'cp65001'] = b'utf-8'
try:
encoding = environ.get(b"HGENCODING")
if not encoding:
encoding = locale.getpreferredencoding().encode('ascii') or b'ascii'
encoding = _encodingrewrites.get(encoding, encoding)
except locale.Error:
encoding = b'ascii'
encodingmode = environ.get(b"HGENCODINGMODE", b"strict")
fallbackencoding = b'ISO-8859-1'
class localstr(bytes):
"""This class allows strings that are unmodified to be
round-tripped to the local encoding and back"""
def __new__(cls, u, l):
s = bytes.__new__(cls, l)
s._utf8 = u
return s
if pycompat.TYPE_CHECKING:
# pseudo implementation to help pytype see localstr() constructor
def __init__(self, u, l):
# type: (bytes, bytes) -> None
super(localstr, self).__init__(l)
self._utf8 = u
def __hash__(self):
return hash(self._utf8) # avoid collisions in local string space
class safelocalstr(bytes):
"""Tagged string denoting it was previously an internal UTF-8 string,
and can be converted back to UTF-8 losslessly
>>> assert safelocalstr(b'\\xc3') == b'\\xc3'
>>> assert b'\\xc3' == safelocalstr(b'\\xc3')
>>> assert b'\\xc3' in {safelocalstr(b'\\xc3'): 0}
>>> assert safelocalstr(b'\\xc3') in {b'\\xc3': 0}
"""
def tolocal(s):
# type: (bytes) -> bytes
"""
Convert a string from internal UTF-8 to local encoding
All internal strings should be UTF-8 but some repos before the
implementation of locale support may contain latin1 or possibly
other character sets. We attempt to decode everything strictly
using UTF-8, then Latin-1, and failing that, we use UTF-8 and
replace unknown characters.
The localstr class is used to cache the known UTF-8 encoding of
strings next to their local representation to allow lossless
round-trip conversion back to UTF-8.
>>> u = b'foo: \\xc3\\xa4' # utf-8
>>> l = tolocal(u)
>>> l
'foo: ?'
>>> fromlocal(l)
'foo: \\xc3\\xa4'
>>> u2 = b'foo: \\xc3\\xa1'
>>> d = { l: 1, tolocal(u2): 2 }
>>> len(d) # no collision
2
>>> b'foo: ?' in d
False
>>> l1 = b'foo: \\xe4' # historical latin1 fallback
>>> l = tolocal(l1)
>>> l
'foo: ?'
>>> fromlocal(l) # magically in utf-8
'foo: \\xc3\\xa4'
"""
if isasciistr(s):
return s
try:
try:
# make sure string is actually stored in UTF-8
u = s.decode('UTF-8')
if encoding == b'UTF-8':
# fast path
return s
r = u.encode(_sysstr(encoding), "replace")
if u == r.decode(_sysstr(encoding)):
# r is a safe, non-lossy encoding of s
return safelocalstr(r)
return localstr(s, r)
except UnicodeDecodeError:
# we should only get here if we're looking at an ancient changeset
try:
u = s.decode(_sysstr(fallbackencoding))
r = u.encode(_sysstr(encoding), "replace")
if u == r.decode(_sysstr(encoding)):
# r is a safe, non-lossy encoding of s
return safelocalstr(r)
return localstr(u.encode('UTF-8'), r)
except UnicodeDecodeError:
u = s.decode("utf-8", "replace") # last ditch
# can't round-trip
return u.encode(_sysstr(encoding), "replace")
except LookupError as k:
raise error.Abort(
pycompat.bytestr(k), hint=b"please check your locale settings"
)
def fromlocal(s):
# type: (bytes) -> bytes
"""
Convert a string from the local character encoding to UTF-8
We attempt to decode strings using the encoding mode set by
HGENCODINGMODE, which defaults to 'strict'. In this mode, unknown
characters will cause an error message. Other modes include
'replace', which replaces unknown characters with a special
Unicode character, and 'ignore', which drops the character.
"""
# can we do a lossless round-trip?
if isinstance(s, localstr):
return s._utf8
if isasciistr(s):
return s
try:
u = s.decode(_sysstr(encoding), _sysstr(encodingmode))
return u.encode("utf-8")
except UnicodeDecodeError as inst:
sub = s[max(0, inst.start - 10) : inst.start + 10]
raise error.Abort(
b"decoding near '%s': %s!" % (sub, pycompat.bytestr(inst))
)
except LookupError as k:
raise error.Abort(k, hint=b"please check your locale settings")
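# Example (illustrative): with encoding = b'ascii' and the default 'strict'
# mode, fromlocal(b'caf\xe9') raises error.Abort; with HGENCODINGMODE=replace
# the 0xe9 byte decodes to U+FFFD, so the result is b'caf\xef\xbf\xbd'.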
def unitolocal(u):
# type: (Text) -> bytes
"""Convert a unicode string to a byte string of local encoding"""
return tolocal(u.encode('utf-8'))
def unifromlocal(s):
# type: (bytes) -> Text
"""Convert a byte string of local encoding to a unicode string"""
return fromlocal(s).decode('utf-8')
def unimethod(bytesfunc):
# type: (Callable[[Any], bytes]) -> Callable[[Any], Text]
"""Create a proxy method that forwards __unicode__() and __str__() of
Python 3 to __bytes__()"""
def unifunc(obj):
return unifromlocal(bytesfunc(obj))
return unifunc
# converter functions between native str and byte strings. use these if the
# string is not aware of character encoding (e.g. exception messages) or is
# known to be locale dependent (e.g. date formatting)
if pycompat.ispy3:
strtolocal = unitolocal
strfromlocal = unifromlocal
strmethod = unimethod
else:
def strtolocal(s):
# type: (str) -> bytes
return s # pytype: disable=bad-return-type
def strfromlocal(s):
# type: (bytes) -> str
return s # pytype: disable=bad-return-type
strmethod = pycompat.identity
if not _nativeenviron:
# now encoding and helper functions are available, recreate the environ
# dict to be exported to other modules
environ = {
tolocal(k.encode('utf-8')): tolocal(v.encode('utf-8'))
for k, v in os.environ.items() # re-exports
}
if pycompat.ispy3:
# os.getcwd() on Python 3 returns string, but it has os.getcwdb() which
# returns bytes.
if pycompat.iswindows:
# Python 3 on Windows issues a DeprecationWarning about using the bytes
# API when os.getcwdb() is called.
getcwd = lambda: strtolocal(os.getcwd()) # re-exports
else:
getcwd = os.getcwdb # re-exports
else:
getcwd = os.getcwd # re-exports
# How to treat ambiguous-width characters. Set to 'wide' to treat as wide.
_wide = _sysstr(
environ.get(b"HGENCODINGAMBIGUOUS", b"narrow") == b"wide"
and b"WFA"
or b"WF"
)
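# _wide names the east_asian_width() classes that ucolwidth() counts as
# two columns: 'W'ide and 'F'ullwidth always, plus 'A'mbiguous when
# HGENCODINGAMBIGUOUS=wide.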
def colwidth(s):
# type: (bytes) -> int
"""Find the column width of a string for display in the local encoding"""
return ucolwidth(s.decode(_sysstr(encoding), 'replace'))
def ucolwidth(d):
# type: (Text) -> int
"""Find the column width of a Unicode string for display"""
eaw = getattr(unicodedata, 'east_asian_width', None)
if eaw is not None:
return sum([eaw(c) in _wide and 2 or 1 for c in d])
return len(d)
def getcols(s, start, c):
# type: (bytes, int, int) -> bytes
"""Use colwidth to find a c-column substring of s starting at byte
index start"""
for x in pycompat.xrange(start + c, len(s)):
t = s[start:x]
if colwidth(t) == c:
return t
raise ValueError('substring not found')
def trim(s, width, ellipsis=b'', leftside=False):
# type: (bytes, int, bytes, bool) -> bytes
"""Trim string 's' to at most 'width' columns (including 'ellipsis').
If 'leftside' is True, left side of string 's' is trimmed.
'ellipsis' is always placed at trimmed side.
>>> from .node import bin
>>> def bprint(s):
... print(pycompat.sysstr(s))
>>> ellipsis = b'+++'
>>> from . import encoding
>>> encoding.encoding = b'utf-8'
>>> t = b'1234567890'
>>> bprint(trim(t, 12, ellipsis=ellipsis))
1234567890
>>> bprint(trim(t, 10, ellipsis=ellipsis))
1234567890
>>> bprint(trim(t, 8, ellipsis=ellipsis))
12345+++
>>> bprint(trim(t, 8, ellipsis=ellipsis, leftside=True))
+++67890
>>> bprint(trim(t, 8))
12345678
>>> bprint(trim(t, 8, leftside=True))
34567890
>>> bprint(trim(t, 3, ellipsis=ellipsis))
+++
>>> bprint(trim(t, 1, ellipsis=ellipsis))
+
>>> u = u'\u3042\u3044\u3046\u3048\u304a' # 2 x 5 = 10 columns
>>> t = u.encode(pycompat.sysstr(encoding.encoding))
>>> bprint(trim(t, 12, ellipsis=ellipsis))
\xe3\x81\x82\xe3\x81\x84\xe3\x81\x86\xe3\x81\x88\xe3\x81\x8a
>>> bprint(trim(t, 10, ellipsis=ellipsis))
\xe3\x81\x82\xe3\x81\x84\xe3\x81\x86\xe3\x81\x88\xe3\x81\x8a
>>> bprint(trim(t, 8, ellipsis=ellipsis))
\xe3\x81\x82\xe3\x81\x84+++
>>> bprint(trim(t, 8, ellipsis=ellipsis, leftside=True))
+++\xe3\x81\x88\xe3\x81\x8a
>>> bprint(trim(t, 5))
\xe3\x81\x82\xe3\x81\x84
>>> bprint(trim(t, 5, leftside=True))
\xe3\x81\x88\xe3\x81\x8a
>>> bprint(trim(t, 4, ellipsis=ellipsis))
+++
>>> bprint(trim(t, 4, ellipsis=ellipsis, leftside=True))
+++
>>> t = bin(b'112233445566778899aa') # invalid byte sequence
>>> bprint(trim(t, 12, ellipsis=ellipsis))
\x11\x22\x33\x44\x55\x66\x77\x88\x99\xaa
>>> bprint(trim(t, 10, ellipsis=ellipsis))
\x11\x22\x33\x44\x55\x66\x77\x88\x99\xaa
>>> bprint(trim(t, 8, ellipsis=ellipsis))
\x11\x22\x33\x44\x55+++
>>> bprint(trim(t, 8, ellipsis=ellipsis, leftside=True))
+++\x66\x77\x88\x99\xaa
>>> bprint(trim(t, 8))
\x11\x22\x33\x44\x55\x66\x77\x88
>>> bprint(trim(t, 8, leftside=True))
\x33\x44\x55\x66\x77\x88\x99\xaa
>>> bprint(trim(t, 3, ellipsis=ellipsis))
+++
>>> bprint(trim(t, 1, ellipsis=ellipsis))
+
"""
try:
u = s.decode(_sysstr(encoding))
except UnicodeDecodeError:
if len(s) <= width: # trimming is not needed
return s
width -= len(ellipsis)
        if width <= 0:  # not enough room even for ellipsis
return ellipsis[: width + len(ellipsis)]
if leftside:
return ellipsis + s[-width:]
return s[:width] + ellipsis
if ucolwidth(u) <= width: # trimming is not needed
return s
width -= len(ellipsis)
    if width <= 0:  # not enough room even for ellipsis
return ellipsis[: width + len(ellipsis)]
if leftside:
uslice = lambda i: u[i:]
concat = lambda s: ellipsis + s
else:
uslice = lambda i: u[:-i]
concat = lambda s: s + ellipsis
for i in pycompat.xrange(1, len(u)):
usub = uslice(i)
if ucolwidth(usub) <= width:
return concat(usub.encode(_sysstr(encoding)))
    return ellipsis  # not enough room for multi-column characters
def lower(s):
# type: (bytes) -> bytes
"""best-effort encoding-aware case-folding of local string s"""
try:
return asciilower(s)
except UnicodeDecodeError:
pass
try:
if isinstance(s, localstr):
u = s._utf8.decode("utf-8")
else:
u = s.decode(_sysstr(encoding), _sysstr(encodingmode))
lu = u.lower()
if u == lu:
return s # preserve localstring
return lu.encode(_sysstr(encoding))
except UnicodeError:
return s.lower() # we don't know how to fold this except in ASCII
except LookupError as k:
raise error.Abort(k, hint=b"please check your locale settings")
def upper(s):
# type: (bytes) -> bytes
"""best-effort encoding-aware case-folding of local string s"""
try:
return asciiupper(s)
except UnicodeDecodeError:
return upperfallback(s)
def upperfallback(s):
# type: (Any) -> Any
try:
if isinstance(s, localstr):
u = s._utf8.decode("utf-8")
else:
u = s.decode(_sysstr(encoding), _sysstr(encodingmode))
uu = u.upper()
if u == uu:
return s # preserve localstring
return uu.encode(_sysstr(encoding))
except UnicodeError:
return s.upper() # we don't know how to fold this except in ASCII
except LookupError as k:
raise error.Abort(k, hint=b"please check your locale settings")
class normcasespecs(object):
"""what a platform's normcase does to ASCII strings
This is specified per platform, and should be consistent with what normcase
on that platform actually does.
lower: normcase lowercases ASCII strings
upper: normcase uppercases ASCII strings
other: the fallback function should always be called
This should be kept in sync with normcase_spec in util.h."""
lower = -1
upper = 1
other = 0
def jsonescape(s, paranoid=False):
# type: (Any, Any) -> Any
"""returns a string suitable for JSON
JSON is problematic for us because it doesn't support non-Unicode
bytes. To deal with this, we take the following approach:
- localstr/safelocalstr objects are converted back to UTF-8
- valid UTF-8/ASCII strings are passed as-is
- other strings are converted to UTF-8b surrogate encoding
- apply JSON-specified string escaping
(escapes are doubled in these tests)
>>> jsonescape(b'this is a test')
'this is a test'
>>> jsonescape(b'escape characters: \\0 \\x0b \\x7f')
'escape characters: \\\\u0000 \\\\u000b \\\\u007f'
>>> jsonescape(b'escape characters: \\b \\t \\n \\f \\r \\" \\\\')
'escape characters: \\\\b \\\\t \\\\n \\\\f \\\\r \\\\" \\\\\\\\'
>>> jsonescape(b'a weird byte: \\xdd')
'a weird byte: \\xed\\xb3\\x9d'
>>> jsonescape(b'utf-8: caf\\xc3\\xa9')
'utf-8: caf\\xc3\\xa9'
>>> jsonescape(b'')
''
If paranoid, non-ascii and common troublesome characters are also escaped.
This is suitable for web output.
>>> s = b'escape characters: \\0 \\x0b \\x7f'
>>> assert jsonescape(s) == jsonescape(s, paranoid=True)
>>> s = b'escape characters: \\b \\t \\n \\f \\r \\" \\\\'
>>> assert jsonescape(s) == jsonescape(s, paranoid=True)
>>> jsonescape(b'escape boundary: \\x7e \\x7f \\xc2\\x80', paranoid=True)
'escape boundary: ~ \\\\u007f \\\\u0080'
>>> jsonescape(b'a weird byte: \\xdd', paranoid=True)
'a weird byte: \\\\udcdd'
>>> jsonescape(b'utf-8: caf\\xc3\\xa9', paranoid=True)
'utf-8: caf\\\\u00e9'
>>> jsonescape(b'non-BMP: \\xf0\\x9d\\x84\\x9e', paranoid=True)
'non-BMP: \\\\ud834\\\\udd1e'
>>> jsonescape(b'<foo@example.org>', paranoid=True)
'\\\\u003cfoo@example.org\\\\u003e'
"""
u8chars = toutf8b(s)
try:
return _jsonescapeu8fast(u8chars, paranoid)
except ValueError:
pass
return charencodepure.jsonescapeu8fallback(u8chars, paranoid)
# We need to decode/encode U+DCxx codes transparently since invalid UTF-8
# bytes are mapped to that range.
if pycompat.ispy3:
_utf8strict = r'surrogatepass'
else:
_utf8strict = r'strict'
_utf8len = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 4]
def getutf8char(s, pos):
# type: (bytes, int) -> bytes
"""get the next full utf-8 character in the given string, starting at pos
Raises a UnicodeError if the given location does not start a valid
utf-8 character.
"""
# find how many bytes to attempt decoding from first nibble
l = _utf8len[ord(s[pos : pos + 1]) >> 4]
if not l: # ascii
return s[pos : pos + 1]
c = s[pos : pos + l]
# validate with attempted decode
c.decode("utf-8", _utf8strict)
return c
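# Example (illustrative): in b'caf\xc3\xa9', the byte 0xc3 has first nibble
# 0xc, so _utf8len yields a length of 2 and getutf8char returns the
# two-byte sequence b'\xc3\xa9' (U+00E9).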
def toutf8b(s):
# type: (bytes) -> bytes
"""convert a local, possibly-binary string into UTF-8b
This is intended as a generic method to preserve data when working
with schemes like JSON and XML that have no provision for
arbitrary byte strings. As Mercurial often doesn't know
what encoding data is in, we use so-called UTF-8b.
If a string is already valid UTF-8 (or ASCII), it passes unmodified.
    Otherwise, unsupported bytes are mapped to the UTF-16 surrogate range,
    U+DC00..U+DCFF.
    Principles of operation:
    - ASCII and UTF-8 data successfully round-trips and is understood
      by Unicode-oriented clients
    - filenames and file contents in arbitrary other encodings can be
      round-tripped or recovered by clueful clients
    - local strings that have a cached known UTF-8 encoding (aka
      localstr) get sent as UTF-8 so Unicode-oriented clients get the
      Unicode data they want
    - non-lossy local strings (aka safelocalstr) get sent as UTF-8 as well
    - because we must preserve UTF-8 bytestrings in places such as
      filenames, metadata can't be roundtripped without help
(Note: "UTF-8b" often refers to decoding a mix of valid UTF-8 and
arbitrary bytes into an internal Unicode format that can be
re-encoded back into the original. Here we are exposing the
internal surrogate encoding as a UTF-8 string.)
"""
if isinstance(s, localstr):
# assume that the original UTF-8 sequence would never contain
# invalid characters in U+DCxx range
return s._utf8
elif isinstance(s, safelocalstr):
# already verified that s is non-lossy in legacy encoding, which
# shouldn't contain characters in U+DCxx range
return fromlocal(s)
elif isasciistr(s):
return s
if b"\xed" not in s:
try:
s.decode('utf-8', _utf8strict)
return s
except UnicodeDecodeError:
pass
s = pycompat.bytestr(s)
r = b""
pos = 0
l = len(s)
while pos < l:
try:
c = getutf8char(s, pos)
if b"\xed\xb0\x80" <= c <= b"\xed\xb3\xbf":
# have to re-escape existing U+DCxx characters
c = unichr(0xDC00 + ord(s[pos])).encode('utf-8', _utf8strict)
pos += 1
else:
pos += len(c)
except UnicodeDecodeError:
c = unichr(0xDC00 + ord(s[pos])).encode('utf-8', _utf8strict)
pos += 1
r += c
return r
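# Example (mirroring the jsonescape doctest above): the invalid byte 0xdd
# is mapped to U+DC00 + 0xdd = U+DCDD, so toutf8b(b'a weird byte: \xdd')
# returns b'a weird byte: \xed\xb3\x9d'.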
def fromutf8b(s):
# type: (bytes) -> bytes
"""Given a UTF-8b string, return a local, possibly-binary string.
return the original binary string. This
is a round-trip process for strings like filenames, but metadata
that's was passed through tolocal will remain in UTF-8.
>>> roundtrip = lambda x: fromutf8b(toutf8b(x)) == x
>>> m = b"\\xc3\\xa9\\x99abcd"
>>> toutf8b(m)
'\\xc3\\xa9\\xed\\xb2\\x99abcd'
>>> roundtrip(m)
True
>>> roundtrip(b"\\xc2\\xc2\\x80")
True
>>> roundtrip(b"\\xef\\xbf\\xbd")
True
>>> roundtrip(b"\\xef\\xef\\xbf\\xbd")
True
>>> roundtrip(b"\\xf1\\x80\\x80\\x80\\x80")
True
"""
if isasciistr(s):
return s
# fast path - look for uDxxx prefixes in s
if b"\xed" not in s:
return s
# We could do this with the unicode type but some Python builds
# use UTF-16 internally (issue5031) which causes non-BMP code
# points to be escaped. Instead, we use our handy getutf8char
# helper again to walk the string without "decoding" it.
s = pycompat.bytestr(s)
r = b""
pos = 0
l = len(s)
while pos < l:
c = getutf8char(s, pos)
pos += len(c)
# unescape U+DCxx characters
if b"\xed\xb0\x80" <= c <= b"\xed\xb3\xbf":
c = pycompat.bytechr(ord(c.decode("utf-8", _utf8strict)) & 0xFF)
r += c
return r