phases: large rewrite of the retract boundary logic
The new code is still pure Python, so we still have room to go significantly
faster. However, the complexity of the expensive part is now
`O(|[min_new_draft, tip]|)` instead of `O(|[min_draft, tip]|)`, which should
help tremendously on repositories with old drafts (like mercurial-devel or
mozilla-try).
This is especially useful as the most common "retract boundary" operation
happens when we commit/rewrite new drafts or when we push new drafts to a
non-publishing server. In these cases, the smallest revision in new_revs is
very close to the tip and there is very little work to do.
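To make the bound concrete, here is a minimal sketch of the bounded walk. It
is not the actual Mercurial implementation: `parents` is a hypothetical
stand-in for the changelog, mapping each revision to its parent revisions,
and revision numbers are assumed to be topologically ordered (children after
parents), as they are in Mercurial.

    def affected_revs(parents, new_revs):
        """Return the revisions a retraction at `new_revs` can affect."""
        affected = set(new_revs)
        # nothing below min(new_revs) can descend from a new draft, so the
        # walk only covers [min(new_revs), tip]
        for rev in range(min(new_revs), len(parents)):
            if rev not in affected and any(
                p in affected for p in parents[rev]
            ):
                affected.add(rev)
        return affected

    # linear history 0-1-2-3-4: committing a new draft at revision 3 only
    # walks revisions 3 and 4, no matter how old the other drafts are
    assert affected_revs([(), (0,), (1,), (2,), (3,)], {3}) == {3, 4}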
A few smaller optimisations could be made for these cases; they will be
introduced in later changesets.
We still have to iterate over large sets of roots, but this is already a
great improvement for a very small amount of work. We gather information on
the affected changesets as we go, since we can put it to use in the next
changesets.
This extra data collection might slow down the `register_new` case a bit, but
for `register_new` it should not really matter: the set of new nodes is either
small, so the impact is negligible, or large, and the amount of work needed to
add them will dominate the overhead of collecting information in
`changed_revs`.
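A sketch of that collection, under the same hypothetical setup as above
(`phases` maps revisions to Mercurial's phase encoding, where public is 0 and
larger values are less public; the real data structures differ):

    def retract_and_collect(parents, phases, target, new_revs):
        """Move `new_revs` and their descendants to `target` phase."""
        changed_revs = {}  # rev -> old phase, filled during the walk
        affected = set(new_revs)
        for rev in range(min(new_revs), len(parents)):
            if rev not in affected and not any(
                p in affected for p in parents[rev]
            ):
                continue
            affected.add(rev)
            old = phases.get(rev, 0)  # unlisted revisions are public (0)
            if old < target:
                changed_revs[rev] = old  # recorded on the fly, no second pass
                phases[rev] = target
        return changed_revs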
As this new code computes the changes on the fly, it unlocks other
interesting improvements to be made in later changesets.
#!/usr/bin/env python3
#
# posplit - split messages in paragraphs on .po/.pot files
#
# license: MIT/X11/Expat
#
import polib
import re
import sys

def addentry(po, entry, cache):
    e = cache.get(entry.msgid)
    if e:
        e.occurrences.extend(entry.occurrences)

        # merge comments from entry
        for comment in entry.comment.split('\n'):
            if comment and comment not in e.comment:
                if not e.comment:
                    e.comment = comment
                else:
                    e.comment += '\n' + comment
    else:
        po.append(entry)
        cache[entry.msgid] = entry


def mkentry(orig, delta, msgid, msgstr):
    entry = polib.POEntry()
    entry.merge(orig)
    entry.msgid = msgid or orig.msgid
    entry.msgstr = msgstr or orig.msgstr
    entry.occurrences = [(p, int(l) + delta) for (p, l) in orig.occurrences]
    return entry


if __name__ == "__main__":
    po = polib.pofile(sys.argv[1])

    cache = {}
    entries = po[:]
    po[:] = []
    findd = re.compile(r' *\.\. (\w+)::')  # for finding directives
    for entry in entries:
        msgids = entry.msgid.split(u'\n\n')
        if entry.msgstr:
            msgstrs = entry.msgstr.split(u'\n\n')
        else:
            msgstrs = [u''] * len(msgids)

        if len(msgids) != len(msgstrs):
            # places the whole existing translation as a fuzzy
            # translation for each paragraph, to give the
            # translator a chance to recover part of the old
            # translation - erasing extra paragraphs is
            # probably better than retranslating all from start
            if 'fuzzy' not in entry.flags:
                entry.flags.append('fuzzy')
            msgstrs = [entry.msgstr] * len(msgids)

        delta = 0
        for msgid, msgstr in zip(msgids, msgstrs):
            if msgid and msgid != '::':
                newentry = mkentry(entry, delta, msgid, msgstr)
                mdirective = findd.match(msgid)
                if mdirective:
                    if not msgid[mdirective.end() :].rstrip():
                        # only directive, nothing to translate here
                        delta += 2
                        continue
                    directive = mdirective.group(1)
                    if directive in ('container', 'include'):
                        if msgid.rstrip('\n').count('\n') == 0:
                            # only rst syntax, nothing to translate
                            delta += 2
                            continue
                        else:
                            # lines following directly, unexpected
                            print(
                                'Warning: text follows line with directive'
                                ' %s' % directive
                            )
                    comment = 'do not translate: .. %s::' % directive
                    if not newentry.comment:
                        newentry.comment = comment
                    elif comment not in newentry.comment:
                        newentry.comment += '\n' + comment
                addentry(po, newentry, cache)
            delta += 2 + msgid.count('\n')
    po.save()
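As a quick illustration of what posplit does, here is a hypothetical
before/after built with polib (file name and catalog content are made up):

    import polib

    po = polib.POFile()
    po.append(
        polib.POEntry(
            msgid='First paragraph.\n\nSecond paragraph.',
            occurrences=[('example.txt', '1')],
        )
    )
    po.save('example.pot')

    # running `python3 posplit example.pot` rewrites the file with two
    # entries: 'First paragraph.' at example.txt:1 and 'Second paragraph.'
    # at example.txt:3 (the delta of 2 accounts for the blank separator line)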