Wed, 02 Jul 2014 15:47:39 +0200 bundle2-push: drop _pushbundle2extraparts
Pierre-Yves David <pierre-yves.david@fb.com> [Wed, 02 Jul 2014 15:47:39 +0200] rev 21906
bundle2-push: drop _pushbundle2extraparts All core users are now using the new way. We drop the old way.
Wed, 02 Jul 2014 16:10:14 +0200 bundle2-test: use the new way to extend push content
Pierre-Yves David <pierre-yves.david@fb.com> [Wed, 02 Jul 2014 16:10:14 +0200] rev 21905
bundle2-test: use the new way to extend push content The only core users of the old way were the tests. We update them.
Wed, 02 Jul 2014 15:26:04 +0200 bundle2-push: introduce a list of part generating functions
Pierre-Yves David <pierre-yves.david@fb.com> [Wed, 02 Jul 2014 15:26:04 +0200] rev 21904
bundle2-push: introduce a list of part generating functions Instead of explicitly calling a few functions to generate parts in the bundle, we now have a list of all part generators. This should make it easier for extensions to add new parts to the bundle. This new way to extend the push deprecates the old `_pushbundle2extraparts` way.
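A minimal, self-contained sketch of the idea (function and variable names below are hypothetical, not the actual exchange.py API): the push code keeps an ordered list of part-generating callables, and an extension extends the push simply by adding to that list.

    b2partsgenerators = []            # ordered list of part generators

    def partgenerator(func):
        """decorator an extension could use to register a new generator"""
        b2partsgenerators.append(func)
        return func

    @partgenerator
    def _pushb2changegroup(pushop, bundler):
        # core generator: add the changegroup part to ``bundler`` (elided)
        pass

    def _pushbundle2(pushop, bundler):
        # every registered generator gets a chance to add parts
        for gen in b2partsgenerators:
            gen(pushop, bundler)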
Wed, 02 Jul 2014 12:55:09 +0200 bundle2-push: move changegroup push validation inside _pushb2ctx
Pierre-Yves David <pierre-yves.david@fb.com> [Wed, 02 Jul 2014 12:55:09 +0200] rev 21903
bundle2-push: move changegroup push validation inside _pushb2ctx When a bundle2 push includes more than just changesets, we may have no changegroup to push yet still have other data to push. So we now try to perform a bundle2 push in all cases. The check for changegroup inclusion is moved into the ``_pushb2ctx`` function in charge of creating the changegroup part. The bundle2 push is aborted if no actual payload parts have been added to the bundle2.
Mon, 07 Jul 2014 12:30:31 +0200 push: use `stepsdone` to control changegroup push through bundle10 or bundle20
Pierre-Yves David <pierre-yves.david@fb.com> [Mon, 07 Jul 2014 12:30:31 +0200] rev 21902
push: use `stepsdone` to control changegroup push through bundle10 or bundle20 We use the newly introduced `pushop.stepsdone` attribute to inform older methods that the changegroup has already been pushed using a newer method.
Wed, 02 Jul 2014 12:48:54 +0200 push: add a ``pushop.stepsdone`` attribute
Pierre-Yves David <pierre-yves.david@fb.com> [Wed, 02 Jul 2014 12:48:54 +0200] rev 21901
push: add a ``pushop.stepsdone`` attribute This attribute will record what steps were performed during the bundle2 push. This will control whether the old-way push must be performed or skipped. This will ultimately be used by changegroup, phases, obsmarkers, bookmarks and any other kind of data one may want to exchange even when bundle2 support is missing.
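A minimal sketch, assuming ``stepsdone`` is a simple set of step names (not a verbatim copy of exchange.py): a newer code path records the step it handled, and the legacy path checks the set before acting.

    class pushoperation(object):
        def __init__(self):
            self.stepsdone = set()          # e.g. {'changegroup'}

    def _pushchangeset(pushop):
        """legacy (pre-bundle2) changegroup push"""
        if 'changegroup' in pushop.stepsdone:
            return                          # already sent through bundle2
        pushop.stepsdone.add('changegroup')
        # ... push the changegroup the old way ...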
Wed, 02 Jul 2014 16:17:54 +0200 bundle2: add a ``bundle20.nbparts`` property
Pierre-Yves David <pierre-yves.david@fb.com> [Wed, 02 Jul 2014 16:17:54 +0200] rev 21900
bundle2: add a ``bundle20.nbparts`` property This property can be used to know how many parts have been added to the bundle2. This will be useful to check whether any parts have been generated for a push.
Wed, 02 Jul 2014 11:42:35 +0200 bundle2-push: extract changegroup logic in its own function
Pierre-Yves David <pierre-yves.david@fb.com> [Wed, 02 Jul 2014 11:42:35 +0200] rev 21899
bundle2-push: extract changegroup logic in its own function We extract the creation of changegroup-related parts into its own function. This prepares for the inclusion of more diverse data during the bundle2 push. We use a closure to carry the logic that needs to be performed when processing the server reply.
Wed, 02 Jul 2014 14:09:24 +0200 bundle2: call _pushbundle2extraparts a bit sooner
Pierre-Yves David <pierre-yves.david@fb.com> [Wed, 02 Jul 2014 14:09:24 +0200] rev 21898
bundle2: call _pushbundle2extraparts a bit sooner This is the first step of a refactoring that will ease the inclusion of new parts in the bundle2 push and include more information (like phases) in this push. We need to move the function a bit sooner to be able to group the generation of the `b2x:check:heads` and `b2x:changegroup` parts in an external function. We move it sooner to preserve the part creation order that the bundle2 tests rely on. At the end of this refactoring, `_pushbundle2extraparts` will be replaced by another mechanism anyway.
Tue, 15 Jul 2014 23:34:13 +0900 templatekw: add 'subrepos' keyword to show updated subrepositories
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Tue, 15 Jul 2014 23:34:13 +0900] rev 21897
templatekw: add 'subrepos' keyword to show updated subrepositories The 'subrepos' template keyword newly added by this patch shows updated subrepositories. For compatibility with the list of subrepositories shown in the editor at commit:
  - 'subrepos' is empty at revisions removing '.hgsub' itself
  - at merge revisions, 'subrepos' is calculated between the revision and its first parent
To avoid silent regressions, this patch also checks "hg diff" of ".hgsubstate" against the parents for each target revision in the test.
Tue, 15 Jul 2014 23:34:13 +0900 templatekw: add 'currentbookmark' keyword to show current bookmark easily
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Tue, 15 Jul 2014 23:34:13 +0900] rev 21896
templatekw: add 'currentbookmark' keyword to show current bookmark easily Before this patch, the complicated template expression below is required to show the currently active bookmark if it is associated with the changeset. "{bookmarks % '{ifeq(bookmark, current, \"{bookmark}\")}'}" This patch adds the 'currentbookmark' keyword to show the current bookmark easily.
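For example, with a bookmark active on the working directory parent, the new keyword can be used directly (illustrative command line, not taken from the patch):
    $ hg log -r . --template "{currentbookmark}\n"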
Wed, 16 Jul 2014 14:53:03 -0700 context: extend efficient manifest filtering to when all paths are files
Siddharth Agarwal <sid0@fb.com> [Wed, 16 Jul 2014 14:53:03 -0700] rev 21895
context: extend efficient manifest filtering to when all paths are files On a repository with over 250,000 files and 700,000 commits, this improves cases like hg status --rev <rev> -- <file> # rev is not . from 2.1 seconds to 1.4 seconds. There is further scope for improvement here: for a single file or a small set of files, it is probably more efficient to use filelog linkrevs when possible. However there will always be cases where that will fail (multiple commits pointing to the same file revision, removed files...), so this is independently useful.
Sat, 12 Jul 2014 00:37:08 -0700 revset: remove no longer used _missingancestors revset
Siddharth Agarwal <sid0@fb.com> [Sat, 12 Jul 2014 00:37:08 -0700] rev 21894
revset: remove no longer used _missingancestors revset This was undocumented.
Sat, 12 Jul 2014 00:31:36 -0700 revset: replace _missingancestors optimization with only revset
Siddharth Agarwal <sid0@fb.com> [Sat, 12 Jul 2014 00:31:36 -0700] rev 21893
revset: replace _missingancestors optimization with only revset (::a - ::b) is equivalent to only(a, b).
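Based on the equivalence stated above, the two commands below should select the same revisions; 'feature' and 'default' are only illustrative names:
    $ hg log -r "::feature - ::default"
    $ hg log -r "only(feature, default)"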
Sat, 28 Jun 2014 01:42:39 +0200 tags: introduce _readtaghist function
Angel Ezquerra <angel.ezquerra@gmail.com> [Sat, 28 Jun 2014 01:42:39 +0200] rev 21892
tags: introduce _readtaghist function The existing _readtags function has been modified a little and renamed to _readtaghist. A new _readtags function has been added, which is a wrapper around _readtaghist. Its output is the same as the old _readtags. The purpose of this change is to make it possible to automatically merge tag files. In order to do so we will need to get the line numbers for each of the tag-node pairs on the first merge parent. This is not used yet, but will be used in a follow-up patch that will introduce an automatic tag merge algorithm. I performed some tests to compare the effect of this change. I used timeit to run the test-tags.t test 9 times with and without this patch. The results were:
  - without this patch: 3 loops, best of 3: 8.55 sec per loop
  - with this patch: 3 loops, best of 3: 8.49 sec per loop
The test was on average slightly faster with this patch (although the difference is probably not statistically significant).
Fri, 20 Jun 2014 00:42:35 +0900 subrepo: ensure "close()" execution at the end of "_initrepo()"
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Fri, 20 Jun 2014 00:42:35 +0900] rev 21891
subrepo: ensure "close()" execution at the end of "_initrepo()" Before this patch, "close()" for the file object opened in "_initrepo()" may not be executed, if unexpected exception is raised, because it isn't executed in "finally" clause. This patch ensures "close()" execution at the end of "_initrepo()" by moving it into "finally" clause. This patch puts configuration lines into "lines" array and write them out at once, to narrow the scope of "try"/"finally" for review-ability. This patch doesn't use "vfs.write()", because: - current "vfs.write()" implementation doesn't take "mode" argument to open file in "text" mode - writing hgrc file out in binary mode may break backward compatibility
Fri, 20 Jun 2014 00:41:31 +0900 subrepo: add test whether "[paths]" is configured correctly at subrepo creation
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Fri, 20 Jun 2014 00:41:31 +0900] rev 21890
subrepo: add test whether "[paths]" is configured correctly at subrepo creation This test is added for changes in the subsequent patch. This test doesn't use "(glob)" for expected output, because "[paths]" is configured at subrepo creation by "_abssource()" using "posixpath.join()" to join path components.
Fri, 20 Jun 2014 00:21:19 +0900 subrepo: ensure "close()" execution at the end of "_cachestorehash()"
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Fri, 20 Jun 2014 00:21:19 +0900] rev 21889
subrepo: ensure "close()" execution at the end of "_cachestorehash()" Before this patch, "close()" for the file object opened in "_cachestorehash()" may not be executed, if unexpected exception is raised, because it isn't executed in "finally" clause. This patch ensures "close()" execution at the end of "_cachestorehash()" by moving it into "finally" clause.
Fri, 20 Jun 2014 00:21:19 +0900 subrepo: ensure "close()" execution at the end of "_readstorehashcache()"
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Fri, 20 Jun 2014 00:21:19 +0900] rev 21888
subrepo: ensure "close()" execution at the end of "_readstorehashcache()" Before this patch, "close()" for the file object opened in "_readstorehashcache()" may not be executed, if unexpected exception is raised, because it isn't executed in "finally" clause. This patch ensures "close()" execution at the end of "_readstorehashcache()" by moving it into "finally" clause.
Fri, 20 Jun 2014 00:21:19 +0900 subrepo: ensure "close()" execution at the end of "_calcfilehash()"
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Fri, 20 Jun 2014 00:21:19 +0900] rev 21887
subrepo: ensure "close()" execution at the end of "_calcfilehash()" Before this patch, "close()" for the file object opened in "_calcfilehash()" may not be executed, if unexpected exception is raised, because it isn't executed in "finally" clause. This patch ensures "close()" execution at the end of "_calcfilehash()" by moving it into "finally" clause.
Fri, 20 Jun 2014 00:21:19 +0900 subrepo: ensure "lock.release()" execution at the end of "_cachestorehash()"
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Fri, 20 Jun 2014 00:21:19 +0900] rev 21886
subrepo: ensure "lock.release()" execution at the end of "_cachestorehash()" Before this patch, "lock.release()" for "self._repo" in "_cachestorehash()" of "hgsubrepo" may not be executed, if unexpected exception is raised, because it isn't executed in "finally" clause. This patch ensures "lock.release()" execution at the end of "_cachestorehash()" by moving it into "finally" clause.
Fri, 20 Jun 2014 00:21:19 +0900 subrepo: ensure "lock.release()" execution at the end of "storeclean()"
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Fri, 20 Jun 2014 00:21:19 +0900] rev 21885
subrepo: ensure "lock.release()" execution at the end of "storeclean()" Before this patch, "lock.release()" for "self._repo" in "storeclean()" of "hgsubrepo" may not be executed, if unexpected exception is raised, because it isn't executed in "finally" clause. This patch ensures "lock.release()" execution at the end of "storeclean()" by moving it into "finally" clause. This patch chooses moving almost all lines in "storeclean()" into "_storeclean()" instead of indenting them for "try/finally" clauses, to keep diff simple for review-ability.
Mon, 07 Jul 2014 18:45:46 +0900 largefiles: confirm existence of outgoing largefile entities in remote store
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Mon, 07 Jul 2014 18:45:46 +0900] rev 21884
largefiles: confirm existence of outgoing largefile entities in remote store Before this patch, "hg summary" and "hg outgoing" show and count up all largefiles changed or added in outgoing revisions, even though some of them are already uploaded to the remote store. This patch confirms the existence of outgoing largefile entities in the remote store, so that "hg summary" and "hg outgoing" show and count only the largefile entities that are really outgoing.
Mon, 07 Jul 2014 18:45:46 +0900 largefiles: show also how many data entities are outgoing at "hg outgoing"
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Mon, 07 Jul 2014 18:45:46 +0900] rev 21883
largefiles: show also how many data entities are outgoing at "hg outgoing" Before this patch, "hg outgoing --large" shows which largefiles are changed or added in outgoing revisions only in the point of the view of filenames. For example, according to the list of outgoing largefiles shown in "hg outgoing" output, users should expect that the former below costs much more to upload outgoing largefiles than the latter. - outgoing revisions add a hundred largefiles, but all of them refer the same data entity in this case, only one data entity is outgoing, even though "hg summary" says that a hundred largefiles are outgoing. - a hundred outgoing revisions change only one largefile with distinct data in this case, a hundred data entities are outgoing, even though "hg summary" says that only one largefile is outgoing. But the latter costs much more than the former, in fact. This patch shows also how many data entities are outgoing at "hg outgoing" by counting number of unique hash values for outgoing largefiles. When "--debug" is specified, this patch also shows what entities (in hash) are outgoing for each largefiles listed up, for debug purpose. In "ui.debugflag" route, "addfunc()" can append given "lfhash" to the list "toupload[fn]" always without duplication check, because de-duplication is already done in "_getoutgoings()".
Mon, 07 Jul 2014 18:45:46 +0900 largefiles: show also how many data entities are outgoing at "hg summary"
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Mon, 07 Jul 2014 18:45:46 +0900] rev 21882
largefiles: show also how many data entities are outgoing at "hg summary" Before this patch, "hg summary --large" shows how many largefiles are changed or added in outgoing revisions only in the point of the view of filenames. For example, according to the number of outgoing largefiles shown in "hg summary" output, users should expect that the former below costs much more to upload outgoing largefiles than the latter. - outgoing revisions add a hundred largefiles, but all of them refer the same data entity in this case, only one data entity is outgoing, even though "hg summary" says that a hundred largefiles are outgoing. - a hundred outgoing revisions change only one largefile with distinct data in this case, a hundred data entities are outgoing, even though "hg summary" says that only one largefile is outgoing. But the latter costs much more than the former, in fact. This patch shows also how many data entities are outgoing at "hg summary" by counting number of unique hash values for outgoing largefiles. This patch introduces "_getoutgoings" to centralize the logic (de-duplication, too) into it for convenience of subsequent patches, even though it is not required in "hg summary" case.
Mon, 07 Jul 2014 18:45:46 +0900 largefiles: add tests for summary/outgoing improved in subsequent patches
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Mon, 07 Jul 2014 18:45:46 +0900] rev 21881
largefiles: add tests for summary/outgoing improved in subsequent patches This patch adds tests for the summary/outgoing behaviour improved in subsequent patches, to reduce the amount of diff in each of those patches. This patch adds the new revisions below:
  - revision #2 adds new largefiles, but they contain the same data as an already existing one; this causes multiple standins to refer to the same data entity
  - revisions #3, #4 and #5 change the already existing largefile; this causes multiple data entities to be outgoing for that standin. #5 can be used to check de-duplication of "(hash, filename)" pairs
Sat, 12 Jul 2014 17:59:03 -0700 context: generate filtered manifest efficiently for exact matchers
Siddharth Agarwal <sid0@fb.com> [Sat, 12 Jul 2014 17:59:03 -0700] rev 21880
context: generate filtered manifest efficiently for exact matchers When the matcher is exact, there's no reason to iterate over the entire manifest. It's much more efficient to iterate over the list of files instead. For a repository with approximately 300,000 files, this speeds up hg log -l10 --patch --follow for a frequently modified file from 16.5 seconds to 10.5 seconds.
Sat, 12 Jul 2014 17:57:25 -0700 manifestdict: add a new method to intersect with a set of files
Siddharth Agarwal <sid0@fb.com> [Sat, 12 Jul 2014 17:57:25 -0700] rev 21879
manifestdict: add a new method to intersect with a set of files This is meant to be used when the set of files is known in advance, e.g. with a match object with no patterns.
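A sketch of the idea only (the method name and signature in manifest.py may differ): when the interesting paths are known up front, walk that short list instead of the whole manifest mapping.

    def intersectfiles(manifest, files):
        """return {path: node} for the given paths present in ``manifest``"""
        ret = {}
        for f in files:
            if f in manifest:
                ret[f] = manifest[f]
        return ret

    m = {'README': 'n1', 'setup.py': 'n2', 'mercurial/manifest.py': 'n3'}
    print(intersectfiles(m, ['setup.py', 'missing.txt']))   # {'setup.py': 'n2'}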
Sat, 12 Jul 2014 18:31:18 -0700 log: use an exact matcher for --patch --follow
Siddharth Agarwal <sid0@fb.com> [Sat, 12 Jul 2014 18:31:18 -0700] rev 21878
log: use an exact matcher for --patch --follow The arguments to log --patch --follow are expected to be exact paths. This will be used to make manifest filtering for these cases more efficient in upcoming patches.
Wed, 16 Jul 2014 17:35:04 -0500 merge with stable
Matt Mackall <mpm@selenic.com> [Wed, 16 Jul 2014 17:35:04 -0500] rev 21877
merge with stable
Sat, 12 Jul 2014 02:23:17 -0700 log: make --patch --follow work inside a subdirectory stable
Siddharth Agarwal <sid0@fb.com> [Sat, 12 Jul 2014 02:23:17 -0700] rev 21876
log: make --patch --follow work inside a subdirectory Previously, the 'patch' code for hg log --patch --follow would try to resolve patterns relative to the repository root rather than the current working directory. Fix that by using match.files instead of pats, as done elsewhere nearby.
Sat, 12 Jul 2014 20:07:24 +0900 mergetools: add --nofork option to gvimdiff.diffargs for extdiff
Yuya Nishihara <yuya@tcha.org> [Sat, 12 Jul 2014 20:07:24 +0900] rev 21875
mergetools: add --nofork option to gvimdiff.diffargs for extdiff Without --nofork, temporary files are removed immediately before gvimdiff starts. "-d -g -O" are put just for consistency with gvimdiff.args.
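Presumably the resulting entry in the built-in merge tool configuration looks roughly like this (illustrative, not copied verbatim from the patch):
    [merge-tools]
    gvimdiff.diffargs = --nofork -d -g -O $parent $child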
Sat, 05 Jul 2014 16:32:28 +0300 contrib/vagrant: use Vagrant for running tests on virtual machine
anatoly techtonik <techtonik@gmail.com> [Sat, 05 Jul 2014 16:32:28 +0300] rev 21874
contrib/vagrant: use Vagrant for running tests on virtual machine
    $ cd contrib/vagrant
    $ vagrant up
    $ vagrant ssh -c ./run-tests.sh
The repository is shared at /hgshared in the guest machine.
Mon, 14 Jul 2014 18:53:03 -0500 merge with stable
Matt Mackall <mpm@selenic.com> [Mon, 14 Jul 2014 18:53:03 -0500] rev 21873
merge with stable
Sat, 12 Jul 2014 20:44:00 -0700 log: allow revset for --follow to be lazily evaluated
Siddharth Agarwal <sid0@fb.com> [Sat, 12 Jul 2014 20:44:00 -0700] rev 21872
log: allow revset for --follow to be lazily evaluated It is unclear to me why evaluation was forced. For a repository with over 700,000 commits, 'hg log -f' drops from 1.2 seconds to 0.2 seconds.
Mon, 14 Jul 2014 15:42:31 -0700 parsers: remove unused getintat function
Siddharth Agarwal <sid0@fb.com> [Mon, 14 Jul 2014 15:42:31 -0700] rev 21871
parsers: remove unused getintat function Warning detected by clang.
Mon, 14 Jul 2014 17:55:31 -0500 revset: maintain ordering when subtracting from a baseset (issue4289)
Matt Mackall <mpm@selenic.com> [Mon, 14 Jul 2014 17:55:31 -0500] rev 21870
revset: maintain ordering when subtracting from a baseset (issue4289)
Tue, 15 Jul 2014 00:59:09 +0900 cmdutil: separate building commit text from 'commitforceeditor'
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Tue, 15 Jul 2014 00:59:09 +0900] rev 21869
cmdutil: separate building commit text from 'commitforceeditor' This separation makes it easier to extend or hook into building the commit text from the specified context. This patch uses 'committext' instead of 'edittext' for the names of the newly added variable and function, because the former is more purpose-specific than the latter, even though 'edittext' in 'buildcommittext' is left as it is to reduce the amount of diff.
Mon, 14 Jul 2014 23:33:59 +0900 convert: detect removal of ".gitmodules" at git source revisions correctly stable
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Mon, 14 Jul 2014 23:33:59 +0900] rev 21868
convert: detect removal of ".gitmodules" at git source revisions correctly Before this patch, all operations applied to ".gitmodules" at git source revisions are treated as modifications, even if they are actually removals. If removal of ".gitmodules" is unexpectedly treated as a modification, "hg convert" is aborted by the exception raised in "retrievegitmodules()" for ".gitmodules" at the git source revision removing it, because that revision doesn't have any information about ".gitmodules". This patch detects removal of ".gitmodules" at git source revisions correctly. If ".gitmodules" is removed at the git source revision, this patch records "hex(nullid)" as the contents hash value for ".hgsub" and ".hgsubstate" at the destination revision. This patch makes "getfile()" raise IOError also for ".hgsub" and ".hgsubstate" if the contents hash value is "hex(nullid)", and this correctly tells "localrepository.commitctx()" about the removal of ".hgsub" and ".hgsubstate" at the destination revision. For files other than ".hgsub" and ".hgsubstate", checking the contents hash value in "getfile()" may be redundant, because "catfile()" for them also does so. But this patch chooses to write it only once at the beginning of "getfile()", to avoid writing the same code twice for ".hgsub" and ".hgsubstate" separately.
Mon, 14 Jul 2014 12:44:45 -0500 templates: escape NUL bytes in jsonescape (issue4303) stable
Matt Mackall <mpm@selenic.com> [Mon, 14 Jul 2014 12:44:45 -0500] rev 21867
templates: escape NUL bytes in jsonescape (issue4303) It's currently possible for various fields to contain NUL bytes, which are disallowed in JSON.
Sat, 12 Jul 2014 10:52:58 -0700 localrepo: document localrepo.hook()
Gregory Szorc <gregory.szorc@gmail.com> [Sat, 12 Jul 2014 10:52:58 -0700] rev 21866
localrepo: document localrepo.hook()
Sun, 06 Jul 2014 02:56:41 +0900 filemerge: use 'util.ellipsis' to trim custom conflict markers correctly
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Sun, 06 Jul 2014 02:56:41 +0900] rev 21865
filemerge: use 'util.ellipsis' to trim custom conflict markers correctly Before this patch, filemerge slices the byte sequence directly to trim conflict markers, but this may cause:
  - splitting in the middle of a multi-byte sequence
  - incorrect calculation of column width (the length of a byte sequence differs from the number of display columns in many cases)
This patch uses 'util.ellipsis' to trim custom conflict markers correctly, even if multi-byte characters are used in them.
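A toy illustration of the failure mode (util.ellipsis itself is also display-width aware; the stand-in below only shows the character-level point): slicing encoded bytes can cut a multi-byte character in half, while trimming characters and appending '...' cannot.

    marker = u'conflict marker \u5909\u66f4\u70b9\u306e\u8aac\u660e'
    raw = marker.encode('utf-8')

    broken = raw[:20]                          # ends in the middle of a character
    print(broken.decode('utf-8', 'replace'))   # trailing U+FFFD replacement char

    def ellipsis_like(text, maxchars):
        """character-based stand-in for the width-based util.ellipsis"""
        if len(text) <= maxchars:
            return text
        return text[:maxchars - 3] + u'...'

    print(ellipsis_like(marker, 20))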
Sun, 06 Jul 2014 02:56:41 +0900 filemerge: use only the first line of the generated conflict marker for safety
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Sun, 06 Jul 2014 02:56:41 +0900] rev 21864
filemerge: use only the first line of the generated conflict marker for safety Before this patch, with a careless configuration (missing '|firstline' filtering for the '{desc}' keyword, for example), '[ui] mergemarkertemplate' can make conflict markers span multiple lines. For ordinary users, the advantage of allowing '[ui] mergemarkertemplate' to generate multiple lines for customization seems smaller than the advantage of disallowing it for safety. This patch uses only the first line of the conflict marker generated from the '[ui] mergemarkertemplate' configuration, for safety.
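As a usage note, keeping the '|firstline' filter explicit in the configuration remains the safe way to write it; the template value below is only an example, not the built-in default:
    [ui]
    mergemarkertemplate = {node|short} {author|user}: {desc|firstline}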
Sun, 06 Jul 2014 02:56:41 +0900 progress: use 'encoding.colwidth' to get column width of items correctly
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Sun, 06 Jul 2014 02:56:41 +0900] rev 21863
progress: use 'encoding.colwidth' to get column width of items correctly Before this patch, the 'progress' extension applies 'len' to a byte sequence to get its column width, but this produces an incorrect result when the length of the byte sequence and the number of display columns differ, as they do for multi-byte characters. This patch uses 'encoding.colwidth' to get the column width of items in the output line correctly, even if they contain multi-byte characters.
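A hedged sketch of the distinction being fixed (encoding.colwidth in Mercurial returns the display width; the toy version below derives it from unicodedata in the same spirit):

    import unicodedata

    def colwidth(ustr):
        """display columns occupied by ``ustr`` (wide chars count as 2)"""
        return sum(2 if unicodedata.east_asian_width(c) in ('W', 'F') else 1
                   for c in ustr)

    s = u'\u30e1\u30eb\u30af\u30ea\u30a2\u30eb'           # six katakana characters
    print(len(s.encode('utf-8')), len(s), colwidth(s))    # 18 bytes, 6 chars, 12 columns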
Sun, 06 Jul 2014 02:56:41 +0900 progress: use 'encoding.trim' to trim items in output line correctly
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Sun, 06 Jul 2014 02:56:41 +0900] rev 21862
progress: use 'encoding.trim' to trim items in output line correctly Before this patch, the 'progress' extension trims items in the output line by directly slicing the byte sequence, but this may split in the middle of a multi-byte sequence. This patch uses 'encoding.trim' to trim items in the output line correctly, even if they contain multi-byte characters.
Sun, 06 Jul 2014 02:56:41 +0900 encoding: add 'leftside' argument into 'trim' to switch trimming side
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Sun, 06 Jul 2014 02:56:41 +0900] rev 21861
encoding: add 'leftside' argument into 'trim' to switch trimming side
Sun, 06 Jul 2014 02:56:41 +0900 progress: use 'encoding.colwidth' to get column width of output line correctly
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Sun, 06 Jul 2014 02:56:41 +0900] rev 21860
progress: use 'encoding.colwidth' to get column width of output line correctly Before this patch, the 'progress' extension applies 'len' to a byte sequence to get its column width, but this produces an incorrect result when the length of the byte sequence and the number of display columns differ, as they do for multi-byte characters. This patch uses 'encoding.colwidth' to get the column width of the output line correctly, even if it contains multi-byte characters.
Sun, 06 Jul 2014 02:56:41 +0900 progress: use 'encoding.trim' to trim output line correctly
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Sun, 06 Jul 2014 02:56:41 +0900] rev 21859
progress: use 'encoding.trim' to trim output line correctly Before this patch, the 'progress' extension trims the output line by directly slicing the byte sequence, but this may split in the middle of a multi-byte sequence. This patch uses 'encoding.trim' to trim the output line correctly, even if it contains multi-byte characters. "rm -f loop.pyc" before changing "loop.py" in "test-progress.t" ensures re-compilation of "loop.py", even if "loop.py" and "loop.pyc" have the same timestamp in seconds.