Tue, 16 Apr 2013 01:46:39 +0200 largefiles: move protocol conversion into getlfile and make it an iterable
Mads Kiilerich <madski@unity3d.com> [Tue, 16 Apr 2013 01:46:39 +0200] rev 19004
largefiles: move protocol conversion into getlfile and make it an iterable Avoid the intermediate limitreader and filechunkiter between getlfile and copyandhash - return the right protocol and put the complexity where it can be managed best.
Mon, 15 Apr 2013 23:47:04 +0200 largefiles: don't close the fd passed to store._getfile
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 23:47:04 +0200] rev 19003
largefiles: don't close the fd passed to store._getfile
Mon, 15 Apr 2013 23:43:50 +0200 largefiles: remove blecch from lfutil.copyandhash - don't close the passed fd
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 23:43:50 +0200] rev 19002
largefiles: remove blecch from lfutil.copyandhash - don't close the passed fd
Mon, 15 Apr 2013 23:43:44 +0200 largefiles: drop lfutil.blockstream - use filechunkiter like everybody else
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 23:43:44 +0200] rev 19001
largefiles: drop lfutil.blockstream - use filechunkiter like everybody else The old chunk size is kept - just to avoid changing it.
Mon, 15 Apr 2013 23:35:43 +0200 largefiles: refactoring - use findfile in localstore._getfile
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 23:35:43 +0200] rev 19000
largefiles: refactoring - use findfile in localstore._getfile
Mon, 15 Apr 2013 23:35:18 +0200 largefiles: refactoring - return hex from _getfile and copyandhash
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 23:35:18 +0200] rev 18999
largefiles: refactoring - return hex from _getfile and copyandhash
Mon, 15 Apr 2013 23:32:33 +0200 largefiles: refactoring - create destination dir in lfutil.link
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 23:32:33 +0200] rev 18998
largefiles: refactoring - create destination dir in lfutil.link
Tue, 09 Apr 2013 23:40:11 +0900 summary: clear "commonincoming" also if branches are different
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Tue, 09 Apr 2013 23:40:11 +0900] rev 18997
summary: clear "commonincoming" also if branches are different Before this patch, "commonincoming" calculated by "discovery.findcommonincoming()" is cleared, only if "default" URL without branch part (tail "#branch" of URL) differs from "default-push" URL without branch part. But common revisions in "commonincoming" calculated for a branch doesn't include ones for another branch, even if URLs without branch part are same. The result of "discovery.findcommonoutgoing()" invocation with such "commonincoming" becomes incorrect in some cases. This patch clears "commonincoming", also if branch part of "default" differs from one of "default-push". To avoid redundant looking up: - "ui.expandpath('default')" and "ui.expandpath('default-push', 'default')" are not compared directly, even though they contain branch information, because they are not yet normalized by "hg.parseurl()": tail "/" of path, for example - "commonincoming" is not cleared, if branch isn't specified in "default" URL, because such "commonincoming" contains common revisions for all branches This patch also tests "different path, same branch" pattern to check careless degrading around comparison between source and destination.
Tue, 09 Apr 2013 23:40:11 +0900 summary: make "incoming" information sensitive to branch in URL (issue3830)
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Tue, 09 Apr 2013 23:40:11 +0900] rev 18996
summary: make "incoming" information sensitive to branch in URL (issue3830) Before this patch, "incoming" information of "hg summary --remote" is not sensitive to the branch specified in the URL of the destination repository, even though "hg pull"/"hg incoming" are so. Invocation of "discovery.findcommonincoming()" without "heads" argument treats revisions on branches other than the one specified in the URL as incoming ones unexpectedly. This patch looks head revisions, which are already detected by "hg.addbranchrevs()" from URL, up against "other" repository, and invokes "discovery.findcommonincoming()" with list of them as "heads" to limit calculation of incoming revisions.
Tue, 09 Apr 2013 23:40:10 +0900 histedit: make "hg histedit" sensitive to branch in URL
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Tue, 09 Apr 2013 23:40:10 +0900] rev 18995
histedit: make "hg histedit" sensitive to branch in URL Before this patch, "hg histedit" are not sensitive to the branch specified in the URL of the destination repository, even though "hg push"/"hg outgoing" are so: Invocation of "discovery.findcommonoutgoing()" without "onlyheads" argument treats revisions on branches other than the one specified in the URL as outgoing ones unexpectedly. This patch specifies list of head revisions, which are already detected by "hg.addbranchrevs()" from URL and looked up against local repository, as "onlyheads" to "discovery.findcommonoutgoing()" to limit calculation of outgoing revisions.
Tue, 09 Apr 2013 23:40:10 +0900 summary: make "outgoing" information sensitive to branch in URL (issue3829)
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Tue, 09 Apr 2013 23:40:10 +0900] rev 18994
summary: make "outgoing" information sensitive to branch in URL (issue3829) Before this patch, "outgoing" information of "hg summary --remote" is not sensitive to the branch specified in the URL of the destination repository, even though "hg push"/"hg outgoing" are so: Invocation of "discovery.findcommonoutgoing()" without "onlyheads" argument treats revisions on branches other than the one specified in the URL as outgoing ones unexpectedly. This patch looks head revisions, which are already detected by "hg.addbranchrevs()" from URL, up against local repository, and invokes "discovery.findcommonoutgoing()" with list of them as "onlyheads" to limit calculation of outgoing revisions.
Fri, 29 Mar 2013 22:57:16 +0900 annotate: increase refcount of each revision correctly (issue3841)
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Fri, 29 Mar 2013 22:57:16 +0900] rev 18993
annotate: increase refcount of each revision correctly (issue3841) Before this patch, the refcount (managed in "needed") of the parents of each revision in "visit" is increased only when the parent is not yet annotated (checked by "p not in hist"). But this results in too low a refcount for a revision like "A" in the tree below ("A" is assumed to be the second parent of "C"):

    A --- B --- C
     \         /
      \-------/

The steps of annotating "C" in this case are:

1. for "C"
   1.1 increase refcount of "B"
   1.2 increase refcount of "A" (=> 1)
   1.3 defer annotation for "C"
2. for "A"
   2.1 annotate "A" (=> put result into "hist[A]")
   2.2 clear "pcache[A]" ("pcache[A] = []")
3. for "B"
   3.1 do not increase refcount of "A", because "A not in hist" is False
   3.2 annotate "B"
   3.3 decrease refcount of "A" (=> 0)
   3.4 delete "hist[A]", even though "A" is still needed by "C"
   3.5 clear "pcache[B]"
4. for "C", again
   4.1 do not increase refcount of "B", because "B not in hist" is False
   4.2 increase refcount of "A" (=> 1)
   4.3 defer annotation for "C"
5. for "A", again
   5.1 annotate "A" (=> put result into "hist[A]" again)
   5.2 clear "pcache[A]"
6. for "C", once again
   6.1 do not increase refcount of "B", because "B not in hist" is False
   6.2 do not increase refcount of "A", because "A not in hist" is False
   6.3 annotate "C"
   6.4 decrease refcount of "A", and delete "hist[A]"
   6.5 decrease refcount of "B", and delete "hist[B]"
   6.6 clear "pcache[C]"

At step (5.1), the annotation of "A" mis-recognizes all lines as created at "A", because "pcache[A]", already cleared at step (2.2), prevents it from scanning the ancestors of "A". So the annotation of "C" or its descendants loses information about "A" or its ancestors. The root cause of this problem is that the refcount of "A" is decreased at step (3.3) even though it was not increased at step (3.1). To count references correctly, this patch increases the refcount of each parent of each revision:

- regardless of "p not in hist", and
- only once for each revision in "visit" (by "not pcached")

In fact, this problem should occur only in legacy repositories in which a filelog includes a merge between a revision and its own ancestor (as the second parent), because:

- the tree is scanned depth-first, so without such merges the revisions in "visit" each refer to different revisions as parents
- recent Mercurial doesn't allow such merges: the changelog and manifest can include them somehow, but filelogs can't, because "localrepository._filecommit()" converts such a merge request into linear history

This patch tests the merge cases below; they are taken from the filelog of "mercurial/commands.py" in the repository of Mercurial itself.

- both parents are the same

      10 --- 11 --- 12
        \___/

      filelog rev:   changeset id:
      10             00ea3613f82c
      11             fc4a6e5b5812
      12             4f802588cdfb

- the second parent is also an ancestor of the first one

      37 --- 38 --- 39 --- 40
        \________________/

      filelog rev:   changeset id:
      37             f8d56da6ac8f
      38             38919e1c254d
      39             d3400605d246
      40             f06a4a3b86a7
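The corrected bookkeeping can be illustrated with a small, self-contained sketch of such a refcount-driven traversal. This is not the real context.annotate() code: parents() and annotateone() are caller-supplied stand-ins, and only the pattern mirrors what is described above and in the next entry - count each parent exactly once per child, release a result when its count drops to zero, and reuse a result that has already been computed.

    def annotate_dag(base, parents, annotateone):
        # parents(rev) -> list of parent revs; annotateone(rev, parentresults)
        # -> annotation result.  Both are illustrative stand-ins.
        hist = {}             # rev -> finished annotation result
        pcache = {}           # rev -> cached parent list ([] once consumed)
        needed = {base: 1}    # refcount: pending children still needing rev
        visit = [base]
        while visit:
            f = visit[-1]
            pcached = f in pcache
            if not pcached:
                pcache[f] = parents(f)
            ready = True
            for p in pcache[f]:
                if p not in hist:
                    ready = False
                    visit.append(p)
                # count the reference exactly once per child, whether or not
                # the parent already has a result
                if not pcached:
                    needed[p] = needed.get(p, 0) + 1
            if ready:
                visit.pop()
                if f not in hist:  # reuse an already calculated annotation
                    hist[f] = annotateone(f, [hist[p] for p in pcache[f]])
                for p in pcache[f]:
                    needed[p] -= 1
                    if not needed[p]:
                        del hist[p]  # no pending child needs it any more
                pcache[f] = []
        return hist[base]

For instance, annotate_dag(2, {0: [], 1: [0], 2: [0, 1]}.get, lambda f, ps: f) walks a three-revision DAG; the second argument of annotateone receives the parents' results, which is where a real implementation would do its line matching.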
Fri, 29 Mar 2013 22:57:15 +0900 annotate: reuse already calculated annotation
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Fri, 29 Mar 2013 22:57:15 +0900] rev 18992
annotate: reuse already calculated annotation Before this patch, an annotation is re-calculated even if it has already been calculated. This may produce an unexpected annotation, because the already cleared "pcache" ("pcache[f] = []") prevents it from scanning ancestors. This patch reuses an already calculated annotation if it is available. In fact, the "reusable" situation should be seen only in legacy repositories in which a filelog includes a merge between a revision and its own ancestor, because:
- the tree is scanned depth-first, so without such merges an annotation result is released soon after use
- recent Mercurial doesn't allow such merges: the changelog and manifest can include them somehow, but filelogs can't, because "localrepository._filecommit()" converts such a merge request into linear history
Wed, 17 Apr 2013 00:29:54 +0400 log: fix behavior with empty repositories (issue3497)
Alexander Plavin <me@aplavin.ru> [Wed, 17 Apr 2013 00:29:54 +0400] rev 18991
log: fix behavior with empty repositories (issue3497) Make the output in this special case consistent with the general case.
Tue, 16 Apr 2013 13:22:29 -0500 merge with crew
Matt Mackall <mpm@selenic.com> [Tue, 16 Apr 2013 13:22:29 -0500] rev 18990
merge with crew
Tue, 16 Apr 2013 10:08:20 -0700 revlog: don't cross-check ancestor result against Python version
Bryan O'Sullivan <bryano@fb.com> [Tue, 16 Apr 2013 10:08:20 -0700] rev 18989
revlog: don't cross-check ancestor result against Python version
Tue, 16 Apr 2013 10:08:20 -0700 parsers: a C implementation of the new ancestors algorithm
Bryan O'Sullivan <bryano@fb.com> [Tue, 16 Apr 2013 10:08:20 -0700] rev 18988
parsers: a C implementation of the new ancestors algorithm The performance of both the old and new Python ancestor algorithms depends on the number of revs they need to traverse. Although the new algorithm performs far better than the old when revs are numerically and topologically close, both algorithms become slow under other circumstances, taking up to 1.8 seconds to give answers in a Linux kernel repo. This C implementation of the new algorithm is a fairly straightforward transliteration. The only corner case of interest is that it raises an OverflowError if the number of GCA candidates found during the first pass is greater than 24, to avoid the dual perils of fixnum overflow and trying to allocate too much memory. (If this exception is raised, the Python implementation is used instead.) Performance numbers are good: in a Linux kernel repo, time for "hg debugancestors" on two distant revs (24bf01de7537 and c2a8808f5943) is as follows:

    Old Python: 0.36 sec
    New Python: 0.42 sec
    New C:      0.02 sec

For a case where the new algorithm should perform well:

    Old Python: 1.84 sec
    New Python: 0.07 sec
    New C:      measures as zero when using --time

(This commit includes a paranoid cross-check to ensure that the Python and C implementations give identical answers. The above performance numbers were measured with that check disabled.)
Tue, 16 Apr 2013 10:08:19 -0700 revlog: choose a consistent ancestor when there's a tie
Bryan O'Sullivan <bryano@fb.com> [Tue, 16 Apr 2013 10:08:19 -0700] rev 18987
revlog: choose a consistent ancestor when there's a tie Previously, we chose a rev based on numeric ordering, which could cause "the same merge" in topologically identical but numerically different repos to choose different merge bases. We now choose the lexically least node; this is stable across different revlog orderings.
Tue, 16 Apr 2013 10:08:18 -0700 ancestor: a new algorithm that is faster for nodes near tip
Bryan O'Sullivan <bryano@fb.com> [Tue, 16 Apr 2013 10:08:18 -0700] rev 18986
ancestor: a new algorithm that is faster for nodes near tip Instead of walking all the way to the root of the DAG, we generate a set of candidate GCA revs, then figure out which ones will win the race to the root (usually without needing to traverse all the way to the root). In the common case of nodes that are close to each other in both revision number and topology, this is usually a big win: it makes "hg --time debugancestors" up to 9 times faster than the more general ancestor function when measured on heads of the linux-2.6 hg repo. Victory is not assured, however. The older function can still win by a large margin if one node is much closer to the root than the other, or by a much smaller amount if one is an ancestor of the other. For now, we've also got a small paranoid harness function that calls both ancestor functions on every input and ensures that they give equivalent answers. Even without the checker function, the old ancestor function needs to stay alive for the time being, as its generality is used by context.filectx.merge.
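As an aside, the specification the candidate set is built from can be stated very compactly. The naive, runnable Python below computes the greatest common ancestors directly from that specification (the GCAs are the common ancestors that are not proper ancestors of another common ancestor); it only illustrates what the algorithm must return, not the faster algorithm described above, and the stable tie-break from the previous entry then amounts to taking min() over the candidates' node hashes.

    # naive illustration only; parentrevs(rev) returns the parent revision
    # numbers, with -1 standing for the null parent as in Mercurial
    def naive_gcas(parentrevs, a, b):
        def ancestors(rev):
            seen, stack = {rev}, [rev]
            while stack:
                for p in parentrevs(stack.pop()):
                    if p != -1 and p not in seen:
                        seen.add(p)
                        stack.append(p)
            return seen

        common = ancestors(a) & ancestors(b)
        # drop every candidate that is a proper ancestor of another candidate
        return {r for r in common
                if not any(r != c and r in ancestors(c) for c in common)}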
Tue, 16 Apr 2013 15:33:18 +0200 update: allow dirty update to foreground (successors)
Pierre-Yves David <pierre-yves.david@logilab.fr> [Tue, 16 Apr 2013 15:33:18 +0200] rev 18985
update: allow dirty update to foreground (successors) Updates to the "foreground" are no longer seen as cross-branch updates. The "foreground" consists of descendants or successors (or successors of descendants (or descendants of successors (etc.))). This allows updating with uncommitted changes that get automatically merged. This changeset is a small step forward. We also want to allow dirty updates to the "background" (precursors) and to take obsolescence into account when finding the default update destination, but those require deeper changes and will come in later changesets.
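Spelled out as a toy closure computation (illustrative only; plain dicts stand in for the changelog and obsolescence store, none of this is Mercurial API), the "foreground" is simply the closure under both the child and the successor relations:

    def foreground(roots, children, successors):
        # children[rev] and successors[rev] are tuples of revs
        seen = set(roots)
        stack = list(roots)
        while stack:
            n = stack.pop()
            for m in children.get(n, ()) + successors.get(n, ()):
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return seen

For example, with children = {1: (2,)} and successors = {2: (3,)}, foreground([1], children, successors) returns {1, 2, 3}: a successor of a descendant is still in the foreground.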
Tue, 16 Apr 2013 15:16:33 +0200 obsolete: extract foreground computation from bookmark.validdest
Pierre-Yves David <pierre-yves.david@logilab.fr> [Tue, 16 Apr 2013 15:16:33 +0200] rev 18984
obsolete: extract foreground computation from bookmark.validdest This foreground logic will be reused by the update logic.
Mon, 15 Apr 2013 17:10:58 +0200 destroyed: invalidate phaserevs cache in all cases (issue3858)
Pierre-Yves David <pierre-yves.david@logilab.fr> [Mon, 15 Apr 2013 17:10:58 +0200] rev 18983
destroyed: invalidate phaserevs cache in all cases (issue3858) When revisions are destroyed, the `phaserevs` cache becomes invalid in most cases. This cache holds a `{rev => phase}` mapping, and revision numbers have most likely changed. Since 1c8e0d6ac3b0, we filter unknown phase roots after changeset destruction. When some roots are filtered, the `phaserevs` cache is invalidated - but not if no roots were destroyed. We now invalidate the cache in all cases, filtered roots or not. This bug was a bit tricky to reproduce, as in most cases we either: rebase a set of draft changesets including the root (phaserevs invalidated), or strip tip-most changesets (no renumbering of revisions). Note that invalidating `phaserevs` is not strictly needed when only the tip-most part of the history has been destroyed, but I do not expect the overhead to be significant.
Mon, 15 Apr 2013 01:59:11 +0200 largefiles: deprecate --all-largefiles for pull
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:59:11 +0200] rev 18982
largefiles: deprecate --all-largefiles for pull The same can be achieved with --lfrev pulled() and we shouldn't advertise unnecessary command line options.
Mon, 15 Apr 2013 01:59:11 +0200 largefiles: implement pull --all-largefiles as a special case of --lfrev
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:59:11 +0200] rev 18981
largefiles: implement pull --all-largefiles as a special case of --lfrev
Mon, 15 Apr 2013 01:59:11 +0200 largefiles: drop --cache-largefiles again
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:59:11 +0200] rev 18980
largefiles: drop --cache-largefiles again This goes a step further than d69585a5c5c0 and backs out the unreleased --cache-largefiles option. The same can be achieved with --lfrev heads(pulled()) and we shouldn't introduce unnecessary command line options.
Mon, 15 Apr 2013 01:59:04 +0200 largefiles: introduce pulled() revset expression for use in --lfrev
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:59:04 +0200] rev 18979
largefiles: introduce pulled() revset expression for use in --lfrev This provides a general way to do what already can be done with --all-largefiles and --cache-largefiles.
Mon, 15 Apr 2013 01:57:16 +0200 largefiles: introduce pull --lfrev option
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:57:16 +0200] rev 18978
largefiles: introduce pull --lfrev option The revset will be evaluated after the changesets have been pulled, and missing largefiles from matching revisions will be pulled to the local caches. This, in combination with revsets, will make it possible to specify different strategies for pulling largefiles. The revset expressions used for this option might be quite complex and will probably be most useful from scripts or an alias ... but less complicated than configuring hooks.
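Combining this option with the pulled() revset named in the surrounding entries, typical invocations would look something like the following (illustrative usage only, built from the options and expressions these entries mention):

    hg pull --lfrev "pulled()"          # fetch largefiles for every pulled changeset
    hg pull --lfrev "heads(pulled())"   # only for the new heads of this pull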
Mon, 15 Apr 2013 01:54:43 +0200 largefiles: refactor overridepull internals
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:54:43 +0200] rev 18977
largefiles: refactor overridepull internals
Mon, 15 Apr 2013 01:53:37 +0200 largefiles: introduce lfpull command for pulling missing largefiles
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:53:37 +0200] rev 18976
largefiles: introduce lfpull command for pulling missing largefiles
Mon, 15 Apr 2013 01:46:10 +0200 largefiles: update help
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:46:10 +0200] rev 18975
largefiles: update help Some clarifications, and some clean-up after --cache-largefiles was introduced.
Mon, 15 Apr 2013 01:43:31 +0200 largefiles: fix cat of non-largefiles from subdirectory
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:43:31 +0200] rev 18974
largefiles: fix cat of non-largefiles from subdirectory We were calling back to the original commands.cat from inside the walk loop that handled and filtered out largefiles. That did, however, happen with file paths relative to the repo root, and the original cat would fail when it applied its own walk and match on top of that. Instead, we now duplicate and modify the code from commands.cat and patch it to handle both normal files and largefiles. A change in test output shows that this also makes the exit code with largefiles consistent with the normal one in the case where one of several specified files is missing. This also fixes the combination of --output and largefiles.
Mon, 15 Apr 2013 01:41:49 +0200 largefiles: don't store whole file in memory for 'cat'
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:41:49 +0200] rev 18973
largefiles: don't store whole file in memory for 'cat'
Tue, 16 Apr 2013 13:55:38 +0200 mergetools: rename 'base' to 'merged' in meld
ronvoe12249 <ronny.voelker@elaxy.com> [Tue, 16 Apr 2013 13:55:38 +0200] rev 18972
mergetools: rename 'base' to 'merged' in meld This makes it clear which panel is the target of the merge operation.
Thu, 21 Feb 2013 14:49:25 +0100 mergetools: avoid losing the merged version with meld
ronvoe12249 <ronny.voelker@elaxy.com> [Thu, 21 Feb 2013 14:49:25 +0100] rev 18971
mergetools: avoid losing the merged version with meld Add -o $output. When using Meld as intended (merge from left and right into the center panel), the merged version is written to the wrong file without this option ($base, a temporary file, which is ignored by Mercurial). Add meld.check=changed as a secondary safety net.
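The resulting meld configuration is approximately the hgrc-style snippet below; the -o $output argument and meld.check=changed come from these entries, while the exact labels and argument order in the shipped mergetools.rc may differ:

    [merge-tools]
    meld.args = --label='local' $local --label='merged' $base --label='other' $other -o $output
    meld.check = changed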
Tue, 16 Apr 2013 09:44:29 -0500 templatekw: add default styles for hybrid types (issue3887)
Matt Mackall <mpm@selenic.com> [Tue, 16 Apr 2013 09:44:29 -0500] rev 18970
templatekw: add default styles for hybrid types (issue3887) This allows elements like file_copies to be printed as 'name (source)' when used with join.
Wed, 10 Apr 2013 02:27:35 +0900 largefiles: improve repo wrapping detection
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Wed, 10 Apr 2013 02:27:35 +0900] rev 18969
largefiles: improve repo wrapping detection Before this patch, the repo wrapping detection in "reposetup()" of largefiles can only detect limited repo wrapping: replacing target functions with another function named "wrap". So it can't detect repo wrapping even in the recommended style: replacing the repo's "__class__" with a derived class. This patch can detect repo wrapping in both styles below:

- replacing the repo's "__class__" with a derived class (recommended style):

      class derived(repo.__class__):
          def push(self, *args, **kwargs):
              return super(derived, self).push(*args, **kwargs)
      repo.__class__ = derived

- replacing a function of the repo with another one (not recommended style):

      orgpush = repo.push
      def push(*args, **kwargs):
          return orgpush(*args, **kwargs)
      repo.push = push
Thu, 21 Mar 2013 23:27:37 +0100 hgweb: respond HTTP_NOT_FOUND when an archive request does not match any files
Angel Ezquerra <angel.ezquerra@gmail.com> [Thu, 21 Mar 2013 23:27:37 +0100] rev 18968
hgweb: respond HTTP_NOT_FOUND when an archive request does not match any files
Thu, 21 Mar 2013 22:09:15 +0100 archive: raise error.Abort if the file pattern matches no files
Angel Ezquerra <angel.ezquerra@gmail.com> [Thu, 21 Mar 2013 22:09:15 +0100] rev 18967
archive: raise error.Abort if the file pattern matches no files Note that we could raise this exception even if no pattern was specified but the revision contained no files. However, this should not happen in practice, since in that case commands.py/archive would exit earlier with a "no working directory: please specify a revision" error message instead.
Sat, 09 Feb 2013 14:22:52 -0500 ui: add 'force' parameter to traceback() to override the current print setting
Matt Harbison <matt_harbison@yahoo.com> [Sat, 09 Feb 2013 14:22:52 -0500] rev 18966
ui: add 'force' parameter to traceback() to override the current print setting This will allow a current traceback.print_exc() call in dispatch to be replaced with ui.traceback() even if --traceback was not given on the command line.
Sat, 09 Feb 2013 14:15:34 -0500 ui: add support for fully printing chained exception stacks in ui.traceback()
Matt Harbison <matt_harbison@yahoo.com> [Sat, 09 Feb 2013 14:15:34 -0500] rev 18965
ui: add support for fully printing chained exception stacks in ui.traceback() Currently, only SubrepoAbort has a cause chained to it.
Wed, 06 Feb 2013 22:54:09 -0500 subrepo: chain the original exception to SubrepoAbort
Matt Harbison <matt_harbison@yahoo.com> [Wed, 06 Feb 2013 22:54:09 -0500] rev 18964
subrepo: chain the original exception to SubrepoAbort The tracebacks in subrepos are truncated at the point where the original exception is caught and SubrepoAbort is raised in its place since 9e3910db4e78. That hides the most relevant subrepo methods when an error occurs. Python 2.x doesn't support chaining exceptions, so it is manually done here for manual printing later.
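A minimal sketch of the manual chaining described above, in Python 2-era style. SubrepoAbort and the idea of a chained cause come from these entries; the attribute name, the wrapper function, and everything else here are illustrative, not the actual subrepo.py code.

    import sys

    class SubrepoAbort(Exception):
        """Raised in place of the original subrepo exception."""
        def __init__(self, message, cause=None):
            Exception.__init__(self, message)
            # (type, value, traceback) of the original exception, kept so a
            # later ui.traceback(force=True)-style printer can show the chain
            self.cause = cause

    def run_in_subrepo(operation):
        try:
            return operation()
        except Exception:
            # manual chaining: Python 2 has no "raise ... from ..."
            raise SubrepoAbort('error while handling subrepo',
                               cause=sys.exc_info())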
Mon, 15 Apr 2013 01:41:47 +0200 debugrebuildstate: rename to debugrebuilddirstate
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:41:47 +0200] rev 18963
debugrebuildstate: rename to debugrebuilddirstate There is a lot of state, but this command is for rebuilding the dirstate.
Mon, 15 Apr 2013 01:41:27 +0200 debugstate: rename to debugdirstate
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:41:27 +0200] rev 18962
debugstate: rename to debugdirstate There is a lot of state, but this command is for debugging the dirstate.
Mon, 15 Apr 2013 01:39:02 +0200 debugrebuildstate: clarify that rev can't be specified without -r
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:39:02 +0200] rev 18961
debugrebuildstate: clarify that rev can't be specified without -r -r has a default value of '' on the command line. The function default value of 'tip' is thus never used, and any attempt at specifying revisions without -r will fail. It seems like the intended behavior was that 'hg debugrebuildstate' without any parameters should set the parents to tip. That would be very confusing now that the command is primarily used to recover from incorrect stat info. It is apparently undocumented that '' is the same as '.' ... unless it is passed in a place where revsets are used.
Mon, 15 Apr 2013 01:37:23 +0200 check-code: check txt files for trailing whitespace
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:37:23 +0200] rev 18960
check-code: check txt files for trailing whitespace
Mon, 15 Apr 2013 01:37:23 +0200 check-code: catch trailing space in comments
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:37:23 +0200] rev 18959
check-code: catch trailing space in comments
Mon, 15 Apr 2013 01:37:23 +0200 spelling: fix typos and spelling errors
Mads Kiilerich <madski@unity3d.com> [Mon, 15 Apr 2013 01:37:23 +0200] rev 18958
spelling: fix typos and spelling errors
Thu, 11 Apr 2013 14:54:18 +0200 wireproto: clarify cryptic 'remote: unsynced changes' error message on push
Mads Kiilerich <madski@unity3d.com> [Thu, 11 Apr 2013 14:54:18 +0200] rev 18957
wireproto: clarify cryptic 'remote: unsynced changes' error message on push The message was not very much to the point and did not in any way help an ordinary user. 'repository changed while preparing/uploading bundle - please try again' is more correct, gives the user some understanding of what is going on, and tells them how to 'recover' from the situation. The 'bundle' aspect could be seen as an implementation detail that shouldn't be mentioned, but I think it helps give an exact error message. The message could still leave the user wondering why Mercurial doesn't lock the repo and how unsafe it thus is. Explaining that, however, is too much detail.