changeset 31592:df82f375fa00
checkheads: extract obsolete post processing in its own function
The checkheads function is long and complex; extracting that logic into a
subfunction is a win in itself. As the comment in the code says, this
post-processing is currently very basic and either misbehaves or fails to
detect valid pushes in many cases. My deeper motive for this extraction is to
make it easier to provide extensive testing of these cases and strategies to
cover them. The final tests and logic will make it to core once done.
author    Pierre-Yves David <pierre-yves.david@ens-lyon.org>
date      Tue, 21 Mar 2017 23:30:13 +0100
parents   c6921568cd20
children  37a0ad669051
files     mercurial/discovery.py
diffstat  1 files changed, 43 insertions(+), 29 deletions(-)
--- a/mercurial/discovery.py	Wed Mar 22 11:26:23 2017 -0700
+++ b/mercurial/discovery.py	Tue Mar 21 23:30:13 2017 +0100
@@ -343,38 +343,13 @@
         oldhs.update(unsyncedheads)
         candidate_newhs.update(unsyncedheads)
         dhs = None # delta heads, the new heads on branch
-        discardedheads = set()
         if not repo.obsstore:
+            discardedheads = set()
             newhs = candidate_newhs
         else:
-            # remove future heads which are actually obsoleted by another
-            # pushed element:
-            #
-            # XXX as above, There are several cases this code does not handle
-            # XXX properly
-            #
-            # (1) if <nh> is public, it won't be affected by obsolete marker
-            #     and a new is created
-            #
-            # (2) if the new heads have ancestors which are not obsolete and
-            #     not ancestors of any other heads we will have a new head too.
-            #
-            # These two cases will be easy to handle for known changeset but
-            # much more tricky for unsynced changes.
-            #
-            # In addition, this code is confused by prune as it only looks for
-            # successors of the heads (none if pruned) leading to issue4354
-            newhs = set()
-            for nh in candidate_newhs:
-                if nh in repo and repo[nh].phase() <= phases.public:
-                    newhs.add(nh)
-                else:
-                    for suc in obsolete.allsuccessors(repo.obsstore, [nh]):
-                        if suc != nh and suc in allfuturecommon:
-                            discardedheads.add(nh)
-                            break
-                    else:
-                        newhs.add(nh)
+            newhs, discardedheads = _postprocessobsolete(pushop,
+                                                         allfuturecommon,
+                                                         candidate_newhs)
         unsynced = sorted(h for h in unsyncedheads if h not in discardedheads)
         if unsynced:
             if None in unsynced:
@@ -434,3 +409,42 @@
                 repo.ui.note((" %s\n") % short(h))
     if errormsg:
         raise error.Abort(errormsg, hint=hint)
+
+def _postprocessobsolete(pushop, futurecommon, candidate_newhs):
+    """post process the list of new heads with obsolescence information
+
+    Exists as a subfunction to contain the complexity and allow extensions to
+    experiment with smarter logic.
+    Returns (newheads, discarded_heads) tuple
+    """
+    # remove future heads which are actually obsoleted by another
+    # pushed element:
+    #
+    # XXX as above, There are several cases this code does not handle
+    # XXX properly
+    #
+    # (1) if <nh> is public, it won't be affected by obsolete marker
+    #     and a new is created
+    #
+    # (2) if the new heads have ancestors which are not obsolete and
+    #     not ancestors of any other heads we will have a new head too.
+    #
+    # These two cases will be easy to handle for known changeset but
+    # much more tricky for unsynced changes.
+    #
+    # In addition, this code is confused by prune as it only looks for
+    # successors of the heads (none if pruned) leading to issue4354
+    repo = pushop.repo
+    newhs = set()
+    discarded = set()
+    for nh in candidate_newhs:
+        if nh in repo and repo[nh].phase() <= phases.public:
+            newhs.add(nh)
+        else:
+            for suc in obsolete.allsuccessors(repo.obsstore, [nh]):
+                if suc != nh and suc in futurecommon:
+                    discarded.add(nh)
+                    break
+            else:
+                newhs.add(nh)
+    return newhs, discarded
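
As a rough illustration of the extensibility goal stated above (not part of this
changeset), an extension could wrap the new _postprocessobsolete hook point with
mercurial.extensions.wrapfunction and experiment with a different strategy while
keeping the stock behaviour as a fallback. The extension and wrapper names below
are hypothetical:

    # hypothetical extension sketch: instrument discovery._postprocessobsolete
    from mercurial import discovery, extensions

    def _loggingpostprocess(orig, pushop, futurecommon, candidate_newhs):
        # Delegate to the stock implementation, then report how many candidate
        # heads were discarded as obsoleted, so an alternative strategy can be
        # compared against the current (admittedly basic) logic.
        newhs, discarded = orig(pushop, futurecommon, candidate_newhs)
        pushop.repo.ui.debug('postprocessobsolete: discarded %d head(s)\n'
                             % len(discarded))
        return newhs, discarded

    def uisetup(ui):
        extensions.wrapfunction(discovery, '_postprocessobsolete',
                                _loggingpostprocess)

Because the wrapper receives the original function as its first argument, an
experimental strategy can be tried and its result compared against core's answer
before any behaviour change is proposed for inclusion.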