obsolete: order of magnitude speedup in _computebumpedset
Reminder: a changeset is said to be "bumped" if it tries to obsolete an immutable
changeset (e.g. the result of amending a changeset that has since become public).
The previous algorithm for computing bumped changesets was:
1) Get all public changesets
2) Find all their successors
3) Search for those that are eligible for being "bumped"
(mutable and non-obsolete)
The input size of this algorithm is `O(len(public))`, which is mostly the same as
`O(len(repo))`. Even though this approach means fewer obsolescence markers are
traversed, it is not very scalable.
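For illustration, here is a minimal standalone sketch of that old approach. It uses
toy dict/set structures, not Mercurial's actual obsstore API; every name below is
made up for the example:

  # markers: precursor node -> set of its direct successors
  markers = {'A': {'B'}, 'B': {'C'}}            # A rewritten as B, B rewritten as C
  public = {'A', 'X'}                           # immutable changesets
  obsolete = set(markers)                       # any node used as a precursor
  candidates = {'B', 'C', 'X2'} - public - obsolete   # mutable, non-obsolete

  bumped = set()
  for node in public:                           # input size: O(len(public))
      seen, stack = set(), [node]
      while stack:                              # walk every successor chain
          for succ in markers.get(stack.pop(), ()):
              if succ not in seen:
                  seen.add(succ)
                  stack.append(succ)
      bumped |= seen & candidates               # mutable, non-obsolete successors

  print(bumped)                                 # {'C'}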
The new algorithm is:
1) For each potential bumped changeset (non-obsolete, mutable)
2) iterate over its precursors
3) if a precursor is public, the changeset is bumped
We traverse more obsolescence markers, but the input size is much smaller, since
the number of potential bumped changesets should remain mostly stable over time: `O(1)`.
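The same toy example with the new approach; again just an illustrative sketch, not
the real implementation, and the reverse (successor -> precursors) mapping is
assumed to be available or cheap to build:

  # precursors: successor node -> set of its direct precursors
  precursors = {'B': {'A'}, 'C': {'B'}}         # A rewritten as B, B rewritten as C
  public = {'A', 'X'}
  obsolete = {'A', 'B'}                         # every node used as a precursor
  candidates = {'B', 'C', 'X2'} - public - obsolete   # mutable, non-obsolete

  bumped = set()
  for node in candidates:                       # input size stays roughly O(1)
      seen, stack = set(), [node]
      while stack:                              # walk the precursor chains
          for prec in precursors.get(stack.pop(), ()):
              if prec not in seen:
                  seen.add(prec)
                  stack.append(prec)
      if seen & public:                         # a public precursor => bumped
          bumped.add(node)

  print(bumped)                                 # {'C'}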
On a confidential, gigantic repository this moves bumped computation from 15.19s to
0.46s (a ×33 speedup). On "smaller" repositories (mercurial, cubicweb's review) no
significant gain was seen; the additional traversal of obsolescence markers
probably counterbalances the advantage.
Other optimisations could be done in the future (e.g. sharing the precursors cache
with divergence detection).
$ hg init
$ echo This is file a1 > a
$ hg add a
$ hg commit -m "commit #0"
$ echo This is file b1 > b
$ hg add b
$ hg commit -m "commit #1"
$ hg update 0
0 files updated, 0 files merged, 1 files removed, 0 files unresolved
$ echo This is file c1 > c
$ hg add c
$ hg commit -m "commit #2"
created new head
$ hg merge 1
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
(branch merge, don't forget to commit)
$ rm b
$ echo This is file c22 > c
Test how hg behaves when committing with a missing file added by a merge
$ hg commit -m "commit #3"
abort: cannot commit merge with missing files
[255]