Sun, 25 Jun 2017 17:00:15 -0700 merge: convert repo.wwrite() calls to wctx[f].write()
Phil Cohen <phillco@fb.com> [Sun, 25 Jun 2017 17:00:15 -0700] rev 33095
merge: convert repo.wwrite() calls to wctx[f].write()

As with the previous patch in this series, workingfilectx.write() is a direct call to repo.wwrite(), so this change should be a no-op.
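A hedged before/after sketch of the conversion this message describes; the surrounding names (f, fctx, flags) are illustrative stand-ins, not the actual call sites in mercurial/merge.py:

    # Before: write through the repo's working-directory helper.
    repo.wwrite(f, fctx(f).data(), flags)

    # After: write through the working context's file context. At this
    # rev, workingfilectx.write() simply forwards to repo.wwrite(), so
    # behavior is unchanged.
    wctx[f].write(fctx(f).data(), flags)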
Sun, 25 Jun 2017 16:58:26 -0700 merge: replace repo.wvfs.unlinkpath() with calls to wctx[f].remove()
Phil Cohen <phillco@fb.com> [Sun, 25 Jun 2017 16:58:26 -0700] rev 33094
merge: replace repo.wvfs.unlinkpath() with calls to wctx[f].remove()

The code inside workingfilectx.remove() is a straight call to repo.wvfs.unlinkpath, so this should be a no-op.
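An illustrative sketch of the substitution; the ignoremissing flag is an assumption about the call sites, which live in mercurial/merge.py:

    # Before: unlink directly through the working-directory vfs.
    repo.wvfs.unlinkpath(f, ignoremissing=True)

    # After: go through the working file context. At this rev,
    # workingfilectx.remove() is a straight call to
    # repo.wvfs.unlinkpath(), so this is a no-op.
    wctx[f].remove(ignoremissing=True)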
Sun, 25 Jun 2017 16:56:49 -0700 merge: pass wctx to batchremove and batchget
Phil Cohen <phillco@fb.com> [Sun, 25 Jun 2017 16:56:49 -0700] rev 33093
merge: pass wctx to batchremove and batchget

We would like to migrate direct calls to repo.wvfs/wwrite/wread/etc. into calls on the relevant workingfilectx, both as a cleanup (reducing the number of working-copy functions on `repo`) and to facilitate an in-memory merge that doesn't write to the working copy. In order to do that, the first step is to ensure we pass the target wctx around and perform our writes and reads on it. Later, this object might become a memctx.
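A minimal sketch of the threading described above; the signatures mirror the shape of batchremove/batchget in mercurial/merge.py, but the bodies here are simplified illustrations rather than the real code:

    # Assumed shapes, for illustration only.
    def batchremove(repo, wctx, actions):
        # Remove files via the working context rather than repo.wvfs,
        # so that wctx could later be an in-memory context (memctx).
        for f, args, msg in actions:
            wctx[f].remove(ignoremissing=True)

    def batchget(repo, mctx, wctx, actions):
        # Write file data from the merge context into the target
        # working context instead of calling repo.wwrite() directly.
        for f, args, msg in actions:
            flags = args[0]
            wctx[f].write(mctx[f].data(), flags)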
Sat, 24 Jun 2017 23:05:57 +0900 revset: add depth limit to descendants() (issue5374)
Yuya Nishihara <yuya@tcha.org> [Sat, 24 Jun 2017 23:05:57 +0900] rev 33092
revset: add depth limit to descendants() (issue5374)

This is a naive implementation using two-pass scanning. Tracking descendants isn't an easy problem if both start and stop depths are specified. It's impractical to remember all possible depths of each node while scanning from roots to descendants, because the number of depths explodes. Instead, we could cache (min, max) depths as a good approximation and track ancestors back when needed, but that's likely to have off-by-one bugs. Since this implementation appears not to be significantly slower, and is quite straightforward, I think it's good enough for practical use cases. The time and space complexity is roughly O(n).

revisions:
 0) 1-pass scanning with (min, max)-depth cache (worst-case quadratic)
 1) 2-pass scanning (this version)
repository: mozilla-central

# descendants(0) (for reference)
 *) 0.430353

# descendants(0, depth=1000)
 0) 0.264889
 1) 0.398289

# descendants(limit(tip:0, 1, offset=10000), depth=1000)
 0) 0.025478
 1) 0.029099

# descendants(0, depth=2000, startdepth=1000)
 0) painfully slow (due to quadratic backtracking of ancestors)
 1) 1.531138
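For illustration, here is a minimal, self-contained sketch of one way to do two-pass depth-bounded descendant scanning over a revision DAG. It is an assumption-laden stand-in, not the code from this patch (which lives in Mercurial's revset/dagop layer): `parents` maps each rev to its parent revs, revs are numbered so that parents precede children, and a node's depth is taken to be the smallest number of generations separating it from the roots.

    # Illustrative sketch only; not Mercurial's implementation.
    def descendantsofdepth(parents, maxrev, roots, startdepth, stopdepth):
        """Yield revs whose depth from roots is in [startdepth, stopdepth)."""
        startrev = min(roots)

        # Pass 1: one forward scan to invert parent links into a
        # children map, restricted to revs at or above the lowest root.
        children = {rev: [] for rev in range(startrev, maxrev + 1)}
        for rev in range(startrev + 1, maxrev + 1):
            for p in parents[rev]:
                if p >= startrev:
                    children[p].append(rev)

        # Pass 2: walk revs in ascending (hence topological) order,
        # propagating the minimum depth, and emit revs in the window.
        depth = {rev: 0 for rev in roots}
        for rev in range(startrev, maxrev + 1):
            if rev not in depth:
                continue  # not a descendant of the roots
            d = depth[rev]
            if startdepth <= d < stopdepth:
                yield rev
            if d + 1 < stopdepth:  # paths that leave the window can
                for c in children[rev]:  # never re-enter it
                    if c not in depth or depth[c] > d + 1:
                        depth[c] = d + 1

Both passes touch each rev and each parent edge at most a constant number of times, which matches the roughly-O(n) behavior the message describes.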