tests/test-dirstate-backup.t
author Kyle Lippincott <spectral@google.com>
Fri, 13 Dec 2019 14:40:52 -0800
changeset 43891 7eb6a2680ae6
parent 34940 c2b30348930f
child 45827 8d72e29ad1e0
permissions -rw-r--r--
dirstate: when calling rebuild(), avoid some N^2 codepaths

I had a user repo with 200k files in it. Calling `hg debugrebuilddirstate`
took tens of minutes (I didn't wait for it). In that situation,
changedfiles==allfiles, and both are lists. This meant that we had to run an
average of 100k comparisons, for each of 200k files, just to check whether a
file needed to have normallookup called (it always did), or drop.

While it's probably not a huge issue, in my very awkward synthetic benchmark I
wrote (not using a benchmark library or anything), I was seeing some slowdowns
for small-changedfiles and very-large-allfiles invocations, with an inflection
somewhere around 10 items in changedfiles (regardless of the size of allfiles);
above 10 items in changedfiles, the new code appears to always be faster.

For the case of 50k files in changedfiles and the same items in allfiles, I'm
seeing differences of 15s of just running comparisons vs. 0.003793s. I haven't
bothered to run a comparison of 200k items in changedfiles and allfiles. :)

Differential Revision: https://phab.mercurial-scm.org/D7665
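
The change described above lives in Mercurial's Python dirstate code, not in
this test file. As a rough illustration of the pattern only (a minimal sketch;
the function name split_changed and its exact structure are hypothetical, not
the actual mercurial.dirstate.rebuild() code), switching from list membership
tests to a set above a small threshold replaces the per-file list scan with
constant-time lookups:

    def split_changed(allfiles, changedfiles):
        """Decide which files need normallookup() and which need drop().

        Illustrative sketch only; not the real Mercurial implementation.
        """
        if len(changedfiles) < 10:
            # For a handful of changed files, scanning allfiles directly is
            # cheaper than building a set out of it (the commit message notes
            # an inflection point around 10 entries in changedfiles).
            to_lookup = [f for f in changedfiles if f in allfiles]
            to_drop = [f for f in changedfiles if f not in allfiles]
        else:
            # Old pattern: `f in changedfiles` on a list costs O(len(list))
            # per file, roughly 200k * 100k comparisons for the repo described
            # above. Set membership tests are O(1) on average.
            allset = set(allfiles)
            changedset = set(changedfiles)
            to_lookup = changedset & allset
            to_drop = changedset - allset
        return to_lookup, to_drop

The small-changedfiles branch reflects the observation above that building a
set only pays off once changedfiles grows past roughly 10 entries; beyond
that, the set-based branch turns the quadratic comparison work into a couple
of linear passes.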

Set up

  $ hg init repo
  $ cd repo

Try to import an empty patch

  $ hg import --no-commit - <<EOF
  > EOF
  applying patch from stdin
  abort: stdin: no diffs found
  [255]

No dirstate backups are left behind

  $ ls .hg/dirstate* | sort
  .hg/dirstate