view tests/test-narrow-shallow.t @ 39583:ee087f0d7db5

util: allow lrucachedict to track cost of entries

Currently, lrucachedict allows tracking of arbitrary items with the only limit being the total number of items in the cache. Caches can be a lot more useful when they are bound by the size of the items in them rather than the number of elements in the cache.

In preparation for teaching lrucachedict to enforce a max size of cached items, we teach lrucachedict to optionally associate a numeric cost value with each node. We purposefully let the caller define their own cost for nodes.

This does introduce some overhead. Most of it comes from __setitem__, since that function now calls into insert(), thus introducing Python function call overhead.

$ hg perflrucachedict --size 4 --gets 1000000 --sets 1000000 --mixed 1000000
! gets
! wall 0.599552 comb 0.600000 user 0.600000 sys 0.000000 (best of 17)
! wall 0.614643 comb 0.610000 user 0.610000 sys 0.000000 (best of 17)
! inserts
! <not available>
! wall 0.655817 comb 0.650000 user 0.650000 sys 0.000000 (best of 16)
! sets
! wall 0.540448 comb 0.540000 user 0.540000 sys 0.000000 (best of 18)
! wall 0.805644 comb 0.810000 user 0.810000 sys 0.000000 (best of 13)
! mixed
! wall 0.651556 comb 0.660000 user 0.660000 sys 0.000000 (best of 15)
! wall 0.781357 comb 0.780000 user 0.780000 sys 0.000000 (best of 13)

$ hg perflrucachedict --size 1000 --gets 1000000 --sets 1000000 --mixed 1000000
! gets
! wall 0.621014 comb 0.620000 user 0.620000 sys 0.000000 (best of 16)
! wall 0.615146 comb 0.620000 user 0.620000 sys 0.000000 (best of 17)
! inserts
! <not available>
! wall 0.698115 comb 0.700000 user 0.700000 sys 0.000000 (best of 15)
! sets
! wall 0.560247 comb 0.560000 user 0.560000 sys 0.000000 (best of 18)
! wall 0.832495 comb 0.830000 user 0.830000 sys 0.000000 (best of 12)
! mixed
! wall 0.686172 comb 0.680000 user 0.680000 sys 0.000000 (best of 15)
! wall 0.841359 comb 0.840000 user 0.840000 sys 0.000000 (best of 12)

We're still under 1us per insert, which seems like reasonable performance for a cache.

If we comment out updating of self.totalcost during insert(), performance of insert() is identical to __setitem__ before. However, I don't want to make total cost evaluation lazy because it has significant performance implications for when we need to evaluate the total cost at mutation time (it requires a cache traversal, which could be expensive for large caches).

Differential Revision: https://phab.mercurial-scm.org/D4502
author Gregory Szorc <gregory.szorc@gmail.com>
date Fri, 07 Sep 2018 12:14:42 -0700
parents 8d033b348d85
children 34f2c634c8f6
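The changeset description above is about util.lrucachedict rather than this test file itself. As an illustration only, here is a minimal sketch of the idea it describes: an LRU mapping whose insert() optionally takes a caller-defined numeric cost and which keeps a running totalcost, updated eagerly at each mutation. The class name costlru and the OrderedDict-based implementation are invented for this example; Mercurial's actual lrucachedict is implemented differently and is not reproduced here.

  import collections

  class costlru(object):
      """Illustrative LRU mapping that tracks an optional per-entry cost.

      This is a sketch only, not Mercurial's util.lrucachedict.
      """

      def __init__(self, maxsize):
          self._data = collections.OrderedDict()  # key -> (value, cost)
          self._maxsize = maxsize
          self.totalcost = 0  # maintained eagerly on every mutation

      def insert(self, key, value, cost=0):
          # Remove any existing entry first so its cost is not counted twice.
          if key in self._data:
              _, oldcost = self._data.pop(key)
              self.totalcost -= oldcost
          self._data[key] = (value, cost)
          self.totalcost += cost
          # Enforce the entry-count bound by evicting least recently used items.
          while len(self._data) > self._maxsize:
              _, (_, evictedcost) = self._data.popitem(last=False)
              self.totalcost -= evictedcost

      def __setitem__(self, key, value):
          # __setitem__ funnels into insert() with a zero cost, which mirrors
          # the extra function call overhead the description mentions.
          self.insert(key, value, cost=0)

      def __getitem__(self, key):
          value, _cost = self._data[key]
          self._data.move_to_end(key)  # mark as most recently used
          return value

  # Example usage: the cost is caller-defined, e.g. the byte length of values.
  cache = costlru(maxsize=2)
  cache.insert('a', b'xxxx', cost=4)
  cache.insert('b', b'yy', cost=2)
  cache['c'] = b'zzz'            # zero-cost entry via __setitem__
  assert 'a' not in cache._data  # oldest entry evicted, its cost subtracted
  assert cache.totalcost == 2

Keeping totalcost up to date at mutation time, as the description argues, avoids having to traverse the whole cache whenever the total is needed.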

#require no-reposimplestore

  $ . "$TESTDIR/narrow-library.sh"

  $ hg init master
  $ cd master
  $ cat >> .hg/hgrc <<EOF
  > [narrow]
  > serveellipses=True
  > EOF
  $ for x in `$TESTDIR/seq.py 10`
  > do
  >   echo $x > "f$x"
  >   hg add "f$x"
  > done
  $ hg commit -m "Add root files"
  $ mkdir d1 d2
  $ for x in `$TESTDIR/seq.py 10`
  > do
  >   echo d1/$x > "d1/f$x"
  >   hg add "d1/f$x"
  >   echo d2/$x > "d2/f$x"
  >   hg add "d2/f$x"
  > done
  $ hg commit -m "Add d1 and d2"
  $ for x in `$TESTDIR/seq.py 10`
  > do
  >   echo f$x rev2 > "f$x"
  >   echo d1/f$x rev2 > "d1/f$x"
  >   echo d2/f$x rev2 > "d2/f$x"
  >   hg commit -m "Commit rev2 of f$x, d1/f$x, d2/f$x"
  > done
  $ cd ..

narrow and shallow clone the d2 directory, limiting history with --depth

  $ hg clone --narrow ssh://user@dummy/master shallow --include "d2" --depth 2
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 4 changesets with 13 changes to 10 files
  new changesets *:* (glob)
  updating to branch default
  10 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ cd shallow
  $ hg log -T '{rev}{if(ellipsis,"...")}: {desc}\n'
  3: Commit rev2 of f10, d1/f10, d2/f10
  2: Commit rev2 of f9, d1/f9, d2/f9
  1: Commit rev2 of f8, d1/f8, d2/f8
  0...: Commit rev2 of f7, d1/f7, d2/f7
  $ hg update 0
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ cat d2/f7 d2/f8
  d2/f7 rev2
  d2/8

  $ cd ..

change every upstream file once

  $ cd master
  $ for x in `$TESTDIR/seq.py 10`
  > do
  >   echo f$x rev3 > "f$x"
  >   echo d1/f$x rev3 > "d1/f$x"
  >   echo d2/f$x rev3 > "d2/f$x"
  >   hg commit -m "Commit rev3 of f$x, d1/f$x, d2/f$x"
  > done
  $ cd ..

pull new changes with --depth specified. There were 10 new changes touching the d2
directory, but the shallow pull should fetch only the 3 most recent in full, plus a
single ellipsis changeset standing in for the older ones.

  $ cd shallow
  $ hg pull --depth 2
  pulling from ssh://user@dummy/master
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 4 changesets with 10 changes to 10 files
  new changesets *:* (glob)
  (run 'hg update' to get a working copy)
  $ hg log -T '{rev}{if(ellipsis,"...")}: {desc}\n'
  7: Commit rev3 of f10, d1/f10, d2/f10
  6: Commit rev3 of f9, d1/f9, d2/f9
  5: Commit rev3 of f8, d1/f8, d2/f8
  4...: Commit rev3 of f7, d1/f7, d2/f7
  3: Commit rev2 of f10, d1/f10, d2/f10
  2: Commit rev2 of f9, d1/f9, d2/f9
  1: Commit rev2 of f8, d1/f8, d2/f8
  0...: Commit rev2 of f7, d1/f7, d2/f7
  $ hg update 4
  merging d2/f1
  merging d2/f2
  merging d2/f3
  merging d2/f4
  merging d2/f5
  merging d2/f6
  merging d2/f7
  3 files updated, 7 files merged, 0 files removed, 0 files unresolved
  $ cat d2/f7 d2/f8
  d2/f7 rev3
  d2/f8 rev2
  $ hg update 7
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ cat d2/f10
  d2/f10 rev3

  $ cd ..

cannot clone with zero or negative depth

  $ hg clone --narrow ssh://user@dummy/master bad --include "d2" --depth 0
  requesting all changes
  remote: abort: depth must be positive, got 0
  abort: pull failed on remote
  [255]
  $ hg clone --narrow ssh://user@dummy/master bad --include "d2" --depth -1
  requesting all changes
  remote: abort: depth must be positive, got -1
  abort: pull failed on remote
  [255]