tests/test-addremove-similar.t @ 35121:66c5a8cf2868

  $ hg init rep; cd rep

  $ touch empty-file
  $ $PYTHON -c 'for x in range(10000): print(x)' > large-file

  $ hg addremove
  adding empty-file
  adding large-file

  $ hg commit -m A

  $ rm large-file empty-file
  $ $PYTHON -c 'for x in range(10,10000): print(x)' > another-file

  $ hg addremove -s50
  adding another-file
  removing empty-file
  removing large-file
  recording removal of large-file as rename to another-file (99% similar)

  $ hg commit -m B
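
a rough idea of where the 99% above comes from: the score reflects how much
of the removed file's content reappears in the added file. the sketch below
approximates that with difflib on line lists; Mercurial's own scoring differs
in detail, so treat it purely as an illustration

  $ $PYTHON -c '
  > import difflib
  > # the files compared above: 0..9999 vs 10..9999, one number per line
  > a = ["%d\n" % x for x in range(10000)]
  > b = ["%d\n" % x for x in range(10, 10000)]
  > # fraction of matching lines, expressed as a percentage
  > print(int(difflib.SequenceMatcher(None, a, b).ratio() * 100))
  > '
  99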

comparing two empty files caused ZeroDivisionError in the past

  $ hg update -C 0
  2 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ rm empty-file
  $ touch another-empty-file
  $ hg addremove -s50
  adding another-empty-file
  removing empty-file
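
the old crash presumably came from dividing the amount of matched content by
the combined length of the two files, which is zero when both are empty; a
minimal sketch of that failure mode, using a hypothetical scoring function
rather than Mercurial's actual code

  $ $PYTHON -c '
  > def naive_score(old, new):
  >     # hypothetical score: matched content over combined length
  >     matched = 0
  >     return matched * 2.0 / (len(old) + len(new))
  > try:
  >     naive_score(b"", b"")
  > except ZeroDivisionError:
  >     print("ZeroDivisionError")
  > '
  ZeroDivisionError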

  $ cd ..

  $ hg init rep2; cd rep2

  $ $PYTHON -c 'for x in range(10000): print(x)' > large-file
  $ $PYTHON -c 'for x in range(50): print(x)' > tiny-file

  $ hg addremove
  adding large-file
  adding tiny-file

  $ hg commit -m A

  $ $PYTHON -c 'for x in range(70): print(x)' > small-file
  $ rm tiny-file
  $ rm large-file

  $ hg addremove -s50
  removing large-file
  adding small-file
  removing tiny-file
  recording removal of tiny-file as rename to small-file (82% similar)

  $ hg commit -m B

should be sorted by path for a stable result

  $ for i in `$PYTHON $TESTDIR/seq.py 0 9`; do
  >     cp small-file $i
  > done
  $ rm small-file
  $ hg addremove
  adding 0
  adding 1
  adding 2
  adding 3
  adding 4
  adding 5
  adding 6
  adding 7
  adding 8
  adding 9
  removing small-file
  recording removal of small-file as rename to 0 (100% similar)
  recording removal of small-file as rename to 1 (100% similar)
  recording removal of small-file as rename to 2 (100% similar)
  recording removal of small-file as rename to 3 (100% similar)
  recording removal of small-file as rename to 4 (100% similar)
  recording removal of small-file as rename to 5 (100% similar)
  recording removal of small-file as rename to 6 (100% similar)
  recording removal of small-file as rename to 7 (100% similar)
  recording removal of small-file as rename to 8 (100% similar)
  recording removal of small-file as rename to 9 (100% similar)
  $ hg commit -m '10 same files'

pick one from many identical files

  $ cp 0 a
  $ rm `$PYTHON $TESTDIR/seq.py 0 9`
  $ hg addremove
  removing 0
  removing 1
  removing 2
  removing 3
  removing 4
  removing 5
  removing 6
  removing 7
  removing 8
  removing 9
  adding a
  recording removal of 0 as rename to a (100% similar)
  $ hg revert -aq

pick one from many similar files

  $ cp 0 a
  $ for i in `$PYTHON $TESTDIR/seq.py 0 9`; do
  >     echo $i >> $i
  > done
  $ hg commit -m 'make them slightly different'
  $ rm `$PYTHON $TESTDIR/seq.py 0 9`
  $ hg addremove -s50
  removing 0
  removing 1
  removing 2
  removing 3
  removing 4
  removing 5
  removing 6
  removing 7
  removing 8
  removing 9
  adding a
  recording removal of 0 as rename to a (99% similar)
  $ hg commit -m 'always the same file should be selected'
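
the stable pick above falls out of considering the removed candidates in
sorted path order, so ties always resolve to the lowest path; a toy
illustration of that tie-breaking, not Mercurial's code

  $ $PYTHON -c '
  > removed = ["9", "3", "0", "7"]
  > scores = dict.fromkeys(removed, 99)
  > # walking candidates in sorted order and keeping the first best score
  > # means ties always resolve to the lowest path
  > print(max(sorted(removed), key=lambda f: scores[f]))
  > '
  0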

should all fail

  $ hg addremove -s foo
  abort: similarity must be a number
  [255]
  $ hg addremove -s -1
  abort: similarity must be between 0 and 100
  [255]
  $ hg addremove -s 1e6
  abort: similarity must be between 0 and 100
  [255]

  $ cd ..

Issue1527: repeated addremove causes Abort

  $ hg init rep3; cd rep3
  $ mkdir d
  $ echo a > d/a
  $ hg add d/a
  $ hg commit -m 1

  $ mv d/a d/b
  $ hg addremove -s80
  removing d/a
  adding d/b
  recording removal of d/a as rename to d/b (100% similar) (glob)
  $ hg debugstate
  r   0          0 1970-01-01 00:00:00 d/a
  a   0         -1 unset               d/b
  copy: d/a -> d/b
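
the dirstate now shows d/a as removed ('r') and d/b as added ('a') with the
copy source recorded; the second rename and addremove below run against
exactly this state (issue1527)
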
  $ mv d/b c

no copies found here (since the target isn't in d)

  $ hg addremove -s80 d
  removing d/b (glob)

copies are found here (no path restriction this time)

  $ hg addremove -s80
  adding c
  recording removal of d/a as rename to c (100% similar) (glob)

  $ cd ..