view tests/test-fix-metadata.t @ 45545:e5e1285b6f6f
largefiles: prevent in-memory merge instead of switching to on-disk
I enabled in-memory merge by default while testing some changes. I
spent quite some time troubleshooting why largefiles was still
creating an on-disk mergestate. Then I found out that it ignores the
caller's `wc` argument to `mergemod._update()` and always uses on-disk
merge. This patch changes that so we raise an error if largefiles is
used with in-memory merge. That way we'll notice if in-memory merge is
used with largefiles instead of silently ignoring the
`overlayworkingctx` instance and updating the on-disk working copy.
I felt a little bad that this would break things more for users with
both largefiles and in-memory rebase enabled. So I also added a
higher-level override to make sure that largefiles disables in-memory
rebase. It turns out that this also fixes `run-tests.py -k largefiles
--extra-config-opt rebase.experimental.inmemory=1`.
Differential Revision: https://phab.mercurial-scm.org/D9069
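
For illustration only, here is a minimal sketch of the pattern this change
describes: an extension wrapper that aborts when a caller requests an
in-memory merge it cannot honor, rather than silently falling back to the
on-disk working copy. This is not the actual D9069 patch; the wrapper name,
the wrapped function, and the error message are assumptions.

  # Hypothetical extension code, not the largefiles implementation.
  from mercurial import error, extensions, merge as mergemod

  def _refuseinmemory(orig, *args, **kwargs):
      # An in-memory merge passes an overlayworkingctx as 'wc'; its
      # isinmemory() returns True, while a real working copy returns False.
      wc = kwargs.get('wc')
      if wc is not None and wc.isinmemory():
          raise error.Abort(b'this extension does not support in-memory merge')
      return orig(*args, **kwargs)

  def extsetup(ui):
      # Wrap merge.update() so the check runs for every update/merge request.
      extensions.wrapfunction(mergemod, 'update', _refuseinmemory)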
author    Martin von Zweigbergk <martinvonz@google.com>
date      Tue, 22 Sep 2020 23:18:37 -0700
parents   2d70b1118af2
children  (none)
A python hook for "hg fix" that prints out the number of files and revisions
that were affected, along with which fixer tools were applied. Also checks how
many times it sees a specific key generated by one of the fixer tools defined
below.

  $ cat >> $TESTTMP/postfixhook.py <<EOF
  > import collections
  > def file(ui, repo, rev=None, path=b'', metadata=None, **kwargs):
  >     ui.status(b'fixed %s in revision %d using %s\n' %
  >               (path, rev, b', '.join(metadata.keys())))
  > def summarize(ui, repo, replacements=None, wdirwritten=False,
  >               metadata=None, **kwargs):
  >     counts = collections.defaultdict(int)
  >     keys = 0
  >     for fixername, metadatalist in metadata.items():
  >         for metadata in metadatalist:
  >             if metadata is None:
  >                 continue
  >             counts[fixername] += 1
  >             if 'key' in metadata:
  >                 keys += 1
  >     ui.status(b'saw "key" %d times\n' % (keys,))
  >     for name, count in sorted(counts.items()):
  >         ui.status(b'fixed %d files with %s\n' % (count, name))
  >     if replacements:
  >         ui.status(b'fixed %d revisions\n' % (len(replacements),))
  >     if wdirwritten:
  >         ui.status(b'fixed the working copy\n')
  > EOF

Some mock output for fixer tools that demonstrate what could go wrong with
expecting the metadata output format.

  $ printf 'new content\n' > $TESTTMP/missing
  $ printf 'not valid json\0new content\n' > $TESTTMP/invalid
  $ printf '{"key": "value"}\0new content\n' > $TESTTMP/valid

Configure some fixer tools based on the output defined above, and enable the
hooks defined above. Disable parallelism to make output of the parallel file
processing phase stable.

  $ cat >> $HGRCPATH <<EOF
  > [extensions]
  > fix =
  > [fix]
  > metadatafalse:command=cat $TESTTMP/missing
  > metadatafalse:pattern=metadatafalse
  > metadatafalse:metadata=false
  > missing:command=cat $TESTTMP/missing
  > missing:pattern=missing
  > missing:metadata=true
  > invalid:command=cat $TESTTMP/invalid
  > invalid:pattern=invalid
  > invalid:metadata=true
  > valid:command=cat $TESTTMP/valid
  > valid:pattern=valid
  > valid:metadata=true
  > [hooks]
  > postfixfile = python:$TESTTMP/postfixhook.py:file
  > postfix = python:$TESTTMP/postfixhook.py:summarize
  > [worker]
  > enabled=false
  > EOF

See what happens when we execute each of the fixer tools. Some print warnings,
some write back to the file.

  $ hg init repo
  $ cd repo

  $ printf "old content\n" > metadatafalse
  $ printf "old content\n" > invalid
  $ printf "old content\n" > missing
  $ printf "old content\n" > valid
  $ hg add -q

  $ hg fix -w
  ignored invalid output from fixer tool: invalid
  fixed metadatafalse in revision 2147483647 using metadatafalse
  ignored invalid output from fixer tool: missing
  fixed valid in revision 2147483647 using valid
  saw "key" 1 times
  fixed 1 files with valid
  fixed the working copy

  $ cat metadatafalse
  new content
  $ cat missing
  old content
  $ cat invalid
  old content
  $ cat valid
  new content

  $ cd ..
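
A usage note on the output format exercised above: when a fixer tool is
configured with `<name>:metadata=true`, "hg fix" pipes the file content to the
tool on standard input and expects standard output to contain a JSON object, a
NUL byte, and then the fixed file content, as the `valid` mock output shows;
anything else is dropped with the "ignored invalid output from fixer tool"
warning. The following standalone sketch (hypothetical, not part of this test;
the name upperfix.py and the uppercasing "fix" are made up) emits output in
that shape:

  import json
  import sys

  def main():
      # "hg fix" provides the original file content on standard input.
      original = sys.stdin.buffer.read()
      fixed = original.upper()  # stand-in for a real formatting fix
      # metadata=true output: JSON object, then a NUL byte, then the content.
      metadata = {'key': 'value', 'changed': fixed != original}
      sys.stdout.buffer.write(json.dumps(metadata).encode('ascii'))
      sys.stdout.buffer.write(b'\0')
      sys.stdout.buffer.write(fixed)

  if __name__ == '__main__':
      main()

Such a tool would be wired up like the ones above, for example with
`upper:command=python3 $TESTTMP/upperfix.py`, `upper:pattern=*.txt`, and
`upper:metadata=true`.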