view tests/test-bundle-type.t @ 37518:092eff6833a7
lfs: infer the blob store URL from paths.default
If `lfs.url` is specified, it takes precedence. However, now that we support
serving blobs via hgweb, we shouldn't *require* this setting. Less
configuration is better (things will work out of the box once this is sorted
out), and git has similar functionality.
This is not a complete solution: it isn't able to infer the blob store from an
explicitly supplied path, and it should consider `paths.default-push` for push.
The pull solution for that is a bit hacky, and this alone is an improvement for
the vast majority of cases.
Even though there are only a handful of references to the saved remote store,
their locations make things complicated:
1) downloading files on demand in the revlog flag processor
2) copying to readonlyvfs with bundlerepo
3) downloading in the file prefetch hook
4) the canupload()/skipdownload() checks
5) uploading blobs
Since revlog doesn't have a repo or ui reference, we can't avoid creating a
remote store when the extension is loaded. While the long term goal is to make
sure the prefetch hook is invoked early for every command for efficiency, this
handling in the flag processor is needed as a last-ditch fetch.
In order to support the clone command, the remote store needs to be created
later than when the extension loads, since `paths.default` isn't set until just
before the files are checked out. Therefore, this patch changes the prefetch
hook to ignore the saved reference, and build a new one.
The canupload()/skipdownload() checks simply check if the stored instance is a
`_nullremote`. Since this can only be set via `lfs.url` (which is reflected in
the saved reference), checking only the instance created when the extension
loaded is fine.
The blob uploading function is called from several places:
1) a prepush hook
2) when writing a new bundle
3) from infinitepush
The prepush hook gets an exchange.pushop, so it has a path to where the push is
going. The bundle writer and infinitepush don't. Further, bundle creation for
things like strip and amend is causing blobs to be uploaded. This seems wrong,
but I don't want to sidetrack this patch by sorting that out, so punt on trying
to handle explicit push paths or `paths.default-push`.
I also think that sending blobs to a remote store when pushing to a local repo
is wrong. This functionality predates the usercache, so perhaps that's the
reason for it. I've got some patches floating around to stop sending blobs
remotely in this case, and instead write directly to the other repo's blob
store. But the tests for corruption handling weren't happy with this change,
and I don't have time to rewrite them. So exclude filesystem based paths from
this for now.
I don't think there's much of a chance to implement `paths.remote:lfsurl` style
configs, given how early these are resolved vs how late the remote store is
created. But git has it, so I threw a TODO in there, in case anyone has ideas.
I have no idea why this is now doing http auth twice when it wasn't before. I
don't think the original blobstore's url is ever being used in these cases.
author: Matt Harbison <matt_harbison@yahoo.com>
date: Sat, 07 Apr 2018 22:22:20 -0400
parents: 5d10f41ddcc4
children: 6a7ff5816c5f
bundle w/o type option

  $ hg init t1
  $ hg init t2
  $ cd t1
  $ echo blablablablabla > file.txt
  $ hg ci -Ama
  adding file.txt
  $ hg log | grep summary
  summary:     a
  $ hg bundle ../b1 ../t2
  searching for changes
  1 changesets found

  $ cd ../t2
  $ hg unbundle ../b1
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets c35a0f9217e6
  (run 'hg update' to get a working copy)
  $ hg up
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg log | grep summary
  summary:     a
  $ cd ..

Unknown compression type is rejected

  $ hg init t3
  $ cd t3
  $ hg -q unbundle ../b1
  $ hg bundle -a -t unknown out.hg
  abort: unknown is not a recognized bundle specification
  (see 'hg help bundlespec' for supported values for --type)
  [255]

  $ hg bundle -a -t unknown-v2 out.hg
  abort: unknown compression is not supported
  (see 'hg help bundlespec' for supported values for --type)
  [255]

  $ cd ..

test bundle types

  $ testbundle() {
  >   echo % test bundle type $1
  >   hg init t$1
  >   cd t1
  >   hg bundle -t $1 ../b$1 ../t$1
  >   f -q -B6 -D ../b$1; echo
  >   cd ../t$1
  >   hg debugbundle ../b$1
  >   hg debugbundle --spec ../b$1
  >   echo
  >   cd ..
  > }

  $ for t in "None" "bzip2" "gzip" "none-v2" "v2" "v1" "gzip-v1"; do
  >   testbundle $t
  > done
  % test bundle type None
  searching for changes
  1 changesets found
  HG20\x00\x00 (esc)
  Stream params: {}
  changegroup -- {nbchanges: 1, version: 02}
      c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf
  cache:rev-branch-cache -- {}
  none-v2

  % test bundle type bzip2
  searching for changes
  1 changesets found
  HG20\x00\x00 (esc)
  Stream params: {Compression: BZ}
  changegroup -- {nbchanges: 1, version: 02}
      c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf
  cache:rev-branch-cache -- {}
  bzip2-v2

  % test bundle type gzip
  searching for changes
  1 changesets found
  HG20\x00\x00 (esc)
  Stream params: {Compression: GZ}
  changegroup -- {nbchanges: 1, version: 02}
      c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf
  cache:rev-branch-cache -- {}
  gzip-v2

  % test bundle type none-v2
  searching for changes
  1 changesets found
  HG20\x00\x00 (esc)
  Stream params: {}
  changegroup -- {nbchanges: 1, version: 02}
      c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf
  cache:rev-branch-cache -- {}
  none-v2

  % test bundle type v2
  searching for changes
  1 changesets found
  HG20\x00\x00 (esc)
  Stream params: {Compression: BZ}
  changegroup -- {nbchanges: 1, version: 02}
      c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf
  cache:rev-branch-cache -- {}
  bzip2-v2

  % test bundle type v1
  searching for changes
  1 changesets found
  HG10BZ
  c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf
  bzip2-v1

  % test bundle type gzip-v1
  searching for changes
  1 changesets found
  HG10GZ
  c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf
  gzip-v1

Compression level can be adjusted for bundle2 bundles

  $ hg init test-complevel
  $ cd test-complevel

  $ cat > file0 << EOF
  > this is a file
  > with some text
  > and some more text
  > and other content
  > EOF
  $ cat > file1 << EOF
  > this is another file
  > with some other content
  > and repeated, repeated, repeated, repeated content
  > EOF
  $ hg -q commit -A -m initial

  $ hg bundle -a -t gzip-v2 gzip-v2.hg
  1 changesets found
  $ f --size gzip-v2.hg
  gzip-v2.hg: size=468

  $ hg --config experimental.bundlecomplevel=1 bundle -a -t gzip-v2 gzip-v2-level1.hg
  1 changesets found
  $ f --size gzip-v2-level1.hg
  gzip-v2-level1.hg: size=475

  $ cd ..

#if zstd

  $ for t in "zstd" "zstd-v2"; do
  >   testbundle $t
  > done
  % test bundle type zstd
  searching for changes
  1 changesets found
  HG20\x00\x00 (esc)
  Stream params: {Compression: ZS}
  changegroup -- {nbchanges: 1, version: 02}
      c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf
  cache:rev-branch-cache -- {}
  zstd-v2

  % test bundle type zstd-v2
  searching for changes
  1 changesets found
  HG20\x00\x00 (esc)
  Stream params: {Compression: ZS}
  changegroup -- {nbchanges: 1, version: 02}
      c35a0f9217e65d1fdb90c936ffa7dbe679f83ddf
  cache:rev-branch-cache -- {}
  zstd-v2

Explicit request for zstd on non-generaldelta repos

  $ hg --config format.usegeneraldelta=false init nogd
  $ hg -q -R nogd pull t1
  $ hg -R nogd bundle -a -t zstd nogd-zstd
  1 changesets found

zstd-v1 always fails

  $ hg -R tzstd bundle -a -t zstd-v1 zstd-v1
  abort: compression engine zstd is not supported on v1 bundles
  (see 'hg help bundlespec' for supported values for --type)
  [255]

#else

zstd is a valid engine but isn't available

  $ hg -R t1 bundle -a -t zstd irrelevant.hg
  abort: compression engine zstd could not be loaded
  [255]

#endif

test garbage file

  $ echo garbage > bgarbage
  $ hg init tgarbage
  $ cd tgarbage
  $ hg pull ../bgarbage
  pulling from ../bgarbage
  abort: ../bgarbage: not a Mercurial bundle
  [255]
  $ cd ..

test invalid bundle type

  $ cd t1
  $ hg bundle -a -t garbage ../bgarbage
  abort: garbage is not a recognized bundle specification
  (see 'hg help bundlespec' for supported values for --type)
  [255]
  $ cd ..
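The aliases exercised in this test (`v1`, `v2`, `gzip`, `None`, ...) each resolve
to a `compression-version` pair, and the error strings above distinguish an
unrecognized spec from an unsupported compression. A simplified, illustrative
parser reproducing the mappings seen here (not Mercurial's actual
`parsebundlespec`) might look like:

```python
KNOWN_COMPRESSIONS = {"none", "bzip2", "gzip", "zstd"}

# Shorthand aliases observed in the test transcript above.
ALIASES = {
    "v1": "bzip2-v1",
    "v2": "bzip2-v2",
    "none": "none-v2",
    "bzip2": "bzip2-v2",
    "gzip": "gzip-v2",
    "zstd": "zstd-v2",
}


def parse_bundlespec(spec):
    """Resolve a bundlespec like 'gzip-v1' or 'v2' to (compression, version)."""
    spec = ALIASES.get(spec.lower(), spec.lower())
    if "-" not in spec:
        raise ValueError("%s is not a recognized bundle specification" % spec)
    compression, version = spec.split("-", 1)
    if compression not in KNOWN_COMPRESSIONS:
        raise ValueError("%s compression is not supported" % compression)
    if compression == "zstd" and version == "v1":
        raise ValueError("compression engine zstd is not supported on v1 bundles")
    return compression, version
```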