view tests/test-merge4.t @ 39474:a913d2892e17
lfs: ensure the blob is linked to the remote store on skipped uploads
I noticed a "missing" blob when pushing two repositories with common blobs to a
fresh server, and then running `hg verify` as a user different from the one
running the web server. When pushing the second repo, several of the blobs
already existed in the user cache, so the server indicated to the client that it
doesn't need to upload the blobs. That's good enough for the web server process
to serve up in the future. But a different user has a different cache by
default, so verify complains that `lfs.url` needs to be set, because it wants to
fetch the missing blobs.
Aside from that corner case, it's better to keep all of the blobs in the repo
whenever possible. Especially since the largefiles wiki says the user cache can
be deleted at any time to reclaim disk space- users switching over may have the
same expectations.
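
For illustration only, here is a minimal sketch of the idea behind the fix: when
an upload is skipped because the blob already exists in the shared user cache,
the blob still gets linked into the repository-local store so the repo remains
self-contained. The `link_blob_to_store()` helper, the flat `oid` path layout,
and both path parameters are hypothetical and not Mercurial's actual lfs API.

    import os
    import shutil

    def link_blob_to_store(usercache_path, store_path, oid):
        """Hypothetical helper: make sure a blob that is already in the
        shared user cache is also present in the repository-local store,
        even though the client was told it could skip the upload."""
        src = os.path.join(usercache_path, oid)
        dst = os.path.join(store_path, oid)
        if os.path.exists(dst):
            # Already in the local store; nothing to do.
            return
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        try:
            # Prefer a hard link so the blob isn't duplicated on disk.
            os.link(src, dst)
        except OSError:
            # Fall back to a copy, e.g. when the cache and the store
            # live on different filesystems.
            shutil.copyfile(src, dst)
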
author    Matt Harbison <matt_harbison@yahoo.com>
date      Thu, 06 Sep 2018 00:51:21 -0400
parents   63c817ea4a70
children  8561ad49915d
line source
  $ hg init
  $ echo This is file a1 > a
  $ hg add a
  $ hg commit -m "commit #0"
  $ echo This is file b1 > b
  $ hg add b
  $ hg commit -m "commit #1"
  $ hg update 0
  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ echo This is file c1 > c
  $ hg add c
  $ hg commit -m "commit #2"
  created new head
  $ hg merge 1
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  (branch merge, don't forget to commit)
  $ rm b
  $ echo This is file c22 > c

Test hg behaves when committing with a missing file added by a merge

  $ hg commit -m "commit #3"
  abort: cannot commit merge with missing files
  [255]