lfs: ensure the blob is linked to the remote store on skipped uploads
I noticed a "missing" blob when pushing two repositories with common blobs to a
fresh server, and then running `hg verify` as a user different from the one
running the web server. When pushing the second repo, several of the blobs
already existed in the user cache, so the server indicated to the client that it
didn't need to upload them. That's good enough for the web server process, which
can serve the blobs out of its own user cache in the future. But a different
user has a different cache by default, so verify complains that `lfs.url` needs
to be set, because it wants to fetch the missing blobs.
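The fix is to link the blob from the user cache into the repo's local store at
the moment the server decides the upload can be skipped. Here is a minimal
sketch of the idea in Python; the `LocalBlobStore` class and the `batchaction`
helper are illustrative stand-ins modeled on the extension's conventions, not
its actual API:

import os

class LocalBlobStore(object):
    """Toy stand-in for the lfs blob store: a repo-local directory plus
    a shared per-user cache directory.  Illustrative only."""

    def __init__(self, storedir, cachedir):
        self.storedir = storedir
        self.cachedir = cachedir

    def has(self, oid):
        # The server consults the user cache too when answering the
        # Batch API, which is what made the skipped upload possible.
        return (os.path.exists(os.path.join(self.storedir, oid))
                or os.path.exists(os.path.join(self.cachedir, oid)))

    def linkfromusercache(self, oid):
        # Hardlink a cache-only blob into the repo-local store so it
        # survives cache deletion and is visible to every user.
        storepath = os.path.join(self.storedir, oid)
        cachepath = os.path.join(self.cachedir, oid)
        if not os.path.exists(storepath) and os.path.exists(cachepath):
            os.link(cachepath, storepath)

def batchaction(store, oid, action):
    """Return the action the client must perform, or None to skip it."""
    if action == 'upload' and store.has(oid):
        # The server already has the blob, so the upload is skipped;
        # link it into the repo store before answering, so the blob
        # isn't left cache-only.
        store.linkfromusercache(oid)
        return None
    return action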
Aside from that corner case, it's better to keep all of the blobs in the repo
whenever possible, especially since the largefiles wiki says the user cache can
be deleted at any time to reclaim disk space; users switching over may have the
same expectations.
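For reference, the web server in this scenario was a stock hgweb CGI script
along these lines: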
#!/usr/bin/env python
#
# An example hgweb CGI script, edit as necessary
# See also https://mercurial-scm.org/wiki/PublishingRepositories
# Path to repo or hgweb config to serve (see 'hg help hgweb')
config = "/path/to/repo/or/config"
# Uncomment and adjust if Mercurial is not installed system-wide
# (consult "installed modules" path from 'hg debuginstall'):
#import sys; sys.path.insert(0, "/path/to/python/lib")
# Uncomment to send python tracebacks to the browser if an error occurs:
#import cgitb; cgitb.enable()
from mercurial import demandimport; demandimport.enable()
from mercurial.hgweb import hgweb, wsgicgi
application = hgweb(config)
wsgicgi.launch(application)