lfs: infer the blob store URL from paths.default

If `lfs.url` is specified, it takes precedence. However, now that we support
serving blobs via hgweb, we shouldn't *require* this setting. Less
configuration is better (things will work out of the box once this is sorted
out), and git has similar functionality.
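
In sketch form, the inference boils down to something like the following (a
minimal illustration, not the actual blobstore code; the helper name and the
endpoint handling are assumptions):

    from mercurial import util

    def _inferblobstoreurl(repo):
        # Explicit configuration always wins.
        lfsurl = repo.ui.config('lfs', 'url')
        if lfsurl:
            return util.url(lfsurl)

        # Otherwise fall back to the default pull path.  Only http(s) paths
        # are considered; filesystem paths are excluded for now.
        default = repo.ui.config('paths', 'default') or ''
        u = util.url(default)
        if u.scheme in ('http', 'https'):
            # The real code additionally points the path at the blob serving
            # endpoint exposed by hgweb.
            return u
        return None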

This is not a complete solution: it isn't able to infer the blob store from an
explicitly supplied path, and it should consider `paths.default-push` for push.
The pull solution for that is a bit hacky, and this alone is an improvement for
the vast majority of cases.

Even though there are only a handful of references to the saved remote store,
their locations make things complicated:

1) downloading files on demand in the revlog flag processor
2) copying to readonlyvfs with bundlerepo
3) downloading in the file prefetch hook
4) the canupload()/skipdownload() checks
5) uploading blobs

Since the revlog doesn't have a repo or ui reference, we can't avoid creating a
remote store when the extension is loaded. While the long-term goal is to make
sure the prefetch hook is invoked early for every command for efficiency, this
handling in the flag processor is needed as a last-ditch fetch.

In order to support the clone command, the remote store needs to be created
later than when the extension loads, since `paths.default` isn't set until just
before the files are checked out. Therefore, this patch changes the prefetch
hook to ignore the saved reference and build a new one.
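
Roughly, the hook now does something like this (helper and attribute names
below approximate the extension's internals; `_collectpointers` in particular
is a hypothetical stand-in):

    from hgext.lfs import blobstore

    def _prefetchfiles(repo, revs, match):
        # Hypothetical helper that walks the revisions and gathers the lfs
        # pointers for the matching files.
        pointers = _collectpointers(repo, revs, match)
        if pointers:
            # Build a fresh remote store instead of trusting the reference
            # saved when the extension loaded; during a clone, paths.default
            # isn't known until just before checkout.
            remote = blobstore.remote(repo)
            remote.readbatch(pointers, repo.svfs.lfslocalblobstore)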

The canupload()/skipdownload() checks simply check if the stored instance is a
`_nullremote`. Since this can only be set via `lfs.url` (which is reflected in
the saved reference), checking only the instance created when the extension
loaded is fine.
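
In other words, the check amounts to an isinstance() test against the store
built at load time, along these lines (attribute names as I understand the
extension's internals):

    def _canskipupload(repo):
        # lfs.url is the only way to end up with a _nullremote, and it is read
        # once when the extension loads, so the saved instance is an accurate
        # answer here.
        return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)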

The blob uploading function is called from several places:

1) a prepush hook
2) when writing a new bundle
3) from infinitepush

The prepush hook gets an exchange.pushop, so it has a path to where the push is
going. The bundle writer and infinitepush don't. Further, bundle creation for
things like strip and amend is causing blobs to be uploaded. This seems wrong,
but I don't want to sidetrack this series by sorting that out, so punt on
trying to handle explicit push paths or `paths.default-push`.
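
Only the prepush case has enough context to pick the right destination, e.g.
(a sketch; the function and attribute names are approximate):

    def prepush(pushop):
        # pushop.remote.url() identifies where the push is going, so a store
        # for that destination could be built here.  The bundle writer and
        # infinitepush callers have no equivalent context.
        return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)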

I also think that sending blobs to a remote store when pushing to a local repo
is wrong. This functionality predates the usercache, so perhaps that's the
reason for it. I've got some patches floating around to stop sending blobs
remotely in this case, and instead write directly to the other repo's blob
store. But the tests for corruption handling weren't happy with this change,
and I don't have time to rewrite them. So exclude filesystem-based paths from
this inference for now.

I don't think there's much of a chance to implement `paths.remote:lfsurl` style
configs, given how early these are resolved vs. how late the remote store is
created. But git has it, so I threw a TODO in there, in case anyone has ideas.

I have no idea why this is now doing http auth twice when it wasn't before. I
don't think the original blobstore's url is ever being used in these cases.

# commitextras.py
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''adds a new flag extras to commit (ADVANCED)'''

from __future__ import absolute_import

import re

from mercurial.i18n import _
from mercurial import (
    commands,
    error,
    extensions,
    registrar,
)

cmdtable = {}
command = registrar.command(cmdtable)
testedwith = 'ships-with-hg-core'

usedinternally = {
    'amend_source',
    'branch',
    'close',
    'histedit_source',
    'topic',
    'rebase_source',
    'intermediate-source',
    '__touch-noise__',
    'source',
    'transplant_source',
}

def extsetup(ui):
    entry = extensions.wrapcommand(commands.table, 'commit', _commit)
    options = entry[1]
    options.append(('', 'extra', [],
                    _('set a changeset\'s extra values'), _("KEY=VALUE")))

def _commit(orig, ui, repo, *pats, **opts):
    origcommit = repo.commit
    try:
        def _wrappedcommit(*innerpats, **inneropts):
            extras = opts.get(r'extra')
            if extras:
                for raw in extras:
                    if '=' not in raw:
                        msg = _("unable to parse '%s', should follow "
                                "KEY=VALUE format")
                        raise error.Abort(msg % raw)
                    k, v = raw.split('=', 1)
                    if not k:
                        msg = _("unable to parse '%s', keys can't be empty")
                        raise error.Abort(msg % raw)
                    if re.search(r'[^\w-]', k):
                        msg = _("keys can only contain ascii letters, digits,"
                                " '_' and '-'")
                        raise error.Abort(msg)
                    if k in usedinternally:
                        msg = _("key '%s' is used internally, can't be set "
                                "manually")
                        raise error.Abort(msg % k)
                    inneropts[r'extra'][k] = v
            return origcommit(*innerpats, **inneropts)

        # This __dict__ logic is needed because the normal
        # extension.wrapfunction doesn't seem to work.
        repo.__dict__[r'commit'] = _wrappedcommit
        return orig(ui, repo, *pats, **opts)
    finally:
        del repo.__dict__[r'commit']