largefiles: always create the cache and standin directories when cloning

The standin matcher only works if the .hglf directory exists (and it won't
exist after 'clone -U' unless --all-largefiles is also specified). Since not
even 'update -r null' gets rid of the standin directory, creating it at clone
time ensures that it always exists whenever the repo has the 'largefiles'
requirement. That requirement is only set after a largefile is committed, so
these directories are not created for repos that have the extension enabled
but have never committed a largefile.
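
As a rough sketch of the idea (not the extension's actual code; the helper
name, the usercache handling, and the directory creation below are
assumptions for illustration), the post-clone step amounts to:

    import os

    STANDIN_DIR = '.hglf'  # directory holding the largefile standins

    def ensurelargefilesdirs(repo, usercachedir):
        # Only repos that actually carry the 'largefiles' requirement get the
        # directories; the requirement appears once a largefile is committed.
        if 'largefiles' not in repo.requirements:
            return
        # 'update -r null' never removes the standin directory, so creating
        # it here keeps the standin matcher working even after a 'clone -U'
        # that checks nothing out.
        for path in (os.path.join(repo.root, STANDIN_DIR), usercachedir):
            if not os.path.isdir(path):
                os.makedirs(path)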

With the standin directory in place, 'lfconvert --to-normal' will now be able
to download the required largefiles when converting a repo that was created
with 'clone -U' and whose largefiles are not in the usercache.

The downloadlfiles command could probably be put inside the 'largefiles'
requirement conditional too, but since the user explicitly passed
--all-largefiles, they likely expect the number of downloaded largefiles to
be printed, even if it is 0.
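
A sketch of that ordering under the same caveat (function and option names
here are illustrative stand-ins, not the extension's real API):

    def postclone(ui, repo, opts, usercachedir):
        # Directory creation is guarded by the 'largefiles' requirement...
        if 'largefiles' in repo.requirements:
            ensurelargefilesdirs(repo, usercachedir)
        # ...but the --all-largefiles download is not, so a user who asked
        # for it still sees a count, even when that count is zero.
        if opts.get('all_largefiles'):
            downloaded, missing = downloadlfiles(ui, repo)
            ui.status('%d largefiles downloaded\n' % downloaded)
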
$ "$TESTDIR/hghave" serve || exit 80
#if windows
$ hg clone http://localhost:$HGPORT/ copy
abort: * (glob)
[255]
#else
$ hg clone http://localhost:$HGPORT/ copy
abort: error: Connection refused
[255]
#endif
$ test -d copy
[1]
$ cat > dumb.py <<EOF
> import BaseHTTPServer, SimpleHTTPServer, os, signal
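> # One-shot HTTP server: run() writes a 'listening' marker file so the test
> # can wait for startup, serves a single request, then exits.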
> def run(server_class=BaseHTTPServer.HTTPServer,
>         handler_class=SimpleHTTPServer.SimpleHTTPRequestHandler):
>     server_address = ('localhost', int(os.environ['HGPORT']))
>     httpd = server_class(server_address, handler_class)
>     open("listening", "w")
>     httpd.handle_request()
> run()
> EOF
$ python dumb.py 2> log &
$ P=$!
$ while [ ! -f listening ]; do sleep 0; done
$ hg clone http://localhost:$HGPORT/foo copy2
abort: HTTP Error 404: * (glob)
[255]
$ wait $P