localrepo: iteratively derive local repository type
This commit implements the dynamic local repository type derivation
that was explained in the recent commit
bfeab472e3c0 "localrepo: create new function for instantiating a local
repo object."
Instead of a static localrepository class/type which must be customized
after construction, we now dynamically construct a type by building up
base classes/types to represent specific repository interfaces.
Conceptually, the end state is similar to what was happening when
various extensions would monkeypatch the __class__ of newly-constructed
repo instances. However, the approach is inverted: instead of making
the instance and then customizing it, we do the customization up front
by composing the type, and then we instantiate that custom type.
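To illustrate the inversion (a sketch only; the class names are
hypothetical):

    # Old approach: instantiate a static type, then monkeypatch the
    # instance's class.
    repo = localrepository(baseui, path)
    repo.__class__ = extensionrepo

    # New approach: compose the custom type up front, then instantiate.
    cls = type(r'derivedrepo', (extensionmixin, localrepository), {})
    repo = cls(baseui, path)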
This approach gives us much more flexibility. For example, we can
use completely separate classes for implementing different aspects
of the repository, such as one class representing revlog-based file
storage and another representing non-revlog-based file storage. We
then choose which implementation to use based on the presence of
repo requirements.
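Sketched out, that requirement-based selection might look like this
(the requirement name and storage classes are hypothetical):

    def makefilestorage(requirements):
        # Choose a file storage base class from the repo's requirements.
        if b'exp-alternate-storage' in requirements:
            return alternatefilestorage
        return revlogfilestorage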
A concern with this approach is that it creates a lot more types, and
that the additional complexity adds overhead. Yes, it is true that
this approach will result in more types being created. Yes, this is
more complicated than the traditional "instantiate a static type"
pattern. However,
I believe the alternatives to supporting alternate storage backends
are just as complicated. (Before I arrived at this solution, I had
patches storing factory functions on local repo instances for e.g.
constructing a file storage instance. We ended up having a handful
of these. And this was logically identical to assigning custom
methods. Since we were logically changing the type of the instance,
I figured it would be better to just use specialized types instead
of introducing levels of abstraction at run-time.)
On the performance front, I don't believe that having N base classes
has any significant performance overhead compared to just a single base
class. Intuition says that Python will need to iterate the base classes
to find an attribute. However, CPython caches method lookups: as long as
the __class__ or MRO isn't changing, method attribute lookup should be
constant time after first access. And non-method attributes are stored
in __dict__, of which there is only one per object, so the number of
base classes is irrelevant for __dict__ lookups.
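This is easy to sanity check with a micro-benchmark (illustrative
only, not part of this commit):

    import timeit

    holder = type('holder', (object,), {'attr': 1})
    filler = tuple(type('base%d' % i, (object,), {}) for i in range(20))

    # 'attr' sits at the end of the MRO in both cases.
    deep = type('deep', filler + (holder,), {})()
    shallow = type('shallow', (holder,), {})()

    # After the attribute cache warms up, these timings should be
    # comparable.
    print(timeit.timeit(lambda: deep.attr))
    print(timeit.timeit(lambda: shallow.attr))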
Anyway, this commit splits up the monolithic completelocalrepository
interface into sub-interfaces: one for file storage and one
representing everything else.
We've taught ``makelocalrepository()`` to call a series of factory
functions which will produce types implementing specific interfaces.
It then calls ``type()`` to create a new type from the built-up list of
base types.
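Schematically, the new factory does something like this (heavily
simplified; the helper names are illustrative):

    def makelocalrepository(baseui, path):
        requirements = readrequirements(path)  # illustrative helper
        bases = []
        for factory in REPO_INTERFACE_FACTORIES:
            bases.append(factory(requirements))
        cls = type(r'derivedrepo', tuple(bases), {})
        return cls(baseui, path)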
This commit should be considered a start and not the end state. I
suspect we'll hit a number of problems as we start to implement
alternate storage backends:
* Passing custom arguments to __init__ and setting custom attributes
on __dict__.
* Customizing the set of interfaces that are needed. e.g. the
"readonly" intent could translate to not requesting an interface
providing methods related to writing.
* More ergonomic way for extensions to insert themselves so their
callbacks aren't unconditionally called.
* Wanting to modify vfs instances and other arguments passed to __init__.
That being said, this code is usable in its current state and I'm
convinced future commits will demonstrate the value in this approach.
Differential Revision: https://phab.mercurial-scm.org/D4642
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
# based on bundleheads extension by Gregory Szorc <gps@mozilla.com>
from __future__ import absolute_import
import abc
import hashlib
import os
import subprocess
import tempfile
NamedTemporaryFile = tempfile.NamedTemporaryFile
class BundleWriteException(Exception):
pass
class BundleReadException(Exception):
pass
class abstractbundlestore(object):
"""Defines the interface for bundle stores.
A bundle store is an entity that stores raw bundle data. It is a simple
key-value store. However, the keys are chosen by the store. The keys can
be any Python object understood by the corresponding bundle index (see
``abstractbundleindex`` below).
"""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def write(self, data):
"""Write bundle data to the store.
This function receives the raw data to be written as a str.
Throws BundleWriteException
The key of the written data MUST be returned.
"""
@abc.abstractmethod
def read(self, key):
"""Obtain bundle data for a key.
Returns None if the bundle isn't known.
Throws BundleReadException
The returned object should be a file object supporting read()
and close().
"""
class filebundlestore(object):
"""bundle store in filesystem
meant for storing bundles somewhere on disk and on network filesystems
"""
def __init__(self, ui, repo):
self.ui = ui
self.repo = repo
self.storepath = ui.configpath('scratchbranch', 'storepath')
if not self.storepath:
self.storepath = self.repo.vfs.join("scratchbranches",
"filebundlestore")
if not os.path.exists(self.storepath):
os.makedirs(self.storepath)
    def _dirpath(self, hashvalue):
        """The first two characters of the hash name the top-level
        directory and the next two characters name the directory on
        the next level down, e.g. the bundle with hash 'abcd1234...'
        is stored under <storepath>/ab/cd/."""
return os.path.join(self.storepath, hashvalue[0:2], hashvalue[2:4])
def _filepath(self, filename):
return os.path.join(self._dirpath(filename), filename)
def write(self, data):
filename = hashlib.sha1(data).hexdigest()
dirpath = self._dirpath(filename)
if not os.path.exists(dirpath):
os.makedirs(dirpath)
with open(self._filepath(filename), 'wb') as f:
f.write(data)
return filename
def read(self, key):
try:
with open(self._filepath(key), 'rb') as f:
return f.read()
except IOError:
return None
class externalbundlestore(abstractbundlestore):
def __init__(self, put_binary, put_args, get_binary, get_args):
"""
`put_binary` - path to binary file which uploads bundle to external
storage and prints key to stdout
`put_args` - format string with additional args to `put_binary`
{filename} replacement field can be used.
`get_binary` - path to binary file which accepts filename and key
(in that order), downloads bundle from store and saves it to file
`get_args` - format string with additional args to `get_binary`.
{filename} and {handle} replacement field can be used.
"""
self.put_args = put_args
self.get_args = get_args
self.put_binary = put_binary
self.get_binary = get_binary
def _call_binary(self, args):
p = subprocess.Popen(
args, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
close_fds=True)
stdout, stderr = p.communicate()
returncode = p.returncode
return returncode, stdout, stderr
def write(self, data):
        # Won't work on Windows because the file can't be opened a second
        # time before it is closed
# TODO: rewrite without str.format() and replace NamedTemporaryFile()
# with pycompat.namedtempfile()
with NamedTemporaryFile() as temp:
temp.write(data)
temp.flush()
temp.seek(0)
formatted_args = [arg.format(filename=temp.name)
for arg in self.put_args]
returncode, stdout, stderr = self._call_binary(
[self.put_binary] + formatted_args)
if returncode != 0:
raise BundleWriteException(
'Failed to upload to external store: %s' % stderr)
stdout_lines = stdout.splitlines()
if len(stdout_lines) == 1:
return stdout_lines[0]
else:
raise BundleWriteException(
'Bad output from %s: %s' % (self.put_binary, stdout))
def read(self, handle):
        # Won't work on Windows because the file can't be opened a second
        # time before it is closed
# TODO: rewrite without str.format() and replace NamedTemporaryFile()
# with pycompat.namedtempfile()
with NamedTemporaryFile() as temp:
formatted_args = [arg.format(filename=temp.name, handle=handle)
for arg in self.get_args]
returncode, stdout, stderr = self._call_binary(
[self.get_binary] + formatted_args)
if returncode != 0:
raise BundleReadException(
'Failed to download from external store: %s' % stderr)
return temp.read()
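# Illustrative usage of externalbundlestore (the binary paths and
# argument templates below are hypothetical examples, not defaults):
#
#   store = externalbundlestore(
#       '/usr/local/bin/bundle-put', ['{filename}'],
#       '/usr/local/bin/bundle-get', ['{filename}', '{handle}'])
#   key = store.write(bundledata)
#   data = store.read(key)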