view mercurial/httpclient/_readers.py @ 26623:5a95fe44121d

clonebundles: support for seeding clones from pre-generated bundles

Cloning can be an expensive operation for servers because the server
generates a bundle from existing repository data at request time. For a
large repository like mozilla-central, this consumes 4+ minutes of CPU
time on the server. It also results in significant network utilization.
Multiplied by hundreds or even thousands of clients, the ensuing load
can make a Mercurial server difficult to scale.

Although bundle generation is deterministic until the next changeset is
added, the bundles generated to service clone requests are not cached.
Each clone thus performs redundant work. This is wasteful.

This patch introduces the "clonebundles" extension and related
client-side functionality to help alleviate this deficiency. The
client-side feature is behind an experimental flag and is not enabled
by default. It works as follows:

1) The server operator generates a bundle and makes it available on a
   server (likely HTTP).
2) The server operator lists the URL of the bundle file in a
   .hg/clonebundles.manifest file (a sketch of such a manifest appears
   after this description).
3) A client running `hg clone` sees that the server is advertising
   bundle URLs.
4) The client fetches and applies the advertised bundle.
5) The client performs the equivalent of `hg pull` to fetch changes
   made since the bundle was created.

Essentially, the server performs the expensive work of generating a
bundle once, and all subsequent clones fetch a static file from
somewhere. Scaling static file serving is a much more manageable
problem than scaling a Python application like Mercurial. Assuming your
repository grows less than 1% per day, a fresh bundle contains 99+% of
the data a clone needs, so 99+% of the CPU and network load from clones
is eliminated, allowing Mercurial servers to scale more easily. Serving
static files also means data can be transferred to clients as fast as
they can consume it, rather than as fast as servers can generate it.
This makes clones faster.

Mozilla has implemented functionality similar to this patch on
hg.mozilla.org using a custom extension. We host bundle files in Amazon
S3 and CloudFront (a CDN) and have successfully offloaded >1 TB/day in
data transfer from hg.mozilla.org, freeing up significant bandwidth and
CPU resources. The positive impact has been stellar, and I believe the
feature has proved its value for inclusion in Mercurial core. I feel it
is important for the client-side support to be enabled in core by
default because it means clients will get faster, more reliable clones,
and server operators will be able to reduce load without requiring any
client-side configuration changes (assuming clients are up to date, of
course).

The scope of this feature is narrowly and specifically tailored to
cloning, even though "serve pulls from pre-generated bundles" is a
valid and useful feature. I would eventually like for Mercurial servers
to support transferring *all* repository data via statically hosted
files. You could imagine a server that siphons all pushed data to
bundle files and instructs clients to apply a stream of bundles to
reconstruct all repository data. That feature, while useful and
powerful, is significantly more work to implement because it requires
the server component to have awareness of discovery and a mapping of
which changesets are in which files. Full clone bundles, by contrast,
are much simpler.
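As referenced in step 2 above, a minimal .hg/clonebundles.manifest
could look like this sketch (the URL is hypothetical; each line
advertises one bundle):

  https://hg.example.com/bundles/mozilla-central.hg

A client that understands the feature fetches this static file, applies
it, and then pulls whatever changesets were added after the bundle was
generated.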
The wire protocol command is named "clonebundles" instead of something
more generic like "staticbundles" to leave the door open for a new,
more powerful, and more generic server-side component with minimal
backwards compatibility implications. The name "bundleclone" is already
used by Mozilla's extension and would cause problems, since there are
subtle differences in Mozilla's extension.

Mozilla's experience with this idea has taught us that some form of
"content negotiation" is required. Not all clients will support all
bundle formats or even all URLs (advanced TLS requirements, etc). To
ensure the highest uptake possible, a server needs to advertise
multiple versions of bundles, and clients need to be able to choose the
most appropriate entry from that list. The "attributes" in each
server-advertised entry facilitate this filtering and sorting. Their
use will become apparent in subsequent patches.

Initial inspiration and credit for the idea of cloning from static
files belongs to Augie Fackler and his "lookaside clone" extension
proof of concept.
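As a sketch of that content negotiation, a manifest could advertise
several bundle flavors via key-value attributes (the attribute names
and values below are illustrative; their exact semantics are defined in
subsequent patches):

  https://cdn.example.com/central.gzip.hg BUNDLESPEC=gzip-v1
  https://cdn.example.com/central.stream.hg BUNDLESPEC=none-v1 REQUIRESNI=true

A client filters out entries it cannot use (an unsupported bundle
format, a TLS feature it lacks) and sorts the remainder by preference
before downloading.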
author Gregory Szorc <gregory.szorc@gmail.com>
date Fri, 09 Oct 2015 11:22:01 -0700
parents fae47ecaa952
children 1ad9da968a2e

# Copyright 2011, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
#     * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#     * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
#     * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.

# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Reader objects to abstract out different body response types.

This module is package-private. It is not expected that these will
have any clients outside of httpplus.
"""

import httplib
import logging

logger = logging.getLogger(__name__)


class ReadNotReady(Exception):
    """Raised when read() is attempted but not enough data is loaded."""


class HTTPRemoteClosedError(httplib.HTTPException):
    """The server closed the remote socket in the middle of a response."""


class AbstractReader(object):
    """Abstract base class for response readers.

    Subclasses must implement _load, and should implement _close if
    it's not an error for the server to close their socket without
    some termination condition being detected during _load.
    """
    def __init__(self):
        self._finished = False
        self._done_chunks = []
        self.available_data = 0

    def _addchunk(self, data):
        self._done_chunks.append(data)
        self.available_data += len(data)

    def _pushchunk(self, data):
        self._done_chunks.insert(0, data)
        self.available_data += len(data)

    def _popchunk(self):
        b = self._done_chunks.pop(0)
        self.available_data -= len(b)

        return b

    def done(self):
        """Returns true if the response body is entirely read."""
        return self._finished

    def read(self, amt):
        """Read amt bytes from the response body."""
        if self.available_data < amt and not self._finished:
            raise ReadNotReady()
        blocks = []
        need = amt
        while self._done_chunks:
            b = self._popchunk()
            if len(b) > need:
                nb = b[:need]
                self._pushchunk(b[need:])
                b = nb
            blocks.append(b)
            need -= len(b)
            if need == 0:
                break
        result = ''.join(blocks)
        assert len(result) == amt or (self._finished and len(result) < amt)

        return result

    def readto(self, delimstr, blocks=None):
        """Return available data chunks up to the first one in which
        delimstr occurs. No data will be returned after delimstr --
        the chunk in which it occurs will be split and the remainder
        pushed back onto the available data queue. If blocks is
        supplied, chunks will be added to blocks; otherwise a new
        list will be allocated.
        """
        if blocks is None:
            blocks = []

        while self._done_chunks:
            b = self._popchunk()
            i = b.find(delimstr)
            if i != -1:
                # Split just past the delimiter and push the remainder
                # back onto the queue of available data. (Testing the
                # raw find() result avoids misreading a not-found -1
                # as a hit when delimstr is longer than one character.)
                i += len(delimstr)
                if i < len(b):
                    self._pushchunk(b[i:])
                blocks.append(b[:i])
                break
            else:
                blocks.append(b)

        return blocks

    def _load(self, data): # pragma: no cover
        """Subclasses must implement this.

        As data is available to be read out of this object, it should
        be placed into the _done_chunks list. Subclasses should not
        rely on data remaining in _done_chunks forever, as it may be
        reaped if the client is parsing data as it comes in.
        """
        raise NotImplementedError

    def _close(self):
        """Default implementation of close.

        The default implementation assumes that the reader will mark
        the response as finished on the _finished attribute once the
        entire response body has been read. In the event that this is
        not true, the subclass should override the implementation of
        close (for example, close-is-end responses have to set
        self._finished in the close handler.)
        """
        if not self._finished:
            raise HTTPRemoteClosedError(
                'server appears to have closed the socket mid-response')


class AbstractSimpleReader(AbstractReader):
    """Abstract base class for simple readers that require no response decoding.

    Examples of such responses are Connection: Close (close-is-end)
    and responses that specify a content length.
    """
    def _load(self, data):
        if data:
            assert not self._finished, (
                'tried to add data (%r) to a closed reader!' % data)
        logger.debug('%s read an additional %d bytes of data',
                     self.name, len(data)) # pylint: disable=E1101
        self._addchunk(data)


class CloseIsEndReader(AbstractSimpleReader):
    """Reader for responses that specify Connection: Close for length."""
    name = 'close-is-end'

    def _close(self):
        logger.info('Marking close-is-end reader as closed.')
        self._finished = True
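
# Illustrative usage sketch (not part of the original module): a
# close-is-end response carries no framing information, so only the
# server closing the socket (surfaced here via _close()) marks the
# body as complete.
#
#   >>> r = CloseIsEndReader()
#   >>> r._load('all the data')
#   >>> r.done()
#   False
#   >>> r._close()
#   >>> r.done()
#   True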


class ContentLengthReader(AbstractSimpleReader):
    """Reader for responses that specify an exact content length."""
    name = 'content-length'

    def __init__(self, amount):
        AbstractSimpleReader.__init__(self)
        self._amount = amount
        if amount == 0:
            self._finished = True
        self._amount_seen = 0

    def _load(self, data):
        AbstractSimpleReader._load(self, data)
        self._amount_seen += len(data)
        if self._amount_seen >= self._amount:
            self._finished = True
            logger.debug('content-length read complete')
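
# Illustrative usage sketch (not part of the original module): the
# reader marks itself finished once Content-Length bytes have been
# loaded, and read() raises rather than returning short before then.
#
#   >>> r = ContentLengthReader(10)
#   >>> r._load('hello')
#   >>> r.read(10)
#   Traceback (most recent call last):
#       ...
#   ReadNotReady
#   >>> r._load('world')
#   >>> r.done()
#   True
#   >>> r.read(10)
#   'helloworld'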


class ChunkedReader(AbstractReader):
    """Reader for chunked transfer encoding responses."""
    def __init__(self, eol):
        AbstractReader.__init__(self)
        self._eol = eol
        self._leftover_skip_amt = 0
        self._leftover_data = ''

    def _load(self, data):
        assert not self._finished, 'tried to add data to a closed reader!'
        logger.debug('chunked reader read an additional %d bytes of data',
                     len(data))
        position = 0
        if self._leftover_data:
            logger.debug(
                'chunked reader trying to finish block from leftover data')
            # TODO: avoid this string concatenation if possible
            data = self._leftover_data + data
            position = self._leftover_skip_amt
            self._leftover_data = ''
            self._leftover_skip_amt = 0
        datalen = len(data)
        while position < datalen:
            split = data.find(self._eol, position)
            if split == -1:
                self._leftover_data = data
                self._leftover_skip_amt = position
                return
            amt = int(data[position:split], base=16)
            block_start = split + len(self._eol)
            # If the whole data chunk plus the eol trailer hasn't
            # loaded, we'll wait for the next load.
            if block_start + amt + len(self._eol) > len(data):
                self._leftover_data = data
                self._leftover_skip_amt = position
                return
            if amt == 0:
                self._finished = True
                logger.debug('closing chunked reader due to chunk of length 0')
                return
            self._addchunk(data[block_start:block_start + amt])
            position = block_start + amt + len(self._eol)
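
# Illustrative usage sketch (not part of the original module): the
# decoder consumes "<hex length>\r\n<data>\r\n" frames as they arrive,
# buffers incomplete frames as leftover data, and finishes on the
# zero-length chunk.
#
#   >>> r = ChunkedReader('\r\n')
#   >>> r._load('5\r\nhello\r\n')
#   >>> r._load('5\r\nwor')        # incomplete frame: held as leftover
#   >>> r.available_data
#   5
#   >>> r._load('ld\r\n0\r\n\r\n') # frame completes, then terminator
#   >>> r.done()
#   True
#   >>> r.read(10)
#   'helloworld'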
# no-check-code