view hgext/fsmonitor/pywatchman/load.py @ 39772:ae531f5e583c

testing: add interface unit tests for file storage

Our strategy for supporting alternate storage backends is to define
interfaces for everything, then "code to the interface." We already have
interfaces for various primitives, including file and manifest storage.
What we don't have is generic unit tests for those interfaces.

Up to this point we've been relying on high-level integration tests (mainly
in the form of existing .t tests) to test alternate storage backends. And my
experience with developing the "simple store" test extension is that such
testing is very tedious: it takes several minutes to run all tests, and when
you find a failure, it is often non-trivial to debug. This commit starts to
change that.

This commit introduces the mercurial.testing.storage module. It contains
testing code for storage. Currently, it defines some unittest.TestCase
classes for testing the file storage interfaces. It also defines some
factory functions that allow a caller to easily spawn a custom TestCase
"bound" to a specific file storage backend implementation.

A new .py test has been added. It simply defines a callable to produce
filelog and transaction instances on demand and then "registers" the various
test classes so the filelog class can be tested with the storage interface
unit tests. (A sketch of that registration pattern follows the changeset
metadata below.)

As part of writing the tests, I identified a couple of apparent bugs in
revlog.py and filelog.py! These are tracked with inline TODO comments.

Writing the tests makes it more obvious where the storage interface is
lacking. For example, we raise either IndexError or error.LookupError for
missing revisions depending on whether we use an integer revision or a node.
Also, we raise error.RevlogError in various places when we should be raising
a storage-agnostic error type.

The storage interfaces are currently far from perfect and there is much work
to be done to improve them. But at least with this commit we finally have
the start of unit tests that can be used to "qualify" the behavior of a
storage backend. And when implementing and debugging new storage backends,
we now have an obvious place to define new tests and obvious places to
insert breakpoints to facilitate debugging. This should be invaluable when
implementing new storage backends.

I added the mercurial.testing package because these interface conformance
tests are generic and need to be usable by all storage backends. Having the
code live in tests/ would make it difficult for storage backends implemented
in extensions to test their interface conformance. First, it would require
obtaining a copy of Mercurial's storage test code in order to test. Second,
it would make testing against multiple Mercurial versions difficult, as you
would need to import N copies of the storage testing code in order to
achieve test coverage. By making the test code part of the Mercurial
distribution itself, extensions can `import mercurial.testing.*` to access
and run the test code. The tests will run against whatever Mercurial version
is active.

FWIW I've always wanted to move parts of run-tests.py into the mercurial.*
package to make the testing story simpler (e.g. imagine an `hg
debugruntests` command that could invoke the test harness). While I have no
plans to do that in the near future, establishing the mercurial.testing
package does provide a natural home for that code should someone do this in
the future.

Differential Revision: https://phab.mercurial-scm.org/D4650
author Gregory Szorc <gregory.szorc@gmail.com>
date Tue, 18 Sep 2018 16:52:11 -0700
parents 16f4b341288d
children 6469c23a40a2
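
As a rough illustration of the registration pattern described above, a backend's conformance test might look like the following minimal sketch. The factory function name (makeifiledatatests) and the exact constructor signatures are assumptions made for illustration, not a confirmed API; consult mercurial.testing.storage and the new .py test added by this changeset for the real entry points.

    # Illustrative sketch only: the storagetesting factory name and the exact
    # Mercurial constructor signatures below are assumptions, not verified API.
    import unittest

    from mercurial import (
        filelog,
        transaction,
        ui as uimod,
        vfs as vfsmod,
    )
    from mercurial.testing import storage as storagetesting

    ui = uimod.ui.load()
    vfs = vfsmod.vfs(b'.', audit=False)

    def makefilefn():
        # Produce a fresh file storage instance (here, a filelog) on demand.
        return filelog.filelog(vfs, b'testfile')

    def maketransactionfn():
        # Produce a transaction the mutation tests can commit or abort.
        return transaction.transaction(ui.warn, vfs, {b'store': vfs},
                                       b'journal', b'undo')

    # Hypothetical factory call: bind the generic TestCase classes to this
    # backend so the standard unittest machinery can discover and run them.
    filedatatests = storagetesting.makeifiledatatests(makefilefn,
                                                      maketransactionfn)

    if __name__ == '__main__':
        unittest.main()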

# Copyright 2016 Facebook, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
#  * Redistributions of source code must retain the above copyright notice,
#    this list of conditions and the following disclaimer.
#
#  * Redistributions in binary form must reproduce the above copyright notice,
#    this list of conditions and the following disclaimer in the documentation
#    and/or other materials provided with the distribution.
#
#  * Neither the name Facebook nor the names of its contributors may be used to
#    endorse or promote products derived from this software without specific
#    prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# no unicode literals

try:
    from . import bser
except ImportError:
    from . import pybser as bser

import ctypes

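# A BSER PDU header describing an empty (zero-length) payload: b"\x00\x01" is
# the protocol magic, b"\x05" marks a 32-bit integer, and the remaining four
# bytes hold that integer (the PDU length, zero here). load() below sniffs
# exactly this many bytes before reading the rest of a PDU.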
EMPTY_HEADER = b"\x00\x01\x05\x00\x00\x00\x00"


def _read_bytes(fp, buf):
    """Read bytes from a file-like object into a pre-allocated buffer

    @param fp: File-like object that implements readinto(buffer)
    @type fp: file

    @param buf: Buffer to read into
    @type buf: bytes

    @return: The number of bytes actually read into buf; this may be less
             than len(buf) if the stream ends early.
    """

    # readinto() may return short reads, so keep reading until the buffer is
    # full or the stream is exhausted.
    offset = 0
    remaining = len(buf)
    while remaining > 0:
        l = fp.readinto((ctypes.c_char * remaining).from_buffer(buf, offset))
        if l is None or l == 0:
            return offset
        offset += l
        remaining -= l
    return offset
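

# A minimal usage sketch (not from upstream): io.BytesIO stands in for the
# socket or pipe normally passed as fp. The stream here holds only two bytes,
# so _read_bytes() stops early and reports how much actually arrived.
#
#   import io
#   scratch = ctypes.create_string_buffer(4)
#   got = _read_bytes(io.BytesIO(b"\x00\x01"), scratch)
#   assert got == 2 and scratch.raw[:got] == b"\x00\x01"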


def load(fp, mutable=True, value_encoding=None, value_errors=None):
    """Deserialize a BSER-encoded blob.

    @param fp: The file-like object to deserialize from.
    @type fp: file

    @param mutable: Whether to return mutable results.
    @type mutable: bool

    @param value_encoding: Optional codec to use to decode values. If
                           unspecified or None, return values as bytestrings.
    @type value_encoding: str

    @param value_errors: Optional error handler for codec. 'strict' by default.
                         The other most common argument is 'surrogateescape' on
                         Python 3. If value_encoding is None, this is ignored.
    @type value_errors: str

    @return: The deserialized value, or None if the stream ended before a
             complete PDU header could be read.
    """
    buf = ctypes.create_string_buffer(8192)
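    # Sniff just the fixed-size header first so we can learn the full PDU
    # length before reading the body.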
    SNIFF_BUFFER_SIZE = len(EMPTY_HEADER)
    header = (ctypes.c_char * SNIFF_BUFFER_SIZE).from_buffer(buf)
    read_len = _read_bytes(fp, header)
    if read_len < len(header):
        return None

    total_len = bser.pdu_len(buf)
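    # Grow the buffer in place if the full PDU is larger than the initial
    # 8192-byte allocation.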
    if total_len > len(buf):
        ctypes.resize(buf, total_len)

    body = (ctypes.c_char * (total_len - len(header))).from_buffer(
        buf, len(header))
    read_len = _read_bytes(fp, body)
    if read_len < len(body):
        raise RuntimeError('bser data ended early')

    return bser.loads(
        (ctypes.c_char * total_len).from_buffer(buf, 0),
        mutable,
        value_encoding,
        value_errors)
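

# A minimal usage sketch (not from upstream): round-trip a value through BSER
# and decode it again with load(). This assumes bser.dumps() emits a complete
# PDU, header included, which holds for both the C extension and the pybser
# fallback imported above.
#
#   import io
#   pdu = bser.dumps({"version": "4.9.0", "files": ["foo.c", "bar.c"]})
#   decoded = load(io.BytesIO(pdu), value_encoding="utf-8")
#   # decoded mirrors the original dict, with string values decoded as UTF-8.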