# tests/basic_test_result.py

from __future__ import absolute_import, print_function

import unittest


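# A bare-bones result class: every callback just prints a marker line.  The
# extra hooks (addIgnore, addOutputMismatch, onStart, onEnd) are not part of
# stdlib unittest; they mirror the interface Mercurial's run-tests.py expects
# from a result object, which presumably is what drives this class.  Note that
# unittest._TextTestResult is a deprecated alias of unittest.TextTestResult
# and has been removed from newer Python 3 releases; it is kept here for
# compatibility with the older interpreters this file was written for.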
class TestResult(unittest._TextTestResult):
    def __init__(self, options, *args, **kwargs):
        super(TestResult, self).__init__(*args, **kwargs)
        self._options = options

        # unittest.TestResult didn't have skipped until 2.7. We need to
        # polyfill it.
        self.skipped = []

        # We have a custom "ignored" result that isn't present in any Python
        # unittest implementation. It is very similar to skipped. It may make
        # sense to map it into skip some day.
        self.ignored = []

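        # Timing bookkeeping, kept for interface compatibility; nothing in
        # this basic implementation populates it.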
        self.times = []
        self._firststarttime = None
        # Data stored for the benefit of generating xunit reports.
        self.successes = []
        self.faildata = {}

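    # Standard unittest result callbacks, reduced to printing a marker line
    # (presumably so a driving test can match on the output).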
    def addFailure(self, test, reason):
        print("FAILURE!", test, reason)

    def addSuccess(self, test):
        print("SUCCESS!", test)

    def addError(self, test, err):
        print("ERR!", test, err)

    # Polyfill for addSkip, which unittest.TestResult gained only in 2.7.
    def addSkip(self, test, reason):
        print("SKIP!", test, reason)

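    # Hook for the custom "ignored" result described above; plain unittest
    # has no equivalent.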
    def addIgnore(self, test, reason):
        print("IGNORE!", test, reason)

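    # Suite-level hooks that are not part of stdlib unittest; the runner is
    # expected to call onStart before the first test and onEnd after the last.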
    def onStart(self, test):
        print("ON_START!", test)

    def onEnd(self):
        print("ON_END!")

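    # Called when a test's actual output differs from the expected output.
    # In Mercurial's run-tests.py the return value appears to indicate whether
    # the new output was accepted (e.g. interactively); returning False never
    # accepts it.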
    def addOutputMismatch(self, test, ret, got, expected):
        return False

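    # The runner may pass an extra `interrupted` flag; it is accepted but
    # ignored here, and only the standard argument is forwarded to the base
    # class.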
    def stopTest(self, test, interrupted=False):
        super(TestResult, self).stopTest(test)
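

# --- Usage sketch (not part of the original module) -------------------------
# A minimal, hypothetical driver showing how the standard unittest protocol
# exercises this result class.  The `options` argument is whatever object the
# real runner would pass; None is enough for this demonstration.
if __name__ == '__main__':
    import sys

    class _DemoTest(unittest.TestCase):
        def test_pass(self):
            self.assertTrue(True)

    result = TestResult(None, sys.stdout, descriptions=True, verbosity=0)
    suite = unittest.TestLoader().loadTestsFromTestCase(_DemoTest)
    result.onStart(suite)  # custom hook, normally invoked by the runner
    suite.run(result)      # drives startTest/addSuccess/stopTest
    result.onEnd()         # custom hook, normally invoked by the runner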