from __future__ import absolute_import, print_function

import unittest

class TestResult(unittest._TextTestResult):
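    """Bare-bones test result that reports each event by printing to stdout.

    Beyond the standard unittest.TestResult callbacks, it carries the extra
    hooks that Mercurial's run-tests.py invokes on result objects: addIgnore,
    addOutputMismatch, onStart and onEnd.
    """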

    def __init__(self, options, *args, **kwargs):
        super(TestResult, self).__init__(*args, **kwargs)
        self._options = options

        # unittest.TestResult didn't have a skipped attribute until
        # Python 2.7. We need to polyfill it.
        self.skipped = []

        # We have a custom "ignored" result that isn't present in any Python
        # unittest implementation. It is very similar to skipped. It may make
        # sense to map it into skip some day.
        self.ignored = []

        self.times = []
        self._firststarttime = None
        # Data stored for the benefit of generating xunit reports.
        self.successes = []
        self.faildata = {}

    def addFailure(self, test, reason):
        print("FAILURE!", test, reason)

    def addSuccess(self, test):
        print("SUCCESS!", test)

    def addError(self, test, err):
        print("ERR!", test, err)

    # Polyfill.
    def addSkip(self, test, reason):
        print("SKIP!", test, reason)

    def addIgnore(self, test, reason):
        print("IGNORE!", test, reason)

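    # Hooks specific to Mercurial's test runner, invoked once at the start
    # and once at the end of the whole test run.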
    def onStart(self, test):
        print("ON_START!", test)

    def onEnd(self):
        print("ON_END!")

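    # Runner-specific hook, called when a test's actual output differs from
    # the expected output. A falsy return value presumably declines to handle
    # the mismatch, leaving it to be reported as an ordinary failure.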
    def addOutputMismatch(self, test, ret, got, expected):
        return False

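    # Unlike stock unittest's stopTest, the runner passes an extra
    # `interrupted` flag.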
    def stopTest(self, test, interrupted=False):
        super(TestResult, self).stopTest(test)
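
# A minimal sketch of driving this result by hand, the way a runner would.
# The positional arguments after `options` are forwarded to
# unittest._TextTestResult, which expects (stream, descriptions, verbosity);
# the options object is opaque to this class, so None suffices here.
if __name__ == '__main__':
    import sys

    class _DemoTest(unittest.TestCase):
        def runTest(self):
            pass

    t = _DemoTest()
    result = TestResult(None, sys.stdout, True, 1)
    result.onStart(t)
    result.startTest(t)
    result.addSuccess(t)
    result.stopTest(t)
    result.onEnd()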