Sun, 20 Apr 2014 14:55:33 -0700 run-tests: keep track of test start and stop in MercurialTest.run()
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Apr 2014 14:55:33 -0700] rev 21448
run-tests: keep track of test start and stop in MercurialTest.run() This makes run() more compatible with unittest.TestCase.run().
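A minimal sketch of the start/stop bookkeeping this entry describes, assuming a TestCase-style wrapper; the class name, the placeholder runTest() body, and the timestamp attributes are illustrative, not the actual run-tests.py identifiers.

    import sys
    import time
    import unittest

    class HypotheticalMercurialTest(unittest.TestCase):
        def runTest(self):
            pass  # placeholder; the real harness would run the .t script here

        def run(self, result=None):
            result = result or unittest.TestResult()
            # Mirror unittest.TestCase.run(): notify the result object when
            # the test starts and stops, and remember the timestamps ourselves.
            result.startTest(self)
            self._started = time.time()
            try:
                self.runTest()
            except Exception:
                result.addError(self, sys.exc_info())
            else:
                result.addSuccess(self)
            finally:
                self._stopped = time.time()
                result.stopTest(self)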
Sun, 20 Apr 2014 14:52:57 -0700 run-tests: keep track of test execution state in Test
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Apr 2014 14:52:57 -0700] rev 21447
run-tests: keep track of test execution state in Test This patch starts a mini-series that moves functionality into the newly established setUp() and tearDown() methods.
Sun, 20 Apr 2014 14:41:11 -0700 run-tests: support setUp() and tearDown() in TestCase wrapper
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Apr 2014 14:41:11 -0700] rev 21446
run-tests: support setUp() and tearDown() in TestCase wrapper unittest.TestCase.run() calls setUp() and tearDown() as part of running a test. We emulate that behavior.
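A sketch of what this entry and the one above it converge on: run() brackets the test body with setUp() and tearDown(), and the per-test execution state lives in those hooks. The attribute names below are guesses for illustration, not the real run-tests.py fields.

    class HypotheticalTest(object):
        """Sketch only: attribute and method names are illustrative."""

        def setUp(self):
            # Per-test execution state is initialized here instead of in run().
            self._finished = False
            self._ret = None   # exit code of the test script, once known
            self._out = None   # captured output, once known

        def tearDown(self):
            self._finished = True
            # Temporary directories, daemons, etc. would be cleaned up here.

        def run(self):
            # Emulate unittest.TestCase.run(): setUp() before the body,
            # tearDown() afterwards even if the body raises.
            self.setUp()
            try:
                self._execute()
            finally:
                self.tearDown()

        def _execute(self):
            # Hypothetical hook standing in for the actual test body.
            self._ret, self._out = 0, ''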
Sun, 20 Apr 2014 14:34:03 -0700 run-tests: fail tests by raising an exception
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Apr 2014 14:34:03 -0700] rev 21445
run-tests: fail tests by raising an exception When in unittest mode, Test.run() will now raise for all non-success cases. This makes it behave like TestCase.run().
Sun, 20 Apr 2014 14:32:03 -0700 run-tests: record warnings by raising WarnTest
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Apr 2014 14:32:03 -0700] rev 21444
run-tests: record warnings by raising WarnTest We continue the conversion to recording test results by raising exceptions.
Sun, 20 Apr 2014 14:28:29 -0700 run-tests: record ignored tests by raising IgnoreTest
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Apr 2014 14:28:29 -0700] rev 21443
run-tests: record ignored tests by raising IgnoreTest
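Taken together, the three entries above converge on one pattern: each non-success outcome becomes an exception type that the test body raises and the driver catches. A sketch of that hierarchy follows; WarnTest and IgnoreTest are the names used above, while the failure exception name and the dispatch helper are placeholders.

    class TestFailure(AssertionError):
        """Placeholder name: the test ran but produced the wrong result."""

    class WarnTest(Exception):
        """The test completed, but the outcome only warrants a warning."""

    class IgnoreTest(Exception):
        """The test should not be counted at all (e.g. it was filtered out)."""

    def _report(exc):
        # Hypothetical driver-side dispatch: map each exception to an outcome.
        if isinstance(exc, IgnoreTest):
            return 'ignored'
        if isinstance(exc, WarnTest):
            return 'warned'
        if isinstance(exc, TestFailure):
            return 'failed'
        return 'errored'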
Sun, 20 Apr 2014 14:23:50 -0700 run-tests: record skips by raising SkipTest
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Apr 2014 14:23:50 -0700] rev 21442
run-tests: record skips by raising SkipTest The unittest way of recording a skipped test is to raise an exception (at least in modern unittest implementations). We change Test to raise a SkipTest when operating in unittest mode. This does prevent some "tear down" activities from running in unittest mode. This will be fixed in subsequent patches. Since unittest mode is experimental, this should be OK.
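A minimal sketch of the skip convention described above. On Pythons whose unittest already ships SkipTest it is reused; otherwise a local stand-in is defined (the actual fallback in run-tests.py may differ), and the helper function is purely illustrative.

    try:
        from unittest import SkipTest
    except ImportError:
        # Very old unittest (e.g. Python 2.4) has no SkipTest; define one.
        class SkipTest(Exception):
            """Raised to mark the current test as skipped."""

    def _checkrequirements():
        # Instead of returning a special status, a test (or its setup code)
        # simply raises to record a skip.
        raise SkipTest('missing feature: required tool is not installed')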
Sun, 20 Apr 2014 14:19:59 -0700 run-tests: implement TestCase.run()
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Apr 2014 14:19:59 -0700] rev 21441
run-tests: implement TestCase.run() Simply wrapping TestCase.run() is not sufficient for robust results reporting because unittest in Python 2.4 does not know about things like skipped tests and reports them as successes or failures instead of skips. We will reimplement TestCase.run() with knowledge and semantics present in modern Python releases.
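A rough sketch (not the actual run-tests.py code) of why run() is reimplemented rather than wrapped: the method itself catches the outcome exceptions and reports them with modern semantics, degrading gracefully when the result object predates addSkip().

    import sys
    import unittest

    class SkipAwareTest(unittest.TestCase):
        def runTest(self):
            pass  # placeholder body

        def run(self, result=None):
            result = result or unittest.TestResult()
            result.startTest(self)
            try:
                self.runTest()
            except unittest.SkipTest as e:
                # Old result classes (Python 2.4 era) have no addSkip();
                # degrade to a non-failing success rather than to a failure.
                if hasattr(result, 'addSkip'):
                    result.addSkip(self, str(e))
                else:
                    result.addSuccess(self)
            except self.failureException:
                result.addFailure(self, sys.exc_info())
            except Exception:
                result.addError(self, sys.exc_info())
            else:
                result.addSuccess(self)
            finally:
                result.stopTest(self)

Because run() itself decides how each outcome is reported, the semantics stay consistent whether the supplied result object is modern or from an older unittest.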
Sun, 20 Apr 2014 14:04:37 -0700 run-tests: don't print progress from Test when in unittest mode
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Apr 2014 14:04:37 -0700] rev 21440
run-tests: don't print progress from Test when in unittest mode unittest does its own printing of progress indicators as part of TestResult. So, there is no need to print them when running in unittest mode. This will fix the double output of progress indicators that had been occurring in unittest mode.
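A small demonstration, not tied to run-tests.py itself, of the claim above: the stock TextTestResult already prints one progress marker per test, so any marker printed inside Test.run() would show up a second time.

    import io
    import unittest

    class Noop(unittest.TestCase):
        def runTest(self):
            pass

    stream = io.StringIO()
    # verbosity=1 makes the result object print "." / "s" / "F" per test.
    unittest.TextTestRunner(stream=stream, verbosity=1).run(
        unittest.TestSuite([Noop()]))
    print(stream.getvalue().splitlines()[0])  # "." comes from TestResult, not the test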
Sun, 20 Apr 2014 13:04:19 -0700 run-tests: define a custom TestSuite that uses _executetests()
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 20 Apr 2014 13:04:19 -0700] rev 21439
run-tests: define a custom TestSuite that uses _executetests() We now have a custom unittest.TestSuite implementation that uses _executetests() and thus knows how to execute tests concurrently. Running tests in --unittest mode will use this TestSuite. Since the TestSuite handles concurrency, the warnings around --jobs and --loop have been removed.
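A sketch of the custom-TestSuite idea under stated assumptions: run() hands the contained tests to a small scheduler instead of iterating serially. The thread pool below is only a stand-in for the harness's _executetests() machinery, with result bookkeeping serialized behind a lock.

    import threading
    import unittest

    class ConcurrentTestSuite(unittest.TestSuite):
        """Sketch: run the contained tests with up to `jobs` threads."""

        def __init__(self, tests=(), jobs=2):
            super(ConcurrentTestSuite, self).__init__(tests)
            self._jobs = jobs

        def run(self, result):
            sem = threading.Semaphore(self._jobs)
            lock = threading.Lock()        # serialize shared-result updates

            def _run(test):
                with sem:
                    local = unittest.TestResult()
                    test(local)            # run against a private result
                    with lock:
                        result.failures.extend(local.failures)
                        result.errors.extend(local.errors)
                        result.skipped.extend(local.skipped)
                        result.testsRun += local.testsRun

            threads = [threading.Thread(target=_run, args=(t,)) for t in self]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            return result

Since the suite itself owns the scheduling, options like --jobs and --loop can feed into it directly, which is why the warnings about combining them with --unittest could be dropped.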