Xcode doesn't terminate a test routine at failed assertions. Is this correct?
I fail to understand the reason behind this and I'd like it to behave like assert and have it terminate the program.
With the following test, it will print "still running".
Is this intended?
- (void)testTest
{
    XCTAssertTrue(false, @"boo");
    NSLog(@"still running");
}
I don't see how this would be useful, because often subsequent code would crash when pre-conditions aren't met:
- (void)testTwoVectors
{
    XCTAssertTrue(vec1.size() == vec2.size(), @"vector size mismatch");
    for (int i = 0; i < vec1.size(); i++) {
        XCTAssertTrue(vec1[i] == vec2[i]);
    }
}
You can change this behavior of the XCTAssert<XX> macros: in the -setUp method, set self.continueAfterFailure to NO.
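For example, in your XCTestCase subclass:

- (void)setUp
{
    [super setUp];
    // Abort the current test method at the first failed assertion
    // instead of continuing past it.
    self.continueAfterFailure = NO;
}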
IMO, stopping the test after an assertion failure is the better behavior (it prevents crashes that would stop other important tests from running). If a test needs to continue after a failure, the test case is simply too long and should be split.
Yes, it is intended. That's how unit tests work. Failing a test doesn't terminate the testing; it simply fails the test (and reports it as such). That's valuable because you don't want to lose the knowledge of whether your other tests pass or fail merely because one test fails.
If (as you say in your addition) the test method then proceeds to throw an exception, well then it throws an exception - and you know why. But the exception is caught, so what's the problem? The other test methods still run, so your results are still just what you would expect: one method fails its test and then stops, the other tests do whatever they do. You will then see something like this in the log:
Executed 7 tests, with 2 failures (1 unexpected) in 0.045 (0.045) seconds
The first failure is the XCTAssert. The second is the exception.
Just to clarify, you are correct that if a test generates a "failure", that individual test will still continue to execute. If you want to stop that particular test, you should simply return from it.
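For example, a guard like this (a sketch reusing the vec1/vec2 from your example) stops the rest of the method once the precondition fails:

- (void)testTwoVectors
{
    XCTAssertTrue(vec1.size() == vec2.size(), @"vector size mismatch");
    if (vec1.size() != vec2.size()) {
        return; // bail out; the element-wise comparison below would be meaningless
    }
    for (int i = 0; i < vec1.size(); i++) {
        XCTAssertTrue(vec1[i] == vec2[i]);
    }
}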
The fact that the test resumes can be very useful: you can identify not only the first issue that caused the test to fail, but all of the issues.
You say:
I don't see how this would be useful, because often subsequent code would crash when pre-conditions aren't met:
- (void)testTwoVectors
{
    XCTAssertTrue(vec1.size() == vec2.size(), @"vector size mismatch");
    for (int i = 0; i < vec1.size(); i++) {
        XCTAssertTrue(vec1[i] == vec2[i]);
    }
}
Your example is ironic, because, yes, I can understand why you'd want to return if the two vectors were different sizes, but if they happened to be the same size, the second half of your example test is a perfect example of why you might not want it to stop after generating a failure.
Let's assume that the vectors were the same size and five of the items were not the same. It might be nice to have the test report all five of those failures in this test, not just the first one.
When reviewing test results, it's sometimes just nice to know all of the sources of failure, not just the first one.
I am attempting to use the Cypress (at 7.6.0) retries feature as per https://docs.cypress.io/guides/guides/test-retries#How-It-Works,
but for some reason it does not seem to be working, in that a test that is guaranteed to always fail:
describe('Deliberate fail', function() {
    it('Make an assertion that will fail', function() {
        expect(true).to.be.false;
    });
});
When run from the command line with the config retries set to 1,
npx cypress run --config retries=1 --env server_url=http://localhost:3011 -s cypress/integration/tmp/deliberate_fail.js
it seems to pass, with the only hints that something is being retried being the text "Attempt 1 of 2" and the fact that a screenshot has been made.
The stats on the run also look illogical:
1 test
0 passing
0 failing
1 skipped (but does not appear as skipped in summary)
Exactly the same behavior when putting the "retries" option in cypress.json, whether as a single number or options for runMode or openMode.
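That is, either of these forms in cypress.json (the run-mode value is the one from this attempt; the openMode value shown is just an assumption):

{ "retries": 1 }

{ "retries": { "runMode": 1, "openMode": 0 } }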
And in "open" mode, the test does not retry but just fails.
I am guessing that I'm doing something face-palmingly wrong, but what?
I think your problem is that you are not testing anything. Cypress will retry operations that involve the DOM, because the DOM takes time to render. Retrying is more efficient than a straight wait, because it might happen quicker.
So I reckon because you are just comparing 2 literal values, true and false, Cypress says, "Hey, there is nothing to retry here, these two values are never going to change, I'm outta here!"
I was going to say, if you set up a similar test with a DOM element, it might behave as you are expecting, but in fact it will also stop after the first attempt, because when it finds the DOM element, it will stop retrying. The purpose of the retry is to allow the element to be instantiated rather than retrying because the value might be different.
I will admit that I could be wrong in this, but I have definitely convinced myself - what do you think?
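For instance, an assertion tied to the DOM is the kind of thing Cypress will keep retrying until it passes or times out (a generic sketch, not from the question's suite; the selector and text are placeholders):

// Cypress re-runs the query and the assertion until '#status' exists
// and has the expected text, or the command timeout is reached.
cy.get('#status').should('have.text', 'Ready');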
Found the cause. To fix the problem that Cypress does not abort a spec file when one of the test steps (the it() steps) fails, I had implemented the workaround for this very old issue https://github.com/cypress-io/cypress/issues/518:
//
// Prevent it just running on if a step fails
// https://github.com/cypress-io/cypress/issues/518
//
afterEach(function() {
    if (this.currentTest.state === 'failed') {
        Cypress.runner.stop()
    }
});
This means that a describe() will stop on failure, but apparently it does not play well with the retry feature.
My real wished-for use case is to retry at the describe() level, but that may be expensive: the above issue just got resolved, but the solution is only available to those on the Business plan at $300/mo. https://github.com/cypress-io/cypress/issues/518#issuecomment-809514077
When I write tests that involve subscribing to events on the EventStream or watching actors and listening for "Terminated", the tests work fine when run one by one, but when I run the whole test suite those tests fail.
The tests also work if each of them is placed in a separate test class with Xunit.
How come?
A repo with those kind of tests: https://github.com/Lejdholt/AkkaTestError
Took a look at your repository. I can reproduce the problems you are describing.
It feels like a bug in the TestKit, some timing issue somewhere. But it's hard to pin down.
Also, not all unit test frameworks are created equal. The TestKit uses its own TaskDispatcher to enable testing of operations that are normally inherently asynchronous.
This sometimes causes conflicts with the test framework being used. That is also, coincidentally, why Akka.NET moved to XUnit for its own CI process.
I have managed to fix your problem by not using the TestProbe. Although I'm not sure whether the problem lies with the TestProbe per se, or with the fact that you were using a global reference (your 'process' variable).
I suspect that the test framework, while running tests in parallel, might be causing some weird things to happen with your TestProbe reference.
Example of how I changed one of your tests:
[Test]
public void GivenAnyTime_WhenProcessTerminates_ShouldLogStartRemovingProcess()
{
    IProcessFactory factory = Substitute.For<IProcessFactory>();
    var testactor = Sys.ActorOf<FakeActor>("test2");
    processId = Guid.NewGuid();
    factory.Create(Arg.Any<IActorRefFactory>(), Arg.Any<SupervisorStrategy>()).Returns(testactor);

    manager = Sys.ActorOf(Props.Create(() => new Manager(factory)));
    manager.Tell(new StartProcessCommand(processId));

    EventFilter.Info("Removing process.")
        .ExpectOne(() => Sys.Stop(testactor));
}
It should be fairly self-explanatory how you should change your other tests.
The FakeActor is nothing more than an empty ReceiveActor implementation.
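That is, something like this (just a stub so the factory has a real actor to hand back; assuming nothing else is needed from it):

public class FakeActor : ReceiveActor
{
    // Intentionally empty: it only needs to exist so it can be started and stopped.
}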
I'm trying to capture output written from each task as it is executed. The code below works as expected when running Gradle with --max-workers 1, but when multiple tasks are running in parallel this code below picks up output written from other tasks running simultaneously.
The API documentation states the following about the "getLogging" method on Task. From what it says, I gather that it should support capturing output from a single task regardless of any other tasks running at the same time.
getLogging()
Returns the LoggingManager which can be used to control the logging level and standard output/error capture for this task. https://docs.gradle.org/current/javadoc/org/gradle/api/Task.html
graph.allTasks.forEach { Task task ->
    task.ext.capturedOutput = [ ]
    def listener = { task.capturedOutput << it } as StandardOutputListener

    task.logging.addStandardErrorListener(listener)
    task.logging.addStandardOutputListener(listener)

    task.doLast {
        task.logging.removeStandardOutputListener(listener)
        task.logging.removeStandardErrorListener(listener)
    }
}
Have I messed up something in the code above or should I report this as a bug?
It looks like every LoggingManager instance shares an OutputLevelRenderer, which is what your listeners eventually get added to. This did make me wonder why you weren't getting duplicate messages because you're attaching the same listeners to the same renderer over and over again. But it seems the magic is in BroadcastDispatch, which keeps the listeners in a map, keyed by the listener object itself. So you can't have duplicate listeners.
Mind you, for that to hold, the hash code of each listener must be the same, which seems surprising. Anyway, perhaps this is working as intended, perhaps it isn't. It's certainly worth raising an issue to get some clarity on whether Gradle should support listeners per task. Alternatively, raise it on the dev mailing list.
I'm testing ReactiveUI, and it seems very nice.
However, I am a bit puzzled by the MessageBus.
Sample code:
var bus = new MessageBus();
int result = -1;
bus.Listen<int>().Subscribe(x => result = x);
bus.SendMessage(42);
It does work when calling an Assert statement, but in a standard WPF application the result value is never updated. This is probably due to the Scheduler implementation, but it's not quite clear to me yet.
Any hint is welcome.
The result is eventually updated (the same as calling Dispatcher.BeginInvoke), not immediately. By default, RxUI schedules things differently in a unit test runner to make it easier to write unit tests - that's why you see that warning in the unit test runner output.
If you were to instead do something like:
var bus = new MessageBus();
bus.Listen<int>().Subscribe(x => MessageBox.Show("The answer is " + x));
bus.SendMessage(42);
You will see the Message Box (if not, it's definitely a bug!).
Why is the MessageBus deferred? It makes it easier to write async code, since you can now SendMessage from other threads without seeing WPF's dreaded InvalidOperationException due to accessing objects on the wrong thread.
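A rough sketch of what that buys you in WPF code-behind (StatusText is an assumed TextBlock; it needs using System;, using System.Threading.Tasks; and using ReactiveUI;):

var bus = new MessageBus();
bus.Listen<int>().Subscribe(x => StatusText.Text = "The answer is " + x);

// Sending from a worker thread is fine: delivery is deferred back through the
// dispatcher, so the TextBlock is only touched on the UI thread and you avoid
// the InvalidOperationException.
Task.Run(() => bus.SendMessage(42));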
Recently I started using MSTest for testing.
Is there any way to write messages to the test window if a test succeeds? I don't see a way; messages seem to be allowed only if a test fails. What if I want to, say, print a little description of a test, so I can see what the test does without having to open it? Or, as is now the case, I'm measuring execution times for some tests and I want to print those times out.
Is there a way to extend test methods so that I can easily choose whether to run tests with or without time measuring, i.e. choose the mode of test execution?
Thanks
Right click on the columns in the test result window and choose "Add/Remove Columns". Add the columns for "Duration" and "Output (StdOut)". That will give you test timing and let you see what the tests print.
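If you also want to write your own text (timing, a short description) from a passing test, a sketch along these lines should show up in that "Output (StdOut)" column; the class and method names are just placeholders:

using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TimingTests
{
    // MSTest injects this automatically; TestContext.WriteLine output is
    // attached to the test result even when the test passes.
    public TestContext TestContext { get; set; }

    [TestMethod]
    public void AddsTwoNumbersTogether()
    {
        var stopwatch = Stopwatch.StartNew();

        Assert.AreEqual(4, 2 + 2);

        stopwatch.Stop();
        TestContext.WriteLine("Adds two numbers together; took {0} ms", stopwatch.ElapsedMilliseconds);
    }
}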
Why not give your tests descriptive names?
[TestMethod]
public void AddsTwoNumbersTogether() {...}

[TestMethod]
public void DividesFirstNumberBySecondNumber() {...}
etc.