Cypress retries making test always pass - continuous-integration

I am attempting to use the Cypress (7.6.0) retries feature as per https://docs.cypress.io/guides/guides/test-retries#How-It-Works
but for some reason it does not seem to be working, in that a test that is guaranteed to always fail:
describe('Deliberate fail', function() {
  it('Make an assertion that will fail', function() {
    expect(true).to.be.false;
  });
});
When run from the command line with the config retries set to 1,
npx cypress run --config retries=1 --env server_url=http://localhost:3011 -s cypress/integration/tmp/deliberate_fail.js
it seems to pass, with the only hints that something is being retried being the text "Attempt 1 of 2" and the fact that a screenshot has been taken.
The stats on the run also look illogical:
1 test
0 passing
0 failing
1 skipped (but does not appear as skipped in summary)
Exactly the same behavior occurs when putting the "retries" option in cypress.json, whether as a single number or as an object with runMode and openMode options.
And in "open" mode, the test does not retry but just fails.
I am guessing that I'm doing something face-palmingly wrong, but what?

I think your problem is that you are not testing anything. Cypress will retry operations that involve the DOM, because the DOM takes time to render. Retrying is more efficient than a fixed wait, because the condition may be satisfied sooner.
So I reckon that because you are just comparing two literal values, true and false, Cypress says, "Hey, there is nothing to retry here, these two values are never going to change, I'm outta here!"
I was going to say that if you set up a similar test with a DOM element, it might behave as you are expecting, but in fact it will also stop after the first attempt, because once it finds the DOM element it stops retrying. The purpose of the retry is to give the element time to be instantiated, not to retry because the value might be different.
I will admit that I could be wrong in this, but I have definitely convinced myself - what do you think?
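To illustrate the distinction being described, here is a minimal sketch (the page and the #status selector are hypothetical, not from the question): a bare expect() on literal values is evaluated exactly once, whereas a cy.get() chained with .should() keeps re-querying the DOM until the assertion passes or the command times out.
it('only DOM queries are retried', () => {
  expect(1 + 1).to.equal(2);                      // evaluated once, never retried

  cy.visit('/');                                  // assumes an app is being served at baseUrl
  cy.get('#status').should('contain', 'Ready');   // re-queried for up to defaultCommandTimeout
});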

Found the cause. To work around the long-standing problem that Cypress does not abort a spec file when one of its test steps (the it() steps) fails, I had implemented the workaround from this very old issue https://github.com/cypress-io/cypress/issues/518
//
// Prevent it just running on if a step fails
// https://github.com/cypress-io/cypress/issues/518
//
afterEach(function() {
  if (this.currentTest.state === 'failed') {
    Cypress.runner.stop()
  }
});
This makes a describe() stop on the first failure, but apparently it does not play well with the retry feature.
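One way to reconcile the two, as a hedged sketch rather than anything from the Cypress docs: only stop the runner once the failed test has exhausted its retries. currentRetry() and retries() are Mocha internals exposed on the test object and may change between versions.
afterEach(function() {
  const test = this.currentTest;
  // Stop the runner only when the test has failed on its final attempt,
  // so any configured retries still get a chance to run.
  if (test.state === 'failed' && test.currentRetry() >= test.retries()) {
    Cypress.runner.stop();
  }
});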
My real wished-for use case is retrying at the describe() level, but that may be expensive: the issue above was recently resolved, but the solution is only available to those on the Business plan at $300/mo. https://github.com/cypress-io/cypress/issues/518#issuecomment-809514077

Related

Cypress not retrying assertion

My Cypress test is acting inconsistently due to an assertion set on header text. Here is my code:
cy.get('.heading-large').should('contain', 'dashboard') // passes
cy.contains('View details').first().click()
cy.get('.heading-large').should('contain', 'Registration details') // sometimes fails
If it fails, it is because the heading still contains 'dashboard' - Cypress appears not to have retried, but gives the error Timed out retrying: expected '<h1.heading-large>' to contain 'Registration details'
From reading about Cypress retry-ability, my understanding is that the should assertion keeps retrying until the timeout, which is set as "defaultCommandTimeout": 5000. I believe this should hold even if an element with the same identifier exists on both pages. There are no major performance issues with the app I'm testing.
The test seems more likely to fail if I am not watching the window and this issue looks like a possible cause.
Can anyone help determine: is there an issue with my test or Cypress, and how might I improve the test? I'm using Cypress 5.1.0 and Chrome 85 on MacOS Catalina.
It is failing occasionally because the request that fills the header with information has not resolved by the time the timeout is reached.
You can solve this by setting up a route with a route alias and waiting for that exact request to resolve before making the next assertion.
In other words, when you click(), a request is sent that fetches the information you want to check for in the next get(). The response to this request has sometimes not resolved by the time your get() reaches its timeout. You could increase the timeout, but that is not recommended and not good practice here. Instead, wait for that specific response with a route and route alias. That way, the last get() won't run until the information it is looking for has resolved.
I don't know your request but it would work something like this:
// setup the route and alias
cy.server()
cy.route("/yourRequestUrlHere").as("myLovelyAlias")

// first get
cy.get('.heading-large').should('contain', 'dashboard')

// this click fires the request url from route() above
cy.contains('View details').first().click()

// wait for the route to resolve using the route alias
cy.wait("@myLovelyAlias").then((response) => {
  // next get called after the response resolves
  cy.get('.heading-large').should('contain', 'Registration details')
})
Reference:
Route & alias
Route
Best Practice - get()
Network Request - wait()
edit:
As mentioned above, you could also cheat and raise the timeout, either by increasing defaultCommandTimeout globally or by passing a timeout to the individual get(), but that is not recommended because you could still run into cases where the response takes longer than the timeout you've set. The route/wait pattern is the better, more stable approach.
Just in case you want to know how it's done though, you would change your get() to something like:
cy.get('.heading-large', { timeout: 60000 }).should('contain', 'Registration details')
Again, the other way would be much better.
Reference:
Cypress configuration
It looks like we need to wait for the Cypress bug "Some tests flake only if test runner's browser loses focus (or run headlessly)" to be fixed. I have tried the alternative, helpful answers but consistently face the original issue when the window is out of focus.
Thank you to those who have answered and commented.

Cypress async form validation - how to capture (possibly) quick state changes

I have some async form validation code that I'd like to put under test using Cypress. The code is pretty simple -
1. On user input, enter the async validation UI state (or stay in that state if there are previous validation requests that haven't been responded to).
2. Send a request to the server.
3. Receive a response.
4. If there are no pending requests, leave the async validation UI state.
Step 1 is the part I want to test. Right now, this means checking if some element has been assigned some class -- but the state changes can happen very fast, and most of the time (not always!) Cypress times out waiting for something that has ALREADY happened (in other words, step 4 has already occurred by the time we get around to seeing if step 1 happened).
So the failing test looks like:
cy.get("#some-input").type("...");
cy.get("#some-target-element").should("have.class", "class-to-check-for");
Usually, by the time Cypress gets to the second line, step 4 has already run and the test fails. Is there a common pattern I should know about to solve this? I would naturally prefer not to have to change the code under test.
Edit 1:
I'm not certain that I've 100% solved the "race" condition here, but if I use the underlying native elements (discarding the jQuery abstraction), I haven't had a failure yet.
So, changing:
cy.get("#some-input").type("...")
to:
cy.get("#some-input").then(jQueryObj => {
let nativeElement = jQueryObj[0];
nativeElement.value = "...";
nativeElement.dispatchEvent(new Event("input")); // make sure the app knows this element changed
});
And then running Cypress' checks for what classes have / haven't been added has been effective.
You can stub the server request that happens during form validation and slow it down - see the delay parameter: https://docs.cypress.io/api/commands/route.html#Use-delays-for-responses
While the request is delayed, your app's validation UI stays visible, so you can assert on it; then, once the request finishes, check that the UI goes away.
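A minimal sketch of that idea, assuming the validation request is a POST to /api/validate (the URL and response body are placeholders; the selectors and class name come from the question):
// stub the validation request and hold its response for a second
cy.server()
cy.route({
  method: 'POST',
  url: '/api/validate',
  response: { valid: true },
  delay: 1000                       // keeps the pending-state UI visible long enough to assert on
}).as('validate')

cy.get('#some-input').type('...')

// while the stubbed response is delayed, the pending class should be present
cy.get('#some-target-element').should('have.class', 'class-to-check-for')

// once the response resolves, the pending class should be removed again
cy.wait('@validate')
cy.get('#some-target-element').should('not.have.class', 'class-to-check-for')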

Using XCTFail when waiting on an expectation does not prevent timeout

When running an XCTest of an asynchronous operation, calling XCTFail() does not immediately fail the test, which was my expectation. Instead, whatever timeout period is remaining from the call to wait is first exhausted, which unnecessarily extends the test time and also creates a confusing failure message implying the test failed due to timeout, when in fact it explicitly failed.
func testFoo() {
    let x = expectation(description: "foo")
    DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
        XCTFail("bar")
    }
    wait(for: [x], timeout: 5)
}
In the above example, although the failure occurs after roughly 2 seconds, the test does not complete until the timeout period of 5 seconds has elapsed. When I first noticed this behavior I thought I was doing something wrong, but this seems to just be the way it works, at least with the current version of Xcode (9.2).
Since I didn't find any mention of this via google or stackoverflow searches, I'm sharing a workaround I found.
I discovered that the XCTestExpectation can still be fulfilled after calling XCTFail(), which does not count as a pass and immediately expires the wait. So, applying this to my initial example:
DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
    XCTFail("bar")
    x.fulfill()
}
This may be what Apple expects but it wasn't intuitive to me and I couldn't find it documented anywhere. So hopefully this saves someone else the time that I spent confused.

Catching MochaJS timeouts

This question has been asked before, but the answer given was to re-write the specific code that was used in that case.
The Mocha documentation only mentions changing the duration of timeouts, and not the behaviour on timeout.
In my case, I want to test code that under certain conditions does not have any side effects. The obvious way to do this is to call the code, wait for Mocha to time out, and then run some assertions on my spies, e.g.:
this.onTimeout(function() {
  assert.equal(someObject.spiedMethod.called, false);
});
someAsyncFunction();
Is this possible, or do I have to resort to setting my own timeout, e.g.:
// 1. Disable the mocha timeout
this.timeout(0);

// 2. create my own timeout, which should fire before the mocha timeout
setTimeout(function() {
  assert.equal(someObject.spiedMethod.called, false);
  done();
}, 2000);

someAsyncFunction();
Mocha's own timeouts are designed only to detect when a test is slow beyond anything reasonable. An asynchronous computation could be buggy in a way that causes it to never terminate. Rather than wait forever, Mocha lets you decide that after X time it should fail the test. As soon as the timeout is hit, Mocha gives up on the test that timed out.
So you have to set your own timeouts.
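Here is a minimal, self-contained sketch of that approach, assuming the goal is asserting that a spied method is never called; someObject and someAsyncFunction are stand-ins for the question's code under test:
const assert = require('assert');

// Stand-ins for the code under test: a hand-rolled "spy" flag on the method.
const someObject = {
  called: false,
  spiedMethod: function() { this.called = true; }
};
function someAsyncFunction() { /* under the tested condition, never calls spiedMethod */ }

describe('no side effects', function() {
  it('never calls spiedMethod', function(done) {
    this.timeout(0);                  // 1. disable Mocha's timeout for this test

    someAsyncFunction();

    setTimeout(function() {           // 2. our own deadline replaces Mocha's
      assert.equal(someObject.called, false);
      done();
    }, 2000);
  });
});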
I was facing a similar problem and also didn't find a way to do that in mochajs.
However, if you want to run some code on test timeout, you could structure your code like this:
describe('something', function() {
  const TEST_TIMEOUT = 10000
  this.timeout(TEST_TIMEOUT)

  beforeEach('set timeout callback', function() {
    setTimeout(function() { console.log('hi') }, TEST_TIMEOUT)
  })
})
A few caveats:
1. I'm not sure whether it would affect a test's result. But if it did, I'd call setTimeout with TEST_TIMEOUT minus some margin, to make sure my assertions run before the mochajs timeout fires.
2. Conversely, if the goal of the callback is not to make assertions but to run some code (e.g., cleanup or logging), I'd set the timeout to TEST_TIMEOUT plus some margin.
Starting with mocha v4 (or with the --no-exit flag in earlier versions), as long as you have a timeout (or other handlers) keeping the process alive, mocha will not force-exit as it used to, so you can be confident your code will run.

XCTAssertTrue doesn't stop routine

Xcode doesn't terminate a test routine at failed assertions. Is this correct?
I fail to understand the reason behind this and I'd like it to behave like assert and have it terminate the program.
With the following test, it will print "still running".
Is this intended?
- (void)testTest
{
    XCTAssertTrue(false, @"boo");
    NSLog(@"still running");
}
I don't see how this would be useful, because often subsequent code would crash when pre-conditions aren't met:
- (void)testTwoVectors
{
    XCTAssertTrue(vec1.size() == vec2.size(), @"vector size mismatch");
    for (int i = 0; i < vec1.size(); i++) {
        XCTAssertTrue(vec1[i] == vec2[i]);
    }
}
You can change this behavior of XCTAssert<XX>.
In the setUp method, set self.continueAfterFailure to NO.
IMO, stopping the test after an assertion failure is the better behavior (it prevents crashes that would keep other important tests from running). If a test needs to continue after a failure, that means the test case is simply too long and should be split.
Yes, it is intended. That's how unit tests work. Failing a test doesn't terminate the testing; it simply fails the test (and reports it as such). That's valuable because you don't want to lose the knowledge of whether your other tests pass or fail merely because one test fails.
If (as you say in your addition) the test method then proceeds to throw an exception, well then it throws an exception - and you know why. But the exception is caught, so what's the problem? The other test methods still run, so your results are still just what you would expect: one method fails its test and then stops, the other tests do whatever they do. You will then see something like this in the log:
Executed 7 tests, with 2 failures (1 unexpected) in 0.045 (0.045) seconds
The first failure is the XCTAssert. The second is the exception.
Just to clarify, you are correct that if a test generates a "failure", that individual test will still continue to execute. If you want to stop that particular test, you should simply return from it.
The fact that the test continues can be very useful: you can identify not just the first issue that caused the test to fail, but all of them.
You say:
I don't see how this would be useful, because often subsequent code would crash when pre-conditions aren't met:
- (void)testTwoVectors
{
    XCTAssertTrue(vec1.size() == vec2.size(), @"vector size mismatch");
    for (int i = 0; i < vec1.size(); i++) {
        XCTAssertTrue(vec1[i] == vec2[i]);
    }
}
Your example is ironic, because, yes, I can understand why you'd want to return if the two vectors were different sizes, but if they happened to be the same size, the second half of your example test is a perfect example of why you might not want it to stop after generating a failure.
Let's assume that the vectors were the same size and five of the items were not the same. It might be nice to have the test report all five of those failures in this test, not just the first one.
When reviewing test results, it's sometimes nice to know all of the sources of failure, not just the first one.
