I have several cypress spec files running against a web app in CI/CD build pipelines.
For whatever reason, there is a gap in time between when each spec file runs in the pipeline, so the more spec files we add, the slower the build runs. I can see in the logs that it's about 30 seconds to a full minute between spec files (I turned off the video recording option to make sure that was not somehow related). Recently, it has begun to stall out completely and the build step fails from timing out.
To verify it wasn't related to the number of tests, I did an experiment by combining all the different tests into a single spec file and running only that file. This worked perfectly - because there was only a single spec file to load, the build did not experience any of those long pauses in between running multiple spec files.
Of course, placing all our tests into a single file is not ideal. I know that with the Cypress Test Runner there is a way to run all tests across multiple spec files as if they were in a single file, using the "Run all specs" button. From the Cypress docs:
"But when you click on "Run all specs" button after cypress open, the Test Runner bundles and concatenates all specs together..."
I want to accomplish the exact same thing through the command line. Does anyone know how to do this? Or accomplish the same thing in another way?
Using cypress run is not the equivalent. Although this command runs all tests, it still fires up each spec file separately (hence the delay issue in the pipeline).
Seems like they don't want to do it that way. Sorry for not having a better answer.
https://glebbahmutov.com/blog/run-all-specs/
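One workaround (the same idea comes up again further down) is a single "entry" spec that only imports the other spec files, so Cypress has just one file to load. A minimal sketch, with made-up file names:
// all-specs.js -- illustrative entry spec; the file names are assumptions
import './login.spec.js'
import './checkout.spec.js'
// ...import the remaining specs here
Pointing cypress run at just this file (the --spec option accepts a single path) avoids paying the per-spec startup cost for every file.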
A lot has changed in terms of config in Cypress 10, so I've been dealing with a bunch of new stuff to create global variables, commands, etc., to be able to run Cypress with the cucumber preprocessor in different environments. Also, Cypress has removed the option to run all specs from the runner, so I can't pick a folder or filter and run all tests within it. I figured the only way to do it is through the command line, but it's still not working.
So I'm not sure if the way to create test suites is still to use tags, or if there is a new, improved one. Anyway, my question is: how can I group test cases to run them separately now? A simple case would be running a #Smoke suite or a #Regression suite.
I've already tried adding the tags in the .feature files above Scenario, and I'm running the command with the --tags=#Tag parameter, but it doesn't seem to pick them up.
I'm not in need of a complete solution, but it would help if you could point me in the right direction, because I haven't found a straight answer. Thank you!
I have just upgraded to Cypress v10.3 from v9.7. I used to run a sequence of tests by specifying the testFiles sequence in configuration, but now two things have changed that prevent me from doing so.
The "Run all specs" button has been removed
The configuration has changed from testFiles to specPattern
I know the tests should be independent, and have made sure that is so.
But the test report goes to stakeholders and the order of tests presented on the report is important, so the report needs to read in logical order.
I've been manually re-sorting, but it's a pain to do this. How can I reinstate the missing feature in Cypress v10?
This might work, or might not, depending on how your report outputs the tests.
Create a "runner" test that imports all the tests in your batch in the order you want them to run.
// run-batch-in-order.cy.js
import './test1.spec.cy.js' // relative paths
import './test2.spec.cy.js'
...
Run just the special batch test in either cypress open or cypress run.
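For example, assuming the default Cypress 10 folder layout, something like npx cypress run --spec cypress/e2e/run-batch-in-order.cy.js should execute only the runner file, and the imported specs will run inside it as a single spec.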
How can I run all specs from the command line in Cypress? I have 3 spec files which depend on each other, and the browser shouldn't reset after each test.
"But when you click on "Run all specs" button after cypress open, the Test Runner bundles and concatenates all specs together..."
I want to accomplish the exact same thing through the command line.
You might not like this answer, but you're going head first against the wall there.
One of the goals in pretty much any testing project is making your tests completely independent from one another, and there are plenty of reasons to do so, just a few being:
You don't care if one test failed and the chain is broken.
Similarly, changing/updating one test case doesn't break a chain.
You can run your tests in parallel, which is a serious point in any project that plans to scale.
As far as I know, this browser/runner reset after each spec file is desired behavior on Cypress's side to make parallelization possible (though I can't remember where I read that), so I don't think there's any workaround for your problem.
What is the most elegant approach for debugging a large E2E test?
I am using the TestCafe automation framework, and currently I'm facing multiple tests that are flaky and require a fix.
The problem is that every time I modify something within the test code I need to run the entire test from the start in order to see if that new update succeeds or not.
I would like to hear ideas about strategies for debugging an E2E test without losing your mind.
Current debug methods:
Using the built-in TestCafe debugging mechanism in the problematic area of the code and commenting out everything before that line.
But that really doesn't feel like the best approach.
When there is prerequisite data such as user credentials, URL, etc., I manually declare it again just before the debug().
PS: I know that tests should be as focused as possible and relatively small, but this is what we have now.
Thanks in advance
You can try using the flag
--debug-on-fail
This pauses the test when it fails and allows you to view the tested page and determine the cause of the fail.
Also, use test.only to specify that only a particular test or fixture should run while all others are skipped.
https://devexpress.github.io/testcafe/documentation/using-testcafe/command-line-interface.html#--debug-on-fail
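A rough sketch of combining the two (the page URL and selectors below are made up; t.debug() is TestCafe's built-in pause):
// login.test.js -- only this test runs, everything else is skipped
import { Selector } from 'testcafe';

fixture('Login page')
    .page('https://example.com/login'); // placeholder URL

test.only('flaky login scenario', async t => {
    await t
        .typeText(Selector('#user'), 'demo') // assumed selectors
        .click(Selector('#submit'))
        .debug(); // pauses here so you can inspect the page state
});

// run with: testcafe chrome login.test.js --debug-on-fail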
You can use the takeScreenshot action to capture the existing state of the application during the test. TestCafe stores the screenshots inside the screenshots sub-directory and names each file with a timestamp. Alternatively, you can add the takeOnFails command line option to automatically capture the screen whenever a test fails, i.e. at the point of failure.
Another option is to slow down the test so it's easier to observe while it is running. You can adjust the speed using the --speed command line flag; 1 is the fastest speed and 0.01 the slowest. You can also record the test run using the --video command line flag, but you need to set up FFmpeg for this.
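As a small illustration of the screenshot idea (the page and selector below are placeholders):
// checkout.test.js
import { Selector } from 'testcafe';

fixture('Checkout')
    .page('https://example.com/checkout'); // placeholder URL

test('capture state mid-flow', async t => {
    await t
        .click(Selector('#pay')) // assumed selector
        .takeScreenshot('after-pay.png'); // saved under the screenshots directory
});

// slower run plus video recording (FFmpeg required), for example:
// testcafe chrome checkout.test.js --speed 0.1 --video artifacts/videos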
When I run my XCTests, I'd like to automatically rerun, once, any integration (unit/ui) test that fails. Is this possible?
This would be done in the same test, without having to hit 'run' again on my tests, or any part of my tests. I'm running all my tests from the command line.
I am doing UI tests that use the network to make calls to the server. If there is a significant server problem, the test will/should fail and report an error. However, if it is only a temporary problem with the request, it would be nice to rerun the test automatically and see if it passes. Also, with the current state of Xcode UI testing there are some occasional problems where it will crash for an obscure reason, and it would be nice to rerun the test automatically to see if it passes the second time.
It would be especially nice if it could output what happened, i.e. "The test failed the first time, because of failure getting refreshed snapshot, but passed the second time"
You can now do this in Xcode without any scripts.
Press ⌘ + 6 or select the test assistant view.
Filter for failed tests
Select the tests, right-click, and choose Run x test methods > Run in all classes
(optional) If you want to run the same tests again press ⌃ + ⌥ + ⌘ + G for a quick way to run them right away.
Task list to accomplish this without fastlane and with granular test reruns
create a script that runs the tests, captures the list of failed tests, and then reruns them using the -only-testing:TEST-IDENTIFIER flag of xcodebuild
parse the resultBundle or .xcresult to find the failed tests
write code to erase the device or simulator executing the tests
reboot the device if it is a real one to clear any possible dialogs
use XCAttachment to collect things you care about, like app logs
use xchtmlreport to generate a nice webpage from the results bundle with your snapshots and attachments.
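As a very rough sketch of the first few bullets (not a drop-in solution: the scheme, the destination, and the assumption that failures can be scraped from xcodebuild's "Test Case '-[...]' failed" log lines are all placeholders):
// rerun-failures.js -- illustrative Node script; names and log format are assumptions
const { execSync } = require('child_process');

const BUILD_CMD =
  "xcodebuild test -scheme MyAppUITests " +                 // placeholder scheme
  "-destination 'platform=iOS Simulator,name=iPhone 14'";   // placeholder destination

function runTests(extraArgs = '') {
  const opts = { encoding: 'utf8', maxBuffer: 64 * 1024 * 1024 }; // xcodebuild logs are large
  try {
    return { log: execSync(`${BUILD_CMD} ${extraArgs}`, opts), failed: false };
  } catch (err) {
    // xcodebuild exits non-zero when any test fails; the log is still on stdout
    return { log: (err.stdout || '').toString(), failed: true };
  }
}

// Turn "Test Case '-[Module.Class method]' failed" lines into identifiers that
// -only-testing understands: Target/Class/method (this assumes the module name
// in the log matches the test target name)
function failedTests(log) {
  const re = /Test Case '-\[(\S+)\.(\S+) (\S+)\]' failed/g;
  const ids = new Set();
  let match;
  while ((match = re.exec(log)) !== null) {
    ids.add(`${match[1]}/${match[2]}/${match[3]}`);
  }
  return [...ids];
}

const first = runTests();
if (!first.failed) {
  console.log('All tests passed on the first run');
} else {
  const retryArgs = failedTests(first.log)
    .map(id => `-only-testing:${id}`)
    .join(' ');
  if (!retryArgs) {
    console.log('Run failed but no failed test cases were found in the log');
  } else {
    console.log(`Re-running: ${retryArgs}`);
    const second = runTests(retryArgs);
    console.log(second.failed ? 'Still failing after the retry' : 'Passed on the retry');
  }
}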
Fastlane can make this possible. Here is a great blog post on the same topic:
Stabilizing the CI By Re-runing Flaky iOS XCUI Tests