I have a flaky ("flapping") test in my test suite - it fails once out of every hundred runs or so.
However, this is kind of expected behavior, as it's a test for an algorithm that runs a lot of things and returns a result - every once in a while a result just can't be found. If I run the test suite again, everything is green.
Is there a way to re-run this one particular test if it fails and use the results of the rerun?
For example, 99% of the time this test passes. In the event that it fails, it should automatically be re-run without triggering a failure. Only if that re-run also fails should it be reported as a normal failing test.
I'm thinking something like this, which is used to stop the test suite on failure:
class << MiniTest::Unit.runner; self; end.class_eval do
  def puke(suite, test, e)
    super(suite, test, e)
    if ENV['FAIL_FAST'] && !e.kind_of?(MiniTest::Skip)
      turn_reporter.io.puts "****** Fail fast active for tests, exiting (environment variable FAIL_FAST exists)"
      exit(1)
    end
  end
end
but instead of 'failing fast' it would 're-run' the failing test. The difference is that the code above is gated by an environment variable, so it 'pukes' whenever any test fails; I would only want to apply this 're-run' flag to one specific test.
Has anyone found a way to do this?
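The closest thing I can sketch (assuming standard MiniTest assertion exceptions; retry_flaky and the attempt count are names I made up for illustration, not an existing API) is to wrap the body of just that one test in a retry helper instead of patching the runner:

def retry_flaky(attempts = 2)
  tries = 0
  begin
    tries += 1
    yield
  rescue MiniTest::Assertion
    retry if tries < attempts
    raise # the final failure is reported as a normal failing test
  end
end

def test_algorithm_finds_result
  retry_flaky(2) do
    assert result_found? # hypothetical flaky assertion
  end
end

This keeps the retry scoped to the one known-flaky test, and a second consecutive failure still surfaces normally.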
When I run all my component Cypress tests locally on a MacBook Pro in a React + Vite project with around 10 tests, I get the following error:
An uncaught error was detected outside of a test:
TypeError: The following error originated from your test code, not from Cypress.
> Failed to fetch dynamically imported module: http://localhost:5173/__cypress/src/cypress/support/component.ts
When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.
Cypress could not associate this error to any specific test.
We dynamically generated a new test to display this failure.
The error is not consistent and doesn't show up on every run; it is also thrown from a random test each run. How can I solve this?
Update: I think a possible lead could be that I import files in my project with an absolute-path pattern.
For example:
import {comp1, comp2} from 'components'
where components is an alias configured in my tsconfig.json file.
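If that's the cause, the thing to verify (a sketch - I'm assuming the alias is declared in the tsconfig paths mapping but not mirrored for Vite, and src/components is a guess at the project layout) would be whether vite.config.ts resolves the same alias:

// vite.config.ts
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import path from 'path'

export default defineConfig({
  plugins: [react()],
  resolve: {
    alias: {
      // mirror the tsconfig path mapping so the dev server can resolve
      // imports like: import { comp1, comp2 } from 'components'
      components: path.resolve(__dirname, 'src/components'),
    },
  },
})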
OK, so after countless attempts to fix this, and also encountering terminal freezes when I execute cypress run, I gave up and created a bash script to run each of the tests in the code base separately:
#!/bin/bash
set -x
for file in $(find . -type f -name '*.spec.cy.tsx'); do
  yarn cypress run --component --browser chrome --spec "$file" || exit 1
done
For now it seems to get the job done. Hope this helps anyone else who encounters this.
The documentation for the --num-flaky-test-attempts parameter of gcloud firebase test android run says the following:
Specifies the number of times a test execution should be reattempted if one or more of its test cases fail for any reason.
This means it reruns only the failed tests but not the whole suite, right? In other words, as soon as a test passes it won't be retried, right?
The command line parameter --num-flaky-test-attempts of gcloud firebase test android run appears to rerun all of the tests instead of just the failed tests.
I ran a suite of tests using --num-flaky-test-attempts 10, and here are the timestamps from the logs for one test in the suite:
04-27 03:41:51.225 passed
04-27 03:41:50.519 passed
04-27 03:41:43.533 failed
04-27 03:41:48.625 failed
04-27 03:42:13.886 failed
04-27 03:41:33.749 failed
04-27 03:41:43.694 failed
04-27 03:41:42.101 failed
04-27 03:41:20.310 passed
04-27 03:40:17.819 passed
04-27 03:33:14.154 failed
It appears to have executed the entire test suite each time: the test mentioned above passed in some runs and failed in others. Since it both passed and failed multiple times, it's clearly being rerun regardless of whether it previously passed or failed.
I believe there were 11 total runs of the suite, because I specified --num-flaky-test-attempts 10, which means it attempted the suite once and, since that failed, ran it 10 more times for a total of 11.
Here is the full command in case that's helpful to anyone:
gcloud firebase test android run \
--project locuslabs-android-sdk \
--app app/build/outputs/apk/debug/app-debug.apk \
--test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
--device model=walleye,version=28,locale=en_US,orientation=portrait \
--test-targets "class com.locuslabs.android.sdk.TestUITest" \
--use-orchestrator \
--num-flaky-test-attempts 10 \
--timeout 30m \
--environment-variables numShards=10,shardIndex=2 \
--verbosity debug
The documentation states the following for --num-flaky-test-attempts:
Specifies the number of times a test execution should be reattempted if one or more of its test cases fail for any reason. An execution that initially fails but succeeds on any reattempt is reported as FLAKY.
That is, if one test case in a test execution fails, Test Lab re-runs the whole test execution. A test execution comprises running the whole test suite on one device.
Example: you execute your test suite on two devices, let's call them A and B. The whole test suite succeeds on A, but one test case fails on B. In this case, only the test suite on device B will be re-attempted.
I am running a Protractor suite (a spec file with multiple test cases). If any test case fails, Protractor does not continue with the next test case, and all of the remaining test cases also fail.
EXPECTED BEHAVIOR:
Upon failure of any test case, Protractor should continue with the next test case execution.
I used "Protractor-Fail-Fast" Npm package to stop the rest test case execution if any test case fail. But ideally I am not looking for the same.
But this will not help me!
Just for reference: in Visual Studio MSTest, if I create an ordered test (analogous to a Protractor spec file with multiple test cases) and set the test setting "continue on failure", the ordered test execution continues even if some test case fails.
I am looking for a similar test setting or any solution for protractor.
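One config setting worth double-checking first (a sketch, assuming a Jasmine-based Protractor setup; stopSpecOnExpectationFailure is Jasmine's own flag, and false is its default) is the jasmineNodeOpts block in conf.js, since setting it to true makes a spec abort at its first failed expectation:

// conf.js
exports.config = {
  // ...
  jasmineNodeOpts: {
    // when true, Jasmine stops the current spec at its first failed
    // expectation; keep it false so the remaining checks still run
    stopSpecOnExpectationFailure: false
  }
}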
If you don't want to stop the whole test run, just stop using the protractor-fail-fast library. Protractor tests run to the end by default, even if some of the tests fail.
Set ignoreUncaughtExceptions: true in the config file as follows:
/**
* If set, Protractor will ignore uncaught exceptions instead of exiting
* without an error code. The exceptions will still be logged as warnings.
*/
ignoreUncaughtExceptions?: boolean;
You can find the above description here.
exports.config = {
  // ...
  ignoreUncaughtExceptions: true
}
I am facing a different issue when I run my test cases.
I have around 50 test cases.
During the test run it suddenly calls @AfterTest and completes the run. In the end it ran only 10 test cases, and every run executes a different number of test cases (like 4, 10, or 15, but never all of them).
Questions:
@AfterTest is supposed to be called after all the @Test methods have executed.
Is there some process behind the scenes that can terminate the @Test methods' execution and call the @AfterTest method? How do I debug this kind of failure?
Could someone please help me with this?
I am using Appium to run the test cases. Whenever Appium throws an uncaught exception, execution breaks out of the test block and runs the @AfterTest block without executing all the test cases.
Logs from Appium:
2018-06-12 00:56:21:720 - error: uncaughtException: write EPIPE date=Tue Jun 12 2018 00:56:21 GMT+0530 (IST), pid=68705, uid=503, gid=20, cwd=/Users/../Desktop/Automation/CodeBase/MyProject, execPath=/usr/local/bin/node, version=v8.9.4, argv=[/usr/local/bin/node, /usr/local/bin/appium, -a, 127.0.0.1, -p, 4721, -cp, 5721, -bp, 6721, --chromedriver-port, 7721, --no-reset, --log-level, debug, --local-timezone, --log, /Users/../target/AppiumLogs/appiumLogs_201.log], rss=194183168, heapTotal=137883648, heapUsed=129103560, external=213116, loadavg=[2.45849609375, 2.568359375, 2.4462890625], uptime=476242, trace=[column=11, file=util.js, function=_errnoException, line=1022, method=null, native=false, column=14, file=net.js, function=WriteWrap.afterWrite, line=867, method=afterWrite, native=false], stack=[Error: write EPIPE, at _errnoException (util.js:1022:11), at WriteWrap.afterWrite (net.js:867:14)]
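To narrow down which test method is running when control jumps to @AfterTest, one option (a sketch - LifecycleLogger is an illustrative name, and TestListenerAdapter is TestNG's base listener with no-op implementations) is to register a listener that logs every lifecycle event:

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// The last START line printed before @AfterTest fires points at the
// method that triggered the early exit.
public class LifecycleLogger extends TestListenerAdapter {
    @Override
    public void onTestStart(ITestResult result) {
        System.out.println("START   " + result.getMethod().getMethodName());
    }

    @Override
    public void onTestFailure(ITestResult result) {
        System.out.println("FAILURE " + result.getMethod().getMethodName());
        result.getThrowable().printStackTrace();
    }

    @Override
    public void onTestSkipped(ITestResult result) {
        System.out.println("SKIPPED " + result.getMethod().getMethodName());
    }
}

Register it with @Listeners(LifecycleLogger.class) on the test class, or in testng.xml.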
Summary: After adding logic to save user account data, my code seems to work fine and sometimes all my (many) tests pass. But sometimes they fail seemingly randomly, with /tmp test files not being deleted during testing.
In my hand-rolled Ruby/Sinatra "to do list" program, I added user accounts and can now save data to user files (.yml format) as well as tmp files for people who aren't logged in. Yay!
As far as I can tell, the code works fine. All tests pass...but only sometimes. Sometimes, the tests related to my new file processing methods fail. Here's a sample:
# Running:
....EF..........................
Finished in 3.930466s, 8.1415 runs/s, 53.1744 assertions/s.
1) Error:
ToDoTest#test_post_newtask:
Errno::EACCES: Permission denied # unlink_internal - tmp/1.yml
C:/Users/user/Dropbox/_Programming/Ruby/learning_projects/todo/test/test_todo.rb:404:in `delete'
C:/Users/user/Dropbox/_Programming/Ruby/learning_projects/todo/test/test_todo.rb:404:in `block (2 levels) in teardown'
C:/Users/user/Dropbox/_Programming/Ruby/learning_projects/todo/test/test_todo.rb:404:in `each'
C:/Users/user/Dropbox/_Programming/Ruby/learning_projects/todo/test/test_todo.rb:404:in `block in teardown'
2) Failure:
ToDoTest#test_get_deleted [C:/Users/user/Dropbox/_Programming/Ruby/learning_projects/todo/test/test_todo.rb:167]:
Expected false to be truthy.
32 runs, 209 assertions, 1 failures, 1 errors, 0 skips
rake aborted!
Command failed with status (1): [ruby -I"lib" -I"C:/Ruby23/lib/ruby/gems/2.3.0/gems/rake-10.4.2/lib" "C:/Ruby23/lib/ruby/gems/2.3.0/gems/rake-10.4.2/lib/rake/rake_test_loader.rb" "test/test_task.rb" "test/test_task_store.rb" "test/test_todo.rb" "test/test_todo_helpers.rb" "test/test_users.rb" ]
Tasks: TOP => default => test
(See full trace by running task with --trace)
This is only a sample, because sometimes many more tests fail or have errors. It's weirdly random. I noticed that my tests, which result in a lot of /tmp files being made and deleted very rapidly, sometimes failed to delete some files, and as a result some would be left behind. If I reran my tests while there were undeleted files in /tmp, there would be even more (again, random) errors.
One common error I saw, which I never saw before adding the new file processing commands, is this one: Errno::EACCES: Permission denied # unlink_internal. I looked this up on SO but there seems to be only (irrelevant-seeming) Rails stuff. This is a Sinatra program running on Windows. So could I replicate the tests in my Ubuntu VM? Yes I could. Precisely the same sort of error pattern.
Anyway, I suspected that system commands were not finishing before execution continued. But apparently not. I tried putting "sleep 2" after all my system commands, and I still got a random failing test and cruft left in /tmp. I also tried using threads, which I have never used before, like this:
delr = Thread.new do
  File.delete(@store.path) # seems to help to add this here...
end
delr.join
But that didn't help.
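On Windows especially, Errno::EACCES on unlink usually means some process (often your own) still has the file open. A different tactic worth sketching (the tmp/*.yml glob and the retry count are illustrative assumptions, not my actual code) is to retry the delete briefly in teardown rather than deleting in a background thread:

def teardown
  Dir.glob('tmp/*.yml').each do |path|
    tries = 0
    begin
      File.delete(path)
    rescue Errno::EACCES
      tries += 1
      sleep 0.1
      retry if tries < 5 # give the OS a moment to release the handle
      raise
    end
  end
end

If the retries never succeed, the likelier culprit is an unclosed handle somewhere in the new file-processing code (for example, a File.open without a block that is never explicitly closed).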
One other thing... I'm teaching myself, and this is probably not the way it's supposed to be done, but all of my GET routes are preceded by a check of my session[:id] variable to see whether the user is logged in and whether the correct data file is loaded. I don't know if that's relevant, but it might be.
Any ideas on what the problem could be or how to fix it?