Sometimes, for whatever reason, one of the tests in Cypress will get stuck and continue to run indefinitely until I manually stop the test suite. It completely ignores any timeouts already configured in Cypress.
What is the best way to safeguard against this and automatically fail just that one test, without halting the entire test suite?
I am now using JMeter to run stress tests against APIs. When I stop a test manually by pressing the stop button, JMeter returns a "socket closed" exception.
Is there any way to prevent this error from showing up in my reports?
Thanks very much.
Don't use the GUI for running the test; it is only for test development and debugging. You're supposed to run your JMeter test using command-line non-GUI mode.
Regardless of how you run it, forcefully stopping the test will result in non-graceful connection termination. Instead, set a fixed number of iterations and/or a duration in the Thread Group and wait for the test to complete.
If you really need to stop the test manually, you can use the shutdown.cmd script (it lives in the "bin" folder of your JMeter installation).
And finally, you can remove "unwanted" results, i.e. those from the last X seconds of the test, using the Filter Results Tool.
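Putting the first three suggestions together as a rough command-line sketch (plan.jmx and results.jtl are placeholder file names, not anything from the question):

jmeter -n -t plan.jmx -l results.jtl    # non-GUI run; let the Thread Group's iterations/duration end the test
./shutdown.sh                           # or shutdown.cmd on Windows (in JMeter's "bin" folder), only if you must stop it early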
I have a test suite where, in one of the specs, the 5th test case depends on the 3rd test case. When the spec is run locally via the Cypress runner, I do not see any issue with the running order.
But when it runs in CI, the 5th test fails randomly (I have verified there are no script errors). Upon analysis I noticed that certain data records created in the 3rd test are not returned for the 5th test, hence the failure.
Is there a way to order tests within a spec in Cypress?
Unfortunately it is not currently possible in Cypress to run tests within the same spec file in a specific order.
Cypress is basically just scheduling events to happen, with no additional control beyond that, so there is no way to guarantee tests will run in a specific order.
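One common way to live with that, sketched below under the assumption that the data from the 3rd test can be recreated cheaply (the createRecord() helper, URLs, and selectors are hypothetical, not from the question), is to have each test set up the state it needs instead of relying on an earlier test:

// spec sketch, not the asker's actual code
describe('records report', () => {
  // hypothetical helper that seeds the data the assertions depend on
  const createRecord = (name) => cy.request('POST', '/api/records', { name });

  it('creates a record', () => {
    createRecord('first');
    cy.visit('/records');
    cy.contains('first').should('be.visible');
  });

  it('shows records in the report', () => {
    // recreate the data here instead of depending on the previous test having run
    createRecord('second');
    cy.visit('/report');
    cy.contains('second').should('be.visible');
  });
});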
I have a set of Mocha test scripts (= files), located in /test together with mocha.opts. It seems Mocha is running all test files in parallel; is this correct? This could be a problem if the same test data are used in different test scripts.
How can I ensure that each file is executed separately?
It seems Mocha is running all test files in parallel; is this correct?
No.
By default, Mocha loads the test files sequentially, records all the tests that must be run, and then runs the tests one by one, again sequentially. Mocha will not run two tests at the same time, no matter whether the tests are in the same file or in different files. Note that whether your tests are asynchronous or synchronous makes no difference: when Mocha starts an asynchronous test, it waits for it to complete before moving on to the next test.
There are tools that patch Mocha to run tests in parallel. So you may see demonstrations showing Mocha tests running in parallel, but this requires additional tools and is not part of Mocha proper.
If you are seeing behavior that suggests tests are running in parallel, that's a bug in your code, or perhaps you are misinterpreting the results you are getting. Regarding bugs, it is possible to make mistakes and write code that indicates to Mocha that your test is over when in fact there are still asynchronous operations running. However, this is a bug in the test code, not a feature whereby Mocha runs tests in parallel.
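A small sketch of the sequential behaviour described above, with made-up file names: even though the first test is asynchronous, the test in the second file never starts until it has finished.

// test/a.spec.js
const assert = require('assert');

describe('file A', () => {
  it('asynchronous test in file A', async () => {
    console.log('A start');
    await new Promise((resolve) => setTimeout(resolve, 100));
    console.log('A end');
    assert.ok(true);
  });
});

// test/b.spec.js
describe('file B', () => {
  it('test in file B', () => {
    // always logged after "A end"; Mocha waits for the async test above to finish first
    console.log('B');
  });
});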
Be careful when assigning environment variables outside of Mocha hooks, since those assignments are done in all files before any test executes (i.e. before any "before*" or "it" hook runs).
Hence the value assigned to the environment variable in the first file will be overwritten by the assignment in the second one, before any Mocha test hook executes.
E.g. if you assign process.env.PORT = 5000 in the test1.js file and process.env.PORT = 6000 in test2.js, both outside of any Mocha hook, then by the time the tests from test1.js start executing, the value of process.env.PORT will be 6000 and not 5000 as you might expect.
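A minimal sketch of that pitfall and one way around it (file names and port values follow the example above):

// test1.js
process.env.PORT = '5000';           // runs when the file is loaded, before any test in any file

describe('suite 1', () => {
  before(() => {
    process.env.PORT = '5000';       // assigning inside a hook runs right before these tests instead
  });

  it('uses the expected port', () => {
    console.log(process.env.PORT);   // '5000' thanks to the before() hook; it would be '6000'
  });                                // if only the top-level assignments existed
});

// test2.js
process.env.PORT = '6000';           // also runs at load time, overwriting the value from test1.js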
Suppose a test fails due to an error from some other library, or some service fails; Nightwatch does not seem to handle such situations and does not fail gracefully, leaving behind a hanging Selenium session. Also, the XML report is not generated.
This is not reported in TeamCity as a failure.
Is there a config item I can set, or something I can do myself, to take care of such exceptions?
In our build there are certain scenarios that fail for reasons which are out of our control or take too long to debug properly, things such as asynchronous JavaScript, etc.
Anyway, the point is that sometimes they work and sometimes they don't, so I was thinking it would be nice to add a tag to a scenario, such as #rerun_on_failure or #retry, which would retry the scenario X number of times before failing the build.
I understand this is not an ideal solution, but the test is still valuable and we would like to keep it without the false negatives.
The actual test that fails clicks on a link and expects a tracking event to be sent to a server for analytics (via JavaScript). Sometimes the Selenium WebDriver loads the next page too fast and the event does not have time to be sent.
Thanks
More recent versions of Cucumber have a retry flag:
cucumber --retry 2
This will retry a failing test up to two times.
I've been considering writing something like what you're describing, but I found this:
http://web.archive.org/web/20160713013212/http://blog.crowdint.com/2011/08/22/auto-retry-failed-cucumber-tests.html
If you're tired of having to re-kick builds in your CI server because of non-deterministic failures, this post is for you.
In a nutshell: he makes a new rake task called cucumber:rerun that uses rerun.txt to retry failed tests. It should be pretty easy to add some looping in there to retry at most 3x (for example).
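The core idea behind that rake task can be sketched with the plain Cucumber CLI: the rerun formatter writes the locations of failed scenarios to a file, and a second run can be fed just that file (rerun.txt is only a conventional name):

cucumber --format rerun --out rerun.txt || cucumber @rerun.txt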
For Cucumber + Java on Maven I found this command:
mvn clean test -Dsurefire.rerunFailingTestsCount=2
You must have a recent version of the Surefire plugin; mine is 3.0.0-M5.
Nothing else special is needed.