The question is quite straightforward. I'm following the TDD principle, so basically I write the controller test cases first, then the functions. One controller action might have multiple test cases; some pass, but the one I'm currently writing fails.
Usually I open http://domain.com/proj/test.php in a browser and check whether there are any failed test cases. My question is: how can I explicitly run only the failed test case? I want to ignore the passing test cases and focus only on the failing ones.
If there is no such feature in CakePHP 2.4 stable, how can I implement it? Please guide me.
Thanks
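One thing I've been exploring: CakePHP 2.x's test shell appears to accept a --filter option that it hands to PHPUnit, which would let me re-run just one test method from the command line. A sketch, assuming that option is forwarded in my setup (the controller and method names below are only illustrative):

```
# Run only the testAdd case of PostsControllerTest (names are illustrative)
Console/cake test app Controller/PostsController --filter testAdd
```

If that works, re-running just the failing method this way would skip the cases that already pass.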
I have a requirement to load test a web application using LoadRunner (Community Edition 12.53). Currently my test scripts are recorded using LoadRunner's default script recorder. I'm assuming that the operations I chose to perform in the SUT should actually update the application backend/DB when I execute the test scripts. Is this the correct behavior for a load testing scenario?
When I ran my test scripts, I couldn't see any values updated in the application DB.
My test scripts are written in C, and manual correlation is applied using the web_reg_save_param function.
What might be going wrong in such a scenario? Any help would be deeply appreciated.
"the operations I chose to perform in the SUT should actually update the application backend/DB when I'm executing the test scripts. Is this the correct behavior of a load testing scenario?" - Yes, this is the correct behaviour.
"When I ran my test scripts, I couldn't see any values updated in the application DB." - Something is probably missing in your correlations. This generally happens when some variable is not correlated properly or gets missed, or when something like a timestamp that you might think is irrelevant actually needs to be taken care of.
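As an illustration, a manual correlation with web_reg_save_param might look like the following sketch. The parameter name and the LB/RB boundary strings here are pure assumptions; the real boundaries must come from the server response in your own recording/replay log:

```c
// Sketch only: register the capture BEFORE the request whose response
// contains the dynamic value. Boundaries below are hypothetical.
web_reg_save_param("p_SessionToken",
    "LB=name=\"token\" value=\"",   // left boundary (assumed)
    "RB=\"",                        // right boundary (assumed)
    "Ord=1",
    "Search=Body",
    LAST);

web_url("login", "URL=http://{host}/login", LAST);  // response fills p_SessionToken

// Then replace the recorded literal value in later requests with the
// parameter, e.g. "Name=token", "Value={p_SessionToken}"
```

If any such dynamic value (session id, CSRF token, timestamp) is still sent as the recorded literal, the server will typically reject or silently ignore the transaction, which is consistent with nothing appearing in the DB.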
I am searching for a way to assign failed unit tests to resolvers. The way Sonar raises an issue at the class level whenever one or more unit tests fail does not fit my needs; I would like to assign a specific test to a specific developer.
Since Sonar can raise an issue for unit test failures and is able to determine which particular test case failed, I wonder whether there is a way to assign each failed test case to a different developer rather than the whole test class. And if it is possible, how can I do such a thing?
This is not possible at the moment. You can indeed only assign the issue that tells how many errors (or failures) you have on a specific file. Most of the time this works out of the box, as most teams try to avoid having people working on the same class at the same time. But it's true that this can happen.
In our build there are certain scenarios that fail for reasons which are out of our control or would take too long to debug properly - things such as asynchronous JavaScript, etc.
Anyway, the point is that sometimes they work and sometimes they don't, so I was thinking it would be nice to add a tag to a scenario such as #rerun_on_failure or #retry, which would retry the scenario X times before failing the build.
I understand this is not an ideal solution, but the test is still valuable and we would like to keep it without the false negatives.
The actual test that fails clicks on a link and expects a tracking event to be sent to a server for analytics (via JavaScript). Sometimes the Selenium WebDriver loads the next page too fast and the event does not have time to be sent.
Thanks
More recent versions of Cucumber have a retry flag:
cucumber --retry 2
This will retry a failing test up to two times.
I've been considering writing something like what you're describing, but I found this:
http://web.archive.org/web/20160713013212/http://blog.crowdint.com/2011/08/22/auto-retry-failed-cucumber-tests.html
If you're tired of having to re-kick builds in your CI server because of non deterministic failures, this post is for you.
In a nutshell: he makes a new rake task called cucumber:rerun that uses rerun.txt to retry failed tests. It should be pretty easy to add some looping in there to retry at most 3x (for example).
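The same idea can be sketched directly in shell instead of a rake task, assuming Cucumber's rerun formatter writes failed scenarios to rerun.txt (the file name and the retry count of 3 are arbitrary choices):

```
# First pass: record failures in rerun.txt via the rerun formatter
cucumber --format rerun --out rerun.txt || true

# Retry only the recorded failures, at most 3 times
for i in 1 2 3; do
  [ -s rerun.txt ] || break            # empty file => everything passed
  cucumber @rerun.txt --format rerun --out rerun.txt || true
done

# Fail the build if anything is still failing
[ ! -s rerun.txt ]
```

Each retry pass runs only the scenario:line entries recorded in rerun.txt, so passing scenarios are never re-executed.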
For Cucumber + Java on Maven, I found this command:
mvn clean test -Dsurefire.rerunFailingTestsCount=2
You must have a recent version of the Surefire plugin; mine is 3.0.0-M5. Nothing else special is needed.
I'm using TestComplete keyword tests for a lot of UI test cases. Quite a lot of them have the same steps.
Is there any macro functionality which can add multiple preset actions/checkpoints easily?
Sure, you can call another keyword test using the Run Keyword Test operation or a script function using the Run Script Routine operation. Both operations allow specifying parameters for a test. Also, you can use the Run Test operation to run any item that can be treated as a separate test (keyword or script test, network suite job or task, a load test).
Moreover, I think you will find TestComplete's Data-Driven Testing functionality useful; it allows running a test for every record in a specified data source. You can find more information on this feature in the Data-Driven Testing help topic. Videos demonstrating the data-driven approach can be found here and here.
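As an illustration, calling a shared keyword test from a script routine might look like the following JScript sketch; the keyword test name "CommonLoginSteps" and its parameter are hypothetical:

```javascript
// TestComplete JScript sketch: run a shared keyword test from script code.
// "CommonLoginSteps" and its parameter value are illustrative names only.
function RunSharedSteps()
{
  // Runs the keyword test, passing a value for its declared parameter
  KeywordTests.CommonLoginSteps.Run("alice");
}
```

Calling this routine (or the equivalent Run Keyword Test operation) from each UI test lets you maintain the repeated steps in one place.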
In our development environment, we run a Continuous Integration service (TeamCity) which responds to code checkins by running build/test jobs and reporting the results. While the job is in progress, we can easily see how many unit tests have executed so far, how many have failed, etc.
My automated testing team is delivering UI tests developed in Rational Functional Tester. Extracting those tests from the source control system, compiling them, and executing them from the command line all seem to be pretty straightforward exercises.
What I haven't been able to find is a way to report the test results automatically - there don't appear to be any hooks for listeners, for example, or any way to customize the messages that are emitted.
From my research thus far, I've come to the conclusion that my only option is to (a) wait until the tests finish, then (b) parse the HTML report that RFT generates.
Does anybody have a better answer than that?
Here is the workaround I've used for a similar purpose:
1. Write a helper superclass that overrides the onTerminate callback method, and implement your log-parsing logic there.
2. Change the helper superclass of your test scripts to the helper superclass created in step 1.
3. Use the RFT CLI to invoke your scripts in your Continuous Integration code.
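Step 1 might look roughly like this Java sketch; it assumes RFT's RationalTestScript base class and only compiles against the RFT libraries, and the class name and stub method are illustrative:

```java
// Sketch of an RFT helper superclass (class name is hypothetical).
import com.rational.test.ft.script.RationalTestScript;

public abstract class CiReportingScript extends RationalTestScript {

    // Invoked by RFT when the script finishes executing
    public void onTerminate() {
        super.onTerminate();
        reportResults();
    }

    // Stub: parse the generated log/HTML report and emit a
    // machine-readable summary for the CI server
    protected void reportResults() {
    }
}
```

Each test script then declares this class as its helper superclass, so the reporting hook runs automatically at the end of every test.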
Expanding on #eric2323223's answer: in your onTerminate override, you can use TeamCity's build script interaction functionality to have your RFT pass/fail status rolled up to TeamCity. You just need to emit these TeamCity-specific messages to the command line so that TeamCity picks them up:
##teamcity[testStarted name='test1']
##teamcity[testFailed name='test1' message='failure message' details='message and stack trace']
##teamcity[testFinished name='test1']
##teamcity[testStarted name='test2']
##teamcity[testFailed type='comparisonFailure' name='test2' message='failure message' details='message and stack trace' expected='expected value' actual='actual value']
##teamcity[testFinished name='test2']
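Note that values inside these messages must use TeamCity's escaping rules (|, ', [, ], and newlines are prefixed with |). A small self-contained Java sketch of formatting such a message, with a hypothetical helper class name:

```java
// Sketch: format TeamCity service messages with proper value escaping.
public class TeamCityMessages {

    // TeamCity requires these characters escaped with a leading '|'
    static String escape(String s) {
        return s.replace("|", "||")   // must be first so later escapes aren't doubled
                .replace("'", "|'")
                .replace("\n", "|n")
                .replace("\r", "|r")
                .replace("[", "|[")
                .replace("]", "|]");
    }

    static String testFailed(String name, String message) {
        return "##teamcity[testFailed name='" + escape(name)
                + "' message='" + escape(message) + "']";
    }

    public static void main(String[] args) {
        // prints: ##teamcity[testFailed name='test1' message='expected |[a|] but got |[b|]']
        System.out.println(testFailed("test1", "expected [a] but got [b]"));
    }
}
```

Printing these lines from the onTerminate hook is enough; TeamCity scans the build's standard output for them.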