I have tried many ordered tests, and the .trx file always shows the wrong count.
For instance, if I have an ordered test with 2 tests, the results look like this in the .trx file (in the results summary node):
<Counters total="3" executed="3" passed="3" error="0" failed="0" timeout="0" aborted="0" inconclusive="0" passedButRunAborted="0" notRunnable="0" notExecuted="0" disconnected="0" warning="0" completed="0" inProgress="0" pending="0"/>
But there are only 2 tests! If I have 29 tests, it says 30 total, and so on.
Any ideas?
I'd place my money on the ordered test itself also being counted by MSTest as a test that is run. This is because of the way execution is structured:
Run the ordered test itself (test number 1); it starts processing the inner tests in sequence, recursively re-using the standard mechanism for running any test.
Run the first test in the ordered test (test number 2).
Run the second test in the ordered test (test number 3).
So MSTest always adds the parent ordered test container as a regular test being performed. This also means that if you run an ordered test (with two inner tests) from within another ordered test, your count would be 4, while only 2 tests are actually functionally relevant and tested.
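One way to check this against your own results file: load the .trx and count the result elements grouped by element name; the ordered-test container should show up as the extra node alongside the inner tests. A quick diagnostic sketch in PowerShell (results.trx is a hypothetical path, and the exact element names vary by schema version, so check your own file):

[xml]$trx = Get-Content results.trx
# Group the child nodes of <Results> by element name to see which
# node type accounts for the extra count in the summary.
$trx.TestRun.Results.ChildNodes | Group-Object Name | ForEach-Object {
    "{0}: {1}" -f $_.Name, $_.Count
}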
Personally, what I find more disturbing is that if the tests in an ordered test are not all 100% successful (warnings, inconclusive results), the ordered test always fails! Completely! Uncontrollably!
But that was an off-topic frustration :-)
Related
Let's say I have 300 test cases and 100 of them are failing; now I want to run those 100 test cases again. (Note: I have already rerun the Cypress test cases with the appropriate retry option, which also reruns test cases to detect flaky ones.)
I now have a list of the 100 failing test cases in a notepad file or an Excel sheet.
Is there any mechanism to run these test cases in Cypress?
If I go with
cypress run --spec "cypress/integration/one.spec.ts,cypress/integration/two.spec.ts"
then 100 test cases make for a very long string, and it looks like
cypress run --spec "cypress/integration/one.spec.ts,cypress/integration/two.spec.ts, ..... hundred.spec.ts"
This leaves the command as a huge piece of text that is complex to maintain. So is there any way to run only the list of failing test cases, whenever I want to, after fixing the application code or data?
Any suggestions would be helpful.
More info
I am looking for a way to run multiple test cases referenced in one text file or dictionary.
For example, if I run all 100 test cases and 20 of them fail, I would maintain the names and paths of the failing spec files in a file or dictionary,
and I then want Cypress to take this file and run all the test case references in it, thereby running only those specific failing test cases.
(Note: I am aware that retries can be configured for the execution.)
npx cypress run --spec "cypress/e2e/folderName" in cmd runs all the specs in that Cypress folder.
describe.only("2nd describe", () => { it("Should check xx",() =>{ }); it("Should check yy", () => { }); }); it can only run the specific suit. Run specific test case use it.only("Should check yy", () => { });
I am trying to link the number of tests run reported in the log file at https://api.travis-ci.org/v3/job/29350712/log.txt, for Facebook's presto project, with the real tests in the source code.
The source code for this run of the build is located at the following link: https://github.com/prestodb/presto/tree/bd6f5ff8076c3fdb2424406cb5a10f506c7bfbeb/presto-raptor/src/test/java/com/facebook/presto/raptor
I am counting the number of places where '@Test' occurs in the source code; this should then match the number of tests run in the log file.
In most cases it works, but for some, like the subproject 'presto-raptor', the log reports 329 tests run while I find only 27 occurrences of @Test in the source code.
I notice that some tests are preceded by @Test(singleThreaded = true).
This is an example in the following link:
https://github.com/prestodb/presto/blob/bd6f5ff8076c3fdb2424406cb5a10f506c7bfbeb/presto-raptor/src/test/java/com/facebook/presto/raptor/metadata/TestRaptorSplitManager.java
@Test(singleThreaded = true)
public class TestRaptorSplitManager
{
I expected the number of tests run to match the count in the log file, but it seems the tests are being run in parallel (multi-threaded).
My question is: how do I match the reported 329 tests run with the real test cases in the source code?
TestNG counts the number of tests based on the following (apart from the regular way of counting tests):
Data-driven tests are counted as new tests. So if you have a @Test that is powered by a data provider (and let's say the data provider yields 5 sets of data), then to TestNG there were 5 tests run.
Tests with multiple invocation counts are also counted as individual tests (so, for example, if you have @Test(invocationCount = 5), then TestNG reports this test as 5 tests, which is what the Maven console shows as well).
So I am not sure how you could build a matching capability that cross-checks this against the source code (especially when your tests involve a data provider).
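To illustrate the data-provider case: in the following sketch (class and method names are hypothetical) the source contains a single @Test annotation, yet TestNG reports 5 tests run.

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataDrivenExample {
    // Supplies 5 rows of data, so the single @Test method below is
    // executed 5 times and counted as 5 tests in the report.
    @DataProvider(name = "inputs")
    public Object[][] inputs() {
        return new Object[][] {{1}, {2}, {3}, {4}, {5}};
    }

    // One @Test in the source, five tests in the report.
    @Test(dataProvider = "inputs")
    public void checkValue(int value) {
        Assert.assertTrue(value > 0);
    }
}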
I want to run a smoke test ("does the page exist with the form I want?") before running the other tests in that spec. If that test fails, there is no point running the other, form-filling tests.
I thought I could structure it as
describe "form page" do
  it "smoke test" do
    # check form exists on page
  end

  describe "detailed form filling" do
    it "detailed" do
      detailed_stuff
    end
  end
end
expecting that the inner describe would NOT run if the outer example failed, but I am wrong: both run.
Is there a way to have a high-level RSpec test determine whether or not to run the lower-level detailed ones below it?
RSpec should still continue to run other test files, so --fail-fast does not seem suitable, as that exits the whole suite.
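One pattern that can get close to this (a sketch, not the only way; it assumes examples run in RSpec's default defined order, and form_present? is a hypothetical helper standing in for the real check): record the smoke result in a variable captured by the blocks, and skip the inner group when it is false.

describe "form page" do
  smoke_passed = false # captured by the example blocks below

  it "smoke test" do
    # form_present? is a hypothetical stand-in for the real page check
    expect(form_present?).to be true
    smoke_passed = true # only reached if the expectation passed
  end

  describe "detailed form filling" do
    # Skip (rather than fail) the detailed examples when the smoke test
    # did not pass; the rest of the suite still runs.
    before { skip "smoke test failed" unless smoke_passed }

    it "detailed" do
      detailed_stuff
    end
  end
end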
When running a test I am getting:
FAIL 35 tests executed in 16.806s, 35 passed, 0 failed, 2 dubious, 0 skipped.
What does 'dubious' imply, and how can I see which assertion or test case is dubious?
Dubious tests occur when there is a mismatch between the number of tests (x) passed as an argument to the CasperJS test instance, casper.test.begin('sometest', x, function(){...}), and the number of tests actually present in the file.
In essence, the number of planned tests (x) should equal the number of executed tests.
I believe that the dubious tests are those that aren't run because of failed tests.
So if the test case tried to exit after a failed test, but there were still 2 tests that were meant to be run after it, those 2 tests would be considered dubious.
As far as I know, there is no way to see which tests are dubious, because CasperJS only uses the number of passed/failed tests out of the specified number of tests to compute that figure.
You shouldn't count a dubious test as either a pass or a fail, because there is no way to know which way the test would have gone.
In your tests, change the X (see below) to the number of assertions you have inside, and then you will see no more dubious tests.
casper.test.begin('sometest', X, function(){...})
This worked for me.
@RoshanMJ's answer is correct; however, each time we add new assertions, we have to update the X number.
If I just remove the X parameter from casper.test.begin('sometest', X, function(){...}), it works, like this:
casper.test.begin('sometest', function(){...})
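For reference, a minimal sketch in which the planned count matches the assertions (the URL and selectors are only examples), so the summary reports no dubious tests:

// The planned count (3) matches the three assertions below.
casper.test.begin('example page', 3, function suite(test) {
    casper.start('https://example.com/', function() {
        test.assertHttpStatus(200);         // assertion 1
        test.assertTitle('Example Domain'); // assertion 2
        test.assertExists('h1');            // assertion 3
    });
    casper.run(function() {
        test.done();
    });
});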
I'm considering adding @high_priority and @low_priority tags to certain tests in our test suite in order to find out how many high-priority (risk) tests have failed. Ideally I'd like a column in Jenkins next to the test job which displays
1/100 high priority and 8/60 low priority tests failed.
Though I'd be happy with similar output in the console if necessary.
Currently, the Jenkins jobs run a command-line execution like:
cucumber --tags @AU_smoke ENVIRONMENT=beta --format html --out 'C:\git\testingworkspace\Reports\smoke_BETA_test_report.html' --format pretty
Edit:
Adding extra jobs isn't really a solution; we have a large number of jobs which run subsets of all the tests, so adding separate jobs for high and low priority would mean tripling the number of jobs we have.
I've settled on using the Description Setter plugin together with the Extra Columns plugin. This allows me to add the build description as a column in my views, and in my code I have:
After do |scenario|
  if scenario.status.to_s == "passed"
    $passed += 1
  elsif scenario.status.to_s == "failed"
    $failed += 1
    puts "FAILED!"
  elsif scenario.status.to_s == "undefined"
    $undefined += 1
  end
  $scenario_count += 1
  if scenario.failed?
    Dir::mkdir('screenshots') unless File.directory?('screenshots')
    screenshot = "./screenshots/FAILED_#{scenario.name.gsub(' ', '_').gsub(/[^0-9A-Za-z_]/, '')}.png"
    @browser.driver.save_screenshot(screenshot)
    puts "Screenshot created: #{screenshot}"
    embed screenshot, 'image/png'
    # @browser.close
  end
  # @browser.close
end
at_exit do
  end_time = Time.now
  elapsed_time = end_time.to_i - $start_time.to_i
  puts "\#description#scenarios total: #{$scenario_count}, passed: #{$passed}, failed: #{$failed}, known bug fails: #{$known_bug_failures}, undefined: #{$undefined}.#description#"
...
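To get the per-priority split asked for above, the same hooks can bucket the counts by tag. A sketch, assuming scenarios carry the proposed @high_priority / @low_priority tags and the counters are initialised to 0 elsewhere (older Cucumber versions expose the tag list as scenario.source_tag_names; check what your version provides):

After do |scenario|
  tags = scenario.source_tag_names
  if tags.include?('@high_priority')
    $high_total += 1
    $high_failed += 1 if scenario.failed?
  elsif tags.include?('@low_priority')
    $low_total += 1
    $low_failed += 1 if scenario.failed?
  end
end

at_exit do
  # Emit in the same #description# format the regex below picks up.
  puts "\#description##{$high_failed}/#{$high_total} high priority and #{$low_failed}/#{$low_total} low priority tests failed.#description#"
end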
Then in the description setter plugin I use the regex
/#description#(.+)#description#/
and use the first match group as the build description name. This also allows me to look at a job's build history and see at a glance how many tests there were and how many passed over the previous few weeks.