What is a dubious test in CasperJS?

When running a test I get:
FAIL 35 tests executed in 16.806s, 35 passed, 0 failed, 2 dubious, 0 skipped.
What does 'dubious' imply, and how can I see which assertion or test case is dubious?

Dubious tests occur when there is a mismatch between the number of tests (x) passed as an argument to the CasperJS test instance, casper.test.begin('sometest', x, function(){...}), and the number of tests actually present in the file.
In essence, the number of planned tests (x) should equal the number of executed tests.
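A minimal sketch of how this produces a dubious test (the URL and assertions are placeholders): the file below plans 3 tests but executes only 2, so CasperJS reports 1 dubious.
casper.test.begin('planned count mismatch', 3, function (test) {
    casper.start('http://example.com/', function () {
        test.assertHttpStatus(200);       // executed test 1
        test.assertTitleMatch(/Example/); // executed test 2
        // the planned third assertion was never written
    });
    casper.run(function () {
        test.done();
    });
});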

I believe that dubious tests are those that weren't run because of failed tests.
So if the test case tried to exit after a failed test, but there were still 2 tests that were meant to run after it, those 2 tests would be considered dubious.
As far as I know, there is no way to see which tests are dubious, because CasperJS just derives that number from how many tests passed or failed out of the specified number of planned tests.
You shouldn't count a dubious test as either a pass or a fail, because there is no way to know which way the test would have gone.

In your tests, change the X (see below) to the number of assertions you have inside the suite, and you will see no more dubious tests:
casper.test.begin('sometest',X,function(){...})
This worked for me.

The answer of @RoshanMJ is correct; however, each time we create new assertions we have to update the X number.
I just remove the X parameter from casper.test.begin('sometest', X, function(){...}), like this:
casper.test.begin('sometest', function(){...})
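A minimal sketch of that shape (placeholder URL and assertions):
casper.test.begin('sometest', function (test) {
    casper.start('http://example.com/', function () {
        test.assertHttpStatus(200);
        test.assertTitleMatch(/Example/);
    });
    casper.run(function () {
        test.done();
    });
});
The trade-off is that without a planned count, CasperJS can no longer warn you when an assertion is silently skipped.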

Related

How to run a list of multiple test cases in Cypress, and how to shorten the command string

Let's say I have 300 test cases and 100 of them are failing; I now want to run those 100 test cases again. (Note: I have already rerun the Cypress tests with the appropriate retry option, and even used it to find flaky test cases.)
I now have a list of the 100 failing test cases in a notepad or Excel sheet.
Is there any mechanism to run these test cases in Cypress?
If I go with
cypress run --spec="cypress/integration/one.spec.ts,cypress/integration/two.spec.ts"
those 100 test cases will produce a big string, which looks like
cypress run --spec="cypress/integration/one.spec.ts,cypress/integration/two.spec.ts, ..... hundred.spec.ts"
This leaves the command as a huge text that is complex to maintain. So is there any way to run only the list of failing test cases, at whatever time I want, after fixing the application code or data?
Any suggestions will be helpful.
More info:
I was looking for a way to run multiple test cases referenced in one text file or dictionary.
For example, if I run all 100 test cases and 20 of them fail, I would maintain the failing file names and paths in that file or dictionary.
I then want Cypress to take this file and run all the failing test case references, thereby running only the specific test cases that are failing.
(Note: I am aware of retries being available for the execution.)
npx cypress run --spec "cypress/e2e/folderName" on the command line runs all the specs in that folder.
describe.only can run just a specific suite:
describe.only("2nd describe", () => {
  it("Should check xx", () => { });
  it("Should check yy", () => { });
});
To run a specific test case, use it.only("Should check yy", () => { });
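One way to avoid the huge --spec string is to keep the failing spec paths in a plain text file and build the argument at run time. A minimal Node sketch (the names failed-specs.txt and run-failed.js are hypothetical; the text file holds one spec path per line):
const { execSync } = require('child_process');
const fs = require('fs');

// read the maintained list of failing specs, one path per line
const specs = fs.readFileSync('failed-specs.txt', 'utf8')
  .split('\n')
  .map((line) => line.trim())
  .filter(Boolean)
  .join(',');

// hand the comma-separated list to Cypress as a single --spec argument
execSync(`npx cypress run --spec "${specs}"`, { stdio: 'inherit' });
Run it with node run-failed.js after updating the text file; the command line stays short no matter how many specs are listed.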

How to match the number of tests run in the Maven log with the source code?

I am trying to link the number of tests run reported in the log file at https://api.travis-ci.org/v3/job/29350712/log.txt for Facebook's Presto project with the actual tests in the source code.
The source code for this build run is located at: https://github.com/prestodb/presto/tree/bd6f5ff8076c3fdb2424406cb5a10f506c7bfbeb/presto-raptor/src/test/java/com/facebook/presto/raptor
I am counting the number of places where '@Test' occurs in the source code; this should equal the number of tests run in the log file.
In most cases it works, but for some, like the subproject 'presto-raptor', the log reports 329 tests run while I find @Test only 27 times in the source code.
I notice that some tests are preceded by @Test(singleThreaded = true).
Here is an example:
https://github.com/prestodb/presto/blob/bd6f5ff8076c3fdb2424406cb5a10f506c7bfbeb/presto-raptor/src/test/java/com/facebook/presto/raptor/metadata/TestRaptorSplitManager.java
@Test(singleThreaded = true)
public class TestRaptorSplitManager
{
I expected the number of tests run to match the log file, but it seems the tests are run in parallel (multi-threaded).
My question is: how do I match the 329 tests run with the real test cases in the source code?
TestNG counts the number of tests as follows (apart from the regular way of counting tests):
Data-driven tests are counted as new tests. So if you have a @Test that is powered by a data provider, and the data provider supplies 5 sets of data, then to TestNG 5 tests were run.
Tests with multiple invocation counts are also counted as individual tests. For example, if you have @Test(invocationCount = 5), then TestNG reports this test as 5 tests, which is what the Maven console shows as well.
So I'm not sure how you could build a matching capability that cross-checks this against the source code, especially when your tests involve a data provider.
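As an illustration (a standalone TestNG sketch, not taken from the Presto sources), each of the following methods carries a single @Test annotation yet is reported as 5 tests run:
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class CountingExample
{
    @DataProvider(name = "rows")
    public Object[][] rows()
    {
        return new Object[][] {{1}, {2}, {3}, {4}, {5}};
    }

    // one @Test annotation, but 5 reported tests: one per data set
    @Test(dataProvider = "rows")
    public void dataDriven(int value)
    {
    }

    // one @Test annotation, but 5 reported tests: one per invocation
    @Test(invocationCount = 5)
    public void repeated()
    {
    }
}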

Test continues to run in the Cypress runner after it fails, does not time out

I have the following test in Cypress:
cy.visit('/main.html')
cy.get('#homeTitle').contains('Welcome')
This passes.
If I change the contains value to "Welcome2", the test should fail, and it does fail in the runner, but the displayed timer continues to run and it will not proceed to the next test.
It seems like it doesn't time out.
Are you trying to test that the element with id #homeTitle contains the text 'Welcome'? If so, try this:
cy.get('#homeTitle').should('contain', 'Welcome');
I don't quite understand your issue, but I will look harder.
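For reference, a minimal sketch of that test wrapped in a spec; should('contain', ...) is a retryable assertion, so Cypress retries it until it passes or the command timeout (4 seconds by default) elapses and the test fails:
describe('home page', () => {
  it('shows the welcome title', () => {
    cy.visit('/main.html');
    cy.get('#homeTitle').should('contain', 'Welcome');
  });
});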

How to get total, passed, and failed spec counts in Jasmine 2 and Protractor

I want to generate an emailable report after my test suite completes, using Jasmine and Protractor.
How can I get the following information after my test suite is complete?
1. Total number of specs
2. Total passed specs
3. Total failed specs
4. Total pending specs
I could not find any proper solution so far. Please help me out.
Using the spec-reporter writes the results of your test run to the console, but if you really need something to email, you could also use jasmine2-html-reporter, which generates an HTML page with the results.
With that file, you can write a function to send the file, or whatever you need, via email.
CONSOLE: Jasmine spec reporter
HTML: Jasmine html reporter
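If you only need the four counts themselves, a custom Jasmine 2 reporter is enough; a minimal sketch to place, for example, in Protractor's onPrepare:
var counts = { total: 0, passed: 0, failed: 0, pending: 0 };

jasmine.getEnv().addReporter({
  // called once per spec with its final status
  specDone: function (result) {
    counts.total++;
    if (result.status === 'passed') { counts.passed++; }
    else if (result.status === 'failed') { counts.failed++; }
    else if (result.status === 'pending') { counts.pending++; }
  },
  // called once when the whole suite has finished
  jasmineDone: function () {
    console.log(counts); // feed these numbers into your emailable report
  }
});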

MSTest log file shows invalid counts for results of orderedtest

I have tried many ordered tests, and the .trx file always shows the wrong count.
For instance, if I have an ordered test with 2 tests, the results look like this in the .trx file (results summary node):
<Counters total="3" executed="3" passed="3" error="0" failed="0" timeout="0" aborted="0" inconclusive="0" passedButRunAborted="0" notRunnable="0" notExecuted="0" disconnected="0" warning="0" completed="0" inProgress="0" pending="0"/>
But there are only 2 tests! If I have 29 tests, it says 30 total, and so on.
Any ideas?
I will place my money on the ordered test itself also being counted by MSTest as a test that is run. This is because of the way it is structured:
Run the ordered test (test number 1), which starts processing the inner tests in sequence and recursively re-uses the standard mechanism for running any test.
Run the first test in the ordered test (test number 2).
Run the second test in the ordered test (test number 3).
So it always adds the parent ordered-test container as a regular test being performed. This also means that if you run an ordered test (with two inner tests) from within another ordered test, your count would be 4, while only 2 tests are actually functionally relevant and tested.
Personally, what I find more disturbing is that if not all tests in an ordered test are 100% successful (warnings, inconclusive), the ordered test always fails! Completely! Uncontrollably!
But that was an off-topic frustration :-)
