I have a project with many modules, and I'd like to know how long each test takes to execute.
Is there any parameter that can output that information? I've searched online and found nothing.
I could run all the test cases in the IDE, which logs the execution times, and copy the times to a file, but I don't want to do that every time I need a log of the test times.
Check the target/surefire-reports directory in each module. It contains a report for each test class that runs, and the reports include the execution times.
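If you want all of those times gathered in one place rather than opening each report by hand, the plain-text reports can be scraped with a shell one-liner along these lines (a sketch; the exact layout of the reports and the wording of the summary line can vary with the Surefire version):

    # Collect per-test-class execution times from every module's Surefire reports.
    grep -h "Time elapsed" */target/surefire-reports/*.txt

    # For deeply nested multi-module builds, locate the reports first.
    find . -path "*/target/surefire-reports/*.txt" -exec grep -H "Time elapsed" {} +

The XML reports in the same directory also carry a time attribute per test method, which is handier if you want to post-process the numbers.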
I have a Gradle build step in a TeamCity configuration which runs build test.
The tests log a lot of text: more than a gigabyte.
Is it possible to filter the test output out of the general build log but still have it on TeamCity's Tests tab when I click on a test?
The fact that the tests are outputting 1 GB+ of text is concerning. Ideally, the test runner should be the only thing producing output (i.e. pass/fail indications and error messages); the tests themselves should not be printing anything. Consider suppressing or capturing stdout from the tests across the board.
Since the question doesn't specify the language or test framework, I can't say exactly what changes you need to make, but most languages and test frameworks have their own idiomatic way to do this.
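For example, if this happens to be a JVM project built with Gradle (as the build step suggests), the Test task's testLogging block controls what the test task echoes into the build log. A minimal sketch, assuming the bulk of the gigabyte is test stdout/stderr being forwarded to the console:

    test {
        testLogging {
            // Do not echo System.out / System.err from the tests into the build log.
            showStandardStreams = false
            // Only report failing tests on the console, with short stack traces.
            events "failed"
            exceptionFormat "short"
        }
    }

Whether TeamCity still shows the captured output on a test's detail page depends on how the results are reported to it, so treat this as a starting point rather than a complete answer.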
I have managed to get Cypress to report results into TestRail using the cypress-testrail-reporter. However, when I execute a test run (at the minute only manually, using npx cypress run), the results of each test are logged as separate test runs. They currently show up like this:
Screenshot from my test runs page on TestRail
I want the results from each time I execute my test suite to appear as 1 run rather than 5 different runs as shown in the screenshot.
I've bumped into the very same issue, so I've started to use the cypress-testrail-accumulative-reporter, which fixes it.
This reporter is not perfect, by the way: from time to time it does not publish the last spec executed, so I've created an empty spec file that runs at the end. Not a clean workaround, but it works.
https://www.npmjs.com/package/cypress-testrail-accumulative-reporter
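For reference, it is registered like any other Mocha-style reporter in the Cypress configuration. The reporterOptions keys below are illustrative placeholders only; check the package README for the exact names it expects:

    // cypress.config.js (or the equivalent fields in cypress.json on older versions)
    module.exports = {
      reporter: "cypress-testrail-accumulative-reporter",
      reporterOptions: {
        // Illustrative TestRail connection fields; verify against the README.
        domain: "yourcompany.testrail.io",
        username: "user@example.com",
        password: "api-key",
        projectId: 1,
        suiteId: 1,
      },
    };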
I am trying to run multiple feature files through a Maven goal (command line), but after two feature files that run successfully, the remaining feature files (the third one onwards) fail some test cases which, when run independently, all pass.
So if I run each feature file individually I get proper results, but running them all together gives wrong results.
We are using the Serenity framework with Cucumber JVM. Please help: how can we resolve this issue?
Your failing tests fail to fully set up their context; some state is leaking from the previous ones. Look for what has changed during the first runs (database, mocks, whatever state) that has to be reset before running the third and following feature files.
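A hedged sketch of the kind of hook that usually fixes this with Cucumber JVM: reset whatever is shared (database, mocks, static caches) before every scenario. The helper classes are placeholders for whatever your suite actually shares, and depending on your Cucumber version the import may be cucumber.api.java.Before instead:

    import io.cucumber.java.Before;

    public class ResetStateHooks {

        // Runs before every scenario, ahead of other hooks (order = 0),
        // so no feature file inherits state from the previous one.
        @Before(order = 0)
        public void resetSharedState() {
            TestDatabase.reset();   // hypothetical helper: reload fixtures / truncate tables
            MockServer.reset();     // hypothetical helper: clear stubbed responses
        }
    }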
I have a big test suite written in TestNG that takes several hours to complete.
I'd like Maven/Surefire to create a new suite that is a copy of the first but contains just the failed tests, a suite that should be much faster to run.
Is it possible to create such a suite?
As a fallback, I could create it on my own from a test report, if there is one that is easy to parse.
Thank you.
On completion of a run, TestNG generates a testng-failed.xml file (in the same output folder as your reports), which is basically your initial suite file with the same listeners, but whose tests section contains only the failed test cases.
In case you are using Jenkins, you might consider adding a post-build step that triggers another build which works on the same workspace as the current build and uses this failed XML. Or, depending on how you are triggering your tests, you might write a script that runs the failed XML.
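If you run the tests straight from Maven rather than Jenkins, you can also point Surefire at that file for the rerun; a sketch, assuming the default report location:

    mvn test -Dsurefire.suiteXmlFiles=target/surefire-reports/testng-failed.xml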
I have two unit test projects in my VS 2010 solution. Each project has a Data directory with input data needed for the unit tests.
I annotated the test classes that need the data with
[DeploymentItem("Data")]
When I run the tests individually, they run fine. However, when I run (or debug) all tests in the solution, I find that only one of the two Data directories is copied to TestResults\MyTestDir-YYYY-MM-DD HH_mm_SS\Out, causing the unit tests that rely on the other Data directory to fail.
Interestingly if I then use the Test Results window to re-run checked (as in failed) tests, the tests still fail (they do not recognize that the correct Data directory's files are missing). If I then navigate directly to a failed test and select Run Tests in Current Context, the test run gets the correct Data directory and the test succeeds.
My question: How can I cause DeploymentItems from two separate test projects all to be copied to the Out directory where the tests are executed?
For reference, I have reviewed these questions without noting a solution:
Problems with DeploymentItem attribute
Do MSTest deployment items only work when present in the project test settings file?
I found that giving each "Data" directory a unique name (e.g. "TestProjectAData") resolved the issue. There seems to be some sort of bug when the data directories of different test projects all share the same relative name (i.e. when every test project has a subdirectory called "Data").
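In other words, each project's DeploymentItem should reference a folder name that no other test project uses. A minimal sketch (class, folder, and file names are illustrative):

    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // In TestProjectA: deploy a uniquely named data folder.
    [TestClass]
    [DeploymentItem("TestProjectAData")]
    public class ProjectATests
    {
        [TestMethod]
        public void ReadsProjectAData()
        {
            // The path is relative to the deployment (Out) directory.
            var text = File.ReadAllText(Path.Combine("TestProjectAData", "input.txt"));
            Assert.IsFalse(string.IsNullOrEmpty(text));
        }
    }

    // In TestProjectB: same pattern with its own uniquely named folder.
    [TestClass]
    [DeploymentItem("TestProjectBData")]
    public class ProjectBTests
    {
        [TestMethod]
        public void ReadsProjectBData()
        {
            var text = File.ReadAllText(Path.Combine("TestProjectBData", "input.txt"));
            Assert.IsFalse(string.IsNullOrEmpty(text));
        }
    }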