I'm doing unit testing with Mocha and Chai, and I want to see the reports in a browser. Using the mochawesome reporter I'm able to generate a test report in a mochawesome-report folder, where I can see mochawesome.json and mochawesome.html. But I'm unable to open the mochawesome.html file in a browser. Kindly help.
Is there any other module for viewing the report in a browser, or how can I see the test results that are displayed in the console in a browser?
I also faced a similar issue with Cypress. When I debugged, I found that the last suite to run had 0 test cases, i.e. no "it" blocks (each "it" block is a test case).
So, please check:
That the last suite to run actually has test cases.
That each suite's report is not overwriting the previous one. In cypress.json we have the options:
"reporter": "mochawesome",
"reporterOptions": {
"charts": false,
"html": false,
"json": true,
"reportDir": "cypress/reports",
"reportFilename": "report",
"overwrite": true
}
By default, the overwrite option is set to true.
Change it to false, and you will get a separate report for each of the test suites.
Later you can combine all the suite reports into a single report.
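For that combining step, the mochawesome-merge and marge (mochawesome-report-generator) CLIs are commonly used. A sketch, assuming your per-suite JSON files land in cypress/reports as in the config above:

```shell
# Merge all per-suite JSON reports into one file
npx mochawesome-merge cypress/reports/*.json > cypress/reports/merged.json
# Generate a single HTML report from the merged JSON
npx marge cypress/reports/merged.json --reportDir cypress/reports
```

Both packages need to be installed as dev dependencies first; check their docs for the exact options your versions support.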
Keep/generate the mochawesome.html file in the same folder as the "assets" folder; the assets folder contains the supporting files for the HTML.
I have a test in my tests folder which I want to skip during normal testing but run when checking coverage. Is this possible? I noticed, looking at other tests generated by Laravel Jetstream, that I can conditionally skip a test if a feature is disabled:
if (! Features::hasApiFeatures()) {
    return $this->markTestSkipped('API support is not enabled.');
}
I have not found any way to check whether coverage is being collected or not.
We run our Protractor regression tests in GitLab CI and we have Jasmine HTML reports. Right now only the QA team monitors them and checks failures, if any.
But we would like to make them more visible. The devs have asked if we can surface the reports in a single place instead of having to go to the GitLab job and browse the artifacts. Also, would it be possible to have an overview of passed/failed tests over time?
I'm not sure how and where to start. Any pointers would be appreciated.
You're looking for the expose_as keyword for artifacts. The full docs are here: https://docs.gitlab.com/ee/ci/yaml/#artifactsexpose_as.
If you use expose_as with your artifacts, GitLab CI will link them to the relevant merge request under the name you give in this field.
For example (from the docs):
test:
  script: ["echo 'test' > file.txt"]
  artifacts:
    expose_as: 'artifact 1'
    paths: ['file.txt']
In this example, a Merge Request for this pipeline will have a link called "artifact 1" that opens the file "file.txt".
This also works for directories, but if there's more than one file it will open in the job's artifacts browser (like you currently do).
There are some caveats:
- If you use a variable in the artifacts path field, expose_as won't work.
- A maximum of 10 artifacts can be exposed.
- Glob patterns won't work.
- If GitLab Pages is enabled, some file extensions (html, xml, txt, etc.) will be automatically rendered using Pages.
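Putting this together for the Jasmine HTML reports described above, a minimal job sketch might look like the following. The job name, script command, and reports/ path are assumptions about your setup; the artifacts:reports:junit part only helps if your reporter also emits JUnit XML, but it is what gives GitLab its pass/fail test trend over time.

```yaml
e2e:
  script:
    - npm run protractor-tests
  artifacts:
    when: always                 # keep the report even when tests fail
    expose_as: 'jasmine report'
    paths: ['reports/']
    reports:
      junit: reports/junit/*.xml # assumed path; requires a JUnit XML reporter
```

With this, each merge request links directly to the report, and the pipeline's Tests tab accumulates the JUnit results.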
Since the latest versions of WebdriverIO with browserstack-service 6.4.7, I am facing some issues with the session name on BrowserStack: the continuous integration sends a name (with a unique job ID) to BrowserStack as the "Session Name", but during the test it changes...
(I can see that the name is the right one at the beginning of the test on BrowserStack.)
It's very difficult for me to find my way around the tests, as they all end up with the same name, which is the suite or feature name.
Have you encountered this kind of problem?
Thank you very much for any help!
I believe the session name is being picked up and set from within the framework itself. Have you had the chance to output a few variables to the console from this file, https://github.com/itszero/wdio-browserstack-service/blob/47786feacef79c674e79d812cddb99cb87b2a267/lib/browserstack-service.js#L55 and verify the session name being set?
I am using the WebdriverIO version 7 with Mocha framework and BrowserStack.
As per https://webdriver.io/docs/browserstack-service
Add the following to your configuration file:
services: [
  ['browserstack', {
    browserstackLocal: false, // Set this flag as per your requirement
  }],
],
This will automatically set the session's name to the test suite's name.
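If the service keeps renaming the session, one workaround is BrowserStack's "browserstack_executor" protocol, which lets you set the session name yourself from a test hook. A sketch for wdio.conf.js; the hook choice and name format here are my assumptions, not something the service docs prescribe:

```javascript
// wdio.conf.js (fragment) — rename the BrowserStack session after each test.
// "browserstack_executor" is BrowserStack's own protocol; this only works
// against BrowserStack sessions, and the name format is just an example.
exports.config = {
  // ...your existing config...
  afterTest: async function (test) {
    const name = `${process.env.CI_JOB_ID || 'local'} - ${test.title}`;
    await browser.execute(
      `browserstack_executor: {"action": "setSessionName", "arguments": {"name": "${name}"}}`
    );
  },
};
```

Because this runs after the service sets its own name, your name is what ends up on the BrowserStack dashboard.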
Is there a way to prevent karma-jasmine-html-reporter aka kjhtml from reporting skipped/pending tests?
I run some tests using fit and fdescribe and I want to only see results for the selected tests, however, the reporter is always displaying all tests from the suite.
Apparently, yes, there's a way to do that with Jasmine (starting from v3.3.0). I've been able to do it in an Angular project. In test.ts I've put something like:
jasmine.getEnv().configure({ hideDisabled: true });
Official documentation here: https://jasmine.github.io/api/3.5/Configuration.html.
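If you'd rather keep this out of test.ts, karma-jasmine also forwards a jasmine block from the Karma client options to jasmine.getEnv().configure(). A karma.conf.js sketch, assuming a karma-jasmine version recent enough to support this pass-through:

```javascript
// karma.conf.js (fragment)
module.exports = function (config) {
  config.set({
    client: {
      jasmine: {
        hideDisabled: true, // don't report specs excluded by fdescribe/fit
      },
    },
  });
};
```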
I am trying to figure out how to run unit tests with Google Test and send the results to TeamCity.
I have run my tests and output the results to an XML file, using the command-line argument --gtest_output="xml:test_results.xml".
I am trying to get this XML read by TeamCity. I don't see how I can get XML reports passed to TeamCity during the build/run...
Except through XML report Processing:
I added XML Report Processing, added Google Test, then...
it asks me to specify monitoring rules, and I added the path to the xml file... I don't understand what monitoring rules are, or how to create them...
[Still, I can see nowhere in the generated xml, the fact that it intends to talk to TeamCity...]
In the log, I have:
Google Test report watcher
[13:06:03][Google Test report watcher] No reports found for paths:
[13:06:03][Google Test report watcher] C:\path\test_results.xml
[13:06:03]Publishing internal artifacts
And, of course, no report results.
Can anyone please direct me to the proper way to import the XML test results file into TeamCity? Thank you so much!
Edit: is it possible that XML Report Processing only processes reports created during the build (which Google Test doesn't do), and ignores the previously generated reports as "out of date", while simply saying that it can't find them? Or are they in the wrong format, or... however I should read the message above?
I found a bug report that shows that xml reports that are not generated during the build are ignored, making a newbie like me believe that they may not be generated correctly.
Two simple solutions:
1) Create a post-build script, or
2) Add a build step that calls the command-line executable with the --gtest_output argument, and then add the "XML report processing" build feature.
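As an alternative to the build feature, TeamCity also understands service messages printed to the build log. A sketch of a command-line build step that runs the tests and then tells TeamCity to pick up the XML explicitly; the path is an assumption and must match your --gtest_output value:

```shell
# Run the tests first (your executable), then emit a TeamCity service message.
# TeamCity parses any line of the form ##teamcity[...] in the build log.
echo "##teamcity[importData type='gtest' path='test_results.xml']"
```

Because the message is emitted during the build, this sidesteps the "out of date" problem with pre-existing reports.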
I had similar problems getting it to work. This is how I got it working.
When you call your google test executable from the command line, prepend %teamcity.build.checkoutDir% to the name of your xml file to set the path to it like this:
--gtest_output=xml:%teamcity.build.checkoutDir%test_results.xml
Then when configuring your additional build features on the build steps page, add this line to your monitoring rules:
%teamcity.build.checkoutDir%test_results.xml
Now the paths match and are in the build directory.