I am working on a framework using WebdriverIO and Mocha. Recently I installed the Allure reporter to generate HTML reports through Jenkins.
I am facing a problem with skipped tests, though. I have a lot of tests that consist of only a title without any code and still need to be written.
In Mocha I add "it.skip" to skip these tests.
While the tests are skipped, the Allure report only recognizes one skipped test per file.
When running the code below, Allure reports 1 passed test, 1 failed test and 1 skipped test:
const chai = require('chai'); // assuming chai is not already exposed globally by the wdio setup

describe('Allure test', function () {
    it.skip('1. this is a skipped test without any code', function () {
    });
    it.skip('2. this is another skipped test without any code', function () {
    });
    it('3. this is an enabled test that has a successful assert', function () {
        chai.expect("foo", "foo should equal foo").to.contain("foo");
    });
    it('4. this is an enabled test that has a failed assert', function () {
        chai.expect("foo", "foo should equal foo").to.contain("bar");
    });
});
I would really like my Allure report to show how many tests are skipped, so I can see how much work is left.
The default Mocha logging handles this just fine; it shows this:
Number of specs: 1
1 passing (4.00s)
2 skipped
1 failing
I also use the wdio spec reporter, which shows it like this (also fine):
1 passing (2s)
2 pending
1 failing
I have tried implementing a categories.json file to manipulate the Allure categories, but I can't get anything to change.
I tried this as a test, but adding it to my allure-results folder changed nothing:
[
    {
        "name": "Ignored tests",
        "matchedStatuses": ["skipped", "Skipped", "pending", "Pending", "failed", "Failed", "broken", "Broken", "skip", "Skip", "failing", "Failing", "passes", "Passes"]
    }
]
The tools and versions I use are:
`-- wdio-mocha-framework#0.6.2
`-- wdio-allure-reporter#0.6.3
`-- webdriverio#4.13.1
Can anyone tell me how I can get Allure to see all skipped tests?
It is a bug. I've fixed it in https://github.com/webdriverio/wdio-allure-reporter/pull/127
Thanks for reporting this. In the future, if you run into such a bug, please file an issue on GitHub.
Related
Is there a way to include deselected tests in the pytest-generated HTML report, similar to skipped tests?
generated html file: file://///////reports/report.html -
================ 13 passed, 41 skipped, 6 deselected in 35.54s =================
I found a solution to have them printed in the terminal by adding this to conftest.py
def pytest_deselected(items):
    if not items:
        return
    config = items[0].session.config
    reporter = config.pluginmanager.getplugin("terminalreporter")
    reporter.ensure_newline()
    for item in items:
        reporter.line(f"deselected: {item.nodeid}", yellow=True, bold=True)
but I can't figure out how to get these tests listed in the report.
I ended up marking them as skipped instead of deselecting them; that way they are easily listed in the report.
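A minimal sketch of how that can look in conftest.py, assuming the tests to exclude are identified by a hypothetical wip marker (replace the condition with whatever rule was previously used for deselection):

import pytest

def pytest_collection_modifyitems(config, items):
    for item in items:
        # instead of deselecting, attach a skip marker so the test is
        # counted and listed as "skipped" in the HTML report
        if item.get_closest_marker("wip") is not None:  # hypothetical marker
            item.add_marker(pytest.mark.skip(reason="work in progress"))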
I just started using async/await in my Node.js code and noticed that my code coverage tool cannot handle it: I get "Fatal error: Unexpected token" for any line with async on it. I'm using Karma and Jasmine as my unit test framework, and grunt-jasmine-node-coverage for code coverage. I checked, and grunt-jasmine-node-coverage hasn't been updated in years. I looked for a more modern code coverage library but couldn't find any that had been updated in the past year. I'm fine with using just npm instead of Grunt to run my tasks (I know I'm way behind on that), but I couldn't find any code coverage frameworks recent enough for me to think that would make a difference.
Does anyone know of a code coverage framework for JS code that works with ES2018 syntax?
I used nyc (https://github.com/istanbuljs/nyc) with jasmine (https://jasmine.github.io/pages/docs_home.html) and it worked great. My package.json config was:
"scripts": {
"test":"jasmine",
"coverage": "nyc --reporter=lcov npm run test"
},
"nyc": {
"report-dir": "spec/coverage",
"exclude": [
"spec/**/*"
]
},
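With this in place, npm run coverage runs the Jasmine suite under nyc; the lcov reporter should write both an lcov.info file and an HTML report under spec/coverage (typically spec/coverage/lcov-report/index.html), though the exact layout may vary by nyc version.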
I am trying to use the rerunFailingTestsCount option to deal with flaky tests. In order to display these in the JUnit results, I use the Flaky Test Handler plugin, which should in theory take care of displaying flaky tests.
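(For reference, reruns like this are typically enabled through the Failsafe/Surefire rerunFailingTestsCount property, e.g. on the command line; the count of 2 below is just an example.)

mvn verify -Dfailsafe.rerunFailingTestsCount=2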
In my Jenkinsfile this looks like:
pipeline {
    stages {
        stage('tests') {
            steps {
                // sh mvn verify here
            }
            post {
                always {
                    junit testResults: 'target/failsafe-reports/**/*.xml',
                          testDataPublishers: [[$class: 'JUnitFlakyTestDataPublisher']]
                }
            }
        }
    }
}
The tests run fine and the flaky ones are rerun, but when it comes to publishing the JUnit results, I get an
Error when executing always post condition: java.lang.AbstractMethodError: you must override contributeTestData
Google wasn't very helpful. Maybe someone here has had the same problem and can help me, or can at least confirm that this plugin works from a pipeline script (there is a pull request regarding pipeline support, so I am not sure...).
I would like to display Spectron test results in TeamCity. I have followed the instructions on the WebdriverIO TeamCity Reporter page, which are:
npm install wdio-teamcity-reporter --save-dev
and creating a wdio.conf.js file:
exports.config = {
    reporters: ['teamcity'],
}
I have placed this file at the top of the project. It has no other entries; I've never needed it before.
I have also tried the additional configuration suggested at wdio-teamcity-reporter npm page.
This is the Jest object in package.json:
"jest": {
"moduleFileExtensions": [
"ts",
"tsx",
"js"
],
"transform": {
"\\.(ts|tsx)$": "<rootDir>/node_modules/ts-jest/preprocessor.js"
},
"roots": [
"<rootDir>/__tests__/",
"<rootDir>/components/"
],
"modulePaths": [
"<rootDir>/__tests__/",
"<rootDir>/components/"
],
"testMatch": [
"**/?(*.)(spec|test).(ts)?(x)"
]
}
And this is the relevant command (that TeamCity calls) in package.json:
"scripts": {
// ...
"test": "jest --maxWorkers=1 --forceExit",
// ...
},
This testing project is built with TypeScript and Jest, and only comprises the e2e Spectron tests for an Electron app. The build artifact for that app is a TeamCity dependency for my test 'build'. In my build, TeamCity installs the app, runs the Spectron tests (which are passing), and then uninstalls the app.
All I can see at the moment is the Jest console output within the build log. While there are some hidden artifacts, I see no normal artifacts. I was expecting the reporting package to have produced an HTML artifact. How do I go about displaying a test tab, or some other useful set of results?
It turns out that Jest can collect all the Webdriver results. Try using https://www.npmjs.com/package/jest-teamcity.
In jest.config.js use:
"testResultsProcessor": "jest-teamcity"
I've got a Maven project that contains (Selenium / Cucumber) test code, run through Maven. These tests create custom ExtentReports HTML reports, which are stored in target/extentreports. This HTML report is always created (unless the project does not compile, of course).
What I'd like to accomplish is that after test execution, the contents of this folder are archived into a zip file (and preferably moved to a different folder). This should happen even if the tests fail. So, essentially, I'm looking to add something that will run no matter whether the tests run through the 'test' goal pass or fail.
Currently, I'm running my tests through
mvn clean test
which runs the Cucumber / Selenium tests using JUnit.
I've tried a number of approaches and combinations of Maven plugins (surefire, failsafe), but I haven't found the right solution yet.
I'm running these tests through Jenkins, so any post-build Jenkins solution is fine too. I'm not sure whether installing Jenkins plugins is desirable though (we're not managing Jenkins ourselves), so solutions that do not require this would be preferred.
A standard approach is to:
Make sure the build doesn't fail because of the tests:
mvn test -Dmaven.test.failure.ignore=true
Add a post-build step "Publish JUnit test result report", which would mark the build as unstable (yellow) when tests fail.
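In a pipeline job the rough equivalent would be (a sketch; the report path assumes the default Surefire layout):

sh 'mvn clean test -Dmaven.test.failure.ignore=true'
junit 'target/surefire-reports/*.xml'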
Jenkins does allow for post operations
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
    post {
        always {
            echo 'I will always say Hello again!'
            // do your zip step here
        }
    }
}
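A sketch of what could go in place of the zip comment above, assuming the zip command is available on the agent and that target/extentreports is the folder to archive (the output path is just an example):

sh 'mkdir -p archived-reports && zip -r archived-reports/extentreports.zip target/extentreports'
archiveArtifacts artifacts: 'archived-reports/*.zip', allowEmptyArchive: true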
If you are not using the declarative syntax, then you can wrap your Cucumber tests in a try/catch;
see https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-catcherror-code-catch-error-and-set-build-result
node {
    sh './set-up.sh'
    try {
        sh 'might fail'
        echo 'Succeeded!'
    } catch (err) {
        echo "Failed: ${err}"
    } finally {
        sh './tear-down.sh'
    }
    echo 'Printed whether above succeeded or failed.'
}