My group will be implementing CI using Jenkins, so I want to make sure that any unit and/or integration tests we create integrate easily with Jenkins. Our stack spans several technologies, from C++ code to Oracle PL/SQL packages to Groovy code. We want to develop test drivers (code that wraps and tests these individual code units) that we can integrate with Jenkins, so that these tests run automatically on each git commit as well as on a nightly basis. My question is: what are the best practices for writing these test drivers so that they will integrate easily with Jenkins when we implement it?
For example, we have a PL/SQL stored procedure that we want to run tests against as part of our CI testing. I could write a bash shell script that wraps calls to it, or I could write a Java program that calls it. Basically, I could wrap it in anything. The next question then is: is there some sort of standard for outputting results so that Jenkins can easily determine whether the test passed or failed?
...is there some sort of standard for outputting results so that Jenkins can easily determine if the test passed or failed?
If your test results are compliant with the JUnit format, Jenkins has a JUnit plugin that gives you a good way of tracing test reports (result trend graphs) as well as test result archiving. Converting an Ant test log to JUnit format is one of the easier routes.
Useful links:
http://nose2.readthedocs.org/en/latest/plugins/junitxml.html
https://wiki.jenkins-ci.org/display/JENKINS/JUnit+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/xUnit+Plugin
Jenkins and JUnit
Basically I could wrap it in anything.
Among your choices, I would personally go with Java, because it gives you better APIs for creating the XML files.
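Whichever wrapper language you choose, the XML you need to produce is small. As an illustration only, here is a minimal sketch in Python (standard library only; the suite and test names are invented) of a result file the JUnit plugin can parse:

import xml.etree.ElementTree as ET

# One suite with one passing and one failing test case.
suite = ET.Element("testsuite", name="plsql-tests", tests="2", failures="1")
ET.SubElement(suite, "testcase", classname="plsql.MyPackage", name="test_happy_path")
failing = ET.SubElement(suite, "testcase", classname="plsql.MyPackage", name="test_edge_case")
failure = ET.SubElement(failing, "failure", message="expected 42, got 41")
failure.text = "details of the failure go here"
ET.ElementTree(suite).write("result.xml", xml_declaration=True, encoding="utf-8")

The same testsuite/testcase/failure structure applies whatever language generates it; point the JUnit plugin at the resulting file.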
Use Python's unittest to wrap any of your tests, and produce JUnit XML test results.
One easy way of getting any unittest-based script to write out JUnit XML is from the command line. First, install pytest:
yum install pytest
And call your test script like this:
py.test --junitxml result.xml testscript.py
Then, in your Jenkins build configuration, under Post-build Actions, add a "Publish JUnit test result report" action pointing at result.xml and any other test result files you produce.
https://docs.python.org/2.7/library/unittest.html
This is just one way of producing JUnit XML results with Python. There are a good few other methods, using the unittest module itself, junitxml, or other libraries.
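As a sketch of what testscript.py might look like for the PL/SQL case from the original question: the sqlplus invocation, the credentials, and the convention that check_proc.sql prints PASS or FAIL are all assumptions for illustration, not a standard.

import subprocess
import unittest

class PlsqlProcedureTest(unittest.TestCase):
    # Wraps a stored procedure by shelling out to sqlplus.

    def test_procedure_reports_pass(self):
        # check_proc.sql is a hypothetical script that calls the
        # procedure under test and prints PASS or FAIL.
        output = subprocess.check_output(
            ["sqlplus", "-S", "scott/tiger@testdb", "@check_proc.sql"]
        ).decode("utf-8")
        self.assertIn("PASS", output)

if __name__ == "__main__":
    unittest.main()

Run through py.test as shown above, each failing assertion becomes a <failure> entry in result.xml, which the JUnit plugin then reports.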
Related
I have a sample project in which I use Maven, TestNG, and Cucumber, and I run my tests using a test runner class.
I have created a feature file with two scenarios, and both scenarios are failing. I have two test runner classes with different feature paths -- 1. the feature path points to all the features, 2. it points only to the failed scenarios.
When I try to rerun the failed scenarios, only one scenario runs.
1 -> features = "src/test/java/com/ag/features" -- has all the features
2 -> features = "@target/rerun.txt" -- has references to both failed scenarios
Please advise how to make all failed scenarios execute.
You just need to tell Cucumber to run the feature/scenario lines listed in your rerun file.
cucumber @rerun.txt should be enough (although in Java you may need a little more; in a Cucumber-JVM runner the rerun file is referenced the same way, features = "@target/rerun.txt").
I'm writing xUnit test cases for a .NET Core application which uses DocumentDB (Cosmos DB) as storage. The unit tests are written to execute against the local Cosmos DB emulator. In the Azure DevOps build environment, I've set up the Azure Cosmos DB CI/CD task, which internally creates a container to install the emulator. However, I'm not able to figure out how the emulator's endpoint can be passed to the xUnit fixture.
Is there any way an xUnit fixture can read the .runsettings test parameters, or can the parameters be passed in via some other source?
Update
Currently I have implemented this using an environment variable, but I'm still not happy about defining the connection string as an environment variable via PowerShell in a build task and reading it in code during unit test execution. I was wondering whether there is another way of achieving it.
The build tasks are currently configured as a workaround (screenshot omitted), with a PowerShell step exporting the endpoint, and the code reads the value as:
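// Reads the endpoint that the preceding PowerShell build task exported as an environment variable.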
var serviceEndpoint = Environment.GetEnvironmentVariable("CosmosDbEmulatorEndpointEnvironmentVariable");
Since the unit test task provides the option to pass a .runsettings/.testsettings file and to override the test run parameters, I was thinking something could be achieved using those options.
This is not supported in xUnit.
See the SO answers here and here, and this GitHub issue, indicating that it is not something that will be supported in xUnit.
Short and simple. When I pass the tags inside my pom file like below:
<tags><tag>#Smoke</tag></tags>
It works correctly. It runs each of my scenarios that have the smoke tag independently and at the same time.
However when I pass it as a maven property like below:
-Dcucumber.options="--tags #Smoke"
It fires up the correct number of runners; however, it runs each scenario x times, where x is the number of scenarios with the tag. So if I have 3 scenarios with the tag, it will run each test 3 times.
I'm hoping to duplicate the behavior of the first run by using Maven properties so that I can run this from Jenkins a bit more easily. Am I passing the Cucumber options incorrectly?
Found the answer after consulting with some of the developers of the library. The tags need to be passed as:
-Dcucumber.tags="#Smoke"
Cucumber supports the way I was passing them, but this library expects them like this.
Thanks
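For reference, a full invocation from a Jenkins shell step would then look something like this (the goal and tag here are only examples):
mvn clean test -Dcucumber.tags="#Smoke"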
I have spent the whole morning trying to set up e2e test reporting via SonarQube's Generic Test Data -> Generic Execution feature.
I created a custom xml report that gets added to the scan properties like this:
sonar.testExecutionReportPaths=**/e2e-report.xml
So far, SonarQube seems to completely ignore this property, and I see no attempt to parse the file in the logs. Has anyone made this work?
These are links by Sonar about the Generic Execution feature:
https://docs.sonarqube.org/display/SONAR/Generic+Test+Data
https://github.com/SonarSource/sonarqube/blob/master/sonar-scanner-engine/src/main/java/org/sonar/scanner/genericcoverage/GenericTestExecutionSensor.java
This is a SonarQube 6.2+ feature. Make sure to use an appropriate SonarQube version.
In addition, sonar.testExecutionReportPaths does not allow matchers (like *).
Provide relative or absolute paths, comma-separated.
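For example (the report paths here are made up; point them at wherever your reports are actually written):
sonar.testExecutionReportPaths=reports/e2e-report.xml,reports/component-report.xml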
See also:
The official documentation of the Generic Test Data feature
The source code, that looks up the generic execution files
I have almost 50 freestyle Jenkins builds that run as many Performance Center tests every morning and evening. Currently I get the status of these individual runs by email, but I would like to consolidate the results into one email. The problem is that the Jenkins build result is always a pass whenever it is able to run the PC tests; to find the actual result I need to look at the artifact that contains the HTML result. Is there a way I can read these individual HTML outputs and group them into one report, like:
        Dev    Test   Prod
Test1   pass   fail   pass
Test2   fail   pass   pass
Test3   pass   pass   pass
I have little programming or scripting experience, so please forgive me for not working more of this out on my own.
You may use the Build Result Trigger plugin to monitor as many jobs as required. For consolidating the HTML results, the HTML Agility Pack tutorial is one starting point.
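If you would rather script the consolidation, here is a rough sketch in Python; the directory layout (artifacts/<test>/<env>/result.html), the environment names, and the "FAILED" marker are all assumptions about what your HTML artifacts contain:

import glob
import os

# Assumed layout: artifacts/<test_name>/<env>/result.html
ENVS = ["dev", "test", "prod"]

def status(path):
    # Crude check: treat the test as failed if the HTML mentions FAILED.
    if not os.path.exists(path):
        return "n/a"
    with open(path, encoding="utf-8", errors="ignore") as f:
        return "fail" if "FAILED" in f.read() else "pass"

print("{:<10}{:<8}{:<8}{:<8}".format("", *ENVS))
for test_dir in sorted(glob.glob("artifacts/*")):
    name = os.path.basename(test_dir)
    results = [status(os.path.join(test_dir, env, "result.html")) for env in ENVS]
    print("{:<10}{:<8}{:<8}{:<8}".format(name, *results))

A final Jenkins job could run this after the other jobs finish and send its output as one email, for example via the Email Extension plugin.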