Currently at my internship, I have my JaCoCo Maven plugin set up so that after the unit and integration test reports are generated, they are merged into one.
Now, adding CircleCI into the mix, we run parallel jobs for our tests. These jobs cause only the longest-running test's .exec file to be read as the basis for the report, meaning all other tests are ignored and our reported coverage is lowered.
I thought the solution would be to give the .exec files unique names and then merge them all using a wildcard within my POM, but there seems to be no documentation on this.
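A minimal sketch of what such a merge execution could look like in the POM, using the plugin's merge goal, assuming each parallel job writes its uniquely named .exec file into a shared coverage directory (the directory, execution ID, and destination file here are illustrative):

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <!-- illustrative execution ID -->
      <id>merge-coverage</id>
      <phase>verify</phase>
      <goals>
        <goal>merge</goal>
      </goals>
      <configuration>
        <fileSets>
          <fileSet>
            <!-- assumed drop location for the per-job .exec files -->
            <directory>${project.build.directory}/coverage</directory>
            <includes>
              <include>*.exec</include>
            </includes>
          </fileSet>
        </fileSets>
        <destFile>${project.build.directory}/merged.exec</destFile>
      </configuration>
    </execution>
  </executions>
</plugin>

The report goal can then be pointed at the merged file through its dataFile parameter.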
I am creating a rather custom task that processes a number of input files and outputs a different number of output files.
I want to check the dates of the input files against the existing output files, and I might also look at the content of the input files, to determine whether the task is up to date or needs to run. What properties do I need to set, and where and when (in a doFirst, in the main action, or elsewhere), so that Gradle's dependency checker and task executor see the right state and dependents are either rebuilt or not, as appropriate?
Also, is there any documentation on standard library utilities for things like checking file dates and getting lists of files, of the kind that Ruby's Rake makes easy?
How do I specify the inputs and outputs of the task? Especially as the outputs will not be known until the source is parsed and the output directory is scanned for what already exists.
A sample that does this in a larger project, with tasks that depend on it, would be really nice :)
What properties do I need to set, and where and when (in a doFirst, in the main action, or elsewhere), so that Gradle's dependency checker and task executor see the right state and dependents are either rebuilt or not, as appropriate?
Ideally this should be done as a custom task type. None of this logic should be in any of the Gradle build files at all. Either have the logic in a dedicated plugin project that gets published somewhere, which you can then reference in the project, or have the logic in buildSrc.
What you are trying to develop is what is known as an incremental task: https://docs.gradle.org/current/userguide/custom_tasks.html#incremental_tasks
These are used heavily throughout Gradle itself, which is what makes Gradle's incremental builds possible: https://docs.gradle.org/current/userguide/more_about_tasks.html#sec:up_to_date_checks
How do I specify the inputs and outputs of the task? Especially as the outputs will not be known until the source is parsed and the output directory is scanned for what already exists.
Once you have your tasks defined and whatever else you need, in your main Gradle files you would configure them as you would any other plugin or task.
The two links above should be enough to help get you started.
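As a minimal sketch of such an incremental task (class, property, and path names here are hypothetical), a custom task type in buildSrc that declares its inputs and outputs so Gradle can perform the up-to-date checking could look like this:

// buildSrc/src/main/groovy/GenerateOutputs.groovy
import org.gradle.api.DefaultTask
import org.gradle.api.file.ConfigurableFileCollection
import org.gradle.api.file.DirectoryProperty
import org.gradle.api.tasks.*

abstract class GenerateOutputs extends DefaultTask {

    // Declared inputs: Gradle fingerprints these files (dates and content)
    // to decide whether the task needs to run again.
    @InputFiles
    abstract ConfigurableFileCollection getSources()

    // Declaring the whole directory as the output avoids having to know the
    // individual output file names before the sources are parsed.
    @OutputDirectory
    abstract DirectoryProperty getOutputDir()

    @TaskAction
    void generate() {
        sources.each { src ->
            // Placeholder transformation: real parsing logic goes here.
            def target = outputDir.file(src.name + '.out').get().asFile
            target.text = src.text
        }
    }
}

// build.gradle -- wiring it up; tasks that consume outputDir are then
// re-run by Gradle only when the generated files actually change.
tasks.register('generateOutputs', GenerateOutputs) {
    sources.from(fileTree('src/inputs'))
    outputDir = layout.buildDirectory.dir('generated')
}

Because the inputs and outputs are declared through annotated properties, there is nothing to set in a doFirst: Gradle snapshots them before and after execution and marks the task UP-TO-DATE on its own.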
As for a small example, I developed a Gradle plugin that generates files based on input that is not known until it is configured. The 'custom' task type just extends the provided JavaExec. The custom task is Wsdl2java. Then, based on user configuration, tasks get registered using the input file from the user. Since I reused built-in task types, I know for sure that no extra work will be done and can rely on Gradle doing the heavy lifting. There's also a test to ensure that the configuration cache works as expected: ConfigurationCacheFunctionalTests.
As I mentioned earlier, the links above should be enough to get you started.
JaCoCo just outputs jacoco.exec, which is the input for Sonar. In that file, there seems to be only the following info:
- Class name
- Total Class Probes
- Executed Class Probes
But SonarQube cannot rely solely on these values, as it needs to tell you exactly which lines are uncovered, so Sonar must be performing some analysis of its own. So how does it use the JaCoCo report? And why does it need it?
So how does it use the JaCoCo report? And why does it need it?
SonarQube by itself doesn't and can't know anything about which tests you actually executed and how they cover your code. To obtain this information it relies on third-party test coverage tools. In the case of Java, it relies on data collected and provided by JaCoCo, as explained in the answer to a similar question from you (JaCoCo collects execution information in the exec file, and obtains line numbers and other information from class files during generation of the report). Alternatively, SonarQube can rely on data in a "generic format".
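For illustration, the wiring on the SonarQube side is typically just a property pointing at the JaCoCo output; with recent SonarQube versions this is the XML report rather than the raw exec file (the path shown is the Maven default, adjust to your build):

sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml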
We are actively using GoCD. We get JUnit, Jasmine, and other results; however, the build artifacts are always published by GoCD and then picked up by other agents to perform automated deployment.
We wish to set percentage markers for JUnit, Jasmine, etc., and if the observed value is less than the marker, we want GoCD to not publish the artifacts.
Any ideas?
Ideally, after report creation another task kicks in that verifies the report results.
It could be, e.g., a grep command inside a shell script looking for the words fail or error in the XML report files. As soon as the task finishes with a return code not equal to 0, GoCD considers the task to have failed.
The same applies to the percentage marker: a task is needed that calculates the percentage and then returns an appropriate code, 0 when the percentage goal is met or exceeded and non-zero when it has not been met. This could also be implemented as a custom task, such as a shell script evaluating the reports, as sketched below.
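A rough sketch of such a script, combining both checks (the report location and the counter attributes are assumptions about your JUnit/Jasmine XML layout):

#!/usr/bin/env bash

REPORT_DIR="test-reports"   # assumed location of the XML reports
MIN_PASS_PCT=90             # the percentage marker for this pipeline

# Simple gate: fail as soon as any report records a failure or an error.
if grep -rqE 'failures="[1-9]|errors="[1-9]' "$REPORT_DIR"; then
    echo "Failures or errors found in reports" >&2
    exit 1
fi

# Percentage gate: sum the tests/failures/errors counters from the report headers.
total=$(grep -rhoE 'tests="[0-9]+"' "$REPORT_DIR" | grep -oE '[0-9]+' | awk '{s+=$1} END {print s+0}')
failed=$(grep -rhoE '(failures|errors)="[0-9]+"' "$REPORT_DIR" | grep -oE '[0-9]+' | awk '{s+=$1} END {print s+0}')

if [ "$total" -eq 0 ]; then
    echo "No test counters found in $REPORT_DIR" >&2
    exit 1
fi

pct=$(( (total - failed) * 100 / total ))
if [ "$pct" -lt "$MIN_PASS_PCT" ]; then
    echo "Pass rate ${pct}% is below the ${MIN_PASS_PCT}% marker" >&2
    exit 1
fi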
The pipeline itself can be configured to not publish any artifacts in case the task fails or errors.
Short and simple. When I pass the tags inside my POM file like below:
<tags><tag>#Smoke</tag></tags>
It works correctly. It runs each of my scenarios that have the smoke tag independently and at the same time.
However when I pass it as a maven property like below:
-Dcucumber.options="--tags #Smoke"
It fires up the correct number of runners; however, it runs each scenario x times, where x is the number of scenarios with the tag. So if I have 3 scenarios with the tag, it will run each test 3 times.
I'm hoping to duplicate the behavior of the first run by using Maven properties so that I can run this from Jenkins a bit more easily. Am I passing the Cucumber options incorrectly?
Found the answer after consulting with some of the developers of the library. The tags need to be passed as:
-Dcucumber.tags="#Smoke"
Cucumber itself supports the way I was passing them, but this library expects them like this.
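So from Jenkins the invocation is just, e.g. (assuming the runners are bound to the verify phase):

mvn clean verify -Dcucumber.tags="#Smoke"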
Thanks
Is it possible to have a Maven/Jenkins build fail due to JMeter tests failing to achieve a specified throughput?
I'm tinkering with the jmeter-maven-plugin in order to do some basic performance tests as part of a continuous integration process in Jenkins.
One can set a Duration Assertion on a Sampler in the JMeter test to mark the sample as failed if a response time is over a certain threshold. What I'd like is to be able to fail a build (in Maven, or in the Jenkins job that triggers the Maven build) based on the total throughput for a test.
Is this possible with any existing plugins?
Yes, it's possible. You can use the Jenkins Text Finder plugin together with JMeter's "aggregate report". With the aggregate report you can write a CSV or XML file. You can search this file for your throughput value with the Jenkins Text Finder plugin and then mark the build as failed or unstable. Alternatively, you can use a Bash script that searches the generated JMeter report file and returns a non-zero value when the throughput goal is missed. This will make your build fail.
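For instance, a small script along these lines could gate the build on throughput (the CSV path and the column holding the throughput value are assumptions; check your own aggregate report to confirm them):

#!/usr/bin/env bash

THRESHOLD=100                                  # required requests/second
REPORT="target/jmeter/results/aggregate.csv"   # assumed report location

# Pull the throughput figure from the TOTAL row; column 11 is an assumption.
actual=$(awk -F',' '$1 == "TOTAL" {print $11}' "$REPORT")

# Compare as floating point; an empty value also fails the build.
if awk -v a="$actual" -v t="$THRESHOLD" 'BEGIN { exit !(a < t) }'; then
    echo "Throughput ${actual}/s is below the required ${THRESHOLD}/s" >&2
    exit 1
fi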