In the Jenkins UI, how can I set whether a build should be considered unstable (yellow) or a failure (red)? This is for a Maven build running a Gatling performance test.
Use the Text Finder plugin: https://wiki.jenkins.io/plugins/servlet/mobile?contentId=753775#content/view/753775. You can search for a status or a string in the build log and mark the build as either a failure or unstable.
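For reference, this is roughly how that post-build step ends up in a freestyle job's config.xml. This is only a sketch: the element names mirror the plugin's UI options (regular expression, Unstable if found, Also search the console output) and may differ between plugin versions, and the regular expression is just a placeholder for whatever your Gatling run prints when an assertion fails:
<publishers>
  <hudson.plugins.textfinder.TextFinderPublisher>
    <!-- placeholder pattern: match whatever Gatling prints for a failed assertion -->
    <regexp>Global: .* KO</regexp>
    <!-- mark the build UNSTABLE (yellow) instead of FAILED (red) when the pattern is found -->
    <unstableIfFound>true</unstableIfFound>
    <succeedIfFound>false</succeedIfFound>
    <!-- search the console log rather than files in the workspace -->
    <alsoCheckConsoleOutput>true</alsoCheckConsoleOutput>
  </hudson.plugins.textfinder.TextFinderPublisher>
</publishers>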
How can I configure a Jenkins Multibranch Pipeline as Maven project?
For a Maven project in Jenkins I get the option:
Build Triggers
Build whenever a SNAPSHOT dependency is built
But for a Jenkins Multibranch Pipeline I don't get that option.
How can I get that option for a Jenkins Multibranch Pipeline?
I think it is not possible, but I can share a workaround with you.
Create a Maven project job and configure webhooks between Jenkins and GitLab. In Source Code Management, add as many branches as you want.
You can create a multibranch pipeline in Jenkins by choosing "Multibranch Pipeline" when creating a new item.
There is no Poll SCM trigger, but you can use Scan Multibranch Pipeline Triggers instead. Also take advantage of webhooks if the SCM you use offers that option.
As for the Build whenever a SNAPSHOT dependency is built option, I don't think there is a way to do this with a multibranch pipeline, because that option comes from the Maven plugin and is only available for Maven projects. But even without it you can achieve the same scenario in a different way by following good practice:
Build the dependent project separately and keep the generated artifact in Nexus or Artifactory, so that the other job can pull it independently (see the sketch below).
Make it a good practice to always use RELEASE versions, not SNAPSHOT versions.
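A minimal sketch of the consuming side, assuming the upstream artifact has been deployed to a shared repository (all coordinates and the URL are placeholders): the downstream project simply declares the released artifact as an ordinary dependency.
<repositories>
  <repository>
    <id>company-releases</id>
    <!-- placeholder URL for your Nexus or Artifactory release repository -->
    <url>https://nexus.example.com/repository/releases</url>
  </repository>
</repositories>
<dependencies>
  <dependency>
    <!-- placeholder coordinates for the upstream project's published artifact -->
    <groupId>com.example</groupId>
    <artifactId>upstream-service</artifactId>
    <!-- a RELEASE version, not x.y.z-SNAPSHOT -->
    <version>1.2.3</version>
  </dependency>
</dependencies>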
Kindly note: I have edited this question.
I am observing a very strange situation. I have two jobs configured in Jenkins with the same configuration, except that one of them is a continuous build and the other is a nightly build (Poll SCM configured with #midnight), with a SonarQube configuration to generate a report.
Both builds have the same repository URL and both finish with the build result SUCCESS. But in the continuous build every module is analysed before it succeeds, whereas in the nightly build modules are skipped.
My question is: the same build runs fine as a continuous build but not as a nightly build, so what could be the cause of this?
Earlier I was using -DskipTests, which analysed all modules and succeeded in the continuous build, but skipped them in the nightly build.
So I referred to this link, Maven skip tests, and added -Dmaven.test.skip=true to the Maven goals. Now some of the modules are analysed and succeed, but one of the modules failed, because of which the other modules were skipped. Below is the error log:
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.12.4:test
(default-test) on project ASData: There are test failures.
Note: I am using Maven 3.3.1 and SonarQube 5.1.
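For reference, the two flags map onto Surefire configuration parameters. A sketch, using the Surefire version from the error above:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.12.4</version>
  <configuration>
    <!-- equivalent of -DskipTests: test sources are still compiled, but tests are not executed -->
    <skipTests>true</skipTests>
    <!-- <skip> is bound to the maven.test.skip property; on the command line that property
         also makes the compiler plugin skip compiling the test sources -->
    <!-- <skip>true</skip> -->
  </configuration>
</plugin>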
I am still not allowed to add comments, so I will ask here.
Where do you keep the modules, and can you guarantee that no one will sever the connection between your build and the module storage?
I'd like to configure the jasmine-maven-plugin to make Jenkins mark the build unstable if a test fails, but the only options appear to be:
set haltOnFailure to true and have failures break the build, or
set haltOnFailure to false and have failures reported in the logs while the build succeeds.
Is there a way to check the logs post-test and mark the build unstable?
Sam Hasler's answer only works for freestyle Jenkins jobs. We use Maven jobs and this configuration option of the JUnit Jenkins plugin is unavailable for Maven jobs. So I was looking for a more universal solution.
What we did in order to get it working was to reconfigure the Jasmine Maven plugin to:
no longer halt the build upon test failures and
write the Jasmine test reports to target/surefire-reports, where Jenkins expects to find them. This has the additional advantage that we now also see failed Jasmine tests in the build job alongside our Java tests.
<haltOnFailure>false</haltOnFailure>
<jasmineTargetDir>${project.build.directory}/surefire-reports</jasmineTargetDir>
Now our build jobs are yellow (unstable) as expected, no longer red (failed).
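In context, the relevant part of the pom.xml looks roughly like this (the plugin version and goal binding are illustrative; the two configuration values are the ones that matter):
<plugin>
  <groupId>com.github.searls</groupId>
  <artifactId>jasmine-maven-plugin</artifactId>
  <!-- illustrative version -->
  <version>2.2</version>
  <executions>
    <execution>
      <goals>
        <goal>test</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- do not fail the Maven build on Jasmine test failures -->
    <haltOnFailure>false</haltOnFailure>
    <!-- write the reports where the Jenkins JUnit publisher already looks -->
    <jasmineTargetDir>${project.build.directory}/surefire-reports</jasmineTargetDir>
  </configuration>
</plugin>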
Found the answer myself!
I had to configure Jenkins to also look at the Jasmine JUnit report:
Under Publish JUnit test result report, add **/TEST-jasmine.xml to Test report XMLs, comma-separated if there is something there already:
**/TESTS-TestSuites.xml,**/TEST-jasmine.xml
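For reference, the same setting is stored in the job's config.xml under the JUnit result archiver; a sketch of how it ends up looking:
<publishers>
  <hudson.tasks.junit.JUnitResultArchiver>
    <!-- the comma-separated patterns from the "Test report XMLs" field -->
    <testResults>**/TESTS-TestSuites.xml,**/TEST-jasmine.xml</testResults>
  </hudson.tasks.junit.JUnitResultArchiver>
</publishers>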
I have a job which runs successfully as a Maven build, but fails when run in Hudson.
The regular output shows BUILD SUCCESSFUL but the build is marked as failed (red ball) and Cobertura reporting is skipped "because build was not UNSTABLE or better".
I tried putting a log recorder on hudson.model.Run, which shows that some process is setting the build status to failed (one to failed, then one to successful, then a second one setting it back to failed). However, this doesn't give me any insight into which process this is, or why it is doing that.
What can I do to troubleshoot the reason for and origin of this failure?
EDIT:
The last few lines of my console output show the regular maven BUILD SUCCESS info messages, followed by:
channel stopped
Skipping Cobertura coverage report as build was not Unstable or better
Finished: SUCCESS
Whether a build fails in Hudson is determined by whether the last build step returns successfully (exit code 0).
Do you build this as a freestyle or a Maven project in Hudson? If it is freestyle, is it the only process that is run?
Build failures in Hudson can also come from failing post-build steps, such as collecting test result information.
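To illustrate the first point, in a freestyle job's config.xml it is the exit status of the last entry under <builders> that decides the result. A sketch (the element names are from memory of Hudson's job XML, and the script name is hypothetical):
<builders>
  <!-- Maven step: prints BUILD SUCCESS, but is not necessarily the last word -->
  <hudson.tasks.Maven>
    <targets>clean install</targets>
  </hudson.tasks.Maven>
  <!-- if this final shell step exits non-zero, the build is marked as failed
       even though Maven reported BUILD SUCCESS above -->
  <hudson.tasks.Shell>
    <command>./collect-extra-metrics.sh</command>
  </hudson.tasks.Shell>
</builders>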
My build scenario is like this (simplified):
Compile
Package (*.zip)
Deploy to test environment
Run tests against the environment
If the tests fail, TeamCity still publishes the artifacts. This is unnecessary and consumes disk space. How can I prevent this? I can't find any checkbox or similar option (TeamCity 6.5 Enterprise).
As far as I can tell, TeamCity doesn't have a built-in option to disable artifact publishing if the build fails.
However, in the build script called by TeamCity you could try:
Removing artifact paths from the build configuration, and instead emitting the appropriate TeamCity service messages with your artifact paths only when the tests are complete and successful (see the sketch after this list).
Only copying files to the artifact paths configured in TeamCity after the tests are complete and successful.
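For the first option, the build script can emit TeamCity's publishArtifacts service message itself once the tests have passed. If the build is driven by an Ant script, for example, it could look roughly like this (target and path names are placeholders):
<!-- only reached if the test target completed without failing the build -->
<target name="publish-artifacts" depends="test">
  <!-- TeamCity picks this message up from stdout and publishes the listed files -->
  <echo message="##teamcity[publishArtifacts 'output/*.zip']"/>
</target>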