Jenkins: Automate Notifications for a Jenkins Pipeline - jenkins-pipeline

I have a Jenkins pipeline that performs different steps during deployment. While the deployment is running, I would like Jenkins to send notifications about the status of each step to a channel in Microsoft Teams.
Can someone provide a suggestion on the best route to achieve this?

You should be able to use the Office 365 Connector for this.
stage('Upload') {
    steps {
        // some instructions here
        office365ConnectorSend webhookUrl: 'https://outlook.office.com/webhook/123456...',
            message: 'Application has been [deployed](https://uat.green.biz)',
            status: 'Success'
    }
}
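If you want a notification for every stage's outcome rather than a single message, a minimal sketch (assuming the same Office 365 Connector plugin; the webhook URL, stage name, and messages are illustrative) could put the send calls in per-stage post blocks:
pipeline {
    agent any
    environment {
        // Hypothetical webhook URL; replace with your Teams incoming-webhook address
        TEAMS_WEBHOOK = 'https://outlook.office.com/webhook/123456...'
    }
    stages {
        stage('Deploy') {
            steps {
                echo 'deploying...' // your deployment instructions go here
            }
            post {
                success {
                    office365ConnectorSend webhookUrl: env.TEAMS_WEBHOOK,
                        message: "Stage 'Deploy' succeeded (${env.JOB_NAME} #${env.BUILD_NUMBER})",
                        status: 'Success'
                }
                failure {
                    office365ConnectorSend webhookUrl: env.TEAMS_WEBHOOK,
                        message: "Stage 'Deploy' failed (${env.JOB_NAME} #${env.BUILD_NUMBER})",
                        status: 'Failure'
                }
            }
        }
    }
}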

Related

When triggering a Jenkins pipeline from Spinnaker, is it possible to pass pipelineParams?

When triggering jobs from Spinnaker, is there a way to pass pipelineParams? For example, I see:
{
  "continuePipeline": false,
  "failPipeline": true,
  "isNew": true,
  "job": "job123",
  "master": "master123",
  "name": "Jenkins",
  "parameters": {
    "mavenProfile": "FooBar" <-- ???
  },
  "type": "jenkins"
}
What purpose does the parameters field serve? Can we use it to pass parameters to Jenkins pipelines?
Has anyone successfully accomplished passing parameters to Jenkins pipelines?
When the above stage gets triggered, it immediately fails with the message:
job/master, passing params to a job which doesn't need them
The Jenkins integration in Spinnaker launches individual jobs.
The parameters field that you have highlighted points to the job parameters defined when you select "This project is parameterized" in the Jenkins job config. The reason you're getting that error is that Jenkins exposes two different endpoints for launching jobs: one with parameters and one without.
As far as I know, there is no way to launch Jenkins pipelines from Spinnaker, but I imagine it would look different from the Jenkins launch-job stage since it would have to hit a different API endpoint.
As the answer from Thomas Lin states, the problem was that I was trying to pass parameters to a non-parameterized pipeline.
I went ahead and made my Jenkins pipeline parameterized:
https://github.com/jenkinsci/pipeline-model-definition-plugin/wiki/Parametrized-pipelines
parameters {
    string(defaultValue: "", description: 'What profile?', name: 'mavenProfile')
}
And as soon as I did that my Jenkins pipeline started receiving parameters from my Spinnaker pipeline.
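For context, here is a minimal sketch of a parameterized declarative pipeline that consumes the value supplied by the Spinnaker stage (the stage name and Maven command are illustrative, not from the original post):
pipeline {
    agent any
    parameters {
        string(defaultValue: "", description: 'What profile?', name: 'mavenProfile')
    }
    stages {
        stage('Build') {
            steps {
                // The value passed from the Spinnaker job stage is available as params.mavenProfile
                sh "mvn clean install -P${params.mavenProfile}"
            }
        }
    }
}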

Quality Gate Failure in SonarQube does not fail the build in Teamcity

I set up a build project in TeamCity and integrated SonarQube with it. The project builds and even publishes the report successfully to the SonarQube console, but when the quality gate fails, it does not break the build. I searched and read about the Build Breaker approach, but that is supposedly already supported by the SonarQube plugin for TeamCity according to this document: https://confluence.jetbrains.com/display/TW/SonarQube+Integration
Am I missing some configuration, or is there a gotcha? I searched a lot but didn't find any proper documentation or lead on this.
In the end I had to write a custom script that uses the exit status to break the build. I used the API to check the Quality Gate (QG) status.
# TeamCity injects the project id; SONAR_TOKEN and SONAR_URL are placeholders for your own token and server
PROJECTKEY="%teamcity.project.id%"
# Ask the SonarQube API for the Quality Gate status and strip the surrounding quotes
QGSTATUS=`curl -s -u SONAR_TOKEN: http://SONAR_URL:9000/api/qualitygates/project_status?projectKey=$PROJECTKEY | jq '.projectStatus.status' | tr -d '"'`
# Exit non-zero so TeamCity marks the build step as failed
if [ "$QGSTATUS" = "OK" ]
then
    exit 0
elif [ "$QGSTATUS" = "ERROR" ]
then
    exit 1
fi
I managed to fail the build based on Quality Gate settings using the sonar.qualitygate.wait=true parameter.
There's an example on their GitLab pipeline sample page: https://docs.sonarqube.org/latest/analysis/gitlab-cicd/
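Since this page is mostly about Jenkins, here is a minimal sketch of the same idea in a Jenkins pipeline stage (the Maven invocation and timeout value are illustrative assumptions, not from the linked GitLab example):
stage('SonarQube analysis') {
    steps {
        // sonar.qualitygate.wait makes the scanner poll the server for the Quality Gate
        // result and exit non-zero (failing this step) if the gate is red
        sh 'mvn sonar:sonar -Dsonar.qualitygate.wait=true -Dsonar.qualitygate.timeout=300'
    }
}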
The SonarQube plugin doesn't break the build when the quality gate has failed. Why? Everything is described here: Why You Shouldn't Use Build Breaker
The main conclusion is:
[...] SonarSource doesn't want to continue the feature. [...]
Once we started using wallboards we stopped using the Build Breaker plugin, but still believed that using it was an okay practice. And then came SonarQube 5.2, which cuts the connection between the analyzer and the database. Lots of good things came with that cut, including a major change in architecture: analysis of source code is done on the analyzer side and all aggregate number computation is now done on the server side. Which means… that the analyzer doesn't know about the Quality Gate anymore. Only the server does, and since analysis reports are processed serially, first come first served, it can take a while before the Quality Gate result for a job is available.
In other words, from our perspective, the Build Breaker feature doesn't make sense anymore.
You have to verify the quality gate status on your own. You can read how to do it here: Access quality gate status from sonarqube api
The answer to xpmatteo's question:
Am I the only one that finds it difficult to understand what the quoted explanation means?
You have two tools. SonarScanner and SonarQube.
1) SonarScanner is executed on CI servers. It analyses source code and pushes analysis results to SonarQube sever.
2) SonarQube server processes data and knows if the new changes pass Quality Gates.
SonarScanner has no idea about the final result (pass or fail), so it cannot fail the build (it had that information before SonarQube 5.2, because it processed all the data and pushed only the results to the database). This means the Build Breaker plugin no longer makes sense, because it cannot work with the current design. After executing SonarScanner you have to poll the server and check the Quality Gate status; then you can decide whether the build should fail or not.
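For illustration, in a Jenkins pipeline the SonarQube Scanner plugin wraps exactly this poll-then-decide pattern; a minimal sketch, assuming a server named 'MySonarServer' is configured in Jenkins and the analysis is run with Maven:
stage('SonarQube analysis') {
    steps {
        // Runs the analysis and records the task id so the next stage can look it up
        withSonarQubeEnv('MySonarServer') {
            sh 'mvn sonar:sonar'
        }
    }
}
stage('Quality Gate') {
    steps {
        // Waits for the server to compute the Quality Gate, then fails the pipeline if it is red
        timeout(time: 10, unit: 'MINUTES') {
            waitForQualityGate abortPipeline: true
        }
    }
}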
Follow the post below; it might help you.
https://docs.sonarqube.org/display/SONARQUBE45/Build+Breaker+Plugin
Run your sonarqube task with the "sonar.buildbreaker.skip" property.
e.g.: gradle clean build sonarqube publish -Dsonar.buildbreaker.skip=false
In my scenario the CI is GitHub Actions, but irrespective of the CI tool, Sonar's quality gate status (Red/Green) should be sent back to your CI. You can browse the report status at this URL http://:/api/ce/task?id= once the report has been generated.
You have to run this script after the reports are generated to check the status and fail the job if SonarQube fails.

Jenkins Pipeline EmailExt: Who are the "Suspects Causing Build to Begin Failing"?

In a Jenkins Pipeline job, when sending an email with the EmailExt plugin, one of the Recipient providers is "Suspects Causing the Build to Begin Failing".
The class name for this provider is FirstFailingBuildSuspectsRecipientProvider.
Who is supposed to be in this list? When I try using this provider, it sends no emails out.
According to https://github.com/jenkinsci/email-ext-plugin/blob/master/src/main/java/hudson/plugins/emailext/plugins/recipients/FirstFailingBuildSuspectsRecipientProvider.java it is:
A recipient provider that assigns ownership of a failing build to the set of developers (including any initiator) that committed changes that first broke the build.
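For reference, a minimal sketch of how this provider is wired into a pipeline's post section (the subject and body are illustrative):
post {
    failure {
        emailext(
            subject: "Build failing: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
            body: "See ${env.BUILD_URL} for details.",
            // Emails the committers (and any initiator) of the build that first started the failure streak
            recipientProviders: [[$class: 'FirstFailingBuildSuspectsRecipientProvider']]
        )
    }
}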

Viewing TeamCity service messages

I'm troubleshooting a build step in TeamCity 9.0.4. The problem seems to lie within the service message output. Is it possible to view these after the build has completed? They are not included in the build log.
The documentation on service messages simply says "In order to be processed by TeamCity, they should be printed into a standard output stream of the build."
https://confluence.jetbrains.com/display/TCD9/Build+Script+Interaction+with+TeamCity
(To some extent the service messages can be viewed by manually rerunning the build step and monitoring standard output, but this is not always feasible.)
The documentation for service messages implies that you need to write them to standard out/error rather than to a log file. If you write them to standard out, TeamCity will automatically pick them up and show them in the Build Log tab.
What this means is that if you have a:
- shell script, use echo for your service messages
- Java class, use System.out.println
and so on.
Different languages also have different plugins for this; for example, Perl has TapHarness.pl to write TeamCity messages to the console.
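For example, a Groovy or Java build step would emit a service message like this (the statistic key is made up):
// Anything written to standard out in the ##teamcity[...] format is picked up by TeamCity
println "##teamcity[buildStatisticValue key='myCustomMetric' value='42']"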
EDIT:
If you just want to view service messages, you can find them in the build logs on the TeamCity agent that the build ran on. If you do not find them there, either the build log has rolled over or you need to increase the verbosity or debug level of your logs (this depends on the language).
This used to be a problem, but it has since been solved:
TeamCity now parses service messages inside other service messages, but only if original message was tagged with tc:parseServiceMessagesInside. Example:
##teamcity[testStdOut name='test1' out='##teamcity|[buildStatisticValue key=|'my_stat_value|' value=|'125|'|]' tc:tags='tc:parseServiceMessagesInside']
A link to JetBrains bug tracker:
https://youtrack.jetbrains.com/issue/TW-45311

Jenkins/Hudson upstream job does not get the status "ball" color of the downstream jobs

I have a job upstream that executes 4 downstream jobs.
If the upstream job finishes successfully, the downstream jobs start their execution.
The upstream job, since it finishes successfully, gets a blue ball (build result = stable), but even though the downstream jobs fail (red ball) or are unstable (yellow ball), the upstream job keeps its blue color.
Is there any way to make the result of the upstream job depend on the downstream jobs? I mean, if three downstream jobs get a stable build but one of them gets an unstable build, the upstream build result should be unstable.
I found the solution. There is a plugin called Groovy Postbuild that lets you execute a Groovy script in the post-build phase.
By adding a simple piece of code to the downstream jobs you can modify the upstream job's overall status.
This is the code you need to add:
// Get the upstream build(s) that triggered this downstream build
upstreamBuilds = manager.build.getUpstreamBuilds();
upstreamJob = upstreamBuilds.keySet().iterator().next();
lastUpstreamBuild = upstreamJob.getLastBuild();
// Downgrade the upstream result when this downstream build did worse
if(lastUpstreamBuild.getResult().isBetterThan(manager.build.result)) {
    lastUpstreamBuild.setResult(manager.build.result);
}
You can find more info in the entry on my blog here.
Another option that might work for you is to use the parameterized build plugin. It allows you to have your 4 "downstream" builds as build steps. This means that your "parent" build can fail if any of the child builds do.
We do this when we want to hide complexity for the build-pipeline plugin view.
We had a similar sort of issue and haven't found a perfect solution. A partial solution is to use the Promoted Builds Plugin. Configure it for your upstream project to include some visual indicator when the downstream job finishes. It doesn't change the overall job status, but it does notify us when the downstream job fails.
Perhaps this plugin does what you are looking for?
Jenkins Prerequisite build step Plugin
The workaround for my project is to create a new job which is downstream of the downstream jobs. We set a post-build step "Trigger parameterized build on other projects" in all three of the original downstream jobs. The parameter passed into the new job depends on the three jobs' status, and it causes the new job to react accordingly (a sketch follows the steps below).
1. Create a new job which contains one simple class and one simple test. Both are parameter-dependent, i.e. the class fails if the parameter "status" = fail, the class passes but the test fails if "status" = unstable, etc.
2. Set "Trigger parameterized build on other projects" for the three original downstream jobs with the relevant configuration.
3. Set notification of the new job accordingly.
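A minimal sketch of what that aggregation job could look like as a pipeline, assuming a hypothetical string parameter named 'status' (the original workaround uses a simple class and test instead):
pipeline {
    agent any
    parameters {
        string(defaultValue: 'stable', description: 'Result reported by the triggering downstream job', name: 'status')
    }
    stages {
        stage('Aggregate') {
            steps {
                script {
                    if (params.status == 'fail') {
                        // Mark this build red
                        error 'A downstream job failed'
                    } else if (params.status == 'unstable') {
                        // Mark this build yellow
                        currentBuild.result = 'UNSTABLE'
                    }
                }
            }
        }
    }
}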
