Why is Jenkins.get().getRootUrl() not available when generating DSL? - jenkins-pipeline

I'm debugging a problem with atlassian-bitbucket-server-integration-plugin. The behavior occurs when generating a multi-branch pipeline job, which requires a Bitbucket webhook. The plugin works fine when creating the pipeline job from the Jenkins UI. However, when using DSL to create an equivalent job, the plugin errors out attempting to create the webhook.
I've tracked this down to a line in RetryingWebhookHandler:
String jenkinsUrl = jenkinsProvider.get().getRootUrl();
if (isBlank(jenkinsUrl)) {
    throw new IllegalArgumentException("Invalid Jenkins base url. Actual - " + jenkinsUrl);
}
The jenkinsUrl is used as the target for the webhook. When the pipeline job is created from the UI, the jenkinsUrl is set as expected. When the pipeline job is created by my DSL in a freeform job, the jenkinsUrl is always null. As a result, the webhook can't be created and the job fails.
I've tried various alternative ways to get the Jenkins root URL, such as static references like Jenkins.get().getRootUrl() and JenkinsLocationConfiguration.get().getUrl(). However, all values come up empty. It seems like the Jenkins context is not available at this point.
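For reference, this is roughly how I probed those lookups from the Jenkins Script Console (a minimal diagnostic sketch; Groovy accepts plain Java syntax, and both getters are standard core API):

import jenkins.model.Jenkins;
import jenkins.model.JenkinsLocationConfiguration;

// Both of these came back null/empty in my case.
String fromJenkins  = Jenkins.get().getRootUrl();
String fromLocation = JenkinsLocationConfiguration.get().getUrl();
System.out.println("Jenkins.get().getRootUrl()                  = " + fromJenkins);
System.out.println("JenkinsLocationConfiguration.get().getUrl() = " + fromLocation);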
I'd like to submit a PR to fix this behavior in the plugin, but I can't come up with anything workable. I am looking for suggestions about the root cause and potential workarounds. For instance:
Is there something specific about the way my freeform job is executed that could cause this?
Is there anything specific to the way jobs are generated from DSL that could cause this?
Is there another mechanism I should be looking at to get the root URL from configuration, which might work better?
Is it possible that this behavior points to a misconfiguration in my Jenkins instance?
If needed, I can share the DSL I'm using to generate the job, but I don't think it's relevant. By commenting out the webhook code that fails, I've confirmed that the DSL generates a job with the correct config.xml underneath. So, the only problem is how to get the right configuration to the plugin so it can set up the webhook.

It turns out that this behavior was caused by a partial misconfiguration of Jenkins.
While debugging problems with broken build links in Bitbucket (pointing me at unconfigured-jenkins-location instead of the real Jenkins URL), I discovered a yellow warning message on the front page of Jenkins which I had missed before, telling me that the root server URL was not set:
Jenkins root URL is empty but is required for the proper operation of many Jenkins features like email notifications, PR status update, and environment variables such as BUILD_URL.
Please provide an accurate value in Jenkins configuration.
This error message had a link to Manage Jenkins > Configure System > Jenkins Location. The correct Jenkins URL actually was set there (I had already double-checked this), but the system admin email address in the same section was not set. When I added a valid email address, the yellow warning went away.
This change fixed both the broken build URL in Bitbucket and the problems with my DSL. So, even though it doesn't make much sense, it seems that the missing system admin email address was the root cause of this behavior.
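For anyone hitting the same thing, both settings from that section can be checked quickly in the Script Console (a minimal sketch; Groovy accepts plain Java syntax):

import jenkins.model.JenkinsLocationConfiguration;

JenkinsLocationConfiguration location = JenkinsLocationConfiguration.get();
// Both values need to be set for the yellow warning to go away.
System.out.println("Jenkins URL        : " + location.getUrl());
System.out.println("System admin email : " + location.getAdminAddress());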

Related

Go CD failure message available in environment

There is a list of env variables available for GoCD at:
https://docs.gocd.org/current/faq/environment_variables.html
However I'm looking for something like: GO_BUILD_ERROR or similar.
I would like to have the failure reason or message when a build fails to pass this to an external script or message.
There seems to be nothing in the documentation.
GoCD doesn't have any such variables. I think the reason is mostly that GoCD is very generic about what commands constitute a build for a material. You might want to parse the logs manually to figure that out.
Also, in GoCD, environment variables are used as input to the stages, not as output from them. If you're planning to build a plugin or wrapper for the commands that run, consider storing the failure details as properties in the jobs; that way they can also be queried later if required.
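To illustrate the wrapper idea, here is a rough, hypothetical Java sketch (nothing here is a GoCD API): it runs the real build command, echoes the output so the console log is unchanged, and on failure writes the last lines to a file that the job could publish as an artifact or record as a property.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayDeque;
import java.util.Deque;

public class BuildWrapper {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical wrapper: pass the real build command as arguments,
        // e.g. java BuildWrapper mvn clean install
        Process p = new ProcessBuilder(args).redirectErrorStream(true).start();

        Deque<String> tail = new ArrayDeque<>();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);                  // keep the normal console output
                tail.addLast(line);
                if (tail.size() > 50) tail.removeFirst();  // remember only the last 50 lines
            }
        }

        int exit = p.waitFor();
        if (exit != 0) {
            // Write the captured tail so the job can publish it as an artifact
            // (or store it as a job property) for later inspection.
            Files.write(Paths.get("failure-reason.txt"), tail);
        }
        System.exit(exit);  // preserve the failure for GoCD
    }
}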

Why would I suddenly get 'KerberosName$NoMatchingRule: No rules applied to user#REALM' errors?

We've been using Kerberos auth with several (older) Cloudera instances without a problem, but we are now getting 'KerberosName$NoMatchingRule: No rules applied to user#REALM' errors. We've been modifying code to add functionality, but AFAIK nobody has touched either the authentication code or the cluster configuration.
(I can't rule it out - and clearly SOMETHING has changed.)
I've set up a simple unit test and verified this behavior. At the command line I can execute 'kinit -kt user.keytab user' and get the corresponding Kerberos tickets. That verifies the correct configuration and keytab file.
However my standalone app fails with the error mentioned.
UPDATE
As I edit this I've been running the test in the debugger so I can track down exactly where the test is failing, and it seems to succeed when run in the debugger!!! Obviously there's something different in the environments, not some weird heisenbug that is only triggered when nobody is looking.
I'll update this if I find the cause. Does anyone else have any ideas?
auth_to_local has to have at least one rule.
Make sure you have a "DEFAULT" rule at the very end of auth_to_local.
If none of the rules before it match, at least the DEFAULT rule will kick in.
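If you want to reproduce the rule matching in a unit test, you can exercise the class that throws this error directly (a sketch assuming the hadoop-auth library is on the classpath and EXAMPLE.COM stands in for your realm):

import org.apache.hadoop.security.authentication.util.KerberosName;

public class AuthToLocalTest {
    public static void main(String[] args) throws Exception {
        // Without a trailing DEFAULT rule, principals that no RULE matches
        // fail with KerberosName$NoMatchingRule.
        KerberosName.setRules(
            "RULE:[1:$1@$0](.*@EXAMPLE\\.COM)s/@.*//\n" +
            "DEFAULT");

        KerberosName name = new KerberosName("user@EXAMPLE.COM");
        System.out.println(name.getShortName());   // expect "user"
    }
}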

Viewing TeamCity service messages

I'm troubleshooting a build step in TeamCity 9.0.4. The problem seems to lie within the service message output. Is it possible to view these after the build has completed? They are not included in the build log.
The documentation on service messages simply says, "In order to be processed by TeamCity, they should be printed into a standard output stream of the build."
https://confluence.jetbrains.com/display/TCD9/Build+Script+Interaction+with+TeamCity
(To some extent the service messages can be viewed by manually rerunning the build step and monitoring standard output, but this is not always feasible.)
The documentation for service messages implies that you need to write them to standard out/error rather than to a log file. If you write them to standard out, TeamCity will automatically pick them up and show them in the **Build Log** tab.
What this means is that if you have a:
- shell script, use echo for your service messages
- Java class, use System.out.println (see the sketch below)
and so on.
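For example, a Java build step could emit service messages like this (a minimal sketch; the statistic key and value are only illustrative):

public class ReportToTeamCity {
    public static void main(String[] args) {
        // Service messages are only picked up when they are written to standard output.
        System.out.println("##teamcity[message text='custom build step message' status='NORMAL']");
        System.out.println("##teamcity[buildStatisticValue key='my_stat_value' value='125']");
    }
}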
Different languages also have different plugins for this; for example, Perl has TapHarness.pl to write TeamCity messages to the console.
EDIT:
If you just want to view service messages, you can find them in the build logs on the TeamCity agent that the build ran on. If you do not find them in the build logs, either the build log has rolled over or you need to increase the verbosity or debug level of your logs (depends on the language).
There was a problem which is solved nowadays:
TeamCity now parses service messages inside other service messages, but only if the original message was tagged with tc:parseServiceMessagesInside. Example:
##teamcity[testStdOut name='test1' out='##teamcity|[buildStatisticValue key=|'my_stat_value|' value=|'125|'|]' tc:tags='tc:parseServiceMessagesInside']
A link to JetBrains bug tracker:
https://youtrack.jetbrains.com/issue/TW-45311

How to enqueue more than one build of the same configuration?

We are using two TeamCity servers (one for builds and one for GUI tests). The GUI tests are triggered by an HTTP GET in the last step of the build (as in http://confluence.jetbrains.com/display/TCD65/Accessing+Server+by+HTTP).
The problem is that only one build of the same configuration can be in the queue at the same time. Is there a way to enable multiple queued starts of the same configuration? Can I use some workaround like sending a dummy id?
At the bottom of the section "Triggering a Custom Build" here: http://confluence.jetbrains.com/display/TCD65/Accessing+Server+by+HTTP, you can find information about passing custom parameters to the build.
Just define some unused configuration parameter, like "BuildId", and pass, for example, the current date (a GUID will work as well) to it:
...&buildId=12/12/23 12:12:12
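As a sketch of that in Java (the trigger URL itself is whatever you already call from the last build step, passed in here as a placeholder argument; the only point is that the dummy parameter value must be unique and URL-encoded):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Date;

public class EnqueueGuiTests {
    public static void main(String[] args) throws IOException {
        // Placeholder: the HTTP trigger URL you already call in the last build step.
        String triggerUrl = args[0];

        // The current date (a GUID works as well) makes every request unique,
        // so TeamCity no longer collapses the builds into a single queued one.
        // Spaces and slashes in the date must be URL-encoded.
        String uniqueValue = URLEncoder.encode(new Date().toString(), "UTF-8");

        URL url = new URL(triggerUrl + "&buildId=" + uniqueValue);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println("Trigger returned HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}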

How to trigger a hudson job by another job which is in a different hudson

I have job A in Hudson A and Job B in Hudson B. I want to trigger job A by Job B.
In your job B configuration, check the Trigger builds remotely (e.g., from scripts) checkbox and provide a token.
The help text there shows you the URL you can call to trigger a build from remote scripts (e.g. from a shell script in Hudson job A).
However, that would trigger job B no matter what the result of job A is.
Morechilli's answer is probably the best solution.
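For completeness, calling the remote trigger URL from code looks roughly like this (a sketch; host, job name, and token are placeholders, and newer Jenkins versions may additionally require authentication and a crumb):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class TriggerRemoteJob {
    public static void main(String[] args) throws IOException {
        // Placeholder URL: the other Hudson's host, the job name, and the token
        // entered under "Trigger builds remotely".
        String trigger = "http://hudson-b.example.com/job/B/build?token=MY_TOKEN";

        HttpURLConnection conn = (HttpURLConnection) new URL(trigger).openConnection();
        conn.setRequestMethod("POST");   // older Hudson also accepts a plain GET
        System.out.println("Trigger returned HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}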
I haven't used Hudson but I would guess your simplest approach would be to use the URL trigger:
http://wiki.hudson-ci.org/display/HUDSON/URL+Change+Trigger
I think there is a latest build url that could be used for this.
In the latest versions of Hudson, the lastSuccessfulBuild/ HTML page will contain the elapsed time since it was built, which will be different for each call. This causes the URL Change Trigger to spin.
One fix is to use the xml, json, or python APIs to request only a subset of the information. Using the 'tree' request parameter, the following URL will return an XML document containing only the build number of the last successful build.
http://SERVER:PORT/job/JOBNAME/lastSuccessfulBuild/api/xml?tree=number
Using this URL restored the behavior I expected from the URL Change Trigger.
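If you end up polling that URL yourself instead of relying on the URL Change Trigger, the check is just "did the document change since last time" (a rough sketch; server, port, and job name are placeholders):

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Scanner;

public class LastSuccessfulBuildWatcher {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder URL, same shape as above.
        String api = "http://SERVER:PORT/job/JOBNAME/lastSuccessfulBuild/api/xml?tree=number";
        String previous = null;

        while (true) {
            try (InputStream in = new URL(api).openStream();
                 Scanner s = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
                String current = s.hasNext() ? s.next() : "";
                if (previous != null && !current.equals(previous)) {
                    System.out.println("New successful build detected: " + current);
                    // ...call the downstream job's trigger URL here...
                }
                previous = current;
            }
            Thread.sleep(60_000);   // poll once a minute
        }
    }
}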
Personally, I find the easiest way to do this is to watch the build timestamp:
PROJECT_NAME/lastSuccessfulBuild/buildTimestamp
I'm using wget to trigger the build:
wget --post-data 'it-just-need-to-be-a-POST-request' \
     --auth-no-challenge --http-user=myuser --http-password=mypassword \
     http://jenkins.xx.xx/xxx/job/A/build?delay=0sec
There are other ways to trigger a build; see the REST and other APIs of Jenkins.
But this works great on Unix.
