Viewing TeamCity service messages

I'm troubleshooting a build step in TeamCity 9.0.4. The problem seems to lie within the service message output. Is it possible to view these after the build has completed? They are not included in the build log.
The documentation on service messages simply says: "In order to be processed by TeamCity, they should be printed into a standard output stream of the build."
https://confluence.jetbrains.com/display/TCD9/Build+Script+Interaction+with+TeamCity
(To some extent the service messages can be viewed by manually rerunning the build step and monitoring standard output, but this is not always feasible.)

The documentation for service messages implies that you need to write them to standard out/error rather than to a log file. If you write a service message to standard out, TeamCity will automatically pick it up and show it in the **Build Log** tab.
What this means is that if you have a shell script, use echo for your service messages; if you have a Java class, use System.out.println; and so on.
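For instance, a minimal Java sketch (the class name is arbitrary, and the statistic key/value are just the example values used later in this post):

public class EmitServiceMessage {
    public static void main(String[] args) {
        // TeamCity scans stdout line by line; any line matching ##teamcity[...] is parsed.
        System.out.println("##teamcity[buildStatisticValue key='my_stat_value' value='125']");
        System.out.println("##teamcity[progressMessage 'half way done']");
    }
}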
Different languages also have different plugins for this; for example, Perl has TapHarness.pl to write TeamCity messages to the console.
EDIT:
If you want to just view service messages, you can find them in the build logs on the TeamCity agent that the build ran on. If you do not find them in the build logs, either the build log has rolled over or you need to increase the verbosity or debug level of your logs (this depends on the language).

There was a problem which has since been solved:
TeamCity now parses service messages inside other service messages, but only if the original message was tagged with tc:parseServiceMessagesInside. Example:
##teamcity[testStdOut name='test1' out='##teamcity|[buildStatisticValue key=|'my_stat_value|' value=|'125|'|]' tc:tags='tc:parseServiceMessagesInside']
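The inner message is escaped according to TeamCity's escaping rules (| becomes ||, ' becomes |', [ and ] become |[ and |], newlines become |n or |r). A small helper along these lines (the class and method names are arbitrary) reproduces the message above:

public class TcEscape {
    // Escape a value for embedding in a service message attribute.
    // The pipe must be replaced first, so that the pipes added by the
    // later replacements are not escaped a second time.
    static String escape(String s) {
        return s.replace("|", "||")
                .replace("'", "|'")
                .replace("[", "|[")
                .replace("]", "|]")
                .replace("\n", "|n")
                .replace("\r", "|r");
    }

    public static void main(String[] args) {
        String inner = "##teamcity[buildStatisticValue key='my_stat_value' value='125']";
        System.out.println("##teamcity[testStdOut name='test1' out='" + escape(inner)
                + "' tc:tags='tc:parseServiceMessagesInside']");
    }
}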
A link to JetBrains bug tracker:
https://youtrack.jetbrains.com/issue/TW-45311

Related

Task fails with no way to debug

I have a node running two jobs: they communicate with an external adaptor and then send the value on-chain.
One job works fine, which already tells me that the node can write on-chain.
The other job receives the request and talks with the external adaptor (I have verified this on the external adaptor server), but then doesn't submit anything on-chain.
There is no way to debug this through the Operator UI.
What should I do? I am running the Chainlink develop version because the most up-to-date stable version has a critical bug.
In Chainlink node version 1.8.0, there are "Error" and "Runs" tabs in your node UI in the browser, and these two tabs allow you to view what went wrong with your job run. You can find the latest Chainlink Docker image on Docker Hub.
The error messages under the "Error" tab reflect the error your job encountered in the run.
If there are no "Error" and "Runs" tabs in the browser, or there is nothing shown in the UI, you can also find error info in the log file kept on the server running the Chainlink node. The default path of the Chainlink node log file is /chainlink/chainlink_debug.log, so you can log into the server that runs the node and check the log for debugging.
Hope it helps.

Why is Jenkins.get().getRootUrl() not available when generating DSL?

I'm debugging a problem with atlassian-bitbucket-server-integration-plugin. The behavior occurs when generating a multi-branch pipeline job, which requires a Bitbucket webhook. The plugin works fine when creating the pipeline job from the Jenkins UI. However, when using DSL to create an equivalent job, the plugin errors out attempting to create the webhook.
I've tracked this down to a line in RetryingWebhookHandler:
String jenkinsUrl = jenkinsProvider.get().getRootUrl();
if (isBlank(jenkinsUrl)) {
throw new IllegalArgumentException("Invalid Jenkins base url. Actual - " + jenkinsUrl);
}
The jenkinsUrl is used as the target for the webhook. When the pipeline job is created from the UI, the jenkinsUrl is set as expected. When the pipeline job is created by my DSL in a freeform job, the jenkinsUrl is always null. As a result, the webhook can't be created and the job fails.
I've tried various alternative ways to get the Jenkins root URL, such as static references like Jenkins.get().getRootUrl() and JenkinsLocationConfiguration.get().getUrl(). However, all values come up empty. It seems like the Jenkins context is not available at this point.
I'd like to submit a PR to fix this behavior in the plugin, but I can't come up with anything workable. I am looking for suggestions about the root cause and potential workarounds. For instance:
Is there something specific about the way my freeform job is executed that could cause this?
Is there anything specific to the way jobs are generated from DSL that could cause this?
Is there another mechanism I should be looking at to get the root URL from configuration, which might work better?
Is it possible that this behavior points to a misconfiguration in my Jenkins instance?
If needed, I can share the DSL I'm using to generate the job, but I don't think it's relevant. By commenting out the webhook code that fails, I've confirmed that the DSL generates a job with the correct config.xml underneath. So, the only problem is how to get the right configuration to the plugin so it can set up the webhook.
It turns out that this behavior was caused by a partial misconfiguration of Jenkins.
While debugging problems with broken build links in Bitbucket (pointing me at unconfigured-jenkins-location instead of the real Jenkins URL), I discovered a yellow warning message on the front page of Jenkins which I had missed before, telling me that the root server URL was not set:
Jenkins root URL is empty but is required for the proper operation of many Jenkins features like email notifications, PR status update, and environment variables such as BUILD_URL.
Please provide an accurate value in Jenkins configuration.
This error message had a link to Manage Jenkins > Configure System > Jenkins Location. The correct Jenkins URL actually was set there (I had already double-checked this), but the system admin email address in the same section was not set. When I added a valid email address, the yellow warning went away.
This change fixed both the broken build URL in BitBucket, as well as the problems with my DSL. So, even though it doesn't make much sense, it seems like the missing system admin email address was the root cause of this behavior.
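For anyone who needs to verify or set these values outside the UI, a short Script Console snippet (Java-style syntax, which the Groovy console accepts; the URL and email below are placeholders, not values from this setup) can do it:

import jenkins.model.JenkinsLocationConfiguration;

JenkinsLocationConfiguration loc = JenkinsLocationConfiguration.get();
// A null root URL here reproduces the blank jenkinsUrl seen in RetryingWebhookHandler.
System.out.println("Root URL: " + loc.getUrl());
System.out.println("Admin address: " + loc.getAdminAddress());
// Placeholder values; substitute your real root URL and admin email, then persist.
loc.setUrl("https://jenkins.example.com/");
loc.setAdminAddress("admin@example.com");
loc.save();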

Logging for Talend job running within spring-boot

We have Talend jobs triggered within a Spring Boot application. Is there any way to configure the output of the Talend jobs to go to the application log files?
One workaround we found is to write logs directly to an external file (a filePath passed as a context param), but we wanted to find out if there is a way to configure this more seamlessly.
Not sure if I understood the question correctly, but I guess your concern might be about what happened to the triggered jobs.
Logging
With respect to logging for Talend, you could configure it using Log4j:
https://help.talend.com/reader/5DC~TBhDsBie5JTXyVLW4g/QSGCZJKXo~uhKvZDq1DxUg
Monitoring
Regarding the status of the executed job, you could retrieve the execution details using a REST call (the Talend MetaServlet API):
getTaskExecutionStatus
https://help.talend.com/reader/oYf9gKhmYrkWCiSua4qLeg/SLiAyHyDTjuznLR_F~MiQQ
By modifying the existing Talend job, you could also design a feedback loop, i.e. trigger a REST call back to your application with the execution details from the Talend job, as sketched below.
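A rough sketch of such a callback from the job's Java code (the endpoint URL, path, and JSON payload are hypothetical; shape them to whatever your Spring Boot application actually exposes):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class JobStatusCallback {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint in the Spring Boot application that records job results.
        URL url = new URL("http://localhost:8080/api/talend-job-status");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        String payload = "{\"jobName\":\"my_talend_job\",\"status\":\"SUCCESS\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Callback HTTP status: " + conn.getResponseCode());
    }
}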

Talend ESB deployment on runtime

I'm on Talend ESB Runtime.
I encountered problems while starting ./trun: nothing appeared on the screen after start. The process is launched but I can't get anything else...
Anyway, I tried to deploy a job, and there is something weird in the log about org.osgi.framework.BundleException in tesb.log.
karaf.log, on the other hand, is OK.
(Links to tesb.log, karaf.log, and the timestamped log in the repository data were attached here.)
I don't know how to investigate, because the logs are sparse and the JVM is the same between Talend ESB and the runtime...
Can you help me please?
You only showed a small snippet of the log. From this I can already see that at least one bundle cannot be resolved, which means that this bundle cannot be used. In the snippet the bundle seems to be a user bundle, but I am pretty sure you have other such log messages showing that one of the main bundles of Karaf cannot be loaded.
If you want to find the cause of the problem, look into these messages and search for non-optional packages that are not resolved. Usually this leads to a missing bundle.
If you simply want to get your system running again, you can reset Karaf by using
./trun clean
Remember though that you then have to reinstall all features again.

OEM 13C Log File Monitoring

I have installed OEM 13c and deployed a couple of agents and want to test out the Log File Monitoring utility. I have enabled it and added a log file to monitor.
When I go to test it out, it does not show any alerts when messages are put into the log file. On the agent server, I have tailed the file and can see the messages coming into the log file.
Does anyone have experience adding log files to OEM? I could have configured it wrong. Are there any troubleshooting steps I can follow to see if the server is even contacting the agent to read the log file? The status of the agent is good, with no incidents.
Without access to the system, it would be difficult to tell you the exact cause of this issue. However, I can list a few potential causes of this issue that I have experienced personally:
Permissions. The Oracle Enterprise Manager Agent is very convoluted when it comes to system permissions within a remote server. The agent can be owned and run as any number of users but, during metric evaluation, may also need sudo or PAM authentication permissions to access certain entities on the server. Depending on the authentication profiles on that server, this could be the cause of your issue. There are ways to grant the agent access through the PAM stack if that is necessary.
Syntax. The wildcard syntax in the OEM GUI can be a little confusing as well. I would play with the wildcard elements a bit on the "String" component to ensure that it isn't as simple as adding wildcards to the beginning and end of the string. Without diving into the binaries of the agent plugins, it is difficult to assess exactly how the agent evaluates this particular metric.
One suggestion I would have is to go through the agent commands. There are specific commands you can run to manually force an agent to evaluate a particular metric for a particular target. This can allow you to manually trigger the metric collection locally on the server and evaluate what exactly is being performed at the agent level.
On the system I was running (12c) the command was as follows:
emctl control agent runCollection <hostname>:host host_storage
