Task fails with no way to debug - chainlink

I have a node running two jobs: they communicate with an external adapter and then send the value on-chain.
One job works fine, which already tells me that the node can write on-chain.
The other job receives the request and talks with the external adapter (I have verified this on the external adapter's server), but then doesn't submit anything on-chain.
There seems to be no way to debug this through the Operator UI.
What should I do? I am running the Chainlink develop version because the most up-to-date stable version has a critical bug.

In Chainlink node version 1.8.0, there are "Error" and "Runs" tabs in the node UI in your browser, and these two tabs let you view what went wrong with a job run. You can find the latest Chainlink Docker image on Docker Hub.
The error messages under the "Error" tab reflect the errors your job encountered during the run.
If there are no "Error" and "Runs" tabs in the browser, or there is nothing shown in the UI, you can also find error info in the log file kept on the server running the Chainlink node. The default path of the Chainlink node log file is /chainlink/chainlink_debug.log, so you can log into the server running the node and check that log for debugging.
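If the node runs in Docker, a quick way to follow that file from the host is something like the sketch below (the container name chainlink is just an assumption; substitute your own):
$ docker exec -it chainlink tail -f /chainlink/chainlink_debug.log
$ docker logs -f chainlink
The second command follows the container's stdout instead, which carries much of the same error information.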
Hope it helps.

Related

Can golang/build/cmd/coordinator be run locally?

I am using the instructions on https://pkg.go.dev/golang.org/x/build/cmd/coordinator#section-readme
to run a coordinator and buildlet locally. I have tried on both Darwin and Linux, but in both cases the coordinator displays an error message saying: bad build params
The buildlet error, in summary, is: connection refused
I suspect the coordinator is the issue because the buildlet works fine locally when the coordinator is not needed.
https://localhost:8119 is not working either.
It seems to me this could be an issue worth filing. Are these instructions currently working for anyone?
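For reference, this is a generic way to check whether anything is actually listening on that port (plain curl, nothing specific to the coordinator; -k skips certificate verification, since a dev setup may use a self-signed certificate):
$ curl -vk https://localhost:8119/
A "connection refused" here would suggest the coordinator never bound the port, while a TLS or HTTP error would at least mean it is listening.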

Mesos framework stays inactive due to "Authentication failed: EOF"

I'm currently trying to deploy Eremetic (version 0.28.0) on top of Marathon, using the configuration provided as an example. I was actually able to deploy it once, but after trying to redeploy it, the framework now stays inactive.
By inspecting the logs I noticed a constant attempt to connect to some service that apparently never succeeds because of some authentication problem.
2017/08/14 12:30:45 Connected to [REDACTED_MESOS_MASTER_ADDRESS]
2017/08/14 12:30:45 Authentication failed: EOF
It looks like the service returning the error is ZooKeeper; more precisely, the error can be traced back to this line in the Go ZooKeeper library. ZooKeeper itself seems to work, however: I've tried querying it directly with zkCli and running a small Spark job (where the Mesos master is given as a zk:// URL), and everything works.
Unfortunately I'm not able to diagnose the problem any further. What could it be?
It turned out to be a configuration problem: the master URL was simply wrong, and this is just how the error was reported.
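For anyone debugging something similar: one way to cross-check the configured master URL is to read the znodes the Mesos masters register in ZooKeeper. The /mesos path, host name, and znode sequence number below are assumptions based on Mesos defaults; adjust them to match your zk:// URL and use the lowest-numbered entry from the ls output:
$ zkCli.sh -server zk1.example.com:2181 ls /mesos
$ zkCli.sh -server zk1.example.com:2181 get /mesos/json.info_0000000001
The json.info_* znode of the leading master contains the address the framework should be connecting to, so it can be compared directly against the URL in your configuration.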

Viewing TeamCity service messages

I'm troubleshooting a build step in TeamCity 9.0.4. The problem seems to lie within the service message output. Is it possible to view these after the build has completed? They are not included in the build log.
The documentation on service messages simply says "In order to be processed by TeamCity, they should be printed into a standard output stream of the build."
https://confluence.jetbrains.com/display/TCD9/Build+Script+Interaction+with+TeamCity
(To some extent the service messages can be viewed by manually rerunning the build step and monitoring standard output, but this is not always feasible.)
The documentation for service messages implies that you need to write them to standard out/error rather than to a log file. If you write a message to standard out, TeamCity will automatically pick it up and show it in the Build Log tab.
What this means is that if you have a:
- shell script, use echo for your service messages
- Java class, use System.out.println
and so on.
Different languages also have different plugins for this; for example, Perl has TapHarness.pl to write TeamCity messages to the console.
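For instance, a minimal shell build step could emit messages like this (the key and value are made-up placeholders; progressMessage and buildStatisticValue are standard TeamCity service messages):
#!/bin/sh
# Report progress in the build log
echo "##teamcity[progressMessage 'compiling sources']"
# Report a custom statistic value
echo "##teamcity[buildStatisticValue key='myMetric' value='125']"
As long as these land on standard output, TeamCity picks them up without any extra configuration.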
EDIT:
If you just want to view service messages, you can find them in the build logs on the TeamCity agent that the build ran on. If you do not find them in the build logs, either the build log has rolled over or you need to increase the verbosity or debug level of your logs (this depends on the language).
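To search for them directly on the agent, something like the following works (the path assumes a default agent installation, where teamcity-build.log holds the log of the most recent build; <buildAgent_home> is a placeholder):
$ grep '##teamcity\[' <buildAgent_home>/logs/teamcity-build.log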
There was a problem with this which has since been solved:
TeamCity now parses service messages inside other service messages, but only if the original message was tagged with tc:parseServiceMessagesInside. Example:
##teamcity[testStdOut name='test1' out='##teamcity|[buildStatisticValue key=|'my_stat_value|' value=|'125|'|]' tc:tags='tc:parseServiceMessagesInside']
A link to JetBrains bug tracker:
https://youtrack.jetbrains.com/issue/TW-45311

Ruby Stack failed to deploy on Google Developers Console

I tried to deploy the Ruby stack using the Google Developers Console, but with no success. I tried several times in other projects too; the error was always the same (below).
Do you have any idea why it keeps failing?
2014/10/23 15:59:44
rubyStackBox: PENDING
2014/10/23 15:59:55~2014/10/23 16:06:01
rubyStackBox: DEPLOYING
2014/10/23 16:06:11
rubyStackBox: DEPLOYMENT_FAILED
Replica rubystackbox-eaeo failed with status PERMANENTLY_FAILING: Replica State changed to PERMANENTLY_FAILING. Replica was unhealthy 2 consecutive times.
I replicated the issue you experienced several times and the deployment also failed. What finally worked was playing with the zones/regions when deploying the Ruby stack:
Developers console > Click-to-deploy > Set MySQL password > Advanced Options, choose a different zone and click Deploy.
Another useful tool when investigating this is Console Output. Even if the deployment fails, you can go to the VM instance and check View Output towards the bottom of the page. It will list all the packages and any errors encountered. The following command will achieve the same thing:
$ gcloud compute instances get-serial-port-output <INSTANCE_NAME> --project <PROJECT_ID> --zone <ZONE_NAME>
Please advise if you are still seeing issues.

Slowness in the WebSphere Application Server Admin Console

This is a concern about the Admin Console's performance in WebSphere Application Server.
I can log in smoothly without any problem, but responses get very slow for operations such as showing node status by clicking "Nodes" under "System administration", showing application server status by clicking "Application Servers" under "Servers", and so on. The odd thing is that the problem is more serious on remote nodes than on the local nodes, which sit on the same box as the deployment manager (DmgrNode).
So I suspect it is a problem with the network communication between the DmgrNode and the remote nodes, but I don't know how to fix it.
Has anybody run into the same issue? Any idea how to figure it out?
When the console gets the list of servers to display, it does make MBean calls to all of the servers, and if there is a network problem between the dmgr and a node, this could cause some delay in displaying the server page. The Nodes page, however, should not have that issue. What is your topology? How many nodes/servers, and how many are local/remote?
How can you tell the problem is more serious on the remote nodes than the local nodes?
Are other console operations slower, or only the ones that display status? Do you have the same problem with wsadmin commands? The console issues a queryNames call to search for the server MBeans. Does the following wsadmin command run much more slowly against the remote node than the local node? If so, how much more slowly?
print AdminControl.queryNames('WebSphere:type=Server,node=myNode')
Replace myNode with either the local or remote node name.
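For a concrete comparison, the query can be run non-interactively and timed from the deployment manager profile's bin directory (the node names below are placeholders):
$ time ./wsadmin.sh -lang jython -c "print AdminControl.queryNames('WebSphere:type=Server,node=myRemoteNode')"
$ time ./wsadmin.sh -lang jython -c "print AdminControl.queryNames('WebSphere:type=Server,node=myLocalNode')"
A large gap between the two timings would support the network-communication theory.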
I assume you are using IE to open the console. Try firing up the console from Chrome and see if it helps.
