Cross-posting from https://groups.google.com/forum/#!topic/kythe/86kNuSCeorI, since the Beam FAQ directed me here for Beam questions.
In short, I can run a job written with the Go SDK successfully using the direct runner, but when I try the Dataflow runner I get the following error in the Google Cloud console:
2019-02-17 (12:03:53) Step with name e19 already exists. Duplicates are not allowed.
I've attached the plan that was printed to stderr at https://pastebin.com/vpu3U52j; grepping it for e19: https://pastebin.com/L24L1guT.
I'm not very familiar with Beam yet. Which part is responsible for generating the step names, and what are likely causes of a collision?
Thank you!
It was actually a bug; I sent a PR to Beam.
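For anyone else wondering where the step names come from: in the Go SDK they are derived from the pipeline's scope hierarchy, so one way to keep them distinct is to apply each transform under an explicitly named sub-scope. A minimal sketch, assuming the older github.com/apache/beam/sdks/go import path and made-up scope names:

```go
package main

import (
	"context"
	"strings"

	"github.com/apache/beam/sdks/go/pkg/beam"
	"github.com/apache/beam/sdks/go/pkg/beam/x/beamx"
	"github.com/apache/beam/sdks/go/pkg/beam/x/debug"
)

func main() {
	beam.Init()

	p := beam.NewPipeline()
	root := p.Root()

	// Each transform applied under a named sub-scope picks up that name,
	// which keeps the generated step names distinct.
	lines := beam.Create(root.Scope("CreateInput"), "foo", "bar")
	upper := beam.ParDo(root.Scope("ToUpper"), strings.ToUpper, lines)
	debug.Discard(root.Scope("Discard"), upper)

	if err := beamx.Run(context.Background(), p); err != nil {
		panic(err)
	}
}
```

In this case the collision turned out to be a bug in Beam itself (per the update above), but explicit scopes also make the printed plan easier to read.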
I'm a bit new to this, so apologies in advance if this question is a bit vague or ill-defined. I'm trying to set up a new Chainlink OCR node and add a job to it. I have the node running and can successfully add a job, but almost immediately I see the error:
TrackConfig: error during LatestConfigDetails()
Has anyone experienced this, or does anyone know what config details it's referring to? I'm not sure where to even start troubleshooting. Thanks so much!
Typically, this means that there is an issue with the OCR configuration on-chain, or your ETH node is out of sync.
Please verify that your node is fully synced and up to date.
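One quick way to check the sync state yourself is to call the node's JSON-RPC eth_syncing method, which returns false once the node has caught up. A minimal sketch in Go; the URL is a placeholder for your own ETH node endpoint:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder: point this at your own ETH node's JSON-RPC endpoint.
	url := "http://localhost:8545"

	// eth_syncing returns false when fully synced, or an object with
	// currentBlock/highestBlock while the node is still catching up.
	payload := []byte(`{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}`)

	resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```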
SDK: Apache Beam SDK for Go 0.5.0
We are running Apache Beam Go SDK jobs in Google Cloud Dataflow. They had been working fine until recently, when they intermittently stopped working (no changes were made to code or config). The error that occurs is:
Failed to retrieve staged files: failed to retrieve worker in 3 attempts: bad MD5 for /var/opt/google/staged/worker: ..., want ; bad MD5 for /var/opt/google/staged/worker: ..., want ;
(Note: it seems as if the second hash value is missing from the error message.)
As best I can guess, there's something wrong with the worker: it seems to be trying to compare MD5 hashes of the worker binary, and one of the values is missing. I don't know exactly what it's comparing against, though.
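For context, the check referred to in the error presumably works along these lines: compute an MD5 of the staged worker binary and compare it against the hash recorded when the files were staged. A rough sketch, not the actual Dataflow staging code; the expected value here is a placeholder:

```go
package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
	"os"
)

func main() {
	// Path of the staged worker binary on the Dataflow VM, per the error message.
	data, err := os.ReadFile("/var/opt/google/staged/worker")
	if err != nil {
		panic(err)
	}

	// The expected hash would come from the staging metadata; this is a placeholder.
	want := "expected-md5-from-staging-metadata"

	sum := md5.Sum(data)
	got := base64.StdEncoding.EncodeToString(sum[:])
	if got != want {
		fmt.Printf("bad MD5 for worker: got %s, want %s\n", got, want)
	}
}
```

An empty "want" value, as in the error above, would suggest the expected hash never made it into the staging metadata in the first place.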
Does anybody know what could be causing this issue?
The fix to this issue seems to have been rebuilding the worker_harness_container_image with the latest changes. I had tried this before, but I didn't have the latest release when I built it locally. After I pulled the latest from the Beam repo and rebuilt the image (as per the notes here: https://github.com/apache/beam/blob/master/sdks/CONTAINERS.md), rerunning the job worked again.
I'm seeing the same thing. If I look into the Stackdriver logs, I see this:
Handler for GET /v1.27/images/apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515/json returned error: No such image: apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515
However, I can pull the image just fine locally. Any ideas why Dataflow cannot pull it?
I set up Apache OpenWhisk locally following this guide: http://jamesthom.as/blog/2018/01/19/starting-openwhisk-in-sixty-seconds/. In general it seems to work correctly, but whenever I try to execute any commands related to the API, e.g.
wsk -i api list
it gives me this error:
Unable to obtain the API list: The requested resource does not exist. (code 153)
Any idea how to fix this?
Unfortunately, this is a temporary issue with docker-compose; work is in progress to fix it.
I'm deploying a project with IIB.
The feature I'm using is Integration Service, but I don't know how to save a log before and after each operation.
Does anyone know how to resolve this?
Thanks!
There are three approaches I have used in my projects:
Code level
1. JavaComputeNode (using Log4j)
Flow level
1. TraceNode
2. Message Flow Monitoring
In addition to the other answers, there is one more option, which I often use: the IAM3 SupportPac.
It adds a Log4j node and also makes it possible to log from ESQL and Java Compute nodes.
There are two ways of doing this:
You can use the Log Node to create audit logging. This option only stores to files, and the files are not rotated.
You can use IBM Integration Bus monitoring events to create an external flow that intercepts messages and stores them in whatever way you prefer.
I'm troubleshooting a build step in TeamCity 9.0.4. The problem seems to lie within the service message output. Is it possible to view these after the build has completed? They are not included in the build log.
The documentation on service messages simply says, "In order to be processed by TeamCity, they should be printed into a standard output stream of the build."
https://confluence.jetbrains.com/display/TCD9/Build+Script+Interaction+with+TeamCity
(To some extent the service messages can be viewed by manually rerunning the build step and monitoring standard output, but this is not always feasible.)
The documentation for service messages implies that you need to write them to standard out/error rather than to a log file. If you write them to standard out, TeamCity will automatically pick them up and show them in the **Build Log** tab.
What this means is that if you have a shell script, use echo for your service messages; if you have a Java class, use System.out.println; and so on (see the sketch below).
Different languages also have different plugins for this; for example, Perl has TapHarness.pl to write TeamCity messages to the console.
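Since the messages are just specially formatted lines on stdout, any language can emit them. A minimal sketch in Go (the statistic key and value are made up for illustration):

```go
package main

import "fmt"

func main() {
	// TeamCity picks up any stdout line in the ##teamcity[...] format.
	// The statistic key and value here are purely illustrative.
	fmt.Println("##teamcity[buildStatisticValue key='filesProcessed' value='42']")
}
```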
EDIT:
If you just want to view service messages, you can find them in the build logs on the TeamCity agent that the build ran on. If you do not find them there, either the build log has rolled over or you need to increase the verbosity or debug level of your logs (depending on the language).
There was a problem with this, which has since been solved:
TeamCity now parses service messages inside other service messages, but only if the original message is tagged with tc:parseServiceMessagesInside. Example:
##teamcity[testStdOut name='test1' out='##teamcity|[buildStatisticValue key=|'my_stat_value|' value=|'125|'|]' tc:tags='tc:parseServiceMessagesInside']
A link to the JetBrains bug tracker:
https://youtrack.jetbrains.com/issue/TW-45311