How to debug a Spring Cloud Data Flow sink application that is silently failing to write

I have a sink application that fails to write to the db, but I am having trouble debugging it. Note that I also asked a more specific question here, but this SO question is more general: how should I go about debugging an SCDF stream pipeline when no errors come up?
What I'm trying to do
I am trying to follow a tutorial (specifically, this tutorial) which uses some prebuilt applications. Everything is up and running with no error messages, and the source application is correctly writing to Kafka. However, the sink seems to be failing to write anything.
Note that I do see the debugging guide here:
https://dataflow.spring.io/docs/stream-developer-guides/troubleshooting/debugging-stream-apps/#sinks
However, this seems to only be relevant when you are writing your own sink.
I am not asking about how to solve my issue per se, but rather about debugging protocol for SCDF apps in general. What is the best way to go about debugging in these kinds of situations where no errors come up but the core functionality isn't working?

Assuming you know how to view the logs and there are no error messages, the next step is to turn on DEBUG logging for Spring Integration. You can set the property logging.level.org.springframework.integration=DEBUG on the sink, which will log any messages coming into it.
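For example, assuming the tutorial's stream was deployed under a hypothetical name such as mystream and the sink appears in the stream definition as log (adjust both to your actual stream and app names), the property can be passed as a deployment property from the SCDF shell:

stream deploy --name mystream --properties "app.log.logging.level.org.springframework.integration=DEBUG"

After redeploying, the sink's log should show each message arriving from the middleware, which tells you whether the problem is upstream (nothing arrives at the sink) or inside the sink itself (messages arrive but the write fails).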

Related

Kafka Streams Add New Source to Running Application

Is it possible to add another source topic to the existing topology of a running Kafka Streams Java application? Based on the Javadoc (https://kafka.apache.org/23/javadoc/org/apache/kafka/streams/KafkaStreams.html), I am guessing the answer is no.
My Use Case:
A REST API call signals that a new source topic should be processed by an existing processor. Source topics are stored in a DB and used to generate the topology.
I believe the only option is to shut down the app and restart it, allowing the new topic to be picked up.
Is there any option to add the source topic without shutting down the app?
You cannot modify the program while it's running. As you point out, to change anything, you need to stop the program and create a new Topology. Depending on your program and the change, you might actually need to reset the application before restarting it. Cf. https://docs.confluent.io/current/streams/developer-guide/app-reset-tool.html
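As a rough sketch of that stop-and-rebuild approach using the Streams DSL (the topic list, configuration, and processing logic below are placeholders, not your actual program):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;

import java.util.List;
import java.util.Properties;

public class RestartableStreams {

    private KafkaStreams streams;

    // Rebuild the topology from the current list of source topics (e.g. read from the DB) and restart.
    public synchronized void restartWithTopics(List<String> sourceTopics, Properties config) {
        if (streams != null) {
            streams.close();              // stop the running instance first
        }
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream(sourceTopics)      // subscribe to all topics known so far
               .foreach((key, value) -> process(key, value)); // placeholder processing
        Topology topology = builder.build();
        streams = new KafkaStreams(topology, config);
        streams.start();
    }

    private void process(Object key, Object value) {
        // existing processor logic goes here
    }
}

If the topology is stateful, keep in mind the application reset tool linked above may be needed before restarting with a changed topology.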

ruby-kafka: is it possible to publish to two kafka instances at the same time

The current flow of the project I'm working on involves pushing to a local Kafka using the ruby-kafka gem.
Now the need has arisen to add a producer for a remote Kafka and to duplicate the messages there as well.
I'm looking for a better way than calling Kafka.new(...) twice...
Could you please help me? Do you happen to have any ideas?
Another approach to consider would be writing the data once from your application and then asynchronously replicating the messages from one Kafka cluster to another. There are multiple ways of doing this, including Apache Kafka's MirrorMaker, Confluent's Replicator, Uber's uReplicator, etc.
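As a rough illustration of the MirrorMaker route (this assumes MirrorMaker 2; the cluster aliases, bootstrap servers, and topic pattern are placeholders to adapt, and you should check the documentation for your Kafka version), a replication config could look something like:

clusters = local, remote
local.bootstrap.servers = localhost:9092
remote.bootstrap.servers = remote-broker:9092
local->remote.enabled = true
local->remote.topics = my-topic.*

You would then run this with Kafka's connect-mirror-maker.sh script, leaving your Ruby code producing only to the local cluster.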
Disclaimer: I work for Confluent.

How to get errors captured from nifi logs specific to my application when multiple nifi applications are running

We have multiple teams' NiFi applications running on the same NiFi machine. Is there any way to capture logs specific to my application? Also, by default the nifi-app.log file makes it difficult to track issues, and the bulletin board shows error messages for only 5 minutes. How can the errors be captured and an email alert sent in NiFi?
Please help me to get through this. Thanks in advance!
There are a couple of ways to approach this. One is to route failure relationships from processors to a PutEmail processor, which can send an alert on errors. Another is to use a custom reporting task to alert a monitoring service when a certain number of flowfiles are in an error queue.
Finally, we have heard that in multitenant environments, log parsing is difficult. While NiFi aims to reduce or completely eliminate the need to visually inspect logs by providing the data provenance feature, in the event you do need to inspect the logs, we recommend searching the log by processor ID to isolate relevant messages. You can also use NiFi itself to ingest those same logs and perform parsing and filtering activities if desired. Future versions may improve this experience.
By parsing the NiFi log, you can separate out the entries specific to your team's applications, using the process group ID and the NiFi REST API. See the link below for a NiFi template and Python code that address this issue:
https://link.medium.com/L6IY1wTimV
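As a rough sketch of the REST-based approach (the host, port, and process group ID below are placeholders, and the bulletin-board endpoint and its groupId parameter should be verified against the REST API docs for your NiFi version), you can pull only your group's bulletins and filter or forward them however you like:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class GroupBulletins {
    public static void main(String[] args) throws Exception {
        // Placeholder process group ID; find yours in the NiFi UI or via the REST API
        String groupId = "01681000-1234-1000-abcd-000000000000";
        URL url = new URL("http://localhost:8080/nifi-api/flow/bulletin-board?groupId=" + groupId + "&limit=100");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");

        // Print the JSON bulletin list; in a real setup you would parse it and send alerts (e.g. email)
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}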
You can route all the errors in a process group to the same processor. It could be a regular UpdateAttribute or a custom processor; this processor adds the path and all the relevant information, then sends the flowfile to a general error/logging flow. That flow inspects the error information inside the flowfile and decides whether to send an email, to whom, and so on.
With this approach the system stays simple and entirely inside NiFi, so you don't add more layers of complexity, and you end up with only one error-handling processor per process group.
This is the way we are managing errors in my company.

What does Actor[akka://play/deadLetters].tell() mean in a New Relic's trace of a Play Framework 2.0 web transaction?

I have a Play Framework 2.0 Java application hosted on Heroku, and I am monitoring it using the free-tier New Relic addon. For most of the transactions, a majority of the time is spent in what New Relic labels as Actor[akka://play/deadLetters].tell(). What is the application actually doing during this time?
As a simple description, Akka (http://en.wikipedia.org/wiki/Akka_(toolkit); http://akka.io/) is part of the Play framework as one of its integrations. Because the Play application is instrumented for monitoring, HTTP requests made by Akka are traced as a web transaction. In short, we measure it. As for what it is specifically doing, I recommend checking the Play documentation or the Akka link from the first sentence.
If you have a Java agent version older than 3.2.0, upgrading the Java agent will give you the following change:
akka.actor.ActorKilledException is now ignored by default
The ActorKilledException is commonly thrown in Play applications as a
control mechanism in normally functioning applications. In previous
versions, this exception inflated the reported error rate. These
exceptions are now ignored by default. You can override the default
ignore_errors list to provide your own exceptions or to omit the
ActorKilledException.
Let us know if this information is helpful or if you need additional assistance.
Jeanie Swan
New Relic Support
I'm not very familiar with how New Relic collects data; however, deadLetters is a special actor that receives "all messages that were sent to a dead (or non-existent) Actor". You can read more about dead letters in the official docs.
For example, you can subscribe to these dead letters and print them, which should give you enough information to track down their source and fix it. A typical case where many dead letters are encountered is when messages keep being sent to an actor that has already stopped; printing the dead letters should make this easy to detect.
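A minimal sketch of that subscription using the Java API of a recent Akka version (Play 2.0 bundles an older Akka where the listener would extend UntypedActor instead, but the event-stream subscription is the same idea); the system and actor names here are arbitrary:

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.DeadLetter;
import akka.actor.Props;

public class DeadLetterMonitor extends AbstractActor {

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(DeadLetter.class, dl ->
                        // Log who sent what to a dead or non-existent actor
                        System.out.println("Dead letter from " + dl.sender()
                                + " to " + dl.recipient() + ": " + dl.message()))
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("play");
        ActorRef monitor = system.actorOf(Props.create(DeadLetterMonitor.class), "deadLetterMonitor");
        // Subscribe the monitor to all DeadLetter events published on the system's event stream
        system.eventStream().subscribe(monitor, DeadLetter.class);
    }
}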

WSO2 Workflow execution trace for debugging purposes

We are currently evaluating the latest WSO2 BPS 3.0 as an open source replacement for Oracle BPEL. So far I have been able to create and deploy a workflow on the BPS server. I was also able to test it, and everything seems to work fine.
The problem, however, is viewing the EXECUTION TRACE the way we can on the Oracle BPEL console.
I successfully enabled SOAP TRACING, but that only shows the SOAP messages coming into and going out of the BPEL process. What I would like is to see the output at each interim step of the workflow. Oracle does a wonderful job here: I can click on individual steps in the execution trace and view the output after each step. This is very important functionality, and I am surprised it is not enabled OUT OF THE BOX.
I also tried the steps in "BPEL Designer for Eclipse: how to debug a BPEL process" but still cannot get it to work.
Can somebody list the exact steps so I can visualize the output of every step in the workflow?
You can find the input and output messages corresponding to each activity in the instance view. From the management console, go to the instance view and click on a given process instance ID.
Also, by enabling the SOAP tracer from the management console, you can view the incoming and outgoing SOAP messages.
Additionally, you can enable message tracing at the log level to log all messages coming in and going out.
