Getting a Heartbeat message error in a TIBCO custom adapter

I am running a TIBCO custom adapter on an AS400 server. There was no issue at first, but it suddenly started giving the error below.
Could anyone tell me what I can check to fix this issue?
Server
AS400/iSeries
2017-11-30 13:12:48,927 INFO
[com.eaton.icc.tibco.as400.adapter.AS400WisperReceiipt] - WisperReceipt send completed
2017-11-30 13:12:50,091 INFO
[com.eaton.icc.tibco.as400.adapter.AS400WisperPart]] - WisperPart send
completed
2017-11-30 13:12:50,091 WARN
[com.eaton.icc.tibco.as400.adapter.AS400WisperPart]] - Method pool()
completed successfully
2017-11-30 13:12:57,187 ERROR
[com.eaton.icc.tibco.as400.adapter.AS400Monitor] - Exception sending
heartbeat message
com.tibco.sdk.MException: Operation error: unable to create Tibrv Session
for PubHawkMonitor(MPublisher).
at com.tibco.sdk.events.pubsub.MRvPublisher.send(MRvPublisher.java:76)
at com.tibco.sdk.events.MPublisher.send(MPublisher.java:346)
at com.eaton.icc.tibco.as400.adapter.AS400Monitor.onEvent(AS400Monitor.java:227)
at com.tibco.sdk.events.EventHandoff.run(MEventSource.java:141)
at java.lang.Thread.run(Thread.java:809)

Check the RV (TIBCO Rendezvous) parameters of the adapter.
The adapter is not able to start the Rendezvous session. Please share the adapter's configuration.
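For reference, a Rendezvous session is typically defined by three transport parameters: service, network, and daemon. A sketch of what the session settings to check might look like (the property names here are illustrative, not the actual SDK keys, so map them onto your adapter's own configuration file):

```properties
# Illustrative RV transport settings -- verify these against your adapter's
# actual configuration. A wrong daemon address, or an rvd daemon that is not
# running on the AS400, will cause "unable to create Tibrv Session" errors.
rv.service = 7500          # UDP service (port) used by rvd
rv.network = ;             # network/interface specification
rv.daemon  = tcp:7500      # how the adapter reaches the local rvd
```

Since the heartbeat publisher failed specifically, also confirm that the monitoring/Hawk session uses the same (reachable) daemon as the data sessions that were still sending successfully.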

Related

NativeIoException:io.netty.channel.unix.Errors$NativeIoException recvAddress(..) failed: Connection reset by peer

I am trying to find the solution for the error below, which occurs in our AppDynamics logs when we perform load testing with JMeter at 5 TPS. I am using Spring Cloud Gateway 2.7.8 and Netty 4.1.87.Final for routing. We do not see the same error in our Kubernetes logs or in Kibana, and I am not able to trace where the log entry is coming from.
NativeIoException:io.netty.channel.unix.Errors$NativeIoException recvAddress(..) failed: Connection reset by peer Error capture limit has been reached, this stack trace is truncated.
Can someone help me understand why this error is occurring?

How to stop Kafka producer messages (Debezium + Azure EventHub)

I have set up Debezium and Azure Event Hubs as a CDC engine from PostgreSQL,
exactly like in this tutorial: https://dev.to/azure/tutorial-set-up-a-change-data-capture-architecture-on-azure-using-debezium-postgres-and-kafka-49h6
Everything was working well until I changed something (I don't know exactly what).
Now my kafka-connect log is spammed with the WARN entry below and CDC has stopped working:
[2022-03-03 08:31:28,694] WARN [dbz-ewldb-connector|task-0] [Producer clientId=connector-producer-dbz-ewldb-connector-0] Got error produce response with correlation id 2027 on topic-partition ewldb-0, retrying (2147481625 attempts left). Error: REQUEST_TIMED_OUT (org.apache.kafka.clients.producer.internals.Sender:616)
[2022-03-03 08:31:28,775] WARN [dbz-cmddb-connector|task-0] [Producer clientId=connector-producer-dbz-cmddb-connector-0] Got error produce response with correlation id 1958 on topic-partition cmddb-0, retrying (2147481694 attempts left). Error: REQUEST_TIMED_OUT (org.apache.kafka.clients.producer.internals.Sender:616)
[2022-03-03 08:31:28,800] WARN [dbz-ewldb-connector|task-0] [Producer clientId=connector-producer-dbz-ewldb-connector-0] Got error produce response with correlation id 2028 on topic-partition ewldb-0, retrying (2147481624 attempts left). Error: REQUEST_TIMED_OUT (org.apache.kafka.clients.producer.internals.Sender:616)
[2022-03-03 08:31:28,880] WARN [dbz-cmddb-connector|task-0] [Producer clientId=connector-producer-dbz-cmddb-connector-0] Got error produce response with correlation id 1959 on topic-partition cmddb-0, retrying (2147481693 attempts left). Error: REQUEST_TIMED_OUT (org.apache.kafka.clients.producer.internals.Sender:616)
These messages appear even after I delete the Kafka connectors.
Restarting Kafka and Kafka Connect does not help.
How can I stop these retries?
The only workaround that helps is to:
1. Delete the connector via the Debezium API
2. Stop Kafka Connect
3. Delete the Event Hub
4. Start Kafka Connect
5. Re-add the connector via the Debezium API
To permanently change the retry behavior, change the producer's retries parameter:
producer.retries=10 (by default it is set to over 2 billion, causing the spam in kafka-connect.log)
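This setting goes in the Connect worker configuration (e.g. connect-distributed.properties); worker-level producer.* properties are passed through to the producers that connectors use. A sketch, assuming defaults everywhere else:

```properties
# Bound the number of produce retries instead of the default
# (Integer.MAX_VALUE, i.e. 2147483647 -- the count seen in the WARN lines).
producer.retries=10
# Alternatively (and usually preferable), bound the total delivery time;
# the producer stops retrying once this elapses (default 120000 ms).
producer.delivery.timeout.ms=120000
# Per-request timeout; each expiry produces one REQUEST_TIMED_OUT retry.
producer.request.timeout.ms=30000
```

Note that bounding retries only silences the symptom: with Event Hubs the repeated REQUEST_TIMED_OUT usually points at the broker endpoint or throughput limits, which is worth investigating separately.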

NIFI IMAP Consume Processor Issue

We are using the ConsumeIMAP processor in our NiFi pipeline to read email from Office 365, and we have been observing errors while it consumes from the Office 365 mailbox.
Please find the error log below for reference.
2021-01-04 11:00:00,286 ERROR [Timer-Driven Process Thread-6] o.a.nifi.processors.email.ConsumeIMAP ConsumeIMAP[id=c31e4176-842d-3464-b870-2460ee675eee] Failed to receive messages from Email server: [javax.mail.MessagingException - A3 BAD Request is throttled. Suggested Backoff Time: 68448 milliseconds
2021-01-04 11:00:00,286 ERROR [Timer-Driven Process Thread-6] o.a.nifi.processors.email.ConsumeIMAP ConsumeIMAP[id=c31e4176-842d-3464-b870-2460ee675eee] Failed to process session due to org.apache.nifi.processor.exception.ProcessException: Failed to receive messages from Email server: [javax.mail.MessagingException - A3 BAD Request is throttled. Suggested Backoff Time: 68448 milliseconds: org.apache.nifi.processor.exception.ProcessException: Failed to receive messages from Email server: [javax.mail.MessagingException - A3 BAD Request is throttled. Suggested Backoff Time: 68448 milliseconds
org.apache.nifi.processor.exception.ProcessException: Failed to receive messages from Email server: [javax.mail.MessagingException - A3 BAD Request is throttled. Suggested Backoff Time: 68448 milliseconds
at org.apache.nifi.processors.email.AbstractEmailProcessor.fillMessageQueueIfNecessary(AbstractEmailProcessor.java:328)
at org.apache.nifi.processors.email.AbstractEmailProcessor.receiveMessage(AbstractEmailProcessor.java:381)
Below are the NiFi processor properties:
[screenshot of the ConsumeIMAP configuration omitted]
Please let us know if we are missing some configuration.
This is not an error specific to NiFi - O365 is telling you that you are being throttled. You will need to tune your O365 settings appropriately, which is outside of the scope of NiFi.
See:
https://learn.microsoft.com/en-gb/archive/blogs/exchangedev/exchange-online-throttling-and-limits-faq
https://learn.microsoft.com/en-us/exchange/mailbox-migration/office-365-migration-best-practices#office-365-throttling
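One mitigation on the NiFi side (it does not raise O365's limits, it just respects them) is to schedule ConsumeIMAP less aggressively, so that polls stay outside the suggested backoff window. A sketch, with a run schedule chosen to exceed the 68448 ms backoff reported in the log:

```
ConsumeIMAP -> Configure -> Scheduling tab
  Scheduling Strategy: Timer driven
  Run Schedule:        90 sec    (comfortably above the ~68 s suggested backoff)
```

The exact value is an assumption based on the backoff in this particular log; O365 may suggest different backoff times under different load.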

Snowflake JDBC Connection using Talend

I am using the tDBConnection (JDBC) component to connect to Snowflake.
After all the setup, I am getting the error below:
net.snowflake.client.jdbc.RestRequest execute
SEVERE: Stop retrying since elapsed time due to network issues has reached
timeout. Elapsed: 71,253(ms), timeout: 60,000(ms)
Exception in component tDBConnection_1 (sf_test)
java.lang.RuntimeException: JDBC driver encountered communication error. Message: Exception encountered for HTTP request: Certificate for <xxxxxxxxxx.ap-southeast-1.snowflakecomputing.com> doesn't match any of the subject alternative names: [*.ap-southeast-1.snowflakecomputing.com, *.global.snowflakecomputing.com, *.sg.ap-southeast-1.aws.snowflakecomputing.com].
at snowflake_poc.sf_test_0_1.sf_test.tDBConnection_1Process(sf_test.java:397)
at snowflake_poc.sf_test_0_1.sf_test.runJobInTOS(sf_test.java:700)
at snowflake_poc.sf_test_0_1.sf_test.main(sf_test.java:550)
I am using the latest Snowflake JDBC driver, 3.12.2.
Any leads would be really helpful.
Thanks
I had a similar issue. In the Talend Account field, give only your account name, which is "xxxxxxxxxx.ap-southeast-1"; do not append snowflakecomputing.com to it.
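The reason this matters: the component builds the JDBC URL by appending .snowflakecomputing.com to the account value, so an account value that already contains the domain produces a hostname that matches none of the certificate's subject alternative names. A minimal sketch of the effect (the helper and account values are illustrative):

```java
public class SnowflakeUrl {
    // Builds the JDBC URL the way the Talend component effectively does:
    // the ".snowflakecomputing.com" suffix is appended to the account value.
    static String jdbcUrl(String account) {
        return "jdbc:snowflake://" + account + ".snowflakecomputing.com/";
    }

    public static void main(String[] args) {
        // Correct: account name only -> a host covered by the wildcard cert.
        System.out.println(jdbcUrl("xxxxxxxxxx.ap-southeast-1"));
        // Wrong: domain already included -> duplicated suffix in the host,
        // which fails TLS hostname verification against the cert's SANs.
        System.out.println(jdbcUrl("xxxxxxxxxx.ap-southeast-1.snowflakecomputing.com"));
    }
}
```

The wildcard `*.ap-southeast-1.snowflakecomputing.com` in the error covers only one extra label, which is why the doubled suffix is rejected.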

Socket Exception : recv failed with oracle thin driver

I am facing an issue where my test suite randomly fails with a socket exception:
oracle.jdbc.driver.T4CStatement 1267 - Throwing SQLException: java.net.SocketException: Software caused connection abort: recv failed
The test suite fails with this exception when a given set of test cases is executed in a particular order. I got the above error log after enabling the Oracle JDBC driver logs. The query that leads to this error is always a DROP SEQUENCE query. There is nothing special about this query, since it is fired 'n' times during the execution flow.
One blog post points out that the above error occurs because the server-side socket gets closed before the client expects it. To troubleshoot this further, I tried analyzing the Oracle TNS Listener log (the listener.log file), but I was not able to gather much information, since the log only contained entries for the socket CONNECT calls.
What could be the possible causes of the above error, in addition to the one the blog post mentions?
How can I configure the Oracle TNS Listener to provide more detailed information about the socket communication? For example, trace information when the server socket close event is fired.
I would appreciate it if anyone could point out a possible cause of this error, or provide more information that could help me troubleshoot further based on the two points above.
You can set the trace level if you have access to the lsnrctl utility:
LSNRCTL> show trc_level
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=xxx)(PORT=1521)))
LISTENER parameter "trc_level" set to off
The command completed successfully
LSNRCTL> set trc_level admin
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=xxx)(PORT=1521)))
LISTENER parameter "trc_level" set to admin
The command completed successfully
LSNRCTL> show trc_level
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=xxx)(PORT=1521)))
LISTENER parameter "trc_level" set to admin
The command completed successfully
LSNRCTL>
From the docs, trc_level is one of:
Specify one of the following trace levels:
off for no trace output
user for user trace information
admin for administration trace information
support for Oracle Support Services trace information
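The same trace level can be made persistent in listener.ora instead of being set through lsnrctl (the parameter name is suffixed with your listener's name, LISTENER in this sketch; the directory path is illustrative):

```
# listener.ora -- persists across listener restarts, unlike "set trc_level"
TRACE_LEVEL_LISTENER = ADMIN
# Optional: control where the trace files are written
TRACE_DIRECTORY_LISTENER = /u01/app/oracle/network/trace
TRACE_FILE_LISTENER = listener
```

Be aware that admin- and support-level tracing can generate large files quickly, so turn it back off once you have captured the failing sequence.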
