We are using the ConsumeIMAP processor in our NiFi pipeline to read email from Office 365, and we have been observing issues in the IMAP processor while consuming email from the Office 365 mailbox.
Please find the error log below for your reference.
2021-01-04 11:00:00,286 ERROR [Timer-Driven Process Thread-6] o.a.nifi.processors.email.ConsumeIMAP ConsumeIMAP[id=c31e4176-842d-3464-b870-2460ee675eee] Failed to receive messages from Email server: [javax.mail.MessagingException - A3 BAD Request is throttled. Suggested Backoff Time: 68448 milliseconds
2021-01-04 11:00:00,286 ERROR [Timer-Driven Process Thread-6] o.a.nifi.processors.email.ConsumeIMAP ConsumeIMAP[id=c31e4176-842d-3464-b870-2460ee675eee] Failed to process session due to org.apache.nifi.processor.exception.ProcessException: Failed to receive messages from Email server: [javax.mail.MessagingException - A3 BAD Request is throttled. Suggested Backoff Time: 68448 milliseconds: org.apache.nifi.processor.exception.ProcessException: Failed to receive messages from Email server: [javax.mail.MessagingException - A3 BAD Request is throttled. Suggested Backoff Time: 68448 milliseconds
org.apache.nifi.processor.exception.ProcessException: Failed to receive messages from Email server: [javax.mail.MessagingException - A3 BAD Request is throttled. Suggested Backoff Time: 68448 milliseconds
at org.apache.nifi.processors.email.AbstractEmailProcessor.fillMessageQueueIfNecessary(AbstractEmailProcessor.java:328)
at org.apache.nifi.processors.email.AbstractEmailProcessor.receiveMessage(AbstractEmailProcessor.java:381)
Below are the NiFi processor properties:
[image.png - screenshot of the ConsumeIMAP processor properties]
Please let us know if we are missing some configuration in the above screenshot.
Thanks and Regards,
IBNEY HASAN
This is not an error specific to NiFi - O365 is telling you that you are being throttled. You will need to tune your O365 settings appropriately, which is outside of the scope of NiFi.
See:
https://learn.microsoft.com/en-gb/archive/blogs/exchangedev/exchange-online-throttling-and-limits-faq
https://learn.microsoft.com/en-us/exchange/mailbox-migration/office-365-migration-best-practices#office-365-throttling
I have set up Debezium and Azure Event Hubs as a CDC engine from PostgreSQL.
Exactly as in this tutorial: https://dev.to/azure/tutorial-set-up-a-change-data-capture-architecture-on-azure-using-debezium-postgres-and-kafka-49h6
Everything was working fine until I changed something (I don't know exactly what I changed).
Now my kafka-connect log is spammed with the WARN entries below and CDC has stopped working:
[2022-03-03 08:31:28,694] WARN [dbz-ewldb-connector|task-0] [Producer clientId=connector-producer-dbz-ewldb-connector-0] Got error produce response with correlation id 2027 on topic-partition ewldb-0, retrying (2147481625 attempts left). Error: REQUEST_TIMED_OUT (org.apache.kafka.clients.producer.internals.Sender:616)
[2022-03-03 08:31:28,775] WARN [dbz-cmddb-connector|task-0] [Producer clientId=connector-producer-dbz-cmddb-connector-0] Got error produce response with correlation id 1958 on topic-partition cmddb-0, retrying (2147481694 attempts left). Error: REQUEST_TIMED_OUT (org.apache.kafka.clients.producer.internals.Sender:616)
[2022-03-03 08:31:28,800] WARN [dbz-ewldb-connector|task-0] [Producer clientId=connector-producer-dbz-ewldb-connector-0] Got error produce response with correlation id 2028 on topic-partition ewldb-0, retrying (2147481624 attempts left). Error: REQUEST_TIMED_OUT (org.apache.kafka.clients.producer.internals.Sender:616)
[2022-03-03 08:31:28,880] WARN [dbz-cmddb-connector|task-0] [Producer clientId=connector-producer-dbz-cmddb-connector-0] Got error produce response with correlation id 1959 on topic-partition cmddb-0, retrying (2147481693 attempts left). Error: REQUEST_TIMED_OUT (org.apache.kafka.clients.producer.internals.Sender:616)
These messages appear even after I delete the Kafka connectors.
Restarting Kafka and Kafka Connect does not help.
How can I stop these retries?
The only workaround that helps is to:
1. Delete the connector via the Debezium API
2. Stop Kafka Connect
3. Delete the Event Hub
4. Start Kafka Connect
5. Add the connector again via the Debezium API
To permanently change the retry behaviour, change the following producer parameter (a configuration sketch follows below):
producer.retries=10 (by default it is set to over 2 billion, which causes the spam in kafka-connect.log)
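For reference, a minimal sketch of where this setting can live, assuming a distributed Connect worker; the file name, the delivery-timeout value and the override policy shown here are illustrative assumptions, not taken from the original setup:
# connect-distributed.properties (worker level: applies to the producers of all source connectors)
producer.retries=10
# optional: bound the total time spent retrying a single record
producer.delivery.timeout.ms=120000
# alternative: allow per-connector overrides from the worker config...
connector.client.config.override.policy=All
# ...and then set "producer.override.retries": "10" in the individual connector's JSON config.
Worker-level producer settings only take effect after the Connect worker is restarted.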
In my production environment I got the following error on my server:
Cannot forward to error page for request [/api/validation] as the response has already been committed. As a result, the response may have the wrong status code. If your application is running on WebSphere Application Server you may be able to resolve this problem by setting com.ibm.ws.webcontainer.invokeFlushAfterService to false
org.apache.catalina.connector.ClientAbortException: java.io.IOException: Connection reset by peer
Now I created a client that spawns 1000 threads every second to call [/api/validation].
The error I got was:
Exception in thread "Thread-9954" org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://localhost:7080/v1/name/validate": Timeout waiting for connection from pool; nested exception is org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool.
What I want to know is: what is the cause of "Connection reset by peer"?
As far as I know, this error occurs when the client aborts the connection by sending an RST packet.
I set the socket timeout of my client's RestTemplate to 9000 ms and make the server sleep for about 15000 ms. Shouldn't the server get "Connection reset by peer", since it tries to send the response after 15 seconds while my client only waits for about 9 seconds? Shouldn't I get the error?
Also, in the production environment the client's wait time (RestTemplate socket timeout) is set to about 90 seconds (more than the time the server needs to respond). Why is the error being produced in production?
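For context, a minimal sketch of how the client-side timeouts described above are typically wired into a RestTemplate backed by Apache HttpClient; the pool sizes and timeout values are illustrative assumptions, not the actual production configuration:

import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class ValidationClientConfig {
    public RestTemplate restTemplate() {
        // Connection pool shared by all client threads; 1000 threads per second will exhaust it,
        // which is what produces "Timeout waiting for connection from pool".
        PoolingHttpClientConnectionManager pool = new PoolingHttpClientConnectionManager();
        pool.setMaxTotal(200);
        pool.setDefaultMaxPerRoute(200);

        RequestConfig requestConfig = RequestConfig.custom()
                .setConnectTimeout(3000)            // time allowed to establish the TCP connection
                .setSocketTimeout(9000)             // time to wait for response data (the 9 s mentioned above)
                .setConnectionRequestTimeout(5000)  // time to wait for a free connection from the pool
                .build();

        CloseableHttpClient httpClient = HttpClients.custom()
                .setConnectionManager(pool)
                .setDefaultRequestConfig(requestConfig)
                .build();

        return new RestTemplate(new HttpComponentsClientHttpRequestFactory(httpClient));
    }
}

The socket timeout is what abandons the request after 9 seconds; the pool settings are where the ConnectionPoolTimeoutException above comes from.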
I am running a TIBCO custom adapter on an AS400 server. There was no issue at the start, but it suddenly started giving the issue below.
Could anyone tell me what I can check to fix this issue?
Server: AS400/iSeries
2017-11-30 13:12:48,927 INFO [com.eaton.icc.tibco.as400.adapter.AS400WisperReceiipt] - WisperReceipt send completed
2017-11-30 13:12:50,091 INFO [com.eaton.icc.tibco.as400.adapter.AS400WisperPart] - WisperPart send completed
2017-11-30 13:12:50,091 WARN [com.eaton.icc.tibco.as400.adapter.AS400WisperPart] - Method pool() completed successfully
2017-11-30 13:12:57,187 ERROR [com.eaton.icc.tibco.as400.adapter.AS400Monitor] - Exception sending heartbeat message
com.tibco.sdk.MException: Operation error: unable to create Tibrv Session for PubHawkMonitor(MPublisher).
at com.tibco.sdk.events.pubsub.MRvPublisher.send(MRvPublisher.java:76)
at com.tibco.sdk.events.MPublisher.send(MPublisher.java:346)
at com.eaton.icc.tibco.as400.adapter.AS400Monitor.onEvent(AS400Monitor.java:227)
at com.tibco.sdk.events.EventHandoff.run(MEventSource.java:141)
at java.lang.Thread.run(Thread.java:809)
Check the RV parameters of the adapter.
The adapter is not able to start the RV session. Please share the configuration of the adapter.
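As a first check (a sketch only; the service, network and daemon values below are assumptions, not taken from the adapter's actual configuration), verify from the adapter host that a Rendezvous daemon is reachable with the same session parameters the adapter uses, for example with the standard tibrvlisten utility:
tibrvlisten -service 7500 -network "" -daemon "tcp:7500" "TEST.HEARTBEAT"
If this fails with a transport-creation error, rvd is not running or not reachable with those parameters, which would be consistent with the "unable to create Tibrv Session" error above.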
I am using ActiveMQ v5.10.
I am unable to understand the root cause of an exception I am getting in my logs.
[20141116 13:07:30.288 EDT (ActiveMQ Broker[broker] Scheduler) org.apache.activemq.broker.region.Topic#doBrowse 615 WARN] - Failed to browse Topic: cometd.ProxyPush
java.io.EOFException: Chunk stream does not exist, page: 34 is marked free
at org.apache.activemq.store.kahadb.disk.page.Transaction$2.readPage(Transaction.java:470)
at org.apache.activemq.store.kahadb.disk.page.Transaction$2.<init>(Transaction.java:447)
at org.apache.activemq.store.kahadb.disk.page.Transaction.openInputStream(Transaction.java:444)
at org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:420)
at org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:377)
at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.loadNode(BTreeIndex.java:262)
at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.getRoot(BTreeIndex.java:174)
at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.iterator(BTreeIndex.java:232)
at org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex$MessageOrderIterator.<init>(MessageDatabase.java:2757)
at org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex.iterator(MessageDatabase.java:2739)
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$3.execute(KahaDBStore.java:526)
at org.apache.activemq.store.kahadb.disk.page.Transaction.execute(Transaction.java:779)
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.recover(KahaDBStore.java:522)
at org.apache.activemq.store.ProxyTopicMessageStore.recover(ProxyTopicMessageStore.java:62)
at org.apache.activemq.store.ProxyTopicMessageStore.recover(ProxyTopicMessageStore.java:62)
at org.apache.activemq.broker.region.Topic.doBrowse(Topic.java:578)
at org.apache.activemq.broker.region.Topic.access$100(Topic.java:65)
at org.apache.activemq.broker.region.Topic$6.run(Topic.java:703)
at org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Though this exception is infrequent, I am wondering what could be the cause of it.
Please note that broker and client communication is fine: the client is able to send and receive messages on that topic, but the exception keeps appearing. There is no durable subscriber on this topic, and messages sent on it are non-persistent.
You can have a look here. It seems to be a bug in the KahaDB persistence engine.
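If the KahaDB index has indeed become inconsistent, a commonly suggested mitigation is to let the broker check and repair the store on startup. A minimal sketch for activemq.xml; the directory value is an assumption, adjust it to your installation:
<persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"
            checkForCorruptJournalFiles="true"
            checksumJournalFiles="true"
            ignoreMissingJournalfiles="true"/>
</persistenceAdapter>
Another frequently mentioned recovery step is to stop the broker, back up the kahadb directory, delete the index file db.data, and let the broker rebuild the index from the journal files on the next start.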
Below is the relevant part of a QMGR log file about a WMQ channel issue:
-------------------------------------------------------------------------------
2012-7-23 10:35:25 - Process(340.1) User(MUSR_MQADMIN) Program(runmqchl.exe)
AMQ9206: Error sending data to host 86.0.223.5(1602).
EXPLANATION:
An error occurred sending data over TCP/IP to 86.0.223.5(1602). This may be due to
a communications failure.
ACTION:
The return code from the TCP/IP(send) call was 10054 X('2746'). Record these
values and tell your systems administrator.
----- amqccita.c : 2612 -------------------------------------------------------
2012-7-23 10:35:25 - Process(340.1) User(MUSR_MQADMIN) Program(runmqchl.exe)
AMQ9999: Channel program ended abnormally.
EXPLANATION:
Channel program 'CZWJNS.CZWJCZ' ended abnormally.
ACTION:
Look at previous error messages for channel program 'CZWJNS.CZWJCZ' in the
error files to determine the cause of the failure.
----- amqrccca.c : 834 --------------------------------------------------------
2012-7-23 10:35:35 - Process(3616.1) User(MUSR_MQADMIN) Program(runmqchl.exe)
AMQ9002: Channel 'CZWJNS.CZWJCZ' is starting.
EXPLANATION:
Channel 'CZWJNS.CZWJCZ' is starting.
ACTION:
None.
-------------------------------------------------------------------------------
2012-7-23 10:40:35 - Process(3616.1) User(MUSR_MQADMIN) Program(runmqchl.exe)
AMQ9206: Error sending data to host 86.0.223.5(1602).
EXPLANATION:
An error occurred sending data over TCP/IP to 86.0.223.5(1602). This may be due to
a communications failure.
ACTION:
The return code from the TCP/IP(send) call was 10054 X('2746'). Record these
values and tell your systems administrator.
----- amqccita.c : 2612 -------------------------------------------------------
2012-7-23 10:40:35 - Process(3616.1) User(MUSR_MQADMIN) Program(runmqchl.exe)
AMQ9999: Channel program ended abnormally.
EXPLANATION:
Channel program 'CZWJNS.CZWJCZ' ended abnormally.
ACTION:
Look at previous error messages for channel program 'CZWJNS.CZWJCZ' in the
error files to determine the cause of the failure.
----- amqrccca.c : 834 --------------------------------------------------------
2012-7-23 10:40:45 - Process(4848.1) User(MUSR_MQADMIN) Program(runmqchl.exe)
AMQ9002: Channel 'CZWJNS.CZWJCZ' is starting.
EXPLANATION:
Channel 'CZWJNS.CZWJCZ' is starting.
ACTION:
None.
-------------------------------------------------------------------------------
Right now the situation is that the channel (CZWJNS.CZWJCZ) does eventually run, but only after a few retry attempts, and this keeps happening frequently. All the messages are delivered to the target queue on the remote QMGR successfully; however, they are always delayed by the repeated retries.
I've searched the internet for return code 10054, and it means the connection has been reset by the peer.
My WMQ version is 6.0.10 on Windows 2003.
The "Connection reset by peer" means that something between this node and the other node closed the connection. The cause can range from dodgy/noisy network, to firewall timing out, to channel exits that refuse the connection, or many other causes.
The key to diagnosis in these cases is to narrow down the cause. This requires looking at the error logs on both QMgrs (or the client and QMgr) for the same event. In the case of a channel exit, a look at the channel definitions on both sides reveals whether such an exit is in place but if it is then you need to look at the exit's configuration and logs as well.
If the problem is in the network, the error logs from both QMgrs will show similar errors. However if one QMgr closed the connection intentionally then you will see that in its log files.
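A minimal sketch of those checks in runmqsc on each queue manager (the channel name comes from the log above; the commands and attribute names are standard MQSC, nothing else is assumed):
* current status and remaining retry counts for the channel
DISPLAY CHSTATUS(CZWJNS.CZWJCZ) ALL
* full definition: check the exit attributes (SCYEXIT, SENDEXIT, RCVEXIT, MSGEXIT) and the heartbeat/keepalive settings (HBINT, KAINT)
DISPLAY CHANNEL(CZWJNS.CZWJCZ) ALL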