I have set up Debezium with Azure Event Hubs as a CDC engine for PostgreSQL,
exactly as in this tutorial: https://dev.to/azure/tutorial-set-up-a-change-data-capture-architecture-on-azure-using-debezium-postgres-and-kafka-49h6
Everything was working fine until I changed something (I don't know exactly what).
Now my Kafka Connect log is spammed with the WARN entries below and CDC has stopped working:
[2022-03-03 08:31:28,694] WARN [dbz-ewldb-connector|task-0] [Producer clientId=connector-producer-dbz-ewldb-connector-0] Got error produce response with correlation id 2027 on topic-partition ewldb-0, retrying (2147481625 attempts left). Error: REQUEST_TIMED_OUT (org.apache.kafka.clients.producer.internals.Sender:616)
[2022-03-03 08:31:28,775] WARN [dbz-cmddb-connector|task-0] [Producer clientId=connector-producer-dbz-cmddb-connector-0] Got error produce response with correlation id 1958 on topic-partition cmddb-0, retrying (2147481694 attempts left). Error: REQUEST_TIMED_OUT (org.apache.kafka.clients.producer.internals.Sender:616)
[2022-03-03 08:31:28,800] WARN [dbz-ewldb-connector|task-0] [Producer clientId=connector-producer-dbz-ewldb-connector-0] Got error produce response with correlation id 2028 on topic-partition ewldb-0, retrying (2147481624 attempts left). Error: REQUEST_TIMED_OUT (org.apache.kafka.clients.producer.internals.Sender:616)
[2022-03-03 08:31:28,880] WARN [dbz-cmddb-connector|task-0] [Producer clientId=connector-producer-dbz-cmddb-connector-0] Got error produce response with correlation id 1959 on topic-partition cmddb-0, retrying (2147481693 attempts left). Error: REQUEST_TIMED_OUT (org.apache.kafka.clients.producer.internals.Sender:616)
These messages appear even after I delete the Kafka connectors.
Restarting Kafka and Kafka Connect does not help.
How can I stop these retries?
The only workaround I have found is to (example commands below):
1. Delete the connector via the Kafka Connect (Debezium) REST API
2. Stop Kafka Connect
3. Delete the Event Hub
4. Start Kafka Connect
5. Add the connector again via the REST API
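Assuming the Kafka Connect REST API listens on its default port 8083 and using the connector name from the log above (the JSON file name is just a placeholder for your connector configuration), the delete/re-add steps look roughly like:

curl -X DELETE http://localhost:8083/connectors/dbz-ewldb-connector
# ...stop Kafka Connect, delete the Event Hub, start Kafka Connect...
curl -X POST -H "Content-Type: application/json" --data @dbz-ewldb-connector.json http://localhost:8083/connectors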
To permanently change the retry behaviour, change the following producer parameter:
producer.retries=10 (by default it is over 2 billion, which is what causes the spam in kafka-connect.log)
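A minimal sketch of the relevant part of the Kafka Connect worker configuration (the file name depends on your setup; connect-distributed.properties is just the common default, and the delivery timeout line is an optional extra bound):

# connect-distributed.properties
# worker-level override applied to the producers of all source connectors
producer.retries=10
# optionally also cap how long a single record may be retried
producer.delivery.timeout.ms=120000

After editing the worker config, restart Kafka Connect for the change to take effect.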
Related
We are using an IMAP consumer processor in our NiFi pipeline to read email from Office 365. We have been observing issues in the IMAP processor while consuming email from the Office 365 mailbox.
Please find the error log below for reference.
2021-01-04 11:00:00,286 ERROR [Timer-Driven Process Thread-6] o.a.nifi.processors.email.ConsumeIMAP ConsumeIMAP[id=c31e4176-842d-3464-b870-2460ee675eee] Failed to receive messages from Email server: [javax.mail.MessagingException - A3 BAD Request is throttled. Suggested Backoff Time: 68448 milliseconds
2021-01-04 11:00:00,286 ERROR [Timer-Driven Process Thread-6] o.a.nifi.processors.email.ConsumeIMAP ConsumeIMAP[id=c31e4176-842d-3464-b870-2460ee675eee] Failed to process session due to org.apache.nifi.processor.exception.ProcessException: Failed to receive messages from Email server: [javax.mail.MessagingException - A3 BAD Request is throttled. Suggested Backoff Time: 68448 milliseconds: org.apache.nifi.processor.exception.ProcessException: Failed to receive messages from Email server: [javax.mail.MessagingException - A3 BAD Request is throttled. Suggested Backoff Time: 68448 milliseconds
org.apache.nifi.processor.exception.ProcessException: Failed to receive messages from Email server: [javax.mail.MessagingException - A3 BAD Request is throttled. Suggested Backoff Time: 68448 milliseconds
at org.apache.nifi.processors.email.AbstractEmailProcessor.fillMessageQueueIfNecessary(AbstractEmailProcessor.java:328)
at org.apache.nifi.processors.email.AbstractEmailProcessor.receiveMessage(AbstractEmailProcessor.java:381)
Below are the NiFi processor properties:
[screenshot: ConsumeIMAP processor configuration]
Please let us know if we are missing some configuration in the screenshot above.
Thanks and Regards,
IBNEY HASAN
This is not an error specific to NiFi - O365 is telling you that you are being throttled. You will need to tune your O365 settings appropriately, which is outside of the scope of NiFi.
See:
https://learn.microsoft.com/en-gb/archive/blogs/exchangedev/exchange-online-throttling-and-limits-faq
https://learn.microsoft.com/en-us/exchange/mailbox-migration/office-365-migration-best-practices#office-365-throttling
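As a stopgap on the NiFi side you can also poll less aggressively so you stay under the throttling limits; the error above even suggests a backoff of roughly 68 seconds. For example, on the ConsumeIMAP processor's Scheduling tab (the values are illustrative):

Scheduling Strategy: Timer driven
Run Schedule: 120 sec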
I'm running into a bug where RabbitMQ sometimes complains about "PRECONDITION_FAILED - fast reply consumer does not exist", although, as you can see below, the message I send does not use fast reply; its reply-to is null. About 50% of the time the message is sent to the exchange/queue as I expect, and the other 50% of the time I get this error, which destroys the message. I am running this code on Spring Boot 1.3.6 with Spring AMQP 1.6.0. The RabbitMQ server is 3.5.5 with Erlang 18.1. I am unable to update the versions as this is production code.
My code is very simple. I declare a response exchange/queue for further communication.
amqpAdmin.declareExchange(exchange);
amqpAdmin.declareQueue(queue);
amqpAdmin.declareBinding(binding);
I send my AMQP message to the exchange/routing key of a topic exchange, but it never makes it there due to the following error:
Publish Message Success: [MyObject], MessageProperties [headers={__TypeId__=com.do.comp.amqp}, timestamp=null, messageId=null, userId=null, appId=null, clusterId=null, type=null, correlationId=null, replyTo=null, contentType=application/json, contentEncoding=UTF-8, contentLength=97, deliveryMode=PERSISTENT, expiration=null, priority=0, redelivered=null, receivedExchange=null, receivedRoutingKey=null, deliveryTag=0, messageCount=null]]
AMQP Connection 10.12.36.75:5672 [ERROR] org.springframework.amqp.rabbit.connection.CachingConnectionFactory.log(CachingConnectionFactory.java:1198) - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - fast reply consumer does not exist, class-id=60, method-id=40)
http-nio-8122-exec-7 [DEBUG] org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getCachedChannelProxy(CachingConnectionFactory.java:476) - Creating cached Rabbit Channel from AMQChannel(amqp://admin#10.12.36.75:5672/,3)
http-nio-8122-exec-7 [DEBUG] org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:1392) - Executing callback on RabbitMQ Channel: Cached Rabbit Channel: AMQChannel(amqp://admin#10.12.36.75:5672/,3), conn: Proxy#782534f9 Shared Rabbit Connection: SimpleConnection#74658797 [delegate=amqp://admin#10.12.36.75:5672/, localPort= 60282]
Then I listen on the queue I created for a response that will never come:
rabbitTemplate.receive(queue);
The error above has to do with direct reply-to queues, and I'm not using that; my reply-to message header is null. Another odd thing: we are running this exact jar on three different servers for testing and development, and only one of them seems to have the issue, even though they all run the same versions of everything (RabbitMQ 3.5.5, Erlang 18.1).
Why is RabbitMQ throwing a fast reply error when reply-to is null?
I have a Hortonworks Hadoop cluster and everything seems to work fine. I can use the Kafka producer and consumer from all of the hosts that reside inside the cluster.
But when I try to use the Kafka console producer from another host, I get the following error message:
ERROR Error when sending message to topic test with key: null, value: 0 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1532 ms has passed since batch creation plus linger time
I can telnet to the host and port.
How can I resolve this issue?
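For context, the external host runs the standard console producer invocation; something like the following, where the broker host, port, and topic are placeholders for the actual values:

kafka-console-producer.sh --broker-list broker1.example.com:6667 --topic test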
I am running a TIBCO custom adapter on an AS400 server. There was no issue at the start, but suddenly it began giving the issue below.
Could anyone tell me what I can check to fix this issue?
Server: AS400 / iSeries
2017-11-30 13:12:48,927 INFO [com.eaton.icc.tibco.as400.adapter.AS400WisperReceiipt] - WisperReceipt send completed
2017-11-30 13:12:50,091 INFO [com.eaton.icc.tibco.as400.adapter.AS400WisperPart] - WisperPart send completed
2017-11-30 13:12:50,091 WARN [com.eaton.icc.tibco.as400.adapter.AS400WisperPart] - Method pool() completed successfully
2017-11-30 13:12:57,187 ERROR [com.eaton.icc.tibco.as400.adapter.AS400Monitor] - Exception sending heartbeat message
com.tibco.sdk.MException: Operation error: unable to create Tibrv Session for PubHawkMonitor(MPublisher).
at com.tibco.sdk.events.pubsub.MRvPublisher.send(MRvPublisher.java:76)
at com.tibco.sdk.events.MPublisher.send(MPublisher.java:346)
at com.eaton.icc.tibco.as400.adapter.AS400Monitor.onEvent(AS400Monitor.java:227)
at com.tibco.sdk.events.EventHandoff.run(MEventSource.java:141)
at java.lang.Thread.run(Thread.java:809)
Check the RV parameters of the adapter.
The adapter is not able to create the Rendezvous (RV) session. Please share the adapter configuration.
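The settings to verify are the usual RV transport parameters; the names and values below are only illustrative defaults, so compare them against whatever your adapter's configuration actually contains:

service = 7500
network = ;
daemon  = tcp:7500

Also confirm that the RV daemon (rvd) the adapter points to is running and reachable from the AS400 host.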
I am using the twitter input to fetch data from Twitter and the elasticsearch output to store it in Elasticsearch. I am using Logstash 5.2.1 on Ubuntu. When I run it, it throws the following error:
[2017-03-02T08:18:45,576][ERROR][logstash.outputs.elasticsearch] Retrying individual actions
[2017-03-02T08:18:45,576][ERROR][logstash.outputs.elasticsearch] Action
[2017-03-02T08:18:50,796][INFO][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[twitter_news][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest to [twitter_news] containing [1] requests]"})
[2017-03-02T08:18:50,796][ERROR][logstash.outputs.elasticsearch] Retrying individual actions
[2017-03-02T08:18:50,796][ERROR][logstash.outputs.elasticsearch] Action
[2017-03-02T08:18:55,840][INFO][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[twitter_news][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest to [twitter_news] containing [1] requests]"})
[2017-03-02T08:18:55,840][ERROR][logstash.outputs.elasticsearch] Retrying individual actions
[2017-03-02T08:18:55,841][ERROR][logstash.outputs.elasticsearch] Action
The "Retrying individual actions" message isn't really anything to worry about in the first place; it's unclear why it was ever logged at ERROR level. In any case, it has since been fixed and is now logged at INFO.
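Separately from the log level, the 503 response in the log says the primary shard of the twitter_news index is not active, which is worth checking on the Elasticsearch side. Assuming Elasticsearch is on the default localhost:9200, the standard APIs for that are:

curl -s 'localhost:9200/_cluster/health/twitter_news?pretty'
curl -s 'localhost:9200/_cat/shards/twitter_news?v'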