Spring Boot and Kafka: Broker disconnected

I have set up a Spring Boot application to receive Kafka messages from an existing, working Kafka producer. The setup is standard, based on the following: https://www.codenotfound.com/spring-kafka-consumer-producer-example.html
Messages are not received, and the following is continually displayed in the console:
WARN org.apache.kafka.clients.NetworkClient: Bootstrap broker <hostname>:9092 disconnected
In addition, the following debug message is logged:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
The console message is discussed in the following link:
https://community.hortonworks.com/content/supportkb/150148/errorwarn-bootstrap-broker-6668-disconnected-orgap.html
The logged message is discussed here:
https://community.cloudera.com/t5/Data-Ingestion-Integration/Error-when-sending-message-to-topic-in-Kafka/td-p/41440
Very likely, the timeout will not happen when the first issue is resolved.
The solution given there for the console message is to explicitly pass --security-protocol SSL as an argument to the producer or consumer command.
Given that I am listening on an existing Kafka broker and topic, no settings can be changed there. Any changes must be on the Spring Boot side.
Is it possible to configure application.yml so that --security-protocol SSL is passed as an argument to the consumer? Also, has anyone experienced this before? Is there another way to resolve the issue using the configuration options available in Spring Boot and Spring Kafka?
Thanks

See the documentation.
Scroll down to Kafka. Arbitrary Kafka properties can be set using
spring:
  kafka:
    properties:
      security.protocol: SSL

This applies to the consumer and producer (and the admin client in 2.0).
In the upcoming 2.0 release (currently RC1), there is also

spring:
  kafka:
    consumer:
      properties:
        some.property: foo

for properties that apply only to the consumer (and similarly for producers and admins).
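Note that if the broker is secured with SSL, security.protocol alone is usually not enough; the client typically also needs a truststore. A fuller application.yml sketch (the truststore path and password are placeholders; ssl.truststore.* are standard kafka-clients properties):

spring:
  kafka:
    bootstrap-servers: <hostname>:9092
    properties:
      security.protocol: SSL
      # Placeholder values; standard kafka-clients SSL settings
      ssl.truststore.location: /path/to/truststore.jks
      ssl.truststore.password: changeit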

Related

Spring Cloud Stream Binder Kafka broker is not available

I have a Spring Config Server application which works with Kafka in the dev environment, but in the local environment I keep getting:
{host} could not be established. Broker may not be available.
Is there any way to start the application in the local environment even if the broker is not available, without getting the warning logs?
The desired behaviour: if the broker is not available, the application should not log warnings and should continue working.
I have tried setting fatalIfBrokerNotAvailable to false and missingTopicsFatal to false, but neither has any effect.
Unfortunately, the kafka-clients library (used by the binder) does not provide any API to query the status of the broker(s), so there is no way to determine the state without trying to connect. Those logs are emitted by the kafka-clients code.
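You can, however, quiet the log output. A sketch, assuming the warnings come from org.apache.kafka.clients.NetworkClient (the class shown in the first question's log); raising its level in application.yml hides the WARNs without changing behaviour:

logging:
  level:
    # Hide the repeated "Broker may not be available" WARNs from the Kafka client
    org.apache.kafka.clients.NetworkClient: ERROR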

Quarkus Kafka: How to configure the number of retry attempts if we are not able to connect to the Kafka Broker?

I am working on a Quarkus application and intend to use Kafka to receive messages; however, I want to stop the application if it cannot reach the Kafka broker after retrying a certain number of times. The default behaviour is to retry indefinitely. The documentation at SmallRye Reactive Messaging Kafka says we can use kafka.retry-attempts or mp.messaging.incoming.[channel-name].retry-attempts to configure the number of retries. I have tried both, but the application still keeps retrying.
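What I tried looks like the following application.properties sketch (the channel name prices is a placeholder; both keys are the ones named in the SmallRye documentation above):

# Global default for all channels of the Kafka connector
kafka.retry-attempts=5
# Per-channel setting; "prices" is a placeholder channel name
mp.messaging.incoming.prices.retry-attempts=5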
Has anyone faced a similar issue, or can someone help me with a resolution?

Spring Cloud Sleuth starts a new trace instead of continuing spans in a single trace

I have 4 spring-boot applications (A, B, C and D).
The lifecycle of a transaction is as follows:
Application A is a Kafka Streams application and it ultimately produces to a topic which is consumed by Application B.
Application B then consumes from the topic using @KafkaListener, does some processing, and then produces to an IBM MQ queue using Spring's JmsTemplate.
Application C, which is a @JmsListener, consumes from the above queue and produces to another queue using Spring's JmsTemplate.
Application D, which is again a @JmsListener, consumes from the above queue and then produces to a Kafka topic, which is again consumed by Application A.
Now for a single transaction I would expect a single trace across all four applications, but instead I get:
One trace starting from Application A and ending at Application B (where it produces to IBM MQ)
One trace starting from Application C and ending at Application A
I would have uploaded pictures of the Zipkin spans, but for some reason I am not able to do so.
All of the above are Spring Boot applications and they use spring-cloud-sleuth to produce transaction traces. I am relying on Spring Boot's auto-configuration, and these are the properties I have set in all the applications:
zipkin:
  enabled: ${ZIPKIN_ENABLED:false}
  sender:
    type: kafka
  baseUrl: ${ZIPKIN_URL:http://localhost:9411}
  service:
    name: ${spring.application.name}
sleuth:
  messaging:
    kafka:
      enabled: true
    jms:
      enabled: true
I am not able to understand what exactly is happening here. Why are the spans scattered across two traces instead of one?
I am using spring-boot 2.3.3 and spring-cloud-dependencies Hoxton.SR8.
So it was Application B which was not passing the header along. It turns out that the queue URI had a property, targetClient, which was set to 1. The URI is something like:
queue:///DESTINATION_QUEUE?targetClient=1
Now I am not an IBM MQ expert by far, but the documentation states that setting this property to 1 means that messages do not contain an MQRFH2 header. I toggled it to 0 and voilà, all the spans fall into place.
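If you build the destination in code rather than via the URI string, the equivalent setting looks roughly like this (a sketch using the IBM MQ JMS classes; verify the class and constant names against your MQ client version):

import javax.jms.JMSException;

import com.ibm.mq.jms.MQQueue;
import com.ibm.msg.client.wmq.WMQConstants;

class DestinationConfig {

    // targetClient=0 keeps the MQRFH2 header, which carries the JMS
    // properties (including Sleuth's trace headers) across the queue.
    static MQQueue tracedQueue() throws JMSException {
        MQQueue queue = new MQQueue("DESTINATION_QUEUE");
        queue.setTargetClient(WMQConstants.WMQ_CLIENT_JMS_COMPLIANT); // equivalent of targetClient=0
        return queue;
    }
}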

Spring Integration between two message brokers

I am new to Spring-Integration.
My use case is:
Listen to a RabbitMQ queue/topic, get the message, process it, send it to other message broker (mostly it will be another RabbitMQ instance).
Expected load: 5000 messages/sec
In application.properties we can set configurations for one host.
How to use Spring Integration between two message brokers?
All the examples that I see are for one message broker. Any pointers on getting started with two message brokers and Spring Integration?
Regards,
Mahesh
Since you mention an application.properties, it sounds like you use Spring Boot with its auto-configuration feature. That is a very important detail in your question, because Spring Boot is opinionated about auto-configuration and you really can have only one broker connection configuration auto-configured. If you would like to have another, similar one in the same application, then you have to forgo the auto-configuration feature. You can still use the mentioned application.properties, but you have to manage the beans manually.
Since you talk about a RabbitMQ connection, you need to exclude RabbitAutoConfiguration and manage all the required beans manually:
@SpringBootApplication(exclude = RabbitAutoConfiguration.class)
You can still use @EnableConfigurationProperties(RabbitProperties.class) on one of your @Configuration classes to be able to inject that RabbitProperties and populate the respective CachingConnectionFactory. For the second broker you can introduce your own @ConfigurationProperties, or just configure everything manually, reading properties via @Value. See more about manual connection factory configuration in the Spring AMQP reference manual: https://docs.spring.io/spring-amqp/docs/2.2.1.RELEASE/reference/html/#connections
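A minimal sketch of that manual setup (host names and bean names here are made up; the classes are the Spring AMQP ones from the reference above):

import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class TwoBrokersConfig {

    // Broker we consume from (hypothetical host)
    @Bean
    CachingConnectionFactory inboundConnectionFactory() {
        return new CachingConnectionFactory("rabbit-inbound.example.com");
    }

    // Broker we publish to (hypothetical host)
    @Bean
    CachingConnectionFactory outboundConnectionFactory() {
        return new CachingConnectionFactory("rabbit-outbound.example.com");
    }

    // Listener containers (e.g. for @RabbitListener) use the inbound broker
    @Bean
    SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            @Qualifier("inboundConnectionFactory") ConnectionFactory cf) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(cf);
        return factory;
    }

    // Template used to send the processed message to the second broker
    @Bean
    RabbitTemplate outboundRabbitTemplate(
            @Qualifier("outboundConnectionFactory") ConnectionFactory cf) {
        return new RabbitTemplate(cf);
    }
}

From there you can wire your Spring Integration flow (or a plain @RabbitListener) against the inbound factory and send with the outbound template.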

Does Spring XD re-process the same message when one of its containers goes down while processing the message?

Application Data Flow:
JSON messages --> ActiveMQ --> Spring XD (business logic: transform JSON to a Java object) --> save data to target DB.
Question:
Spring XD is running in cluster mode, configured with Redis.
Spring XD picks the message up from the ActiveMQ queue (AMQ), so the message is no longer in AMQ. Now suppose one of the containers where this message is being processed with some business logic suddenly goes down. In this scenario:
Will the Spring XD framework automatically re-process that particular message? What is the mechanism behind that?
Thanks,
Abhi
Not with a Redis transport; Redis has no infrastructure to support such a requirement ("transactional" reads). You would need to use a rabbit or kafka transport.
EDIT:
See Application Configuration (scroll down to RabbitMQ) and Message Bus Configuration.
Specifically, the default ackMode is AUTO which means messages are acknowledged on success.
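With the rabbit transport, the acknowledgement mode is set in servers.yml; a sketch (I am assuming the property nesting shown on the Message Bus Configuration page, so verify it against your XD version):

xd:
  messagebus:
    rabbit:
      default:
        # AUTO (the default): ack only on success, so a message being
        # processed when a container dies is requeued and redelivered
        ackMode: AUTO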