Spring Cloud Stream Binder Kafka broker is not available

I have a Spring Config Server application which works with Kafka in the dev environment, but in the local environment I keep getting:
{host} could not be established. Broker may not be available.
Is there any way to start the application in the local environment even if the broker is not available, without getting the warning logs?
The desired behaviour: if the broker is not available, the application should not log explicit warnings and should continue working.
I have tried setting fatalIfBrokerNotAvailable to false and missingTopicsFatal to false, but neither has any effect.

Unfortunately, the kafka-clients library (used by the binder) does not provide any API to get the status of the broker(s), so there is no way to determine the state without trying to connect. Those logs are emitted by the kafka-clients code.
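Since the warnings come from kafka-clients rather than Spring, one workaround (a sketch beyond the answer above, using Spring Boot's standard logging.level properties) is to raise the threshold for the offending logger so the connection warnings are suppressed:

# application.yml (local profile only): silence kafka-clients connection warnings.
# NetworkClient is the kafka-clients class that emits the "Broker may not be available" log.
logging:
  level:
    org.apache.kafka.clients.NetworkClient: ERROR

Note this only hides the log noise; the client still keeps trying to connect in the background.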

Related

How to restart kubernetes pod when issue because of Rabbit MQ connectivity in logs

I have a Spring Boot 2 standalone application (not a REST service) which connects to RabbitMQ and processes messages. The application is deployed in Kubernetes. While it works great, when RabbitMQ stays down a little longer I see a heartbeat exception (60 sec) in the logs, and eventually the connection gets dropped, even if RabbitMQ comes back up after some time:
Automatic retry connection to broker by spring-rabbitmq
https://www.rabbitmq.com/heartbeats.html
I tried to manage the above issue by increasing the number of retries: https://stackoverflow.com/questions/45385119/how-configure-timeouts-retries-or-max-attempts-in-differents-queues-with-spring
but after the retries are exhausted the issue still occurs.
How can I restart (delete/recreate) the pod from Kubernetes when I see the above issue in the logs?
The easiest way is to use Actuator, which provides a /actuator/health endpoint. (Note that recent versions also add /actuator/health/liveness and /actuator/health/readiness.)
You can assign the endpoint to the livenessProbe property of your Kubernetes deployment. The pod will then be restarted automatically when necessary. You can parameterize when your app is considered down, if necessary.
See the docs:
Kubernetes liveness probe
Spring actuator health
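For example, a minimal livenessProbe sketch for the pod spec (the path, port, and timings below are assumptions; adjust them to your deployment):

# Kubernetes deployment snippet: restart the container when Actuator reports it unhealthy.
livenessProbe:
  httpGet:
    path: /actuator/health/liveness   # use /actuator/health on older Boot versions
    port: 8080                        # the app's HTTP port
  initialDelaySeconds: 60             # give the app time to start
  periodSeconds: 10                   # probe every 10 seconds
  failureThreshold: 3                 # restart after 3 consecutive failures

One caveat: by default Spring Boot's liveness group does not include broker health checks, so for a RabbitMQ outage to trigger a restart you may need to probe /actuator/health instead, or add the indicator to the liveness group.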

Application deployment on Payara with JMS connection problems

We have noticed our web application is not being deployed on Payara 4.1 when Message Driven Beans fail to connect to the server properly or the queues are missing. We'd rather have the application up and running than fail the deployment due to JMS connection issues. Is there a way on Payara to prevent deployment crashes due to JMS failing?
EDIT: We use IBM MQ with the wmq.jmsra resource adapter.
You didn't state which exact version of Payara you use (e.g. 4.1.174), and I forgot when this was added, but could you please try to set the system property
-Ddeployment.resource.validation=false
and check whether this behaves as you desire.
You can do
asadmin create-jvm-options -Ddeployment.resource.validation=false
or simply put it in your domain.xml.

How to refresh config clients automatically?

I am new to Spring Config Server/Client technologies.
I am using a spring config server to hold some config values.
Config clients will connect to the server and get the values.
If I change some of the config values at the config server, I currently have to refresh the clients to reload the config details from the config server by invoking "/refresh" on each client.
Is there any way the clients can be notified by the config server so that they reload the configuration again?
Yes there is a way.
The solution is to use Spring Cloud Bus. Using this module, you link multiple clients to the server through a message broker. The only message broker implementation currently supported by this module is AMQP. Once the clients are connected to the server, invoking the /bus/refresh endpoint on the server automatically broadcasts the configuration changes to all subscribed clients. This means it is possible to reload configuration changes for any number of clients with a single refresh request originating at the server.
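A minimal client-side sketch (assuming spring-cloud-starter-bus-amqp is on each client's classpath; the broker coordinates below are RabbitMQ defaults and are assumptions):

# application.yml on each config client: point the bus at the shared AMQP broker.
spring:
  rabbitmq:
    host: localhost    # the shared RabbitMQ instance
    port: 5672
    username: guest
    password: guest

With all clients connected to the same broker, a single POST to the config server's /bus/refresh endpoint broadcasts the refresh to every subscriber.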

Spring Boot and Kafka: Broker disconnected

I have set up a Spring Boot application to receive Kafka messages from an existing and working Kafka producer. The setup is standard, and based on the following: https://www.codenotfound.com/spring-kafka-consumer-producer-example.html
Messages are not received, and the following is continually displayed in the console:
WARN org.apache.kafka.clients.NetworkClient: Bootstrap broker <hostname>:9092 disconnected
In addition, the following debug message is logged:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
The console message is discussed in the following link:
https://community.hortonworks.com/content/supportkb/150148/errorwarn-bootstrap-broker-6668-disconnected-orgap.html
The logged message is discussed here:
https://community.cloudera.com/t5/Data-Ingestion-Integration/Error-when-sending-message-to-topic-in-Kafka/td-p/41440
Very likely, the timeout will not happen once the first issue is resolved.
The solution given for the console message is to explicitly pass --security-protocol SSL as an argument to the producer or consumer command.
Given that I am listening on an existing Kafka broker and topic, no settings can be changed there. Any changes must be on the Spring Boot side.
Is it possible to configure application.yml so that --security-protocol SSL is passed as an argument to the consumer? Also, has anyone experienced this before, and is there another way to resolve the issue using the configuration options available in Spring Boot and Spring Kafka?
Thanks
See the documentation.
Scroll down to Kafka. Arbitrary Kafka properties can be set using

spring:
  kafka:
    properties:
      security.protocol: SSL

This applies to the consumer and producer (and admin in 2.0).
In the upcoming 2.0 release (currently RC1), there is also

spring:
  kafka:
    consumer:
      properties:
        some.property: foo

for properties that only apply to consumers (and similarly for producers and admins).
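If the broker uses certificates that are not in the JVM's default truststore, you will likely also need the kafka-clients SSL truststore properties. A sketch (the path and password are placeholders, an assumption beyond the original answer):

spring:
  kafka:
    properties:
      security.protocol: SSL
      ssl.truststore.location: /path/to/client.truststore.jks   # placeholder path
      ssl.truststore.password: changeit                         # placeholder password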

Does Spring XD re-process the same message when one of it's container goes down while processing the message?

Application Data Flow:
JSON messages --> ActiveMQ --> Spring XD (business logic: transform JSON to Java object) --> save data to target DB.
Question:
Spring XD is running in cluster mode, configured with Redis.
Spring XD picks up the message from the ActiveMQ queue (AMQ), so the message is no longer in AMQ. Now suppose one of the containers where this message is being processed with some business logic suddenly goes down. In this scenario:
Will the Spring XD framework automatically re-process that particular message? What's the mechanism behind that?
Thanks,
Abhi
Not with a Redis transport; Redis has no infrastructure to support such a requirement ("transactional" reads). You would need to use a rabbit or kafka transport.
EDIT:
See Application Configuration (scroll down to RabbitMQ) and Message Bus Configuration.
Specifically, the default ackMode is AUTO, which means messages are acknowledged only on success.
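For reference, a sketch of what the relevant RabbitMQ message-bus setting looks like in Spring XD's servers.yml (the key layout follows the Message Bus Configuration docs referenced above; treat the exact keys as an assumption for your version):

# servers.yml: RabbitMQ message bus defaults for Spring XD.
xd:
  messagebus:
    rabbit:
      default:
        ackMode: AUTO   # acknowledge only after successful processing, so failed messages are redelivered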
