I have a scenario where my RabbitMQ instance is not always available, and I would like to set the maximum number of times a connection retry happens. Is this possible with Spring AMQP?
Example:
@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory factory = new CachingConnectionFactory();
    factory.setUri("amqp://...");
    // try the URI connection at most 4 times, then fail if there is still no connection
    return factory;
}
Message producers will only try to create a connection when you send a message.
Message consumers (container factories) will retry indefinitely.
You can add a ConnectionListener to the connection factory and stop() the listener containers after some number of failures, as sketched after the interface below.
@FunctionalInterface
public interface ConnectionListener {

    /**
     * Called when a new connection is established.
     * @param connection the connection.
     */
    void onCreate(Connection connection);

    /**
     * Called when a connection is closed.
     * @param connection the connection.
     * @see #onShutDown(ShutdownSignalException)
     */
    default void onClose(Connection connection) {
    }

    /**
     * Called when a connection is force closed.
     * @param signal the shut down signal.
     * @since 2.0
     */
    default void onShutDown(ShutdownSignalException signal) {
    }

    /**
     * Called when a connection couldn't be established.
     * @param exception the exception thrown.
     * @since 2.2.17
     */
    default void onFailed(Exception exception) {
    }

}
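For example, a minimal sketch of that approach (the failure threshold of 4, the placeholder URI, and the use of RabbitListenerEndpointRegistry from org.springframework.amqp.rabbit.listener are assumptions for illustration; it presumes a @Configuration class with @EnableRabbit):
@Bean
public ConnectionFactory connectionFactory(ObjectProvider<RabbitListenerEndpointRegistry> registryProvider) {
    CachingConnectionFactory factory = new CachingConnectionFactory();
    factory.setUri("amqp://..."); // placeholder URI
    AtomicInteger failures = new AtomicInteger();
    factory.addConnectionListener(new ConnectionListener() {

        @Override
        public void onCreate(Connection connection) {
            failures.set(0); // reset the counter after a successful connection
        }

        @Override
        public void onFailed(Exception exception) {
            if (failures.incrementAndGet() >= 4) {
                // stop all listener containers so they no longer retry
                registryProvider.getObject().getListenerContainers()
                        .forEach(container -> container.stop());
            }
        }

    });
    return factory;
}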
I'm trying to connect to RabbitMQ over SSL using Spring Boot 2.7.4 and Java 11.0.14. I was following this example here:
I have added the following configurations:
properties file:
# RabbitMQ Server configuration file.
rabbit.username=admin
rabbit.password=admin
rabbit.host=localhost
rabbit.port=5671
rabbit.ssl=TLSv1.2
rabbit.keystore.name=client_key.p12
rabbit.keystore.password=rabbitstore
rabbit.truststore=server_store.jks
rabbit.truststore.password=rabbitstore
client_key.p12 and server_store.jks are in my classpath.
Configuration Class:
@Configuration
@PropertySource("classpath:rabbit.properties")
public class RabbitConfiguration {

    /**
     * Default sample queue name used to respond to requests from clients.
     */
    public static final String DEFAULT_QUEUE = "sample_queue";

    /**
     * Environment backed by the rabbitmq properties file.
     */
    @Autowired
    private Environment env;

    /**
     * Establishes a connection to a RabbitMQ server.
     * @return Rabbit connection factory for RabbitMQ access.
     * @throws IOException If wrong parameters are used for the connection.
     */
    @Bean
    public RabbitConnectionFactoryBean connectionFactoryBean() throws IOException {
        RabbitConnectionFactoryBean connectionFactoryBean = new RabbitConnectionFactoryBean();
        connectionFactoryBean.setHost(Objects.requireNonNull(env.getProperty("rabbit.host")));
        connectionFactoryBean.setPort(Integer.parseInt(Objects.requireNonNull(env.getProperty("rabbit.port"))));
        connectionFactoryBean.setUsername(Objects.requireNonNull(env.getProperty("rabbit.username")));
        connectionFactoryBean.setPassword(Objects.requireNonNull(env.getProperty("rabbit.password")));
        // SSL configuration, if set
        if (env.getProperty("rabbit.ssl") != null) {
            connectionFactoryBean.setUseSSL(true);
            connectionFactoryBean.setSslAlgorithm(Objects.requireNonNull(env.getProperty("rabbit.ssl")));
            // This information should be stored safely!
            connectionFactoryBean.setKeyStore(Objects.requireNonNull(env.getProperty("rabbit.keystore.name")));
            connectionFactoryBean.setKeyStorePassphrase(Objects.requireNonNull(env.getProperty("rabbit.keystore.password")));
            connectionFactoryBean.setTrustStore(Objects.requireNonNull(env.getProperty("rabbit.truststore")));
            connectionFactoryBean.setTrustStorePassphrase(Objects.requireNonNull(env.getProperty("rabbit.truststore.password")));
        }
        return connectionFactoryBean;
    }

    /**
     * Connection factory which establishes the RabbitMQ connection created by the factory bean.
     * @param connectionFactoryBean Connection factory bean to create the connection.
     * @return A connection factory to create connections.
     * @throws Exception If wrong parameters are used for the connection.
     */
    @Bean(name = "GEO_RABBIT_CONNECTION")
    public ConnectionFactory connectionFactory(RabbitConnectionFactoryBean connectionFactoryBean) throws Exception {
        return new CachingConnectionFactory(Objects.requireNonNull(connectionFactoryBean.getObject()));
    }

    /**
     * Queue initialization.
     * @return A queue for the listener to receive from.
     */
    @Bean
    public Queue queue() {
        // Create a new queue to handle incoming messages
        return new Queue(DEFAULT_QUEUE, false, false, false, null);
    }

    /**
     * Generates a simple message listener container.
     * @param connectionFactory Established connection to the RabbitMQ server.
     * @param listenerAdapter Listener adapter to listen for messages.
     * @return A simple message listener container listening for requests.
     */
    @Bean
    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
            MessageListenerAdapter listenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueueNames(DEFAULT_QUEUE);
        container.setMessageListener(listenerAdapter);
        container.setAcknowledgeMode(AcknowledgeMode.AUTO);
        return container;
    }

    /**
     * Message listener adapter to generate a message listener.
     * @param deviceMonitoringReceiver Receiver to delegate received messages to.
     * @return A message listener adapter to receive messages.
     */
    @Bean
    public MessageListenerAdapter listenerAdapter(DeviceMonitoringReceiver deviceMonitoringReceiver) {
        return new MessageListenerAdapter(deviceMonitoringReceiver, "receiveMessage");
    }
}
I have also updated the RabbitMQ configuration:
[
{rabbit, [
{ssl_listeners, [5671]},
{ssl_options, [{cacertfile, "D:\\tls-gen\\basic\\result\\ca_certificate.pem"},
{certfile, "D:\\tls-gen\\basic\\result\\server_seliiwvdec53152_certificate.pem"},
{keyfile, "D:\\tls-gen\basic\\result\\server_seliiwvdec53152_key.pem"},
{verify, verify_peer},
{fail_if_no_peer_cert, true}]}
]}
].
But the application does not start and throws:
Caused by: java.net.SocketException: Connection reset by peer: socket write error
I resolved the issue by adding this to the configuration:
ssl_options.password = xxx
The official documentation says this option is optional, so I don't know why it was required here, but the issue is now resolved.
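For reference, in the classic-format config shown above the same key passphrase would presumably go inside ssl_options as a password tuple; this is an assumption based on the Erlang ssl options (paths and value are placeholders), not something from the original post:
{ssl_options, [{cacertfile, "..."},
               {certfile, "..."},
               {keyfile, "..."},
               {password, "xxx"},
               {verify, verify_peer},
               {fail_if_no_peer_cert, true}]}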
We have a Java Spring Integration application running on AWS (multiple pods within a Kubernetes cluster). We use TCP outbound gateways to communicate with third-party systems and cache these connections using a CachingClientConnectionFactory. On the factory we have set soKeepAlive to true; however, we still see that the connection is dropped after 350 seconds. Do we need anything else in the configuration to ping the server a little before 350 seconds of idle waiting time? AWS talks about the 350 s restriction here -
https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-troubleshooting.html#nat-gateway-troubleshooting-timeout
The configuration of our connection factory and gateway is as follows:
@Bean
public AbstractClientConnectionFactory primeClientConnectionFactory() {
    TcpNetClientConnectionFactory tcpNetClientConnectionFactory = new TcpNetClientConnectionFactory(host, port);
    tcpNetClientConnectionFactory.setDeserializer(new PrimeCustomStxHeaderLengthSerializer());
    tcpNetClientConnectionFactory.setSerializer(new PrimeCustomStxHeaderLengthSerializer());
    tcpNetClientConnectionFactory.setSingleUse(false);
    tcpNetClientConnectionFactory.setSoKeepAlive(true);
    return tcpNetClientConnectionFactory;
}

@Bean
public AbstractClientConnectionFactory primeTcpCachedClientConnectionFactory() {
    CachingClientConnectionFactory cachingConnFactory = new CachingClientConnectionFactory(primeClientConnectionFactory(), connectionPoolSize);
    //cachingConnFactory.setSingleUse(false);
    cachingConnFactory.setLeaveOpen(true);
    cachingConnFactory.setSoKeepAlive(true);
    return cachingConnFactory;
}

@Bean
public MessageChannel primeOutboundChannel() {
    return new DirectChannel();
}

@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice retryAdvice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    FixedBackOffPolicy fixedBackOffPolicy = new FixedBackOffPolicy();
    fixedBackOffPolicy.setBackOffPeriod(500);
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(3);
    retryTemplate.setBackOffPolicy(fixedBackOffPolicy);
    retryTemplate.setRetryPolicy(retryPolicy);
    retryAdvice.setRetryTemplate(retryTemplate);
    return retryAdvice;
}

@Bean
@ServiceActivator(inputChannel = "primeOutboundChannel")
public MessageHandler primeOutbound(AbstractClientConnectionFactory primeTcpCachedClientConnectionFactory) {
    TcpOutboundGateway tcpOutboundGateway = new TcpOutboundGateway();
    List<Advice> list = new ArrayList<>();
    list.add(retryAdvice());
    tcpOutboundGateway.setAdviceChain(list);
    tcpOutboundGateway.setRemoteTimeout(timeOut);
    tcpOutboundGateway.setRequestTimeout(timeOut);
    tcpOutboundGateway.setSendTimeout(timeOut);
    tcpOutboundGateway.setConnectionFactory(primeTcpCachedClientConnectionFactory);
    return tcpOutboundGateway;
}
See this SO thread for more about Keep Alive: Does a TCP socket connection have a "keep alive"?.
According to the current Java networking API, there is this class:
/**
 * Defines extended socket options, beyond those defined in
 * {@link java.net.StandardSocketOptions}. These options may be platform
 * specific.
 *
 * @since 1.8
 */
public final class ExtendedSocketOptions {
Which provides this constant:
/**
 * Keep-Alive idle time.
 *
 * <p>
 * The value of this socket option is an {@code Integer} that is the number
 * of seconds of idle time before keep-alive initiates a probe. The socket
 * option is specific to stream-oriented sockets using the TCP/IP protocol.
 * The exact semantics of this socket option are system dependent.
 *
 * <p>
 * When the {@link java.net.StandardSocketOptions#SO_KEEPALIVE
 * SO_KEEPALIVE} option is enabled, TCP probes a connection that has been
 * idle for some amount of time. The default value for this idle period is
 * system dependent, but is typically 2 hours. The {@code TCP_KEEPIDLE}
 * option can be used to affect this value for a given socket.
 *
 * @since 11
 */
public static final SocketOption<Integer> TCP_KEEPIDLE
        = new ExtSocketOption<Integer>("TCP_KEEPIDLE", Integer.class);
So, what we need on the TcpNetClientConnectionFactory is this setter:
public void setTcpSocketSupport(TcpSocketSupport tcpSocketSupport) {
Implement its void postProcessSocket(Socket socket); callback to be able to do this:
try {
    socket.setOption(ExtendedSocketOptions.TCP_KEEPIDLE, 349);
}
catch (IOException ex) {
    throw new UncheckedIOException(ex);
}
According to that AWS doc you have shared with us.
See also some info in Spring Integration docs: https://docs.spring.io/spring-integration/docs/current/reference/html/ip.html#the-tcpsocketsupport-strategy-interface
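Putting those pieces together, a minimal sketch reusing the primeClientConnectionFactory bean from the question (DefaultTcpSocketSupport is from org.springframework.integration.ip.tcp.connection, ExtendedSocketOptions from jdk.net, so this needs Java 11+; 349 is simply just under the 350-second NAT idle timeout):
@Bean
public AbstractClientConnectionFactory primeClientConnectionFactory() {
    TcpNetClientConnectionFactory tcpNetClientConnectionFactory = new TcpNetClientConnectionFactory(host, port);
    tcpNetClientConnectionFactory.setDeserializer(new PrimeCustomStxHeaderLengthSerializer());
    tcpNetClientConnectionFactory.setSerializer(new PrimeCustomStxHeaderLengthSerializer());
    tcpNetClientConnectionFactory.setSingleUse(false);
    tcpNetClientConnectionFactory.setSoKeepAlive(true);
    // Send the first keep-alive probe after 349 seconds of idle time,
    // before the NAT gateway's 350-second idle timeout kicks in.
    tcpNetClientConnectionFactory.setTcpSocketSupport(new DefaultTcpSocketSupport() {

        @Override
        public void postProcessSocket(Socket socket) {
            try {
                socket.setOption(ExtendedSocketOptions.TCP_KEEPIDLE, 349);
            }
            catch (IOException ex) {
                throw new UncheckedIOException(ex);
            }
        }

    });
    return tcpNetClientConnectionFactory;
}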
When using @KafkaListener with batches, the error handler logs the content of the full batch (all messages) in case of an exception.
How can I make this less verbose? I'd like to avoid spamming the log files with all the messages and only see the actual exception.
Here is a minimal example of what my consumer currently looks like:
@Component
class TestConsumer {

    @Bean
    fun kafkaBatchListenerContainerFactory(kafkaProperties: KafkaProperties): ConcurrentKafkaListenerContainerFactory<String, String> {
        val configs = kafkaProperties.buildConsumerProperties()
        configs[ConsumerConfig.MAX_POLL_RECORDS_CONFIG] = 10000
        val factory = ConcurrentKafkaListenerContainerFactory<String, String>()
        factory.consumerFactory = DefaultKafkaConsumerFactory(configs)
        factory.isBatchListener = true
        return factory
    }

    @KafkaListener(
        topics = ["myTopic"],
        containerFactory = "kafkaBatchListenerContainerFactory"
    )
    fun batchListen(values: List<ConsumerRecord<String, String>>) {
        // Something that might throw an exception in rare cases.
    }
}
What version are you using?
This container property was added in 2.2.14.
/**
 * Set to false to log {@code record.toString()} in log messages instead
 * of {@code topic-partition@offset}.
 * @param onlyLogRecordMetadata false to log the entire record.
 * @since 2.2.14
 */
public void setOnlyLogRecordMetadata(boolean onlyLogRecordMetadata) {
    this.onlyLogRecordMetadata = onlyLogRecordMetadata;
}
It has been true by default since version 2.7 (which is why the javadocs now read that way).
This was the previous javadoc:
/**
 * Set to true to only log {@code topic-partition@offset} in log messages instead
 * of {@code record.toString()}.
 * @param onlyLogRecordMetadata true to only log the topic/partition/offset.
 * @since 2.2.14
 */
Also, starting with version 2.5, you can set the log level on the error handler:
/**
 * Set the level at which the exception thrown by this handler is logged.
 * @param logLevel the level (default ERROR).
 */
public void setLogLevel(KafkaException.Level logLevel) {
    Assert.notNull(logLevel, "'logLevel' cannot be null");
    this.logLevel = logLevel;
}
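For example, a sketch in Java of how both knobs could be applied to the batch factory from the question (RecoveringBatchErrorHandler and KafkaException.Level are from org.springframework.kafka; which error handler class you use depends on your version, so treat this as an assumption rather than the one required configuration):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaBatchListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setBatchListener(true);
    // Log only topic-partition@offset instead of record.toString() (the default since 2.7).
    factory.getContainerProperties().setOnlyLogRecordMetadata(true);
    // Since 2.5: lower the level at which the error handler logs the exception.
    RecoveringBatchErrorHandler errorHandler = new RecoveringBatchErrorHandler();
    errorHandler.setLogLevel(KafkaException.Level.WARN);
    factory.setBatchErrorHandler(errorHandler);
    return factory;
}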
The producer is not sending the message as persistent, and when I try to consume it through a MessageListener and a runtime exception occurs, it is retried a specific number of times (the default is 6 on the ActiveMQ side) and then the message is lost.
The reason is that, since the producer does not set the delivery mode to persistent, after the retry attempts no DLQ is created and the message is not moved to a DLQ. Because of this, I lose the message.
My code is like this:
@Configuration
@PropertySource("classpath:application.properties")
public class ActiveMqJmsConfig {

    @Autowired
    private AbcMessageListener abcMessageListener;

    @Bean
    public DefaultMessageListenerContainer purchaseMsgListenerforAMQ(
            @Qualifier("AMQConnectionFactory") ConnectionFactory amqConFactory) {
        LOG.info("Message listener for purchases from AMQ : Starting");
        DefaultMessageListenerContainer defaultMessageListenerContainer =
                new DefaultMessageListenerContainer();
        defaultMessageListenerContainer.setConnectionFactory(amqConFactory);
        defaultMessageListenerContainer.setMaxConcurrentConsumers(4);
        defaultMessageListenerContainer
                .setDestinationName(purchaseReceivingQueueName);
        defaultMessageListenerContainer
                .setMessageListener(abcMessageListener);
        defaultMessageListenerContainer.setSessionTransacted(true);
        return defaultMessageListenerContainer;
    }

    @Bean
    @Qualifier(value = "AMQConnectionFactory")
    public ConnectionFactory activeMQConnectionFactory() {
        ActiveMQConnectionFactory amqConnectionFactory =
                new ActiveMQConnectionFactory();
        amqConnectionFactory
                .setBrokerURL(System.getProperty("tcp://localhost:61616"));
        amqConnectionFactory
                .setUserName(System.getProperty("admin"));
        amqConnectionFactory
                .setPassword(System.getProperty("admin"));
        return amqConnectionFactory;
    }
}

@Component
public class AbcMessageListener implements MessageListener {

    @Override
    public void onMessage(Message msg) {
        // CODE implementation
    }
}
Problem: by setting the client ID at the connection level (Connection.setClientID("...")), we can subscribe as a durable subscriber even though the message is not persistent. With that in place, if the application throws a runtime exception, then after a certain number of retry attempts a DLQ is created for the queue and the message is moved to it.
But in DefaultMessageListenerContainer the connection is not exposed to the client; it is maintained by the class itself, as a pool I guess.
How can I achieve a durable subscription with DefaultMessageListenerContainer?
You can set the client id on the container instead:
/**
 * Specify the JMS client ID for a shared Connection created and used
 * by this container.
 * <p>Note that client IDs need to be unique among all active Connections
 * of the underlying JMS provider. Furthermore, a client ID can only be
 * assigned if the original ConnectionFactory hasn't already assigned one.
 * @see javax.jms.Connection#setClientID
 * @see #setConnectionFactory
 */
public void setClientId(@Nullable String clientId) {
    this.clientId = clientId;
}
and
/**
 * Set the name of a durable subscription to create. This method switches
 * to pub-sub domain mode and activates subscription durability as well.
 * <p>The durable subscription name needs to be unique within this client's
 * JMS client id. Default is the class name of the specified message listener.
 * <p>Note: Only 1 concurrent consumer (which is the default of this
 * message listener container) is allowed for each durable subscription,
 * except for a shared durable subscription (which requires JMS 2.0).
 * @see #setPubSubDomain
 * @see #setSubscriptionDurable
 * @see #setSubscriptionShared
 * @see #setClientId
 * @see #setMessageListener
 */
public void setDurableSubscriptionName(@Nullable String durableSubscriptionName) {
    this.subscriptionName = durableSubscriptionName;
    this.subscriptionDurable = (durableSubscriptionName != null);
}
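Applied to the purchaseMsgListenerforAMQ bean from the question, a minimal sketch might look like this (the client ID and subscription name are placeholder values; note that a durable subscription is a pub/sub concept, so setDurableSubscriptionName() switches the container to the pub/sub domain and the destination name is then treated as a topic):
@Bean
public DefaultMessageListenerContainer purchaseMsgListenerforAMQ(
        @Qualifier("AMQConnectionFactory") ConnectionFactory amqConFactory) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(amqConFactory);
    container.setDestinationName(purchaseReceivingQueueName);
    container.setMessageListener(abcMessageListener);
    container.setSessionTransacted(true);
    // Client ID must be unique among all connections to the broker.
    container.setClientId("abc-purchase-listener"); // placeholder
    // Switches to pub/sub mode and makes the subscription durable.
    container.setDurableSubscriptionName("abc-purchase-subscription"); // placeholder
    // Only one concurrent consumer is allowed per (non-shared) durable subscription.
    container.setMaxConcurrentConsumers(1);
    return container;
}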
I have a simple gRPC client as follows:
/**
 * Client that calls gRPC.
 */
public class Client {

    private static final Context.Key<String> URI_CONTEXT_KEY =
            Context.key(Constants.URI_HEADER_KEY);

    private final ManagedChannel channel;
    private final DoloresRPCStub asyncStub;

    /**
     * Construct client for accessing gRPC server at {@code host:port}.
     * @param host
     * @param port
     */
    public Client(String host, int port) {
        this(ManagedChannelBuilder.forAddress(host, port).usePlaintext(true));
    }

    /**
     * Construct client for accessing gRPC server using the existing channel.
     * @param channelBuilder {@link ManagedChannelBuilder} instance
     */
    public Client(ManagedChannelBuilder<?> channelBuilder) {
        channel = channelBuilder.build();
        asyncStub = DoloresRPCGrpc.newStub(channel);
    }

    /**
     * Closes the client.
     * @throws InterruptedException
     */
    public void shutdown() throws InterruptedException {
        channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
    }

    /**
     * Main async method for communication between client and server.
     * @param responseObserver user's {@link StreamObserver} implementation to handle
     * responses received from the server.
     * @return {@link StreamObserver} instance to provide requests into
     */
    public StreamObserver<Request> downloading(StreamObserver<Response> responseObserver) {
        return asyncStub.downloading(responseObserver);
    }

    public static void main(String[] args) {
        Client cl = new Client("localhost", 8999); // fail??
        StreamObserver<Request> requester = cl.downloading(new StreamObserver<Response>() {

            @Override
            public void onNext(Response value) {
                System.out.println("On Next");
            }

            @Override
            public void onError(Throwable t) {
                System.out.println("Error");
            }

            @Override
            public void onCompleted() {
                System.out.println("Completed");
            }
        }); // fail ??
        System.out.println("Start");
        requester.onNext(Request.newBuilder().setUrl("http://my-url").build()); // fail?
        requester.onNext(Request.newBuilder().setUrl("http://my-url").build());
        requester.onNext(Request.newBuilder().setUrl("http://my-url").build());
        requester.onNext(Request.newBuilder().setUrl("http://my-url").build());
        System.out.println("Finish");
    }
}
I don't start any server and just run the main method. I would expect the program to fail on:
client creation
the client.downloading call
or observer.onNext
but surprisingly (for me), the code runs successfully; only the messages get lost. The output is:
Start
Finish
Error
Because of the asynchronous nature, "Finish" can be printed even before an error is propagated through the response observer. Is that the desired behavior? I can't lose any messages. Am I missing something?
Thank you, Adam
This is the intended behavior. As you mentioned the API is asynchronous and so errors must generally be asynchronous as well. gRPC does not guarantee message delivery and in the case of a streaming RPC failure does not indicate which messages were received by the remote side. The advanced ClientCall API calls this out.
If you need stronger guarantees it must be added at the application-level, such as with replies or with a Status of OK. As an example, in gRPC + Image Upload I mention using a bidirectional stream for acknowledgements.
Creating a ManagedChannelBuilder does not error because the channel is lazy: it only creates a TCP connection when necessary (and reconnects when necessary). Also since most failures are transient, we wouldn't want to prevent all future RPCs on the channel just because your client happened to start when the network was broken.
Since the API is asynchronous already, grpc-java can purposefully throw away messages when sending even when it knows an error has occurred (i.e., it chooses not to throw). Thus almost all errors are delivered to the application via onError().
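To make the asynchrony visible in the question's main() (a sketch against the question's Client, not a change to the grpc-java API): wait on the response observer with a java.util.concurrent.CountDownLatch before exiting; io.grpc.Status.fromThrowable() shows the failure (typically UNAVAILABLE when no server is running), and main() would also need to declare throws InterruptedException. The latch only tells you that the call completed or failed; per-message delivery still has to be acknowledged at the application level, as described above.
CountDownLatch done = new CountDownLatch(1);
StreamObserver<Request> requester = cl.downloading(new StreamObserver<Response>() {

    @Override
    public void onNext(Response value) {
        System.out.println("On Next");
    }

    @Override
    public void onError(Throwable t) {
        System.out.println("Error: " + Status.fromThrowable(t));
        done.countDown();
    }

    @Override
    public void onCompleted() {
        System.out.println("Completed");
        done.countDown();
    }
});
requester.onNext(Request.newBuilder().setUrl("http://my-url").build());
requester.onCompleted();
// Block until the server completes the stream or the error is delivered.
done.await(10, TimeUnit.SECONDS);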