How to implement Retry logic in RabbitMQ? - spring-boot

In my project I'm configuring a SimpleRetryPolicy to register custom exceptions, and a RetryOperationsInterceptor that consumes this policy.
@Bean
public SimpleRetryPolicy rejectionRetryPolicy() {
    Map<Class<? extends Throwable>, Boolean> exceptionsMap = new HashMap<>();
    exceptionsMap.put(DoNotRetryException.class, false); // not retriable
    exceptionsMap.put(RetryException.class, true);       // retriable
    return new SimpleRetryPolicy(3, exceptionsMap, true);
}

@Bean
RetryOperationsInterceptor interceptor() {
    return RetryInterceptorBuilder.stateless()
            .retryPolicy(rejectionRetryPolicy())
            .backOffOptions(2000L, 2, 3000L)
            .recoverer(
                    new RepublishMessageRecoverer(rabbitTemplate(), "dlExchange", "dlRoutingKey"))
            .build();
}
But with this configuration, retry is not working for either RetryException or DoNotRetryException. I want RetryException to be retried a finite number of times and DoNotRetryException to be sent straight to the DLQ.
Please help with the issue; I'm attaching the repo link in case it's needed.
https://github.com/aviralnimbekar/RabbitMQ/tree/main/src

Your GlobalErrorHandler runs its logic before the retry happens, and there you replace the exception with an AmqpRejectAndDontRequeueException; it also looks like you publish to the DLX there. Consider moving your GlobalErrorHandler logic into a more general ErrorHandler set via factory.setErrorHandler() instead.
See more info in the docs: https://docs.spring.io/spring-amqp/reference/html/#exception-handling
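For illustration, a minimal sketch of a listener container factory that wires the retry interceptor from the question as advice and sets a container-level error handler instead of the per-listener errorHandler attribute. The class/bean names and the choice of ConditionalRejectingErrorHandler here are my assumptions, not code from the linked repo:
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.ConditionalRejectingErrorHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.interceptor.RetryOperationsInterceptor;

@Configuration
public class ListenerFactoryConfig {

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory, RetryOperationsInterceptor interceptor) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // retry advice from the question: retries RetryException and, when retries are
        // exhausted (or immediately for the non-retriable DoNotRetryException), lets the
        // RepublishMessageRecoverer send the message to dlExchange
        factory.setAdviceChain(interceptor);
        // container-level error handler instead of errorHandler = "globalErrorHandler"
        // on the @RabbitListener; the default strategy rejects fatal errors without requeue
        factory.setErrorHandler(new ConditionalRejectingErrorHandler());
        return factory;
    }
}
A @RabbitListener picks up a factory bean named rabbitListenerContainerFactory by default, so no containerFactory attribute is needed in that case.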
UPDATE
After removing errorHandler = "globalErrorHandler" from your @RabbitListener, I got this in the logs:
2022-08-03 16:02:08.093 INFO 16896 --- [nio-8080-exec-4] c.t.r.producer.RabbitMQProducer : Message sent -> retry
2022-08-03 16:02:08.095 INFO 16896 --- [ntContainer#0-1] c.t.r.consumer.RabbitMQConsumer : Retrying message...
2022-08-03 16:02:10.096 INFO 16896 --- [ntContainer#0-1] c.t.r.consumer.RabbitMQConsumer : Retrying message...
2022-08-03 16:02:13.099 INFO 16896 --- [ntContainer#0-1] c.t.r.consumer.RabbitMQConsumer : Retrying message...
2022-08-03 16:02:13.100 WARN 16896 --- [ntContainer#0-1] o.s.a.r.retry.RepublishMessageRecoverer : Republishing failed message to exchange 'dlExchange' with routing key dlRoutingKey
2022-08-03 16:02:17.736 INFO 16896 --- [nio-8080-exec-5] c.t.r.producer.RabbitMQProducer : Message sent -> 1231231
2022-08-03 16:02:17.738 INFO 16896 --- [ntContainer#0-1] c.t.r.consumer.RabbitMQConsumer : sending into dlq...
2022-08-03 16:02:17.739 WARN 16896 --- [ntContainer#0-1] o.s.a.r.retry.RepublishMessageRecoverer : Republishing failed message to exchange 'dlExchange' with routing key dlRoutingKey
Which definitely reflects your original requirements.

Related

Spring Boot pub/sub subscriber gets removed immediately after deployment

I have a spring boot app deployed to google cloud (via automatic github build).
Unfortunately my subscriber to pub/sub is being removed immediately after deployment :( Does anyone know the reason for that?
Removing {service-activator:investobotAPIs.messageReceiver.serviceActivator} as a subscriber to the 'inputMessageChannel' channel
I've created a topic in google pub/sub and a subscription:
I've added these methods:
@RestController
@EnableAsync
public class InvestobotAPIs {

    ...

    // [START pubsub_spring_inbound_channel_adapter]
    // Create a message channel for messages arriving from the subscription `sub-one`.
    @Bean
    public MessageChannel inputMessageChannel() {
        return new PublishSubscribeChannel();
    }

    // Create an inbound channel adapter to listen to the subscription `sub-one` and send
    // messages to the input message channel.
    @Bean
    public PubSubInboundChannelAdapter inboundChannelAdapter(
            @Qualifier("inputMessageChannel") MessageChannel messageChannel,
            PubSubTemplate pubSubTemplate) {
        PubSubInboundChannelAdapter adapter =
                new PubSubInboundChannelAdapter(pubSubTemplate, "fetch-gpw-sub");
        adapter.setOutputChannel(messageChannel);
        adapter.setAckMode(AckMode.MANUAL);
        adapter.setPayloadType(String.class);
        return adapter;
    }

    // Define what happens to the messages arriving in the message channel.
    @ServiceActivator(inputChannel = "inputMessageChannel")
    public void messageReceiver(
            String payload,
            @Header(GcpPubSubHeaders.ORIGINAL_MESSAGE) BasicAcknowledgeablePubsubMessage message) {
        logger.info("Message arrived via an inbound channel adapter from fetch-gpw! Payload: " + payload);
        message.ack();
    }
    // [END pubsub_spring_inbound_channel_adapter]
and this is my build.gradle:
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web:2.5.5'
    // get url and download files
    implementation 'commons-io:commons-io:2.6'
    implementation 'org.asynchttpclient:async-http-client:2.12.3'
    // parse xls files
    implementation 'org.apache.poi:poi:5.0.0'
    implementation 'org.apache.poi:poi-ooxml:5.0.0'
    // firebase messaging and db
    implementation 'com.google.firebase:firebase-admin:8.1.0'
    // subscribe pub/sub topic
    implementation 'com.google.cloud:spring-cloud-gcp-starter-pubsub:2.0.4'
    implementation 'com.google.cloud:spring-cloud-gcp-pubsub-stream-binder:2.0.4'
    implementation 'com.google.cloud:spring-cloud-gcp-dependencies:2.0.4'
}
Unfortunately, when it is deployed, the logs show that it gets removed just after deployment:
2021-10-20 09:51:16.190 INFO 1 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'application.inputMessageChannel' has 1 subscriber(s).
2021-10-20 09:51:16.190 INFO 1 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started bean 'investobotAPIs.messageReceiver.serviceActivator'
2021-10-20 09:51:16.214 INFO 1 --- [ main] .g.c.s.p.i.i.PubSubInboundChannelAdapter : started bean 'inboundChannelAdapter'; defined in: 'class path resource [com/miloszdobrowolski/investobotbackend/InvestobotAPIs.class]'; from source: 'com.miloszdobrowolski.investobotbackend.InvestobotAPIs.inboundChannelAdapter(org.springframework.messaging.MessageChannel,com.google.cloud.spring.pubsub.core.PubSubTemplate)'
2021-10-20 09:51:16.295 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2021-10-20 09:51:19.075 INFO 1 --- [ main] c.m.i.InvestobotBackendApplication : Started InvestobotBackendApplication in 14.475 seconds (JVM running for 19.248)
2021-10-20 11:51:22.386 CEST Cloud Run investobot-backend-autobuild {#type: type.googleapis.com/google.cloud.audit.AuditLog, resourceName: namespaces/our-shield-329019/services/investobot-backend-autobuild, response: {…}, serviceName: run.googleapis.com, status: {…}}
2021-10-20 11:51:54.075 CEST Container Sandbox: Unsupported syscall setsockopt(0x16,0x29,0x31,0x3e96fd1f3a9c,0x4,0x0). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information.
2021-10-20 11:51:54.075 CEST Container Sandbox: Unsupported syscall setsockopt(0x16,0x29,0x12,0x3e96fd1f3a9c,0x4,0x0). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information.
2021-10-20 09:52:59.195 WARN 1 --- [ault-executor-0] i.g.n.s.i.n.u.internal.MacAddressUtil : Failed to find a usable hardware address from the network interfaces; using random bytes: d1:79:21:a2:71:29:47:49
2021-10-20 09:53:01.010 INFO 1 --- [ionShutdownHook] .g.c.s.p.i.i.PubSubInboundChannelAdapter : stopped bean 'inboundChannelAdapter'; defined in: 'class path resource [com/miloszdobrowolski/investobotbackend/InvestobotAPIs.class]'; from source: 'com.miloszdobrowolski.investobotbackend.InvestobotAPIs.inboundChannelAdapter(org.springframework.messaging.MessageChannel,com.google.cloud.spring.pubsub.core.PubSubTemplate)'
2021-10-20 09:53:01.010 INFO 1 --- [ionShutdownHook] o.s.i.endpoint.EventDrivenConsumer : Removing {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2021-10-20 09:53:01.010 INFO 1 --- [ionShutdownHook] o.s.i.channel.PublishSubscribeChannel : Channel 'application.errorChannel' has 0 subscriber(s).
2021-10-20 09:53:01.011 INFO 1 --- [ionShutdownHook] o.s.i.endpoint.EventDrivenConsumer : stopped bean '_org.springframework.integration.errorLogger'
2021-10-20 09:53:01.011 INFO 1 --- [ionShutdownHook] o.s.i.endpoint.EventDrivenConsumer : Removing {service-activator:investobotAPIs.messageReceiver.serviceActivator} as a subscriber to the 'inputMessageChannel' channel
2021-10-20 09:53:01.011 INFO 1 --- [ionShutdownHook] o.s.i.channel.PublishSubscribeChannel : Channel 'application.inputMessageChannel' has 0 subscriber(s).
2021-10-20 09:53:01.011 INFO 1 --- [ionShutdownHook] o.s.i.endpoint.EventDrivenConsumer : stopped bean 'investobotAPIs.messageReceiver.serviceActivator'
I've tried to change dependencies in gradle from com.google.cloud to org.springframework.cloud (not sure how they differ)
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web:2.5.5'
    // get url and download files
    implementation 'commons-io:commons-io:2.6'
    implementation 'org.asynchttpclient:async-http-client:2.12.3'
    // parse xls files
    implementation 'org.apache.poi:poi:5.0.0'
    implementation 'org.apache.poi:poi-ooxml:5.0.0'
    // firebase messaging and db
    implementation 'com.google.firebase:firebase-admin:8.1.0'
    // subscribe pub/sub topic
    // implementation 'com.google.cloud:spring-cloud-gcp-starter-pubsub:2.0.4'
    // implementation 'com.google.cloud:spring-cloud-gcp-pubsub-stream-binder:2.0.4'
    // implementation 'com.google.cloud:spring-cloud-gcp-dependencies:2.0.4'
    implementation 'org.springframework.cloud:spring-cloud-gcp-starter-bus-pubsub:1.2.8.RELEASE'
    implementation 'org.springframework.cloud:spring-cloud-gcp-pubsub:1.2.8.RELEASE'
    implementation 'org.springframework.cloud:spring-cloud-gcp-autoconfigure:1.2.8.RELEASE'
    implementation 'org.springframework.integration:spring-integration-core:5.5.4'
}
But it only causes a different error during deployment:
ERROR: (gcloud.run.services.update) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.

Unable to add my microservice as a Eureka client

I have set up a sample microservice system using 3 Spring Boot applications. After that, I am trying to connect all of them to a Eureka server.
The 3 Spring Boot applications are Task-Display, Task-Repo and Task-Status-Repo.
Task-Display communicates with the other two and retrieves data.
The problem is that all of them except Task-Display are linked to the Eureka server. I get the following error when Task-Display is deployed:
2019-08-31 18:43:00.055 INFO 15528 --- [tbeatExecutor-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_TASK-DISPLAY-ME;/192.168.1.9:task-display-me;:8180 - Re-registering apps/TASK-DISPLAY-ME;
2019-08-31 18:43:00.055 INFO 15528 --- [tbeatExecutor-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_TASK-DISPLAY-ME;/192.168.1.9:task-display-me;:8180: registering service...
2019-08-31 18:43:00.057 WARN 15528 --- [tbeatExecutor-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failure with status code 400; retrying on another server if available
2019-08-31 18:43:00.059 WARN 15528 --- [tbeatExecutor-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failure with status code 400; retrying on another server if available
2019-08-31 18:43:00.060 WARN 15528 --- [tbeatExecutor-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_TASK-DISPLAY-ME;/192.168.1.9:task-display-me;:8180 - registration failed Cannot execute request on any known server
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
This is the Application.java of the service that is not getting linked to Eureka:
@SpringBootApplication
@EnableEurekaClient
public class TaskDisplayApplication {

    public static void main(String[] args) {
        SpringApplication.run(TaskDisplayApplication.class, args);
    }

    @LoadBalanced
    @Bean
    public RestTemplate getRestTemplate() {
        return new RestTemplate();
    }
}

Enabling exactly once causes streams shutdown due to timeout while initializing transactional state

I've written a simple example to test the join functionality. As I sometimes get duplicated messages in the resulting topic and sometimes missing messages in this topic, I thought I would enable exactly-once semantics while pinpointing the problem. However, when doing this through:
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
I get a timeout that causes Kafka Streams to shut down in my app:
2019-05-02 17:02:32.585 INFO 153056 --- [-StreamThread-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 2.0.1
2019-05-02 17:02:32.585 INFO 153056 --- [-StreamThread-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : fa14705e51bd2ce5
2019-05-02 17:02:32.593 INFO 153056 --- [-StreamThread-1] o.a.k.c.p.internals.TransactionManager : [Producer clientId=join-test-90a0aa93-dfd8-4d4f-894b-85a3c5634f72-StreamThread-1-0_0-producer, transactionalId=join-test-0_0] ProducerId set to -1 with epoch -1
2019-05-02 17:03:32.599 ERROR 153056 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [join-test-90a0aa93-dfd8-4d4f-894b-85a3c5634f72-StreamThread-1] Error caught during partition assignment, will abort the current process and re-throw at the end of rebalance: {}
org.apache.kafka.common.errors.TimeoutException: Timeout expired while initializing transactional state in 60000ms.
2019-05-02 17:03:32.599 INFO 153056 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [join-test-90a0aa93-dfd8-4d4f-894b-85a3c5634f72-StreamThread-1] partition assignment took 60044 ms.
current active tasks: []
current standby tasks: []
previous active tasks: []
2019-05-02 17:03:32.601 INFO 153056 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [join-test-90a0aa93-dfd8-4d4f-894b-85a3c5634f72-StreamThread-1] State transition from PARTITIONS_ASSIGNED to PENDING_SHUTDOWN
2019-05-02 17:03:32.601 INFO 153056 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [join-test-90a0aa93-dfd8-4d4f-894b-85a3c5634f72-StreamThread-1] Shutting down
2019-05-02 17:03:32.615 INFO 153056 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [join-test-90a0aa93-dfd8-4d4f-894b-85a3c5634f72-StreamThread-1] State transition from PENDING_SHUTDOWN to DEAD
2019-05-02 17:03:32.615 INFO 153056 --- [-StreamThread-1] org.apache.kafka.streams.KafkaStreams : stream-client [join-test-90a0aa93-dfd8-4d4f-894b-85a3c5634f72] State transition from REBALANCING to ERROR
2019-05-02 17:03:32.615 WARN 153056 --- [-StreamThread-1] org.apache.kafka.streams.KafkaStreams : stream-client [join-test-90a0aa93-dfd8-4d4f-894b-85a3c5634f72] All stream threads have died. The instance will be in error state and should be closed.
2019-05-02 17:03:32.615 INFO 153056 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [join-test-90a0aa93-dfd8-4d4f-894b-85a3c5634f72-StreamThread-1] Shutdown complete
Exception in thread "join-test-90a0aa93-dfd8-4d4f-894b-85a3c5634f72-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: stream-thread [join-test-90a0aa93-dfd8-4d4f-894b-85a3c5634f72-StreamThread-1] Failed to rebalance.
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:870)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:810)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while initializing transactional state in 60000ms.
static String ORIGINAL = "original-sensor-data";
static String ERROR = "error-score";

public static void main(String[] args) throws IOException {
    SpringApplication.run(JoinTest.class, args);
    Properties props = getProperties();
    final StreamsBuilder builder = new StreamsBuilder();
    final KStream<String, OriginalSensorData> original =
            builder.stream(ORIGINAL, Consumed.with(Serdes.String(), new OriginalSensorDataSerde()));
    final KStream<String, ErrorScore> error =
            builder.stream(ERROR, Consumed.with(Serdes.String(), new ErrorScoreSerde()));
    KStream<String, ErrorScore> result = original.join(
            error,
            (originalValue, errorValue) -> new ErrorScore(new Date(originalValue.getTimestamp()), errorValue.getE(),
                    originalValue.getData().get("TE700PV").doubleValue(), errorValue.getT(), errorValue.getR()),
            // KStream-KStream joins are always windowed joins, hence we must provide a join window.
            JoinWindows.of(Duration.ofMillis(3000).toMillis()),
            Joined.with(
                    Serdes.String(),               /* key */
                    new OriginalSensorDataSerde(), /* left value */
                    new ErrorScoreSerde()          /* right value */
            )
    ).through("atl-joined-data-repartition", Produced.with(Serdes.String(), new ErrorScoreSerde()));
    result.foreach((key, value) -> System.out.println("Join Stream: " + key + " " + value));
    KafkaStreams streams = new KafkaStreams(builder.build(), props);
    streams.start();
}

private static Properties getProperties() {
    Properties props = new Properties();
    // Url of the kafka broker, this can also be found in the Aiven console
    props.put("bootstrap.servers", "localhost:9095");
    props.put("group.id", "join-test");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put("application.id", "join-test");
    props.put("default.timestamp.extractor", "com.my.SensorDataTimestampExtractor");
    // The key of a message is a string
    props.put("key.deserializer", StringDeserializer.class.getName());
    props.put("value.deserializer", StringDeserializer.class.getName());
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
    props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);
    return props;
}
I'm expecting the app to start without the timeout and continue working.

Is it possible to enforce message order on ActiveMQ topics using Spring Boot and JmsTemplate?

In playing around with Spring Boot, ActiveMQ, and JmsTemplate, I noticed that it appears that message order is not always preserved. In reading on ActiveMQ, "Message Groups" are offered as a potential solution to preserving message order when sending to a topic. Is there a way to do this with JmsTemplate?
Add Note: I'm starting to think that JmsTemplate is nice for "getting launched", but has too many issues.
Sample code and console output posted below...
@RestController
public class EmptyControllerSB {

    @Autowired
    MsgSender msgSender;

    @RequestMapping(method = RequestMethod.GET, value = { "/v1/msgqueue" })
    public String getAccount() {
        msgSender.sendJmsMessageA();
        msgSender.sendJmsMessageB();
        return "Do nothing...successfully!";
    }
}

@Component
public class MsgSender {

    @Autowired
    JmsTemplate jmsTemplate;

    void sendJmsMessageA() {
        jmsTemplate.convertAndSend(new ActiveMQTopic("VirtualTopic.TEST-TOPIC"), "message A");
    }

    void sendJmsMessageB() {
        jmsTemplate.convertAndSend(new ActiveMQTopic("VirtualTopic.TEST-TOPIC"), "message B");
    }
}

@Component
public class MsgReceiver {

    private final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC";
    private final String consumerTwo = "Consumer.myConsumer2.VirtualTopic.TEST-TOPIC";

    @JmsListener(destination = consumerOne)
    public void receiveMessage1(String strMessage) {
        System.out.println("Received on #1a -> " + strMessage);
    }

    @JmsListener(destination = consumerOne)
    public void receiveMessage2(String strMessage) {
        System.out.println("Received on #1b -> " + strMessage);
    }

    @JmsListener(destination = consumerTwo)
    public void receiveMessage3(String strMessage) {
        System.out.println("Received on #2 -> " + strMessage);
    }
}
Here's the console output (note the order of output in first sequence)...
2019-04-03 09:23:08.408 INFO 13936 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2019-04-03 09:23:08.408 INFO 13936 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 672 ms
2019-04-03 09:23:08.705 INFO 13936 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-04-03 09:23:08.845 INFO 13936 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2019-04-03 09:23:08.877 INFO 13936 --- [ main] mil.navy.msgqueue.MsgqueueApplication : Started MsgqueueApplication in 1.391 seconds (JVM running for 1.857)
2019-04-03 09:23:14.949 INFO 13936 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2019-04-03 09:23:14.949 INFO 13936 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2019-04-03 09:23:14.952 INFO 13936 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 3 ms
Received on #2 -> message A
Received on #1a -> message B
Received on #1b -> message A
Received on #2 -> message B
<HIT DO-NOTHING ENDPOINT AGAIN>
Received on #1b -> message A
Received on #2 -> message A
Received on #1a -> message B
Received on #2 -> message B
BLUF - Add "?consumer.exclusive=true" to the declaration of the destination for the JmsListener annotation.
It seems that the solution is not that complex, especially if one abandons ActiveMQ's "message groups" in favor of "exclusive consumers". The drawback to "message groups" is that the sender has to have prior knowledge of the potential partitioning of message consumers. If the producer has this knowledge, then "message groups" are a nice solution, as the solution is somewhat independent of the consumer.
But, a similar solution can be implemented from the consumer side, by having the consumer declare "exclusive consumer" on the queue. While I did not see anything in the JmsTemplate implementation that directly supports this, it seems that Spring's JmsTemplate implementation passes the queue name to ActiveMQ, and then ActiveMQ "does the right thing" and enforces the exclusive consumer behavior.
So...
Change the following...
private final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC";
to...
private final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC?consumer.exclusive=true";
Once I did this, only one of the two declared receive methods was invoked, and message order was maintained in all my test runs.

TCP Client Spring Integration with Java Config

I am attempting to create a TCP client to connect to a remote tcp server and wait to receive messages. So far I have the following code:
@EnableIntegration
@IntegrationComponentScan
@Configuration
public class TcpClientConfig {

    @Bean
    public TcpInboundGateway tcpInbound(AbstractClientConnectionFactory connectionFactory) {
        TcpInboundGateway gate = new TcpInboundGateway();
        gate.setConnectionFactory(connectionFactory);
        gate.setClientMode(false);
        gate.setRequestChannel(fromTcp());
        return gate;
    }

    @Bean
    public MessageChannel fromTcp() {
        return new DirectChannel();
    }

    @MessageEndpoint
    public static class Echo {

        @Transformer(inputChannel = "fromTcp", outputChannel = "serviceChannel")
        public String convert(byte[] bytes) {
            return new String(bytes);
        }
    }

    @ServiceActivator(inputChannel = "serviceChannel")
    public void messageToService(String in) {
        System.out.println(in);
    }

    @Bean
    public EndOfLineSerializer endOfLineSerializer() {
        return new EndOfLineSerializer();
    }

    @Bean
    public AbstractClientConnectionFactory clientConnectionFactory() {
        TcpNetClientConnectionFactory tcpNetServerConnectionFactory =
                new TcpNetClientConnectionFactory("192.XXX.XXX.XX", 4321);
        tcpNetServerConnectionFactory.setSingleUse(false);
        tcpNetServerConnectionFactory.setSoTimeout(300000);
        tcpNetServerConnectionFactory.setDeserializer(endOfLineSerializer());
        tcpNetServerConnectionFactory.setSerializer(endOfLineSerializer());
        tcpNetServerConnectionFactory.setMapper(new TimeoutMapper());
        return tcpNetServerConnectionFactory;
    }
}
It starts up and connects to the remote server. However, I am not receiving any data in my service activator method messageToService. To verify that data exists, I can successfully connect to my remote TCP server using telnet:
telnet 192.XXX.XXX.XX 4321
Trying 192.XXX.XXX.XX...
Connected to 192.XXX.XXX.XX.
Escape character is '^]'.
Hello World
I have confirmed nothing is hitting my EndOfLineSerializer. What is wrong with my TCP client?
Bonus: Let's assume the hostname and port are determined by querying an API. How would I tell the TcpNetClientConnectionFactory to wait to try to connect until I have the correct data for the port?
Debug output:
main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-11-22 23:00:46.182 DEBUG 35953 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Autodetecting user-defined JMX MBeans
2018-11-22 23:00:46.194 DEBUG 35953 --- [ main] .s.i.c.GlobalChannelInterceptorProcessor : No global channel interceptors.
2018-11-22 23:00:46.198 DEBUG 35953 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase -2147483648
2018-11-22 23:00:46.198 INFO 35953 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2018-11-22 23:00:46.198 INFO 35953 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'application.errorChannel' has 1 subscriber(s).
2018-11-22 23:00:46.198 INFO 35953 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2018-11-22 23:00:46.198 DEBUG 35953 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Successfully started bean '_org.springframework.integration.errorLogger'
2018-11-22 23:00:46.198 INFO 35953 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {service-activator:tcpClientConfig.messageToService.serviceActivator} as a subscriber to the 'serviceChannel' channel
2018-11-22 23:00:46.198 INFO 35953 --- [ main] o.s.integration.channel.DirectChannel : Channel 'application.serviceChannel' has 1 subscriber(s).
2018-11-22 23:00:46.198 INFO 35953 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started tcpClientConfig.messageToService.serviceActivator
2018-11-22 23:00:46.198 DEBUG 35953 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Successfully started bean 'tcpClientConfig.messageToService.serviceActivator'
2018-11-22 23:00:46.198 INFO 35953 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {transformer:tcpClientConfig.Echo.convert.transformer} as a subscriber to the 'toTcp' channel
2018-11-22 23:00:46.198 INFO 35953 --- [ main] o.s.integration.channel.DirectChannel : Channel 'application.toTcp' has 1 subscriber(s).
2018-11-22 23:00:46.198 INFO 35953 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started tcpClientConfig.Echo.convert.transformer
2018-11-22 23:00:46.198 DEBUG 35953 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Successfully started bean 'tcpClientConfig.Echo.convert.transformer'
2018-11-22 23:00:46.198 DEBUG 35953 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 0
2018-11-22 23:00:46.199 INFO 35953 --- [ main] .s.i.i.t.c.TcpNetClientConnectionFactory : started clientConnectionFactory, host=192.XXX.XXX.90, port=4321
2018-11-22 23:00:46.199 DEBUG 35953 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Successfully started bean 'clientConnectionFactory'
2018-11-22 23:00:46.199 INFO 35953 --- [ main] .s.i.i.t.c.TcpNetClientConnectionFactory : started clientConnectionFactory, host=192.XXX.XXX.90, port=4321
2018-11-22 23:00:46.199 INFO 35953 --- [ main] o.s.i.ip.tcp.TcpInboundGateway : started tcpInbound
2018-11-22 23:00:46.199 DEBUG 35953 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Successfully started bean 'tcpInbound'
When using a client connection factory with an inbound endpoint there is no stimulus to open a connection (client factories are normally used for outbound operations and the connection is established when the first message is sent).
When used in this mode, you need setClientMode(true). This starts a task that opens (and monitors) an outbound connection.
See the documentation:
Normally, inbound adapters use a type="server" connection factory, which listens for incoming connection requests. In some cases, you may want to establish the connection in reverse, such that the inbound adapter connects to an external server and then waits for inbound messages on that connection.
This topology is supported by setting client-mode="true" on the inbound adapter. In this case, the connection factory must be of type client and must have single-use set to false.
Two additional attributes support this mechanism. retry-interval specifies (in milliseconds) how often the framework attempts to reconnect after a connection failure. scheduler supplies a TaskScheduler to schedule the connection attempts and to test that the connection is still active.
(The framework provides a default scheduler).
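To make that concrete, here is a rough sketch of the tcpInbound bean from the question with only the client-mode settings changed (the retry interval value is an arbitrary example; the rest of the configuration class is assumed to stay as posted, including setSingleUse(false) on the client connection factory):
@Bean
public TcpInboundGateway tcpInbound(AbstractClientConnectionFactory connectionFactory) {
    TcpInboundGateway gate = new TcpInboundGateway();
    gate.setConnectionFactory(connectionFactory);
    // connect out to the remote server and wait for inbound messages on that connection
    gate.setClientMode(true);
    // how often (ms) the framework retries the connection and checks it is still open
    gate.setRetryInterval(10_000);
    gate.setRequestChannel(fromTcp());
    return gate;
}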
For your Bonus question, you would need to find the host/port before creating the application context; or create the connection factory and gateway dynamically after you have the information.
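As an illustration of the first option, here is a hypothetical sketch that resolves the host/port before the application context is created and passes the values in as properties. TcpClientApplication, EndpointLookup and HostAndPort are made-up placeholder names for whatever API you query, not Spring Integration classes:
public static void main(String[] args) {
    // hypothetical lookup performed before the Spring context starts
    HostAndPort target = EndpointLookup.resolve();
    System.setProperty("tcp.client.host", target.host());
    System.setProperty("tcp.client.port", String.valueOf(target.port()));
    SpringApplication.run(TcpClientApplication.class, args);
}

// then in the configuration class, read the resolved values
@Bean
public AbstractClientConnectionFactory clientConnectionFactory(
        @Value("${tcp.client.host}") String host,
        @Value("${tcp.client.port}") int port) {
    TcpNetClientConnectionFactory factory = new TcpNetClientConnectionFactory(host, port);
    factory.setSingleUse(false); // required when the inbound gateway runs in client mode
    return factory;
}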
