We have an ActiveMQ broker that we use for exchanging messages between backend services and frontend devices. It's effectively two brokers in one, glued together by Camel.
For the backend there's a JMS broker; for the frontend there's an MQTT broker. We use Camel to establish routes that cross the line, usually from a JMS queue to an MQTT topic, or the other way around. We've had this service running for years now. Every now and then a feature would require a new routing, so we put it in. That all went well, until now.
Now, for some reason, one of the new JMS queues we've added is not forwarding its messages to the MQTT topic; instead it raises an error:
[Camel (camel-1) thread #10 - JmsConsumer[bmetry-command]] o.a.camel.processor.DefaultErrorHandler : Failed delivery for (MessageId: ID:ip-172-31-26-161.eu-west-1.compute.internal-42307-1669731056848-1:115:5:1:406 on ExchangeId: ID-broker-yellow-webcam-1669798394280-0-5852). Exhausted after delivery attempt: 1 caught: java.util.concurrent.RejectedExecutionException
java.util.concurrent.RejectedExecutionException: null
at org.apache.camel.component.jms.JmsProducer.process(JmsProducer.java:144) ~[camel-jms-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.SendDynamicProcessor$1.doInAsyncProducer(SendDynamicProcessor.java:178) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.impl.ProducerCache.doInAsyncProducer(ProducerCache.java:445) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.SendDynamicProcessor.process(SendDynamicProcessor.java:160) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201) [camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138) [camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101) [camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201) [camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:97) [camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.component.jms.EndpointMessageListener.onMessage(EndpointMessageListener.java:113) [camel-jms-2.22.0.jar!/:2.22.0]
at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:736) [spring-jms-5.2.8.RELEASE.jar!/:5.2.8.RELEASE]
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:696) [spring-jms-5.2.8.RELEASE.jar!/:5.2.8.RELEASE]
at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:674) [spring-jms-5.2.8.RELEASE.jar!/:5.2.8.RELEASE]
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:318) [spring-jms-5.2.8.RELEASE.jar!/:5.2.8.RELEASE]
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:257) [spring-jms-5.2.8.RELEASE.jar!/:5.2.8.RELEASE]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1189) [spring-jms-5.2.8.RELEASE.jar!/:5.2.8.RELEASE]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1179) [spring-jms-5.2.8.RELEASE.jar!/:5.2.8.RELEASE]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1076) [spring-jms-5.2.8.RELEASE.jar!/:5.2.8.RELEASE]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_292]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_292]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_292]
I can't really get too much useful information out of that stack trace, other than the queue it fails for.
There are a couple of weird things here:
As mentioned, this is the only JMS queue on which this happens, and there are a couple dozen of them.
On this queue, however, it happens consistently. Not a single message goes through, while all the others work fine.
When connecting with an MQTT client and publishing to the topic the JMS queue is supposed to deliver to, the message arrives without issue, so it's clearly the JMS end that's the problem.
This particular queue doesn't have much to do most of the time, but for short periods it can have one of the highest throughputs in the broker (though that doesn't mean much: if it gets more than one message a second, it's really busy, and the messages are very short). However, experimentation has shown that it doesn't work any better with less throughput (i.e. it still doesn't work at all).
The real bummer, though, is that an identical broker in the development environment works perfectly fine, even with the same or higher throughput on that queue. It has a lot less load overall, obviously, but the fact that everything works fine except for this one specific queue doesn't sound like a load issue.
Let's get to the code, which is very unspectacular. The whole thing is a Spring Boot application with Camel inside. The two brokers are launched and configured like this:
// JMS broker component:
@Component
class JmsBroker(jmsProperties: JmsProperties) {
    private val broker = BrokerService().apply {
        isPersistent = false
        isAdvisorySupport = false
        addConnector("nio://${jmsProperties.address}:${jmsProperties.port}")
        brokerName = "jms"
    }

    @PostConstruct
    fun start() {
        broker.start()
    }

    @PreDestroy
    fun stop() {
        broker.stop()
    }
}
// MQTT config and bean:
@Configuration
class MqttConfig {
    val log = logger()

    @Bean
    fun sslConfig(mqttProperties: MqttProperties): SpringSslContext? {
        return if (isTlsActive(mqttProperties)) {
            SpringSslContext().apply {
                keyStore = "./${mqttProperties.keystore}"
                keyStorePassword = mqttProperties.keystorePassword
            }
        } else null
    }

    @Bean
    fun mqttBroker(mqttProperties: MqttProperties,
                   authenticationService: AuthenticationService,
                   ssl: SslContext?): BrokerService {
        return BrokerService().apply {
            isPersistent = false
            isAdvisorySupport = false
            brokerName = "mqtt"
            plugins = listOf(WebcamServiceAuthenticationPlugin(authenticationService, mqttProperties.internalPort)).toTypedArray()
            addConnector("nio://127.0.0.1:${mqttProperties.internalPort}")
            if (isTlsActive(mqttProperties)) {
                log.info("Starting MQTT connector using TLS")
                sslContext = ssl!!
            } else {
                log.info("Starting a plain text MQTT connector")
            }
            addConnector(makeHostUri(mqttProperties))
        }
    }

    private fun makeHostUri(mqttProperties: MqttProperties): String =
        (if (isTlsActive(mqttProperties)) "mqtt+nio+ssl://" else "mqtt+nio://") +
            "${mqttProperties.address}:${mqttProperties.port}"

    private fun isTlsActive(mqttProperties: MqttProperties) =
        mqttProperties.keystore.isNotEmpty()
}
And here's the whole Camel setup and route configuration.
I have only included the route that is not working, so you can have a look at it, but it's essentially identical to a dozen others in the same config:
@Component
class BrokerRouteBuilder(camelContext: CamelContext,
                         mqttProperties: MqttProperties,
                         jmsProperties: JmsProperties,
                         private val authenticationService: AuthenticationService,
                         private val objectMapper: ObjectMapper)
    : RouteBuilder() {

    private val jms = "jmsbroker"
    private val mqtt = "mqttbroker"

    init {
        camelContext.addComponent(jms, activeMQComponent().apply {
            setConnectionFactory(PooledConnectionFactory("nio://127.0.0.1:${jmsProperties.port}")
                .apply { maxConnections = jmsProperties.maxInternalConnections })
            setCacheLevelName("CACHE_CONSUMER")
        })
        camelContext.addComponent(mqtt, activeMQComponent().apply {
            setConnectionFactory(PooledConnectionFactory("nio://127.0.0.1:${mqttProperties.internalPort}")
                .apply { maxConnections = mqttProperties.maxInternalConnections })
            setCacheLevelName("CACHE_CONSUMER")
        })
        objectMapper.registerModule(JodaModule())
    }

    override fun configure() {
        // takes a message from the queue and directs it to an MQTT topic for the user
        // noted in the header of the JMS message
        from("$jms:queue:bmetry-command")
            .id("bmetry-command")
            .log("Sending bmetry command to user '\${header.user}'")
            .toD("$mqtt:topic:\${header.user}.bmetry-command")
    }
}
And that's pretty much all I got. I would appreciate any help that could point me in the right direction to find a solution for this. I've googled, obviously, but can't find the problem in the right context, and am too unfamiliar with Camel internals to really dig down on this on my own.
Additional information
I've dug out the line in the Camel source code that the stack trace refers to, and it all seems to come down to ServiceSupport.isRunAllowed() always returning false on that JMS producer. Here's the function in question:
public boolean isRunAllowed() {
    // if we have not yet initialized, then all options is false
    boolean unused1 = !started.get() && !starting.get() && !stopping.get() && !stopped.get();
    boolean unused2 = !suspending.get() && !suspended.get() && !shutdown.get() && !shuttingdown.get();
    if (unused1 && unused2) {
        return false;
    }
    return !isStoppingOrStopped();
}
So either the producer is not yet initialised, or it is already stopped. I have no idea why either could be the case. There's nothing inherently different about this route compared to the many other routes in the same application. Also, as mentioned, it isn't an issue in the development environment.
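For what it's worth, a quick way to check that hypothesis would be to dump the lifecycle status of every route from the CamelContext. A minimal diagnostic sketch (Camel 2.x API; assumes the context and a logger are at hand):

// Diagnostic only: log each route's status to see whether the
// bmetry-command route reports something other than Started in production.
camelContext.routes.forEach { route ->
    log.info("Route ${route.id}: status=${camelContext.getRouteStatus(route.id)}")
}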
I have found the same behaviour described here, but the suggested change didn't solve the problem in this case:
https://github.com/camelinaction/camelinaction2/issues/158
Progress, but maybe not
So I've upgraded Spring Boot and Camel to the latest versions I could afford. That would be Camel 3.14, because the thing still has to run on Java 8.
This has changed the behaviour slightly. The error now appears in the JmsConsumer, i.e. earlier, so there's still a chance that I figure this one out and the old one simply starts happening again. It does seem unlikely that the two aren't related, though, considering they behave exactly the same apart from where they occur: always on that one route, only on that route, and only in the production environment.
The new error I'm seeing is this:
java.lang.NoClassDefFoundError: org/apache/camel/util/MessageHelper
at org.apache.camel.processor.RedeliveryErrorHandler.logFailedDelivery(RedeliveryErrorHandler.java:1308) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.RedeliveryErrorHandler.deliverToFailureProcessor(RedeliveryErrorHandler.java:1109) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:474) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:97) ~[camel-core-2.22.0.jar!/:2.22.0]
at org.apache.camel.component.jms.EndpointMessageListener.onMessage(EndpointMessageListener.java:113) ~[camel-jms-2.22.0.jar!/:2.22.0]
What is very weird here is that the stack trace refers to camel-core 2.22.0. That was the Camel version running there before. But I've checked everything, including the jar file that's been deployed and running on the instance, and there are only Camel 3.14 jars in the dependencies. It's a boot jar, obviously, and there's no trace of any Camel 2.22 stuff in it. And yet here it is in the stack trace. Can somebody explain how that happens?
All in all, I can say that the issue was solved by upgrading to Spring Boot 2.7 and Camel 3.14 (from Spring Boot 2.3.3 and Camel 2.22).
I got held up by some very weird but most probably unrelated behaviour, probably originating somewhere in deployment, but once that was smoothed out, the original issue was gone.
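For reference, the fix amounted to version bumps in the build. A sketch of what that might look like (Gradle Kotlin DSL assumed; the artifact names assume the Camel Spring Boot starters are used, and the patch versions are only illustrative):

// build.gradle.kts: illustrative version bump only
dependencies {
    implementation("org.springframework.boot:spring-boot-starter:2.7.8")
    implementation("org.apache.camel.springboot:camel-spring-boot-starter:3.14.0")
    implementation("org.apache.camel.springboot:camel-jms-starter:3.14.0")
    implementation("org.apache.camel.springboot:camel-activemq-starter:3.14.0")
}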
Related
I am using Spring Boot 2.7.8 with WebFlux.
I have a sink in my class like this:
private final Sinks.Many<TaskEvent> sink = Sinks.many()
        .multicast()
        .onBackpressureBuffer();
This can be subscribed to like this:
public Flux<List<TaskEvent>> subscribeToTaskUpdates() {
    return sink.asFlux()
            .buffer(Duration.ofSeconds(1))
            .share();
}
The @Controller uses this to push the updates as Server-Sent Events (SSE) to the browser:
@GetMapping("/transferdatestatuses/updates")
public Flux<ServerSentEvent<TransferDateStatusesUpdateEvent>> subscribeToTransferDataStatusUpdates() {
    return monitoringSseBroker.subscribeToTaskUpdates()
            .map(taskEventList -> ServerSentEvent.<TransferDateStatusesUpdateEvent>builder()
                    .data(TransferDateStatusesUpdateEvent.of(taskEventList))
                    .build());
}
This works fine at first, but if I navigate away in my (Thymeleaf) web application to a page that has no connection with the SSE URL and then go back, the browser cannot connect anymore.
After some investigation, I found out that the problem is that the removal of the subscriber closes the Flux, and a new subscriber cannot connect anymore.
I have found 3 ways to fix it, but I don't understand the internals well enough to decide which one is the best solution and whether there is anything I need to consider in choosing.
Solution 1
Disable autoCancel on the sink by using the overload of onBackpressureBuffer that allows setting this parameter:
private final Sinks.Many<TaskEvent> sink = Sinks.many()
        .multicast()
        .onBackpressureBuffer(Queues.SMALL_BUFFER_SIZE, false);
Solution 2
Use replay(0).autoConnect() instead of share():
public Flux<List<TaskEvent>> subscribeToTaskUpdates() {
    return sink.asFlux()
            .buffer(Duration.ofSeconds(1))
            .replay(0).autoConnect();
}
Solution 3
Use publish().autoConnect() instead of share():
public Flux<List<TaskEvent>> subscribeToTaskUpdates() {
    return sink.asFlux()
            .buffer(Duration.ofSeconds(1))
            .publish().autoConnect();
}
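For what it's worth, the difference can be reproduced outside the web application with a small standalone sketch (Kotlin for brevity, String events; illustrative only, not taken from the real code):

import reactor.core.publisher.Sinks
import reactor.util.concurrent.Queues

fun main() {
    // autoCancel = false keeps the sink usable after the last subscriber leaves
    val sink = Sinks.many().multicast()
        .onBackpressureBuffer<String>(Queues.SMALL_BUFFER_SIZE, false)

    val first = sink.asFlux().subscribe { println("first: $it") }
    sink.tryEmitNext("a")
    first.dispose() // simulates the browser navigating away

    // With the default autoCancel = true the sink would now be terminated;
    // with autoCancel = false a later subscriber still receives new events.
    sink.asFlux().subscribe { println("second: $it") }
    sink.tryEmitNext("b")
}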
Which of these solutions is advisable to make sure a browser can disconnect and reconnect later without problems?
I'm not quite sure if it is the root of your problem, but I didn't have that issue when using a keepAlive Flux.
val keepAlive = Flux.interval(Duration.ofSeconds(10)).map {
    ServerSentEvent.builder<Image>()
        .event(":keepalive")
        .build()
}
return Flux.merge(
    keepAlive,
    imageUpdateFlux
)
Here is the whole file: Github
I've discovered no MassTransit configuration that allows a Service Bus topic to be created with duplicate detection enabled.
You can do it with queues simply enough, but for topics it seems a bit of a mystery.
Does anybody have a working sample?
Perhaps it is not possible.
I've been trying to use the IServiceBusBusFactoryConfigurator provided by the Bus.Factory.CreateUsingAzureServiceBus method.
I'd thought that some use of the IServiceBusBusFactoryConfigurator.Publish and IServiceBusBusFactoryConfigurator.SubscriptionEndpoint methods would accomplish the task, but after a myriad of trials I've come up with no solution.
To configure your message type topic with duplicate detection, you must configure the publish topology in both the producer and the consumer (it only needs to be configured once per bus instance, but if your producer is a separate bus instance, it needs the configuration as well). The topic must also not already exist, since it will not be updated once created in Azure.
To configure the publish topology:
namespace DupeDetection
{
    public interface DupeCommand
    {
        string Value { get; }
    }
}
var busControl = Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    cfg.Publish<DupeCommand>(x => x.EnableDuplicateDetection(TimeSpan.FromMinutes(10)));
    cfg.ReceiveEndpoint("dupe", e =>
    {
        e.Consumer<DupeConsumer>();
    });
});
The consumer is normal (no special settings required).
class DupeConsumer :
    IConsumer<DupeCommand>
{
    public Task Consume(ConsumeContext<DupeCommand> context)
    {
        return Task.CompletedTask;
    }
}
I've added a unit test to verify this behavior, and can confirm that when two messages with the same MessageId are published back-to-back, only a single message is delivered to the consumer.
Test log output:
10:53:15.641-D Create send transport: sb://masstransit-build.servicebus.windows.net/MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection/DupeCommand
10:53:15.784-D Topic: MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection/DupeCommand (dupe detect)
10:53:16.375-D SEND sb://masstransit-build.servicebus.windows.net/MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection/DupeCommand dc3a0000-ebb8-e450-949c-08d8e8939c7f MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection.DupeCommand
10:53:16.435-D SEND sb://masstransit-build.servicebus.windows.net/MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection/DupeCommand dc3a0000-ebb8-e450-949c-08d8e8939c7f MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection.DupeCommand
10:53:16.469-D RECEIVE sb://masstransit-build.servicebus.windows.net/MassTransit.Azure.ServiceBus.Core.Tests/input_queue dc3a0000-ebb8-e450-949c-08d8e8939c7f MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection.DupeCommand MassTransit.IConsumer<MassTransit.Azure.ServiceBus.Core.Tests.DupeDetection.DupeCommand>(00:00:00.0017972)
You can see the (dupe detect) attribute shown on the topic declaration.
Here is the solution I finally found. It does not rely on any of the ReceiveEndpoint or SubscriptionEndpoint configuration methods, which never seemed to give me what I wanted.
IBusControl bus = Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    cfg.Publish<MembershipNotifications.MembershipSignupMessage>(configure =>
    {
        configure.EnableDuplicateDetection(_DuplicateDetectionWindow);
        configure.AutoDeleteOnIdle = _AutoDeleteOnIdle;
        configure.DefaultMessageTimeToLive = _MessageTimeToLive;
    });
});
await bus.Publish(new MessageTest());
I have the following scenario whereby my program uses a blocking queue to process messages asynchronously. There are multiple RSocket clients who wish to receive these messages. My design is such that when a message arrives in the blocking queue, the stream that binds to the Flux will emit it. I have tried to implement this requirement as below, but the client doesn't receive any response. However, I can see the Stream supplier getting triggered correctly.
Can someone please help?
@MessageMapping("addListenerHook")
public Flux<QueryResult> addListenerHook(String clientName) {
    System.out.println("Adding Listener: " + clientName);
    BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
    Datalistener.register(clientName, listenerQ);
    return Flux.fromStream(
            () -> Stream.generate(() -> streamValue(listenerQ))).map(q -> {
                System.out.println("I got an event : " + q.getResult());
                return q;
            });
}
private QueryResult streamValue(BlockingQueue<QueryResult> inStream) {
    try {
        return inStream.take();
    } catch (Exception e) {
        return null;
    }
}
This is tough to solve simply and cleanly because of the blocking API. I think this is why there aren't simple bridge APIs here to help you implement this. You should come up with a clean solution to turn the BlockingQueue into a Flux first; then the Spring Boot part becomes a non-event.
This is why the correct solution probably involves a custom BlockingQueue implementation like the ObservableQueue in https://www.nurkiewicz.com/2015/07/consuming-javautilconcurrentblockingque.html
An alternative approach is in How can I create reactor Flux from a blocking queue?
If you need to retain the LinkedBlockingQueue, a starting solution might be something like the following.
val f = flux<QueryResult> {
    val listenerQ = LinkedBlockingQueue<QueryResult>()
    Datalistener.register(clientName, listenerQ)
    while (true) {
        send(listenerQ.take())
    }
}.subscribeOn(Schedulers.elastic())
With an API like flux you should definitely avoid any side effects before the subscribe, so don't register your listener until inside the body of the method. But you will need to improve this example to handle cancellation (however you cancel the listener) and to interrupt the thread doing the take.
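Building on that, here is a sketch of how the cancellation part could look with kotlinx-coroutines. runInterruptible converts coroutine cancellation into a thread interrupt, so the blocked take() is released when the client disconnects; Datalistener.unregister is an assumed cleanup API:

import java.util.concurrent.LinkedBlockingQueue
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.reactor.flux
import kotlinx.coroutines.runInterruptible
import reactor.core.publisher.Flux

fun taskUpdates(clientName: String): Flux<QueryResult> = flux {
    // register only once actually subscribed, as advised above
    val queue = LinkedBlockingQueue<QueryResult>()
    Datalistener.register(clientName, queue)
    try {
        while (true) {
            // cancelling the Flux interrupts the thread blocked in take()
            send(runInterruptible(Dispatchers.IO) { queue.take() })
        }
    } finally {
        Datalistener.unregister(clientName) // assumed cleanup hook
    }
}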
I am using Spring Cloud SQS messaging to listen to a specified queue, hence the @SqsListener annotation as below:
@SqsListener(value = "${QUEUE}", deletionPolicy = SqsMessageDeletionPolicy.ALWAYS)
public void receive(@Headers Map<String, String> header, @Payload String message) {
    try {
        logger.logInfo("Message payload is: " + message);
        logger.logInfo("Header from SQS is: " + header);
        if (<Some condition>) {
            // Dequeue the message once it has been processed successfully
            awsSQSAsync.deleteMessage(header.get(LOOKUP_DESTINATION), header.get(RECEIPT_HANDLE));
        } else {
            logger.logInfo("Message with header: " + header + " FAILED to process");
            logger.logError(FLEX_TH_SQS001);
        }
    } catch (Exception e) {
        logger.logError(FLEX_TH_SQS001, e);
    }
}
I am able to connect to the specified queue successfully and read the message as well. I set a message attribute "Key1" = "Value1" along with the message in the AWS console before sending it. The following is the message body:
{
    "service": "ecsservice"
}
I am expecting "header" to receive a map of all the message attributes, including the one I set, i.e. Key1 and Value1. But what I am receiving is:
{service=ecsservice} as the populated map.
That means the payload/body of the message is coming in as part of the header, although the body itself arrives correctly.
I wonder what mistake I am making that causes @Headers header not to receive the correct message attributes.
Seeking expert advice.
-PC
I faced the same issue in one of my Spring projects.
The issue for me was the SQS configuration of QueueMessageHandlerFactory with setArgumentResolvers.
By default, the first argument resolver in Spring is PayloadArgumentResolver, with the following behavior:
@Override
public boolean supportsParameter(MethodParameter parameter) {
    return (parameter.hasParameterAnnotation(Payload.class) || this.useDefaultResolution);
}
Here, this.useDefaultResolution is set to true by default, which means any parameter can be converted to a payload.
Spring tries to match your method's actual parameters against the resolvers in order (the first being PayloadArgumentResolver), so it will indeed try to convert all the parameters to payloads.
Source code from Spring:
@Nullable
private HandlerMethodArgumentResolver getArgumentResolver(MethodParameter parameter) {
    HandlerMethodArgumentResolver result = this.argumentResolverCache.get(parameter);
    if (result == null) {
        for (HandlerMethodArgumentResolver resolver : this.argumentResolvers) {
            if (resolver.supportsParameter(parameter)) {
                result = resolver;
                this.argumentResolverCache.put(parameter, result);
                break;
            }
        }
    }
    return result;
}
How I solved this: by overriding the default behavior of the Spring resolvers:
factory.setArgumentResolvers(
    listOf(
        PayloadArgumentResolver(converter, null, false),
        HeaderMethodArgumentResolver(null, null)
    )
)
Here I set the default flag to false, so Spring will try to convert a parameter to the payload only if there is an annotation on that parameter.
Hope this helps.
Apart from @SqsListener, you need to add @MessageMapping to the method. This annotation helps to resolve method arguments.
I had this issue working in a rather large codebase. It turned out that a HandlerMethodArgumentResolver was being added to the list of resolvers that are used to parse the message into the parameters. In my case it was the PayloadArgumentResolver, which usually resolves an argument to be the payload regardless of the annotation. By default it's supposed to come last in the list, but because of code I didn't know about, it ended up being added to the front.
Anyway, if you're not sure, take a look around your code and see if you're doing anything regarding Spring's QueueMessageHandler or HandlerMethodArgumentResolver.
It helped me to use a debugger and look at the HandlerMethodArgumentResolver.resolveArgument method to start tracing what happens.
P.S. I think your @SqsListener code looks fine, except that @Headers is technically supposed to resolve to a Map<String, Object>, but I'm not sure that would cause the issue you're seeing.
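For completeness, a sketch of the listener signature with the header map typed that way (Kotlin for brevity; annotations and names as in the question):

@SqsListener("\${QUEUE}", deletionPolicy = SqsMessageDeletionPolicy.ALWAYS)
fun receive(@Headers headers: Map<String, Any>, @Payload message: String) {
    // with a correct resolver chain, message attributes such as Key1
    // appear in this map alongside the standard SQS headers
}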
About the SEDA component in Camel: does anybody know whether a router removes the Exchange object from the queue when routing it? My router is working properly, but I'm afraid it keeps the Exchange objects in the queue, so my queue would be continuously growing...
This is my router:
public class MyRouter extends RouteBuilder {
    @Override
    public void configure() {
        from("seda:input")
            .choice()
                .when(someValue)
                    .to("bean:someBean?method=whatever")
                .when(anotherValue)
                    .to("bean:anotherBean?method=whatever");
    }
}
If not, does anybody know how to remove the Exchange object from the queue once it has been routed or processed? (I am routing the messages to some beans in my application, and they are working correctly; the only problem is the queue.)
Another question: what happens if my input Exchange does not match any of the choice conditions? Is it kept in the queue as well?
Thanks a lot in advance.
Edited: after reading Claus' answer, I have added the end() method to the router. But my problem persists, at least when testing the SEDA queue and the router together. I put some messages in the queue, mocking the endpoints (which are receiving the messages), but the queue gets full every time I run the test. Maybe I am missing something. This is my test:
@Test
public void test() throws Exception {
    setAdviceConditions(); // This method sets the advices for mocking the endpoints
    Message message = createMessage("text", "text", "text"); // Body for the Exchange
    for (int i = 0; i < 10; i++) {
        template.sendBody("seda:aaa?size=10", message);
    }
    template.sendBody("seda:aaa?size=10", message); // java.lang.IllegalStateException: Queue full
}
Thanks!!
Edited again: after checking my router, I realised the problem: I was writing to a different endpoint than the one the router was reading from (facepalm).
Thank you Claus for your answer.
1)
Yes, when an Exchange is routed from a SEDA queue it is removed immediately. The code uses poll() to take the top message from the SEDA queue.
SEDA is in-memory based, so yes, the Exchanges are stored on the SEDA queue in memory. You can configure a queue size so the queue can only hold X messages; see the SEDA docs at http://camel.apache.org/seda
There are also JMX operations with which you can purge the queue (i.e. empty it), which you can use from a management console.
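For example, the size option (together with blockWhenFull, which makes producers wait instead of failing) can be set directly on the endpoint URI; the values here are illustrative:

// Cap the queue at 1000 exchanges; producers block instead of throwing
// IllegalStateException when the queue is full.
from("seda:input?size=1000&blockWhenFull=true")
    .to("bean:someBean?method=whatever")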
2)
When the choice has no predicates that match, then nothing happens. You can add an otherwise() branch to run some logic in these cases if you want.
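A sketch of that otherwise() branch (endpoints illustrative):

from("seda:input")
    .choice()
        .when(someValue)
            .to("bean:someBean?method=whatever")
        .otherwise()
            .to("log:unmatched") // catches exchanges that matched no predicate
    .end()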
Also mind that you can continue the route after the choice, e.g.:
@Override
public void configure() {
    from("seda:input")
        .choice()
            .when(someValue)
                .to("bean:someBean?method=whatever")
            .when(anotherValue)
                .to("bean:anotherBean?method=whatever")
        .end()
        .to("bean:allGoesHere");
}
E.g. in the example above we have end() to indicate where the choice ends, so after that, all messages go to the final endpoint (including the ones that didn't match any predicate).