Currently, in my project, I'm using Spring Cloud Stream with RabbitMQ. Everything is okay, but there is a problem when I set a condition for the message type on a @StreamListener. If the condition doesn't match ("Cannot find a @StreamListener matching for a message with id: ZZZZZZ"), the message disappears from the queue. I don't want the message to be removed just because it has the wrong type. Is there a solution for this problem?
There are two ways you can do it:
Throw an exception if the message has a bad type. In this case the message will not be removed from the queue, since it wasn't processed successfully and no ACK reaches the broker.
Re-insert the message into the queue yourself once you have detected that the type is bad.
Both approaches can suffer from an infinite loop: the message is consumed, found to be of a bad type, re-inserted, and the cycle repeats. To avoid this you can add a re-insertion policy, such as an exponential delay or a limit on the number of re-insertions.
But a doubt arises: why is your service consuming messages that it shouldn't? Perhaps you need a dedicated processor for these messages? In that case you can route them to the suitable processor.
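For the first approach, something like the following sketch could work (the "type" header, the binding names and the requeueRejected property are assumptions, not taken from your project): one conditional @StreamListener handles the expected type, and a second one with the negated condition throws, so the message is rejected and, with requeueing enabled, put back on the queue instead of silently dropped.

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.messaging.Message;

@EnableBinding(Sink.class)
public class TypedListener {

    // Handles only the messages this service is interested in.
    @StreamListener(target = Sink.INPUT, condition = "headers['type']=='order'")
    public void handleOrder(Message<String> message) {
        // business logic for the expected type
    }

    // Everything else: throw so the message is NACKed instead of silently dropped.
    // With spring.cloud.stream.rabbit.bindings.input.consumer.requeueRejected=true
    // RabbitMQ puts the message back on the queue (beware of the infinite loop above).
    @StreamListener(target = Sink.INPUT, condition = "headers['type']!='order'")
    public void handleUnexpected(Message<String> message) {
        throw new IllegalStateException(
                "Unexpected message type: " + message.getHeaders().get("type"));
    }
}
```

The second listener could just as well forward the unexpected message to a dedicated parking queue instead of throwing, which avoids the redelivery loop entirely.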
Related
I am using the github.com/nsqio/go-nsq Go package to work with NSQ, and I've run into the following problem. When a producer writes a message it creates the topic, but not a channel, and the NSQ server seems to just discard messages when there are no channels. I don't want to lose messages, and I also don't want to rely on the startup order of consumer and producer. So a possible solution is to create the channel on the producer side, so that messages written before any consumer starts are not discarded. How can I create and destroy a consumer without consuming a message, just for the sake of channel creation?
The sequence nsq.NewConsumer(), consumer.AddConcurrentHandlers(), consumer.ConnectToNSQLookupd() can consume messages; removing the AddConcurrentHandlers call leads to an error.
From the protocol spec I cannot see how this is supposed to be done, other than sending SUB followed by CLS; but since those are two separate commands rather than one atomic operation, something could theoretically happen in between.
So am I perhaps approaching this the wrong way? RabbitMQ, for example, lets you pre-create queues; is there something like that here?
1) I'm interested to learn whether it is possible to keep the messages that were delivered using Spring Integration. I'm already using the Mongo persistent store (ConfigurableMongoDbMessageStore), but only failed messages remain in the collection. Ideally, I want all messages to remain, with the ability to list them and retry them.
I would use a field such as "status" to distinguish queued, successful and failed messages. Not sure if this field already exists, but I'm guessing something similar must be in place.
2) Also, when a message fails and is persisted, there is a lot more data in the message. This data is serialised, so I'm curious how I can extract the original message and retry it.
3) The goal is to create an interface in the webapp where all queued messages can be seen and retried. Not only failed messages, but also successful deliveries (useful for testing).
I looked everywhere for an answer to this, but could not find it.
Thanks
I'd say that isn't a good design for a queue component.
Right, it returns failed messages back to the queue for future redelivery, but a successfully processed message has to be removed from the queue to avoid duplication on the next poll.
No, there is no "status" field on the message, because the store is used as a queue.
BTW Spring Integration provides a separate implementation for queue channels: MongoDbChannelMessageStore.
You can achieve what you want with a separate, parallel Mongo collection, storing each message twice: once for the queue and once for later analysis. In that second collection you can introduce a "status" field and update it depending on whether the message was processed successfully or not.
From there you can build your UI to manage that collection and provide actions like send and retry: remove the message from it and send it again into both collections.
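A minimal sketch of that parallel collection, assuming Spring Data MongoDB is on the classpath; the collection name "message_audit" and the status values are invented for illustration:

```java
import org.bson.Document;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

// Writes a copy of every message into a parallel "message_audit" collection,
// alongside the queue-backed MongoDbChannelMessageStore, and tracks its status.
@Component
public class MessageAuditStore {

    private final MongoTemplate mongoTemplate;

    public MessageAuditStore(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    // Call this from a wire-tap (or ChannelInterceptor) when the message is queued.
    public void recordQueued(Message<?> message) {
        Document doc = new Document()
                .append("_id", String.valueOf(message.getHeaders().getId()))
                .append("payload", String.valueOf(message.getPayload()))
                .append("status", "QUEUED");
        mongoTemplate.save(doc, "message_audit");
    }

    // Call this from the success flow ("PROCESSED") or the error flow ("FAILED").
    public void updateStatus(String messageId, String status) {
        mongoTemplate.updateFirst(
                Query.query(Criteria.where("_id").is(messageId)),
                Update.update("status", status),
                "message_audit");
    }
}
```

Your webapp can then list documents by status and, for a retry, re-send the stored payload into both the queue channel and this collection.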
HTH
I am using Camel to integrate with ActiveMQ JMS. I am receiving prices for products on this queue, and I use the JMSXGroupID header, set to the productId, to ensure ordering per productId. If I fail to process a message I move it to a dead letter queue (DLQ). The failure could be because of a connection error to a dependent service or because of an error in the message itself.
In the former case I would have to manually remove it from the DLQ and put it back onto the JMS queue.
Now the problem is that I don't know whether any other message for that group id has been received and processed in the meantime, so unsidelining from the DLQ could disrupt the order. On the other hand, if I don't unsideline it and no other message has arrived, the productId will not get the correct price.
One solution I have in mind is to use a fast key-value store (Redis) to keep the last messageId or JMSTimestamp per productId (message group), updated every time I dequeue a message. Is there any other solution?
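For what it's worth, a minimal sketch of that Redis idea, assuming the Jedis client; the key naming and connection details are made up:

```java
import redis.clients.jedis.Jedis;

// Tracks the JMSTimestamp of the last processed message per group (productId),
// so a DLQ message can be compared against it before unsidelining.
public class GroupProgressTracker {

    private final Jedis jedis = new Jedis("localhost", 6379); // connection details are assumptions

    // Record progress every time a message for this productId is dequeued successfully.
    public void markProcessed(String productId, long jmsTimestamp) {
        jedis.set("lastTimestamp:" + productId, Long.toString(jmsTimestamp));
    }

    // Before replaying a DLQ message, check whether a newer message for the group was already processed.
    public boolean isStale(String productId, long dlqMessageTimestamp) {
        String last = jedis.get("lastTimestamp:" + productId);
        return last != null && Long.parseLong(last) > dlqMessageTimestamp;
    }
}
```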
Relying on message order in JMS is a risky business - at best.
The best thing to do is to make the receiver handle messages out of sequence as a special case (though it may take advantage of message order during normal operation).
You may also want to distinguish between two kinds of errors: poison messages and temporary connection problems, and maybe even use two different error queues for them. In the case of a poison message (invalid payload etc.) there is nothing you can really do except start a bug investigation; in such cases you can probably send along "something else", such as a dummy message, so as not to interfere with the order.
For connection problems you can use another strategy: ActiveMQ redelivery policies. If there is network trouble, there is usually no use in trying to process the second message until the first has been handled, and a redelivery policy ensures that (given you have a single consumer, that is). There is another question on SO where the poster actually has a solution to your problem and wants to avoid it. Read it. :)
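A minimal sketch of configuring such a redelivery policy on the ActiveMQ connection factory; the broker URL and the concrete delays are assumptions:

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

// Transient failures are retried with exponential backoff before the broker
// finally moves the message to the DLQ.
public class RedeliveryConfig {

    public static ActiveMQConnectionFactory connectionFactory() {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // broker URL is an assumption

        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setInitialRedeliveryDelay(1000);  // wait 1s before the first retry
        policy.setUseExponentialBackOff(true);
        policy.setBackOffMultiplier(2.0);        // 1s, 2s, 4s, ...
        policy.setMaximumRedeliveries(6);        // after that the message goes to the DLQ

        return factory;
    }
}
```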
We have a Quartz process that polls an ActiveMQ JMS queue.
We know that we could get several messages a minute and would like to respond only to the most recent message, at a configured polling rate of a minute or more.
We don't need to process any of the previous messages.
Is there a way to configure the queue to get this behavior?
It seems like a topic has the ability to do this via the subscription recovery policy with a count of 1. We would like to do this using a queue to guarantee (more or less) single delivery of the message.
Or is there a conceptual flaw in our assumptions...
Thanks
In my opinion there is no standard operation for this, so you will have to write some code....
One possible solution would be to use a QueueBrowser together with a QueueReceiver:
Through the QueueBrowser you get an Enumeration of the messages currently in the queue. For each browsed message you can then perform a receive with a MessageSelector on its JMSMessageID, for as long as hasMoreElements() returns true. The last message received will be the one you want.
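A rough sketch of that idea with the plain JMS 1.1 API; the session handling and the one-second receive timeout are simplifications, not a production-ready implementation:

```java
import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;

// Browse the queue, then consume each browsed message via a selector on its
// JMSMessageID; the last message received is the most recent one on the queue.
public class LatestMessagePoller {

    public Message pollLatest(Connection connection, Queue queue) throws Exception {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        try {
            Message latest = null;
            QueueBrowser browser = session.createBrowser(queue);
            Enumeration<?> snapshot = browser.getEnumeration();
            while (snapshot.hasMoreElements()) {
                Message browsed = (Message) snapshot.nextElement();
                String selector = "JMSMessageID = '" + browsed.getJMSMessageID() + "'";
                MessageConsumer consumer = session.createConsumer(queue, selector);
                try {
                    // Short timeout in case the browsed message has already been consumed.
                    Message received = consumer.receive(1000);
                    if (received != null) {
                        latest = received;
                    }
                } finally {
                    consumer.close();
                }
            }
            browser.close();
            return latest; // null if the queue was empty
        } finally {
            session.close();
        }
    }
}
```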
When using ActiveMQ, you can use "image caching" on topics. One of the settings there is to always keep the last message sent.
Take a look at the Subscription recovery Policy settings:
http://activemq.apache.org/subscription-recovery-policy.html
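For completeness, a hedged sketch of setting that policy programmatically on an embedded broker, the equivalent of <lastImageSubscriptionRecoveryPolicy/> in activemq.xml; note that subscribers must be retroactive (e.g. a topic consumed as FOO?consumer.retroactive=true) to get the cached message:

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.LastImageSubscriptionRecoveryPolicy;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

// Embedded-broker sketch: topics keep only the last published message and replay
// it to new retroactive subscribers.
public class LastImageBroker {

    public static BrokerService createBroker() throws Exception {
        PolicyEntry topicPolicy = new PolicyEntry();
        topicPolicy.setTopic(">"); // apply to all topics
        topicPolicy.setSubscriptionRecoveryPolicy(new LastImageSubscriptionRecoveryPolicy());

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(topicPolicy);

        BrokerService broker = new BrokerService();
        broker.setDestinationPolicy(policyMap);
        broker.addConnector("tcp://localhost:61616"); // connector URL is an assumption
        return broker;
    }
}
```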
I have a single AMQ queue that receives simple messages with a string body. Say I'm sending CLSIDs as message bodies. The CLSIDs might not be unique, but I'd like to reject all messages with duplicate bodies and keep only a single instance of each such message in the queue. Is there any simple way to do this?
Currently I'm using a workaround. Messages from the queue are consumed by a processor that tries to insert the bodies into a simple DB table with a UNIQUE constraint on the message_body field. If the processor inserts the message successfully, it is assigned to exchange.out.body and sent to the other queue. If a ConstraintViolationException is thrown, nothing is resent to the other queue.
I would like to know whether AMQ supports something similar out of the box.
I believe you can write an interceptor for ActiveMQ where you can perform certain actions on messages. Check out: http://activemq.apache.org/interceptors.html
That being said, in my personal opinion this is bad practice. ActiveMQ is a messaging system which should only be responsible for transporting the message. The logic is better performed in your application: either make sure the sender cannot send the same message more than once, or create an intermediate consumer which matches the received body against a database of already-seen message bodies before routing the message to the actual receiver queue.
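Since your workaround already runs in Camel (exchange.out.body), that intermediate consumer can be written with Camel's Idempotent Consumer EIP. A sketch, assuming Camel 2.x package names and made-up queue names; the in-memory repository would need to be swapped for a persistent one (JDBC, etc.) if duplicates must be detected across restarts:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.idempotent.MemoryIdempotentRepository;

// Duplicate bodies are filtered out before the message reaches the real target queue.
public class DeduplicationRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("activemq:queue:incoming.clsids")
            // The message body (the CLSID) is the idempotency key.
            .idempotentConsumer(body(), MemoryIdempotentRepository.memoryIdempotentRepository(10000))
            .to("activemq:queue:deduplicated.clsids");
    }
}
```

This keeps the broker itself dumb and puts the deduplication logic in a small routing application in front of the receiver queue.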