We are getting the error below when posting a message to RabbitMQ from a Spring Boot service. The problem is intermittent and we are not able to reproduce it.
[AMQP Connection 123.11.xxx.xx:5672] ERROR [] org.springframework.amqp.rabbit.connection.CachingConnectionFactory - Channel shutdown: channel error; protocol method: #method(reply-code=406, reply-text=PRECONDITION_FAILED - fast reply consumer does not exist, class-id=60, method-id=40)
Has anyone faced a similar issue with RabbitMQ?
Any input would be appreciated.
For anyone else who ended up here due to this error: in my case the issue was resolved when I ensured both of the following:
the publisher ("client") used the same channel for both publishing (to whatever normal queue you publish to) and for consuming from the RabbitMQ amq.rabbitmq.reply-to queue
the consumer ("server") used the same channel for both consuming (from whatever normal queue you published to above) and for publishing to the specified RabbitMQ reply-to queue
Unfortunately, I don't see this documented anywhere.
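For illustration, here is a minimal sketch of that setup using the plain RabbitMQ Java client (the rpc_queue name and localhost broker are placeholders): the same Channel both consumes from amq.rabbitmq.reply-to in auto-ack mode and publishes the request.

    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    import java.nio.charset.StandardCharsets;
    import java.util.UUID;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class DirectReplyToClient {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // placeholder broker host

            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                BlockingQueue<String> reply = new ArrayBlockingQueue<>(1);

                // Consume replies on the SAME channel that publishes the request.
                // Direct reply-to requires auto-ack, hence autoAck = true.
                channel.basicConsume("amq.rabbitmq.reply-to", true,
                        (consumerTag, delivery) ->
                                reply.offer(new String(delivery.getBody(), StandardCharsets.UTF_8)),
                        consumerTag -> { });

                AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                        .correlationId(UUID.randomUUID().toString())
                        .replyTo("amq.rabbitmq.reply-to")
                        .build();

                // Publish the request on the same channel the reply consumer uses.
                channel.basicPublish("", "rpc_queue", props,
                        "ping".getBytes(StandardCharsets.UTF_8));

                System.out.println("Reply: " + reply.take());
            }
        }
    }

Note that the consumer on amq.rabbitmq.reply-to is registered before the request is published, which also lines up with the point made in the next answer.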
I had the exact same problem with direct reply mode and what I had to do was to:
Ensure that the consumer for the replies (not the requests) was running BEFORE publishing the request message.
Also, as the docs say:
The RPC client must consume in the automatic acknowledgement mode. This is because there is no queue for the reply message to be returned to if the client disconnects or rejects the reply message.
I also used the same model/channel for the publisher and the consumer, as user joniba suggested, but I am not sure whether that actually helped.
It most likely means the requestor has timed out and canceled the consumer on the direct replyTo queue. Or, the requesting application has been stopped.
If the requestor is also a Spring AMQP application, the default replyTimeout is 5000ms (5 seconds). If the server side takes longer than that, the requestor will timeout and you'll get this error on the server.
You can increase the replyTimeout property on the requesting RabbitTemplate.
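As a rough sketch (assuming the requestor is a Spring AMQP / Spring Boot application; the 30-second value is just an example), that could look like:

    import org.springframework.amqp.rabbit.connection.ConnectionFactory;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class RabbitClientConfig {

        // Give the server side more time before the requestor gives up and
        // cancels its direct reply-to consumer; the default is 5000 ms.
        @Bean
        public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
            RabbitTemplate template = new RabbitTemplate(connectionFactory);
            template.setReplyTimeout(30_000);
            return template;
        }
    }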
EDIT
Spring AMQP 2.0.x (requires Spring Framework 5.x) uses longer-lived direct replyTo consumers so you shouldn't get these messages (but the client will still time out and you'll get a warning log on the client side and a log when the late delivery arrives).
Related
We want to use Spring WebSockets + STOMP + Amazon MQ as a full-featured message broker. We were doing benchmarking to find out how many client WebSocket connections a single Tomcat node can handle, but it appears that we hit the Amazon MQ connection limit first. According to the AWS documentation, Amazon MQ has a limit of 1000 connections per node (as far as I understand we can ask support to raise the limit, but I doubt it can be raised by much). So my questions are:
1) Am I correct in assuming that for every WebSocket connection from a client to the Spring/Tomcat server, a corresponding connection is opened from the server to the broker? Is this the intended behavior, or are we doing something wrong / missing something?
2) What can be done here? I don't think it is a good idea to create a broker node for every 1000 users.
According to https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/messaging/simp/stomp/StompBrokerRelayMessageHandler.html you are doing everything right, and this is the documented behavior.
Quote from javadoc:
For each new CONNECT message, an independent TCP connection to the broker is opened and used exclusively for all messages from the client that originated the CONNECT message. Messages from the same client are identified through the session id message header. Reversely, when the STOMP broker sends messages back on the TCP connection, those messages are enriched with the session id of the client and sent back downstream through the MessageChannel provided to the constructor.
As for a fix, you could write your own message broker relay with TCP connection pooling.
I have an application using JMS that sends data to an ActiveMQ Artemis queue. I got an exception with this message:
The transaction was rolled back on failover however commit may have been successful
This exception is basically telling me that the message may or may not have reached the queue, so I don't know whether I need to send the message again. What's the best way to handle an exception like this when:
I cannot send duplicate messages to applications on the other end of the queue.
and
I cannot skip a message.
I can't state it better than the ActiveMQ Artemis documentation:
When sending messages from a client to a server, or indeed from a server to another server, if the target server or connection fails sometime after sending the message, but before the sender receives a response that the send (or commit) was processed successfully then the sender cannot know for sure if the message was sent successfully to the address.
If the target server or connection failed after the send was received and processed but before the response was sent back then the message will have been sent to the address successfully, but if the target server or connection failed before the send was received and finished processing then it will not have been sent to the address successfully. From the senders point of view it's not possible to distinguish these two cases.
When the server recovers this leaves the client in a difficult situation. It knows the target server failed, but it does not know if the last message reached its destination ok. If it decides to resend the last message, then that could result in a duplicate message being sent to the address. If each message was an order or a trade then this could result in the order being fulfilled twice or the trade being double booked. This is clearly not a desirable situation.
Sending the message(s) in a transaction does not help out either. If the server or connection fails while the transaction commit is being processed it is also indeterminate whether the transaction was successfully committed or not!
To solve these issues Apache ActiveMQ Artemis provides automatic duplicate messages detection for messages sent to addresses.
See more details about how to configure and use duplicate detection in the ActiveMQ Artemis documentation.
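For example (a sketch with the Artemis JMS client; the broker URL and queue name are placeholders), the key is to set the _AMQ_DUPL_ID property once per logical message and reuse the same value if you have to resend after an ambiguous failure:

    import java.util.UUID;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class DuplicateDetectionSend {
        public static void main(String[] args) throws Exception {
            ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");

            // Generate the duplicate-detection id ONCE per logical message and keep it,
            // so a resend after "commit may have been successful" reuses the same id.
            String duplicateId = UUID.randomUUID().toString();

            try (Connection connection = cf.createConnection()) {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("orders");
                MessageProducer producer = session.createProducer(queue);

                TextMessage message = session.createTextMessage("order payload");
                // The broker keeps a cache of these ids per address; a second send
                // with the same _AMQ_DUPL_ID is detected and dropped as a duplicate.
                message.setStringProperty("_AMQ_DUPL_ID", duplicateId);
                producer.send(message);
            }
        }
    }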
I'm evaluating NATS for migrating an existing message-based piece of software.
I did not find documentation about message timeout exceptions and overload handling.
For example:
After a subscriber has been chosen, is it aware of the timeout set by the publisher? Is it possible for it to request an additional time extension?
If the chosen subscriber knows that a DBMS connection is missing and it cannot complete the work, can it bounce the message so that the NATS server picks another subscriber and redelivers the same message?
Ciao
Diego
For your first question: It seems to me that you are trying to publish a request message with a timeout (using the nc.Request). If so, the timeout is managed by the client. Effectively the client publishes the request message and creates a subscription on the reply subject. If the subscription doesn't get any messages within the timeout it will notify you of the timeout condition and unsubscribe from the reply subject.
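For illustration, a request with a client-side timeout might look roughly like this with the Java NATS client (server URL, subject, and the 2-second timeout are placeholders):

    import io.nats.client.Connection;
    import io.nats.client.Message;
    import io.nats.client.Nats;
    import java.nio.charset.StandardCharsets;
    import java.time.Duration;

    public class NatsRequestExample {
        public static void main(String[] args) throws Exception {
            try (Connection nc = Nats.connect("nats://localhost:4222")) {
                // The timeout is enforced by the client: it publishes the request,
                // listens on a reply inbox, and gives up (returning null) if no
                // reply arrives within the duration.
                Message reply = nc.request("service.work",
                        "payload".getBytes(StandardCharsets.UTF_8),
                        Duration.ofSeconds(2));
                if (reply == null) {
                    System.out.println("No reply within the timeout");
                } else {
                    System.out.println(new String(reply.getData(), StandardCharsets.UTF_8));
                }
            }
        }
    }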
On your second question - are you using a queue group? A queue group in NATS is a subscription that specifies a queue group name. All subscriptions having the same queue group name are treated specially by the server. The server will select one of the queue group subscriptions to send the message to rotating between them as messages arrive. However the responsibility of the server is simply to deliver the message.
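A queue-group responder might look roughly like this in the Java client (subject and group name are placeholders); each message is delivered to only one member of the "workers" group:

    import io.nats.client.Connection;
    import io.nats.client.Dispatcher;
    import io.nats.client.Nats;
    import java.nio.charset.StandardCharsets;

    public class NatsQueueWorker {
        public static void main(String[] args) throws Exception {
            Connection nc = Nats.connect("nats://localhost:4222");

            Dispatcher dispatcher = nc.createDispatcher(msg -> {
                // The reply published here is the only signal the requester gets
                // that the message was handled before its timeout fired.
                nc.publish(msg.getReplyTo(), "done".getBytes(StandardCharsets.UTF_8));
            });

            // All subscriptions sharing the queue group "workers" on this subject
            // split the load: the server picks one of them per message.
            dispatcher.subscribe("service.work", "workers");

            Thread.sleep(Long.MAX_VALUE); // keep the worker alive (demo only)
        }
    }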
To do what you describe, implement your functionality using request/reply using a timeout and a max number of messages equal to 1. If no responses are received after the timeout your client can then resend the request message after some delay or perform some other type of recovery logic. The reply message should be your 'protocol' to know that the message was handled properly. Note that this gets into the design of your messaging architecture. For example, it is possible for the timeout to trigger after the request recipient received the message and handled it but before the client or server was able to publish the response. In that case the request sender wouldn't be able to tell the difference and would eventually republish. This hints that such type of interactions need to make the requests idempotent to prevent duplicate side effects.
Is there a way to set a timeout for sending a message to the broker?
I want to send large messages to an ActiveMQ broker, but I do not want the send to take forever, so I was planning to set a timeout on the send.
You can set connection.sendTimeout=<milliseconds> in the URI when connecting to the broker.
The official documentation for sendTimeout says:
Time to wait on Message Sends for a Response, default value of zero indicates to wait forever. Waiting forever allows the broker to have flow control over messages coming from this client if it is a fast producer or there is no consumer such that the broker would run out of memory if it did not slow down the producer. Does not affect Stomp clients as the sends are ack'd by the broker. (Since ActiveMQ-CPP 2.2.1)
here is the documentation https://activemq.apache.org/components/cms/configuring
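If you are on the Java (JMS) client rather than CMS, the rough equivalent is the sendTimeout property on the connection factory; a minimal sketch (broker URL and the 3000 ms value are just examples):

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class SendTimeoutExample {
        public static void main(String[] args) throws JMSException {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            // 0 (the default) means wait forever; a positive value makes send()
            // throw if the broker does not acknowledge the send in time.
            factory.setSendTimeout(3000);

            Connection connection = factory.createConnection();
            connection.start();
            // ... create a session and producer and send as usual ...
            connection.close();
        }
    }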
hope this helps!
Good luck!
My requirement is as follows:
I have an IBM MQ queue that is shared across 20 servers, each running a JMS client. A specific message in the queue is intended for a particular thread, and that thread needs to use a correlation ID to fetch its message from among all the messages in the queue.
When I use onMessage() it is uncertain which server will receive the message. Suppose server-1 is waiting for the message but server-15 receives it; server-1 eventually times out even though there was a message intended for a thread on server-1.
Please suggest how to handle this scenario without introducing major performance issues.
Use a MessageSelector on the listener container(s). If the correlation ID is in the standard JMSCorrelationID header, the selector would be JMSCorrelationID = 'foo' to receive all messages correlated with foo.
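For illustration, the same selector works with a plain JMS 2.0 synchronous receive (or can be set via setMessageSelector on a Spring listener container); the connection factory and queue are assumed to come from your existing IBM MQ configuration:

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Message;
    import javax.jms.Queue;

    public class CorrelatedReceiver {

        // Receive only the message whose JMSCorrelationID matches the id this
        // thread is waiting for; messages for other servers are never delivered here.
        public static Message receiveReply(ConnectionFactory cf, Queue queue,
                                           String correlationId, long timeoutMs) {
            try (JMSContext context = cf.createContext();
                 JMSConsumer consumer = context.createConsumer(
                         queue, "JMSCorrelationID = '" + correlationId + "'")) {
                return consumer.receive(timeoutMs); // null if nothing arrives in time
            }
        }
    }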